
ansible-role-varnish's Issues

Figure out why tests are failing on Ubuntu 14.04

The test is a lie...

From a failed build:

TASK [role_under_test : Ensure Varnish is started and set to run on startup.] **
changed: [localhost]

RUNNING HANDLER [role_under_test : restart varnish] ****************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "/etc/init.d/varnish: 36: ulimit: error setting limit (Operation not permitted)\n/etc/init.d/varnish: 36: ulimit: error setting limit (Operation not permitted)\n"}

Locally, I can see that varnish is starting, but something's not working correctly with the restart:

$ docker exec --tty a21a930e env TERM=xterm ps -ax
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:00 /sbin/init
  960 ?        Ss     0:00 /usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl
  967 ?        Sl     0:00 /usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl

Varnish systemd unit file location defaults to /lib/systemd/system/varnish.service

When building a large number of Varnish servers last week, I noticed that one of the servers wasn't picking up my custom varnish_listen_port. As it turns out, running systemctl status varnish revealed that the systemd unit file being used was /lib/systemd/system/varnish.service.

This role currently stores a varnish unit file at /etc/systemd/system/varnish.service, but it looks like it would be more correct to store it at the /lib path. I'm going to do a test on a branch and see if it works the same—if so, I'll move the file there in this role's configuration, and then make sure the /etc file is symlinked to the /lib one.
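A minimal sketch of the change described above, assuming the branch test pans out (task names and the handler are illustrative, not the role's actual code):

```yaml
# Sketch: install the unit file at the /lib path and symlink the /etc
# path to it, so systemctl and the role agree on one source of truth.
- name: Copy Varnish systemd unit file to the /lib path.
  template:
    src: varnish.service.j2
    dest: /lib/systemd/system/varnish.service
    mode: '0644'
  notify: restart varnish

- name: Symlink the /etc unit file path to the /lib one.
  file:
    src: /lib/systemd/system/varnish.service
    dest: /etc/systemd/system/varnish.service
    state: link
    force: true
  notify: restart varnish
```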

Pages 185-198 of Ansible for DevOps

Following what you have outlined in your book:

[root@ansible1 lamp-infrastructure]# tree
.
├── configure.yml
├── index.php.j2
├── inventories
├── playbooks
│   ├── db
│   │   ├── main.yml
│   │   └── vars.yml
│   ├── memcached
│   │   ├── main.yml
│   │   └── vars.yml
│   ├── varnish
│   │   ├── main.yml
│   │   ├── templates
│   │   │   └── default.vcl.j2
│   │   └── vars.yml
│   └── www
│       ├── index.php.j2
│       ├── main.yml
│       └── vars.yml
├── provisioners
├── provision.yml
└── Vagrantfile

9 directories, 13 files
[root@ansible1 lamp-infrastructure]# ansible-playbook configure.yml
ERROR! variable files must contain either a dictionary of variables, or a list of dictionaries. Got: firewall_allowed_tcp_ports - "22" - "80" (<class 'ansible.parsing.yaml.objects.AnsibleUnicode'>)
[root@ansible1 lamp-infrastructure]#
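For context, this error usually means the vars file is not a YAML mapping, most often because a key lost its trailing colon. A guess at the shape of the problem (the actual vars.yml is not shown in the report):

```yaml
# Broken: without the colon, YAML parses the file as a bare scalar,
# not a dictionary, producing the "must contain a dictionary" error.
#
#   firewall_allowed_tcp_ports
#     - "22"
#     - "80"
#
# Valid: a dictionary mapping the variable name to a list of ports.
firewall_allowed_tcp_ports:
  - "22"
  - "80"
```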

Testing from a Google Cloud Compute instance, ansible1.

task "Ensure Varnish services are started and enabled on startup." is skipped on Ubuntu Bionic

There is a check on ansible_os_family that prevents this task from running on Ubuntu Bionic:

- name: Ensure Varnish services are started and enabled on startup.
  service:
    name: "{{ item }}"
    state: started
    enabled: true
  with_items: "{{ varnish_enabled_services | default([]) }}"
  when: >
    varnish_enabled_services and
    (ansible_os_family != 'Debian' and ansible_distribution_release != "xenial")

Should just be:

- name: Ensure Varnish services are started and enabled on startup.
  service:
    name: "{{ item }}"
    state: started
    enabled: true
  with_items: "{{ varnish_enabled_services | default([]) }}"
  when: >
    varnish_enabled_services and (ansible_os_family != 'Debian' or
    (ansible_os_family == 'Debian' and ansible_distribution_release != "xenial"))
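As a side note, the proposed condition can be written more compactly; by De Morgan's laws the following is logically equivalent:

```yaml
# "Not Debian, or a Debian release other than xenial."
when: >
  varnish_enabled_services and
  (ansible_os_family != 'Debian' or ansible_distribution_release != "xenial")
```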

Service is not set to enabled on boot with default role settings

After using the defaults and deploying to an Ubuntu 16.04 instance, I'm seeing:

# systemctl status varnish
...
/etc/systemd/system/varnish.service; disabled

And if I reboot, Varnish is not started after boot, so I have to start it manually. Maybe this is an issue with the service module and systemd, or with the Varnish repo?
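One workaround, pending a fix in the role itself, is to enable the unit explicitly after the role runs (a sketch, not part of the role):

```yaml
# Explicitly enable the varnish unit so it starts on boot.
- name: Ensure Varnish is enabled on boot.
  systemd:
    name: varnish
    enabled: true
    daemon_reload: true
```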

Error installing on RHEL/CentOS due to missing gcc dependency

I was using this role after following the Highly Available Infrastructure cookbook in Ansible for DevOps, but installing against CentOS 7 in Vagrant.

The Install Varnish step failed, as the package requires redhat-rpm-config and gcc:

Error: Package: varnish-4.0.3-3.el7.x86_64 (epel)
           Requires: redhat-rpm-config
Error: Package: varnish-4.0.3-3.el7.x86_64 (epel)
           Requires: gcc

If I remove the disablerepo attribute (line 14 of tasks/setup-RedHat.yml), Varnish installs correctly. This does seem to be a CentOS 7-specific issue, as it installed without any errors on CentOS 6 with the repos disabled.
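A possible workaround until the role handles this, sketched here as a pre-task in the playbook that consumes the role:

```yaml
# Install the build dependencies up front so the EPEL varnish package
# can resolve them even with other repos disabled during the install.
- name: Ensure Varnish build dependencies are present (CentOS 7).
  yum:
    name:
      - gcc
      - redhat-rpm-config
    state: present
```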

Varnish doesn't have a package for Ubuntu 20.04 Focal Fossa

Hi, I am not able to get this to install on Ubuntu 20.04. Below is the error:

Ign:5 https://packagecloud.io/varnishcache/varnish64/ubuntu focal InRelease
Err:6 https://packagecloud.io/varnishcache/varnish64/ubuntu focal Release
  404  Not Found [IP: 2600:1f1c:2e5:6900:4e24:4dad:908b:18c6 443]
Reading package lists... Done
E: The repository 'https://packagecloud.io/varnishcache/varnish64/ubuntu focal Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
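Assuming the role's varnish_packagecloud_repo variable (referenced elsewhere in these issues) controls the repository path, one hypothetical workaround is pointing it at a release series that does publish focal packages; the repo name below is an assumption and should be verified against packagecloud.io:

```yaml
# Hypothetical override: varnish64 has no focal builds, so select a
# newer series that does (verify availability on packagecloud.io first).
varnish_packagecloud_repo: varnish65
```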

Config file of the Varnish service (varnish.service) doesn't use role variables for storage

The template file for varnish.service has the storage config hardcoded.

-ExecStart=/usr/sbin/varnishd -a :{{ varnish_listen_port }} -T {{ varnish_admin_listen_host }}:{{ varnish_admin_listen_port }} -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m

So it always uses malloc and 256 MB.

The paths of default.vcl and secret are hardcoded too.
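A sketch of variables the template could consume instead of the literals (the names mirror varnish_storage and varnish_config_path, which appear in a template proposal later in these issues):

```yaml
# Assumed defaults; the ExecStart template would then reference
# {{ varnish_storage }} and {{ varnish_config_path }} instead of
# the hardcoded "malloc,256m" and "/etc/varnish".
varnish_storage: "malloc,256m"
varnish_config_path: /etc/varnish
```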

Failed to install Varnish on AWS Linux 2

Hi
I was attempting to install Varnish on an Amazon Linux 2 AMI and it failed.

TASK [mwp.varnish : Ensure Varnish 6.1 is installed.] **********************************************************************************************************************************************************************************************************
fatal: [10.202.1.164]: FAILED! => {"changed": false, "msg": "Failure talking to yum: failure: repodata/repomd.xml from varnishcache_varnish61: [Errno 256] No more mirrors to try.\nhttps://packagecloud.io/varnishcache/varnish61/el/2/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found"}

The problem relates to
varnish_yum_repo_baseurl: https://packagecloud.io/varnishcache/{{ varnish_packagecloud_repo }}/el/{{ ansible_distribution_major_version|int }}/$basearch

On Amazon Linux 2, ansible_distribution_major_version is "2":

ansible tag_Name_${linuxdistro}_${role}_base_build_${date}_01 -m setup | grep ansible_distribution_major_version

   "ansible_distribution_major_version": "2",

The role ran fine on a RHEL 7 instance.

To get around the issue, I hardcoded the EL version for the moment.

The question, I suppose, is: would you recommend an alternative solution?

Kind regards

reload fails for 4.1

OS: Ubuntu 16.04

The systemd reload configuration calls a script that looks eerily like the old init.d script:

ExecReload=/usr/share/varnish/reload-vcl

This script in turn gets its params from /etc/default/varnish, which is not touched by this role.

When calling an Ansible systemd reload handler (use case: updating default.vcl without losing the cache's memory contents):

- name: reload varnish
  systemd: name=varnish state=reloaded

the reload fails, because the admin host is defined as "localhost" in the untouched params file but as "127.0.0.1" in ExecStart.

I am happy to roll a patch for this.

  • Should this role overwrite /etc/default/varnish from a template?
  • are there any other command line functions that may be affected by this change?
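For the first question, a sketch of what managing the params file from this role could look like (task and template names are assumptions):

```yaml
# Template /etc/default/varnish so reload-vcl reads the same admin
# address and options that ExecStart uses.
- name: Copy Varnish params file (Debian).
  template:
    src: varnish.params.j2
    dest: /etc/default/varnish
    mode: '0644'
  notify: restart varnish
```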

Fail to rotate logs for varnishncsa

When installing the role on Debian Jessie, log rotation seems to have a bug; the daily logrotate cron job complains:

/etc/cron.daily/logrotate:
error: error running non-shared postrotate script for /var/log/varnish/varnishncsa.log of '/var/log/varnish/varnishncsa.log '
run-parts: /etc/cron.daily/logrotate exited with return code 1

When looking at the configuration, it seems invoke-rc.d has trouble restarting the varnishncsa service. Updating the config as follows:

/var/log/varnish/varnishncsa.log {
  daily
  rotate 7
  compress
  delaycompress
  missingok
  postrotate
    if [ -d /run/systemd/system ]; then
       systemctl -q is-active varnishncsa.service || exit 0
    fi
-    /usr/sbin/invoke-rc.d varnishncsa reload > /dev/null
+    systemctl reload varnishncsa.service > /dev/null
  endscript
}

This seems to solve the issue. Is it something you have already encountered?

Failed starting varnish

Using the role with an Amazon Linux AMI, I get this error:

TASK [ansible-role-varnish : Ensure Varnish services are started enabled on startup.] ******************************************************************************************************************************************************************************************
failed: [172.20.30.251] (item=varnish) => {"failed": true, "item": "varnish", "msg": "Starting Varnish Cache: [FAILED]\r\n"}

Trying to start and stop the service manually I get:
Stopping Varnish Cache: [FAILED]
Starting Varnish Cache: [FAILED]

In my configuration I use Varnish 4.1.

Varnish doesn't start on boot on Debian Jessie

After running the role, Varnish gets installed and started, but it does not start on system boot; you must start it manually (service varnish start). The NCSA and log services do not start on boot either.

Apparently, varnish-cache.org's package does not enable them in its systemd configuration.

Make default.vcl path configurable for simplicity

For scenarios where someone just wants to drop in their own VCL template instead of the extremely simplistic default.vcl.j2 template included with the role, the role should allow the local path to the default VCL to be configured as a variable.

I'd like this in particular to make the deployment of Varnish on Drupal VM (per geerlingguy/drupal-vm#97) much simpler (no need for any extra copy/restart tasks in the playbook since the role will take care of everything).
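A sketch of how the requested variable might be consumed from a playbook (the variable name matches the one mentioned in a later issue in this list):

```yaml
# Point the role at a custom VCL template shipped with the playbook
# instead of the role's bundled default.vcl.j2.
varnish_default_vcl_template_path: "{{ playbook_dir }}/templates/default.vcl.j2"
```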

Use variables for all templated files

Hi

You already added a variable for the default VCL template: varnish_default_vcl_template_path.

It would be nice to have variables for all templated files, like varnish.service.j2 or varnish.params.j2.

Thanks for your work!

Fix bare variable warning in 'Ensure Varnish services are started' task

See example: https://travis-ci.org/geerlingguy/drupal-vm/jobs/652146023#L2202

TASK [geerlingguy.varnish : Ensure Varnish services are started enabled on startup (Xenial specific)] ***
[DEPRECATION WARNING]: evaluating [u'varnish'] as a bare variable, this 
behaviour will go away and you might need to add |bool to the expression in the
 future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This feature 
will be removed in version 2.12. Deprecation warnings can be disabled by 
setting deprecation_warnings=False in ansible.cfg.
skipping: [localhost] => (item=varnish)
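A sketch of the fix, making the conditional explicitly boolean so the list is no longer evaluated as a bare variable (the rest of the task's condition is omitted here):

```yaml
# Replace the bare "varnish_enabled_services" with an explicit
# non-empty check, which also silences the deprecation warning.
when: >
  varnish_enabled_services | default([]) | length > 0
```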

Cannot configure listen IP

This role does not fully support the -a parameter, which accepts a wide range of IP address and port formats. Instead, this role limits the listen address configuration to port customization only, by hard-coding the : prefix for the -a switch, forcing the default IP bind address to be used.

It should be possible to configure the listen IP in addition to the port. I would suggest not having separate settings for IP and port, since this would still only support one binding but it's clear from the documentation that multiple bindings can be specified with comma (,) separators.
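A sketch of the single-variable approach suggested above (the variable name is an assumption):

```yaml
# One variable carrying the full -a value, passed through untouched,
# so any address/port form varnishd accepts can be used.
varnish_listen: "192.0.2.10:80"
# The template would then render: -a {{ varnish_listen }}
```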

[enhancement] Enable the use of a PID for varnishd

Varnish has the ability to create a PID file when it starts, if you use the -P parameter when starting the service.

This enhancement implies adding a couple of variables (one to enable the PID file and another for its path) and modifying the various service templates (varnish.j2 and varnish.service.j2).
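A sketch of the two variables described above (names are assumptions; varnish_pidfile matches the name used in a template proposal later in these issues):

```yaml
# Enable PID file creation and choose where varnishd writes it (-P).
varnish_use_pidfile: true
varnish_pidfile: /run/varnishd.pid
```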

Can not add repository (DrupalVM, vagrant_box: geerlingguy/ubuntu1804)

TASK [geerlingguy.varnish : include_tasks] *************************************
skipping: [drupalvm]

TASK [geerlingguy.varnish : include_tasks] *************************************
included: /vagrant/provisioning/roles/geerlingguy.varnish/tasks/setup-Debian.yml for drupalvm

TASK [geerlingguy.varnish : Ensure APT HTTPS Transport is installed.] **********
changed: [drupalvm]

TASK [geerlingguy.varnish : Add packagecloud.io Varnish apt key.] **************
changed: [drupalvm]

TASK [geerlingguy.varnish : Add packagecloud.io Varnish apt repository.] *******
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: apt.cache.FetchFailedException: E:The repository 'https://packagecloud.io/varnishcache/varnish51/ubuntu bionic Release' does not have a Release file.
fatal: [drupalvm]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File "/tmp/ansible_ARMCpS/ansible_module_apt_repository.py", line 551, in \n main()\n File "/tmp/ansible_ARMCpS/ansible_module_apt_repository.py", line 543, in main\n cache.update()\n File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 505, in update\n raise FetchFailedException(e)\napt.cache.FetchFailedException: E:The repository 'https://packagecloud.io/varnishcache/varnish51/ubuntu bionic Release' does not have a Release file.\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}

Allow adding extra DAEMON_OPTs or varnishd flags

Right now, with this role's templates, you can't pass in extra varnishd flags like -p http_max_hdr=128 (which, incidentally, I need on one of my projects, so I'm going to fix this in a few minutes).

Listen protocol not configurable

It would be great to be able to specify the listening protocol of varnishd (-a switch).

A new variable, varnish_listen_protocol, would be necessary, and the ExecStart line of the systemd service would be modified as follows:

varnish.service.j2 :
ExecStart=/usr/sbin/varnishd -a {{ varnish_listen_address }}:{{ varnish_listen_port }},{{ varnish_listen_protocol }} -T {{ varnish_admin_listen_host }}:{{ varnish_admin_listen_port }}{% if varnish_pidfile %} -P {{ varnish_pidfile }}{% endif %} -f {{ varnish_config_path }}/default.vcl -S {{ varnish_config_path }}/secret -s {{ varnish_storage }} {{ varnishd_extra_options }}

Also, in varnish.params.j2, a new option:
VARNISH_LISTEN_PROTOCOL={{ varnish_listen_protocol }}

Warning in daemon.log

The filemode of varnish.service generates a Warning in daemon.log:

Aug 10 11:20:21 debian systemd[1]: Configuration file /etc/systemd/system/varnish.service is marked executable. Please remove executable permission bits. Proceeding anyway.

The role sets mode 0655 when it should be 0644.
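A sketch of the one-character fix in the role's template task (the task name is illustrative):

```yaml
- name: Copy Varnish systemd unit file.
  template:
    src: varnish.service.j2
    dest: /etc/systemd/system/varnish.service
    mode: '0644'  # was '0655', which marks the unit file executable
```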

Varnish role not completing installation on Debian 9 (Stretch)

I'm getting the following:

TASK [geerlingguy.varnish : Add Varnish apt repository.] ***********************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: apt.cache.FetchFailedException: E:The repository 'https://repo.varnish-cache.org/debian stretch Release' does not have a Release file.
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ansible_5C07Yh/ansible_module_apt_repository.py\", line 565, in <module>\n    main()\n  File \"/tmp/ansible_5C07Yh/ansible_module_apt_repository.py\", line 553, in main\n    cache.update()\n  File \"/usr/lib/python2.7/dist-packages/apt/cache.py\", line 464, in update\n    raise FetchFailedException(e)\napt.cache.FetchFailedException: E:The repository 'https://repo.varnish-cache.org/debian stretch Release' does not have a Release file.\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 0}

A simple/quick fix would be to switch to the system package (which is 5.0.0 right now anyway). But I think I may be able to switch repos at some point and this problem will just go away.

Utopic Unicorn Fails To Install

The Varnish apt repositories lack builds for Utopic Unicorn, Ubuntu 14.10, so after obtaining the key the build of the box fails.

I fixed this locally by simply installing the version of Varnish 4.0 that is in the Ubuntu repositories.

Should I do this programmatically and submit a PR?

Varnish < 6.1, problem with varnish.service.j2 on Debian 9 Stretch

Hello,

Reload does not work anymore on Debian Stretch with Varnish 4.1.10 (and, I think, any version < 6.1).

In commit e0b2412, ExecReload was set to /usr/sbin/varnishreload instead of /usr/share/varnish/reload-vcl. With Varnish 4.1.10 on Debian Stretch, varnishreload does not exist.

I think varnishreload was added to the packages for version 6.1, so the ExecReload line should be:

ExecReload={% if varnish_version | version_compare('6.1', '<') %}/usr/share/varnish/reload-vcl{% else %}/usr/sbin/varnishreload{% endif %}

Thank you

Error starting varnish on CentOS 7

I get the following error trying to provision a CentOS 7 machine:

TASK [geerlingguy.varnish : Ensure Varnish services are started and enabled on startup.] ***
failed: [lsv3] (item=varnish) => {"changed": false, "item": "varnish", "msg": "Unable to start service varnish: Job for varnish.service failed because the control process exited with error code. See \"systemctl status varnish.service\" and \"journalctl -xe\" for details.\n"}
