
Setup Varnish 2.1.5 on CentOS 6 as a Caching Server and Load Balancer

Updated May 3, 2017


Varnish is a highly regarded HTTP caching server. It sits in front of your web server tier and caches content in RAM so subsequent requests are served as quickly as possible.

Varnish can also act as a basic load balancer. Combining a caching server and a load balancer works well when one or more of your web servers becomes unavailable. Because Varnish is also acting as the load balancer, the end user will no longer see a “Service unavailable” message while the unhealthy web server is being removed from the load balanced pool; they will simply see a cached page instead.

With that brief overview, this post will provide the steps necessary to set up Varnish 2.1.5 on CentOS 6 as a caching server and load balancer in front of two web servers.

This post assumes you already have two or more web servers running and a new, third server running CentOS 6 on which to install Varnish.

Install Varnish

Start by logging in to the new third server.

Install EPEL 6 if you do not already have it installed:

rpm -ivh http://dl.fedoraproject.org/pub/epel/6Server/x86_64/epel-release-6-8.noarch.rpm
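
If you want to confirm the EPEL repository is now enabled before continuing, a quick optional check is:

yum repolist enabled | grep epel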

Now, install Varnish:

yum install -y varnish
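
To double-check which Varnish version the package provided, query the RPM database:

rpm -q varnish

The output should include the installed version number, which this post assumes is 2.1.5.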

Configure Varnish

Once the package installs, there are two configuration files to edit, /etc/sysconfig/varnish and /etc/varnish/default.vcl.

By default, Varnish listens on port 6081. Your end users should talk to your Varnish server rather than directly to your web servers, so change Varnish’s listening port to 80 by opening /etc/sysconfig/varnish and setting VARNISH_LISTEN_PORT to the following:

VARNISH_LISTEN_PORT=80
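
The same file also controls where Varnish keeps its cache. The introduction mentions caching content in RAM, but the EPEL package may default to file-backed storage. If you want the cache kept in RAM, you could switch the storage setting to malloc. This is an optional sketch; the variable names and size below are assumptions, so compare them against your own /etc/sysconfig/varnish:

VARNISH_STORAGE_SIZE=1G
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"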

Next, open /etc/varnish/default.vcl and paste the following configuration. Most of it comes from the default /etc/varnish/default.vcl file installed by the Varnish 2.1.5 EPEL package. Be sure to replace each PUBLIC_IP_ADDRESS with the IP address of the corresponding web server.

backend web1 {
    .host = "PUBLIC_IP_ADDRESS";
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

backend web2 {
    .host = "PUBLIC_IP_ADDRESS";
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

director apache round-robin {
    { .backend = web1; }
    { .backend = web2; }
}


sub vcl_recv {
    set req.backend = apache;

    unset req.http.Cookie;

    # Allow the backend to serve up stale content if it is responding slowly.
    if (req.backend.healthy) {
        set req.grace = 30s;
    } else {
        set req.grace = 5h;
    }

    if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For =
            req.http.X-Forwarded-For ", " client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }

    if (req.request != "GET" &&
      req.request != "HEAD" &&
      req.request != "PUT" &&
      req.request != "POST" &&
      req.request != "TRACE" &&
      req.request != "OPTIONS" &&
      req.request != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }

    if (req.request != "GET" && req.request != "HEAD") {
        /* We only deal with GET and HEAD by default */
        return (pass);
    }

    # Do not cache listed file extensions
    if (req.url ~ "\.(zip|sql|tar|gz|tgz|bzip2|bz2|mp3|mp4|m4a|flv|ogg|swf|aiff|exe|dmg|iso|box|qcow2)") {
        set req.http.X-Cacheable = "NO:nocache file";
        return (pipe);
    }

    if (req.http.Authorization || req.http.Cookie) {
        /* Not cacheable by default */
        return (pass);
    }

    return (lookup);
}
 
sub vcl_pipe {
    return (pipe);
}

sub vcl_pass {
    return (pass);
}

sub vcl_hash {
    set req.hash += req.url;

    if (req.http.host) {
        set req.hash += req.http.host;
    } else {
        set req.hash += server.ip;
    }

    return (hash);
}

sub vcl_hit {
    if (!obj.cacheable) {
        return (pass);
    }

    return (deliver);
}

sub vcl_miss {
    return (fetch);
}

sub vcl_fetch {
    # Allow items to be stale if needed.
    set beresp.grace = 6h;
    set beresp.ttl = 12h;

    if (!beresp.cacheable) {
        return (pass);
    }

    if (beresp.http.Set-Cookie) {
        return (pass);
    }

    return (deliver);
}

sub vcl_deliver {
    # Set a header to track cache HIT or MISS.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }

    return (deliver);
}
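
Before moving on, it is worth confirming the VCL compiles. varnishd can compile a VCL file to C and exit, which catches syntax errors without touching the running service:

varnishd -C -f /etc/varnish/default.vcl

If the configuration is valid, a large block of generated C code is printed; if not, the error message points at the offending line.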

Enable and Start the varnish Service

Enable the varnish service on boot:

chkconfig varnish on
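
You can confirm the runlevels it was enabled for with:

chkconfig --list varnish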

Finally, start the varnish service:

service varnish start
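
To make sure the daemon came up and is listening on port 80, check the service status and the listening sockets (the grep pattern is just an example):

service varnish status
netstat -tlnp | grep varnishd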

Verify Varnish is Working

Assuming everything is working properly and you have changed your website’s DNS records to point to the new Varnish server, you should be able to browse to any of your websites in a web browser without issue.

If you have not yet changed your website’s DNS records to point to the new Varnish server, you can still verify Varnish is working by running the command curl -i -H 'Host: example.com' varnish.example.com | less (where example.com is your website’s domain name and varnish.example.com is the hostname of the new Varnish server).

At the top of the output should be a header called X-Cache. The value of this header will probably be MISS. If you run the curl command again, the value should now be HIT, because Varnish cached the web page after you ran the first curl command.

Once you change your website’s DNS records to point to the new Varnish server, you can run a similar curl command, curl -i http://example.com | less, to verify Varnish is working.
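
If you only want to see the cache header, you can filter the response down to it. This is simply a convenience on top of the curl commands above, using the same placeholder hostnames:

curl -si -H 'Host: example.com' varnish.example.com | grep X-Cache

The first run should print X-Cache: MISS and subsequent runs X-Cache: HIT, matching the logic in vcl_deliver above.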

In addition, you can monitor what Varnish is doing by running varnishlog, varnishhist, varnishstat, and varnishtop on the server running the varnish service.
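
For example, varnishstat -1 dumps the current counters once (cache_hit and cache_miss give a quick hit-rate check), and varnishtop -i RxURL continuously ranks the URLs clients request most often; see each tool’s man page for the log tags and options available in your Varnish version.

varnishstat -1
varnishtop -i RxURL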

References