Deploying Varnish

Introduction

Varnish Cache is a powerful HTTP accelerator designed for content-heavy dynamic websites and APIs. As a reverse proxy that sits in front of your web server, Varnish caches responses and serves them directly to clients, dramatically reducing backend load and improving response times.

Key highlights of Varnish Cache:

  • High Performance: Designed to handle massive amounts of traffic with minimal hardware
  • VCL Configuration: Flexible Varnish Configuration Language for custom caching rules
  • Edge Side Includes: Support for ESI for partial page caching
  • Grace Mode: Serve stale content while fetching fresh data from backends
  • Health Checks: Monitor backend server health and route traffic accordingly
  • Request Handling: Modify, redirect, or block requests based on custom rules
  • Cache Invalidation: Purge or ban cached content programmatically
  • Logging and Statistics: Detailed real-time statistics and logging

This guide walks through deploying Varnish Cache on Klutch.sh using Docker, configuring VCL rules, and optimizing caching for your web applications.

Why Deploy Varnish on Klutch.sh

Deploying Varnish on Klutch.sh provides several advantages for web acceleration:

Simplified Deployment: Klutch.sh automatically detects your Dockerfile and builds Varnish without complex orchestration. Push to GitHub, and your cache layer deploys automatically.

Persistent Configuration: Attach persistent volumes for VCL files and cache storage. Your configuration survives container restarts.

HTTPS by Default: Klutch.sh provides automatic SSL certificates with Varnish handling HTTP acceleration behind the SSL termination.

GitHub Integration: Connect your VCL configuration repository directly from GitHub. Updates trigger automatic redeployments.

Scalable Resources: Allocate memory and CPU based on your caching needs and traffic patterns.

Prerequisites

Before deploying Varnish on Klutch.sh, ensure you have:

  • A Klutch.sh account
  • A GitHub account with a repository for your Varnish configuration
  • Basic familiarity with Docker and containerization concepts
  • A backend web server or application to accelerate
  • Understanding of HTTP caching concepts

Understanding Varnish Architecture

Varnish operates as a reverse proxy with several key components:

Frontend: Accepts incoming HTTP requests from clients

Backend: Your origin server(s) that Varnish fetches content from

Cache: In-memory storage for cached responses

VCL: Configuration language that defines caching behavior

Preparing Your Repository

Create a GitHub repository containing your Dockerfile and VCL configuration.

Repository Structure

varnish-deploy/
├── Dockerfile
├── default.vcl
└── .dockerignore

Creating the Dockerfile

FROM varnish:7.4
# Copy custom VCL configuration
COPY default.vcl /etc/varnish/
# Default settings; override these per deployment via environment variables
ENV VARNISH_SIZE=256m
ENV VARNISH_BACKEND_HOST=localhost
ENV VARNISH_BACKEND_PORT=8080
# Expose Varnish port
EXPOSE 80
# Start Varnish in the foreground; the shell form is required here so that
# ${VARNISH_SIZE} is expanded at runtime (exec-form CMD does not expand variables)
CMD varnishd -F -a :80 -f /etc/varnish/default.vcl -s malloc,${VARNISH_SIZE}
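The backend placeholders in default.vcl are not expanded by Varnish itself, so they need to be rendered before varnishd starts. One way to do this (a sketch, not a built-in feature of the varnish image; the script name and the temp-file stand-in for `/etc/varnish/default.vcl.tmpl` are assumptions for illustration) is a small entrypoint script that substitutes the values with sed:

```shell
#!/bin/sh
# docker-entrypoint.sh (hypothetical): render the VCL template before
# varnishd starts, since VCL cannot read environment variables itself.
set -e

# Resolve defaults the same way the environment variable table describes
VARNISH_BACKEND_HOST="${VARNISH_BACKEND_HOST:-localhost}"
VARNISH_BACKEND_PORT="${VARNISH_BACKEND_PORT:-8080}"

# In the container this would read /etc/varnish/default.vcl.tmpl and write
# /etc/varnish/default.vcl; a temp file stands in for the template here.
tmpl="$(mktemp)"
cat > "$tmpl" <<'EOF'
backend default {
    .host = "${VARNISH_BACKEND_HOST}";
    .port = "${VARNISH_BACKEND_PORT}";
}
EOF

# Substitute the ${...} placeholders with the resolved values
rendered="$(sed -e "s/\${VARNISH_BACKEND_HOST}/${VARNISH_BACKEND_HOST}/g" \
                -e "s/\${VARNISH_BACKEND_PORT}/${VARNISH_BACKEND_PORT}/g" \
                "$tmpl")"
echo "$rendered"
```

In the image you would COPY this script, make it executable, set it as the ENTRYPOINT, end it with `exec "$@"`, and keep the varnishd invocation as CMD.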

Creating the VCL Configuration

Create a default.vcl file. Note that Varnish does not expand environment variables inside VCL, so the ${VARNISH_BACKEND_HOST} and ${VARNISH_BACKEND_PORT} placeholders below must either be replaced with hard-coded values or substituted by an entrypoint script before varnishd starts:

vcl 4.1;

import std;

# Backend definition
backend default {
    .host = "${VARNISH_BACKEND_HOST}";
    .port = "${VARNISH_BACKEND_PORT}";
    .connect_timeout = 5s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
    .max_connections = 100;

    # Health check
    .probe = {
        .url = "/health";
        .timeout = 2s;
        .interval = 5s;
        .window = 5;
        .threshold = 3;
    }
}

# Handle incoming requests
sub vcl_recv {
    # Cache GET and HEAD requests only
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }

    # Skip the cache for authenticated requests
    if (req.http.Authorization) {
        return (pass);
    }

    # Remove cookies for static content
    if (req.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico|woff|woff2|ttf|svg)$") {
        unset req.http.Cookie;
    }

    return (hash);
}

# Process backend responses
sub vcl_backend_response {
    # Cache static content for 1 day
    if (bereq.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico|woff|woff2|ttf|svg)$") {
        set beresp.ttl = 1d;
        unset beresp.http.Set-Cookie;
    }

    # Enable grace mode
    set beresp.grace = 1h;

    return (deliver);
}

# Customize response delivery
sub vcl_deliver {
    # Add a cache hit/miss header for debugging
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
        set resp.http.X-Cache-Hits = obj.hits;
    } else {
        set resp.http.X-Cache = "MISS";
    }

    return (deliver);
}

Environment Variables Reference

Variable               Required  Default    Description
VARNISH_SIZE           No        256m       Cache size (e.g., 256m, 1g)
VARNISH_BACKEND_HOST   Yes       localhost  Backend server hostname
VARNISH_BACKEND_PORT   Yes       8080       Backend server port

Deploying Varnish on Klutch.sh

  1. Push Your Repository to GitHub

     Initialize your repository and push it to GitHub with your Dockerfile and VCL configuration.

  2. Create a New Project on Klutch.sh

     Navigate to the Klutch.sh dashboard and create a new project. Give it a descriptive name like “varnish-cache” or “web-accelerator”.

  3. Create a New App

     Within your project, create a new app. Connect your GitHub account if you haven’t already, then select the repository containing your Varnish Dockerfile.

  4. Configure HTTP Traffic

     In the deployment settings:

     • Select HTTP as the traffic type
     • Set the internal port to 80 (Varnish’s default port)

  5. Set Environment Variables

     Add the following environment variables:

     Variable               Value
     VARNISH_SIZE           512m (adjust based on available memory)
     VARNISH_BACKEND_HOST   Your backend server hostname
     VARNISH_BACKEND_PORT   Your backend server port

  6. Allocate Resources

     Varnish benefits from generous memory for caching:

     • Allocate sufficient RAM for your expected cache size
     • Consider traffic patterns when sizing

  7. Deploy Your Application

     Click Deploy to start the build process. Klutch.sh will build the container and provision an HTTPS certificate.

  8. Configure Your Backend

     Point your DNS or load balancer at Varnish as the frontend, with Varnish forwarding requests to your backend server.

Advanced VCL Configuration

URL-Based Caching Rules

sub vcl_recv {
    # Never cache admin pages
    if (req.url ~ "^/admin") {
        return (pass);
    }

    # Always look up API responses in the cache
    if (req.url ~ "^/api/") {
        return (hash);
    }
}

sub vcl_backend_response {
    # Cache API responses for 5 minutes
    if (bereq.url ~ "^/api/") {
        set beresp.ttl = 5m;
    }
}
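Before deploying a rule change, you can preview routing decisions locally. VCL regexes are PCRE, but anchored prefixes this simple behave the same under grep -E, so a quick shell loop (a sketch, with sample URLs) shows which branch each URL would take:

```shell
# Preview which vcl_recv branch each sample URL would hit, using the same
# anchored patterns as the VCL rules above.
for url in /admin/users /api/v1/items /styles/app.css; do
    if printf '%s' "$url" | grep -qE '^/admin'; then
        echo "$url -> pass (never cached)"
    elif printf '%s' "$url" | grep -qE '^/api/'; then
        echo "$url -> hash (cached 5 minutes)"
    else
        echo "$url -> default handling"
    fi
done
```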

Cache Purging

acl purge {
    "localhost";
    "10.0.0.0"/8;
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {
            return (synth(405, "Not allowed"));
        }
        return (purge);
    }
}

Grace Mode for High Availability

Grace lets Varnish serve stale content while fetching a fresh copy. In Varnish 6 and later this behavior is built in: as long as an object’s TTL plus grace has not expired, the request is answered from cache and a background fetch refreshes the object. You only need to set the grace period on backend responses:

sub vcl_backend_response {
    # Serve stale content for up to 6 hours if the backend is down
    set beresp.grace = 6h;
}

To serve very stale objects only when the backend is actually down, shorten the acceptable staleness per request while the backend is healthy (requires import std;):

sub vcl_recv {
    # Accept only slightly stale objects while the backend is healthy;
    # fall back to the full grace period when it is down
    if (std.healthy(req.backend_hint)) {
        set req.grace = 10s;
    }
}

Monitoring Varnish

Checking Cache Statistics

Access Varnish statistics through the container:

varnishstat

Viewing Request Logs

varnishlog

Key Metrics to Monitor

  • MAIN.cache_hit: Number of cache hits
  • MAIN.cache_miss: Number of cache misses
  • MAIN.client_req: Total client requests
  • MAIN.backend_conn: Successful backend connections
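The hit rate is the single most useful derived metric. The snippet below computes it from sample counter lines that mimic `varnishstat -1` output (the numbers are illustrative); inside the container you would pipe the real output into the same awk program:

```shell
# Derive the cache hit rate (as an integer percentage) from varnishstat-style
# counter lines; in production: varnishstat -1 | awk '...'
hit_rate="$(printf 'MAIN.cache_hit          900  .  Cache hits\nMAIN.cache_miss         100  .  Cache misses\n' |
    awk '/MAIN.cache_hit /{h=$2} /MAIN.cache_miss /{m=$2} END{printf "%d", (100*h)/(h+m)}')"
echo "hit rate: ${hit_rate}%"
```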

Troubleshooting

Common Issues

Low Hit Rate: Review your VCL rules for cache-preventing headers and make sure cookies are stripped from static content as intended. The X-Cache header added in vcl_deliver helps identify which requests are missing.

Backend Health Failures: Check backend connectivity and verify that the health check endpoint (/health in the configuration above) responds successfully.

Memory Issues: Increase VARNISH_SIZE or tighten the rules that decide what gets cached.
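When tuning VARNISH_SIZE, a back-of-the-envelope estimate helps. Varnish adds roughly 1 KB of bookkeeping overhead per object in malloc storage, so a sketch like this (the object count and average size are hypothetical inputs you would replace with your own) gives a starting point:

```shell
# Rough sizing estimate: objects * (average size + ~1 KB overhead), in MB.
objects=50000   # expected distinct cached objects
avg_kb=20       # average object size in KB
suggest_mb="$(awk -v n="$objects" -v kb="$avg_kb" \
    'BEGIN { printf "%d", n * (kb + 1) / 1024 }')"
echo "suggested VARNISH_SIZE: ~${suggest_mb} MB"
```

Treat the result as a floor, not a target; watch the n_lru_nuked counter to see whether Varnish is evicting objects for lack of space.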

Conclusion

Deploying Varnish on Klutch.sh gives you a powerful HTTP accelerator to dramatically improve your web application’s performance. The combination of Varnish’s flexible VCL configuration and Klutch.sh’s deployment simplicity means you can quickly add a robust caching layer to your infrastructure.