
Deploying HAProxy

Introduction

HAProxy (High Availability Proxy) is a powerful, reliable, and high-performance TCP/HTTP load balancer and proxy server. It’s widely used for distributing traffic across multiple backend servers, providing high availability, load balancing, and application acceleration. HAProxy is known for its efficiency, flexibility, and ability to handle massive amounts of concurrent connections with minimal resource usage.

HAProxy is an excellent choice for:

  • Load balancing web applications across multiple servers
  • SSL/TLS termination to offload encryption from backend servers
  • TCP and HTTP traffic routing and proxying
  • Health checking and automatic failover
  • Rate limiting and DDoS protection
  • Advanced traffic routing based on headers, URLs, and other criteria
  • WebSocket proxying and HTTP/2 support

This comprehensive guide walks you through deploying HAProxy on Klutch.sh using a Dockerfile. We’ll cover everything from basic installation and configuration to advanced load balancing strategies, SSL termination, health checks, monitoring, and production deployment best practices. Whether you’re setting up a simple reverse proxy or building a complex multi-tier load balancing architecture, this guide provides the detailed steps you need to get HAProxy running reliably on Klutch.sh.


Prerequisites

Before deploying HAProxy, ensure you have:

  • A Klutch.sh account
  • A GitHub repository for your HAProxy deployment files
  • A Klutch.sh project created for your application
  • Basic understanding of load balancing concepts, Docker, and HAProxy configuration syntax
  • Backend servers/applications to proxy traffic to
  • (Optional) SSL/TLS certificates for HTTPS termination

Understanding HAProxy Architecture

HAProxy operates as a reverse proxy and load balancer between clients and backend servers. Key components include:

  • Frontend: Defines how HAProxy receives incoming client connections (IP, port, SSL settings)
  • Backend: Defines a pool of servers where HAProxy forwards requests
  • ACLs (Access Control Lists): Rules for routing decisions based on request properties
  • Health Checks: Automated monitoring of backend server availability
  • Sticky Sessions: Ensures client requests go to the same backend server
  • Statistics Page: Built-in dashboard for monitoring HAProxy performance

For production deployments, HAProxy requires a well-tuned configuration file and proper health check configuration to ensure high availability.
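To see how these pieces fit together, here is a minimal annotated skeleton (hostnames and health endpoints are placeholders):

# Frontend: where clients connect
frontend http-in
    bind *:80                          # listen for incoming HTTP connections
    acl is_api path_beg /api           # ACL: classify requests by URL path
    use_backend api_servers if is_api  # routing decision based on the ACL
    default_backend web_servers

# Backends: pools of servers HAProxy forwards to
backend web_servers
    balance roundrobin                 # load balancing algorithm
    option httpchk GET /health         # health check probed on every server
    server web1 backend1.example.com:80 check
    server web2 backend2.example.com:80 check

backend api_servers
    balance leastconn
    server api1 api1.example.com:8080 check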


Getting Started: Project Structure

To deploy HAProxy on Klutch.sh, you’ll need to create a repository with the necessary configuration files. Here’s the recommended structure:

haproxy-deploy/
├── Dockerfile
├── haproxy.cfg
├── .dockerignore
├── certs/ (optional, for SSL certificates)
│   ├── server.pem
│   └── ca-bundle.crt
└── README.md

The haproxy.cfg file is the main configuration file where you define frontends, backends, and routing rules.


Deploying with a Dockerfile

Klutch.sh automatically detects a Dockerfile if present in the root directory of your repository. You don’t need to specify Docker as a deployment option in the UI—the platform handles this automatically.

1. Create a Basic HAProxy Configuration

Create a haproxy.cfg file in your repository root. This example shows a basic HTTP load balancer configuration:

global
    log stdout format raw local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    retries 3
    option redispatch

# Statistics page
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 30s
    stats show-legends
    stats show-node

# Frontend - receives client connections
frontend http-in
    bind *:80
    default_backend web_servers
    # Optional: Custom headers
    http-request set-header X-Forwarded-Proto http
    http-request set-header X-Forwarded-For %[src]

# Backend - pool of application servers
backend web_servers
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    # Add your backend servers here
    server web1 backend1.example.com:80 check inter 3000 rise 2 fall 3
    server web2 backend2.example.com:80 check inter 3000 rise 2 fall 3
    server web3 backend3.example.com:80 check inter 3000 rise 2 fall 3

Configuration Breakdown:

  • global: Sets process-wide parameters like logging and connection limits
  • defaults: Default settings for all frontend/backend sections
  • listen stats: Exposes HAProxy statistics on port 8404 at /stats
  • frontend http-in: Accepts incoming HTTP connections on port 80
  • backend web_servers: Defines backend servers with round-robin load balancing and health checks

2. Create the Dockerfile

Create a Dockerfile in your repository root:

# Use official HAProxy Alpine-based image for smaller size
FROM haproxy:2.9-alpine
# Copy HAProxy configuration
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# Optional: Copy SSL certificates if using HTTPS
# COPY certs/ /etc/haproxy/certs/
# HAProxy listens on multiple ports - we'll expose the main ones
# Port 80 for HTTP traffic
# Port 443 for HTTPS traffic (if using SSL)
# Port 8404 for statistics dashboard
EXPOSE 80 443 8404
# Validate configuration on build
RUN haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg
# HAProxy runs as non-root by default
# The base image provides the default CMD

This Dockerfile uses the official HAProxy Alpine image (smaller footprint), copies your configuration file, validates it during build, and exposes the necessary ports.
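Before pushing to GitHub, you can build and smoke-test the image locally (the image tag and host ports here are arbitrary):

# Build the image and run HAProxy, publishing HTTP and the stats page on local ports
docker build -t haproxy-test .
docker run --rm -p 8080:80 -p 8404:8404 haproxy-test

# In another terminal, confirm the stats page responds
curl -I http://localhost:8404/stats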

3. Create a .dockerignore file

To optimize your Docker build, create a .dockerignore file:

.git
.gitignore
README.md
*.md
.env
.env.*
docker-compose.yml
node_modules

4. Advanced Configuration Example: SSL Termination

For production deployments with HTTPS, create an advanced configuration with SSL termination:

global
    log stdout format raw local0
    maxconn 10000
    user haproxy
    group haproxy
    daemon
    # SSL/TLS settings
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-options ssl-min-ver TLSv1.2
    tune.ssl.default-dh-param 2048

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option forwardfor
    option http-server-close
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout http-keep-alive 10s
    timeout http-request 10s
    retries 3
    option redispatch
    # Security headers
    http-response set-header X-Frame-Options SAMEORIGIN
    http-response set-header X-Content-Type-Options nosniff
    http-response set-header X-XSS-Protection "1; mode=block"

# Statistics page (internal only)
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats show-legends
    stats show-node
    stats auth admin:CHANGE_THIS_PASSWORD

# Frontend - HTTPS with SSL termination
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/server.pem
    # Forward client information to backends
    http-request set-header X-Forwarded-Proto https
    http-request set-header X-Forwarded-For %[src]
    # ACLs for different routing rules
    acl is_api path_beg /api
    acl is_admin path_beg /admin
    # Route based on path
    use_backend api_servers if is_api
    use_backend admin_servers if is_admin
    default_backend web_servers

# Frontend - HTTP (redirect to HTTPS)
frontend http-in
    bind *:80
    # Redirect all HTTP to HTTPS
    http-request redirect scheme https code 301 unless { ssl_fc }

# Backend - Web application servers
backend web_servers
    balance roundrobin
    option httpchk GET /health HTTP/1.1\r\nHost:\ example-app.klutch.sh
    http-check expect status 200
    cookie SERVERID insert indirect nocache
    server web1 10.0.1.10:8080 check cookie web1 inter 3000 rise 2 fall 3 maxconn 500
    server web2 10.0.1.11:8080 check cookie web2 inter 3000 rise 2 fall 3 maxconn 500
    server web3 10.0.1.12:8080 check cookie web3 inter 3000 rise 2 fall 3 maxconn 500

# Backend - API servers
backend api_servers
    balance leastconn
    option httpchk GET /api/health
    http-check expect status 200
    server api1 10.0.2.10:8080 check inter 2000 rise 2 fall 3
    server api2 10.0.2.11:8080 check inter 2000 rise 2 fall 3

# Backend - Admin servers
backend admin_servers
    balance roundrobin
    option httpchk GET /admin/health
    http-check expect status 200
    server admin1 10.0.3.10:8080 check inter 5000 rise 2 fall 3

Advanced Features:

  • SSL/TLS termination with modern cipher suites
  • HTTP to HTTPS redirection
  • Path-based routing using ACLs
  • Multiple backend pools (web, API, admin)
  • Sticky sessions using cookies
  • Different balancing algorithms (roundrobin, leastconn)
  • Security headers added to responses
  • Password-protected statistics page

5. TCP Mode Configuration Example

For TCP traffic (databases, custom protocols), create a TCP-mode configuration:

global
    log stdout format raw local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode tcp
    option tcplog
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    retries 3

# Statistics page
listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats
    stats refresh 30s

# TCP frontend for database connections
frontend database-in
    bind *:5432
    default_backend postgres_servers

# Backend - PostgreSQL servers
backend postgres_servers
    balance roundrobin
    option tcp-check
    server db1 postgres1.example.com:5432 check inter 3000 rise 2 fall 3
    server db2 postgres2.example.com:5432 check inter 3000 rise 2 fall 3
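Once deployed with the TCP traffic type (see Step 4 below), any PostgreSQL client can connect through the proxy. For example, assuming the platform exposes TCP apps on port 8000 and placeholder credentials:

# Connect to PostgreSQL through the HAProxy TCP frontend
psql "host=example-app.klutch.sh port=8000 user=appuser dbname=appdb"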

Deploying to Klutch.sh

Follow these steps to deploy HAProxy on Klutch.sh:

    Step 1: Prepare Your Repository

    Push your HAProxy configuration files (Dockerfile, haproxy.cfg) to your GitHub repository.

    Step 2: Create a New App

    In the Klutch.sh dashboard, create a new app inside your project for the HAProxy deployment.

    Step 3: Configure Repository

    • Select your GitHub repository containing the HAProxy Dockerfile
    • Choose the branch you want to deploy (e.g., main or production)
    • Klutch.sh will automatically detect the Dockerfile in your repository root

    Step 4: Configure Traffic Type

    • Select HTTP as the traffic type for web applications
    • For TCP mode deployments (databases, custom protocols), select TCP instead
    • When using TCP mode, you can connect to your deployed app on port 8000

    Step 5: Set Internal Port

    • Set the internal port to 80 for HTTP traffic (or the port your frontend binds to in haproxy.cfg)
    • For HTTPS-only deployments, set internal port to 443
    • If you need to access the statistics page, note that it runs on port 8404 internally

    Step 6: Configure Environment Variables

    HAProxy itself doesn’t require environment variables for basic operation, but you may want to set these for advanced use cases:

    • BACKEND_SERVERS - If you’re dynamically generating backend server lists
    • SSL_CERT_PATH - Path to SSL certificates if mounted via volume
    • MAX_CONNECTIONS - Override the maxconn setting
    • STATS_PASSWORD - Password for the statistics page

    To customize HAProxy behavior through environment variables, you’ll need to modify your Dockerfile to use environment variable substitution. For Nixpacks customizations:

    • START_COMMAND - Override the start command (e.g., haproxy -f /usr/local/etc/haproxy/haproxy.cfg -db)
    • BUILD_COMMAND - Custom build steps if needed

    Step 7: Attach Persistent Storage (Optional)

    If you need to persist logs, certificates, or configuration files across deployments, attach a persistent volume:

    • Mount path: /var/log/haproxy (for persistent logs)
    • Mount path: /etc/haproxy/certs (for SSL certificates)
    • Size: Choose based on your needs (5-10 GB is typically sufficient for logs)

    For SSL certificates, you can either:

    1. Include them in your Docker image during build (for static certificates)
    2. Mount them from a persistent volume (for dynamic certificate management)

    Step 8: Choose Resources

    • Select your preferred region for deployment
    • Choose compute resources based on expected traffic:
      • Light traffic: 1 vCPU, 1 GB RAM
      • Moderate traffic: 2 vCPU, 2 GB RAM
      • High traffic: 4+ vCPU, 4+ GB RAM
    • Set the number of instances (for high availability, deploy at least 2)

    Step 9: Deploy

    • Review all settings
    • Click Create to deploy your HAProxy instance
    • Klutch.sh will build your Docker image and start the container
    • Monitor the build logs to ensure successful deployment

    Step 10: Verify Deployment

    Once deployed, your HAProxy instance will be available at a URL like example-app.klutch.sh. You can:

    • Test the main service by accessing your configured frontend port
    • Access the statistics dashboard at http://example-app.klutch.sh:8404/stats (if exposed)
    • Verify health checks are passing for your backend servers
    • Test load balancing by making multiple requests and checking which backend serves them (see the example below)
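    A simple way to observe the distribution from your terminal, assuming you enabled the SERVERID sticky-session cookie from the advanced example (adjust the URL to your deployment):

    for i in $(seq 1 6); do
      # Each fresh request (no cookie sent) is balanced, so the SERVERID value rotates across backends
      curl -sI https://example-app.klutch.sh/ | grep -i '^set-cookie: SERVERID'
    done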

Environment Variables and Configuration

While HAProxy primarily uses the configuration file, you can use environment variables for dynamic configuration. Here’s how to modify your Dockerfile to support environment variables:

FROM haproxy:2.9-alpine
# The official 2.4+ images run as the haproxy user; switch to root to install packages
USER root
# Install envsubst for environment variable substitution
RUN apk add --no-cache gettext
# Copy template configuration
COPY haproxy.cfg.template /usr/local/etc/haproxy/haproxy.cfg.template
# Entrypoint: apply defaults, render the template, validate, then start HAProxy
RUN printf '%s\n' \
      '#!/bin/sh' \
      'export MAX_CONNECTIONS="${MAX_CONNECTIONS:-4096}"' \
      'export BACKEND_1="${BACKEND_1:-backend1.example.com}"' \
      'export BACKEND_2="${BACKEND_2:-backend2.example.com}"' \
      'export BACKEND_PORT="${BACKEND_PORT:-80}"' \
      'envsubst < /usr/local/etc/haproxy/haproxy.cfg.template > /usr/local/etc/haproxy/haproxy.cfg' \
      'haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg' \
      'exec haproxy -f /usr/local/etc/haproxy/haproxy.cfg -db' \
      > /docker-entrypoint.sh && \
    chmod +x /docker-entrypoint.sh && \
    chown haproxy:haproxy /usr/local/etc/haproxy
USER haproxy
EXPOSE 80 443 8404
ENTRYPOINT ["/docker-entrypoint.sh"]

Then create haproxy.cfg.template with plain ${VAR} placeholders (envsubst does not understand ${VAR:-default} syntax, so default values are applied in the entrypoint script above):

global
    log stdout format raw local0
    maxconn ${MAX_CONNECTIONS}
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    option httpchk GET /health
    server web1 ${BACKEND_1}:${BACKEND_PORT} check
    server web2 ${BACKEND_2}:${BACKEND_PORT} check

Persistent Storage Configuration

For production deployments, you may want to persist certain data. Here are recommended mount paths:

Logs Directory

Mount a volume to persist HAProxy logs:

  • Mount path: /var/log/haproxy
  • Recommended size: 5-10 GB
  • Purpose: Persist access logs and error logs across deployments

Note that HAProxy does not write log files itself: the log directive sends messages to stdout or to a syslog endpoint. To persist logs under the mounted volume, run a syslog daemon (for example rsyslog, either in the image or as a sidecar) that listens on a local socket and writes to /var/log/haproxy, then point HAProxy at that socket:

global
    # Send logs to the local syslog socket; the syslog daemon writes them to /var/log/haproxy/
    log /dev/log local0
    log /dev/log local0 err

SSL Certificates Directory

Mount a volume for SSL certificates that need to be updated without rebuilding:

  • Mount path: /etc/haproxy/certs
  • Recommended size: 1 GB
  • Purpose: Store and update SSL/TLS certificates dynamically

Configuration Directory (Advanced)

For dynamic configuration reloading:

  • Mount path: /usr/local/etc/haproxy
  • Recommended size: 1 GB
  • Purpose: Update configuration without rebuilding the image

Note: When attaching persistent volumes in Klutch.sh, you only need to specify the mount path and size. The volume name is managed automatically.


Load Balancing Algorithms

HAProxy supports several load balancing algorithms. Choose based on your use case:

  • roundrobin: Distributes requests evenly across servers (default, good for most cases)
  • leastconn: Routes to server with fewest active connections (good for long-lived connections)
  • source: Routes based on client IP (ensures same client goes to same server)
  • uri: Routes based on URL (good for caching)
  • random: Random selection (simple, works well for stateless apps)

Example configuration for different algorithms:

backend web_roundrobin
    balance roundrobin
    server web1 backend1.example.com:80 check
    server web2 backend2.example.com:80 check

backend api_leastconn
    balance leastconn
    server api1 backend1.example.com:8080 check
    server api2 backend2.example.com:8080 check

backend cache_uri
    balance uri
    hash-type consistent
    server cache1 backend1.example.com:80 check
    server cache2 backend2.example.com:80 check

Health Checks and Monitoring

Configuring Health Checks

Health checks ensure HAProxy only routes traffic to healthy backend servers:

backend web_servers
    balance roundrobin
    # HTTP health check
    option httpchk GET /health HTTP/1.1\r\nHost:\ example-app.klutch.sh
    http-check expect status 200
    # Check every 3 seconds, mark healthy after 2 successful checks
    # and unhealthy after 3 failed checks
    server web1 backend1.example.com:80 check inter 3000 rise 2 fall 3
    server web2 backend2.example.com:80 check inter 3000 rise 2 fall 3

TCP Health Checks

For TCP services (databases, message queues):

backend database_servers
    mode tcp
    balance leastconn
    # TCP connection check
    option tcp-check
    tcp-check connect
    server db1 postgres1.example.com:5432 check inter 5000 rise 2 fall 3
    server db2 postgres2.example.com:5432 check inter 5000 rise 2 fall 3

Monitoring with Statistics Page

Access the built-in statistics page to monitor:

  • Current connections and queue size
  • Server status (UP, DOWN, MAINTENANCE)
  • Request rates and error rates
  • Session information
  • Health check status

Configure the stats page in your haproxy.cfg:

listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats show-legends
    stats show-node
    stats auth admin:secure-password-here
    stats admin if TRUE

Access at: http://example-app.klutch.sh:8404/stats


SSL/TLS Configuration

Generating SSL Certificates

For SSL termination, you need a certificate file in PEM format containing:

  1. Private key
  2. Certificate
  3. Certificate chain (if applicable)

Combine them into a single file:

cat server.key server.crt intermediate.crt > server.pem
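If you only need a certificate for testing, you can generate a self-signed one and bundle it the same way (the domain is a placeholder):

# Create a self-signed key and certificate valid for one year, then bundle them for HAProxy
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt -subj "/CN=example-app.klutch.sh"
cat server.key server.crt > server.pem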

SSL Configuration in HAProxy

frontend https-in
    # Force TLS 1.2+ on this listener; cipher defaults are set in the global
    # section via ssl-default-bind-ciphers
    bind *:443 ssl crt /etc/haproxy/certs/server.pem ssl-min-ver TLSv1.2
    # Enable HSTS (HTTP Strict Transport Security)
    http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    # Forward protocol header
    http-request set-header X-Forwarded-Proto https
    default_backend web_servers

Automatic HTTP to HTTPS Redirect

frontend http-in
    bind *:80
    # Redirect all HTTP traffic to HTTPS
    redirect scheme https code 301 if !{ ssl_fc }

Advanced Routing with ACLs

HAProxy’s Access Control Lists (ACLs) enable sophisticated routing decisions:

Path-Based Routing

frontend main
    bind *:80
    # Define ACLs
    acl is_api path_beg /api
    acl is_static path_beg /static /images /css /js
    acl is_admin path_beg /admin
    # Route based on ACLs
    use_backend api_servers if is_api
    use_backend static_servers if is_static
    use_backend admin_servers if is_admin
    default_backend web_servers

Header-Based Routing

frontend main
    bind *:80
    # Route based on headers
    acl is_mobile hdr_sub(User-Agent) -i mobile android iphone
    acl is_api hdr(X-API-Key) -m found
    use_backend mobile_backend if is_mobile
    use_backend api_backend if is_api
    default_backend web_servers

Host-Based Routing (Virtual Hosts)

frontend main
    bind *:80
    # Define ACLs for different domains
    acl host_api hdr(host) -i api.example.com
    acl host_admin hdr(host) -i admin.example.com
    acl host_www hdr(host) -i www.example.com example.com
    # Route to different backends
    use_backend api_servers if host_api
    use_backend admin_servers if host_admin
    use_backend web_servers if host_www

Rate Limiting and DDoS Protection

Protect your backend servers with HAProxy’s rate limiting features:

frontend http-in
    bind *:80
    # Track connection rates per source IP
    stick-table type ip size 100k expire 30s store conn_rate(3s)
    # Record each incoming connection in the table (without this, the table is never populated)
    tcp-request connection track-sc0 src
    # Define ACL for rate limiting
    acl too_many_requests src_conn_rate ge 20
    # Deny excessive requests
    http-request deny deny_status 429 if too_many_requests
    default_backend web_servers

Connection Limiting

backend web_servers
    # Limit connections per server
    server web1 backend1.example.com:80 check maxconn 100
    server web2 backend2.example.com:80 check maxconn 100

WebSocket Support

Enable WebSocket proxying for real-time applications:

frontend http-in
    bind *:80
    # Detect WebSocket upgrade requests
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws.
    use_backend websocket_backend if is_websocket
    default_backend web_servers

backend websocket_backend
    balance leastconn
    option http-server-close
    # Increase timeouts for WebSocket connections
    timeout tunnel 3600s
    timeout server 3600s
    server ws1 backend1.example.com:8080 check
    server ws2 backend2.example.com:8080 check

Logging and Debugging

Structured Logging

Configure detailed logging for troubleshooting:

global
    log stdout format raw local0 info
    log stdout format raw local0 err

defaults
    log global
    option httplog
    option logasap
    # Custom log format
    log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

Debug Mode

For troubleshooting, you can enable debug mode by modifying your start command:

Set the Nixpacks START_COMMAND environment variable in Klutch.sh:

START_COMMAND=haproxy -f /usr/local/etc/haproxy/haproxy.cfg -d

Note: Debug mode is verbose and should only be used temporarily for troubleshooting.


Security Best Practices

1. Use Strong SSL/TLS Configuration

global
    # Modern SSL configuration
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

2. Add Security Headers

http-response set-header X-Frame-Options SAMEORIGIN
http-response set-header X-Content-Type-Options nosniff
http-response set-header X-XSS-Protection "1; mode=block"
http-response set-header Referrer-Policy "strict-origin-when-cross-origin"
http-response set-header Permissions-Policy "geolocation=(), microphone=(), camera=()"

3. Protect Statistics Page

listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats auth admin:YOUR_SECURE_PASSWORD_HERE
    stats hide-version
    # Restrict access by IP (optional)
    acl allowed_ips src 10.0.0.0/8
    http-request deny unless allowed_ips

4. Hide HAProxy Version

defaults
    option http-server-close
    http-response del-header Server
    http-response set-header Server "WebServer"

5. Implement Request Filtering

frontend http-in
    bind *:80
    # Block common attack patterns
    acl bad_user_agent hdr_sub(User-Agent) -i sqlmap nikto nmap
    acl bad_method method TRACE
    acl bad_path path_beg -i /phpmyadmin /admin /wp-admin
    http-request deny if bad_user_agent
    http-request deny if bad_method
    http-request deny deny_status 404 if bad_path

Performance Tuning

System-Level Optimizations

For high-traffic deployments, optimize HAProxy performance:

global
    # Increase connection limit
    maxconn 50000
    # Use nbthread for multi-core systems
    # Note: nbproc (processes) was removed in HAProxy 2.5 in favor of nbthread (threads),
    # which provide better performance and resource sharing than processes
    nbthread 4
    # CPU affinity for better performance
    cpu-map auto:1/1-4 0-3
    # Tune buffers
    tune.bufsize 32768
    tune.maxrewrite 1024
    # SSL session cache
    tune.ssl.cachesize 100000
    tune.ssl.lifetime 300
    tune.ssl.default-dh-param 2048

Connection Pooling

Reuse backend connections for better performance:

backend web_servers
    balance leastconn
    option http-keep-alive
    option prefer-last-server
    # Connection pooling
    http-reuse aggressive
    server web1 backend1.example.com:80 check maxconn 1000
    server web2 backend2.example.com:80 check maxconn 1000

Timeout Tuning

Adjust timeouts based on your application requirements:

defaults
    # Short-lived requests
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    # Keep-alive
    timeout http-keep-alive 10s
    # Queue timeout
    timeout queue 30s
    # Tunnel timeout for WebSockets
    timeout tunnel 3600s

Troubleshooting Common Issues

1. HAProxy Not Starting

Symptom: Container exits immediately or won’t start

Solutions:

  • Validate configuration syntax:
    haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg
  • Check container logs for error messages
  • Ensure ports aren’t conflicting
  • Verify file permissions on configuration files

2. 503 Service Unavailable

Symptom: Clients receive 503 errors

Solutions:

  • Check backend server health in statistics page
  • Verify backend servers are accessible from HAProxy (see the check below)
  • Check health check configuration
  • Ensure backend servers are listening on correct ports
  • Review firewall rules between HAProxy and backends
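As a quick reachability check, exec into the HAProxy container and probe a backend directly (busybox wget is included in the Alpine image; the hostname and path below are placeholders):

# Confirm the backend's health endpoint answers from inside the HAProxy container
wget -qO- http://backend1.example.com/health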

3. SSL Certificate Errors

Symptom: SSL handshake failures or certificate warnings

Solutions:

  • Verify certificate file format (must be PEM with key, cert, and chain)
  • Check file permissions on certificate files
  • Ensure certificate matches the domain
  • Verify certificate isn’t expired
  • Test certificate with: openssl s_client -connect example-app.klutch.sh:443

4. High Latency

Symptom: Slow response times

Solutions:

  • Check HAProxy statistics for queue size
  • Increase maxconn limits if queues are building
  • Optimize backend response times
  • Enable connection pooling with http-reuse
  • Consider increasing instance resources (CPU/RAM)
  • Review timeout settings

5. Health Checks Failing

Symptom: Servers marked as DOWN despite being healthy

Solutions:

  • Verify health check endpoint exists on backend
  • Check health check interval and thresholds (inter, rise, fall)
  • Ensure health check response matches expected status
  • Review backend server logs for health check requests
  • Test health check manually: curl http://backend1.example.com/health

6. Statistics Page Not Accessible

Symptom: Can’t access /stats endpoint

Solutions:

  • Verify stats listen block is configured in haproxy.cfg
  • Ensure port 8404 is accessible
  • Check authentication credentials
  • Verify internal port routing in Klutch.sh
  • Check if statistics page is bound to correct interface (0.0.0.0 vs 127.0.0.1)

Monitoring and Metrics

Built-In Statistics

HAProxy provides comprehensive statistics through multiple interfaces:

  1. Web Dashboard: Access at http://example-app.klutch.sh:8404/stats
  2. Stats Socket: Unix socket for programmatic access
  3. CSV Export: Machine-readable statistics (see the example below)
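For example, the CSV export can be fetched over HTTP by appending ;csv to the stats URI (the credentials and hostname are whatever you configured):

# Fetch machine-readable statistics from the stats endpoint
curl -s -u admin:CHANGE_THIS_PASSWORD "http://example-app.klutch.sh:8404/stats;csv"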

Key Metrics to Monitor

Monitor these critical metrics for production deployments:

  • Backend Server Status: UP/DOWN state of each backend
  • Active Sessions: Current number of active connections
  • Queue Size: Requests waiting for available backend
  • Error Rate: 4xx and 5xx response rates
  • Response Time: Average backend response time
  • Throughput: Requests per second, bytes transferred
  • Health Check Status: Success/failure rate of health checks

Exporting Metrics

For integration with monitoring tools, enable the statistics socket:

global
    stats socket /var/run/haproxy.sock mode 660 level admin
    stats timeout 30s

Query statistics via socket:

echo "show stat" | socat stdio /var/run/haproxy.sock

Integration with External Monitoring

For production monitoring, consider integrating with:

  • Prometheus (using the HAProxy Exporter, or HAProxy's built-in exporter shown below)
  • Datadog
  • New Relic
  • Grafana
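HAProxy also ships a built-in Prometheus endpoint, so a separate exporter is optional. A minimal sketch, assuming your HAProxy build includes the exporter (recent official Docker library images enable it; if yours does not, the config check will reject the use-service rule):

frontend prometheus
    bind *:8405
    mode http
    http-request use-service prometheus-exporter if { path /metrics }
    no log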

Scaling and High Availability

Horizontal Scaling

Deploy multiple HAProxy instances for high availability:

  1. In Klutch.sh, set the number of instances to 2 or more
  2. Place a DNS load balancer or another HAProxy in front
  3. Use health checks between layers
  4. Configure session persistence if needed

Vertical Scaling

For high-traffic scenarios, increase resources:

  • Light: 1-2 vCPU, 1-2 GB RAM (up to 10,000 connections)
  • Medium: 2-4 vCPU, 4 GB RAM (up to 50,000 connections)
  • High: 4-8 vCPU, 8-16 GB RAM (100,000+ connections)

Zero-Downtime Configuration Reload

HAProxy supports graceful configuration reloads:

# Validate new configuration
haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg.new
# Reload with zero downtime (requires a pidfile directive in the global section)
haproxy -f /usr/local/etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)
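When HAProxy runs in master-worker mode (the official image's entrypoint adds -W), you can instead reload by signalling the master process. A minimal sketch, assuming the master runs as PID 1 inside the container:

# Ask the master process to re-read haproxy.cfg and start fresh workers
kill -USR2 1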

For dynamic configuration updates in Klutch.sh, consider:

  1. Using persistent volumes for configuration
  2. Implementing a sidecar container for configuration management
  3. Triggering redeployments for major config changes

Example: Complete Production Setup

Here’s a comprehensive example for a production HAProxy deployment:

Directory Structure:

haproxy-production/
├── Dockerfile
├── haproxy.cfg
├── .dockerignore
└── README.md

Dockerfile:

FROM haproxy:2.9-alpine

# The official 2.4+ images run as the haproxy user; switch to root for package
# installation and directory setup
USER root

# Install runtime dependencies
RUN apk add --no-cache ca-certificates

# Copy configuration
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Note: build-time validation (haproxy -c) is skipped here because the SSL
# certificate referenced in haproxy.cfg is mounted from a volume at runtime

# Create directories for logs and the stats socket
RUN mkdir -p /var/log/haproxy /var/run && \
    chown -R haproxy:haproxy /var/log/haproxy /var/run

EXPOSE 80 443 8404

# Health check (validates the configuration; swap in an HTTP probe if preferred)
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
    CMD haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg || exit 1

# Drop privileges back to the haproxy user
USER haproxy

CMD ["haproxy", "-f", "/usr/local/etc/haproxy/haproxy.cfg", "-db"]

haproxy.cfg:

global
    log stdout format raw local0
    maxconn 10000
    user haproxy
    group haproxy
    # SSL configuration
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
    tune.ssl.default-dh-param 2048
    tune.ssl.cachesize 100000

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.1
    timeout connect 5s
    timeout client 50s
    timeout server 50s
    timeout http-keep-alive 10s
    retries 3
    # Security headers
    http-response set-header X-Frame-Options SAMEORIGIN
    http-response set-header X-Content-Type-Options nosniff
    http-response set-header X-XSS-Protection "1; mode=block"

# Statistics
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 30s
    stats auth admin:CHANGE_THIS_PASSWORD
    stats hide-version

# Frontend HTTPS
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/server.pem
    http-request set-header X-Forwarded-Proto https
    http-request set-header X-Real-IP %[src]
    # Rate limiting: track connections per source IP, then deny abusive rates
    stick-table type ip size 100k expire 30s store conn_rate(3s)
    tcp-request connection track-sc0 src
    acl abuse src_conn_rate ge 30
    http-request deny deny_status 429 if abuse
    # Routing
    acl is_api path_beg /api
    use_backend api_servers if is_api
    default_backend web_servers

# Frontend HTTP (redirect to HTTPS)
frontend http-in
    bind *:80
    redirect scheme https code 301 if !{ ssl_fc }

# Backend Web Servers
backend web_servers
    balance roundrobin
    cookie SERVERID insert indirect nocache
    option httpchk GET /health HTTP/1.1\r\nHost:\ example-app.klutch.sh
    http-check expect status 200
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server web1 10.0.1.10:8080 check cookie web1 inter 3000 rise 2 fall 3 maxconn 500
    server web2 10.0.1.11:8080 check cookie web2 inter 3000 rise 2 fall 3 maxconn 500
    server web3 10.0.1.12:8080 check cookie web3 inter 3000 rise 2 fall 3 maxconn 500

# Backend API Servers
backend api_servers
    balance leastconn
    option httpchk GET /api/health
    http-check expect status 200
    compression algo gzip
    compression type text/html text/plain text/css text/javascript application/json
    server api1 10.0.2.10:8080 check inter 2000 rise 2 fall 3 maxconn 1000
    server api2 10.0.2.11:8080 check inter 2000 rise 2 fall 3 maxconn 1000

Deployment Steps:

    1. Push the code to GitHub
    2. In Klutch.sh, create a new app with your repository
    3. Select HTTP traffic type
    4. Set internal port to 443 (for HTTPS frontend)
    5. Attach a volume to /etc/haproxy/certs for SSL certificates
    6. Attach a volume to /var/log/haproxy for persistent logs
    7. Set environment variables (if using dynamic config)
    8. Choose 2+ instances for high availability
    9. Deploy and monitor via statistics dashboard

Resources and Further Reading

  • HAProxy documentation: https://docs.haproxy.org
  • HAProxy website: https://www.haproxy.org
  • Official HAProxy Docker image: https://hub.docker.com/_/haproxy


Conclusion

Deploying HAProxy on Klutch.sh provides a powerful, flexible solution for load balancing and proxying traffic to your applications. With support for HTTP, HTTPS, TCP, and advanced routing capabilities, HAProxy can handle a wide variety of use cases from simple reverse proxying to complex multi-tier load balancing architectures.

Key takeaways:

  • Klutch.sh automatically detects and deploys your Dockerfile
  • Configure HAProxy through the haproxy.cfg file
  • Use persistent volumes for logs and SSL certificates
  • Monitor your deployment through the built-in statistics page
  • Implement health checks to ensure high availability
  • Scale horizontally by deploying multiple instances
  • Secure your deployment with SSL/TLS, rate limiting, and security headers

For additional support and questions, refer to the Klutch.sh documentation or the HAProxy community.