Deploying HAProxy
Introduction
HAProxy (High Availability Proxy) is a powerful, reliable, and high-performance TCP/HTTP load balancer and proxy server. It’s widely used for distributing traffic across multiple backend servers, providing high availability, load balancing, and application acceleration. HAProxy is known for its efficiency, flexibility, and ability to handle massive amounts of concurrent connections with minimal resource usage.
HAProxy is an excellent choice for:
- Load balancing web applications across multiple servers
- SSL/TLS termination to offload encryption from backend servers
- TCP and HTTP traffic routing and proxying
- Health checking and automatic failover
- Rate limiting and DDoS protection
- Advanced traffic routing based on headers, URLs, and other criteria
- WebSocket proxying and HTTP/2 support
This comprehensive guide walks you through deploying HAProxy on Klutch.sh using a Dockerfile. We’ll cover everything from basic installation and configuration to advanced load balancing strategies, SSL termination, health checks, monitoring, and production deployment best practices. Whether you’re setting up a simple reverse proxy or building a complex multi-tier load balancing architecture, this guide provides the detailed steps you need to get HAProxy running reliably on Klutch.sh.
Prerequisites
Before deploying HAProxy, ensure you have:
- A Klutch.sh account
- A GitHub repository for your HAProxy deployment files
- A Klutch.sh project created for your application
- Basic understanding of load balancing concepts, Docker, and HAProxy configuration syntax
- Backend servers/applications to proxy traffic to
- (Optional) SSL/TLS certificates for HTTPS termination
Understanding HAProxy Architecture
HAProxy operates as a reverse proxy and load balancer between clients and backend servers. Key components include:
- Frontend: Defines how HAProxy receives incoming client connections (IP, port, SSL settings)
- Backend: Defines a pool of servers where HAProxy forwards requests
- ACLs (Access Control Lists): Rules for routing decisions based on request properties
- Health Checks: Automated monitoring of backend server availability
- Sticky Sessions: Ensures client requests go to the same backend server
- Statistics Page: Built-in dashboard for monitoring HAProxy performance
For production deployments, HAProxy requires a well-tuned configuration file and proper health check configuration to ensure high availability.
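To make the relationship between these pieces concrete, here is a minimal sketch of a haproxy.cfg that wires a frontend, an ACL, and two backends together. The hostnames and paths are placeholders, not part of any real deployment:

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http-in
    bind *:80
    # Route /api requests to a dedicated pool, everything else to the web pool
    acl is_api path_beg /api
    use_backend api_servers if is_api
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 web1.internal:8080 check
    server web2 web2.internal:8080 check

backend api_servers
    balance leastconn
    server api1 api1.internal:8080 check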
Getting Started: Project Structure
To deploy HAProxy on Klutch.sh, you’ll need to create a repository with the necessary configuration files. Here’s the recommended structure:
haproxy-deploy/
├── Dockerfile
├── haproxy.cfg
├── .dockerignore
├── certs/              (optional, for SSL certificates)
│   ├── server.pem
│   └── ca-bundle.crt
└── README.md

The haproxy.cfg file is the main configuration file where you define frontends, backends, and routing rules.
Deploying with a Dockerfile
Klutch.sh automatically detects a Dockerfile if present in the root directory of your repository. You don’t need to specify Docker as a deployment option in the UI—the platform handles this automatically.
1. Create a Basic HAProxy Configuration
Create a haproxy.cfg file in your repository root. This example shows a basic HTTP load balancer configuration:
global
    log stdout format raw local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    retries 3
    option redispatch

# Statistics page
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 30s
    stats show-legends
    stats show-node

# Frontend - receives client connections
frontend http-in
    bind *:80
    default_backend web_servers

    # Optional: Custom headers
    http-request set-header X-Forwarded-Proto http
    http-request set-header X-Forwarded-For %[src]

# Backend - pool of application servers
backend web_servers
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200

    # Add your backend servers here
    server web1 backend1.example.com:80 check inter 3000 rise 2 fall 3
    server web2 backend2.example.com:80 check inter 3000 rise 2 fall 3
    server web3 backend3.example.com:80 check inter 3000 rise 2 fall 3

Configuration Breakdown:
- global: Sets process-wide parameters like logging and connection limits
- defaults: Default settings for all frontend/backend sections
- listen stats: Exposes HAProxy statistics on port 8404 at /stats
- frontend http-in: Accepts incoming HTTP connections on port 80
- backend web_servers: Defines backend servers with round-robin load balancing and health checks
2. Create the Dockerfile
Create a Dockerfile in your repository root:
# Use official HAProxy Alpine-based image for smaller size
FROM haproxy:2.9-alpine

# Copy HAProxy configuration
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Optional: Copy SSL certificates if using HTTPS
# COPY certs/ /etc/haproxy/certs/

# HAProxy listens on multiple ports - we'll expose the main ones
# Port 80 for HTTP traffic
# Port 443 for HTTPS traffic (if using SSL)
# Port 8404 for statistics dashboard
EXPOSE 80 443 8404

# Validate configuration on build
RUN haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg

# HAProxy runs as non-root by default
# The base image provides the default CMD

This Dockerfile uses the official HAProxy Alpine image (smaller footprint), copies your configuration file, validates it during build, and exposes the necessary ports.
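Before pushing to GitHub, you can build and smoke-test the image locally. This is a quick sketch; the host port mappings are arbitrary choices for local testing:

# Build the image (the configuration is validated during the build)
docker build -t haproxy-local .

# Run it locally, mapping HTTP and the stats page to free host ports
docker run --rm -p 8080:80 -p 8404:8404 haproxy-local

# In another terminal, confirm the stats page responds
curl -I http://localhost:8404/stats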
3. Create a .dockerignore file
To optimize your Docker build, create a .dockerignore file:
.git
.gitignore
README.md
*.md
.env
.env.*
docker-compose.yml
node_modules

4. Advanced Configuration Example: SSL Termination
For production deployments with HTTPS, create an advanced configuration with SSL termination:
global
    log stdout format raw local0
    maxconn 10000
    user haproxy
    group haproxy
    daemon

    # SSL/TLS settings
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-options ssl-min-ver TLSv1.2
    tune.ssl.default-dh-param 2048

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option forwardfor
    option http-server-close
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout http-keep-alive 10s
    timeout http-request 10s
    retries 3
    option redispatch

    # Security headers
    http-response set-header X-Frame-Options SAMEORIGIN
    http-response set-header X-Content-Type-Options nosniff
    http-response set-header X-XSS-Protection "1; mode=block"

# Statistics page (internal only)
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats show-legends
    stats show-node
    stats auth admin:CHANGE_THIS_PASSWORD

# Frontend - HTTPS with SSL termination
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/server.pem

    # Forwarded headers (HTTP-to-HTTPS redirection is handled by the http-in frontend below)
    http-request set-header X-Forwarded-Proto https
    http-request set-header X-Forwarded-For %[src]

    # ACLs for different routing rules
    acl is_api path_beg /api
    acl is_admin path_beg /admin

    # Route based on path
    use_backend api_servers if is_api
    use_backend admin_servers if is_admin
    default_backend web_servers

# Frontend - HTTP (redirect to HTTPS)
frontend http-in
    bind *:80

    # Redirect all HTTP to HTTPS
    http-request redirect scheme https code 301 unless { ssl_fc }

# Backend - Web application servers
backend web_servers
    balance roundrobin
    option httpchk GET /health HTTP/1.1\r\nHost:\ example-app.klutch.sh
    http-check expect status 200

    cookie SERVERID insert indirect nocache

    server web1 10.0.1.10:8080 check cookie web1 inter 3000 rise 2 fall 3 maxconn 500
    server web2 10.0.1.11:8080 check cookie web2 inter 3000 rise 2 fall 3 maxconn 500
    server web3 10.0.1.12:8080 check cookie web3 inter 3000 rise 2 fall 3 maxconn 500

# Backend - API servers
backend api_servers
    balance leastconn
    option httpchk GET /api/health
    http-check expect status 200

    server api1 10.0.2.10:8080 check inter 2000 rise 2 fall 3
    server api2 10.0.2.11:8080 check inter 2000 rise 2 fall 3

# Backend - Admin servers
backend admin_servers
    balance roundrobin
    option httpchk GET /admin/health
    http-check expect status 200

    server admin1 10.0.3.10:8080 check inter 5000 rise 2 fall 3

Advanced Features:
- SSL/TLS termination with modern cipher suites
- HTTP to HTTPS redirection
- Path-based routing using ACLs
- Multiple backend pools (web, API, admin)
- Sticky sessions using cookies
- Different balancing algorithms (roundrobin, leastconn)
- Security headers added to responses
- Password-protected statistics page
5. TCP Mode Configuration Example
For TCP traffic (databases, custom protocols), create a TCP-mode configuration:
global
    log stdout format raw local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode tcp
    option tcplog
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    retries 3

# Statistics page
listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats
    stats refresh 30s

# TCP frontend for database connections
frontend database-in
    bind *:5432
    default_backend postgres_servers

# Backend - PostgreSQL servers
backend postgres_servers
    balance roundrobin
    option tcp-check

    server db1 postgres1.example.com:5432 check inter 3000 rise 2 fall 3
    server db2 postgres2.example.com:5432 check inter 3000 rise 2 fall 3
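Once this TCP configuration is deployed (covered in the next section), you can sanity-check reachability with a plain TCP probe. The hostname and port 8000 follow the Klutch.sh TCP-mode convention described below; adjust to whatever your app is assigned:

# Succeeds if HAProxy accepts TCP connections on the exposed port
nc -zv example-app.klutch.sh 8000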
Deploying to Klutch.sh
Follow these steps to deploy HAProxy on Klutch.sh:

Step 1: Prepare Your Repository
Push your HAProxy configuration files (Dockerfile, haproxy.cfg) to your GitHub repository.

Step 2: Create a New App
- Log in to the Klutch.sh dashboard
- Create a new project if you haven't already
- Create a new app within your project

Step 3: Configure Repository
- Select your GitHub repository containing the HAProxy Dockerfile
- Choose the branch you want to deploy (e.g., main or production)
- Klutch.sh will automatically detect the Dockerfile in your repository root

Step 4: Configure Traffic Type
- Select HTTP as the traffic type for web applications
- For TCP mode deployments (databases, custom protocols), select TCP instead
- When using TCP mode, you can connect to your deployed app on port 8000

Step 5: Set Internal Port
- Set the internal port to 80 for HTTP traffic (or the port your frontend binds to in haproxy.cfg)
- For HTTPS-only deployments, set the internal port to 443
- If you need to access the statistics page, note that it runs on port 8404 internally

Step 6: Configure Environment Variables
HAProxy itself doesn't require environment variables for basic operation, but you may want to set these for advanced use cases:
- BACKEND_SERVERS - If you're dynamically generating backend server lists
- SSL_CERT_PATH - Path to SSL certificates if mounted via volume
- MAX_CONNECTIONS - Override the maxconn setting
- STATS_PASSWORD - Password for the statistics page
- START_COMMAND - Override the start command (e.g., haproxy -f /usr/local/etc/haproxy/haproxy.cfg -db)
- BUILD_COMMAND - Custom build steps if needed
To customize HAProxy behavior through environment variables, you'll need to modify your Dockerfile to use environment variable substitution, as shown in the Environment Variables and Configuration section below. START_COMMAND and BUILD_COMMAND are Nixpacks customizations.

Step 7: Attach Persistent Storage (Optional)
If you need to persist logs, certificates, or configuration files across deployments, attach a persistent volume:
- Mount path: /var/log/haproxy (for persistent logs)
- Mount path: /etc/haproxy/certs (for SSL certificates)
- Size: Choose based on your needs (5-10 GB is typically sufficient for logs)
For SSL certificates, you can either:
- Include them in your Docker image during build (for static certificates)
- Mount them from a persistent volume (for dynamic certificate management)

Step 8: Choose Resources
- Select your preferred region for deployment
- Choose compute resources based on expected traffic:
  - Light traffic: 1 vCPU, 1 GB RAM
  - Moderate traffic: 2 vCPU, 2 GB RAM
  - High traffic: 4+ vCPU, 4+ GB RAM
- Set the number of instances (for high availability, deploy at least 2)

Step 9: Deploy
- Review all settings
- Click Create to deploy your HAProxy instance
- Klutch.sh will build your Docker image and start the container
- Monitor the build logs to ensure successful deployment

Step 10: Verify Deployment
Once deployed, your HAProxy instance will be available at a URL like example-app.klutch.sh. You can:
- Test the main service by accessing your configured frontend port
- Access the statistics dashboard at http://example-app.klutch.sh:8404/stats (if exposed)
- Verify health checks are passing for your backend servers
- Test load balancing by making multiple requests and checking which backend serves them (see the curl sketch below)
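One quick way to confirm balancing is to make repeated requests and watch which backend responds. This sketch assumes each backend adds an identifying response header (the X-Served-By header here is hypothetical); adjust it to whatever your backends actually expose:

# Hypothetical: each backend sets an X-Served-By header identifying itself
for i in $(seq 1 10); do
  curl -s -D - -o /dev/null https://example-app.klutch.sh/ | grep -i '^x-served-by'
done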
Environment Variables and Configuration
While HAProxy primarily uses the configuration file, you can use environment variables for dynamic configuration. Here’s how to modify your Dockerfile to support environment variables:
FROM haproxy:2.9-alpine

# The official image runs as the non-root haproxy user,
# so switch to root for package installation and setup
USER root

# Install envsubst for environment variable substitution
RUN apk add --no-cache gettext

# Default values (envsubst does not expand ${VAR:-default} syntax,
# so set defaults here and override them via Klutch.sh environment variables)
ENV MAX_CONNECTIONS=4096 \
    BACKEND_1=backend1.example.com \
    BACKEND_2=backend2.example.com \
    BACKEND_PORT=80

# Copy template configuration
COPY haproxy.cfg.template /usr/local/etc/haproxy/haproxy.cfg.template

# Create entrypoint script
RUN echo '#!/bin/sh' > /docker-entrypoint.sh && \
    echo 'envsubst < /usr/local/etc/haproxy/haproxy.cfg.template > /usr/local/etc/haproxy/haproxy.cfg' >> /docker-entrypoint.sh && \
    echo 'haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg' >> /docker-entrypoint.sh && \
    echo 'exec haproxy -f /usr/local/etc/haproxy/haproxy.cfg -db' >> /docker-entrypoint.sh && \
    chmod +x /docker-entrypoint.sh

# Allow the haproxy user to write the rendered configuration, then drop privileges again
RUN chown -R haproxy:haproxy /usr/local/etc/haproxy
USER haproxy

EXPOSE 80 443 8404

ENTRYPOINT ["/docker-entrypoint.sh"]

Then create haproxy.cfg.template with environment variable placeholders:
global
    log stdout format raw local0
    maxconn ${MAX_CONNECTIONS}
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    option httpchk GET /health
    server web1 ${BACKEND_1}:${BACKEND_PORT} check
    server web2 ${BACKEND_2}:${BACKEND_PORT} check
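To check the substitution locally before deploying, you can build the image and override the template variables at run time. This is a sketch; the hostnames and values are placeholders, and in Klutch.sh you would set the same variables as app environment variables instead of passing -e flags:

docker build -t haproxy-env .
docker run --rm \
  -e MAX_CONNECTIONS=8192 \
  -e BACKEND_1=app-1.internal \
  -e BACKEND_2=app-2.internal \
  -e BACKEND_PORT=8080 \
  -p 8080:80 haproxy-env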
Persistent Storage Configuration
For production deployments, you may want to persist certain data. Here are recommended mount paths:
Logs Directory
Mount a volume to persist HAProxy logs:
- Mount path: /var/log/haproxy
- Recommended size: 5-10 GB
- Purpose: Persist access logs and error logs across deployments
HAProxy sends logs to syslog targets rather than writing files directly, so to persist logs on the volume, point your haproxy.cfg at a syslog socket inside the mounted directory and run a syslog daemon (such as rsyslog) that listens on that socket and writes the log files:

global
    log /var/log/haproxy/haproxy.sock local0
    log /var/log/haproxy/haproxy.sock local0 err

SSL Certificates Directory
Mount a volume for SSL certificates that need to be updated without rebuilding:
- Mount path: /etc/haproxy/certs
- Recommended size: 1 GB
- Purpose: Store and update SSL/TLS certificates dynamically
Configuration Directory (Advanced)
For dynamic configuration reloading:
- Mount path: /usr/local/etc/haproxy
- Recommended size: 1 GB
- Purpose: Update configuration without rebuilding the image
Note: When attaching persistent volumes in Klutch.sh, you only need to specify the mount path and size. The volume name is managed automatically.
Load Balancing Algorithms
HAProxy supports several load balancing algorithms. Choose based on your use case:
- roundrobin: Distributes requests evenly across servers (default, good for most cases)
- leastconn: Routes to server with fewest active connections (good for long-lived connections)
- source: Routes based on client IP (ensures same client goes to same server)
- uri: Routes based on URL (good for caching)
- random: Random selection (simple, works well for stateless apps)
Example configuration for different algorithms:
backend web_roundrobin
    balance roundrobin
    server web1 backend1.example.com:80 check
    server web2 backend2.example.com:80 check

backend api_leastconn
    balance leastconn
    server api1 backend1.example.com:8080 check
    server api2 backend2.example.com:8080 check

backend cache_uri
    balance uri
    hash-type consistent
    server cache1 backend1.example.com:80 check
    server cache2 backend2.example.com:80 check

Health Checks and Monitoring
Configuring Health Checks
Health checks ensure HAProxy only routes traffic to healthy backend servers:
backend web_servers
    balance roundrobin

    # HTTP health check
    option httpchk GET /health HTTP/1.1\r\nHost:\ example-app.klutch.sh
    http-check expect status 200

    # Check every 3 seconds, mark healthy after 2 successful checks
    # Mark unhealthy after 3 failed checks
    server web1 backend1.example.com:80 check inter 3000 rise 2 fall 3
    server web2 backend2.example.com:80 check inter 3000 rise 2 fall 3

TCP Health Checks
For TCP services (databases, message queues):
backend database_servers
    mode tcp
    balance leastconn

    # TCP connection check
    option tcp-check
    tcp-check connect

    server db1 postgres1.example.com:5432 check inter 5000 rise 2 fall 3
    server db2 postgres2.example.com:5432 check inter 5000 rise 2 fall 3

Monitoring with Statistics Page
Access the built-in statistics page to monitor:
- Current connections and queue size
- Server status (UP, DOWN, MAINTENANCE)
- Request rates and error rates
- Session information
- Health check status
Configure the stats page in your haproxy.cfg:
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats show-legends
    stats show-node
    stats auth admin:secure-password-here
    stats admin if TRUE

Access at: http://example-app.klutch.sh:8404/stats
SSL/TLS Configuration
Generating SSL Certificates
For SSL termination, you need a certificate file in PEM format containing:
- Private key
- Certificate
- Certificate chain (if applicable)
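If you don't have a certificate yet, a self-signed one is enough for testing. This is a sketch; replace the CN with your app's hostname and use a CA-issued certificate in production:

# Generate a self-signed key and certificate valid for one year (testing only)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt -days 365 \
  -subj "/CN=example-app.klutch.sh"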
Combine them into a single file:
cat server.key server.crt intermediate.crt > server.pem

SSL Configuration in HAProxy
frontend https-in
    # Force TLS 1.2+ and a modern cipher suite directly on the bind line
    # (ssl-default-bind-options and ssl-default-bind-ciphers set the same policy globally)
    bind *:443 ssl crt /etc/haproxy/certs/server.pem ssl-min-ver TLSv1.2 ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384

    # Enable HSTS (HTTP Strict Transport Security)
    http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"

    # Forward protocol header
    http-request set-header X-Forwarded-Proto https

    default_backend web_servers

Automatic HTTP to HTTPS Redirect
frontend http-in
    bind *:80

    # Redirect all HTTP traffic to HTTPS
    redirect scheme https code 301 if !{ ssl_fc }
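You can confirm the redirect from the command line once deployed; the hostname here is a placeholder for whatever URL Klutch.sh assigns your app:

# Expect a 301 response with a Location header pointing at https://
curl -sI http://example-app.klutch.sh/ | grep -iE "^(HTTP|location)"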
Advanced Routing with ACLs
HAProxy’s Access Control Lists (ACLs) enable sophisticated routing decisions:
Path-Based Routing
frontend main
    bind *:80

    # Define ACLs
    acl is_api path_beg /api
    acl is_static path_beg /static /images /css /js
    acl is_admin path_beg /admin

    # Route based on ACLs
    use_backend api_servers if is_api
    use_backend static_servers if is_static
    use_backend admin_servers if is_admin
    default_backend web_servers

Header-Based Routing
frontend main
    bind *:80

    # Route based on headers
    acl is_mobile hdr_sub(User-Agent) -i mobile android iphone
    acl is_api hdr(X-API-Key) -m found

    use_backend mobile_backend if is_mobile
    use_backend api_backend if is_api
    default_backend web_servers

Host-Based Routing (Virtual Hosts)
frontend main
    bind *:80

    # Define ACLs for different domains
    acl host_api hdr(host) -i api.example.com
    acl host_admin hdr(host) -i admin.example.com
    acl host_www hdr(host) -i www.example.com example.com

    # Route to different backends
    use_backend api_servers if host_api
    use_backend admin_servers if host_admin
    use_backend web_servers if host_www

Rate Limiting and DDoS Protection
Protect your backend servers with HAProxy’s rate limiting features:
frontend http-in
    bind *:80

    # Track connection rates per IP
    # (tracking must be enabled for the stick-table entries to be created and updated)
    stick-table type ip size 100k expire 30s store conn_rate(3s)
    tcp-request connection track-sc0 src

    # Define ACL for rate limiting
    acl too_many_requests src_conn_rate ge 20

    # Deny excessive requests
    http-request deny deny_status 429 if too_many_requests

    default_backend web_servers

Connection Limiting
backend web_servers
    # Limit connections per server
    server web1 backend1.example.com:80 check maxconn 100
    server web2 backend2.example.com:80 check maxconn 100

WebSocket Support
Enable WebSocket proxying for real-time applications:
frontend http-in
    bind *:80

    # Detect WebSocket upgrade requests
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws.

    use_backend websocket_backend if is_websocket
    default_backend web_servers

backend websocket_backend
    balance leastconn
    option http-server-close

    # Increase timeouts for WebSocket connections
    timeout tunnel 3600s
    timeout server 3600s

    server ws1 backend1.example.com:8080 check
    server ws2 backend2.example.com:8080 check
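A quick way to exercise the upgrade path from the command line is to send the handshake headers manually. This is only a sketch: it proves HAProxy routes the upgrade request, and you should see a 101 Switching Protocols response only if your backend actually completes the WebSocket handshake:

curl -i -N http://example-app.klutch.sh/ \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ=="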
Logging and Debugging
Structured Logging
Configure detailed logging for troubleshooting:
global
    log stdout format raw local0 info
    log stdout format raw local0 err

defaults
    log global
    option httplog
    option logasap

    # Custom log format
    log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

Debug Mode
For troubleshooting, you can enable debug mode by modifying your start command:
Set the Nixpacks START_COMMAND environment variable in Klutch.sh:
START_COMMAND=haproxy -f /usr/local/etc/haproxy/haproxy.cfg -d

Note: Debug mode is verbose and should only be used temporarily for troubleshooting.
Security Best Practices
1. Use Strong SSL/TLS Configuration
global
    # Modern SSL configuration
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

2. Add Security Headers
http-response set-header X-Frame-Options SAMEORIGIN
http-response set-header X-Content-Type-Options nosniff
http-response set-header X-XSS-Protection "1; mode=block"
http-response set-header Referrer-Policy "strict-origin-when-cross-origin"
http-response set-header Permissions-Policy "geolocation=(), microphone=(), camera=()"

3. Protect Statistics Page
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats auth admin:YOUR_SECURE_PASSWORD_HERE
    stats hide-version

    # Restrict access by IP (optional)
    acl allowed_ips src 10.0.0.0/8
    http-request deny unless allowed_ips

4. Hide HAProxy Version
defaults
    option http-server-close
    http-response del-header Server
    http-response set-header Server "WebServer"

5. Implement Request Filtering
frontend http-in
    bind *:80

    # Block common attack patterns
    acl bad_user_agent hdr_sub(User-Agent) -i sqlmap nikto nmap
    acl bad_method method TRACE
    acl bad_path path_beg -i /phpmyadmin /admin /wp-admin

    http-request deny if bad_user_agent
    http-request deny if bad_method
    http-request deny deny_status 404 if bad_path

Performance Tuning
System-Level Optimizations
For high-traffic deployments, optimize HAProxy performance:
global
    # Increase connection limit
    maxconn 50000

    # Use nbthread for multi-core systems
    # Note: nbproc (processes) is deprecated in HAProxy 2.5+ in favor of nbthread (threads)
    # Threads provide better performance and resource sharing than processes
    nbthread 4

    # CPU affinity for better performance
    cpu-map auto:1/1-4 0-3

    # Tune buffers
    tune.bufsize 32768
    tune.maxrewrite 1024

    # SSL session cache
    tune.ssl.cachesize 100000
    tune.ssl.lifetime 300
    tune.ssl.default-dh-param 2048

Connection Pooling
Reuse backend connections for better performance:
backend web_servers
    balance leastconn
    option http-keep-alive
    option prefer-last-server

    # Connection pooling
    http-reuse aggressive

    server web1 backend1.example.com:80 check maxconn 1000
    server web2 backend2.example.com:80 check maxconn 1000

Timeout Tuning
Adjust timeouts based on your application requirements:
defaults
    # Short-lived requests
    timeout connect 5s
    timeout client 30s
    timeout server 30s

    # Keep-alive timeout
    timeout http-keep-alive 10s

    # Queue timeout
    timeout queue 30s

    # Tunnel timeout for WebSockets
    timeout tunnel 3600s

Troubleshooting Common Issues
1. HAProxy Not Starting
Symptom: Container exits immediately or won’t start
Solutions:
- Validate configuration syntax: haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg (see the Docker-based check below if HAProxy isn't installed locally)
- Check container logs for error messages
- Ensure ports aren't conflicting
- Verify file permissions on configuration files
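If HAProxy isn't installed on your workstation, you can run the same validation with the image used in the Dockerfile; adjust the image tag to match yours:

docker run --rm \
  -v "$PWD/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
  haproxy:2.9-alpine \
  haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg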
2. 503 Service Unavailable
Symptom: Clients receive 503 errors
Solutions:
- Check backend server health in statistics page
- Verify backend servers are accessible from HAProxy
- Check health check configuration
- Ensure backend servers are listening on correct ports
- Review firewall rules between HAProxy and backends
3. SSL Certificate Errors
Symptom: SSL handshake failures or certificate warnings
Solutions:
- Verify certificate file format (must be PEM with key, cert, and chain)
- Check file permissions on certificate files
- Ensure certificate matches the domain
- Verify certificate isn’t expired
- Test certificate with: openssl s_client -connect example-app.klutch.sh:443
4. High Latency
Symptom: Slow response times
Solutions:
- Check HAProxy statistics for queue size
- Increase maxconn limits if queues are building
- Optimize backend response times
- Enable connection pooling with http-reuse
- Consider increasing instance resources (CPU/RAM)
- Review timeout settings
5. Health Checks Failing
Symptom: Servers marked as DOWN despite being healthy
Solutions:
- Verify health check endpoint exists on backend
- Check health check interval and thresholds (inter, rise, fall)
- Ensure health check response matches expected status
- Review backend server logs for health check requests
- Test health check manually: curl http://backend1.example.com/health
6. Statistics Page Not Accessible
Symptom: Can’t access /stats endpoint
Solutions:
- Verify stats listen block is configured in haproxy.cfg
- Ensure port 8404 is accessible
- Check authentication credentials
- Verify internal port routing in Klutch.sh
- Check if statistics page is bound to correct interface (0.0.0.0 vs 127.0.0.1)
Monitoring and Metrics
Built-In Statistics
HAProxy provides comprehensive statistics through multiple interfaces:
- Web Dashboard: Access at http://example-app.klutch.sh:8404/stats
- Stats Socket: Unix socket for programmatic access
- CSV Export: Machine-readable statistics (see the curl example below)
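The statistics page can return CSV by appending ;csv to the stats URI, which is handy for scripts. This assumes the stats page is reachable and protected with the credentials you configured:

# Fetch machine-readable stats and show the first few rows
curl -s -u admin:YOUR_PASSWORD "http://example-app.klutch.sh:8404/stats;csv" | head -n 5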
Key Metrics to Monitor
Monitor these critical metrics for production deployments:
- Backend Server Status: UP/DOWN state of each backend
- Active Sessions: Current number of active connections
- Queue Size: Requests waiting for available backend
- Error Rate: 4xx and 5xx response rates
- Response Time: Average backend response time
- Throughput: Requests per second, bytes transferred
- Health Check Status: Success/failure rate of health checks
Exporting Metrics
For integration with monitoring tools, enable the statistics socket:
global
    stats socket /var/run/haproxy.sock mode 660 level admin
    stats timeout 30s

Query statistics via socket:

echo "show stat" | socat stdio /var/run/haproxy.sock

Integration with External Monitoring
For production monitoring, consider integrating with:
- Prometheus (using the HAProxy Exporter or HAProxy's built-in Prometheus endpoint; see the sketch after this list)
- Datadog
- New Relic
- Grafana
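Recent HAProxy builds can expose Prometheus metrics natively, without a separate exporter, if they were compiled with the Prometheus exporter service (check the output of haproxy -vv). A minimal sketch, assuming your build includes it:

frontend prometheus
    bind *:8405
    mode http
    http-request use-service prometheus-exporter if { path /metrics }
    no log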
Scaling and High Availability
Horizontal Scaling
Deploy multiple HAProxy instances for high availability:
- In Klutch.sh, set the number of instances to 2 or more
- Place a DNS load balancer or another HAProxy in front
- Use health checks between layers
- Configure session persistence if needed
Vertical Scaling
For high-traffic scenarios, increase resources:
- Light: 1-2 vCPU, 1-2 GB RAM (up to 10,000 connections)
- Medium: 2-4 vCPU, 4 GB RAM (up to 50,000 connections)
- High: 4-8 vCPU, 8-16 GB RAM (100,000+ connections)
Zero-Downtime Configuration Reload
HAProxy supports graceful configuration reloads:
# Validate new configuration
haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg.new

# Reload with zero downtime
haproxy -f /usr/local/etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)

For dynamic configuration updates in Klutch.sh, consider:
- Using persistent volumes for configuration
- Implementing a sidecar container for configuration management
- Triggering redeployments for major config changes
Example: Complete Production Setup
Here’s a comprehensive example for a production HAProxy deployment:
Directory Structure:
haproxy-production/
├── Dockerfile
├── haproxy.cfg
├── .dockerignore
└── README.md

Dockerfile:
FROM haproxy:2.9-alpine

# The official image runs as the non-root haproxy user,
# so switch to root for package installation and directory setup
USER root

# Install runtime dependencies
RUN apk add --no-cache ca-certificates

# Copy configuration
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Validate configuration
RUN haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg

# Create directories for logs and stats socket
RUN mkdir -p /var/log/haproxy /var/run && \
    chown -R haproxy:haproxy /var/log/haproxy /var/run

EXPOSE 80 443 8404

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
    CMD haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg || exit 1

# Run as haproxy user
USER haproxy

CMD ["haproxy", "-f", "/usr/local/etc/haproxy/haproxy.cfg", "-db"]

haproxy.cfg:
global
    log stdout format raw local0
    maxconn 10000
    user haproxy
    group haproxy

    # SSL configuration
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
    tune.ssl.default-dh-param 2048
    tune.ssl.cachesize 100000

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.1
    timeout connect 5s
    timeout client 50s
    timeout server 50s
    timeout http-keep-alive 10s
    retries 3

    # Security headers
    http-response set-header X-Frame-Options SAMEORIGIN
    http-response set-header X-Content-Type-Options nosniff
    http-response set-header X-XSS-Protection "1; mode=block"

# Statistics
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 30s
    stats auth admin:CHANGE_THIS_PASSWORD
    stats hide-version

# Frontend HTTPS
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/server.pem

    http-request set-header X-Forwarded-Proto https
    http-request set-header X-Real-IP %[src]

    # Rate limiting (tracking must be enabled for the stick-table to be populated)
    stick-table type ip size 100k expire 30s store conn_rate(3s)
    tcp-request connection track-sc0 src
    acl abuse src_conn_rate ge 30
    http-request deny deny_status 429 if abuse

    # Routing
    acl is_api path_beg /api
    use_backend api_servers if is_api
    default_backend web_servers

# Frontend HTTP (redirect to HTTPS)
frontend http-in
    bind *:80
    redirect scheme https code 301 if !{ ssl_fc }

# Backend Web Servers
backend web_servers
    balance roundrobin
    cookie SERVERID insert indirect nocache
    option httpchk GET /health HTTP/1.1\r\nHost:\ example-app.klutch.sh
    http-check expect status 200

    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }

    server web1 10.0.1.10:8080 check cookie web1 inter 3000 rise 2 fall 3 maxconn 500
    server web2 10.0.1.11:8080 check cookie web2 inter 3000 rise 2 fall 3 maxconn 500
    server web3 10.0.1.12:8080 check cookie web3 inter 3000 rise 2 fall 3 maxconn 500

# Backend API Servers
backend api_servers
    balance leastconn
    option httpchk GET /api/health
    http-check expect status 200

    compression algo gzip
    compression type text/html text/plain text/css text/javascript application/json

    server api1 10.0.2.10:8080 check inter 2000 rise 2 fall 3 maxconn 1000
    server api2 10.0.2.11:8080 check inter 2000 rise 2 fall 3 maxconn 1000

Deployment Steps:
- Push the code to GitHub
- In Klutch.sh, create a new app with your repository
- Select HTTP traffic type
- Set internal port to 443 (for HTTPS frontend)
- Attach a volume to /etc/haproxy/certs for SSL certificates
- Attach a volume to /var/log/haproxy for persistent logs
- Set environment variables (if using dynamic config)
- Choose 2+ instances for high availability
- Deploy and monitor via statistics dashboard
Resources and Further Reading
- HAProxy Official Documentation
- HAProxy Configuration Tutorials
- Klutch.sh Apps Documentation
- Klutch.sh Volumes Documentation
- Klutch.sh Networking Documentation
- HAProxy Configuration Manual
Conclusion
Deploying HAProxy on Klutch.sh provides a powerful, flexible solution for load balancing and proxying traffic to your applications. With support for HTTP, HTTPS, TCP, and advanced routing capabilities, HAProxy can handle a wide variety of use cases from simple reverse proxying to complex multi-tier load balancing architectures.
Key takeaways:
- Klutch.sh automatically detects and deploys your Dockerfile
- Configure HAProxy through the haproxy.cfg file
- Use persistent volumes for logs and SSL certificates
- Monitor your deployment through the built-in statistics page
- Implement health checks to ensure high availability
- Scale horizontally by deploying multiple instances
- Secure your deployment with SSL/TLS, rate limiting, and security headers
For additional support and questions, refer to the Klutch.sh documentation or the HAProxy community.