Deploying NGINX
Introduction
NGINX is one of the world’s most popular web servers, powering a significant portion of the internet’s busiest websites. Known for its high performance, stability, and low resource consumption, NGINX excels at serving static content, acting as a reverse proxy, and load balancing across backend services.
Originally created to solve the C10K problem (handling 10,000+ concurrent connections), NGINX uses an asynchronous, event-driven architecture that efficiently handles massive numbers of simultaneous connections with minimal memory footprint. This makes it ideal for everything from simple static sites to complex microservices architectures.
Key highlights of NGINX:
- High Performance: Efficiently handles thousands of concurrent connections
- Static File Serving: Optimized delivery of static assets (HTML, CSS, JS, images)
- Reverse Proxy: Route requests to backend applications
- Load Balancing: Distribute traffic across multiple backend servers
- SSL/TLS Termination: Handle HTTPS encryption and decryption
- HTTP/2 and HTTP/3: Modern protocol support
- Caching: Built-in caching for improved performance
- URL Rewriting: Flexible request manipulation
- Gzip Compression: Automatic response compression
- Active Community: Extensive documentation and community support
This guide walks through deploying NGINX on Klutch.sh using Docker.
Why Deploy NGINX on Klutch.sh
Deploying NGINX on Klutch.sh provides several advantages:
Simplified Deployment: Push your configuration to GitHub and Klutch.sh handles the containerization and deployment automatically.
Static Site Hosting: Serve static websites and single-page applications with excellent performance.
HTTPS by Default: Klutch.sh provides automatic SSL certificates, working alongside NGINX’s web serving capabilities.
Persistent Configuration: Store your NGINX configuration and static files in persistent volumes.
Scalable Resources: Adjust CPU and memory based on traffic requirements.
Custom Domains: Use your own domain with automatic HTTPS.
Prerequisites
Before deploying NGINX on Klutch.sh, ensure you have:
- A Klutch.sh account
- A GitHub account with a repository for your configuration
- Your static website files or reverse proxy configuration
- Basic familiarity with NGINX configuration syntax
Understanding NGINX Architecture
NGINX uses a master-worker process model:
Master Process: Reads configuration, binds ports, and manages worker processes.
Worker Processes: Handle actual request processing. Each worker can handle thousands of connections.
Configuration: Declarative configuration files define server behavior, locations, and upstream backends.
Modules: Extend functionality for specific use cases (SSL, gzip, headers, etc.).
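These pieces map directly onto the top level of an nginx.conf. A minimal sketch (the values are illustrative defaults, not recommendations):

```nginx
# Read by the master process; 'auto' starts one worker per CPU core
worker_processes auto;

events {
    # Each worker multiplexes up to this many concurrent connections
    worker_connections 1024;
}

http {
    # Module directives extend behavior, e.g. enabling gzip compression
    gzip on;
}
```

With these settings, a 4-core machine runs one master plus four workers, each able to hold roughly 1024 connections at once.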
Preparing Your Repository
Create a GitHub repository for your NGINX deployment.
Repository Structure for Static Site
```
nginx-deploy/
├── Dockerfile
├── nginx.conf
├── .dockerignore
└── html/
    ├── index.html
    ├── css/
    │   └── style.css
    └── js/
        └── app.js
```

Creating the Dockerfile (Static Site)

```dockerfile
FROM nginx:alpine

# Remove default configuration
RUN rm /etc/nginx/conf.d/default.conf

# Copy custom configuration
COPY nginx.conf /etc/nginx/nginx.conf

# Copy static files
COPY html/ /usr/share/nginx/html/

# Expose port 80
EXPOSE 80

# Health check
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget --quiet --tries=1 --spider http://localhost/ || exit 1
```

Basic nginx.conf for Static Site
```nginx
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    # Gzip compression
    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml;
    gzip_min_length 1000;

    server {
        listen 80;
        server_name _;

        root /usr/share/nginx/html;
        index index.html;

        # Cache static assets
        location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2)$ {
            expires 30d;
            add_header Cache-Control "public, immutable";
        }

        # SPA support - serve index.html for non-file routes
        location / {
            try_files $uri $uri/ /index.html;
        }

        # Health check endpoint
        location /health {
            return 200 'healthy';
            add_header Content-Type text/plain;
        }
    }
}
```

Creating the .dockerignore File
```
.git
.github
*.md
LICENSE
.gitignore
.DS_Store
node_modules/
```

Deploying NGINX on Klutch.sh
The deployment takes a few steps. When you reach the traffic configuration step, you will:
- Select HTTP as the traffic type
- Set the internal port to 80
Prepare Your Static Files
Add your website files to the html/ directory in your repository.
Push Your Repository to GitHub
Initialize and push your repository with the Dockerfile, configuration, and static files.
Create a New Project on Klutch.sh
Navigate to the Klutch.sh dashboard and create a project named “nginx” or your site name.
Create a New App
Create a new app and connect your GitHub repository.
Configure HTTP Traffic
NGINX serves web content over plain HTTP inside the container, so select HTTP as the traffic type and set the internal port to 80.
Attach Persistent Volumes (Optional)
For dynamic content or logs:
| Mount Path | Recommended Size | Purpose |
|---|---|---|
| /usr/share/nginx/html | 5 GB | Static files that change at runtime |
| /var/log/nginx | 1 GB | Access and error logs |
Deploy Your Application
Click Deploy to build and start your NGINX instance.
Access Your Site
Visit https://your-app-name.klutch.sh to view your site.
Configuration Examples
Reverse Proxy Configuration
Route requests to backend services:
```nginx
http {
    upstream backend {
        server backend-service:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
```

Load Balancing
Distribute traffic across multiple backends:
```nginx
upstream backend_pool {
    least_conn;
    server backend1:8080;
    server backend2:8080;
    server backend3:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend_pool;
    }
}
```

API and Static Split
Serve static files and proxy API requests:
```nginx
server {
    listen 80;

    # Static files
    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
    }

    # API proxy
    location /api/ {
        proxy_pass http://api-backend:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Security Headers
Add security headers to responses:
```nginx
server {
    listen 80;

    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Content-Security-Policy "default-src 'self'" always;

    location / {
        root /usr/share/nginx/html;
    }
}
```

Caching Configuration
Configure response caching:
```nginx
http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m
                     max_size=1g inactive=60m;

    server {
        listen 80;

        location / {
            proxy_cache cache;
            proxy_cache_valid 200 60m;
            proxy_cache_valid 404 1m;
            proxy_pass http://backend;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
```

Performance Tuning
Worker Optimization
Match workers to CPU cores:
```nginx
worker_processes auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
    multi_accept on;
    use epoll;
}
```

Buffer Settings
Optimize proxy buffers:
```nginx
http {
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    client_body_buffer_size 10k;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;
}
```

Compression
Enable efficient compression:
```nginx
http {
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml application/json application/javascript
               application/xml application/xml+rss image/svg+xml;
}
```

Logging
Access Log Format
Custom log formats:
```nginx
log_format detailed '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';

access_log /var/log/nginx/access.log detailed;
```

Conditional Logging
Log only errors:
```nginx
map $status $loggable {
    ~^[23] 0;
    default 1;
}

access_log /var/log/nginx/error-access.log combined if=$loggable;
```

Production Best Practices
Security
- Keep NGINX updated for security patches
- Limit request sizes to prevent DoS
- Hide version information with `server_tokens off`
- Use security headers
- Implement rate limiting
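Rate limiting, for instance, can be sketched with the stock `limit_req` module (the zone name and rate below are illustrative):

```nginx
http {
    # Track clients by IP in a 10 MB shared zone, allowing 10 requests/second
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;
        server_tokens off;  # hide version information

        location / {
            # Permit short bursts; reject excess requests with 429
            limit_req zone=perip burst=20 nodelay;
            limit_req_status 429;
        }
    }
}
```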
Performance
- Enable gzip compression
- Configure browser caching headers
- Use sendfile for static content
- Tune worker processes and connections
Monitoring
- Monitor access and error logs
- Track response times
- Set up health checks
- Alert on error rate increases
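For basic metrics, the stock `stub_status` module exposes live connection counters. A sketch (the `/nginx_status` path is an arbitrary choice):

```nginx
server {
    listen 80;

    # Active connections, accepts, and requests handled,
    # served by ngx_http_stub_status_module
    location /nginx_status {
        stub_status;
        allow 127.0.0.1;  # restrict to local probes
        deny all;
    }
}
```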
Troubleshooting
502 Bad Gateway
- Check upstream server is running
- Verify upstream address and port
- Review proxy timeout settings
- Check backend logs
504 Gateway Timeout
- Increase `proxy_read_timeout`
- Check backend performance
- Verify network connectivity
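The relevant timeouts are set on the proxied location. A sketch with illustrative values:

```nginx
location / {
    proxy_pass http://backend;
    proxy_connect_timeout 5s;   # time to establish the TCP connection
    proxy_send_timeout 60s;     # max gap between writes to the backend
    proxy_read_timeout 120s;    # max gap between reads; governs most 504s
}
```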
Configuration Errors
Test configuration before deployment:
```shell
nginx -t
```

Permission Denied

- Check file ownership
- Verify worker process user
- Review directory permissions
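One common fix is aligning file ownership with the worker user at image build time. A Dockerfile sketch, assuming the repository layout shown earlier:

```dockerfile
# Give the nginx user (the default worker user in the official image)
# ownership of the content it serves
COPY --chown=nginx:nginx html/ /usr/share/nginx/html/
```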
Additional Resources
- Official NGINX Documentation
- NGINX Wiki
- NGINX GitHub Mirror
- Klutch.sh Persistent Volumes
- Klutch.sh Deployments
Conclusion
Deploying NGINX on Klutch.sh gives you access to one of the most powerful and flexible web servers available. Whether you’re serving a simple static site, reverse proxying to backend applications, or building a complex load-balanced infrastructure, NGINX handles it all with exceptional performance.
The combination of NGINX’s battle-tested reliability and Klutch.sh’s simplified deployment creates an ideal environment for web serving and reverse proxy needs.