Deploying Anubis
Introduction
Anubis is an open-source Web AI Firewall Utility that “weighs the soul of your connection” using proof-of-work challenges to protect upstream resources from scraper bots and AI crawlers. Named after the Egyptian god who judged souls, Anubis provides a lightweight yet effective defense mechanism for websites that want to protect their content from automated scraping without relying on third-party services like Cloudflare.
Deploying Anubis on Klutch.sh gives you a scalable, production-ready bot protection layer with automatic HTTPS, persistent storage for challenge data, and simple environment configuration. Anubis sits between your reverse proxy and your application, challenging suspicious requests with JavaScript-based proof-of-work while allowing legitimate traffic through.
This guide walks you through deploying Anubis using the official Docker image on Klutch.sh, configuring bot policies, setting up persistent storage with bbolt, and best practices for protecting your web applications from AI scrapers.
What You’ll Learn
- How to deploy Anubis with a Dockerfile on Klutch.sh
- Setting up persistent storage for challenge data using bbolt
- Configuring bot policies to allow, deny, or challenge requests
- Setting up environment variables for your target application
- Best practices for production deployment and reverse proxy integration
Prerequisites
Before you begin, ensure you have:
- A Klutch.sh account
- A GitHub repository (can be a new empty repo)
- Basic familiarity with Docker and reverse proxy concepts
- A target application/service that Anubis will protect (can be hosted anywhere)
Understanding Anubis Architecture
Anubis is designed as a reverse proxy that sits in front of your application:
- Challenge Engine: Issues proof-of-work challenges to browser-like clients using JavaScript
- Policy Engine: Evaluates requests against configurable rules to ALLOW, DENY, or CHALLENGE
- Storage Backend: Stores challenge data (supports memory, bbolt, Valkey/Redis, or S3)
- Metrics Server: Exposes Prometheus metrics on a separate port for monitoring
The application listens on port 8923 by default for HTTP traffic and port 9090 for metrics. Anubis forwards validated requests to your configured TARGET service.
```
[Client] → [Anubis :8923] → [Your Application]
                 ↓
         [Challenge Page]
```

Step 1: Prepare Your GitHub Repository
- Create a new GitHub repository for your Anubis deployment.

- Create a Dockerfile in the root of your repository:

```dockerfile
FROM ghcr.io/techarohq/anubis:latest

# Create data directory for bbolt storage
RUN mkdir -p /data && chown 1000:1000 /data

# Expose Anubis ports
EXPOSE 8923
EXPOSE 9090

# Default command (already set in base image)
CMD ["anubis"]
```

- Create a bot policy file named policies.yaml for custom bot rules:

```yaml
# Bot policy configuration for Anubis
# See: https://anubis.techaro.lol/docs/admin/policies
bots:
  # Allow well-known paths without challenge
  - name: well-known
    path_regex: ^/.well-known/.*$
    action: ALLOW

  # Allow favicon requests
  - name: favicon
    path_regex: ^/favicon.ico$
    action: ALLOW

  # Allow robots.txt
  - name: robots-txt
    path_regex: ^/robots.txt$
    action: ALLOW

  # Allow health check endpoints
  - name: health-checks
    path_regex: ^/(health|healthz|ready|readyz)$
    action: ALLOW

  # Deny known AI scrapers
  - name: amazonbot
    user_agent_regex: Amazonbot
    action: DENY

  - name: gptbot
    user_agent_regex: GPTBot
    action: DENY

  - name: claudebot
    user_agent_regex: ClaudeBot|Claude-Web
    action: DENY

  - name: bytespider
    user_agent_regex: Bytespider
    action: DENY

  - name: ccbot
    user_agent_regex: CCBot
    action: DENY

  # Allow Internet Archive
  - name: internet-archive
    user_agent_regex: archive\.org_bot|ia_archiver
    action: ALLOW

  # Challenge browser-like clients
  - name: generic-browser
    user_agent_regex: Mozilla
    action: CHALLENGE

# Storage configuration for persistent challenge data
store:
  backend: bbolt
  parameters:
    path: /data/anubis.bdb

# Logging configuration
logging:
  sink: stdio
  level: INFO
```

- Update the Dockerfile to include the policy file:

```dockerfile
FROM ghcr.io/techarohq/anubis:latest

# Create data directory for bbolt storage
RUN mkdir -p /data && chown 1000:1000 /data

# Copy policy configuration
COPY policies.yaml /etc/anubis/policies.yaml

# Expose Anubis ports
EXPOSE 8923
EXPOSE 9090

# Start Anubis with the policy file
CMD ["anubis"]
```

- (Optional) Create a .dockerignore file:

```
.git
.github
*.md
.env
.env.local
```

- Commit and push your changes to GitHub:

```bash
git add .
git commit -m "Add Anubis deployment configuration for Klutch.sh"
git push origin main
```

Step 2: Create Your App on Klutch.sh
- Log in to Klutch.sh and navigate to the dashboard.
- Create a new project (if you don’t have one already) by clicking “New Project” and providing a project name.
- Create a new app within your project by clicking “New App”.
- Connect your GitHub repository by selecting it from the list of available repositories.
- Configure the build settings:
  - Klutch.sh will automatically detect the Dockerfile in your repository root
  - The build will use this Dockerfile automatically
- Set the internal port to 8923 (Anubis’s default port). This is the port that traffic will be routed to within the container.
- Select HTTP traffic for the app’s traffic type.
Step 3: Configure Persistent Storage
Anubis stores challenge data to track which clients have successfully completed proof-of-work challenges. Using persistent storage ensures that challenge passes survive container restarts.
- In your app settings, navigate to the “Volumes” section.
- Add a persistent volume with the following configuration:
  - Mount Path: /data
  - Size: 1 GB (sufficient for most deployments)
- Save the volume configuration.

The /data directory contains:

- The bbolt database file (anubis.bdb) storing challenge passes
- Temporary challenge data
For more details on managing persistent storage, see the Volumes Guide.
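Before deploying, you can optionally verify the storage wiring locally. The sketch below assumes Docker is installed and the Dockerfile and policies.yaml from Step 1 are in the current directory; since that policy file points the bbolt store at /data/anubis.bdb, the database file should appear inside the mounted volume shortly after startup.

```bash
# Build the image from Step 1 and run it against a throwaway named volume.
docker build -t anubis-local .
docker volume create anubis-data
docker run -d --name anubis-test \
  -e TARGET=http://example.com \
  -e POLICY_FNAME=/etc/anubis/policies.yaml \
  -e ED25519_PRIVATE_KEY_HEX=$(openssl rand -hex 32) \
  -v anubis-data:/data \
  -p 8923:8923 \
  anubis-local

# Inspect the volume from a separate container; expect anubis.bdb once the store initializes.
docker run --rm -v anubis-data:/data alpine ls -l /data

# Clean up
docker rm -f anubis-test && docker volume rm anubis-data
```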
Step 4: Configure Environment Variables
Configure the necessary environment variables to connect Anubis to your target application.
- In your app settings, navigate to the “Environment Variables” section.
- Add the required variables:

```bash
# Target application URL - the service Anubis protects
TARGET=https://your-backend-app.klutch.sh

# Policy file location
POLICY_FNAME=/etc/anubis/policies.yaml

# Cookie domain (your deployed domain without subdomain)
COOKIE_DOMAIN=klutch.sh

# Challenge difficulty (4 = default, higher = harder)
DIFFICULTY=4

# Ed25519 private key for signing JWTs (generate with: openssl rand -hex 32)
ED25519_PRIVATE_KEY_HEX=your-64-character-hex-key-here
```

- Configure optional settings:

```bash
# Show webmaster contact on error pages
WEBMASTER_EMAIL=admin@example.com

# Cookie expiration time (default: 168h = 7 days)
COOKIE_EXPIRATION_TIME=168h

# Log level (DEBUG, INFO, WARN, ERROR)
SLOG_LEVEL=INFO

# Serve a default robots.txt that blocks AI scrapers
SERVE_ROBOTS_TXT=true
```

- Generate an Ed25519 private key (required for persistent storage). Run this command locally to generate a key:

```bash
openssl rand -hex 32
```

Copy the output (64 hexadecimal characters) and set it as ED25519_PRIVATE_KEY_HEX (a quick length check is shown at the end of this step).

- Mark sensitive values as secrets in the Klutch.sh UI to prevent them from appearing in logs.

Important Security Notes:

- The ED25519_PRIVATE_KEY_HEX must be set when using persistent storage backends like bbolt
- Never commit the private key to your repository
- Use the same key across all Anubis instances on the same domain
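If you want to double-check the key before pasting it into the Klutch.sh UI, a quick check with standard shell tools (a sketch) confirms it is exactly 64 hexadecimal characters:

```bash
# Generate a key and verify its length before storing it as a secret.
KEY=$(openssl rand -hex 32)
printf '%s' "$KEY" | wc -c   # should print 64
```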
Step 5: Deploy Your Application
- Review your configuration to ensure all settings are correct:
  - Dockerfile is detected
  - Internal port is set to 8923
  - Persistent volume is mounted to /data
  - Environment variables are configured
  - Traffic type is set to HTTP
- Click “Deploy” to start the build and deployment process.
- Monitor the build logs to ensure the deployment completes successfully. The build typically takes 1-2 minutes.
- Wait for the deployment to complete. Once done, you’ll see your app URL (e.g., https://example-app.klutch.sh).
Step 6: Test Your Deployment
- Access your Anubis instance by navigating to your app URL (e.g., https://example-app.klutch.sh).
- You should see the Anubis challenge page with a message like “Making sure you’re not a bot!” and a loading animation.
- Wait for the challenge to complete. Your browser will automatically solve the proof-of-work challenge and redirect you to the target application.
- Verify the challenge cookie is set by checking your browser’s developer tools (Application → Cookies).
- Test bot protection by using curl without a browser user agent:

```bash
# This should return an error page (looks like success to bots)
curl -A "Amazonbot/1.0" https://example-app.klutch.sh/

# This should trigger a challenge (returns JavaScript)
curl -A "Mozilla/5.0" https://example-app.klutch.sh/
```
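If you prefer comparing status codes to reading HTML, the variation below uses standard curl flags; the URL is a placeholder and the exact codes you see depend on your policy and Anubis version.

```bash
# Print only the HTTP status code for each request.
curl -s -o /dev/null -w "%{http_code}\n" -A "Amazonbot/1.0" https://example-app.klutch.sh/
curl -s -o /dev/null -w "%{http_code}\n" -A "Mozilla/5.0" https://example-app.klutch.sh/
```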
Getting Started: Sample Policy Configurations

Basic Protection Policy
A minimal policy that challenges browsers and denies known scrapers:
```yaml
bots:
  - name: well-known
    path_regex: ^/.well-known/.*$
    action: ALLOW

  - name: favicon
    path_regex: ^/favicon.ico$
    action: ALLOW

  - name: robots-txt
    path_regex: ^/robots.txt$
    action: ALLOW

  - name: ai-scrapers
    user_agent_regex: GPTBot|ClaudeBot|Amazonbot|Bytespider|CCBot
    action: DENY

  - name: generic-browser
    user_agent_regex: Mozilla
    action: CHALLENGE

store:
  backend: bbolt
  parameters:
    path: /data/anubis.bdb
```

API Protection Policy
A policy for protecting API endpoints with lighter challenges:
```yaml
bots:
  # Allow authenticated API requests
  - name: api-with-auth
    path_regex: ^/api/.*$
    headers_regex:
      Authorization: Bearer .*
    action: ALLOW

  # Challenge unauthenticated API requests
  - name: api-requests
    path_regex: ^/api/.*$
    action: CHALLENGE
    challenge:
      difficulty: 2  # Lighter challenge for APIs

  # Deny known scrapers
  - name: ai-scrapers
    user_agent_regex: GPTBot|ClaudeBot|Amazonbot
    action: DENY

  # Challenge browsers
  - name: generic-browser
    user_agent_regex: Mozilla
    action: CHALLENGE

store:
  backend: bbolt
  parameters:
    path: /data/anubis.bdb
```

Allowlist Good Bots Policy
A policy that explicitly allows search engines and archive services:
```yaml
bots:
  # Allow search engines
  - name: googlebot
    user_agent_regex: Googlebot
    action: ALLOW

  - name: bingbot
    user_agent_regex: bingbot
    action: ALLOW

  - name: duckduckgo
    user_agent_regex: DuckDuckBot
    action: ALLOW

  # Allow Internet Archive
  - name: internet-archive
    user_agent_regex: archive\.org_bot|ia_archiver
    action: ALLOW

  # Deny AI scrapers
  - name: ai-scrapers
    user_agent_regex: GPTBot|ClaudeBot|Amazonbot|Bytespider|CCBot|anthropic-ai
    action: DENY

  # Challenge everything else with Mozilla
  - name: generic-browser
    user_agent_regex: Mozilla
    action: CHALLENGE

store:
  backend: bbolt
  parameters:
    path: /data/anubis.bdb
```

Advanced Configuration
Using Valkey/Redis for Multi-Instance Deployments
If you need to run multiple Anubis instances (for high availability), use Valkey or Redis as the storage backend:
First, deploy a Redis instance on Klutch.sh, then update your policy:
```yaml
store:
  backend: valkey
  parameters:
    url: "redis://redis-app.klutch.sh:8000/0"
```

Update your environment variables:

```bash
# Remove bbolt-related settings and add Redis URL
# The policy file handles the Redis configuration
```
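Before switching backends, it can help to confirm that the Valkey/Redis instance is reachable. A hypothetical check, assuming redis-cli is available where you run it and using the placeholder URL from the example above:

```bash
# Expect "PONG" if the instance is reachable and accepting connections.
redis-cli -u redis://redis-app.klutch.sh:8000/0 ping
```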
Custom Challenge Difficulty

Adjust challenge difficulty based on request characteristics:
```yaml
bots:
  # Harder challenges for suspicious user agents
  - name: suspicious-bot
    user_agent_regex: (?i:bot|crawler|spider)
    action: CHALLENGE
    challenge:
      difficulty: 8    # Harder than default
      algorithm: slow  # Intentionally slower

  # Standard challenge for browsers
  - name: generic-browser
    user_agent_regex: Mozilla
    action: CHALLENGE
    challenge:
      difficulty: 4
      algorithm: fast
```

IP-Based Rules
Allow or deny requests based on IP ranges or headers:
```yaml
bots:
  # Allow internal network
  - name: internal-network
    action: ALLOW
    remote_addresses:
      - 10.0.0.0/8
      - 172.16.0.0/12
      - 192.168.0.0/16

  # Deny requests from Cloudflare Workers
  - name: cloudflare-workers
    headers_regex:
      CF-Worker: .*
    action: DENY
```

Open Graph Passthrough
Allow social media previews without challenges:
```yaml
# Enable Open Graph tag passthrough
openGraph:
  enabled: true
  expiry: 24h
  cacheConsiderHost: false

bots:
  # Allow social media crawlers for previews
  - name: social-media
    user_agent_regex: facebookexternalhit|Twitterbot|LinkedInBot|Slackbot
    action: ALLOW
```

Production Best Practices
Security
- Generate a unique Ed25519 key: Use openssl rand -hex 32 and keep it secret
- Use HTTPS only: Klutch.sh provides automatic HTTPS for all apps
- Set appropriate cookie domain: Match your actual domain
- Rotate keys periodically: Update ED25519_PRIVATE_KEY_HEX if compromised
- Monitor deny rates: Watch for false positives affecting legitimate users
Performance
- Use bbolt for single instances: Provides fast, persistent storage
- Use Valkey/Redis for multiple instances: Ensures challenge data is shared
- Adjust difficulty appropriately: Higher difficulty = more CPU for clients (see the rough estimate after this list)
- Monitor memory usage: Anubis is lightweight (~128Mi RAM typically)
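As a rough rule of thumb (an approximation of Anubis's hash-prefix proof-of-work, not its exact implementation), a difficulty of N asks the client to find a hash whose hex form starts with N zeroes, which takes about 16^N attempts on average:

```bash
# Rough expected work per challenge at different difficulty settings.
for n in 2 4 6 8; do
  echo "difficulty $n ~= 16^$n = $((16**n)) expected hash attempts"
done
```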
Monitoring
- Enable Prometheus metrics: Available on port 9090 (a quick check is shown after this list)
- Monitor challenge pass rates: Track anubis_challenges_passed_total
- Watch for errors: Monitor anubis_errors_total for issues
- Log analysis: Parse JSON logs for insights
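To spot-check the metric names listed above, you can scrape the endpoint directly; this sketch assumes the metrics port is reachable from where you run it (for example, when running Anubis locally as in the Docker Compose section later in this guide).

```bash
# List all Anubis-specific metrics currently exposed.
curl -s http://localhost:9090/metrics | grep '^anubis_'
```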
Cookie Configuration
- Set COOKIE_DOMAIN correctly: Use your base domain (e.g., klutch.sh for app.klutch.sh); an example is shown after this list
- Use COOKIE_DYNAMIC_DOMAIN for multi-subdomain setups
- Adjust COOKIE_EXPIRATION_TIME based on your security needs
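As an illustration (values are examples, not requirements), a cookie-related environment block might look like this:

```bash
COOKIE_DOMAIN=example.com      # base domain shared by Anubis and the protected app
COOKIE_EXPIRATION_TIME=72h     # tighter window than the 168h default
# COOKIE_DYNAMIC_DOMAIN=true   # for multi-subdomain setups; see the Anubis docs
```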
Troubleshooting
Challenge Loop
Issue: Users are stuck in an endless challenge loop
Solutions:
- Verify ED25519_PRIVATE_KEY_HEX is set correctly
- Check that cookies are enabled in the browser
- Ensure COOKIE_DOMAIN matches your deployment domain
- Verify persistent storage is mounted correctly
Target Not Reachable
Issue: Anubis can’t connect to the target application
Solutions:
- Verify the TARGET URL is correct and accessible (see the quick check after this list)
- Check that the target application is running
- Ensure network connectivity between Anubis and target
- Review logs for connection errors
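A quick way to rule out the upstream itself is to hit the TARGET directly; the URL below is the placeholder used earlier in this guide.

```bash
# Fetch only the response headers from the upstream service.
curl -sI https://your-backend-app.klutch.sh | head -n 5
```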
Legitimate Users Blocked
Issue: Real users are being denied
Solutions:
- Review your bot policies for overly aggressive rules
- Add allowlist rules for legitimate user agents
- Lower the challenge difficulty
- Check for rules that match too broadly
Storage Errors
Issue: bbolt database errors or permission issues
Solutions:
- Verify the persistent volume is mounted to /data
- Check that the volume has sufficient space
- Ensure user 1000:1000 has write permissions
- Review logs for specific error messages
Challenge Page Not Loading
Issue: Blank page or JavaScript errors on challenge
Solutions:
- Ensure JavaScript is enabled in the browser
- Check browser console for errors
- Verify the challenge assets are being served correctly (see the check after this list)
- Test with a different browser
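One rough way to confirm assets are referenced at all is to fetch the challenge page with a browser-like user agent and look for script tags; the URL is a placeholder and asset paths can differ between Anubis versions.

```bash
# List script tags in the challenge page HTML.
curl -s -A "Mozilla/5.0" https://example-app.klutch.sh/ | grep -io '<script[^>]*>'
```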
Monitoring with Prometheus
Anubis exposes metrics on port 9090. To collect these metrics:
- Deploy Prometheus on Klutch.sh or use an external service
- Configure Prometheus to scrape your Anubis instance:

```yaml
scrape_configs:
  - job_name: 'anubis'
    static_configs:
      - targets: ['example-app.klutch.sh:9090']
```

- Key metrics to monitor (an example query follows this list):
  - anubis_challenges_issued_total: Total challenges issued
  - anubis_challenges_passed_total: Successful challenge completions
  - anubis_challenges_failed_total: Failed challenges
  - anubis_requests_total: Total requests processed
  - anubis_rule_matches_total: Rule match counts by name
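Once Prometheus is scraping the instance, you can query these metrics over its HTTP API; the sketch below assumes a reachable Prometheus server at a placeholder address and uses a standard rate() query.

```bash
# Challenge pass rate over the last 5 minutes, via the Prometheus query API.
curl -s 'http://prometheus.example.com:9090/api/v1/query' \
  --data-urlencode 'query=rate(anubis_challenges_passed_total[5m])'
```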
Local Development with Docker Compose
For local testing before deploying to Klutch.sh, use Docker Compose:
```yaml
version: '3.8'

services:
  anubis:
    image: ghcr.io/techarohq/anubis:latest
    ports:
      - "8923:8923"
      - "9090:9090"
    environment:
      # Inside the Compose network, services talk on container ports, so point
      # Anubis at nginx's container port 80 (the 3000:80 mapping only affects the host)
      - TARGET=http://backend:80
      - POLICY_FNAME=/etc/anubis/policies.yaml
      - ED25519_PRIVATE_KEY_HEX=0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
      - DIFFICULTY=4
      - SLOG_LEVEL=DEBUG
    volumes:
      - ./policies.yaml:/etc/anubis/policies.yaml:ro
      - anubis_data:/data

  backend:
    image: nginx:alpine
    ports:
      - "3000:80"

volumes:
  anubis_data:
```

Run locally with:

```bash
docker-compose up -d
```

Access Anubis at http://localhost:8923.
Note: Docker Compose is for local development only. Deploy to Klutch.sh using the Dockerfile approach described in this guide.
Updating Anubis
To update to a newer version of Anubis:
- Update your Dockerfile to use a specific version or latest:

```dockerfile
# Use latest
FROM ghcr.io/techarohq/anubis:latest

# Or pin to a specific version
FROM ghcr.io/techarohq/anubis:v1.23.1
```

- Commit and push the changes:

```bash
git add Dockerfile
git commit -m "Update Anubis to latest version"
git push origin main
```

- Redeploy through the Klutch.sh dashboard.
- Verify the update by checking the version in the challenge page footer or logs.

Integration Examples
Protecting a Node.js Application
Deploy both your Node.js app and Anubis on Klutch.sh:
```bash
# In Anubis environment variables
TARGET=https://my-nodejs-app.klutch.sh
```

Protecting Multiple Services
Deploy multiple Anubis instances for different services:
```bash
# Anubis instance 1
TARGET=https://api.klutch.sh
COOKIE_DOMAIN=klutch.sh

# Anubis instance 2
TARGET=https://web.klutch.sh
COOKIE_DOMAIN=klutch.sh
```

Using with Custom Domains
When using a custom domain with Klutch.sh:
```bash
# Set cookie domain to your custom domain
COOKIE_DOMAIN=example.com
TARGET=https://backend.example.com
```

Resources
- Anubis GitHub Repository
- Anubis Official Documentation
- Bot Policy Documentation
- Klutch.sh Quick Start Guide
- Klutch.sh Volumes Guide
- Klutch.sh Deployments Guide
Conclusion
You now have a fully functional Anubis deployment running on Klutch.sh, protecting your web applications from AI scrapers and malicious bots. This setup provides:
- Proof-of-work challenge protection for browser-like clients
- Customizable bot policies to allow, deny, or challenge requests
- Persistent storage for challenge data
- Prometheus metrics for monitoring
- Automatic HTTPS and scalable infrastructure
Anubis offers a self-hosted alternative to services like Cloudflare for bot protection, giving you complete control over how your web applications handle suspicious traffic. The lightweight design (typically under 128Mi RAM) makes it efficient to deploy alongside your applications.
For community support and discussions, visit the Anubis GitHub Discussions or check the official documentation.