Deploying DragonFly
Introduction
DragonFly is a modern, high-performance in-memory data store that serves as a drop-in replacement for Redis and Memcached. Built from the ground up for cloud-native workloads, DragonFly delivers up to 25X more throughput than Redis while consuming 80% fewer resources. It achieves this through a novel shared-nothing architecture, efficient data structures based on the Dash paper, and a transactional framework that eliminates the need for locks.
With full compatibility with Redis and Memcached APIs, DragonFly requires no code changes to adopt. It currently supports approximately 185 Redis commands (equivalent to Redis 5.0 API) and all Memcached commands except cas. Whether you’re running caching layers, session stores, message queues, or real-time analytics, DragonFly provides sub-millisecond latency at massive scale.
This guide will walk you through deploying DragonFly on Klutch.sh, configuring it for production workloads, setting up persistence, and optimizing performance to handle millions of operations per second.
Why Deploy DragonFly on Klutch.sh
Deploying DragonFly on Klutch.sh offers several advantages for modern application architectures:
Simplified Deployment: Klutch.sh automatically detects your Dockerfile and builds DragonFly with zero configuration. No complex orchestration or manual server setup required.
High Performance Networking: With TCP traffic support and dedicated port routing, your applications can connect to DragonFly at port 8000 with minimal latency. Internal routing is optimized for high-throughput workloads.
Persistent Storage: Attach persistent volumes to DragonFly for RDB snapshots and append-only file (AOF) backups, ensuring data durability across container restarts and deployments.
Vertical Scaling: Unlike traditional Redis that’s limited by single-threaded architecture, DragonFly scales vertically with CPU cores. Deploy on larger instances and watch throughput increase linearly.
Environment Management: Securely configure DragonFly settings like authentication passwords, memory limits, and cache modes through environment variables without exposing sensitive data.
GitHub Integration: Connect your DragonFly configuration repository directly from GitHub. Updates to your Dockerfile or configuration files trigger automatic redeployments.
Resource Efficiency: DragonFly’s memory-efficient design means you can handle larger datasets with smaller instances, reducing infrastructure costs significantly compared to Redis.
Multi-Protocol Support: Run both Redis and HTTP protocols on the same port. Access metrics via HTTP while your applications use the Redis protocol, all without additional configuration.
Monitoring and Metrics: DragonFly exposes Prometheus-compatible metrics out of the box, making it easy to integrate with your existing observability stack on Klutch.sh.
Production Ready: Built for cloud environments with kernel 4.19+ support, DragonFly runs reliably on modern Linux distributions with optimizations for both x86_64 and ARM64 architectures.
Prerequisites
Before deploying DragonFly on Klutch.sh, ensure you have:
- A Klutch.sh account
- A GitHub account with a repository for your DragonFly configuration
- Basic understanding of Redis/Memcached concepts and commands
- Knowledge of Docker and containerization
- Familiarity with in-memory data stores and caching strategies
- (Optional) A Redis client like `redis-cli` for testing connections
- (Optional) Understanding of RDB snapshots and AOF persistence
Understanding DragonFly Architecture
DragonFly’s architecture is fundamentally different from traditional Redis:
Shared-Nothing Architecture: DragonFly partitions the keyspace across CPU threads, with each thread managing its own “shard” of dictionary data. This eliminates contention and allows full utilization of multi-core processors.
VLL Transaction Framework: Based on the research paper “VLL: a lock manager redesign for main memory database systems”, DragonFly achieves atomicity for multi-key operations without mutexes or spinlocks.
Dash Hashtable: The core data structure is based on the Dash paper, providing incremental hashing during growth, stateless scanning, and superior memory efficiency compared to traditional Redis dictionaries.
Memory Efficiency: DragonFly uses 30% less memory than Redis in idle state and doesn’t show visible memory increase during snapshot operations, while Redis can spike to 3X memory usage during BGSAVE.
Protocol Support: DragonFly automatically detects whether a connection is using Redis protocol or HTTP protocol, allowing both on the same port (default 6379).
Novel Caching: When cache mode is enabled, DragonFly uses a unified, adaptive eviction algorithm that’s more efficient than LRU or LFU strategies, with zero memory overhead.
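The shard-per-thread idea can be sketched in a few lines. This is purely illustrative (not DragonFly's actual code): a deterministic hash pins every key to one shard, so the thread that owns that shard can operate on it without locks.

```python
# Illustrative sketch of shared-nothing key routing (not DragonFly's
# real implementation): each key maps to exactly one shard, and each
# shard is owned by exactly one thread.
import hashlib

NUM_SHARDS = 8  # e.g. one shard per CPU thread

def shard_for_key(key: str) -> int:
    """Map a key deterministically to one shard."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

# Every operation on "user:42" always lands on the same shard,
# so that shard's owning thread never contends with the others.
print(shard_for_key("user:42") == shard_for_key("user:42"))  # True
```

Multi-key operations spanning shards are where the VLL transaction framework comes in, coordinating across shards without mutexes.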
Preparing Your Repository
To deploy DragonFly on Klutch.sh, you’ll need to create a GitHub repository with a Dockerfile and optional configuration files.
Creating the Dockerfile
Create a Dockerfile in the root of your repository. This example uses the official DragonFly image with custom configuration:
```dockerfile
FROM docker.dragonflydb.io/dragonflydb/dragonfly:latest

# Set working directory
WORKDIR /data

# Copy custom configuration if you have one
# COPY dragonfly.conf /etc/dragonfly/dragonfly.conf

# Expose the main port (6379) and optional admin port
EXPOSE 6379

# DragonFly will use environment variables for configuration
# The ENTRYPOINT is already set in the base image
# CMD is set to run dragonfly with default options
```

Alternative Dockerfile with Custom Configuration
If you need more control over DragonFly’s startup parameters, create a custom entrypoint:
```dockerfile
FROM docker.dragonflydb.io/dragonflydb/dragonfly:latest

# Install additional tools if needed
USER root
RUN apt-get update && apt-get install -y \
    redis-tools \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Set working directory for data persistence
WORKDIR /data

# Copy startup script
COPY start-dragonfly.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/start-dragonfly.sh

# Expose main port
EXPOSE 6379

# Switch back to the dragonfly user if it exists, or run as root
# USER dragonfly

# Run custom startup script
CMD ["/usr/local/bin/start-dragonfly.sh"]
```

Creating a Startup Script
Create start-dragonfly.sh for advanced configuration:
```bash
#!/bin/bash
set -e

# Set default values
DRAGONFLY_PORT=${DRAGONFLY_PORT:-6379}
DRAGONFLY_BIND=${DRAGONFLY_BIND:-0.0.0.0}
DRAGONFLY_REQUIREPASS=${DRAGONFLY_REQUIREPASS:-}
DRAGONFLY_MAXMEMORY=${DRAGONFLY_MAXMEMORY:-0}
DRAGONFLY_DBFILENAME=${DRAGONFLY_DBFILENAME:-dump.rdb}
DRAGONFLY_DIR=${DRAGONFLY_DIR:-/data}
DRAGONFLY_CACHE_MODE=${DRAGONFLY_CACHE_MODE:-false}
DRAGONFLY_DBNUM=${DRAGONFLY_DBNUM:-16}
DRAGONFLY_SNAPSHOT_CRON=${DRAGONFLY_SNAPSHOT_CRON:-}

# Build command arguments as an array so values containing spaces
# (such as the snapshot cron expression) survive intact
ARGS=(--logtostderr)
ARGS+=(--port="$DRAGONFLY_PORT")
ARGS+=(--bind="$DRAGONFLY_BIND")
ARGS+=(--dir="$DRAGONFLY_DIR")
ARGS+=(--dbfilename="$DRAGONFLY_DBFILENAME")
ARGS+=(--maxmemory="$DRAGONFLY_MAXMEMORY")
ARGS+=(--dbnum="$DRAGONFLY_DBNUM")

# Add password if set
if [ -n "$DRAGONFLY_REQUIREPASS" ]; then
  ARGS+=(--requirepass="$DRAGONFLY_REQUIREPASS")
fi

# Enable cache mode if set
if [ "$DRAGONFLY_CACHE_MODE" = "true" ]; then
  ARGS+=(--cache_mode=true)
fi

# Add snapshot cron if set
if [ -n "$DRAGONFLY_SNAPSHOT_CRON" ]; then
  ARGS+=(--snapshot_cron="$DRAGONFLY_SNAPSHOT_CRON")
fi

# Additional flags from environment (intentionally word-split)
if [ -n "$DRAGONFLY_EXTRA_FLAGS" ]; then
  ARGS+=($DRAGONFLY_EXTRA_FLAGS)
fi

echo "Starting DragonFly with arguments: ${ARGS[*]}"

# Execute dragonfly
exec dragonfly "${ARGS[@]}"
```

Configuration File (Optional)
If you prefer using a configuration file instead of environment variables, create dragonfly.conf:
```
# DragonFly Configuration File

# Network settings
port 6379
bind 0.0.0.0

# Authentication
# requirepass yoursecurepasswordhere

# Memory management
maxmemory 4gb
cache_mode false

# Persistence
dir /data
dbfilename dump.rdb
# snapshot_cron 0 */1 * * *

# Database settings
dbnum 16

# Performance tuning
hz 100
keys_output_limit 8192

# Logging
logtostderr true

# HTTP console
primary_port_http_enabled true

# Admin console (optional)
# admin_port 6380
# admin_bind localhost
```

.dockerignore File
Create a .dockerignore to exclude unnecessary files:
```
.git
.github
*.md
README.md
LICENSE
.gitignore
*.log
tmp/
temp/
.DS_Store
node_modules/
```

Environment Variables Reference
Create a .env.example file to document available configuration options:
```bash
# DragonFly Configuration
DRAGONFLY_PORT=6379
DRAGONFLY_BIND=0.0.0.0
DRAGONFLY_REQUIREPASS=changeme
DRAGONFLY_MAXMEMORY=4gb
DRAGONFLY_DBFILENAME=dump.rdb
DRAGONFLY_DIR=/data
DRAGONFLY_CACHE_MODE=false
DRAGONFLY_DBNUM=16
DRAGONFLY_SNAPSHOT_CRON=0 */1 * * *
DRAGONFLY_KEYS_OUTPUT_LIMIT=8192
DRAGONFLY_HZ=100

# Additional flags (space-separated)
# DRAGONFLY_EXTRA_FLAGS=--admin_port=6380 --cluster_mode=emulated
```

Deploying DragonFly on Klutch.sh
Once your repository is prepared, follow these steps to deploy DragonFly:

Create a New Project

Navigate to the Klutch.sh dashboard and create a new project. Give it a descriptive name like "dragonfly-cache" or "dragonfly-production".

Connect Your GitHub Repository

Link your GitHub account if you haven't already, then select the repository containing your DragonFly Dockerfile. Klutch.sh will automatically detect the Dockerfile and prepare for deployment.

Configure TCP Traffic

Since DragonFly is a database/cache service, select TCP traffic in the deployment settings. This ensures your applications can connect to DragonFly on port 8000 externally. Set the internal port to 6379 (DragonFly's default port).

Set Environment Variables

Configure the following environment variables in the Klutch.sh dashboard:

- DRAGONFLY_REQUIREPASS: a strong password for authentication
- DRAGONFLY_MAXMEMORY: memory limit (e.g., "4gb", "8gb", "16gb")
- DRAGONFLY_CACHE_MODE: set to "true" if using DragonFly as a cache with eviction
- DRAGONFLY_SNAPSHOT_CRON: automatic snapshot schedule (e.g., "0 */1 * * *" for hourly)
- DRAGONFLY_DBNUM: number of databases (default: 16)
- DRAGONFLY_DIR: set to "/data" for persistent storage

Attach Persistent Volume

To ensure data persistence across container restarts:

- Add a persistent volume with mount path /data
- Set the volume size based on your dataset (e.g., 10GB, 50GB, 100GB)
- This volume will store RDB snapshots and AOF files

Deploy

Click the deploy button. Klutch.sh will build your Docker image and start the DragonFly container with TCP traffic on port 8000.
Connecting to DragonFly
After deployment, your DragonFly instance will be accessible via TCP on port 8000. The internal container port 6379 is automatically routed.
Connection String Format
```
redis://:<password>@example-app.klutch.sh:8000
```

Replace:

- `<password>` with your DRAGONFLY_REQUIREPASS value
- `example-app.klutch.sh` with your actual Klutch.sh app URL
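If you assemble this URL in code, remember to URL-encode the password so special characters survive. A small Python sketch; the hostname and port are placeholders for your own deployment:

```python
# Build a Redis connection URL with a URL-encoded password.
# "example-app.klutch.sh" and 8000 stand in for your app's host/port.
from urllib.parse import quote, urlparse

password = "s3cret/with-special-chars"  # slashes etc. must be percent-encoded
url = f"redis://:{quote(password, safe='')}@example-app.klutch.sh:8000"

# Sanity-check the components
parts = urlparse(url)
print(parts.hostname, parts.port)  # example-app.klutch.sh 8000

# With redis-py you could then connect via:
# import redis
# r = redis.Redis.from_url(url, decode_responses=True)
```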
Using redis-cli
Connect from your local machine:
```bash
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword
```

Test basic operations:

```bash
# Set a key
SET mykey "Hello DragonFly"

# Get a key
GET mykey

# Set with expiration
SETEX session:user123 3600 "user-data"

# Check if key exists
EXISTS mykey

# Get all keys (use with caution in production)
KEYS *

# Get server info
INFO

# Check memory usage
INFO memory

# Get statistics
INFO stats
```

Connection Examples by Language
Node.js with ioredis:
```javascript
const Redis = require('ioredis');

const redis = new Redis({
  host: 'example-app.klutch.sh',
  port: 8000,
  password: 'yourpassword',
  retryStrategy: (times) => {
    const delay = Math.min(times * 50, 2000);
    return delay;
  },
  maxRetriesPerRequest: 3,
});

// Test connection
redis.set('test', 'value', (err, result) => {
  if (err) {
    console.error('Error:', err);
  } else {
    console.log('Set result:', result);
  }
});

redis.get('test', (err, result) => {
  if (err) {
    console.error('Error:', err);
  } else {
    console.log('Get result:', result);
  }
});
```

Python with redis-py:
```python
import redis

# Create connection
r = redis.Redis(
    host='example-app.klutch.sh',
    port=8000,
    password='yourpassword',
    decode_responses=True,
    socket_keepalive=True,
    socket_connect_timeout=5,
    retry_on_timeout=True
)

# Test connection
r.ping()

# Set and get values
r.set('user:1000', 'John Doe')
name = r.get('user:1000')
print(f"User name: {name}")

# Hash operations
r.hset('user:1001', mapping={
    'name': 'Jane Smith',
    'email': 'jane@example.com',
    'age': 28
})

user = r.hgetall('user:1001')
print(f"User data: {user}")
```

Go with go-redis:
```go
package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

var ctx = context.Background()

func main() {
	rdb := redis.NewClient(&redis.Options{
		Addr:     "example-app.klutch.sh:8000",
		Password: "yourpassword",
		DB:       0,
	})

	// Test connection
	pong, err := rdb.Ping(ctx).Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("Connected:", pong)

	// Set value
	err = rdb.Set(ctx, "key", "value", 0).Err()
	if err != nil {
		panic(err)
	}

	// Get value
	val, err := rdb.Get(ctx, "key").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("key:", val)
}
```

PHP with Predis:
```php
<?php
require 'vendor/autoload.php';

$client = new Predis\Client([
    'scheme' => 'tcp',
    'host' => 'example-app.klutch.sh',
    'port' => 8000,
    'password' => 'yourpassword',
]);

// Test connection
$client->ping();

// Set and get
$client->set('foo', 'bar');
$value = $client->get('foo');
echo "Value: $value\n";

// List operations
$client->lpush('mylist', 'item1', 'item2', 'item3');
$items = $client->lrange('mylist', 0, -1);
print_r($items);
?>
```

Ruby with redis-rb:
```ruby
require 'redis'

redis = Redis.new(
  host: 'example-app.klutch.sh',
  port: 8000,
  password: 'yourpassword',
  timeout: 5,
  reconnect_attempts: 3
)

# Test connection
redis.ping

# Set and get
redis.set('mykey', 'myvalue')
value = redis.get('mykey')
puts "Value: #{value}"

# Set with expiration
redis.setex('session:123', 3600, 'session-data')

# Hash operations
redis.hset('user:1', 'name', 'Alice')
redis.hset('user:1', 'email', 'alice@example.com')
user = redis.hgetall('user:1')
puts "User: #{user}"
```

Data Persistence Configuration
DragonFly supports multiple persistence strategies to ensure data durability.
RDB Snapshots
RDB (Redis Database) snapshots create point-in-time backups of your dataset:
Manual Snapshots:
```bash
# Create snapshot immediately
BGSAVE

# Check snapshot status
LASTSAVE
```

Automatic Snapshots with Cron:

Set the DRAGONFLY_SNAPSHOT_CRON environment variable in Klutch.sh:

```bash
# Every hour
DRAGONFLY_SNAPSHOT_CRON="0 */1 * * *"

# Every 15 minutes
DRAGONFLY_SNAPSHOT_CRON="*/15 * * * *"

# Daily at midnight
DRAGONFLY_SNAPSHOT_CRON="0 0 * * *"

# Every 6 hours
DRAGONFLY_SNAPSHOT_CRON="0 */6 * * *"
```

Snapshot Configuration:

```bash
# Set snapshot filename
DRAGONFLY_DBFILENAME=production-dump.rdb

# Set data directory (must match volume mount)
DRAGONFLY_DIR=/data
```

Checking Snapshot Files
You can verify snapshots are being created by accessing the container:
```bash
# List files in data directory
ls -lh /data/

# Check snapshot size
du -sh /data/*.rdb

# View snapshot timestamp
stat /data/dump.rdb
```

Snapshot Best Practices
Snapshot Frequency: Balance between data loss tolerance and I/O overhead. For most applications, hourly snapshots are sufficient. High-write workloads might need more frequent snapshots.
Volume Size: Ensure your persistent volume is at least 2X your expected dataset size to accommodate snapshots and temporary files during the save process.
Monitoring: Track LASTSAVE timestamp to ensure snapshots are completing successfully. Set up alerts if snapshots haven’t run within expected intervals.
Testing: Periodically test snapshot restoration to verify backup integrity. Create a test instance and load the RDB file to confirm data recovery works.
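The LASTSAVE monitoring advice can be sketched as a small freshness check. This is a minimal example, assuming you feed it the Unix timestamp returned by the server (redis-py's `lastsave()` returns a datetime, as shown in the comment); the 2-hour threshold mirrors the alerting guidance later in this guide.

```python
# Minimal snapshot-age check: alert if the last snapshot is too old.
import time

def snapshot_is_fresh(lastsave_unix: int, max_age_seconds: int = 7200) -> bool:
    """Return True if the last snapshot is newer than max_age_seconds."""
    return (time.time() - lastsave_unix) <= max_age_seconds

# In production, feed this from the server, e.g. with redis-py:
# r = redis.Redis(host="example-app.klutch.sh", port=8000, password=...)
# fresh = snapshot_is_fresh(int(r.lastsave().timestamp()))
print(snapshot_is_fresh(int(time.time()) - 600))    # 10 minutes old -> True
print(snapshot_is_fresh(int(time.time()) - 90000))  # ~25 hours old -> False
```

Run a check like this from your monitoring system on the same schedule as your snapshot cron, with a little slack.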
Memory Management and Cache Mode
DragonFly offers sophisticated memory management options for different use cases.
Setting Memory Limits
Configure maximum memory usage via environment variable:
```bash
# Set 4GB limit
DRAGONFLY_MAXMEMORY=4gb

# Set 8GB limit
DRAGONFLY_MAXMEMORY=8gb

# Set 16GB limit
DRAGONFLY_MAXMEMORY=16gb

# No limit (use with caution)
DRAGONFLY_MAXMEMORY=0
```

Cache Mode
DragonFly’s cache mode uses a novel eviction algorithm that’s more efficient than LRU or LFU:
Enable cache mode:
```bash
DRAGONFLY_CACHE_MODE=true
```

In cache mode:

- DragonFly automatically evicts keys when approaching maxmemory
- Uses a unified, adaptive algorithm with zero memory overhead
- Evicts items least likely to be accessed in the future
- More efficient than Redis’s LRU/LFU implementations
When to Use Cache Mode:
✅ Session stores with short TTLs
✅ Application caching layers
✅ Rate limiting counters
✅ Temporary data storage
✅ Content delivery network (CDN) caching
When NOT to Use Cache Mode:
❌ Persistent data storage
❌ Message queues requiring durability
❌ Pub/Sub systems
❌ When you need explicit control over eviction
❌ Time-series data that must be retained
Memory Efficiency
DragonFly’s memory efficiency advantages:
30% Less Memory: Compared to Redis in idle state, DragonFly uses significantly less memory for the same dataset size.
No Snapshot Overhead: During BGSAVE operations, DragonFly doesn’t show visible memory increase, while Redis can spike to 3X memory usage due to copy-on-write.
Efficient Data Structures: The Dash hashtable design provides better memory utilization than traditional Redis dictionaries.
Zero-Overhead Eviction: When cache mode is enabled, the eviction algorithm doesn’t require additional memory for tracking.
Monitoring Memory Usage
Check memory statistics:
```bash
# Get memory info
INFO memory

# Key metrics to monitor:
# - used_memory: Total memory used by DragonFly
# - used_memory_rss: Resident set size (actual RAM)
# - used_memory_peak: Peak memory usage
# - mem_fragmentation_ratio: Memory fragmentation ratio
```

Performance Optimization
DragonFly is designed for high performance, but proper configuration ensures optimal throughput and latency.
CPU and Threading
DragonFly automatically utilizes all available CPU cores through its shared-nothing architecture:
Vertical Scaling: Unlike Redis, DragonFly’s throughput scales linearly with CPU cores. Deploy on instances with more cores for higher throughput.
Thread Sharding: Each thread manages its own shard of the keyspace, eliminating contention and lock overhead.
Recommendation: For production workloads, use instances with 8+ CPU cores to maximize DragonFly’s multi-threaded performance.
Connection Pooling
Implement connection pooling in your application to reduce connection overhead:
Node.js Example:
```javascript
const Redis = require('ioredis');

// Create a long-lived, pooled connection
const redis = new Redis({
  host: 'example-app.klutch.sh',
  port: 8000,
  password: 'yourpassword',
  // Connection pool settings
  lazyConnect: true,
  enableOfflineQueue: true,
  maxRetriesPerRequest: 3,
  // Keep-alive
  keepAlive: 30000,
});
```

Python Example:
```python
from redis import ConnectionPool, Redis

pool = ConnectionPool(
    host='example-app.klutch.sh',
    port=8000,
    password='yourpassword',
    max_connections=50,
    socket_keepalive=True,
    socket_connect_timeout=5,
    retry_on_timeout=True
)

redis_client = Redis(connection_pool=pool)
```

Pipelining
Use pipelining to batch multiple commands and reduce round-trip latency:
```python
# Python example
pipe = redis_client.pipeline()
pipe.set('key1', 'value1')
pipe.set('key2', 'value2')
pipe.set('key3', 'value3')
pipe.get('key1')
pipe.get('key2')
results = pipe.execute()
```

Key Design Patterns
Use Short Key Names: Shorter keys consume less memory. Use abbreviations where possible.
```bash
# Instead of
user:session:authenticated:userid:12345

# Use
u:s:a:12345
```

Leverage Data Structures: Use appropriate Redis data structures for your use case:
- Strings: Simple key-value pairs, counters
- Hashes: Objects with multiple fields (user profiles, settings)
- Lists: Queues, activity feeds, recent items
- Sets: Unique collections, tags, relationships
- Sorted Sets: Leaderboards, time-series data, priority queues
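As a concrete example of the sorted-set pattern, here is a leaderboard sketch. The redis-py calls it mirrors are shown in comments; a plain dict stands in for the server so the snippet runs on its own.

```python
# Leaderboard pattern with a sorted set. The real redis-py calls would be:
#   r.zadd("leaderboard", {"alice": 120, "bob": 95})
#   r.zrevrange("leaderboard", 0, 1, withscores=True)
# A dict + sort stands in here so the sketch runs without a server.
scores = {}

def zadd(member: str, score: float) -> None:
    """Add or update a member's score (like ZADD)."""
    scores[member] = score

def top(n: int):
    """Highest-scored members first (like ZREVRANGE 0 n-1 WITHSCORES)."""
    return sorted(scores.items(), key=lambda kv: -kv[1])[:n]

zadd("alice", 120)
zadd("bob", 95)
zadd("carol", 150)
print(top(2))  # [('carol', 150), ('alice', 120)]
```

The server-side sorted set gives you the same ranked reads in O(log N) per update, without shipping the whole scoreboard to the client.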
Set Appropriate TTLs: Always set expiration for temporary data:
```bash
# Session with 1 hour TTL
SETEX session:user123 3600 "session-data"

# Cache with 5 minute TTL
SETEX cache:article:456 300 "{\"title\":\"Article\",\"content\":\"...\"}"
```

Performance Benchmarking
Test your DragonFly instance performance using the built-in Redis benchmark tool:
```bash
# Basic benchmark
redis-benchmark -h example-app.klutch.sh -p 8000 -a yourpassword -t set,get -n 100000 -q

# Pipeline benchmark
redis-benchmark -h example-app.klutch.sh -p 8000 -a yourpassword -t set,get -n 100000 -P 16 -q

# Comprehensive benchmark
redis-benchmark -h example-app.klutch.sh -p 8000 -a yourpassword -n 100000 -d 256 -c 50 -t set,get,lpush,lpop,sadd,spop,zadd,zpopmin,hset,hget
```

Expected results (depending on instance size):
- Small instances (2-4 cores): 50K-150K ops/sec
- Medium instances (8-16 cores): 300K-1M ops/sec
- Large instances (32+ cores): 2M-3M+ ops/sec
DragonFly’s performance scales with CPU cores, so larger instances deliver proportionally higher throughput.
Monitoring and Metrics
DragonFly provides comprehensive metrics for monitoring performance and health.
HTTP Metrics Endpoint
DragonFly exposes Prometheus-compatible metrics via HTTP:
```bash
# Access metrics
curl http://example-app.klutch.sh:8000/metrics

# If using password authentication, metrics may require authorization
# or access via internal monitoring systems
```

Key Metrics to Monitor
Operations:
- dragonfly_commands_total: Total commands processed
- dragonfly_commands_duration_seconds: Command latency
- dragonfly_connections_current: Active connections
- dragonfly_keyspace_hits_total: Cache hit count
- dragonfly_keyspace_misses_total: Cache miss count
Memory:
- dragonfly_memory_used_bytes: Current memory usage
- dragonfly_memory_max_bytes: Maximum memory limit
- dragonfly_memory_fragmentation_ratio: Memory fragmentation
Persistence:
- dragonfly_last_save_time: Timestamp of last snapshot
- dragonfly_save_duration_seconds: Snapshot duration
Redis INFO Command
Get detailed statistics using the INFO command:
```bash
# All info
INFO

# Specific sections
INFO server
INFO clients
INFO memory
INFO stats
INFO replication
INFO cpu
INFO keyspace
```

Cache Hit Rate
Calculate cache effectiveness:
```bash
# Get stats
INFO stats

# Calculate hit rate
hit_rate = keyspace_hits / (keyspace_hits + keyspace_misses) * 100
```

A healthy cache hit rate is typically 80% or higher. Lower rates indicate:
- Insufficient memory (increase maxmemory)
- Poor TTL configuration
- Cache warming needed
- Suboptimal key design
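The hit-rate formula above can be wrapped in a small helper. The `keyspace_hits` and `keyspace_misses` counters come from INFO stats (redis-py exposes them via `info('stats')`); the sample numbers below are made up.

```python
# Cache hit rate from the INFO stats counters.
def hit_rate(hits: int, misses: int) -> float:
    """Percentage of reads served from cache; 0.0 when there is no traffic."""
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

# With redis-py, the counters would come from:
#   stats = r.info("stats")
#   rate = hit_rate(stats["keyspace_hits"], stats["keyspace_misses"])
print(hit_rate(8_000, 2_000))  # 80.0 -- the healthy threshold cited above
```

Note that these counters are cumulative since startup; for a current-window rate, diff two samples taken a few minutes apart.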
Health Checks
Implement health checks in your application:
```bash
# Simple ping
PING

# Check latency
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword --latency

# Monitor commands in real-time
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword MONITOR
```

Alerting Thresholds
Set up monitoring alerts for:
- Memory Usage: Alert when > 80% of maxmemory
- Connection Count: Alert on sudden spikes or drops
- Command Latency: Alert when P99 latency > 5ms
- Snapshot Age: Alert if last snapshot > 2 hours old
- Cache Hit Rate: Alert when < 70%
- CPU Usage: Alert when > 80% sustained
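A few of these thresholds can be codified directly. The field names in this sketch are illustrative stand-ins for values you would assemble yourself from INFO and LASTSAVE, not an official schema.

```python
# Evaluate a subset of the alert thresholds above against sampled metrics.
# The dict keys are this sketch's own naming, not DragonFly field names.
def check_alerts(metrics: dict) -> list:
    alerts = []
    if metrics["used_memory"] > 0.8 * metrics["maxmemory"]:
        alerts.append("memory above 80% of maxmemory")
    if metrics["hit_rate"] < 70:
        alerts.append("cache hit rate below 70%")
    if metrics["snapshot_age_s"] > 2 * 3600:
        alerts.append("last snapshot older than 2 hours")
    return alerts

sample = {"used_memory": 7_000, "maxmemory": 8_000,
          "hit_rate": 65, "snapshot_age_s": 600}
print(check_alerts(sample))
# ['memory above 80% of maxmemory', 'cache hit rate below 70%']
```

In practice you would emit these as Prometheus alert rules rather than polling from application code, but the thresholds are the same.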
Security Best Practices
Securing your DragonFly instance is critical for production deployments.
Password Authentication
Always set a strong password:
```bash
# Generate a strong password
openssl rand -base64 32

# Set in Klutch.sh environment variables
DRAGONFLY_REQUIREPASS=<your-generated-password>
```

Network Security
TCP Traffic: DragonFly on Klutch.sh uses TCP traffic on port 8000. Ensure:
- Only authorized applications can access this port
- Use firewall rules to restrict access if needed
- Consider VPN or private network access for sensitive data
Disable HTTP Console: If you don’t need the HTTP metrics endpoint exposed:
```bash
# Add to DRAGONFLY_EXTRA_FLAGS
DRAGONFLY_EXTRA_FLAGS="--nohttp_admin_console"
```

Encryption in Transit
For sensitive data, consider:
TLS Support: DragonFly supports TLS encryption. Configure TLS certificates:
```bash
# Add TLS flags to DRAGONFLY_EXTRA_FLAGS
DRAGONFLY_EXTRA_FLAGS="--tls --tls_cert_file=/path/to/cert.pem --tls_key_file=/path/to/key.pem"
```

Application-Level Encryption: Encrypt sensitive data before storing:
```python
from cryptography.fernet import Fernet

# Generate key (store securely)
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt before storing
encrypted_data = cipher.encrypt(b"sensitive data")
redis_client.set('encrypted:key', encrypted_data)

# Decrypt after retrieving
stored_data = redis_client.get('encrypted:key')
decrypted_data = cipher.decrypt(stored_data)
```

Access Control
Multiple Databases: Use database selection for logical separation:
```bash
# Set number of databases
DRAGONFLY_DBNUM=16

# In application, select database
SELECT 1  # Use database 1
SELECT 2  # Use database 2
```

Command Restrictions: If your application doesn't need dangerous commands, implement restrictions at the application layer.
Dangerous commands to restrict:

- FLUSHDB / FLUSHALL: Delete all keys
- CONFIG: Modify configuration
- SHUTDOWN: Stop server
- BGREWRITEAOF: Rewrite AOF file
- DEBUG: Debug commands
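One way to enforce this at the application layer is a thin wrapper that rejects blocked commands before they reach the server. The stub client below is a placeholder so the sketch runs offline; in practice you would wrap a real `redis.Redis` instance.

```python
# Application-layer guard that refuses the dangerous commands listed above.
BLOCKED = {"FLUSHDB", "FLUSHALL", "CONFIG", "SHUTDOWN", "BGREWRITEAOF", "DEBUG"}

class GuardedClient:
    """Wraps a redis-py-style client and blocks dangerous commands."""
    def __init__(self, client):
        self._client = client

    def execute_command(self, name, *args):
        if name.upper() in BLOCKED:
            raise PermissionError(f"command {name} is blocked by policy")
        return self._client.execute_command(name, *args)

class StubClient:  # stands in for redis.Redis so the sketch runs offline
    def execute_command(self, name, *args):
        return f"OK {name}"

guarded = GuardedClient(StubClient())
print(guarded.execute_command("GET", "mykey"))  # OK GET
try:
    guarded.execute_command("FLUSHALL")
except PermissionError as e:
    print(e)  # command FLUSHALL is blocked by policy
```

This complements, rather than replaces, password authentication: it protects against accidental destructive calls from your own code paths.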
Regular Security Audits
- Review access logs regularly
- Rotate passwords periodically (every 90 days)
- Monitor for unusual command patterns
- Keep DragonFly updated to latest version
- Review connected clients with CLIENT LIST
Backup and Recovery
Implement a comprehensive backup strategy for data durability.
Automated Backups
Configure automatic snapshots:
```bash
# Hourly backups
DRAGONFLY_SNAPSHOT_CRON="0 */1 * * *"

# Daily backups at 2 AM
DRAGONFLY_SNAPSHOT_CRON="0 2 * * *"
```

Manual Backups
Create on-demand backups:
```bash
# Trigger background save
BGSAVE

# Check the Unix timestamp of the last completed save
LASTSAVE
```

Backup Files
DragonFly stores snapshots in the configured data directory:
```bash
# Default snapshot file
/data/dump.rdb

# Custom filename
/data/production-dump.rdb
```

Exporting Backups
To export backup files from your persistent volume:
- Access the container via Klutch.sh console
- Copy the RDB file to a temporary location
- Use scp or a similar tool to download
- Store backups in external storage (S3, GCS, etc.)
Backup Script Example
Create a backup automation script:
```bash
#!/bin/bash
BACKUP_DIR="/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="dragonfly-backup-${TIMESTAMP}.rdb"

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

# Trigger BGSAVE
redis-cli -h example-app.klutch.sh -p 8000 -a "$DRAGONFLY_REQUIREPASS" BGSAVE

# Wait for save to complete
sleep 10

# Copy RDB file
cp /data/dump.rdb "${BACKUP_DIR}/${BACKUP_FILE}"

# Compress backup
gzip "${BACKUP_DIR}/${BACKUP_FILE}"

# Upload to S3 (example)
# aws s3 cp "${BACKUP_DIR}/${BACKUP_FILE}.gz" s3://my-backups/dragonfly/

# Clean up old backups (keep last 7 days)
find "$BACKUP_DIR" -name "dragonfly-backup-*.rdb.gz" -mtime +7 -delete

echo "Backup completed: ${BACKUP_FILE}.gz"
```

Recovery Process
To restore from backup:
- Stop the DragonFly instance
- Replace /data/dump.rdb with your backup file
- Restart DragonFly
- Verify data integrity
```bash
# Example recovery commands
# 1. Stop DragonFly (via Klutch.sh dashboard)

# 2. Replace RDB file
cp /backups/dragonfly-backup-20241216_120000.rdb /data/dump.rdb

# 3. Restart DragonFly (via Klutch.sh dashboard)

# 4. Verify
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword DBSIZE
```

Backup Best Practices
- Multiple Copies: Store backups in multiple locations (local + cloud)
- Regular Testing: Periodically test restoration process
- Retention Policy: Keep daily backups for 7 days, weekly for 4 weeks, monthly for 12 months
- Monitoring: Alert if backups fail or aren’t created on schedule
- Encryption: Encrypt backup files before storing externally
- Documentation: Document recovery procedures for your team
Troubleshooting Common Issues
Connection Refused
Symptoms: Unable to connect to DragonFly from application.
Causes:
- Incorrect host or port
- Firewall blocking TCP traffic
- DragonFly not running
- Wrong password
Solutions:
```bash
# 1. Verify DragonFly is running
# Check container status in Klutch.sh dashboard

# 2. Test connection
redis-cli -h example-app.klutch.sh -p 8000 PING

# 3. Check with authentication
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword PING

# 4. Verify TCP traffic is enabled in Klutch.sh
# Navigate to project settings and confirm TCP is selected
```

High Memory Usage
Symptoms: Memory usage approaching or exceeding maxmemory limit.
Causes:
- Dataset larger than available memory
- Memory leak in application
- No eviction policy (cache mode disabled)
- Large key values
Solutions:
```bash
# 1. Check memory usage
INFO memory

# 2. Enable cache mode if using as cache
DRAGONFLY_CACHE_MODE=true

# 3. Increase maxmemory
DRAGONFLY_MAXMEMORY=16gb

# 4. Analyze memory usage per key
MEMORY USAGE key_name

# 5. Find large keys
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword --bigkeys

# 6. Delete unnecessary keys
DEL unused_key1 unused_key2
```

Slow Response Times
Symptoms: High latency or slow command execution.
Causes:
- Blocking commands (KEYS, FLUSHDB)
- Insufficient CPU resources
- Network latency
- Large dataset operations
- No connection pooling
Solutions:
```bash
# 1. Check latency
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword --latency

# 2. Monitor slow commands
SLOWLOG GET 10

# 3. Check connected clients
CLIENT LIST

# 4. Avoid the KEYS command, use SCAN instead
SCAN 0 MATCH user:* COUNT 100

# 5. Use pipelining for multiple commands
# (implement in application code)

# 6. Scale vertically: deploy on a larger instance with more CPU cores
```

Snapshot Failures
Symptoms: Snapshots not being created, LASTSAVE not updating.
Causes:
- Insufficient disk space
- File permissions
- I/O errors
- Snapshot cron syntax error
Solutions:
```bash
# 1. Check last save time
LASTSAVE

# 2. Manually trigger save
BGSAVE

# 3. Check disk space
df -h /data

# 4. Verify snapshot cron syntax
# Correct: "0 */1 * * *" (hourly)

# 5. Check DragonFly logs for errors
# View logs in Klutch.sh dashboard

# 6. Increase persistent volume size if needed
```

Authentication Failures
Symptoms: “NOAUTH Authentication required” or “ERR invalid password” errors.
Causes:
- Wrong password
- Password not set in environment variables
- Application using old credentials
Solutions:
```bash
# 1. Verify password is set
# Check DRAGONFLY_REQUIREPASS in Klutch.sh environment variables

# 2. Test authentication
redis-cli -h example-app.klutch.sh -p 8000 -a correct-password PING

# 3. Update application configuration
# Ensure the app is using the current password

# 4. Check for the password in the connection string
# redis://:<password>@example-app.klutch.sh:8000
```

Cache Hit Rate Too Low
Symptoms: Cache hit rate below 70%, high number of cache misses.
Causes:
- Insufficient memory
- TTLs too short
- Cache not properly warmed
- Traffic patterns changed
Solutions:
```bash
# 1. Check hit rate
INFO stats
# Look at keyspace_hits and keyspace_misses

# 2. Increase memory allocation
DRAGONFLY_MAXMEMORY=16gb

# 3. Adjust TTLs
# Increase TTL for frequently accessed data

# 4. Implement cache warming
# Preload hot data on startup

# 5. Review key access patterns
# Monitor which keys are being requested
```

Advanced Configuration
Cluster Mode (Emulated)
DragonFly supports emulated cluster mode for compatibility with Redis Cluster clients:
```bash
DRAGONFLY_EXTRA_FLAGS="--cluster_mode=emulated"
```

Admin Console
Enable admin console for advanced debugging:
```bash
# Add to environment variables
DRAGONFLY_EXTRA_FLAGS="--admin_port=6380 --admin_bind=0.0.0.0"

# Access via HTTP
curl http://example-app.klutch.sh:6380/
```

Note: Only enable the admin console in secure networks. Use authentication to protect access.
Memcached Protocol
DragonFly also supports Memcached protocol on a separate port:
```bash
# Enable Memcached on port 11211
DRAGONFLY_EXTRA_FLAGS="--memcached_port=11211"
```

Multiple Databases
Configure the number of logical databases:
```bash
DRAGONFLY_DBNUM=32
```

Use in application:
```bash
# Select database 0 (default)
SELECT 0

# Select database 5
SELECT 5

# Get current database
CLIENT INFO
```

Expiration Frequency
Adjust how often DragonFly checks for expired keys:
```bash
# Higher values = more CPU usage, faster expiration
# Lower values = less CPU usage, slower expiration
# Default: 100
DRAGONFLY_EXTRA_FLAGS="--hz=200"
```

Key Output Limit
Limit the number of keys returned by the KEYS command:
```bash
# Default: 8192
# Recommended: keep this low to prevent memory issues
DRAGONFLY_EXTRA_FLAGS="--keys_output_limit=10000"
```

Migration from Redis
Migrating from Redis to DragonFly is straightforward due to protocol compatibility.
Prerequisites for Migration
- Redis RDB file or access to Redis instance
- DragonFly deployed and configured
- Minimal application downtime acceptable
- Backup of Redis data
Migration Methods
Method 1: RDB File Transfer
```shell
# 1. Create a Redis snapshot
redis-cli -h redis-host -p 6379 -a redis-password BGSAVE

# 2. Download the Redis RDB file
scp user@redis-host:/var/lib/redis/dump.rdb ./redis-dump.rdb

# 3. Stop DragonFly (via the Klutch.sh dashboard)

# 4. Replace the DragonFly RDB file
cp redis-dump.rdb /data/dump.rdb

# 5. Start DragonFly (via the Klutch.sh dashboard)

# 6. Verify the data
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword DBSIZE
```

Method 2: Live Migration with RIOT
Use RIOT (Redis Input/Output Tools):
```shell
# Install RIOT
wget https://github.com/redis-developer/riot/releases/download/v3.1.4/riot-redis-3.1.4.zip
unzip riot-redis-3.1.4.zip

# Migrate data
./riot-redis --uri redis://source-redis:6379 replicate \
  --uri redis://:yourpassword@example-app.klutch.sh:8000
```

Method 3: Application-Level Migration
```python
import redis

# Source Redis
source = redis.Redis(host='source-redis', port=6379, password='redis-pass')

# Target DragonFly
target = redis.Redis(host='example-app.klutch.sh', port=8000, password='dragonfly-pass')

# Migrate all keys
for key in source.scan_iter():
    # Get TTL (in seconds; negative means no expiry)
    ttl = source.ttl(key)

    # Get the serialized value
    value = source.dump(key)

    # Restore to DragonFly (RESTORE expects milliseconds; 0 means no expiry)
    if ttl > 0:
        target.restore(key, ttl * 1000, value, replace=True)
    else:
        target.restore(key, 0, value, replace=True)

    print(f"Migrated: {key}")

print("Migration complete!")
```

Post-Migration Validation
```shell
# 1. Compare key counts
redis-cli -h source-redis -p 6379 DBSIZE
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword DBSIZE

# 2. Verify sample keys
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword GET sample_key

# 3. Check memory usage
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword INFO memory

# 4. Test application connectivity
# Run application integration tests

# 5. Monitor performance
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword --latency-history
```

Application Updates
Update Redis connection strings in your application:
```javascript
// Before (Redis)
const redis = new Redis({
  host: 'source-redis',
  port: 6379,
  password: 'redis-pass'
});
```

```javascript
// After (DragonFly)
const redis = new Redis({
  host: 'example-app.klutch.sh',
  port: 8000,
  password: 'dragonfly-pass'
});
```

Most Redis clients work with DragonFly without code changes. Simply update the connection parameters.
Rollback Plan
Maintain Redis instance during initial migration period:
- Keep Redis running for 24-48 hours
- Monitor DragonFly performance and stability
- If issues arise, switch connection back to Redis
- After validation period, decommission Redis
Production Checklist
Before going to production with DragonFly on Klutch.sh, ensure:
Configuration:
- ✅ Strong password set via DRAGONFLY_REQUIREPASS
- ✅ Appropriate maxmemory configured based on dataset size
- ✅ Cache mode enabled if using as cache
- ✅ Snapshot cron configured for automatic backups
- ✅ Data directory set to /data for persistence
- ✅ Number of databases (dbnum) configured appropriately
Security:
- ✅ Password authentication enabled and tested
- ✅ Network access restricted to authorized applications
- ✅ Admin console disabled or secured
- ✅ TLS enabled for sensitive data (if required)
- ✅ Dangerous commands restricted at application layer
- ✅ Regular password rotation schedule established
Persistence:
- ✅ Persistent volume attached to /data
- ✅ Volume size appropriate for dataset (2X data size minimum)
- ✅ Automatic snapshots configured and tested
- ✅ Manual backup process documented
- ✅ Recovery procedure tested successfully
- ✅ Backup retention policy defined
Performance:
- ✅ Connection pooling implemented in applications
- ✅ Pipelining used for batch operations
- ✅ Appropriate instance size selected (8+ CPU cores recommended)
- ✅ Benchmark tests completed with expected results
- ✅ Key design patterns optimized for memory efficiency
- ✅ TTLs set appropriately for temporary data
Monitoring:
- ✅ Health checks implemented in application
- ✅ Metrics endpoint configured (if using monitoring stack)
- ✅ Alerts configured for critical metrics (memory, latency, cache hit rate)
- ✅ Logging strategy defined
- ✅ Dashboard created for key metrics
- ✅ On-call procedures documented
High Availability:
- ✅ Backup and restore procedures tested
- ✅ Disaster recovery plan documented
- ✅ Failover strategy defined
- ✅ Data replication considered (if needed)
- ✅ Incident response playbook created
Documentation:
- ✅ Connection details documented for team
- ✅ Environment variables documented
- ✅ Architecture diagram created
- ✅ Troubleshooting guide available
- ✅ Migration plan documented (if applicable)
- ✅ Runbook for common operations
Additional Resources
- Official DragonFly Documentation
- DragonFly GitHub Repository
- DragonFly Benchmarking Guide
- DragonFly Community Discord
- Dashtable Design Documentation
- DragonFly Command Reference
- Klutch.sh Persistent Volumes
- Klutch.sh Deployments
- Klutch.sh TCP Traffic Configuration
Conclusion
Deploying DragonFly on Klutch.sh provides a powerful, high-performance alternative to Redis and Memcached with significant improvements in throughput, memory efficiency, and vertical scaling capabilities. With 25X more throughput than Redis and 80% less resource consumption, DragonFly is ideal for modern applications requiring extreme performance at scale.
By following this guide, you’ve learned how to deploy DragonFly with proper configuration, implement data persistence, optimize performance, secure your instance, and monitor operations effectively. DragonFly’s drop-in compatibility with Redis APIs means you can migrate existing applications with minimal code changes while gaining substantial performance improvements.
Whether you’re building real-time applications, high-traffic caching layers, session stores, or message queues, DragonFly on Klutch.sh provides the infrastructure foundation for reliable, scalable data storage. With automatic Dockerfile detection, TCP traffic support, persistent volumes, and seamless GitHub integration, Klutch.sh makes deploying and managing DragonFly straightforward and efficient.
Start with a small instance to test your workload, monitor performance metrics, and scale vertically as your needs grow. DragonFly’s multi-threaded architecture ensures that performance scales linearly with CPU cores, making it easy to handle increasing traffic without architectural changes. Happy building!