Deploying DragonFly

Introduction

DragonFly is a modern, high-performance in-memory data store that serves as a drop-in replacement for Redis and Memcached. Built from the ground up for cloud-native workloads, DragonFly delivers up to 25X more throughput than Redis while consuming 80% fewer resources. It achieves this through a novel shared-nothing architecture, efficient data structures based on the Dash paper, and a transactional framework that eliminates the need for locks.

With full compatibility with Redis and Memcached APIs, DragonFly requires no code changes to adopt. It currently supports approximately 185 Redis commands (equivalent to Redis 5.0 API) and all Memcached commands except cas. Whether you’re running caching layers, session stores, message queues, or real-time analytics, DragonFly provides sub-millisecond latency at massive scale.

This guide will walk you through deploying DragonFly on Klutch.sh, configuring it for production workloads, setting up persistence, and optimizing performance to handle millions of operations per second.

Why Deploy DragonFly on Klutch.sh

Deploying DragonFly on Klutch.sh offers several advantages for modern application architectures:

Simplified Deployment: Klutch.sh automatically detects your Dockerfile and builds DragonFly with zero configuration. No complex orchestration or manual server setup required.

High Performance Networking: With TCP traffic support and dedicated port routing, your applications can connect to DragonFly at port 8000 with minimal latency. Internal routing is optimized for high-throughput workloads.

Persistent Storage: Attach persistent volumes to DragonFly for RDB snapshots and append-only file (AOF) backups, ensuring data durability across container restarts and deployments.

Vertical Scaling: Unlike traditional Redis, which is limited by its single-threaded architecture, DragonFly scales vertically with CPU cores. Deploy on larger instances and watch throughput increase linearly.

Environment Management: Securely configure DragonFly settings like authentication passwords, memory limits, and cache modes through environment variables without exposing sensitive data.

GitHub Integration: Connect your DragonFly configuration repository directly from GitHub. Updates to your Dockerfile or configuration files trigger automatic redeployments.

Resource Efficiency: DragonFly’s memory-efficient design means you can handle larger datasets with smaller instances, reducing infrastructure costs significantly compared to Redis.

Multi-Protocol Support: Run both Redis and HTTP protocols on the same port. Access metrics via HTTP while your applications use the Redis protocol, all without additional configuration.

Monitoring and Metrics: DragonFly exposes Prometheus-compatible metrics out of the box, making it easy to integrate with your existing observability stack on Klutch.sh.

Production Ready: Built for cloud environments with kernel 4.19+ support, DragonFly runs reliably on modern Linux distributions with optimizations for both x86_64 and ARM64 architectures.

Prerequisites

Before deploying DragonFly on Klutch.sh, ensure you have:

  • A Klutch.sh account
  • A GitHub account with a repository for your DragonFly configuration
  • Basic understanding of Redis/Memcached concepts and commands
  • Knowledge of Docker and containerization
  • Familiarity with in-memory data stores and caching strategies
  • (Optional) A Redis client like redis-cli for testing connections
  • (Optional) Understanding of RDB snapshots and AOF persistence

Understanding DragonFly Architecture

DragonFly’s architecture is fundamentally different from traditional Redis:

Shared-Nothing Architecture: DragonFly partitions the keyspace across CPU threads, with each thread managing its own “shard” of dictionary data. This eliminates contention and allows full utilization of multi-core processors.
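The partitioning idea can be sketched in a few lines. This is illustrative only — DragonFly's actual hash function and shard-assignment logic are internal details — but it shows how hashing maps every key deterministically onto exactly one thread-owned shard, which is why no locking is needed:

```python
import hashlib

def shard_for_key(key: str, num_shards: int) -> int:
    """Map a key to one of num_shards partitions by hashing (illustrative only)."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_shards

# Each key lands on exactly one shard, so a single thread owns it end to end.
owners = {k: shard_for_key(k, 8) for k in ("user:1", "user:2", "session:abc")}
```

Because the mapping is deterministic, every operation on a given key is routed to the same thread, and single-key operations never contend with each other.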

VLL Transaction Framework: Based on the research paper “VLL: a lock manager redesign for main memory database systems”, DragonFly achieves atomicity for multi-key operations without mutexes or spinlocks.

Dash Hashtable: The core data structure is based on the Dash paper, providing incremental hashing during growth, stateless scanning, and superior memory efficiency compared to traditional Redis dictionaries.

Memory Efficiency: DragonFly uses 30% less memory than Redis in idle state and doesn’t show visible memory increase during snapshot operations, while Redis can spike to 3X memory usage during BGSAVE.

Protocol Support: DragonFly automatically detects whether a connection is using Redis protocol or HTTP protocol, allowing both on the same port (default 6379).

Novel Caching: When cache mode is enabled, DragonFly uses a unified, adaptive eviction algorithm that’s more efficient than LRU or LFU strategies, with zero memory overhead.

Preparing Your Repository

To deploy DragonFly on Klutch.sh, you’ll need to create a GitHub repository with a Dockerfile and optional configuration files.

Creating the Dockerfile

Create a Dockerfile in the root of your repository. This example uses the official DragonFly image with custom configuration:

FROM docker.dragonflydb.io/dragonflydb/dragonfly:latest
# Set working directory
WORKDIR /data
# Copy custom configuration if you have one
# COPY dragonfly.conf /etc/dragonfly/dragonfly.conf
# Expose the main port (6379) and optional admin port
EXPOSE 6379
# DragonFly will use environment variables for configuration
# The ENTRYPOINT is already set in the base image
# CMD is set to run dragonfly with default options

Alternative Dockerfile with Custom Configuration

If you need more control over DragonFly’s startup parameters, create a custom entrypoint:

FROM docker.dragonflydb.io/dragonflydb/dragonfly:latest
# Install additional tools if needed
USER root
RUN apt-get update && apt-get install -y \
redis-tools \
curl \
&& rm -rf /var/lib/apt/lists/*
# Set working directory for data persistence
WORKDIR /data
# Copy startup script
COPY start-dragonfly.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/start-dragonfly.sh
# Expose main port
EXPOSE 6379
# Switch back to dragonfly user if exists, or run as root
# USER dragonfly
# Run custom startup script
CMD ["/usr/local/bin/start-dragonfly.sh"]

Creating a Startup Script

Create start-dragonfly.sh for advanced configuration:

#!/bin/bash
set -e
# Set default values
DRAGONFLY_PORT=${DRAGONFLY_PORT:-6379}
DRAGONFLY_BIND=${DRAGONFLY_BIND:-0.0.0.0}
DRAGONFLY_REQUIREPASS=${DRAGONFLY_REQUIREPASS:-}
DRAGONFLY_MAXMEMORY=${DRAGONFLY_MAXMEMORY:-0}
DRAGONFLY_DBFILENAME=${DRAGONFLY_DBFILENAME:-dump.rdb}
DRAGONFLY_DIR=${DRAGONFLY_DIR:-/data}
DRAGONFLY_CACHE_MODE=${DRAGONFLY_CACHE_MODE:-false}
DRAGONFLY_DBNUM=${DRAGONFLY_DBNUM:-16}
DRAGONFLY_SNAPSHOT_CRON=${DRAGONFLY_SNAPSHOT_CRON:-}
# Build command arguments (an array keeps values with spaces, like cron expressions, intact)
ARGS=(--logtostderr)
ARGS+=(--port="$DRAGONFLY_PORT")
ARGS+=(--bind="$DRAGONFLY_BIND")
ARGS+=(--dir="$DRAGONFLY_DIR")
ARGS+=(--dbfilename="$DRAGONFLY_DBFILENAME")
ARGS+=(--maxmemory="$DRAGONFLY_MAXMEMORY")
ARGS+=(--dbnum="$DRAGONFLY_DBNUM")
# Add password if set
if [ -n "$DRAGONFLY_REQUIREPASS" ]; then
  ARGS+=(--requirepass="$DRAGONFLY_REQUIREPASS")
fi
# Enable cache mode if set
if [ "$DRAGONFLY_CACHE_MODE" = "true" ]; then
  ARGS+=(--cache_mode=true)
fi
# Add snapshot cron if set
if [ -n "$DRAGONFLY_SNAPSHOT_CRON" ]; then
  ARGS+=(--snapshot_cron="$DRAGONFLY_SNAPSHOT_CRON")
fi
# Additional flags from environment (space-separated, so word splitting here is intentional)
if [ -n "$DRAGONFLY_EXTRA_FLAGS" ]; then
  # shellcheck disable=SC2206
  ARGS+=($DRAGONFLY_EXTRA_FLAGS)
fi
echo "Starting DragonFly with arguments: ${ARGS[*]}"
# Execute dragonfly
exec dragonfly "${ARGS[@]}"

Configuration File (Optional)

If you prefer using a configuration file instead of environment variables, create dragonfly.conf:

# DragonFly Configuration File
# Network settings
port 6379
bind 0.0.0.0
# Authentication
# requirepass yoursecurepasswordhere
# Memory management
maxmemory 4gb
cache_mode false
# Persistence
dir /data
dbfilename dump.rdb
# snapshot_cron 0 */1 * * *
# Database settings
dbnum 16
# Performance tuning
hz 100
keys_output_limit 8192
# Logging
logtostderr true
# HTTP console
primary_port_http_enabled true
# Admin console (optional)
# admin_port 6380
# admin_bind localhost

.dockerignore File

Create a .dockerignore to exclude unnecessary files:

.git
.github
*.md
README.md
LICENSE
.gitignore
*.log
tmp/
temp/
.DS_Store
node_modules/

Environment Variables Reference

Create a .env.example file to document available configuration options:

# DragonFly Configuration
DRAGONFLY_PORT=6379
DRAGONFLY_BIND=0.0.0.0
DRAGONFLY_REQUIREPASS=changeme
DRAGONFLY_MAXMEMORY=4gb
DRAGONFLY_DBFILENAME=dump.rdb
DRAGONFLY_DIR=/data
DRAGONFLY_CACHE_MODE=false
DRAGONFLY_DBNUM=16
DRAGONFLY_SNAPSHOT_CRON=0 */1 * * *
DRAGONFLY_KEYS_OUTPUT_LIMIT=8192
DRAGONFLY_HZ=100
# Additional flags (space-separated)
# DRAGONFLY_EXTRA_FLAGS=--admin_port=6380 --cluster_mode=emulated

Deploying DragonFly on Klutch.sh

Once your repository is prepared, follow these steps to deploy DragonFly:

    Create a New Project

    Navigate to the Klutch.sh dashboard and create a new project. Give it a descriptive name like “dragonfly-cache” or “dragonfly-production”.

    Connect Your GitHub Repository

    Link your GitHub account if you haven’t already, then select the repository containing your DragonFly Dockerfile. Klutch.sh will automatically detect the Dockerfile and prepare for deployment.

    Configure TCP Traffic

    Since DragonFly is a database/cache service, select TCP traffic in the deployment settings. This ensures your applications can connect to DragonFly on port 8000 externally. Set the internal port to 6379 (DragonFly’s default port).

    Set Environment Variables

    Configure the following environment variables in the Klutch.sh dashboard:

    • DRAGONFLY_REQUIREPASS: Set a strong password for authentication
    • DRAGONFLY_MAXMEMORY: Set memory limit (e.g., “4gb”, “8gb”, “16gb”)
    • DRAGONFLY_CACHE_MODE: Set to “true” if using DragonFly as a cache with eviction
    • DRAGONFLY_SNAPSHOT_CRON: Configure automatic snapshots (e.g., “0 */1 * * *” for hourly)
    • DRAGONFLY_DBNUM: Number of databases (default: 16)
    • DRAGONFLY_DIR: Set to “/data” for persistent storage

    Attach Persistent Volume

    To ensure data persistence across container restarts:

    • Add a persistent volume with mount path: /data
    • Set the volume size based on your dataset (e.g., 10GB, 50GB, 100GB)
    • This volume will store RDB snapshots and AOF files

    Deploy

    Click the deploy button. Klutch.sh will build your Docker image and start the DragonFly container with TCP traffic on port 8000.

Connecting to DragonFly

After deployment, your DragonFly instance will be accessible via TCP on port 8000. The internal container port 6379 is automatically routed.

Connection String Format

redis://:<password>@example-app.klutch.sh:8000

Replace:

  • <password> with your DRAGONFLY_REQUIREPASS value
  • example-app.klutch.sh with your actual Klutch.sh app URL
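As a quick sanity check, the connection string can be decomposed into the host, port, and password your client constructor expects. The helper below is a hypothetical stdlib-only sketch (most clients can also consume the URL directly, e.g. redis-py's `Redis.from_url`):

```python
from urllib.parse import urlparse

def parse_redis_url(url: str) -> dict:
    """Split a redis:// URL into the parameters a client constructor expects."""
    parts = urlparse(url)
    return {
        "host": parts.hostname,
        "port": parts.port or 6379,  # Redis default when the URL omits a port
        "password": parts.password or None,
    }

cfg = parse_redis_url("redis://:yourpassword@example-app.klutch.sh:8000")
```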

Using redis-cli

Connect from your local machine:

Terminal window
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword

Test basic operations:

Terminal window
# Set a key
SET mykey "Hello DragonFly"
# Get a key
GET mykey
# Set with expiration
SETEX session:user123 3600 "user-data"
# Check if key exists
EXISTS mykey
# Get all keys (use with caution in production)
KEYS *
# Get server info
INFO
# Check memory usage
INFO memory
# Get statistics
INFO stats

Connection Examples by Language

Node.js with ioredis:

const Redis = require('ioredis');

const redis = new Redis({
  host: 'example-app.klutch.sh',
  port: 8000,
  password: 'yourpassword',
  retryStrategy: (times) => {
    const delay = Math.min(times * 50, 2000);
    return delay;
  },
  maxRetriesPerRequest: 3,
});

// Test connection
redis.set('test', 'value', (err, result) => {
  if (err) {
    console.error('Error:', err);
  } else {
    console.log('Set result:', result);
  }
});

redis.get('test', (err, result) => {
  if (err) {
    console.error('Error:', err);
  } else {
    console.log('Get result:', result);
  }
});

Python with redis-py:

import redis

# Create connection
r = redis.Redis(
    host='example-app.klutch.sh',
    port=8000,
    password='yourpassword',
    decode_responses=True,
    socket_keepalive=True,
    socket_connect_timeout=5,
    retry_on_timeout=True
)

# Test connection
r.ping()

# Set and get values
r.set('user:1000', 'John Doe')
name = r.get('user:1000')
print(f"User name: {name}")

# Hash operations
r.hset('user:1001', mapping={
    'name': 'Jane Smith',
    'email': 'jane@example.com',
    'age': 28
})
user = r.hgetall('user:1001')
print(f"User data: {user}")

Go with go-redis:

package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

var ctx = context.Background()

func main() {
	rdb := redis.NewClient(&redis.Options{
		Addr:     "example-app.klutch.sh:8000",
		Password: "yourpassword",
		DB:       0,
	})

	// Test connection
	pong, err := rdb.Ping(ctx).Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("Connected:", pong)

	// Set value
	err = rdb.Set(ctx, "key", "value", 0).Err()
	if err != nil {
		panic(err)
	}

	// Get value
	val, err := rdb.Get(ctx, "key").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("key:", val)
}

PHP with Predis:

<?php
require 'vendor/autoload.php';

$client = new Predis\Client([
    'scheme' => 'tcp',
    'host' => 'example-app.klutch.sh',
    'port' => 8000,
    'password' => 'yourpassword',
]);

// Test connection
$client->ping();

// Set and get
$client->set('foo', 'bar');
$value = $client->get('foo');
echo "Value: $value\n";

// List operations
$client->lpush('mylist', 'item1', 'item2', 'item3');
$items = $client->lrange('mylist', 0, -1);
print_r($items);

Ruby with redis-rb:

require 'redis'

redis = Redis.new(
  host: 'example-app.klutch.sh',
  port: 8000,
  password: 'yourpassword',
  timeout: 5,
  reconnect_attempts: 3
)

# Test connection
redis.ping

# Set and get
redis.set('mykey', 'myvalue')
value = redis.get('mykey')
puts "Value: #{value}"

# Set with expiration
redis.setex('session:123', 3600, 'session-data')

# Hash operations
redis.hset('user:1', 'name', 'Alice')
redis.hset('user:1', 'email', 'alice@example.com')
user = redis.hgetall('user:1')
puts "User: #{user}"

Data Persistence Configuration

DragonFly supports multiple persistence strategies to ensure data durability.

RDB Snapshots

RDB (Redis Database) snapshots create point-in-time backups of your dataset:

Manual Snapshots:

Terminal window
# Create snapshot immediately
BGSAVE
# Check snapshot status
LASTSAVE

Automatic Snapshots with Cron:

Set the DRAGONFLY_SNAPSHOT_CRON environment variable in Klutch.sh:

Terminal window
# Every hour
DRAGONFLY_SNAPSHOT_CRON="0 */1 * * *"
# Every 15 minutes
DRAGONFLY_SNAPSHOT_CRON="*/15 * * * *"
# Daily at midnight
DRAGONFLY_SNAPSHOT_CRON="0 0 * * *"
# Every 6 hours
DRAGONFLY_SNAPSHOT_CRON="0 */6 * * *"

Snapshot Configuration:

Terminal window
# Set snapshot filename
DRAGONFLY_DBFILENAME=production-dump.rdb
# Set data directory (must match volume mount)
DRAGONFLY_DIR=/data

Checking Snapshot Files

You can verify snapshots are being created by accessing the container:

Terminal window
# List files in data directory
ls -lh /data/
# Check snapshot size
du -sh /data/*.rdb
# View snapshot timestamp
stat /data/dump.rdb

Snapshot Best Practices

Snapshot Frequency: Balance between data loss tolerance and I/O overhead. For most applications, hourly snapshots are sufficient. High-write workloads might need more frequent snapshots.

Volume Size: Ensure your persistent volume is at least 2X your expected dataset size to accommodate snapshots and temporary files during the save process.

Monitoring: Track LASTSAVE timestamp to ensure snapshots are completing successfully. Set up alerts if snapshots haven’t run within expected intervals.

Testing: Periodically test snapshot restoration to verify backup integrity. Create a test instance and load the RDB file to confirm data recovery works.
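The LASTSAVE monitoring advice above is easy to automate. A minimal sketch, assuming you fetch the last-save Unix timestamp from your instance first (redis-py's `lastsave()` returns a datetime you can convert with `.timestamp()`):

```python
import time

def snapshot_is_fresh(last_save_ts, max_age_seconds=7200, now=None):
    """True if the last snapshot is newer than max_age_seconds (default: 2 hours)."""
    now = time.time() if now is None else now
    return (now - last_save_ts) <= max_age_seconds
```

Wire the boolean into your alerting so a stalled snapshot cron is caught within one interval rather than at restore time.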

Memory Management and Cache Mode

DragonFly offers sophisticated memory management options for different use cases.

Setting Memory Limits

Configure maximum memory usage via environment variable:

Terminal window
# Set 4GB limit
DRAGONFLY_MAXMEMORY=4gb
# Set 8GB limit
DRAGONFLY_MAXMEMORY=8gb
# Set 16GB limit
DRAGONFLY_MAXMEMORY=16gb
# No limit (use with caution)
DRAGONFLY_MAXMEMORY=0

Cache Mode

DragonFly’s cache mode uses a novel eviction algorithm that’s more efficient than LRU or LFU:

Enable cache mode:

Terminal window
DRAGONFLY_CACHE_MODE=true

In cache mode:

  • DragonFly automatically evicts keys when approaching maxmemory
  • Uses a unified, adaptive algorithm with zero memory overhead
  • Evicts items least likely to be accessed in the future
  • More efficient than Redis’s LRU/LFU implementations

When to Use Cache Mode:

  ✅ Session stores with short TTLs
  ✅ Application caching layers
  ✅ Rate limiting counters
  ✅ Temporary data storage
  ✅ Content delivery network (CDN) caching

When NOT to Use Cache Mode:

  ❌ Persistent data storage
  ❌ Message queues requiring durability
  ❌ Pub/Sub systems
  ❌ When you need explicit control over eviction
  ❌ Time-series data that must be retained

Memory Efficiency

DragonFly’s memory efficiency advantages:

30% Less Memory: Compared to Redis in idle state, DragonFly uses significantly less memory for the same dataset size.

No Snapshot Overhead: During BGSAVE operations, DragonFly doesn’t show visible memory increase, while Redis can spike to 3X memory usage due to copy-on-write.

Efficient Data Structures: The Dash hashtable design provides better memory utilization than traditional Redis dictionaries.

Zero-Overhead Eviction: When cache mode is enabled, the eviction algorithm doesn’t require additional memory for tracking.

Monitoring Memory Usage

Check memory statistics:

Terminal window
# Get memory info
INFO memory
# Key metrics to monitor:
# - used_memory: Total memory used by DragonFly
# - used_memory_rss: Resident set size (actual RAM)
# - used_memory_peak: Peak memory usage
# - mem_fragmentation_ratio: Memory fragmentation ratio
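INFO replies are plain `key:value` lines, so they are easy to parse for dashboards or cron scripts. A minimal sketch (the sample reply below is abbreviated for illustration, not real server output):

```python
def parse_info(info_text):
    """Parse an INFO reply ('key:value' lines, '#' section headers) into a dict."""
    stats = {}
    for line in info_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, value = line.split(":", 1)
        stats[key] = value
    return stats

sample = "# Memory\nused_memory:1048576\nused_memory_rss:2097152\n"
mem = parse_info(sample)
# Fragmentation ratio = RSS divided by logical memory used
fragmentation = int(mem["used_memory_rss"]) / int(mem["used_memory"])
```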

Performance Optimization

DragonFly is designed for high performance, but proper configuration ensures optimal throughput and latency.

CPU and Threading

DragonFly automatically utilizes all available CPU cores through its shared-nothing architecture:

Vertical Scaling: Unlike Redis, DragonFly’s throughput scales linearly with CPU cores. Deploy on instances with more cores for higher throughput.

Thread Sharding: Each thread manages its own shard of the keyspace, eliminating contention and lock overhead.

Recommendation: For production workloads, use instances with 8+ CPU cores to maximize DragonFly’s multi-threaded performance.

Connection Pooling

Implement connection pooling in your application to reduce connection overhead:

Node.js Example:

const Redis = require('ioredis');

// Create a client with pooling-friendly settings
const redis = new Redis({
  host: 'example-app.klutch.sh',
  port: 8000,
  password: 'yourpassword',
  // Connection settings
  lazyConnect: true,
  enableOfflineQueue: true,
  maxRetriesPerRequest: 3,
  // Keep-alive interval (milliseconds)
  keepAlive: 30000,
});

Python Example:

from redis import ConnectionPool, Redis

pool = ConnectionPool(
    host='example-app.klutch.sh',
    port=8000,
    password='yourpassword',
    max_connections=50,
    socket_keepalive=True,
    socket_connect_timeout=5,
    retry_on_timeout=True
)

redis_client = Redis(connection_pool=pool)

Pipelining

Use pipelining to batch multiple commands and reduce round-trip latency:

Python example with redis-py:

pipe = redis_client.pipeline()
pipe.set('key1', 'value1')
pipe.set('key2', 'value2')
pipe.set('key3', 'value3')
pipe.get('key1')
pipe.get('key2')
results = pipe.execute()

Key Design Patterns

Use Short Key Names: Shorter keys consume less memory. Use abbreviations where possible.

Terminal window
# Instead of
user:session:authenticated:userid:12345
# Use
u:s:a:12345

Leverage Data Structures: Use appropriate Redis data structures for your use case:

  • Strings: Simple key-value pairs, counters
  • Hashes: Objects with multiple fields (user profiles, settings)
  • Lists: Queues, activity feeds, recent items
  • Sets: Unique collections, tags, relationships
  • Sorted Sets: Leaderboards, time-series data, priority queues

Set Appropriate TTLs: Always set expiration for temporary data:

Terminal window
# Session with 1 hour TTL
SETEX session:user123 3600 "session-data"
# Cache with 5 minute TTL
SETEX cache:article:456 300 "{\"title\":\"Article\",\"content\":\"...\"}"

Performance Benchmarking

Test your DragonFly instance performance using the built-in Redis benchmark tool:

Terminal window
# Basic benchmark
redis-benchmark -h example-app.klutch.sh -p 8000 -a yourpassword -t set,get -n 100000 -q
# Pipeline benchmark
redis-benchmark -h example-app.klutch.sh -p 8000 -a yourpassword -t set,get -n 100000 -P 16 -q
# Comprehensive benchmark
redis-benchmark -h example-app.klutch.sh -p 8000 -a yourpassword -n 100000 -d 256 -c 50 -t set,get,lpush,lpop,sadd,spop,zadd,zpopmin,hset,hget

Expected results (depending on instance size):

  • Small instances (2-4 cores): 50K-150K ops/sec
  • Medium instances (8-16 cores): 300K-1M ops/sec
  • Large instances (32+ cores): 2M-3M+ ops/sec

DragonFly’s performance scales with CPU cores, so larger instances deliver proportionally higher throughput.

Monitoring and Metrics

DragonFly provides comprehensive metrics for monitoring performance and health.

HTTP Metrics Endpoint

DragonFly exposes Prometheus-compatible metrics via HTTP:

Terminal window
# Access metrics
curl http://example-app.klutch.sh:8000/metrics
# If using password authentication, metrics may require authorization
# or access via internal monitoring systems

Key Metrics to Monitor

Operations:

  • dragonfly_commands_total: Total commands processed
  • dragonfly_commands_duration_seconds: Command latency
  • dragonfly_connections_current: Active connections
  • dragonfly_keyspace_hits_total: Cache hit count
  • dragonfly_keyspace_misses_total: Cache miss count

Memory:

  • dragonfly_memory_used_bytes: Current memory usage
  • dragonfly_memory_max_bytes: Maximum memory limit
  • dragonfly_memory_fragmentation_ratio: Memory fragmentation

Persistence:

  • dragonfly_last_save_time: Timestamp of last snapshot
  • dragonfly_save_duration_seconds: Snapshot duration

Redis INFO Command

Get detailed statistics using the INFO command:

Terminal window
# All info
INFO
# Specific sections
INFO server
INFO clients
INFO memory
INFO stats
INFO replication
INFO cpu
INFO keyspace

Cache Hit Rate

Calculate cache effectiveness:

Terminal window
# Get stats
INFO stats
# Calculate hit rate
hit_rate = keyspace_hits / (keyspace_hits + keyspace_misses) * 100

A healthy cache hit rate is typically 80% or higher. Lower rates indicate:

  • Insufficient memory (increase maxmemory)
  • Poor TTL configuration
  • Cache warming needed
  • Suboptimal key design
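The hit-rate formula above is straightforward to wire into a monitoring script once you have keyspace_hits and keyspace_misses from INFO stats:

```python
def cache_hit_rate(hits, misses):
    """Hit rate as a percentage; returns 0.0 when there is no traffic yet."""
    total = hits + misses
    if total == 0:
        return 0.0
    return hits / total * 100

# 8000 hits and 2000 misses is an 80% hit rate, right at the healthy threshold
rate = cache_hit_rate(8000, 2000)
```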

Health Checks

Implement health checks in your application:

Terminal window
# Simple ping
PING
# Check latency
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword --latency
# Monitor commands in real-time
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword MONITOR
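In application code, a health check usually combines the ping with a latency budget. A sketch that works with any redis-py-style client object exposing `ping()` (the 5 ms default mirrors the P99 alerting suggestion in the next section):

```python
import time

def check_health(client, max_latency_ms=5.0):
    """Return True if the server answers PING within the latency budget."""
    start = time.perf_counter()
    try:
        ok = client.ping()
    except Exception:
        return False  # connection refused, timeout, auth failure, etc.
    latency_ms = (time.perf_counter() - start) * 1000
    return bool(ok) and latency_ms <= max_latency_ms
```

Accepting any object with a `ping()` method also makes the check trivial to unit-test with a stub client.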

Alerting Thresholds

Set up monitoring alerts for:

  • Memory Usage: Alert when > 80% of maxmemory
  • Connection Count: Alert on sudden spikes or drops
  • Command Latency: Alert when P99 latency > 5ms
  • Snapshot Age: Alert if last snapshot > 2 hours old
  • Cache Hit Rate: Alert when < 70%
  • CPU Usage: Alert when > 80% sustained

Security Best Practices

Securing your DragonFly instance is critical for production deployments.

Password Authentication

Always set a strong password:

Terminal window
# Generate a strong password
openssl rand -base64 32
# Set in Klutch.sh environment variables
DRAGONFLY_REQUIREPASS=<your-generated-password>

Network Security

TCP Traffic: DragonFly on Klutch.sh uses TCP traffic on port 8000. Ensure:

  • Only authorized applications can access this port
  • Use firewall rules to restrict access if needed
  • Consider VPN or private network access for sensitive data

Disable HTTP Console: If you don’t need the HTTP metrics endpoint exposed:

Terminal window
# Add to DRAGONFLY_EXTRA_FLAGS (this is the same primary_port_http_enabled setting from the config file)
DRAGONFLY_EXTRA_FLAGS="--primary_port_http_enabled=false"

Encryption in Transit

For sensitive data, consider:

TLS Support: DragonFly supports TLS encryption. Configure TLS certificates:

Terminal window
# Add TLS flags to DRAGONFLY_EXTRA_FLAGS
DRAGONFLY_EXTRA_FLAGS="--tls --tls_cert_file=/path/to/cert.pem --tls_key_file=/path/to/key.pem"

Application-Level Encryption: Encrypt sensitive data before storing:

from cryptography.fernet import Fernet

# redis_client: an already-connected client (see the connection pooling example above)
# Generate key (store it securely, e.g. in an environment variable or secrets manager)
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt before storing
encrypted_data = cipher.encrypt(b"sensitive data")
redis_client.set('encrypted:key', encrypted_data)

# Decrypt after retrieving
stored_data = redis_client.get('encrypted:key')
decrypted_data = cipher.decrypt(stored_data)

Access Control

Multiple Databases: Use database selection for logical separation:

Terminal window
# Set number of databases
DRAGONFLY_DBNUM=16
# In application, select database
SELECT 1 # Use database 1
SELECT 2 # Use database 2

Command Restrictions: If your application doesn’t need dangerous commands, implement restrictions at the application layer:

Dangerous commands to restrict:

  • FLUSHDB / FLUSHALL: Delete all keys
  • CONFIG: Modify configuration
  • SHUTDOWN: Stop server
  • BGREWRITEAOF: Rewrite AOF file
  • DEBUG: Debug commands
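One way to enforce this at the application layer is a thin wrapper that refuses blocked commands before they ever reach the server. A hypothetical sketch around a redis-py-style client (redis-py routes raw commands through `execute_command`):

```python
BLOCKED_COMMANDS = {"FLUSHDB", "FLUSHALL", "CONFIG", "SHUTDOWN", "BGREWRITEAOF", "DEBUG"}

class GuardedClient:
    """Refuses dangerous commands instead of forwarding them to the server."""

    def __init__(self, client, blocked=frozenset(BLOCKED_COMMANDS)):
        self._client = client
        self._blocked = {c.upper() for c in blocked}

    def execute_command(self, command, *args):
        if command.upper() in self._blocked:
            raise PermissionError(f"{command} is blocked at the application layer")
        return self._client.execute_command(command, *args)
```

This is defense in depth, not a substitute for authentication: anyone with the password can still bypass the wrapper by connecting directly.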

Regular Security Audits

  • Review access logs regularly
  • Rotate passwords periodically (every 90 days)
  • Monitor for unusual command patterns
  • Keep DragonFly updated to latest version
  • Review connected clients: CLIENT LIST

Backup and Recovery

Implement a comprehensive backup strategy for data durability.

Automated Backups

Configure automatic snapshots:

Terminal window
# Hourly backups
DRAGONFLY_SNAPSHOT_CRON="0 */1 * * *"
# Daily backups at 2 AM
DRAGONFLY_SNAPSHOT_CRON="0 2 * * *"

Manual Backups

Create on-demand backups:

Terminal window
# Trigger background save
BGSAVE
# Check when the last save completed (returns a Unix timestamp)
LASTSAVE

Backup Files

DragonFly stores snapshots in the configured data directory:

Terminal window
# Default snapshot file
/data/dump.rdb
# Custom filename
/data/production-dump.rdb

Exporting Backups

To export backup files from your persistent volume:

  1. Access the container via Klutch.sh console
  2. Copy the RDB file to a temporary location
  3. Use scp or similar tool to download
  4. Store backups in external storage (S3, GCS, etc.)

Backup Script Example

Create a backup automation script:

backup-dragonfly.sh
#!/bin/bash
BACKUP_DIR="/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="dragonfly-backup-${TIMESTAMP}.rdb"
# Create backup directory if it doesn't exist
mkdir -p $BACKUP_DIR
# Trigger BGSAVE (this script must run where the /data volume is mounted)
redis-cli -h example-app.klutch.sh -p 8000 -a "$DRAGONFLY_REQUIREPASS" BGSAVE
# Wait for the save to complete (for robustness, poll LASTSAVE instead of sleeping)
sleep 10
# Copy RDB file
cp /data/dump.rdb "${BACKUP_DIR}/${BACKUP_FILE}"
# Compress backup
gzip "${BACKUP_DIR}/${BACKUP_FILE}"
# Upload to S3 (example)
# aws s3 cp "${BACKUP_DIR}/${BACKUP_FILE}.gz" s3://my-backups/dragonfly/
# Clean up old backups (keep last 7 days)
find $BACKUP_DIR -name "dragonfly-backup-*.rdb.gz" -mtime +7 -delete
echo "Backup completed: ${BACKUP_FILE}.gz"

Recovery Process

To restore from backup:

  1. Stop the DragonFly instance
  2. Replace /data/dump.rdb with your backup file
  3. Restart DragonFly
  4. Verify data integrity
Terminal window
# Example recovery commands
# 1. Stop DragonFly (via Klutch.sh dashboard)
# 2. Replace RDB file
cp /backups/dragonfly-backup-20241216_120000.rdb /data/dump.rdb
# 3. Restart DragonFly (via Klutch.sh dashboard)
# 4. Verify
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword DBSIZE

Backup Best Practices

  • Multiple Copies: Store backups in multiple locations (local + cloud)
  • Regular Testing: Periodically test restoration process
  • Retention Policy: Keep daily backups for 7 days, weekly for 4 weeks, monthly for 12 months
  • Monitoring: Alert if backups fail or aren’t created on schedule
  • Encryption: Encrypt backup files before storing externally
  • Documentation: Document recovery procedures for your team

Troubleshooting Common Issues

Connection Refused

Symptoms: Unable to connect to DragonFly from application.

Causes:

  • Incorrect host or port
  • Firewall blocking TCP traffic
  • DragonFly not running
  • Wrong password

Solutions:

Terminal window
# 1. Verify DragonFly is running
# Check container status in Klutch.sh dashboard
# 2. Test connection
redis-cli -h example-app.klutch.sh -p 8000 PING
# 3. Check with authentication
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword PING
# 4. Verify TCP traffic is enabled in Klutch.sh
# Navigate to project settings and confirm TCP is selected

High Memory Usage

Symptoms: Memory usage approaching or exceeding maxmemory limit.

Causes:

  • Dataset larger than available memory
  • Memory leak in application
  • No eviction policy (cache mode disabled)
  • Large key values

Solutions:

Terminal window
# 1. Check memory usage
INFO memory
# 2. Enable cache mode if using as cache
DRAGONFLY_CACHE_MODE=true
# 3. Increase maxmemory
DRAGONFLY_MAXMEMORY=16gb
# 4. Analyze memory usage by key type
MEMORY USAGE key_name
# 5. Find large keys
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword --bigkeys
# 6. Delete unnecessary keys
DEL unused_key1 unused_key2

Slow Response Times

Symptoms: High latency or slow command execution.

Causes:

  • Blocking commands (KEYS, FLUSHDB)
  • Insufficient CPU resources
  • Network latency
  • Large dataset operations
  • No connection pooling

Solutions:

Terminal window
# 1. Check latency
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword --latency
# 2. Monitor slow commands
SLOWLOG GET 10
# 3. Check connected clients
CLIENT LIST
# 4. Avoid KEYS command, use SCAN instead
SCAN 0 MATCH user:* COUNT 100
# 5. Use pipelining for multiple commands
# (implement in application code)
# 6. Scale vertically - deploy on larger instance with more CPU cores
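In Python, the SCAN cursor loop looks like this (redis-py also ships `scan_iter`, which wraps the same loop for you):

```python
def scan_keys(client, pattern="user:*", count=100):
    """Yield matching keys using cursor-based SCAN instead of the blocking KEYS."""
    cursor = 0
    while True:
        cursor, batch = client.scan(cursor=cursor, match=pattern, count=count)
        for key in batch:
            yield key
        if cursor == 0:  # a zero cursor means the iteration is complete
            break
```

Unlike KEYS, each SCAN call returns a bounded batch, so the server stays responsive even over millions of keys.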

Snapshot Failures

Symptoms: Snapshots not being created, LASTSAVE not updating.

Causes:

  • Insufficient disk space
  • File permissions
  • I/O errors
  • Snapshot cron syntax error

Solutions:

Terminal window
# 1. Check last save time
LASTSAVE
# 2. Manually trigger save
BGSAVE
# 3. Check disk space
df -h /data
# 4. Verify snapshot cron syntax (standard 5-field cron)
# Correct: "0 */1 * * *" (hourly)
# Incorrect: "0 */1 *" (too few fields)
# 5. Check DragonFly logs for errors
# View logs in Klutch.sh dashboard
# 6. Increase persistent volume size if needed

Authentication Failures

Symptoms: “NOAUTH Authentication required” or “ERR invalid password” errors.

Causes:

  • Wrong password
  • Password not set in environment variables
  • Application using old credentials

Solutions:

Terminal window
# 1. Verify password is set
# Check DRAGONFLY_REQUIREPASS in Klutch.sh environment variables
# 2. Test authentication
redis-cli -h example-app.klutch.sh -p 8000 -a correct-password PING
# 3. Update application configuration
# Ensure app is using current password
# 4. Check for password in connection string
redis://:<password>@example-app.klutch.sh:8000

Cache Hit Rate Too Low

Symptoms: Cache hit rate below 70%, high number of cache misses.

Causes:

  • Insufficient memory
  • TTLs too short
  • Cache not properly warmed
  • Traffic patterns changed

Solutions:

Terminal window
# 1. Check hit rate
INFO stats
# Look at keyspace_hits and keyspace_misses
# 2. Increase memory allocation
DRAGONFLY_MAXMEMORY=16gb
# 3. Adjust TTLs
# Increase TTL for frequently accessed data
# 4. Implement cache warming
# Preload hot data on startup
# 5. Review key access patterns
# Monitor which keys are being requested

Advanced Configuration

Cluster Mode (Emulated)

DragonFly supports emulated cluster mode for compatibility with Redis Cluster clients:

Terminal window
DRAGONFLY_EXTRA_FLAGS="--cluster_mode=emulated"

Admin Console

Enable admin console for advanced debugging:

Terminal window
# Add to environment variables
DRAGONFLY_EXTRA_FLAGS="--admin_port=6380 --admin_bind=0.0.0.0"
# Access via HTTP
curl http://example-app.klutch.sh:6380/

Note: Only enable the admin console on trusted networks, and protect access with authentication.

Memcached Protocol

DragonFly also supports Memcached protocol on a separate port:

Terminal window
# Enable Memcached on port 11211
DRAGONFLY_EXTRA_FLAGS="--memcached_port=11211"

Multiple Databases

Configure the number of logical databases:

Terminal window
DRAGONFLY_DBNUM=32

Use in application:

Terminal window
# Select database 0 (default)
SELECT 0
# Select database 5
SELECT 5
# Get current database
CLIENT INFO
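In application code, the database index is usually supplied at connection time rather than by issuing SELECT manually. The mapping below is an illustrative sketch; the purpose names and indices are made up:

```python
# Map each purpose to its own logical database (indices are illustrative)
DB_INDEX = {"cache": 0, "sessions": 5, "queues": 9}

def db_for(purpose: str) -> int:
    """Look up the logical database index for a named purpose."""
    return DB_INDEX[purpose]

# Usage with redis-py -- the db parameter issues SELECT on connect:
# import redis
# sessions = redis.Redis(host="example-app.klutch.sh", port=8000,
#                        password="yourpassword", db=db_for("sessions"))
```

Keeping the mapping in one place avoids clients silently writing to the wrong database.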

Expiration Frequency

Adjust how often DragonFly checks for expired keys:

Terminal window
# Higher values = more CPU usage, faster expiration
# Lower values = less CPU usage, slower expiration
# Default: 100
DRAGONFLY_EXTRA_FLAGS="--hz=200"

Key Output Limit

Limit the number of keys returned by the KEYS command:

Terminal window
# Default: 8192
# Keep this low in production - very large KEYS replies can spike memory usage
DRAGONFLY_EXTRA_FLAGS="--keys_output_limit=4096"

Migration from Redis

Migrating from Redis to DragonFly is straightforward due to protocol compatibility.

Prerequisites for Migration

  • Redis RDB file or access to Redis instance
  • DragonFly deployed and configured
  • Minimal application downtime acceptable
  • Backup of Redis data

Migration Methods

Method 1: RDB File Transfer

Terminal window
# 1. Create Redis snapshot
redis-cli -h redis-host -p 6379 -a redis-password BGSAVE
# 2. Download Redis RDB file
scp user@redis-host:/var/lib/redis/dump.rdb ./redis-dump.rdb
# 3. Stop DragonFly (via Klutch.sh dashboard)
# 4. Replace DragonFly RDB file
cp redis-dump.rdb /data/dump.rdb
# 5. Start DragonFly (via Klutch.sh dashboard)
# 6. Verify data
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword DBSIZE

Method 2: Live Migration with RIOT

Use RIOT (Redis Input/Output Tools):

Terminal window
# Install RIOT
wget https://github.com/redis-developer/riot/releases/download/v3.1.4/riot-redis-3.1.4.zip
unzip riot-redis-3.1.4.zip
# Migrate data
./riot-redis-3.1.4/bin/riot-redis --uri redis://source-redis:6379 replicate \
  --uri redis://:yourpassword@example-app.klutch.sh:8000

Method 3: Application-Level Migration

import redis

# Source Redis
source = redis.Redis(host='source-redis', port=6379, password='redis-pass')
# Target DragonFly
target = redis.Redis(host='example-app.klutch.sh', port=8000, password='dragonfly-pass')

# Migrate all keys
for key in source.scan_iter():
    # TTL in seconds (-1 = no expiry, -2 = key no longer exists)
    ttl = source.ttl(key)
    # Serialized value in Redis's internal format
    value = source.dump(key)
    if value is None:
        continue  # key expired or was deleted between SCAN and DUMP
    # RESTORE takes the TTL in milliseconds; 0 means no expiry
    if ttl > 0:
        target.restore(key, ttl * 1000, value, replace=True)
    else:
        target.restore(key, 0, value, replace=True)
    print(f"Migrated: {key}")
print("Migration complete!")

Post-Migration Validation

Terminal window
# 1. Compare key counts
redis-cli -h source-redis -p 6379 DBSIZE
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword DBSIZE
# 2. Verify sample keys
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword GET sample_key
# 3. Check memory usage
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword INFO memory
# 4. Test application connectivity
# Run application integration tests
# 5. Monitor performance
redis-cli -h example-app.klutch.sh -p 8000 -a yourpassword --latency-history
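Steps 1 and 2 above can be automated. `validate_migration` below is an illustrative sketch that compares key counts and sampled values your clients have already fetched:

```python
def validate_migration(source_size, target_size, sample_pairs):
    """Return a list of human-readable problems; an empty list means the checks passed."""
    problems = []
    if source_size != target_size:
        problems.append(f"key count mismatch: source={source_size} target={target_size}")
    for key, (src_val, dst_val) in sample_pairs.items():
        if src_val != dst_val:
            problems.append(f"value mismatch for {key!r}")
    return problems

# Usage (the client calls are hypothetical):
# problems = validate_migration(source.dbsize(), target.dbsize(),
#                               {k: (source.get(k), target.get(k)) for k in sample_keys})
# assert not problems, problems
```

Treat any reported problem as a reason to pause the cutover and investigate before retiring Redis.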

Application Updates

Update Redis connection strings in your application:

// Before (Redis)
const redis = new Redis({
  host: 'source-redis',
  port: 6379,
  password: 'redis-pass'
});

// After (DragonFly)
const redis = new Redis({
  host: 'example-app.klutch.sh',
  port: 8000,
  password: 'dragonfly-pass'
});

Most Redis clients work with DragonFly without code changes. Simply update connection parameters.

Rollback Plan

Maintain Redis instance during initial migration period:

  1. Keep Redis running for 24-48 hours
  2. Monitor DragonFly performance and stability
  3. If issues arise, switch connection back to Redis
  4. After validation period, decommission Redis
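Step 3's switch-back is simplest when the datastore choice lives in configuration rather than code. The sketch below assumes a hypothetical USE_DRAGONFLY environment flag; the hostnames match the examples in this guide:

```python
import os

def datastore_config():
    """Pick the active datastore from an environment flag so rollback is a config change."""
    if os.environ.get("USE_DRAGONFLY", "true").lower() == "true":
        return {"host": "example-app.klutch.sh", "port": 8000}  # DragonFly on Klutch.sh
    return {"host": "source-redis", "port": 6379}               # legacy Redis fallback
```

Flipping the flag and restarting the application rolls traffic back to Redis without a code deploy.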

Production Checklist

Before going to production with DragonFly on Klutch.sh, ensure:

Configuration:

  • ✅ Strong password set via DRAGONFLY_REQUIREPASS
  • ✅ Appropriate maxmemory configured based on dataset size
  • ✅ Cache mode enabled if using as cache
  • ✅ Snapshot cron configured for automatic backups
  • ✅ Data directory set to /data for persistence
  • ✅ Number of databases (dbnum) configured appropriately

Security:

  • ✅ Password authentication enabled and tested
  • ✅ Network access restricted to authorized applications
  • ✅ Admin console disabled or secured
  • ✅ TLS enabled for sensitive data (if required)
  • ✅ Dangerous commands restricted at application layer
  • ✅ Regular password rotation schedule established

Persistence:

  • ✅ Persistent volume attached to /data
  • ✅ Volume size appropriate for dataset (2X data size minimum)
  • ✅ Automatic snapshots configured and tested
  • ✅ Manual backup process documented
  • ✅ Recovery procedure tested successfully
  • ✅ Backup retention policy defined

Performance:

  • ✅ Connection pooling implemented in applications
  • ✅ Pipelining used for batch operations
  • ✅ Appropriate instance size selected (8+ CPU cores recommended)
  • ✅ Benchmark tests completed with expected results
  • ✅ Key design patterns optimized for memory efficiency
  • ✅ TTLs set appropriately for temporary data

Monitoring:

  • ✅ Health checks implemented in application
  • ✅ Metrics endpoint configured (if using monitoring stack)
  • ✅ Alerts configured for critical metrics (memory, latency, cache hit rate)
  • ✅ Logging strategy defined
  • ✅ Dashboard created for key metrics
  • ✅ On-call procedures documented

High Availability:

  • ✅ Backup and restore procedures tested
  • ✅ Disaster recovery plan documented
  • ✅ Failover strategy defined
  • ✅ Data replication considered (if needed)
  • ✅ Incident response playbook created

Documentation:

  • ✅ Connection details documented for team
  • ✅ Environment variables documented
  • ✅ Architecture diagram created
  • ✅ Troubleshooting guide available
  • ✅ Migration plan documented (if applicable)
  • ✅ Runbook for common operations

Conclusion

Deploying DragonFly on Klutch.sh provides a powerful, high-performance alternative to Redis and Memcached with significant improvements in throughput, memory efficiency, and vertical scaling capabilities. With 25X more throughput than Redis and 80% less resource consumption, DragonFly is ideal for modern applications requiring extreme performance at scale.

By following this guide, you’ve learned how to deploy DragonFly with proper configuration, implement data persistence, optimize performance, secure your instance, and monitor operations effectively. DragonFly’s drop-in compatibility with Redis APIs means you can migrate existing applications with minimal code changes while gaining substantial performance improvements.

Whether you’re building real-time applications, high-traffic caching layers, session stores, or message queues, DragonFly on Klutch.sh provides the infrastructure foundation for reliable, scalable data storage. With automatic Dockerfile detection, TCP traffic support, persistent volumes, and seamless GitHub integration, Klutch.sh makes deploying and managing DragonFly straightforward and efficient.

Start with a small instance to test your workload, monitor performance metrics, and scale vertically as your needs grow. DragonFly’s multi-threaded architecture ensures that performance scales linearly with CPU cores, making it easy to handle increasing traffic without architectural changes. Happy building!