Deploying ComfyUI
Introduction
ComfyUI is a powerful, modular, node-based graphical user interface for Stable Diffusion that gives you fine-grained control over AI image generation workflows. Unlike traditional linear interfaces, ComfyUI provides a visual graph/nodes system where you can design and customize complex image generation pipelines, experiment with different models, apply multiple processing steps, and fine-tune every aspect of the generation process. With support for custom nodes, advanced workflows, ControlNet, LoRA models, and extensive model management, ComfyUI has become a popular choice for AI artists, researchers, and professionals who want maximum control over their generative AI projects.
Deploying ComfyUI on Klutch.sh provides a production-ready, scalable infrastructure for your AI image generation workflows with automated Docker deployments, persistent storage for models and generated outputs, secure environment variable management, and reliable uptime for continuous creative work. Whether you’re running personal creative projects, developing custom workflows, or building AI-powered services, Klutch.sh simplifies the deployment process and ensures your ComfyUI instance is always accessible with the resources it needs.
This comprehensive guide walks you through deploying ComfyUI on Klutch.sh using a Dockerfile, configuring persistent volumes for model storage and outputs, setting up environment variables for customization, implementing GPU support for accelerated generation, and following production best practices for reliable AI image generation at scale.
Prerequisites
Before you begin deploying ComfyUI on Klutch.sh, ensure you have:
- A Klutch.sh account (sign up at klutch.sh)
- A GitHub repository for your ComfyUI deployment configuration
- Basic understanding of Docker containers and AI model management
- Stable Diffusion models (download from Hugging Face or Civitai)
- (Optional) GPU access for faster image generation
- Access to the Klutch.sh dashboard
Understanding ComfyUI Architecture
ComfyUI consists of several key components:
- Python Backend: Core engine handling model loading, inference, and workflow execution
- Web UI: Browser-based node editor for creating and managing generation workflows
- Model Storage: Local filesystem for storing Stable Diffusion checkpoints, LoRA, VAE, and other models
- Output Directory: Storage for generated images and intermediate results
- Custom Nodes: Extensible plugin system for adding new functionality
- Workflow System: JSON-based workflow definitions for reproducible generation pipelines
When deployed on Klutch.sh, ComfyUI automatically detects your Dockerfile and builds a container image. The platform manages traffic routing via HTTP, provides SSL certificates automatically, and offers persistent storage options to preserve your models, custom nodes, and generated outputs across deployments.
Project Structure
A minimal repository structure for deploying ComfyUI on Klutch.sh:
```
comfyui-deployment/
├── Dockerfile
├── .dockerignore
├── .gitignore
├── README.md
└── config/
    └── extra_model_paths.yaml   (optional)
```
This simple structure allows Klutch.sh to automatically detect and build your ComfyUI container. You’ll configure model storage using persistent volumes rather than including large model files in your repository.
Creating Your Dockerfile
Klutch.sh automatically detects a Dockerfile in the root directory of your repository. Create a Dockerfile that sets up ComfyUI with all necessary dependencies:
Option 1: Simple Dockerfile (Recommended for Quick Start)
```dockerfile
FROM python:3.10-slim

# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    wget \
    libgl1 \
    libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

# Set working directory
WORKDIR /app

# Clone ComfyUI repository
RUN git clone https://github.com/comfyanonymous/ComfyUI.git /app

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Create directories for models and outputs
RUN mkdir -p /app/models /app/output /app/input /app/custom_nodes

# Expose the default ComfyUI port
EXPOSE 8188

# Start ComfyUI
CMD ["python", "main.py", "--listen", "0.0.0.0", "--port", "8188"]
```
Option 2: Dockerfile with GPU Support
For accelerated image generation with GPU:
```dockerfile
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04

# Install system dependencies
RUN apt-get update && apt-get install -y \
    python3.10 \
    python3-pip \
    git \
    wget \
    libgl1 \
    libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

# Set working directory
WORKDIR /app

# Clone ComfyUI repository
RUN git clone https://github.com/comfyanonymous/ComfyUI.git /app

# Install PyTorch with CUDA support
RUN pip install --no-cache-dir torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

# Install ComfyUI dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Create directories for models and outputs
RUN mkdir -p /app/models/checkpoints \
    /app/models/vae \
    /app/models/loras \
    /app/models/controlnet \
    /app/models/embeddings \
    /app/output \
    /app/input \
    /app/custom_nodes

# Expose the application port
EXPOSE 8188

# Start ComfyUI with GPU support (use python3: the Ubuntu base image does not provide a bare "python" command)
CMD ["python3", "main.py", "--listen", "0.0.0.0", "--port", "8188", "--enable-cors-header"]
```
Option 3: Production Dockerfile with Custom Nodes
```dockerfile
FROM python:3.10-slim

# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    wget \
    curl \
    libgl1 \
    libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

# Set working directory
WORKDIR /app

# Clone ComfyUI repository
RUN git clone https://github.com/comfyanonymous/ComfyUI.git /app

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Install additional dependencies for custom nodes
RUN pip install --no-cache-dir \
    opencv-python \
    scipy \
    scikit-image

# Create comprehensive directory structure
RUN mkdir -p \
    /app/models/checkpoints \
    /app/models/vae \
    /app/models/loras \
    /app/models/controlnet \
    /app/models/embeddings \
    /app/models/upscale_models \
    /app/output \
    /app/input \
    /app/custom_nodes

# Expose the application port
EXPOSE 8188

# Add health check for monitoring
HEALTHCHECK --interval=60s --timeout=10s --start-period=60s --retries=3 \
    CMD curl -f http://localhost:8188/ || exit 1

# Start ComfyUI
CMD ["python", "main.py", "--listen", "0.0.0.0", "--port", "8188", "--enable-cors-header"]
```
Important Notes:
- ComfyUI listens on port 8188 by default
- Klutch.sh will route external HTTP traffic to port 8188 in your container
- Models should be stored in persistent volumes, not in the Docker image
- GPU support requires appropriate hardware allocation on Klutch.sh
- The --enable-cors-header flag allows API access from external domains
Deploying to Klutch.sh
1. Create Your Repository

Create a new GitHub repository and add your Dockerfile:

```bash
mkdir comfyui-deployment
cd comfyui-deployment

# Create Dockerfile (use one of the examples above)
cat > Dockerfile << 'EOF'
FROM python:3.10-slim
RUN apt-get update && apt-get install -y \
    git \
    wget \
    curl \
    libgl1 \
    libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
RUN git clone https://github.com/comfyanonymous/ComfyUI.git /app
RUN pip install --no-cache-dir -r requirements.txt
RUN mkdir -p /app/models /app/output /app/input /app/custom_nodes
EXPOSE 8188
HEALTHCHECK --interval=60s --timeout=10s --start-period=60s --retries=3 \
    CMD curl -f http://localhost:8188/ || exit 1
CMD ["python", "main.py", "--listen", "0.0.0.0", "--port", "8188", "--enable-cors-header"]
EOF

# Create .gitignore
cat > .gitignore << 'EOF'
models/
output/
input/
custom_nodes/
.env
.DS_Store
*.ckpt
*.safetensors
EOF

# Create README
cat > README.md << 'EOF'
# ComfyUI Deployment

This repository contains the Docker configuration for deploying ComfyUI on Klutch.sh.

## Models

Add your Stable Diffusion models to the persistent volume mounted at `/app/models/checkpoints`.
EOF

# Initialize git and push
git init
git add .
git commit -m "Initial ComfyUI deployment setup"
git remote add origin https://github.com/YOUR_USERNAME/comfyui-deployment.git
git push -u origin main
```

2. Access the Klutch.sh Dashboard
Navigate to klutch.sh/app and log in to your account.
3. Create a New Project
- Click “New Project” in the dashboard
- Enter a project name (e.g., “AI Image Generation”)
- Select your preferred region for deployment
4. Create a New Application
- Within your project, click “New App”
- Name your application (e.g., “ComfyUI”)
- Connect your GitHub repository containing the Dockerfile
5. Configure Build Settings
Klutch.sh automatically detects the Dockerfile in your repository root and builds your application using Docker.
6. Configure Traffic Settings
In the app settings:
- Select HTTP as the traffic type
- Set the internal port to 8188 (ComfyUI’s default port)
- Klutch.sh will route external traffic to this port
7. Set Up Persistent Storage
ComfyUI requires persistent storage for models, outputs, and custom nodes. You’ll need to create multiple volumes:
Models Volume (Critical - Large Storage Required):
- In the app settings, navigate to the “Volumes” section
- Click “Add Volume”
- Set the mount path to /app/models
- Set the volume size (recommended: 50GB minimum, 100GB+ for multiple models)
- Click “Add” to attach the volume
Output Volume (For Generated Images):
- Click “Add Volume” again
- Set the mount path to /app/output
- Set the volume size (recommended: 20GB minimum, adjust based on usage)
- Click “Add” to attach the volume
Custom Nodes Volume (Optional):
- Click “Add Volume” again
- Set the mount path to /app/custom_nodes
- Set the volume size (recommended: 5GB)
- Click “Add” to attach the volume
Storage Breakdown:
- /app/models: Stable Diffusion checkpoints, VAE, LoRA, ControlNet models
- /app/output: All generated images and intermediate results
- /app/custom_nodes: Custom node installations and extensions
- /app/input: Optional - for input images used in workflows
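Once the volumes are attached, a quick way to confirm they are mounted where the container expects them is to run a small check from the Klutch.sh console inside the container. A minimal sketch in Python, using the mount paths from this guide:

```python
import os
import shutil

# Mount points used in this guide; adjust if you chose different paths.
EXPECTED_DIRS = [
    "/app/models/checkpoints",
    "/app/output",
    "/app/custom_nodes",
    "/app/input",
]

for path in EXPECTED_DIRS:
    if not os.path.isdir(path):
        print(f"{path}: MISSING - check the volume mount path in the dashboard")
        continue
    writable = os.access(path, os.W_OK)
    # Capacity of the filesystem backing the path; for a mounted volume this
    # should match the size you configured when attaching it.
    total, _, free = shutil.disk_usage(path)
    print(f"{path}: writable={writable}, free={free / 1e9:.1f} GB of {total / 1e9:.1f} GB")
```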
8. Configure Environment Variables
In the app settings, add environment variables for customization:
Basic Configuration:
```bash
COMFYUI_PORT=8188
COMFYUI_HOST=0.0.0.0
```
Memory and Performance Settings:
```bash
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
COMFYUI_VRAM_MODE=auto
```
Security (Recommended for Production):
```bash
COMFYUI_ENABLE_AUTH=true
COMFYUI_USERNAME=admin
COMFYUI_PASSWORD=your-secure-password-here
```
Optional API Configuration:
```bash
COMFYUI_API_KEY=your-api-key-here
```

9. Deploy Your Application
- Review all settings
- Click “Deploy” or “Create”
- Klutch.sh will build the Docker image from your Dockerfile
- Monitor the build logs in the dashboard (initial build may take 5-10 minutes)
- Once deployed, your ComfyUI instance will be accessible at the provided URL
10. Access Your ComfyUI Instance
After deployment completes:
- Visit your app URL (e.g., https://example-app.klutch.sh)
- The ComfyUI web interface will load
- Before generating images, you’ll need to add Stable Diffusion models to your persistent volume
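Before moving on to models, you can confirm from any machine that the instance is reachable. A minimal sketch in Python, assuming the example URL above and the requests package; /system_stats is a small JSON endpoint served by the standard ComfyUI server that reports device and memory details, which is handy for checking whether a GPU was detected:

```python
import requests

# Replace with your deployed app URL (example hostname from this guide).
BASE_URL = "https://example-app.klutch.sh"

# The web UI responding on "/" confirms routing and the container are healthy.
resp = requests.get(f"{BASE_URL}/", timeout=30)
print("UI reachable:", resp.status_code == 200)

# Device and memory information as reported by ComfyUI (verify the endpoint on your build).
stats = requests.get(f"{BASE_URL}/system_stats", timeout=30)
if stats.ok:
    print(stats.json())
```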
Installing Models
ComfyUI requires Stable Diffusion models to generate images. Here’s how to add models to your deployment:
Downloading Models
Download Stable Diffusion models from sources such as Hugging Face or Civitai (see the Resources section below).
Popular Models to Get Started
Base Models:
- Stable Diffusion 1.5: Great for general purpose, smaller file size (~4GB)
- Stable Diffusion XL (SDXL): Higher quality results, larger file size (~7GB)
- Stable Diffusion 2.1: Alternative base model with different training data
Recommended Starting Model: Download Stable Diffusion 1.5 from Hugging Face.
Transferring Models to Your Deployment
Since models are stored in persistent volumes, you have several options:
Option 1: Upload via ComfyUI Web Interface (Easiest)
Some custom nodes provide file upload capabilities. Install a file manager node to upload models directly.
Option 2: Download Directly in Container
Access your container and download models:
```bash
# Access the container (use Klutch.sh console or SSH)
cd /app/models/checkpoints

# Download a model using wget
wget https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt

# Or use curl
curl -L -o sd-v1-5.ckpt "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt"
```
Option 3: Pre-populate Before Deployment
If you have direct access to the persistent volume, you can pre-populate models before starting ComfyUI.
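If you prefer to script the download, the same transfer can be done from inside the container with Python. A minimal sketch using the requests package and the example checkpoint URL shown above; note that some Hugging Face repositories require an access token, in which case you would add an Authorization header:

```python
import os
import requests

# Example checkpoint URL from this guide; swap in the model you actually want.
MODEL_URL = (
    "https://huggingface.co/runwayml/stable-diffusion-v1-5/"
    "resolve/main/v1-5-pruned-emaonly.ckpt"
)
DEST = "/app/models/checkpoints/v1-5-pruned-emaonly.ckpt"

os.makedirs(os.path.dirname(DEST), exist_ok=True)

# Stream to disk so the multi-gigabyte file never has to fit in memory.
with requests.get(MODEL_URL, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open(DEST, "wb") as fh:
        for chunk in resp.iter_content(chunk_size=8 * 1024 * 1024):
            fh.write(chunk)

print("Downloaded", DEST, round(os.path.getsize(DEST) / 1e9, 2), "GB")
```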
Model Directory Structure
Organize your models in the appropriate directories:
```
/app/models/
├── checkpoints/     # Main Stable Diffusion models (.ckpt, .safetensors)
├── vae/             # VAE models for improved quality
├── loras/           # LoRA models for style/character modifications
├── controlnet/      # ControlNet models for guided generation
├── embeddings/      # Textual inversion embeddings
└── upscale_models/  # Models for upscaling generated images
```
Example after adding models:
```
/app/models/checkpoints/
├── sd-v1-5-pruned-emaonly.ckpt
├── sd-xl-base-1.0.safetensors
└── realistic-vision-v5.safetensors

/app/models/loras/
├── detail-tweaker-lora.safetensors
└── lighting-lora.safetensors
```
Getting Started with ComfyUI
Once your ComfyUI instance is deployed and you’ve added models, here’s how to create your first image:
Creating Your First Image Generation Workflow
1. Access the ComfyUI Interface

Navigate to your ComfyUI URL (e.g., https://example-app.klutch.sh)

2. Load the Default Workflow
ComfyUI starts with a default workflow that includes:
- Load Checkpoint: Loads your Stable Diffusion model
- CLIP Text Encode (Prompt): Your positive prompt
- CLIP Text Encode (Negative): Negative prompt to avoid unwanted features
- Empty Latent Image: Sets the output image size
- KSampler: The sampling algorithm and settings
- VAE Decode: Converts latent space to visible image
- Save Image: Saves the generated image
3. Configure Your First Generation
Select Your Model:
- Click on the “Load Checkpoint” node
- Select your downloaded model from the dropdown
Set Your Prompt:
Positive Prompt: "a beautiful landscape with mountains and a lake, sunset, highly detailed, 8k, photorealistic"

Negative Prompt: "blurry, low quality, distorted, ugly, bad anatomy"

Configure Generation Settings:
- Width/Height: Start with 512x512 (or 1024x1024 for SDXL)
- Batch Size: 1 (generate one image at a time)
- Steps: 20-30 (more steps = higher quality but slower)
- CFG Scale: 7-8 (how closely to follow the prompt)
- Sampler: Euler or DPM++ 2M, optionally with the Karras scheduler (experiment to find your preference)
- Seed: Random or set a specific number for reproducibility
4. Generate Your First Image
Click “Queue Prompt” to start generation. The process takes 30 seconds to several minutes depending on:
- Image resolution
- Number of steps
- Hardware (GPU vs CPU)
- Model size
5. View Your Generated Image

The generated image appears in the “Save Image” node and is saved to /app/output/
Example Workflow: Portrait Generation
Here’s a practical example workflow for generating portraits:
Positive Prompt:
"portrait of a woman, professional photography, studio lighting, detailed eyes, detailed skin texture, elegant, high quality"

Negative Prompt:
"low quality, blurry, distorted face, extra limbs, bad anatomy, deformed, watermark, text"

Settings:
- Model: Realistic Vision V5
- Size: 768x768
- Steps: 25
- CFG Scale: 7.5
- Sampler: DPM++ 2M Karras
- Seed: Random

Example Workflow: Landscape Generation
Positive Prompt:
"epic fantasy landscape, ancient castle on a cliff, dramatic clouds, golden hour lighting, cinematic composition, ultra detailed, 8k"

Negative Prompt:
"people, humans, modern buildings, cars, low quality, blurry"

Settings:
- Model: Stable Diffusion 1.5
- Size: 1024x512 (wide aspect)
- Steps: 30
- CFG Scale: 8
- Sampler: Euler a
- Seed: Random

Understanding the Node System
ComfyUI’s power comes from connecting nodes to create custom workflows:
Basic Node Types:
- Loaders: Load models, LoRAs, embeddings
- Conditioning: Handle prompts and guidance
- Sampling: Control the generation process
- Latent: Work with latent space representations
- Image: Process and save final images
Creating Custom Workflows:
- Right-click on canvas → “Add Node”
- Browse available nodes by category
- Connect nodes by dragging from output to input
- Configure each node’s parameters
- Save workflows as JSON for reuse (see the sketch after this list)
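Because workflows are plain JSON, they diff and review well under version control. A small sketch that lists the node types in a saved workflow, assuming the editor's standard export, which typically stores the graph as a top-level "nodes" array (adjust the keys if your export differs):

```python
import json

# Load a workflow previously saved from the ComfyUI editor.
with open("workflow.json") as fh:
    workflow = json.load(fh)

# Print each node's id and type so changes are easy to spot in code review.
for node in workflow.get("nodes", []):
    print(node.get("id"), node.get("type"))
```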
Environment Variables Reference
Complete list of useful environment variables for ComfyUI:
Core Settings
```bash
# Port configuration (match your Dockerfile)
COMFYUI_PORT=8188

# Host binding
COMFYUI_HOST=0.0.0.0

# Enable CORS for API access
COMFYUI_ENABLE_CORS=true
```
Performance Settings
```bash
# Memory management
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

# VRAM management mode (auto, low, high)
COMFYUI_VRAM_MODE=auto

# CPU threads
COMFYUI_NUM_THREADS=4
```
Security Settings
```bash
# Authentication (if implemented via custom nodes)
COMFYUI_ENABLE_AUTH=true
COMFYUI_USERNAME=admin
COMFYUI_PASSWORD=your-secure-password

# API access control
COMFYUI_API_KEY=your-api-key-here
```
Path Configuration
```bash
# Custom paths (if not using default structure)
COMFYUI_MODELS_PATH=/app/models
COMFYUI_OUTPUT_PATH=/app/output
COMFYUI_INPUT_PATH=/app/input
```
Set these in the Klutch.sh dashboard under Environment Variables. Mark sensitive values (passwords, API keys) as secrets.
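Keep in mind that ComfyUI itself reads most of its configuration from command-line flags rather than environment variables, so variables like the ones above usually take effect only if something consumes them. One approach is a small entrypoint script that maps them onto main.py flags; a minimal sketch under that assumption, using only the --listen, --port, and --enable-cors-header flags already shown in the Dockerfiles (the script path /app/entrypoint.py is hypothetical):

```python
#!/usr/bin/env python3
"""Hypothetical entrypoint: map selected environment variables onto ComfyUI's
command-line flags, then replace this process with main.py."""
import os
import sys

host = os.environ.get("COMFYUI_HOST", "0.0.0.0")
port = os.environ.get("COMFYUI_PORT", "8188")

cmd = [sys.executable, "/app/main.py", "--listen", host, "--port", port]

# --enable-cors-header is the flag used in the Dockerfiles earlier in this guide.
if os.environ.get("COMFYUI_ENABLE_CORS", "").lower() == "true":
    cmd.append("--enable-cors-header")

# exec so platform signals (stop/restart) reach ComfyUI directly.
os.execv(sys.executable, cmd)
```

If you adopt something like this, point the Dockerfile's CMD at the script (for example, CMD ["python", "/app/entrypoint.py"]) instead of calling main.py directly.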
Persistent Storage and Data Management
Storage Layout
ComfyUI requires significant storage for models and outputs:
```
/app/
├── models/
│   ├── checkpoints/     # 4-7GB per model
│   ├── vae/             # 100-300MB per VAE
│   ├── loras/           # 10-200MB per LoRA
│   ├── controlnet/      # 1-5GB per ControlNet
│   ├── embeddings/      # 1-100MB per embedding
│   └── upscale_models/  # 1-300MB per upscaler
├── output/              # Generated images
├── input/               # Source images for img2img
└── custom_nodes/        # Extension installations
```
Storage Sizing Guide
Minimal Setup (Testing):
- Models Volume: 10GB (1-2 base models)
- Output Volume: 5GB
- Total: ~15GB
Standard Setup (Personal Use):
- Models Volume: 50GB (multiple base models, LoRAs, ControlNet)
- Output Volume: 20GB
- Custom Nodes: 5GB
- Total: ~75GB
Production Setup (Heavy Use):
- Models Volume: 100-200GB (extensive model library)
- Output Volume: 50-100GB
- Custom Nodes: 10GB
- Total: ~160-310GB
Model Storage Considerations
Model File Sizes:
- Stable Diffusion 1.5: ~4GB
- Stable Diffusion XL: ~7GB
- LoRA models: 10-200MB each
- ControlNet models: 1-5GB each
- VAE models: 100-300MB each
Storage Optimization:
- Use .safetensors format when available (safer and sometimes smaller)
- Remove unused models periodically
- Compress old outputs or move to external storage (see the cleanup sketch after this list)
- Use model pruning tools to reduce checkpoint sizes
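To act on the output-archival tip above, a minimal sketch that moves generated images older than a configurable number of days out of /app/output; the 30-day retention period and the archive path are illustrative, and the archive could just as well live on a separate mount or external storage:

```python
import os
import shutil
import time

OUTPUT_DIR = "/app/output"
ARCHIVE_DIR = "/app/output/archive"  # illustrative; an external mount frees more space
MAX_AGE_DAYS = 30                    # illustrative retention period

cutoff = time.time() - MAX_AGE_DAYS * 86400
os.makedirs(ARCHIVE_DIR, exist_ok=True)

for name in os.listdir(OUTPUT_DIR):
    path = os.path.join(OUTPUT_DIR, name)
    # Only touch regular files; leave subdirectories (including the archive) alone.
    if not os.path.isfile(path):
        continue
    if os.path.getmtime(path) < cutoff:
        shutil.move(path, os.path.join(ARCHIVE_DIR, name))
        print("archived", name)
```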
Backup Recommendations
- Regular Backups: Schedule backups of your models volume (expensive to re-download)
- Workflow Backups: Export and save your custom workflows (JSON files)
- Output Archival: Move completed projects to long-term storage
- Custom Nodes: Document installed custom nodes for easy restoration
Production Best Practices
Security
1. Access Control
- Implement authentication via custom nodes or reverse proxy
- Use strong, unique passwords
- Consider placing behind VPN for sensitive work
- Limit public access if handling proprietary content
2. API Security
- Use API keys if exposing ComfyUI API
- Implement rate limiting to prevent abuse
- Monitor API usage for unusual patterns
3. Model Security
- Be cautious with community models (scan for malicious code)
- Verify model sources before downloading
- Keep track of model licenses and usage terms
Performance Optimization
1. Resource Allocation
- CPU-Only: Minimum 4 CPU cores, 8GB RAM (very slow generation)
- GPU-Enabled: Minimum 1 GPU, 8GB VRAM, 16GB RAM (recommended)
- Production: Multiple GPUs, 16-32GB RAM for parallel processing
2. Generation Optimization
- Use appropriate image sizes (512x512 for SD1.5, 1024x1024 for SDXL)
- Optimize step count (20-30 steps often sufficient)
- Use efficient samplers (DPM++ 2M Karras is fast and high quality)
- Enable model offloading for limited VRAM
3. Model Loading
- Keep frequently used models in memory
- Use model switching strategies to minimize reload time
- Consider using quantized models for faster loading
Monitoring and Maintenance
1. System Monitoring
- Track memory usage (both RAM and VRAM)
- Monitor storage capacity and growth
- Set up alerts for resource exhaustion
- Review generation times to detect performance degradation (see the sketch after this list)
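ComfyUI's built-in JSON endpoints can back these checks. A minimal polling sketch, assuming the /system_stats and /queue endpoints are available on your instance (they are part of the standard ComfyUI server, but verify the response shape on your build) and the example URL from this guide:

```python
import time

import requests

BASE_URL = "https://example-app.klutch.sh"  # your deployed app URL

while True:
    try:
        # Device and memory information (RAM/VRAM) as reported by ComfyUI.
        stats = requests.get(f"{BASE_URL}/system_stats", timeout=15).json()
        # Jobs currently running and waiting in the generation queue.
        queue = requests.get(f"{BASE_URL}/queue", timeout=15).json()
        running = len(queue.get("queue_running", []))
        pending = len(queue.get("queue_pending", []))
        devices = stats.get("devices", [])
        print(f"devices={len(devices)} running={running} pending={pending}")
    except requests.RequestException as exc:
        print("ComfyUI unreachable:", exc)
    time.sleep(60)
```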
2. Regular Maintenance
- Update ComfyUI to latest version monthly
- Clean output directory of old generations
- Review and update custom nodes
- Test backup restoration procedures
3. Logging
- Enable detailed logging for troubleshooting
- Monitor generation errors and failures
- Track API usage if exposed
- Log model loading times for performance analysis
Troubleshooting
Common Issues and Solutions
Issue: “No models found” error
Solution:
- Verify models are in /app/models/checkpoints/
- Check file permissions (container needs read access)
- Ensure model files are complete (not corrupted downloads)
- Refresh the ComfyUI interface
- Check volume is properly mounted
Issue: Out of memory errors
Solution:
- Reduce image resolution (try 512x512)
- Lower batch size to 1
- Enable model offloading: use the --lowvram or --normalvram flags
- Increase allocated memory in Klutch.sh
- Close other memory-intensive processes
- Use smaller models (SD 1.5 instead of SDXL)
Issue: Slow generation times
Solution:
- Verify GPU is being used (check CUDA availability)
- Reduce number of steps (try 20 instead of 50)
- Use faster samplers (Euler, DPM++ 2M)
- Ensure model is loaded in VRAM
- Check for CPU bottlenecks
- Consider upgrading to GPU-enabled plan
Issue: ComfyUI won’t start or returns errors
Solution:
- Check container logs in Klutch.sh dashboard
- Verify port 8188 is correctly configured
- Ensure sufficient memory allocation (8GB minimum)
- Check for Python dependency conflicts
- Verify Docker image built successfully
- Review health check status
Issue: Models won’t load
Solution:
- Verify model file format (.ckpt or .safetensors)
- Check model is in correct directory
- Ensure model file isn’t corrupted (verify checksum)
- Check for sufficient disk space
- Review file permissions
- Try renaming model file (remove special characters)
Issue: Generated images are corrupted or poor quality
Solution:
- Verify model is fully downloaded
- Check VAE is loaded correctly
- Ensure prompts are well-formed
- Adjust CFG scale (try 7-8)
- Increase sampling steps (try 25-30)
- Use different sampler (DPM++ 2M Karras)
- Check for VRAM overflow errors
Issue: Custom nodes not working
Solution:
- Verify custom nodes are installed in /app/custom_nodes/
- Check for dependency conflicts
- Review custom node documentation for requirements
- Restart ComfyUI after installing nodes
- Check custom node compatibility with ComfyUI version
- Review container logs for node loading errors
Debug Commands
Access container via Klutch.sh console:
```bash
# Check ComfyUI is running
ps aux | grep python

# Verify models directory
ls -lh /app/models/checkpoints/

# Check disk space
df -h

# View recent logs
tail -f /app/comfy.log

# Test GPU availability (if applicable)
python -c "import torch; print(torch.cuda.is_available())"

# Check memory usage
free -h

# List loaded Python packages
pip list
```
Advanced Configuration
Installing Custom Nodes
ComfyUI’s functionality can be extended with custom nodes:
Popular Custom Nodes:
- ComfyUI-Manager: Node package manager for easy installation
- ControlNet Auxiliary Preprocessors: Enhanced ControlNet support
- Ultimate SD Upscale: Advanced upscaling workflows
- Image Saver: Enhanced image saving options
- WAS Node Suite: Extensive utility nodes
Installing Custom Nodes:
```bash
# Access your container
cd /app/custom_nodes

# Clone a custom node repository
git clone https://github.com/ltdrdata/ComfyUI-Manager.git

# Install dependencies if required
cd ComfyUI-Manager
pip install -r requirements.txt

# Restart ComfyUI to load new nodes
```
Using LoRA Models
LoRA (Low-Rank Adaptation) models add styles or concepts without retraining base models:
Adding LoRA to Workflow:
- Download LoRA models to /app/models/loras/
- Add “Load LoRA” node to your workflow
- Connect it between your checkpoint loader and conditioning
- Set LoRA strength (0.5-1.0 typically)
Example LoRA Workflow:
```
Load Checkpoint → Load LoRA → CLIP Text Encode → KSampler → VAE Decode
```
ControlNet Integration
ControlNet provides precise control over generation using reference images:
Setup:
- Download ControlNet models to /app/models/controlnet/
- Download preprocessor models (OpenPose, Canny, Depth, etc.)
- Add ControlNet nodes to workflow
- Connect preprocessor → ControlNet → conditioning
Use Cases:
- Pose Control: Guide character poses with OpenPose
- Edge Control: Use Canny edge detection for composition
- Depth Control: Maintain depth structure from reference
- Scribble Control: Sketch rough composition
API Access
ComfyUI provides a REST API for automation:
Submitting a Workflow via API:
```bash
# Get the workflow JSON from the ComfyUI UI (Export button)
# Note: /prompt expects the exported graph wrapped as {"prompt": { ... }}
# Submit via API
curl -X POST https://example-app.klutch.sh/prompt \
  -H "Content-Type: application/json" \
  -d @workflow.json
```
Checking Generation Status:
```bash
curl https://example-app.klutch.sh/history
```
Downloading Generated Image:
```bash
curl "https://example-app.klutch.sh/view?filename=image.png" \
  -o output.png
```
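The same flow can be scripted end to end. A sketch in Python using the requests package, assuming workflow.json holds an API-format export from the editor and the example URL from this guide; the history response layout can vary between ComfyUI versions, so inspect it on your build before relying on it:

```python
import json
import time

import requests

BASE_URL = "https://example-app.klutch.sh"  # your deployed app URL

# Load a workflow exported in API format from the ComfyUI editor.
with open("workflow.json") as fh:
    graph = json.load(fh)

# /prompt expects the graph wrapped in a "prompt" field.
resp = requests.post(f"{BASE_URL}/prompt", json={"prompt": graph}, timeout=30)
resp.raise_for_status()
prompt_id = resp.json()["prompt_id"]

# Poll /history until the job appears with outputs.
outputs = None
while outputs is None:
    time.sleep(5)
    history = requests.get(f"{BASE_URL}/history/{prompt_id}", timeout=30).json()
    entry = history.get(prompt_id)
    if entry:
        outputs = entry.get("outputs", {})

# Download every image referenced in the outputs via /view.
for node_id, node_output in outputs.items():
    for image in node_output.get("images", []):
        params = {
            "filename": image["filename"],
            "subfolder": image.get("subfolder", ""),
            "type": image.get("type", "output"),
        }
        img = requests.get(f"{BASE_URL}/view", params=params, timeout=60)
        with open(image["filename"], "wb") as out:
            out.write(img.content)
        print("saved", image["filename"])
```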
Multi-Step Workflows
Create complex generation pipelines:
Example: Upscale Workflow
```
Generate Base Image (512x512)
  ↓
Upscale with Latent Upscaler (1024x1024)
  ↓
High-res Fix Pass
  ↓
Final Upscale (2048x2048)
  ↓
Save Image
```
Example: ControlNet + LoRA Workflow
```
Load Checkpoint
  ↓
Load LoRA (Style)
  ↓
Load ControlNet (Pose)
  ↓
Process Reference Image (OpenPose)
  ↓
Apply ControlNet
  ↓
Generate with Combined Conditioning
  ↓
Save Image
```
Scaling Considerations
As your usage grows:
1. Vertical Scaling
- Increase CPU/RAM for better performance
- Add GPU for dramatically faster generation
- Recommended: 4+ vCPU, 16GB RAM, 1+ GPU (16GB VRAM)
2. Storage Scaling
- Expand models volume as you add more checkpoints
- Increase output volume based on generation volume
- Archive old outputs to external storage
- Implement automated cleanup policies
3. Workflow Organization
- Create workflow templates for common tasks
- Document custom workflows
- Use version control for workflow JSON files
- Share workflows across team members
4. Batch Processing
- Use batch mode for multiple generations
- Queue multiple prompts for unattended generation (see the sketch after this list)
- Consider multiple instances for parallel processing
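One way to queue several prompts unattended is to submit a template workflow repeatedly with different prompt text and seeds through the API shown earlier. A sketch assuming an API-format export whose positive-prompt and sampler node IDs you look up in your own file; the IDs "6" and "3" below are placeholders, not fixed values:

```python
import copy
import json
import random

import requests

BASE_URL = "https://example-app.klutch.sh"  # your deployed app URL

# Node IDs from your own API-format export; these are placeholders for the
# positive CLIPTextEncode node and the KSampler node.
POSITIVE_NODE_ID = "6"
SAMPLER_NODE_ID = "3"

with open("workflow.json") as fh:
    template = json.load(fh)

prompts = [
    "a beautiful landscape with mountains and a lake, sunset, highly detailed",
    "epic fantasy landscape, ancient castle on a cliff, golden hour lighting",
]

for text in prompts:
    graph = copy.deepcopy(template)
    graph[POSITIVE_NODE_ID]["inputs"]["text"] = text
    graph[SAMPLER_NODE_ID]["inputs"]["seed"] = random.randint(0, 2**32 - 1)
    resp = requests.post(f"{BASE_URL}/prompt", json={"prompt": graph}, timeout=30)
    resp.raise_for_status()
    print("queued", resp.json().get("prompt_id"), "for:", text[:40])
```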
Resources
- ComfyUI GitHub Repository
- ComfyUI Examples and Workflows
- ComfyUI Manager
- Civitai (Models and LoRAs)
- Hugging Face Models
- Klutch.sh Quick Start Guide
- Klutch.sh Volumes Guide
- Klutch.sh Networking Guide
Next Steps
After successfully deploying ComfyUI on Klutch.sh:
- Download Base Models: Start with Stable Diffusion 1.5 or SDXL
- Test Default Workflow: Generate your first image with the built-in workflow
- Explore Custom Nodes: Install ComfyUI-Manager for easier node management
- Learn Node Basics: Understand how different nodes connect and interact
- Experiment with Settings: Try different samplers, steps, and CFG scales
- Build Custom Workflows: Create reusable workflows for your specific needs
- Add LoRAs: Download style-specific LoRAs to expand creative options
- Try ControlNet: Use reference images for precise control
- Optimize Performance: Monitor resource usage and adjust settings
- Join Community: Participate in ComfyUI communities to learn advanced techniques
Deploying ComfyUI on Klutch.sh provides a powerful, flexible platform for AI image generation. Whether you’re creating art, developing products, or experimenting with AI capabilities, Klutch.sh ensures your ComfyUI instance is accessible, scalable, and reliable.