Deploying ComfyUI

Introduction

ComfyUI is a powerful and modular node-based graphical user interface for Stable Diffusion that offers unprecedented flexibility and control over AI image generation workflows. Unlike traditional linear interfaces, ComfyUI provides a visual graph/nodes system where you can design and customize complex image generation pipelines, experiment with different models, apply multiple processing steps, and fine-tune every aspect of the generation process. With support for custom nodes, advanced workflows, ControlNet, LoRA models, and extensive model management, ComfyUI has become the preferred tool for AI artists, researchers, and professionals seeking maximum control over their generative AI projects.

Deploying ComfyUI on Klutch.sh provides a production-ready, scalable infrastructure for your AI image generation workflows with automated Docker deployments, persistent storage for models and generated outputs, secure environment variable management, and reliable uptime for continuous creative work. Whether you’re running personal creative projects, developing custom workflows, or building AI-powered services, Klutch.sh simplifies the deployment process and ensures your ComfyUI instance is always accessible with the resources it needs.

This comprehensive guide walks you through deploying ComfyUI on Klutch.sh using a Dockerfile, configuring persistent volumes for model storage and outputs, setting up environment variables for customization, implementing GPU support for accelerated generation, and following production best practices for reliable AI image generation at scale.


Prerequisites

Before you begin deploying ComfyUI on Klutch.sh, ensure you have:

  • A Klutch.sh account
  • A GitHub repository for your ComfyUI deployment configuration
  • Basic understanding of Docker containers and AI model management
  • Stable Diffusion models (download from Hugging Face or Civitai)
  • (Optional) GPU access for faster image generation
  • Access to the Klutch.sh dashboard

Understanding ComfyUI Architecture

ComfyUI consists of several key components:

  • Python Backend: Core engine handling model loading, inference, and workflow execution
  • Web UI: Browser-based node editor for creating and managing generation workflows
  • Model Storage: Local filesystem for storing Stable Diffusion checkpoints, LoRA, VAE, and other models
  • Output Directory: Storage for generated images and intermediate results
  • Custom Nodes: Extensible plugin system for adding new functionality
  • Workflow System: JSON-based workflow definitions for reproducible generation pipelines
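For example, in the API export format a workflow is a JSON map from node IDs to node definitions, with links expressed as [node_id, output_index] pairs (the IDs and values below are illustrative):

```json
{
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": { "ckpt_name": "sd-v1-5-pruned-emaonly.ckpt" }
  },
  "3": {
    "class_type": "KSampler",
    "inputs": { "seed": 42, "steps": 20, "cfg": 7.0, "model": ["4", 0] }
  }
}
```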

When deployed on Klutch.sh, ComfyUI automatically detects your Dockerfile and builds a container image. The platform manages traffic routing via HTTP, provides SSL certificates automatically, and offers persistent storage options to preserve your models, custom nodes, and generated outputs across deployments.


Project Structure

A minimal repository structure for deploying ComfyUI on Klutch.sh:

comfyui-deployment/
├── Dockerfile
├── .dockerignore
├── .gitignore
├── README.md
└── config/
    └── extra_model_paths.yaml  (optional)

This simple structure allows Klutch.sh to automatically detect and build your ComfyUI container. You’ll configure model storage using persistent volumes rather than including large model files in your repository.
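If you include config/extra_model_paths.yaml, it tells ComfyUI where to find models outside its default folders. A minimal sketch (the section name and paths are illustrative; with the volume layout in this guide, the default folders already work without it):

```yaml
# config/extra_model_paths.yaml — optional mapping of extra model locations
comfyui:
  base_path: /app/
  checkpoints: models/checkpoints/
  vae: models/vae/
  loras: models/loras/
  controlnet: models/controlnet/
  embeddings: models/embeddings/
```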


Creating Your Dockerfile

Klutch.sh automatically detects a Dockerfile in the root directory of your repository. Create a Dockerfile that sets up ComfyUI with all necessary dependencies.

Option 1: Basic CPU Dockerfile

FROM python:3.10-slim
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    wget \
    libgl1 \
    libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /app
# Clone ComfyUI repository
RUN git clone https://github.com/comfyanonymous/ComfyUI.git /app
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Create directories for models and outputs
RUN mkdir -p /app/models /app/output /app/input /app/custom_nodes
# Expose the default ComfyUI port
EXPOSE 8188
# Start ComfyUI
CMD ["python", "main.py", "--listen", "0.0.0.0", "--port", "8188"]

Option 2: Dockerfile with GPU Support

For accelerated image generation with GPU:

FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04
# Install system dependencies
RUN apt-get update && apt-get install -y \
    python3.10 \
    python3-pip \
    git \
    wget \
    libgl1 \
    libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /app
# Clone ComfyUI repository
RUN git clone https://github.com/comfyanonymous/ComfyUI.git /app
# Install PyTorch with CUDA support
RUN pip install --no-cache-dir torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
# Install ComfyUI dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Create directories for models and outputs
RUN mkdir -p /app/models/checkpoints \
    /app/models/vae \
    /app/models/loras \
    /app/models/controlnet \
    /app/models/embeddings \
    /app/output \
    /app/input \
    /app/custom_nodes
# Expose the application port
EXPOSE 8188
# Start ComfyUI with GPU support (this base image provides python3, not python)
CMD ["python3", "main.py", "--listen", "0.0.0.0", "--port", "8188", "--enable-cors-header"]

Option 3: Production Dockerfile with Custom Nodes

FROM python:3.10-slim
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    wget \
    curl \
    libgl1 \
    libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /app
# Clone ComfyUI repository
RUN git clone https://github.com/comfyanonymous/ComfyUI.git /app
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Install additional dependencies for custom nodes
RUN pip install --no-cache-dir \
    opencv-python \
    scipy \
    scikit-image
# Create comprehensive directory structure
RUN mkdir -p \
    /app/models/checkpoints \
    /app/models/vae \
    /app/models/loras \
    /app/models/controlnet \
    /app/models/embeddings \
    /app/models/upscale_models \
    /app/output \
    /app/input \
    /app/custom_nodes
# Expose the application port
EXPOSE 8188
# Add health check for monitoring
HEALTHCHECK --interval=60s --timeout=10s --start-period=60s --retries=3 \
    CMD curl -f http://localhost:8188/ || exit 1
# Start ComfyUI
CMD ["python", "main.py", "--listen", "0.0.0.0", "--port", "8188", "--enable-cors-header"]

Important Notes:

  • ComfyUI listens on port 8188 by default
  • Klutch.sh will route external HTTP traffic to port 8188 in your container
  • Models should be stored in persistent volumes, not in the Docker image
  • GPU support requires appropriate hardware allocation on Klutch.sh
  • The --enable-cors-header flag allows API access from external domains
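A .dockerignore keeps large local files out of the Docker build context. A minimal sketch, matching the .gitignore used later in this guide:

```
# .dockerignore — exclude models and outputs from the build context
models/
output/
input/
custom_nodes/
.env
*.ckpt
*.safetensors
.git/
```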

Deploying to Klutch.sh

    1. Create Your Repository

      Create a new GitHub repository and add your Dockerfile:

      mkdir comfyui-deployment
      cd comfyui-deployment
      # Create Dockerfile (use one of the examples above)
      cat > Dockerfile << 'EOF'
      FROM python:3.10-slim
      RUN apt-get update && apt-get install -y \
          git \
          wget \
          curl \
          libgl1 \
          libglib2.0-0 \
          && rm -rf /var/lib/apt/lists/*
      WORKDIR /app
      RUN git clone https://github.com/comfyanonymous/ComfyUI.git /app
      RUN pip install --no-cache-dir -r requirements.txt
      RUN mkdir -p /app/models /app/output /app/input /app/custom_nodes
      EXPOSE 8188
      HEALTHCHECK --interval=60s --timeout=10s --start-period=60s --retries=3 \
          CMD curl -f http://localhost:8188/ || exit 1
      CMD ["python", "main.py", "--listen", "0.0.0.0", "--port", "8188", "--enable-cors-header"]
      EOF
      # Create .gitignore
      cat > .gitignore << 'EOF'
      models/
      output/
      input/
      custom_nodes/
      .env
      .DS_Store
      *.ckpt
      *.safetensors
      EOF
      # Create README
      cat > README.md << 'EOF'
      # ComfyUI Deployment
      This repository contains the Docker configuration for deploying ComfyUI on Klutch.sh.
      ## Models
      Add your Stable Diffusion models to the persistent volume mounted at `/app/models/checkpoints`.
      EOF
      # Initialize git and push
      git init
      git add .
      git commit -m "Initial ComfyUI deployment setup"
      git remote add origin https://github.com/YOUR_USERNAME/comfyui-deployment.git
      git push -u origin main
    2. Access the Klutch.sh Dashboard

      Navigate to klutch.sh/app and log in to your account.

    3. Create a New Project

      • Click “New Project” in the dashboard
      • Enter a project name (e.g., “AI Image Generation”)
      • Select your preferred region for deployment
    4. Create a New Application

      • Within your project, click “New App”
      • Name your application (e.g., “ComfyUI”)
      • Connect your GitHub repository containing the Dockerfile
    5. Configure Build Settings

      Klutch.sh automatically detects the Dockerfile in your repository root and builds your application using Docker.

    6. Configure Traffic Settings

      In the app settings:

      • Select HTTP as the traffic type
      • Set the internal port to 8188 (ComfyUI’s default port)
      • Klutch.sh will route external traffic to this port
    7. Set Up Persistent Storage

      ComfyUI requires persistent storage for models, outputs, and custom nodes. You’ll need to create multiple volumes:

      Models Volume (Critical - Large Storage Required):

      • In the app settings, navigate to the “Volumes” section
      • Click “Add Volume”
      • Set the mount path to /app/models
      • Set the volume size (recommended: 50GB minimum, 100GB+ for multiple models)
      • Click “Add” to attach the volume

      Output Volume (For Generated Images):

      • Click “Add Volume” again
      • Set the mount path to /app/output
      • Set the volume size (recommended: 20GB minimum, adjust based on usage)
      • Click “Add” to attach the volume

      Custom Nodes Volume (Optional):

      • Click “Add Volume” again
      • Set the mount path to /app/custom_nodes
      • Set the volume size (recommended: 5GB)
      • Click “Add” to attach the volume

      Storage Breakdown:

      • /app/models: Stable Diffusion checkpoints, VAE, LoRA, ControlNet models
      • /app/output: All generated images and intermediate results
      • /app/custom_nodes: Custom node installations and extensions
      • /app/input: Optional - for input images used in workflows
    8. Configure Environment Variables

      In the app settings, add environment variables for customization:

      Basic Configuration:

      COMFYUI_PORT=8188
      COMFYUI_HOST=0.0.0.0

      Memory and Performance Settings:

      PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
      COMFYUI_VRAM_MODE=auto

      Security (Recommended for Production):

      COMFYUI_ENABLE_AUTH=true
      COMFYUI_USERNAME=admin
      COMFYUI_PASSWORD=your-secure-password-here

      Optional API Configuration:

      COMFYUI_API_KEY=your-api-key-here
    9. Deploy Your Application

      • Review all settings
      • Click “Deploy” or “Create”
      • Klutch.sh will build the Docker image from your Dockerfile
      • Monitor the build logs in the dashboard (initial build may take 5-10 minutes)
      • Once deployed, your ComfyUI instance will be accessible at the provided URL
    10. Access Your ComfyUI Instance

      After deployment completes:

      • Visit your app URL (e.g., https://example-app.klutch.sh)
      • The ComfyUI web interface will load
      • Before generating images, you’ll need to add Stable Diffusion models to your persistent volume

Installing Models

ComfyUI requires Stable Diffusion models to generate images. Here’s how to add models to your deployment:

Downloading Models

Download Stable Diffusion models from these sources:

Base Models:

  • Stable Diffusion 1.5: Great for general purpose, smaller file size (~4GB)
  • Stable Diffusion XL (SDXL): Higher quality results, larger file size (~7GB)
  • Stable Diffusion 2.1: Alternative base model with different training data

Recommended Starting Model: Download Stable Diffusion 1.5 from Hugging Face.

Transferring Models to Your Deployment

Since models are stored in persistent volumes, you have several options:

Option 1: Upload via ComfyUI Web Interface (Easiest)

Some custom nodes provide file upload capabilities. Install a file manager node to upload models directly.

Option 2: Download Directly in Container

Access your container and download models:

# Access the container (use Klutch.sh console or SSH)
cd /app/models/checkpoints
# Download a model using wget
wget https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
# Or use curl
curl -L -o sd-v1-5.ckpt "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt"

Option 3: Pre-populate Before Deployment

If you have direct access to the persistent volume, you can pre-populate models before starting ComfyUI.

Model Directory Structure

Organize your models in the appropriate directories:

/app/models/
├── checkpoints/     # Main Stable Diffusion models (.ckpt, .safetensors)
├── vae/             # VAE models for improved quality
├── loras/           # LoRA models for style/character modifications
├── controlnet/      # ControlNet models for guided generation
├── embeddings/      # Textual inversion embeddings
└── upscale_models/  # Models for upscaling generated images

Example after adding models:

/app/models/checkpoints/
├── sd-v1-5-pruned-emaonly.ckpt
├── sd-xl-base-1.0.safetensors
└── realistic-vision-v5.safetensors

/app/models/loras/
├── detail-tweaker-lora.safetensors
└── lighting-lora.safetensors

Getting Started with ComfyUI

Once your ComfyUI instance is deployed and you’ve added models, here’s how to create your first image:

Creating Your First Image Generation Workflow

  1. Access the ComfyUI Interface

    Navigate to your ComfyUI URL (e.g., https://example-app.klutch.sh)

  2. Load the Default Workflow

    ComfyUI starts with a default workflow that includes:

    • Load Checkpoint: Loads your Stable Diffusion model
    • CLIP Text Encode (Prompt): Your positive prompt
    • CLIP Text Encode (Negative): Negative prompt to avoid unwanted features
    • Empty Latent Image: Sets the output image size
    • KSampler: The sampling algorithm and settings
    • VAE Decode: Converts latent space to visible image
    • Save Image: Saves the generated image
  3. Configure Your First Generation

    Select Your Model:

    • Click on the “Load Checkpoint” node
    • Select your downloaded model from the dropdown

    Set Your Prompt:

    Positive Prompt: "a beautiful landscape with mountains and a lake, sunset,
    highly detailed, 8k, photorealistic"
    Negative Prompt: "blurry, low quality, distorted, ugly, bad anatomy"

    Configure Generation Settings:

    • Width/Height: Start with 512x512 (or 1024x1024 for SDXL)
    • Batch Size: 1 (generate one image at a time)
    • Steps: 20-30 (more steps = higher quality but slower)
    • CFG Scale: 7-8 (how closely to follow the prompt)
    • Sampler: Euler or DPM++ 2M, optionally with the Karras scheduler (experiment to find preferences)
    • Seed: Random or set a specific number for reproducibility
  4. Generate Your First Image

    Click “Queue Prompt” to start generation. The process takes 30 seconds to several minutes depending on:

    • Image resolution
    • Number of steps
    • Hardware (GPU vs CPU)
    • Model size
  5. View Your Generated Image

    The generated image appears in the “Save Image” node and is saved to /app/output/
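The width and height set in the Empty Latent Image node determine the latent tensor the sampler actually works on. Stable Diffusion VAEs downsample spatially by 8x and use 4 latent channels, so dimensions should be multiples of 8. A quick sketch of the relationship:

```python
def latent_shape(width: int, height: int, batch: int = 1) -> tuple:
    """Latent tensor shape for SD-style VAEs (8x spatial downsample, 4 channels)."""
    if width % 8 or height % 8:
        raise ValueError("width and height should be multiples of 8")
    return (batch, 4, height // 8, width // 8)

print(latent_shape(512, 512))    # → (1, 4, 64, 64)   typical for SD 1.5
print(latent_shape(1024, 1024))  # → (1, 4, 128, 128) typical for SDXL
```

This is why small resolution increases cost more than they look: doubling width and height quadruples the latent area the sampler must process at every step.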

Example Workflow: Portrait Generation

Here’s a practical example workflow for generating portraits:

Positive Prompt:
"portrait of a woman, professional photography, studio lighting,
detailed eyes, detailed skin texture, elegant, high quality"
Negative Prompt:
"low quality, blurry, distorted face, extra limbs, bad anatomy,
deformed, watermark, text"
Settings:
- Model: Realistic Vision V5
- Size: 768x768
- Steps: 25
- CFG Scale: 7.5
- Sampler: DPM++ 2M Karras
- Seed: Random

Example Workflow: Landscape Generation

Positive Prompt:
"epic fantasy landscape, ancient castle on a cliff, dramatic clouds,
golden hour lighting, cinematic composition, ultra detailed, 8k"
Negative Prompt:
"people, humans, modern buildings, cars, low quality, blurry"
Settings:
- Model: Stable Diffusion 1.5
- Size: 1024x512 (wide aspect)
- Steps: 30
- CFG Scale: 8
- Sampler: Euler a
- Seed: Random

Understanding the Node System

ComfyUI’s power comes from connecting nodes to create custom workflows:

Basic Node Types:

  • Loaders: Load models, LoRAs, embeddings
  • Conditioning: Handle prompts and guidance
  • Sampling: Control the generation process
  • Latent: Work with latent space representations
  • Image: Process and save final images

Creating Custom Workflows:

  1. Right-click on canvas → “Add Node”
  2. Browse available nodes by category
  3. Connect nodes by dragging from output to input
  4. Configure each node’s parameters
  5. Save workflows as JSON for reuse
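Because saved workflows are plain JSON, they can also be edited programmatically. A minimal Python sketch (the node ID and input names follow the API export format of a hypothetical workflow; check your own export for the real ones):

```python
import json

def set_ksampler_inputs(workflow: dict, seed: int, steps: int) -> dict:
    """Return a copy of the workflow with every KSampler's seed and steps replaced."""
    copied = json.loads(json.dumps(workflow))  # deep copy via JSON round-trip
    for node in copied.values():
        if node.get("class_type") == "KSampler":
            node["inputs"]["seed"] = seed
            node["inputs"]["steps"] = steps
    return copied

# Minimal example with one hypothetical KSampler node
workflow = {"3": {"class_type": "KSampler", "inputs": {"seed": 0, "steps": 20, "cfg": 7.0}}}
print(set_ksampler_inputs(workflow, seed=42, steps=25)["3"]["inputs"])
# → {'seed': 42, 'steps': 25, 'cfg': 7.0}
```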

Environment Variables Reference

Complete list of useful environment variables for ComfyUI:

Core Settings

# Port configuration (match your Dockerfile)
COMFYUI_PORT=8188
# Host binding
COMFYUI_HOST=0.0.0.0
# Enable CORS for API access
COMFYUI_ENABLE_CORS=true

Performance Settings

# Memory management
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
# VRAM management mode (auto, low, high)
COMFYUI_VRAM_MODE=auto
# CPU threads
COMFYUI_NUM_THREADS=4

Security Settings

# Authentication (if implemented via custom nodes)
COMFYUI_ENABLE_AUTH=true
COMFYUI_USERNAME=admin
COMFYUI_PASSWORD=your-secure-password
# API access control
COMFYUI_API_KEY=your-api-key-here

Path Configuration

# Custom paths (if not using default structure)
COMFYUI_MODELS_PATH=/app/models
COMFYUI_OUTPUT_PATH=/app/output
COMFYUI_INPUT_PATH=/app/input

Set these in the Klutch.sh dashboard under Environment Variables. Mark sensitive values (passwords, API keys) as secrets.


Persistent Storage and Data Management

Storage Layout

ComfyUI requires significant storage for models and outputs:

/app/
├── models/
│   ├── checkpoints/     # 4-7GB per model
│   ├── vae/             # 100-300MB per VAE
│   ├── loras/           # 10-200MB per LoRA
│   ├── controlnet/      # 1-5GB per ControlNet
│   ├── embeddings/      # 1-100MB per embedding
│   └── upscale_models/  # 1-300MB per upscaler
├── output/              # Generated images
├── input/               # Source images for img2img
└── custom_nodes/        # Extension installations

Storage Sizing Guide

Minimal Setup (Testing):

  • Models Volume: 10GB (1-2 base models)
  • Output Volume: 5GB
  • Total: ~15GB

Standard Setup (Personal Use):

  • Models Volume: 50GB (multiple base models, LoRAs, ControlNet)
  • Output Volume: 20GB
  • Custom Nodes: 5GB
  • Total: ~75GB

Production Setup (Heavy Use):

  • Models Volume: 100-200GB (extensive model library)
  • Output Volume: 50-100GB
  • Custom Nodes: 10GB
  • Total: ~160-310GB

Model Storage Considerations

Model File Sizes:

  • Stable Diffusion 1.5: ~4GB
  • Stable Diffusion XL: ~7GB
  • LoRA models: 10-200MB each
  • ControlNet models: 1-5GB each
  • VAE models: 100-300MB each

Storage Optimization:

  • Use .safetensors format when available (safer and sometimes smaller)
  • Remove unused models periodically
  • Compress old outputs or move to external storage
  • Use model pruning tools to reduce checkpoint sizes
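Output cleanup can be scripted. A sketch that lists generated images older than a cutoff so you can review them before archiving (the output path is illustrative):

```python
import time
from pathlib import Path

def stale_outputs(output_dir: str, max_age_days: int = 30) -> list:
    """List generated PNGs older than max_age_days (candidates for archival)."""
    cutoff = time.time() - max_age_days * 86400
    return sorted(p for p in Path(output_dir).rglob("*.png")
                  if p.stat().st_mtime < cutoff)

# Review the list before archiving or deleting anything
for image in stale_outputs("/app/output", max_age_days=30):
    print(image, image.stat().st_size // 1024, "KiB")
```

Listing rather than deleting is deliberate: pair it with a manual review or an archive step before anything destructive.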

Backup Recommendations

  1. Regular Backups: Schedule backups of your models volume (expensive to re-download)
  2. Workflow Backups: Export and save your custom workflows (JSON files)
  3. Output Archival: Move completed projects to long-term storage
  4. Custom Nodes: Document installed custom nodes for easy restoration

Production Best Practices

Security

  1. Access Control

    • Implement authentication via custom nodes or reverse proxy
    • Use strong, unique passwords
    • Consider placing behind VPN for sensitive work
    • Limit public access if handling proprietary content
  2. API Security

    • Use API keys if exposing ComfyUI API
    • Implement rate limiting to prevent abuse
    • Monitor API usage for unusual patterns
  3. Model Security

    • Be cautious with community models (scan for malicious code)
    • Verify model sources before downloading
    • Keep track of model licenses and usage terms
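Verifying a download against the checksum published on the model page is a quick integrity check. A sketch that streams a multi-gigabyte file through SHA-256 without loading it into memory (the path and hash comparison are illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a (potentially multi-GB) model file through SHA-256 in 1MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum published on the model's download page, e.g.:
# sha256_of(Path("/app/models/checkpoints/sd-v1-5.ckpt")) == expected_hash
```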

Performance Optimization

  1. Resource Allocation

    • CPU-Only: Minimum 4 CPU cores, 8GB RAM (very slow generation)
    • GPU-Enabled: Minimum 1 GPU, 8GB VRAM, 16GB RAM (recommended)
    • Production: Multiple GPUs, 16-32GB RAM for parallel processing
  2. Generation Optimization

    • Use appropriate image sizes (512x512 for SD1.5, 1024x1024 for SDXL)
    • Optimize step count (20-30 steps often sufficient)
    • Use efficient samplers (DPM++ 2M Karras is fast and high quality)
    • Enable model offloading for limited VRAM
  3. Model Loading

    • Keep frequently used models in memory
    • Use model switching strategies to minimize reload time
    • Consider using quantized models for faster loading

Monitoring and Maintenance

  1. System Monitoring

    • Track memory usage (both RAM and VRAM)
    • Monitor storage capacity and growth
    • Set up alerts for resource exhaustion
    • Review generation times to detect performance degradation
  2. Regular Maintenance

    • Update ComfyUI to latest version monthly
    • Clean output directory of old generations
    • Review and update custom nodes
    • Test backup restoration procedures
  3. Logging

    • Enable detailed logging for troubleshooting
    • Monitor generation errors and failures
    • Track API usage if exposed
    • Log model loading times for performance analysis

Troubleshooting

Common Issues and Solutions

Issue: “No models found” error

Solution:

  • Verify models are in /app/models/checkpoints/
  • Check file permissions (container needs read access)
  • Ensure model files are complete (not corrupted downloads)
  • Refresh the ComfyUI interface
  • Check volume is properly mounted

Issue: Out of memory errors

Solution:

  • Reduce image resolution (try 512x512)
  • Lower batch size to 1
  • Enable model offloading: Use --lowvram or --normalvram flags
  • Increase allocated memory in Klutch.sh
  • Close other memory-intensive processes
  • Use smaller models (SD 1.5 instead of SDXL)

Issue: Slow generation times

Solution:

  • Verify GPU is being used (check CUDA availability)
  • Reduce number of steps (try 20 instead of 50)
  • Use faster samplers (Euler, DPM++ 2M)
  • Ensure model is loaded in VRAM
  • Check for CPU bottlenecks
  • Consider upgrading to GPU-enabled plan

Issue: ComfyUI won’t start or returns errors

Solution:

  • Check container logs in Klutch.sh dashboard
  • Verify port 8188 is correctly configured
  • Ensure sufficient memory allocation (8GB minimum)
  • Check for Python dependency conflicts
  • Verify Docker image built successfully
  • Review health check status

Issue: Models won’t load

Solution:

  • Verify model file format (.ckpt or .safetensors)
  • Check model is in correct directory
  • Ensure model file isn’t corrupted (verify checksum)
  • Check for sufficient disk space
  • Review file permissions
  • Try renaming model file (remove special characters)

Issue: Generated images are corrupted or poor quality

Solution:

  • Verify model is fully downloaded
  • Check VAE is loaded correctly
  • Ensure prompts are well-formed
  • Adjust CFG scale (try 7-8)
  • Increase sampling steps (try 25-30)
  • Use different sampler (DPM++ 2M Karras)
  • Check for VRAM overflow errors

Issue: Custom nodes not working

Solution:

  • Verify custom nodes are installed in /app/custom_nodes/
  • Check for dependency conflicts
  • Review custom node documentation for requirements
  • Restart ComfyUI after installing nodes
  • Check custom node compatibility with ComfyUI version
  • Review container logs for node loading errors

Debug Commands

Access container via Klutch.sh console:

# Check ComfyUI is running
ps aux | grep python
# Verify models directory
ls -lh /app/models/checkpoints/
# Check disk space
df -h
# View recent logs (ComfyUI logs to stdout by default; use the Klutch.sh
# dashboard log viewer, or tail a file here only if you configured one)
# Test GPU availability (if applicable)
python -c "import torch; print(torch.cuda.is_available())"
# Check memory usage
free -h
# List loaded Python packages
pip list

Advanced Configuration

Installing Custom Nodes

ComfyUI’s functionality can be extended with custom nodes:

Popular Custom Nodes:

  • ComfyUI-Manager: Node package manager for easy installation
  • ControlNet Auxiliary Preprocessors: Enhanced ControlNet support
  • Ultimate SD Upscale: Advanced upscaling workflows
  • Image Saver: Enhanced image saving options
  • WAS Node Suite: Extensive utility nodes

Installing Custom Nodes:

# Access your container
cd /app/custom_nodes
# Clone a custom node repository
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
# Install dependencies if required
cd ComfyUI-Manager
pip install -r requirements.txt
# Restart ComfyUI to load new nodes

Using LoRA Models

LoRA (Low-Rank Adaptation) models add styles or concepts without retraining base models:

Adding LoRA to Workflow:

  1. Download LoRA models to /app/models/loras/
  2. Add “Load LoRA” node to your workflow
  3. Connect it between your checkpoint loader and conditioning
  4. Set LoRA strength (0.5-1.0 typically)

Example LoRA Workflow:

Load Checkpoint → Load LoRA → CLIP Text Encode → KSampler → VAE Decode

ControlNet Integration

ControlNet provides precise control over generation using reference images:

Setup:

  1. Download ControlNet models to /app/models/controlnet/
  2. Download preprocessor models (OpenPose, Canny, Depth, etc.)
  3. Add ControlNet nodes to workflow
  4. Connect preprocessor → ControlNet → conditioning

Use Cases:

  • Pose Control: Guide character poses with OpenPose
  • Edge Control: Use Canny edge detection for composition
  • Depth Control: Maintain depth structure from reference
  • Scribble Control: Sketch rough composition

API Access

ComfyUI provides a REST API for automation:

Submitting a Workflow via API:

# Export the workflow in API format from the ComfyUI UI
# The /prompt endpoint expects the workflow wrapped in a "prompt" field,
# e.g. payload.json contains {"prompt": <exported workflow>}
curl -X POST https://example-app.klutch.sh/prompt \
  -H "Content-Type: application/json" \
  -d @payload.json
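The same call can be made from Python for scripted pipelines. A minimal standard-library sketch (the base URL is a placeholder; the body shape assumes /prompt expects the workflow under a "prompt" field):

```python
import json
import urllib.request

def build_prompt_payload(workflow: dict, client_id: str = "docs-example") -> bytes:
    """Wrap an API-format workflow into the JSON body /prompt expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(base_url: str, workflow: dict) -> dict:
    """POST a workflow to /prompt and return the server's JSON response."""
    request = urllib.request.Request(
        base_url.rstrip("/") + "/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

Load a workflow exported in API format (json.load on the exported file) and pass it to queue_prompt along with your instance URL.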

Checking Generation Status:

curl https://example-app.klutch.sh/history

Downloading Generated Image:

curl "https://example-app.klutch.sh/view?filename=image.png" \
  -o output.png

Multi-Step Workflows

Create complex generation pipelines:

Example: Upscale Workflow

Generate Base Image (512x512)
Upscale with Latent Upscaler (1024x1024)
High-res Fix Pass
Final Upscale (2048x2048)
Save Image

Example: ControlNet + LoRA Workflow

Load Checkpoint
Load LoRA (Style)
Load ControlNet (Pose)
Process Reference Image (OpenPose)
Apply ControlNet
Generate with Combined Conditioning
Save Image

Scaling Considerations

As your usage grows:

  1. Vertical Scaling

    • Increase CPU/RAM for better performance
    • Add GPU for dramatically faster generation
    • Recommended: 4+ vCPU, 16GB RAM, 1+ GPU (16GB VRAM)
  2. Storage Scaling

    • Expand models volume as you add more checkpoints
    • Increase output volume based on generation volume
    • Archive old outputs to external storage
    • Implement automated cleanup policies
  3. Workflow Organization

    • Create workflow templates for common tasks
    • Document custom workflows
    • Use version control for workflow JSON files
    • Share workflows across team members
  4. Batch Processing

    • Use batch mode for multiple generations
    • Queue multiple prompts for unattended generation
    • Consider multiple instances for parallel processing
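Queueing many prompts unattended can be scripted by sweeping the seed across copies of one workflow. A sketch (the node structure is hypothetical; each returned body is a ready-to-POST /prompt payload):

```python
import json

def seed_sweep_payloads(workflow: dict, seeds: list) -> list:
    """Build one /prompt request body per seed for unattended batch generation."""
    bodies = []
    for seed in seeds:
        wf = json.loads(json.dumps(workflow))  # deep copy via JSON round-trip
        for node in wf.values():
            if node.get("class_type") == "KSampler":
                node["inputs"]["seed"] = seed
        bodies.append(json.dumps({"prompt": wf}).encode("utf-8"))
    return bodies

example = {"3": {"class_type": "KSampler", "inputs": {"seed": 0, "steps": 20}}}
print(len(seed_sweep_payloads(example, [101, 102, 103])))  # → 3
```

Each body can then be POSTed to your instance's /prompt endpoint; the queue processes them in order without further interaction.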


Next Steps

After successfully deploying ComfyUI on Klutch.sh:

  1. Download Base Models: Start with Stable Diffusion 1.5 or SDXL
  2. Test Default Workflow: Generate your first image with the built-in workflow
  3. Explore Custom Nodes: Install ComfyUI-Manager for easier node management
  4. Learn Node Basics: Understand how different nodes connect and interact
  5. Experiment with Settings: Try different samplers, steps, and CFG scales
  6. Build Custom Workflows: Create reusable workflows for your specific needs
  7. Add LoRAs: Download style-specific LoRAs to expand creative options
  8. Try ControlNet: Use reference images for precise control
  9. Optimize Performance: Monitor resource usage and adjust settings
  10. Join Community: Participate in ComfyUI communities to learn advanced techniques

Deploying ComfyUI on Klutch.sh provides a powerful, flexible platform for AI image generation. Whether you’re creating art, developing products, or experimenting with AI capabilities, Klutch.sh ensures your ComfyUI instance is accessible, scalable, and reliable.