Deploying ConvertX

Introduction

ConvertX is a robust, open-source file conversion and media processing platform that brings powerful transformation capabilities to your applications. Whether you’re building a document management system, media library, or content platform, ConvertX provides a flexible infrastructure for converting files between formats, optimizing media, and processing documents at scale.

At its core, ConvertX handles the complexity of file conversion—something every modern application needs but few want to build themselves. It abstracts away the pain points of managing multiple conversion libraries, handling various file formats, managing resource-intensive processing tasks, and scaling conversion operations.

What Makes ConvertX Special:

Format Agnostic: Support for dozens of file formats across documents, images, video, audio, and more. Add new formats easily through plugins without modifying core code.

Scalable Architecture: Queue-based processing architecture lets you handle conversion requests at any scale, from single-server deployments to distributed systems processing thousands of conversions per minute.

Web-Based Interface: Intuitive dashboard for managing conversions, monitoring jobs, and analyzing conversion history. No command-line expertise required.

API-First Design: RESTful API makes integration into your applications straightforward. Fire-and-forget webhooks notify your system when conversions complete.

Security Hardened: Run untrusted file processing in isolated containers. Configurable resource limits prevent runaway conversions from consuming server resources.

Format Flexibility: Accept files in one format, output in many. Create conversion pipelines that combine multiple operations—resize images, extract PDF pages, optimize video.

Real-Time Monitoring: Watch conversion progress in real-time. Dashboard shows queue depth, processing times, success rates, and error details.

Extensible: Plugin system lets you add custom converters, pre/post-processing hooks, and integration points for your specific workflows.

ConvertX is perfect for any platform that needs reliable, scalable file conversion. Whether you’re building a document library, photo sharing service, video hosting platform, or workflow automation system, ConvertX handles the heavy lifting of file transformation while you focus on building features users love.

This guide walks you through deploying ConvertX on Klutch.sh using Docker. You’ll learn how to configure the conversion services, set up processing pipelines, implement security best practices, optimize performance for high-volume conversions, and troubleshoot common issues in production.

Prerequisites

Before deploying ConvertX to Klutch.sh, ensure you have:

  • A Klutch.sh account with dashboard access
  • A GitHub account for repository hosting
  • Docker installed locally for testing (optional but recommended)
  • Understanding of file formats and media processing concepts
  • Basic knowledge of HTTP APIs and webhooks
  • Familiarity with message queuing and job processing
  • A domain name for your conversion API (recommended)

Understanding ConvertX Architecture

Technology Stack

ConvertX is built on modern, scalable technologies:

Core Platform:

  • Node.js or Python backend (depends on version)
  • Express.js or FastAPI for REST API
  • Bull/RabbitMQ for job queue management
  • WebSocket for real-time status updates
  • PostgreSQL for metadata and job tracking

Conversion Engines:

  • FFmpeg for video and audio conversion
  • ImageMagick for image processing and transformation
  • Ghostscript for PDF manipulation
  • LibreOffice for document conversion
  • Pandoc for document format conversion

Storage:

  • Local filesystem for temporary working directory
  • Persistent volume for converted files
  • Optional S3/cloud storage integration
  • Configurable cleanup policies for temporary files

Monitoring:

  • Real-time progress tracking via WebSocket
  • Conversion metrics and analytics
  • Error logging and debugging
  • Queue depth and performance monitoring

Core Components

API Server: REST API for submitting conversion jobs and retrieving results

Job Queue: Queue management system for processing conversions sequentially or in parallel

Worker Processes: Background processes that execute actual format conversions

Conversion Engines: External tools (FFmpeg, ImageMagick, etc.) that perform transformations

File Storage: Persistent storage for input files, output files, and working directories

Webhook System: Real-time notifications when conversions complete or fail

Dashboard: Web interface for monitoring and managing conversion jobs

Database: Store job metadata, conversion history, and system configuration

Installation and Setup

Step 1: Create Your Project Directory

Start with a dedicated directory for your ConvertX deployment:

Terminal window
mkdir convertx-deployment
cd convertx-deployment
git init

Step 2: Create Directory Structure

Set up the necessary directories for a production-ready deployment:

Terminal window
mkdir -p uploads converted logs config data

Your project structure will look like:

convertx-deployment/
├── Dockerfile
├── docker-entrypoint.sh
├── .env.example
├── .dockerignore
├── .gitignore
├── uploads/
│   └── (temporary input files)
├── converted/
│   └── (converted output files)
├── logs/
│   └── (application logs)
├── config/
│   └── (configuration files)
└── data/
    └── (database and metadata)

Step 3: Create the Dockerfile

Create a Dockerfile for a production-ready ConvertX deployment with all necessary conversion tools:

# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
# Copy dependency manifests
COPY package*.json ./
# Install production dependencies
RUN npm ci --omit=dev

# Runtime stage
FROM node:18-alpine
# Install conversion tools
RUN apk add --no-cache \
    ffmpeg \
    imagemagick \
    ghostscript \
    libreoffice \
    pandoc \
    graphicsmagick \
    webp \
    curl \
    postgresql-client \
    tini
# Create app user
RUN addgroup -g 1000 convertx && \
    adduser -D -u 1000 -G convertx convertx
WORKDIR /app
# Copy installed dependencies from the build stage
COPY --from=builder --chown=convertx:convertx /app/node_modules ./node_modules
# Copy application files
COPY --chown=convertx:convertx package*.json ./
COPY --chown=convertx:convertx . .
# Create necessary directories
RUN mkdir -p uploads converted logs config data && \
    chown -R convertx:convertx uploads converted logs config data
# Switch to non-root user
USER convertx
# Expose API and WebSocket ports
EXPOSE 3000
EXPOSE 8080
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
    CMD curl -f http://localhost:3000/health || exit 1
# Use tini for proper signal handling
ENTRYPOINT ["/sbin/tini", "--"]
# Start application
CMD ["node", "server.js"]

Step 4: Create Environment Configuration

Create .env.example for configuration:

Terminal window
# Server configuration
NODE_ENV=production
PORT=3000
LOG_LEVEL=info
# API configuration
API_KEY=your-api-key-here
API_RATE_LIMIT=1000
# Database configuration
DATABASE_URL=postgresql://convertx:password@localhost:5432/convertx
DATABASE_POOL_SIZE=20
# File storage
UPLOAD_DIR=/app/uploads
CONVERTED_DIR=/app/converted
MAX_FILE_SIZE=1073741824
TEMP_DIR=/tmp/convertx
# Conversion settings
CONCURRENT_JOBS=4
JOB_TIMEOUT=3600
QUEUE_TYPE=memory
# FFmpeg settings
FFMPEG_THREADS=4
FFMPEG_LOG_LEVEL=error
# ImageMagick settings
IMAGEMAGICK_QUALITY=85
IMAGEMAGICK_DENSITY=150
# Webhook configuration
WEBHOOK_TIMEOUT=30
WEBHOOK_RETRIES=3
# Security
ENABLE_CORS=true
CORS_ORIGINS=*
ENABLE_AUTH=true
# Optional: S3 integration
S3_ENABLED=false
S3_BUCKET=convertx-output
S3_REGION=us-east-1
S3_ACCESS_KEY=
S3_SECRET_KEY=
# Optional: Cleanup
AUTO_CLEANUP_ENABLED=true
AUTO_CLEANUP_AGE_DAYS=7

Step 5: Create Application Server

Create server.js for your Node.js application:

const express = require('express');
const cors = require('cors');
const helmet = require('helmet');
const compression = require('compression');
const multer = require('multer');
const path = require('path');
require('dotenv').config();

const app = express();
const PORT = process.env.PORT || 3000;
const MAX_FILE_SIZE = parseInt(process.env.MAX_FILE_SIZE, 10) || 1073741824;

// Security middleware
app.use(helmet());
app.use(compression());
app.use(cors({
  origin: process.env.CORS_ORIGINS?.split(',') || '*'
}));

// Body parsing
app.use(express.json({ limit: '10mb' }));
app.use(express.urlencoded({ limit: '10mb', extended: true }));

// File upload configuration
const storage = multer.diskStorage({
  destination: (req, file, cb) => {
    cb(null, process.env.UPLOAD_DIR || '/app/uploads');
  },
  filename: (req, file, cb) => {
    const uniqueSuffix = Date.now() + '-' + Math.round(Math.random() * 1e9);
    cb(null, file.fieldname + '-' + uniqueSuffix + path.extname(file.originalname));
  }
});

const upload = multer({
  storage: storage,
  limits: {
    fileSize: MAX_FILE_SIZE
  }
});

// Health check endpoint
app.get('/health', (req, res) => {
  res.status(200).json({
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    environment: process.env.NODE_ENV
  });
});

// Conversion endpoints
app.post('/api/v1/convert', upload.single('file'), async (req, res) => {
  try {
    if (!req.file) {
      return res.status(400).json({ error: 'No file uploaded' });
    }
    const { format, quality, options } = req.body;
    if (!format) {
      return res.status(400).json({ error: 'Output format not specified' });
    }
    // Mock conversion job creation
    const jobId = `job-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
    res.status(202).json({
      jobId: jobId,
      filename: req.file.filename,
      originalName: req.file.originalname,
      fileSize: req.file.size,
      targetFormat: format,
      status: 'queued',
      createdAt: new Date().toISOString()
    });
  } catch (error) {
    console.error('Conversion error:', error);
    res.status(500).json({
      error: 'Conversion failed',
      message: process.env.NODE_ENV === 'development' ? error.message : undefined
    });
  }
});

// Get job status
app.get('/api/v1/jobs/:jobId', (req, res) => {
  const { jobId } = req.params;
  res.status(200).json({
    jobId: jobId,
    status: 'completed',
    progress: 100,
    output: {
      format: 'mp4',
      size: 5242880,
      duration: 60,
      url: `/api/v1/download/${jobId}`
    },
    createdAt: new Date().toISOString(),
    completedAt: new Date().toISOString()
  });
});

// Download converted file
app.get('/api/v1/download/:jobId', (req, res) => {
  res.json({
    message: 'Download endpoint',
    jobId: req.params.jobId,
    instructions: 'Implement file download logic'
  });
});

// Conversion formats endpoint
app.get('/api/v1/formats', (req, res) => {
  res.json({
    image: ['jpeg', 'png', 'webp', 'gif', 'tiff', 'bmp'],
    video: ['mp4', 'webm', 'mkv', 'avi', 'mov', 'flv'],
    audio: ['mp3', 'wav', 'aac', 'flac', 'ogg'],
    document: ['pdf', 'docx', 'xlsx', 'pptx', 'txt', 'html'],
    archive: ['zip', 'tar', 'gz', 'rar']
  });
});

// Error handling
app.use((err, req, res, next) => {
  console.error('Error:', err);
  res.status(err.status || 500).json({
    error: 'Internal Server Error',
    message: process.env.NODE_ENV === 'development' ? err.message : undefined
  });
});

// Start server
const server = app.listen(PORT, () => {
  console.log(`ConvertX API server running on port ${PORT}`);
  console.log(`Environment: ${process.env.NODE_ENV || 'development'}`);
  console.log(`Max file size: ${Math.round(MAX_FILE_SIZE / 1024 / 1024)} MB`);
  console.log(`Concurrent jobs: ${process.env.CONCURRENT_JOBS || 4}`);
});

// Graceful shutdown: stop accepting connections, then exit
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully');
  server.close(() => process.exit(0));
});

Step 6: Create package.json

Create package.json with necessary dependencies:

{
  "name": "convertx-deployment",
  "version": "1.0.0",
  "description": "ConvertX file conversion platform deployment",
  "main": "server.js",
  "scripts": {
    "start": "node server.js",
    "dev": "nodemon server.js",
    "build": "echo 'Build complete'"
  },
  "keywords": ["conversion", "ffmpeg", "media", "files"],
  "author": "Your Name",
  "license": "MIT",
  "dependencies": {
    "express": "^4.18.2",
    "multer": "^1.4.5-lts.1",
    "helmet": "^7.0.0",
    "compression": "^1.7.4",
    "cors": "^2.8.5",
    "dotenv": "^16.0.3",
    "bull": "^4.10.4",
    "pg": "^8.10.0",
    "pg-promise": "^11.3.0",
    "axios": "^1.4.0",
    "winston": "^3.8.2"
  },
  "devDependencies": {
    "nodemon": "^2.0.20"
  },
  "engines": {
    "node": ">=16.0.0"
  }
}

Step 7: Create .dockerignore

Create .dockerignore:

.git
.gitignore
.env
.env.local
.env.*.local
.DS_Store
node_modules
npm-debug.log
yarn-error.log
uploads/*
converted/*
logs/*
.vscode
.idea
README.md
docs/
tests/
coverage/
.eslintrc

Step 8: Create .gitignore

Create .gitignore:

# Environment
.env
.env.local
.env.*.local
# Dependencies (keep package-lock.json committed so `npm ci` works in the Docker build)
node_modules/
yarn.lock
# Logs
logs/
*.log
npm-debug.log*
# Files and conversions
uploads/
converted/
temp/
*.tmp
# Runtime
tmp/
.cache/
# IDE
.vscode/
.idea/
*.swp
*.swo
# OS
.DS_Store
Thumbs.db
# Database
data/
*.db

Step 9: Commit to GitHub

Push your configuration to GitHub:

Terminal window
git add Dockerfile server.js package.json .env.example .dockerignore .gitignore
git commit -m "Add ConvertX file conversion platform Docker configuration for Klutch.sh deployment"
git branch -M main
git remote add origin https://github.com/yourusername/convertx-deployment.git
git push -u origin main

Deploying to Klutch.sh

Now let’s deploy ConvertX to Klutch.sh with proper configuration and persistent storage for files.

Deployment Steps

  1. Access Klutch.sh Dashboard

    Navigate to klutch.sh/app and sign in with your GitHub account.

  2. Create a New Project

    In the Projects section, click “Create Project” and name it something like “ConvertX File Conversion” or “Media Processing Platform”.

  3. Create a New App

    Within your project, click “Create App” to begin configuring your ConvertX deployment.

  4. Connect Your Repository
    • Select GitHub as your Git source
    • Choose your repository with the ConvertX Dockerfile
    • Select the branch to deploy (typically main)

    Klutch.sh will automatically detect the Dockerfile in your repository root.

  5. Configure Traffic Settings
    • Traffic Type: Select HTTP (ConvertX runs as a web API and dashboard)
    • Internal Port: Set to 3000 (Node.js default port for the API server)
  6. Configure Environment Variables

    Add the following environment variables to configure your ConvertX instance:

    Server Configuration:

    Terminal window
    NODE_ENV=production
    PORT=3000
    LOG_LEVEL=info

    API Configuration:

    Terminal window
    API_KEY=your-secure-api-key-here
    API_RATE_LIMIT=1000
    ENABLE_CORS=true
    ENABLE_AUTH=true

    File Storage Configuration:

    Terminal window
    UPLOAD_DIR=/app/uploads
    CONVERTED_DIR=/app/converted
    MAX_FILE_SIZE=1073741824
    TEMP_DIR=/tmp/convertx

    Processing Configuration:

    Terminal window
    CONCURRENT_JOBS=4
    JOB_TIMEOUT=3600
    QUEUE_TYPE=memory

    Conversion Tool Settings:

    Terminal window
    # FFmpeg settings
    FFMPEG_THREADS=4
    FFMPEG_LOG_LEVEL=error
    # ImageMagick settings
    IMAGEMAGICK_QUALITY=85
    IMAGEMAGICK_DENSITY=150

    Webhook Configuration:

    Terminal window
    WEBHOOK_TIMEOUT=30
    WEBHOOK_RETRIES=3

    Optional: S3 Storage Integration:

    Terminal window
    S3_ENABLED=false
    S3_BUCKET=convertx-output
    S3_REGION=us-east-1
    S3_ACCESS_KEY=your-access-key
    S3_SECRET_KEY=your-secret-key

    Optional: Automatic Cleanup:

    Terminal window
    AUTO_CLEANUP_ENABLED=true
    AUTO_CLEANUP_AGE_DAYS=7

    Security Note:

    • Generate strong API keys for authentication
    • Keep MAX_FILE_SIZE reasonable for your use case
    • Adjust CONCURRENT_JOBS based on server resources
    • Set appropriate JOB_TIMEOUT to prevent runaway conversions
    • Enable S3 integration for large-scale deployments
    • Set AUTO_CLEANUP to manage disk space
  7. Configure Persistent Storage

    ConvertX needs persistent storage for input files, output files, and working directories:

    Volume 1 - Uploads Directory:

    • Mount Path: /app/uploads
    • Size: 20-100 GB (depends on expected file sizes and volume)

    Volume 2 - Converted Files:

    • Mount Path: /app/converted
    • Size: 50-200 GB (depends on output file retention needs)

    Guidelines for volume sizes:

    • Small deployment (< 1000 conversions/month): 20 GB uploads, 50 GB converted
    • Medium deployment (1000-10000 conversions/month): 50 GB uploads, 100 GB converted
    • Large deployment (10000-100000 conversions/month): 100 GB uploads, 200 GB converted
    • Enterprise (100000+ conversions/month): Consider S3 storage backend

    Important: Without persistent storage, all uploaded files and converted outputs will be lost on container restart. This is critical for production deployments where file retention is essential.

  8. Configure Compute Resources

    Choose appropriate resources based on expected conversion volume and file sizes:

    Small Deployment (< 100 conversions/month):

    • CPU: 2 cores
    • RAM: 2 GB
    • Suitable for: Small applications, internal tools

    Medium Deployment (100-1000 conversions/month):

    • CPU: 4 cores
    • RAM: 4 GB
    • Suitable for: Growing platforms, team applications

    Large Deployment (1000-10000 conversions/month):

    • CPU: 8 cores
    • RAM: 8 GB
    • Suitable for: Public platforms, high-volume services

    Enterprise (10000+ conversions/month):

    • CPU: 16+ cores
    • RAM: 16+ GB
    • Suitable for: Large-scale platforms, heavy processing needs

    Note: ConvertX is CPU and I/O intensive. More cores enable parallel conversions. Monitor actual usage and adjust accordingly.

  9. Deploy the Application

    Click “Create” to start the deployment. Klutch.sh will:

    1. Clone your repository
    2. Build the Docker image (installs FFmpeg, ImageMagick, and other tools)
    3. Configure environment variables
    4. Set up persistent storage volumes
    5. Start the ConvertX application
    6. Assign a public URL (e.g., example-app.klutch.sh)
    7. Configure automatic HTTPS with SSL certificates

    Initial deployment may take 10-15 minutes due to large binary dependencies (FFmpeg, etc.).

  10. Monitor Deployment Progress

    Track the deployment:

    • Go to the Deployments tab
    • View real-time build logs (build will take longer due to conversion tools)
    • Wait for status to show “Running”
    • Verify all environment variables are correctly set
    • Note that the Docker image may be large (2-3 GB) due to conversion tools
  11. Test Conversion Endpoints

    After deployment, verify ConvertX is working:

    1. Health Check:

      Terminal window
      curl https://example-app.klutch.sh/health

      Should return JSON with healthy status.

    2. List Available Formats:

      Terminal window
      curl https://example-app.klutch.sh/api/v1/formats

      Should return supported conversion formats.

    3. Test Conversion:

      Terminal window
      curl -X POST \
      -H "Authorization: Bearer YOUR_API_KEY" \
      -F "file=@test.jpg" \
      -F "format=png" \
      https://example-app.klutch.sh/api/v1/convert

      Should return job details with jobId.

    4. Check Conversion Status:

      Terminal window
      curl https://example-app.klutch.sh/api/v1/jobs/JOB_ID

      Should return job status and progress.

    5. View Logs:

      • Check application logs in Klutch.sh dashboard
      • Look for any conversion errors or warnings
      • Verify file storage is accessible
  12. Configure Your Domain

    Add your custom domain to Klutch.sh:

    1. In Klutch.sh dashboard, go to Domains
    2. Click “Add Custom Domain”
    3. Enter your domain (e.g., api.convert.example.com)
    4. Update DNS with CNAME record pointing to example-app.klutch.sh
    5. Wait for DNS propagation and SSL certificate provisioning

    Update Application Configuration:

    1. Update webhooks and API URLs in client applications
    2. Update CORS_ORIGINS if needed
    3. Test conversions from your custom domain
  13. Verify Installation

    After deployment, verify everything is working correctly:

    1. API Accessibility:

      • Test endpoints are responding
      • Health check returns 200 OK
      • No authentication errors
    2. File Storage:

      • Upload a test file
      • Verify it appears in uploads directory
      • Check disk usage is being tracked
    3. Conversions:

      • Submit a test conversion job
      • Monitor job status
      • Verify output file is created
      • Download converted file successfully
    4. Concurrent Processing:

      • Submit multiple conversion jobs
      • Verify they process in parallel
      • Check CPU usage under load
      • Monitor memory usage
    5. Error Handling:

      • Test with invalid file formats
      • Submit oversized files
      • Test with missing required parameters
      • Verify error responses are appropriate

File Format Support

ConvertX supports conversion across multiple format categories.

Image Formats

Input/Output: JPEG, PNG, WebP, GIF, TIFF, BMP, SVG, ICO

Processing:

  • Resize and scale images
  • Compress and optimize
  • Rotate and flip
  • Convert color spaces
  • Extract metadata

Quality Settings:

Terminal window
# 1-100, default 85
IMAGEMAGICK_QUALITY=85
# DPI for density
IMAGEMAGICK_DENSITY=150
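
These settings translate directly into ImageMagick command-line flags. A sketch of how a worker might build the argument list: -quality, -density, and -resize are standard ImageMagick options, but the helper itself is illustrative, not ConvertX code.

```javascript
// Build ImageMagick `convert` arguments from the IMAGEMAGICK_* settings.
function buildImageMagickArgs(inputPath, outputPath, env = process.env, resize) {
  const args = [inputPath];
  args.push('-quality', String(env.IMAGEMAGICK_QUALITY || 85));
  args.push('-density', String(env.IMAGEMAGICK_DENSITY || 150));
  if (resize) args.push('-resize', resize); // e.g. '800x600>' shrinks only
  args.push(outputPath);
  return args;
}

module.exports = { buildImageMagickArgs };
```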

Video Formats

Input: MP4, WebM, MKV, AVI, MOV, FLV, M3U8

Output: MP4, WebM, MKV, AVI, MOV, GIF

Processing:

  • Transcode between codecs
  • Adjust resolution and bitrate
  • Extract frames as images
  • Add subtitles
  • Create animated GIFs
  • Concatenate videos

FFmpeg Configuration:

Terminal window
# Number of threads for encoding
FFMPEG_THREADS=4
# Log level
FFMPEG_LOG_LEVEL=error

Audio Formats

Input/Output: MP3, WAV, AAC, FLAC, OGG, M4A

Processing:

  • Convert between codecs
  • Adjust bitrate and sample rate
  • Extract audio from video
  • Merge audio tracks
  • Normalize levels

Document Formats

Input: DOCX, XLSX, PPTX, PDF, TXT, HTML, ODT

Output: PDF, TXT, HTML, DOCX

Processing:

  • Convert between office formats
  • Extract text from documents
  • Generate previews
  • Split documents
  • Merge documents

Getting Started with ConvertX API

Authentication

All API requests require authentication:

Terminal window
curl -H "Authorization: Bearer YOUR_API_KEY" \
https://example-app.klutch.sh/api/v1/formats

Generate API keys in the ConvertX dashboard under Settings → API Keys.

Basic Conversion Flow

  1. Submit File:

    Terminal window
    curl -X POST \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -F "file=@input.jpg" \
    -F "format=png" \
    -F "quality=90" \
    https://example-app.klutch.sh/api/v1/convert
  2. Get Job ID from response

  3. Check Status:

    Terminal window
    curl https://example-app.klutch.sh/api/v1/jobs/JOB_ID
  4. Download Result:

    Terminal window
    curl https://example-app.klutch.sh/api/v1/download/JOB_ID > output.png

Webhook Notifications

Receive real-time updates when conversions complete:

Terminal window
curl -X POST \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"event": "conversion.complete",
"url": "https://yourapp.com/webhooks/conversion",
"retries": 3
}' \
https://example-app.klutch.sh/api/v1/webhooks

Security Best Practices

API Security

Authentication:

  • Use strong API keys (minimum 32 characters)
  • Rotate keys regularly
  • Use different keys for different environments
  • Never commit API keys to version control

Rate Limiting:

Terminal window
API_RATE_LIMIT=1000 # requests per hour

HTTPS Only:

  • All API calls use HTTPS (automatic with Klutch.sh)
  • Enable HSTS headers
  • Verify SSL certificates

File Security

File Validation:

  • Validate file types on upload
  • Check file signatures, not just extensions
  • Scan uploaded files for malware
  • Enforce maximum file sizes
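
Checking file signatures rather than extensions takes only a few bytes. A sketch follows; the signature table covers just a handful of common formats, and production code should prefer a maintained library such as file-type.

```javascript
// Identify a file by its magic bytes instead of trusting the extension.
const MAGIC = [
  { format: 'jpeg', bytes: [0xff, 0xd8, 0xff] },
  { format: 'png',  bytes: [0x89, 0x50, 0x4e, 0x47] },
  { format: 'gif',  bytes: [0x47, 0x49, 0x46, 0x38] }, // "GIF8"
  { format: 'pdf',  bytes: [0x25, 0x50, 0x44, 0x46] }, // "%PDF"
];

function sniffFormat(buffer) {
  const match = MAGIC.find((m) => m.bytes.every((b, i) => buffer[i] === b));
  return match ? match.format : null;
}

module.exports = { sniffFormat };
```

Rejecting uploads where `sniffFormat` disagrees with the claimed extension blocks the common trick of renaming an executable to .jpg.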

Safe Processing:

Terminal window
MAX_FILE_SIZE=1073741824 # 1 GB
JOB_TIMEOUT=3600 # 1 hour max

Isolated Execution:

  • Run conversions in sandboxed containers
  • Use resource limits to prevent DoS
  • Set CPU and memory constraints

Cleanup:

Terminal window
AUTO_CLEANUP_ENABLED=true
AUTO_CLEANUP_AGE_DAYS=7 # Delete after 7 days

Access Control

Role-Based Access:

  • Restrict who can submit conversions
  • Control format availability per user
  • Track all conversion requests
  • Audit file access

CORS Configuration:

Terminal window
CORS_ORIGINS=https://myapp.com,https://app.myapp.com

Limit cross-origin requests to authorized domains.

Data Privacy

No Logging of File Content:

  • Log conversion requests but not file contents
  • Implement log retention policies
  • Encrypt logs in transit and at rest

User Data Protection:

  • Inform users about file handling
  • Implement data deletion mechanisms
  • Comply with GDPR/privacy regulations
  • Document data retention

Regular Updates

Keep ConvertX and dependencies current:

  1. Monitor security advisories
  2. Update Node.js regularly
  3. Update conversion tools (FFmpeg, ImageMagick)
  4. Update dependencies via npm
  5. Test updates in staging before production

Performance Optimization

Parallel Processing

Process multiple conversions simultaneously:

Terminal window
CONCURRENT_JOBS=4 # Adjust based on CPU cores

Monitor and adjust based on actual CPU usage.

Resource Management

Memory Usage:

  • Monitor heap usage during heavy conversions
  • Implement garbage collection tuning
  • Set Node.js memory limits:
    Terminal window
    NODE_OPTIONS=--max-old-space-size=2048

Disk I/O:

  • Use fast storage for working directories
  • Implement efficient cleanup strategies
  • Monitor disk usage trends

CPU Optimization:

  • FFmpeg multi-threading for video
  • ImageMagick optimization for images
  • Batch similar conversions together

Caching

Cache converted files for repeated requests:

Terminal window
CACHE_ENABLED=true
CACHE_TTL=604800 # 7 days

Reduces computational load for popular conversions.

Monitoring Performance

Key Metrics:

  • Conversion success rate (target: 99%+)
  • Average conversion time per format
  • Queue depth and wait times
  • CPU and memory usage
  • Disk I/O and throughput

Alerts:

  • Alert if success rate < 95%
  • Alert if average conversion time > 5 minutes
  • Alert if queue depth > 1000 jobs
  • Alert if CPU > 80% sustained
  • Alert if memory > 90% capacity

Troubleshooting

Issue 1: Conversions Timing Out

Symptoms: Jobs stuck in “processing” or marked as failed after JOB_TIMEOUT

Solutions:

  1. Increase Timeout:

    Terminal window
    JOB_TIMEOUT=7200 # Increase to 2 hours
  2. Check Resource Constraints:

    • Monitor CPU usage during conversion
    • Check if memory is limiting
    • Upgrade Klutch.sh resources if needed
  3. Optimize Conversion Settings:

    • Lower quality settings for faster processing
    • Reduce resolution for images/video
    • Use faster codecs if applicable
  4. Review Logs:

    • Check application logs for errors
    • Look for conversion tool failures
    • Verify file system is accessible

Issue 2: Disk Space Running Out

Symptoms: “No space left on device” errors, conversions failing

Solutions:

  1. Enable Automatic Cleanup:

    Terminal window
    AUTO_CLEANUP_ENABLED=true
    AUTO_CLEANUP_AGE_DAYS=3 # Reduce retention
  2. Increase Persistent Volumes:

    • Expand upload directory volume
    • Expand converted files directory volume
    • Monitor disk usage trends
  3. Implement S3 Storage:

    • Enable S3 integration for long-term storage
    • Keep only recent files on local disk
    • Archive old conversions to S3
  4. Monitor Space Usage:

    • Track disk usage via dashboard
    • Set up alerts for high disk usage
    • Regularly review file retention policies

Issue 3: High Memory Usage

Symptoms: Out of memory errors, conversion failures, server slowness

Solutions:

  1. Reduce Concurrent Jobs:

    Terminal window
    CONCURRENT_JOBS=2 # Process fewer jobs in parallel
  2. Increase Available Memory:

    • Upgrade Klutch.sh resource allocation
    • Set Node.js memory limit:
      Terminal window
      NODE_OPTIONS=--max-old-space-size=4096
  3. Optimize Conversion Settings:

    • Reduce image density for ImageMagick
    • Use lower bitrate for video conversions
    • Implement streaming for large files
  4. Monitor Memory:

    • Check memory usage during peak times
    • Profile for memory leaks
    • Review conversion logs

Issue 4: Failed Conversions

Symptoms: Conversion jobs marked as failed, error messages in logs

Solutions:

  1. Check File Format Support:

    • Verify input format is supported
    • Check output format is valid
    • Review conversion tool compatibility
  2. Validate Input Files:

    • Test file is not corrupted
    • Check file size is within limits
    • Verify file headers match extension
  3. Review Tool Versions:

    • Check FFmpeg version is up-to-date
    • Verify ImageMagick supports required format
    • Update conversion tools if needed
  4. Enable Debug Logging:

    Terminal window
    LOG_LEVEL=debug
    FFMPEG_LOG_LEVEL=verbose

    Review detailed logs to identify issues.

Issue 5: API Not Responding

Symptoms: Timeouts, “connection refused”, 503 errors

Solutions:

  1. Check Application Status:

    • Verify app is running in Klutch.sh
    • Check deployment logs for errors
    • Test health endpoint:
      Terminal window
      curl https://example-app.klutch.sh/health
  2. Review Resource Usage:

    • Check CPU is not maxed out
    • Monitor memory usage
    • Check disk space availability
  3. Check Queue:

    • Monitor job queue depth
    • Reduce CONCURRENT_JOBS if needed
    • Clear stuck jobs if necessary
  4. Network Connectivity:

    • Verify DNS resolves correctly
    • Test from different network
    • Check firewall rules

Issue 6: Authentication Failures

Symptoms: “Unauthorized”, “Invalid API key” errors

Solutions:

  1. Verify API Key:

    • Confirm API key is correct
    • Check authorization header format:
      Terminal window
      Authorization: Bearer YOUR_API_KEY
  2. Check Key Permissions:

    • Verify key has conversion permissions
    • Check key is not revoked
    • Generate new key if needed
  3. Review Security Settings:

    • Verify ENABLE_AUTH is true
    • Check authentication is not bypassed
    • Review access logs

Custom Domains

Using a custom domain makes your API professional and branded.

Step 1: Add Domain in Klutch.sh

  1. Go to your ConvertX app in Klutch.sh dashboard
  2. Navigate to Domains
  3. Click “Add Custom Domain”
  4. Enter your domain (e.g., api.convert.example.com)
  5. Save

Step 2: Configure DNS

Update the DNS records at your domain provider:

Type: CNAME
Name: api.convert
Value: example-app.klutch.sh
TTL: 3600

Step 3: Update Configuration

Update CORS settings if using custom domain:

Terminal window
CORS_ORIGINS=https://api.convert.example.com

Step 4: Verify Setup

  1. Wait for DNS propagation (up to 1 hour)
  2. Test domain resolution:
    Terminal window
    nslookup api.convert.example.com
  3. Verify HTTPS works:
    Terminal window
    curl https://api.convert.example.com/health

Production Best Practices

Backup Strategy

What to Back Up:

  1. Application configuration
  2. Conversion history and metadata
  3. Custom processing profiles
  4. API keys and credentials

Backup Schedule:

  • Daily: Database exports
  • Weekly: Full application state
  • Monthly: Archival backups

Backup Commands:

Terminal window
# Backup database
pg_dump postgresql://convertx:password@localhost/convertx | gzip > /backups/convertx-db-$(date +%Y%m%d).sql.gz
# Backup configuration
tar -czf /backups/convertx-config-$(date +%Y%m%d).tar.gz /app/config
# Store in secure location
aws s3 cp /backups/ s3://your-backup-bucket/convertx/ --recursive

Monitoring and Alerting

Key Metrics:

  • Conversion success rate
  • Average conversion time
  • Queue depth and wait times
  • CPU, memory, and disk usage
  • API response times
  • Error rates by format

Alerts:

  • Success rate < 95%
  • Average conversion time > expected
  • Queue depth > threshold
  • CPU > 80% sustained
  • Memory > 90%
  • Disk space < 10%

Scaling for High Volume

Vertical Scaling:

  • Increase CPU cores for parallel processing
  • Increase RAM for larger conversions
  • Use faster storage

Horizontal Scaling:

  • Multiple ConvertX instances
  • Load balancer to distribute requests
  • Shared database backend
  • External storage (S3)

Queue Optimization:

  • Use RabbitMQ instead of in-memory queue for scale
  • Implement priority queues for urgent conversions
  • Distribute processing across workers
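
Priority queues are straightforward with Bull, where a lower priority number runs first. A sketch of one possible policy, with illustrative thresholds: interactive jobs jump the queue, small files go next, and large batch work runs last.

```javascript
// Map a job to a Bull-style priority (1 = highest).
function jobPriority(job) {
  if (job.interactive) return 1;                  // user is waiting in the UI
  if (job.fileSize < 10 * 1024 * 1024) return 2;  // small files finish fast
  return 3;                                       // large/batch work goes last
}

// With Bull this would be passed as:
//   queue.add(jobData, { priority: jobPriority(jobData) });
module.exports = { jobPriority };
```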

Regular Maintenance

Daily:

  • Monitor conversion queue
  • Check error rates
  • Verify disk space available

Weekly:

  • Review conversion performance
  • Audit API usage
  • Check for failed jobs

Monthly:

  • Security updates
  • Dependency updates
  • Performance optimization
  • Capacity planning

Conclusion

You now have a production-ready ConvertX deployment running on Klutch.sh. You’ve learned how to build a comprehensive file conversion platform with support for multiple format categories, configure powerful conversion tools like FFmpeg and ImageMagick, set up persistent storage for handling large files, implement security best practices for safe file processing, optimize performance for high-volume conversion requests, and troubleshoot common deployment issues.

ConvertX brings industrial-strength file conversion capabilities to your applications. Whether you’re building a document management system, media platform, or workflow automation tool, ConvertX handles the heavy lifting of format conversion reliably and at scale.

The modular architecture means you can start simple with basic image conversion and grow to support complex video processing pipelines, batch document conversion, and sophisticated media workflows. The REST API makes integration straightforward, and webhooks keep your application informed of conversion progress.

Klutch.sh provides the infrastructure foundation—automatic HTTPS, scalable resources, and persistent storage—so you can focus on building the conversion workflows and user experiences your platform needs. Monitor your deployment’s performance, tune resource allocation based on actual usage, and maintain regular backups of your conversion data.

With proper configuration and monitoring, your ConvertX deployment will reliably handle thousands of conversions daily, transforming files across dozens of formats while maintaining security, performance, and user privacy.

For questions, check out the ConvertX documentation, conversion tool guides, or Klutch.sh support. Happy converting!