# Deploying ConvertX

## Introduction
ConvertX is a robust, open-source file conversion and media processing platform that brings powerful transformation capabilities to your applications. Whether you’re building a document management system, media library, or content platform, ConvertX provides a flexible infrastructure for converting files between formats, optimizing media, and processing documents at scale.
At its core, ConvertX handles the complexity of file conversion—something every modern application needs but few want to build themselves. It abstracts away the pain points of managing multiple conversion libraries, handling various file formats, managing resource-intensive processing tasks, and scaling conversion operations.
**What makes ConvertX special:**

- **Format agnostic:** Support for dozens of file formats across documents, images, video, audio, and more. Add new formats easily through plugins without modifying core code.
- **Scalable architecture:** Queue-based processing lets you handle conversion requests at any scale, from single-server deployments to distributed systems processing thousands of conversions per minute.
- **Web-based interface:** Intuitive dashboard for managing conversions, monitoring jobs, and analyzing conversion history. No command-line expertise required.
- **API-first design:** RESTful API makes integration into your applications straightforward. Fire-and-forget webhooks notify your system when conversions complete.
- **Security hardened:** Run untrusted file processing in isolated containers. Configurable resource limits prevent runaway conversions from consuming server resources.
- **Format flexibility:** Accept files in one format, output in many. Create conversion pipelines that combine multiple operations: resize images, extract PDF pages, optimize video.
- **Real-time monitoring:** Watch conversion progress as it happens. The dashboard shows queue depth, processing times, success rates, and error details.
- **Extensible:** A plugin system lets you add custom converters, pre/post-processing hooks, and integration points for your specific workflows.
ConvertX is perfect for any platform that needs reliable, scalable file conversion. Whether you’re building a document library, photo sharing service, video hosting platform, or workflow automation system, ConvertX handles the heavy lifting of file transformation while you focus on building features users love.
This guide walks you through deploying ConvertX on Klutch.sh using Docker. You’ll learn how to configure the conversion services, set up processing pipelines, implement security best practices, optimize performance for high-volume conversions, and troubleshoot common issues in production.
## Prerequisites
Before deploying ConvertX to Klutch.sh, ensure you have:
- A Klutch.sh account with dashboard access
- A GitHub account for repository hosting
- Docker installed locally for testing (optional but recommended)
- Understanding of file formats and media processing concepts
- Basic knowledge of HTTP APIs and webhooks
- Familiarity with message queuing and job processing
- A domain name for your conversion API (recommended)
## Understanding ConvertX Architecture

### Technology Stack
ConvertX is built on modern, scalable technologies:
Core Platform:
- Node.js or Python backend (depending on the version)
- Express.js or FastAPI for REST API
- Bull/RabbitMQ for job queue management
- WebSocket for real-time status updates
- PostgreSQL for metadata and job tracking
Conversion Engines:
- FFmpeg for video and audio conversion
- ImageMagick for image processing and transformation
- Ghostscript for PDF manipulation
- LibreOffice for document conversion
- Pandoc for markup and plain-text document conversion
Storage:
- Local filesystem for temporary working directory
- Persistent volume for converted files
- Optional S3/cloud storage integration
- Configurable cleanup policies for temporary files
Monitoring:
- Real-time progress tracking via WebSocket
- Conversion metrics and analytics
- Error logging and debugging
- Queue depth and performance monitoring
### Core Components

- **API Server:** REST API for submitting conversion jobs and retrieving results
- **Job Queue:** Queue management system for processing conversions sequentially or in parallel
- **Worker Processes:** Background processes that execute the actual format conversions
- **Conversion Engines:** External tools (FFmpeg, ImageMagick, etc.) that perform the transformations
- **File Storage:** Persistent storage for input files, output files, and working directories
- **Webhook System:** Real-time notifications when conversions complete or fail
- **Dashboard:** Web interface for monitoring and managing conversion jobs
- **Database:** Stores job metadata, conversion history, and system configuration
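To make the API-server/queue/worker split concrete, here is a minimal sketch using the Bull library that appears later in this guide's package.json. It is illustrative rather than ConvertX's actual code: Bull requires a Redis instance (`REDIS_URL` is an assumed variable; the in-memory `QUEUE_TYPE` used later in this guide avoids that dependency), and `convertFile` is a placeholder for a real engine call.

```javascript
const Queue = require('bull');

// Bull needs Redis; REDIS_URL is an assumed variable, not one of the
// settings defined later in this guide.
const conversionQueue = new Queue('conversions', process.env.REDIS_URL || 'redis://127.0.0.1:6379');

// API server side: enqueue a job instead of converting inline.
async function enqueueConversion(inputPath, targetFormat) {
  const job = await conversionQueue.add(
    { inputPath, targetFormat },
    { attempts: 3, removeOnComplete: true }
  );
  return job.id;
}

// Placeholder for a real engine call (FFmpeg, ImageMagick, ...).
async function convertFile(inputPath, targetFormat) {
  throw new Error('wire up a conversion engine here');
}

// Worker side: process up to CONCURRENT_JOBS conversions in parallel.
conversionQueue.process(parseInt(process.env.CONCURRENT_JOBS) || 4, async (job) => {
  const { inputPath, targetFormat } = job.data;
  await job.progress(10);
  const outputPath = await convertFile(inputPath, targetFormat);
  await job.progress(100);
  return { outputPath };
});
```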
## Installation and Setup

### Step 1: Create Your Project Directory

Start with a dedicated directory for your ConvertX deployment:
```bash
mkdir convertx-deployment
cd convertx-deployment
git init
```

### Step 2: Create Directory Structure
Set up the necessary directories for a production-ready deployment:
```bash
mkdir -p uploads converted logs config data
```

Your project structure will look like:

```
convertx-deployment/
├── Dockerfile
├── .env.example
├── .dockerignore
├── .gitignore
├── uploads/
│   └── (temporary input files)
├── converted/
│   └── (converted output files)├── logs/
│   └── (application logs)
├── config/
│   └── (configuration files)
└── data/
    └── (database and metadata)
```

### Step 3: Create the Dockerfile
Create a Dockerfile for a production-ready ConvertX deployment with all necessary conversion tools:
```dockerfile
# Build stage
FROM node:18-alpine AS builder

WORKDIR /app

# Copy dependency manifests
COPY package*.json ./

# Install production dependencies (npm ci would require a committed
# lockfile, which this guide's .gitignore excludes)
RUN npm install --omit=dev

# Runtime stage
FROM node:18-alpine

# Install conversion tools
RUN apk add --no-cache \
    ffmpeg \
    imagemagick \
    ghostscript \
    libreoffice \
    pandoc \
    graphicsmagick \
    webp \
    curl \
    postgresql-client \
    tini

# Create app user
RUN addgroup -g 1000 convertx && \
    adduser -D -u 1000 -G convertx convertx

WORKDIR /app

# Copy installed dependencies from the build stage
COPY --from=builder --chown=convertx:convertx /app/node_modules ./node_modules

# Copy application files (includes package*.json)
COPY --chown=convertx:convertx . .

# Create necessary directories
RUN mkdir -p uploads converted logs config data && \
    chown -R convertx:convertx uploads converted logs config data

# Switch to non-root user
USER convertx

# Expose the API and WebSocket ports
EXPOSE 3000
EXPOSE 8080

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
    CMD curl -f http://localhost:3000/health || exit 1

# Use tini for proper signal handling
ENTRYPOINT ["/sbin/tini", "--"]

# Start application
CMD ["node", "server.js"]
```

### Step 4: Create Environment Configuration
Create .env.example for configuration:
```bash
# Server configuration
NODE_ENV=production
PORT=3000
LOG_LEVEL=info

# API configuration
API_KEY=your-api-key-here
API_RATE_LIMIT=1000

# Database configuration
DATABASE_URL=postgresql://convertx:password@localhost:5432/convertx
DATABASE_POOL_SIZE=20

# File storage
UPLOAD_DIR=/app/uploads
CONVERTED_DIR=/app/converted
MAX_FILE_SIZE=1073741824
TEMP_DIR=/tmp/convertx

# Conversion settings
CONCURRENT_JOBS=4
JOB_TIMEOUT=3600
QUEUE_TYPE=memory

# FFmpeg settings
FFMPEG_THREADS=4
FFMPEG_LOG_LEVEL=error

# ImageMagick settings
IMAGEMAGICK_QUALITY=85
IMAGEMAGICK_DENSITY=150

# Webhook configuration
WEBHOOK_TIMEOUT=30
WEBHOOK_RETRIES=3

# Security
ENABLE_CORS=true
CORS_ORIGINS=*
ENABLE_AUTH=true

# Optional: S3 integration
S3_ENABLED=false
S3_BUCKET=convertx-output
S3_REGION=us-east-1
S3_ACCESS_KEY=
S3_SECRET_KEY=

# Optional: cleanup
AUTO_CLEANUP_ENABLED=true
AUTO_CLEANUP_AGE_DAYS=7
```

### Step 5: Create Application Server
Create server.js for your Node.js application:
```javascript
const express = require('express');
const cors = require('cors');
const helmet = require('helmet');
const compression = require('compression');
const multer = require('multer');
const path = require('path');
require('dotenv').config();

const app = express();
const PORT = process.env.PORT || 3000;

// Security middleware
app.use(helmet());
app.use(compression());
const corsOrigins = process.env.CORS_ORIGINS;
app.use(cors({
  // "*" (or unset) allows any origin; otherwise use the comma-separated list
  origin: corsOrigins && corsOrigins !== '*' ? corsOrigins.split(',') : '*'
}));

// Body parsing
app.use(express.json({ limit: '10mb' }));
app.use(express.urlencoded({ limit: '10mb', extended: true }));

// File upload configuration
const storage = multer.diskStorage({
  destination: (req, file, cb) => {
    cb(null, process.env.UPLOAD_DIR || '/app/uploads');
  },
  filename: (req, file, cb) => {
    const uniqueSuffix = Date.now() + '-' + Math.round(Math.random() * 1e9);
    cb(null, file.fieldname + '-' + uniqueSuffix + path.extname(file.originalname));
  }
});

const upload = multer({
  storage: storage,
  limits: {
    fileSize: parseInt(process.env.MAX_FILE_SIZE) || 1073741824
  }
});

// Health check endpoint
app.get('/health', (req, res) => {
  res.status(200).json({
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    environment: process.env.NODE_ENV
  });
});

// Conversion endpoint
app.post('/api/v1/convert', upload.single('file'), async (req, res) => {
  try {
    if (!req.file) {
      return res.status(400).json({ error: 'No file uploaded' });
    }

    const { format, quality, options } = req.body;

    if (!format) {
      return res.status(400).json({ error: 'Output format not specified' });
    }

    // Mock conversion job creation
    const jobId = `job-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;

    res.status(202).json({
      jobId: jobId,
      filename: req.file.filename,
      originalName: req.file.originalname,
      fileSize: req.file.size,
      targetFormat: format,
      status: 'queued',
      createdAt: new Date().toISOString()
    });
  } catch (error) {
    console.error('Conversion error:', error);
    res.status(500).json({
      error: 'Conversion failed',
      message: process.env.NODE_ENV === 'development' ? error.message : undefined
    });
  }
});

// Get job status
app.get('/api/v1/jobs/:jobId', (req, res) => {
  const { jobId } = req.params;

  res.status(200).json({
    jobId: jobId,
    status: 'completed',
    progress: 100,
    output: {
      format: 'mp4',
      size: 5242880,
      duration: 60,
      url: `/api/v1/download/${jobId}`
    },
    createdAt: new Date().toISOString(),
    completedAt: new Date().toISOString()
  });
});

// Download converted file
app.get('/api/v1/download/:jobId', (req, res) => {
  res.json({
    message: 'Download endpoint',
    jobId: req.params.jobId,
    instructions: 'Implement file download logic'
  });
});

// Supported conversion formats
app.get('/api/v1/formats', (req, res) => {
  res.json({
    image: ['jpeg', 'png', 'webp', 'gif', 'tiff', 'bmp'],
    video: ['mp4', 'webm', 'mkv', 'avi', 'mov', 'flv'],
    audio: ['mp3', 'wav', 'aac', 'flac', 'ogg'],
    document: ['pdf', 'docx', 'xlsx', 'pptx', 'txt', 'html'],
    archive: ['zip', 'tar', 'gz', 'rar']
  });
});

// Error handling
app.use((err, req, res, next) => {
  console.error('Error:', err);
  res.status(err.status || 500).json({
    error: 'Internal Server Error',
    message: process.env.NODE_ENV === 'development' ? err.message : undefined
  });
});

// Start server
app.listen(PORT, () => {
  console.log(`ConvertX API server running on port ${PORT}`);
  console.log(`Environment: ${process.env.NODE_ENV || 'development'}`);
  console.log(`Max file size: ${Math.round((parseInt(process.env.MAX_FILE_SIZE) || 1073741824) / 1024 / 1024)} MB`);
  console.log(`Concurrent jobs: ${process.env.CONCURRENT_JOBS || 4}`);
});

// Graceful shutdown
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully');
  process.exit(0);
});
```
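Note that the `/api/v1/convert` handler above only mocks job creation. When you wire in a real worker, the conversion step typically shells out to the FFmpeg binary installed by the Dockerfile. Here is a hedged sketch of that step (`runConversion` is a hypothetical helper, and real code would dispatch to different engines depending on the format):

```javascript
const { spawn } = require('child_process');
const path = require('path');

// Illustrative helper: transcode an uploaded file with the FFmpeg binary
// installed in the Dockerfile. The flags shown are standard FFmpeg options.
function runConversion(inputPath, targetFormat) {
  const outputPath = path.join(
    process.env.CONVERTED_DIR || '/app/converted',
    `${path.basename(inputPath, path.extname(inputPath))}.${targetFormat}`
  );

  return new Promise((resolve, reject) => {
    const ffmpeg = spawn('ffmpeg', [
      '-y',                                           // overwrite existing output
      '-i', inputPath,                                // input file
      '-threads', process.env.FFMPEG_THREADS || '4',  // match FFMPEG_THREADS
      '-loglevel', process.env.FFMPEG_LOG_LEVEL || 'error',
      outputPath                                      // FFmpeg infers the codec from the extension
    ]);

    let stderr = '';
    ffmpeg.stderr.on('data', (chunk) => { stderr += chunk; });
    ffmpeg.on('error', reject);
    ffmpeg.on('close', (code) => {
      if (code === 0) resolve(outputPath);
      else reject(new Error(`ffmpeg exited with code ${code}: ${stderr}`));
    });
  });
}
```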
### Step 6: Create package.json

Create `package.json` with the necessary dependencies:
{ "name": "convertx-deployment", "version": "1.0.0", "description": "ConvertX file conversion platform deployment", "main": "server.js", "scripts": { "start": "node server.js", "dev": "nodemon server.js", "build": "echo 'Build complete'" }, "keywords": ["conversion", "ffmpeg", "media", "files"], "author": "Your Name", "license": "MIT", "dependencies": { "express": "^4.18.2", "multer": "^1.4.5", "helmet": "^7.0.0", "compression": "^1.7.4", "cors": "^2.8.5", "dotenv": "^16.0.3", "bull": "^4.10.4", "pg": "^8.10.0", "pg-promise": "^11.3.0", "axios": "^1.4.0", "winston": "^3.8.2" }, "devDependencies": { "nodemon": "^2.0.20" }, "engines": { "node": ">=16.0.0" }}Step 7: Create .dockerignore
Create `.dockerignore`:

```
.git
.gitignore
.env
.env.local
.env.*.local
.DS_Store
node_modules
npm-debug.log
yarn-error.log
uploads/*
converted/*
logs/*
.vscode
.idea
README.md
docs/
tests/
coverage/
.eslintrc
```

### Step 8: Create .gitignore
Create `.gitignore`:

```
# Environment
.env
.env.local
.env.*.local

# Dependencies
node_modules/
package-lock.json
yarn.lock

# Logs
logs/
*.log
npm-debug.log*

# Files and conversions
uploads/
converted/
temp/
*.tmp

# Runtime
tmp/
.cache/

# IDE
.vscode/
.idea/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db

# Database
data/
*.db
```

### Step 9: Commit to GitHub
Push your configuration to GitHub:
```bash
git add Dockerfile server.js package.json .env.example .dockerignore .gitignore
git commit -m "Add ConvertX file conversion platform Docker configuration for Klutch.sh deployment"
git branch -M main
git remote add origin https://github.com/yourusername/convertx-deployment.git
git push -u origin main
```

## Deploying to Klutch.sh
Now let’s deploy ConvertX to Klutch.sh with proper configuration and persistent storage for files.
### Deployment Steps
1. **Access Klutch.sh Dashboard**

   Navigate to klutch.sh/app and sign in with your GitHub account.

2. **Create a New Project**

   In the Projects section, click “Create Project” and name it something like “ConvertX File Conversion” or “Media Processing Platform”.

3. **Create a New App**

   Within your project, click “Create App” to begin configuring your ConvertX deployment.

4. **Connect Your Repository**

   - Select GitHub as your Git source
   - Choose the repository with your ConvertX Dockerfile
   - Select the branch to deploy (typically `main`)

   Klutch.sh will automatically detect the Dockerfile in your repository root.
5. **Configure Traffic Settings**

   - Traffic type: select HTTP (ConvertX runs as a web API and dashboard)
   - Internal port: set to `3000` (the Node.js API server port)
6. **Configure Environment Variables**

   Add the following environment variables to configure your ConvertX instance.

   Server configuration:

   ```bash
   NODE_ENV=production
   PORT=3000
   LOG_LEVEL=info
   ```

   API configuration:

   ```bash
   API_KEY=your-secure-api-key-here
   API_RATE_LIMIT=1000
   ENABLE_CORS=true
   ENABLE_AUTH=true
   ```

   File storage configuration:

   ```bash
   UPLOAD_DIR=/app/uploads
   CONVERTED_DIR=/app/converted
   MAX_FILE_SIZE=1073741824
   TEMP_DIR=/tmp/convertx
   ```

   Processing configuration:

   ```bash
   CONCURRENT_JOBS=4
   JOB_TIMEOUT=3600
   QUEUE_TYPE=memory
   ```

   Conversion tool settings:

   ```bash
   # FFmpeg settings
   FFMPEG_THREADS=4
   FFMPEG_LOG_LEVEL=error

   # ImageMagick settings
   IMAGEMAGICK_QUALITY=85
   IMAGEMAGICK_DENSITY=150
   ```

   Webhook configuration:

   ```bash
   WEBHOOK_TIMEOUT=30
   WEBHOOK_RETRIES=3
   ```

   Optional: S3 storage integration:

   ```bash
   S3_ENABLED=false
   S3_BUCKET=convertx-output
   S3_REGION=us-east-1
   S3_ACCESS_KEY=your-access-key
   S3_SECRET_KEY=your-secret-key
   ```

   Optional: automatic cleanup:

   ```bash
   AUTO_CLEANUP_ENABLED=true
   AUTO_CLEANUP_AGE_DAYS=7
   ```

   Security notes:

   - Generate strong API keys for authentication
   - Keep MAX_FILE_SIZE reasonable for your use case
   - Adjust CONCURRENT_JOBS based on server resources
   - Set an appropriate JOB_TIMEOUT to prevent runaway conversions
   - Enable S3 integration for large-scale deployments
   - Enable AUTO_CLEANUP to manage disk space
7. **Configure Persistent Storage**

   ConvertX needs persistent storage for input files, output files, and working directories.

   Volume 1 (uploads directory):

   - Mount path: `/app/uploads`
   - Size: 20-100 GB (depends on expected file sizes and volume)

   Volume 2 (converted files):

   - Mount path: `/app/converted`
   - Size: 50-200 GB (depends on output file retention needs)

   Guidelines for volume sizes:

   - Small deployment (< 1,000 conversions/month): 20 GB uploads, 50 GB converted
   - Medium deployment (1,000-10,000 conversions/month): 50 GB uploads, 100 GB converted
   - Large deployment (10,000-100,000 conversions/month): 100 GB uploads, 200 GB converted
   - Enterprise (100,000+ conversions/month): consider an S3 storage backend

   Important: without persistent storage, all uploaded files and converted outputs are lost on container restart, so volumes are critical for any production deployment where file retention matters.
8. **Configure Compute Resources**

   Choose resources based on expected conversion volume and file sizes.

   Small deployment (< 100 conversions/month):

   - CPU: 2 cores
   - RAM: 2 GB
   - Suitable for: small applications, internal tools

   Medium deployment (100-1,000 conversions/month):

   - CPU: 4 cores
   - RAM: 4 GB
   - Suitable for: growing platforms, team applications

   Large deployment (1,000-10,000 conversions/month):

   - CPU: 8 cores
   - RAM: 8 GB
   - Suitable for: public platforms, high-volume services

   Enterprise (10,000+ conversions/month):

   - CPU: 16+ cores
   - RAM: 16+ GB
   - Suitable for: large-scale platforms, heavy processing needs

   Note: ConvertX is CPU and I/O intensive, and more cores enable more parallel conversions. Monitor actual usage and adjust accordingly.
9. **Deploy the Application**

   Click “Create” to start the deployment. Klutch.sh will:

   - Clone your repository
   - Build the Docker image (installing FFmpeg, ImageMagick, and the other tools)
   - Configure environment variables
   - Set up persistent storage volumes
   - Start the ConvertX application
   - Assign a public URL (e.g., `example-app.klutch.sh`)
   - Configure automatic HTTPS with SSL certificates

   The initial deployment may take 10-15 minutes because of the large binary dependencies (FFmpeg, LibreOffice, etc.).
10. **Monitor Deployment Progress**

    Track the deployment:

    - Go to the Deployments tab
    - View the real-time build logs (the build takes longer than usual because of the conversion tools)
    - Wait for the status to show “Running”
    - Verify all environment variables are set correctly
    - Expect a large Docker image (2-3 GB) due to the conversion tools
11. **Test Conversion Endpoints**

    After deployment, verify ConvertX is working.

    Health check:

    ```bash
    curl https://example-app.klutch.sh/health
    ```

    Should return JSON with a healthy status.

    List available formats:

    ```bash
    curl https://example-app.klutch.sh/api/v1/formats
    ```

    Should return the supported conversion formats.

    Test a conversion:

    ```bash
    curl -X POST \
      -H "Authorization: Bearer YOUR_API_KEY" \
      -F "file=@test.jpg" \
      -F "format=png" \
      https://example-app.klutch.sh/api/v1/convert
    ```

    Should return job details with a jobId.

    Check conversion status:

    ```bash
    curl https://example-app.klutch.sh/api/v1/jobs/JOB_ID
    ```

    Should return the job status and progress.

    View logs:

    - Check the application logs in the Klutch.sh dashboard
    - Look for conversion errors or warnings
    - Verify file storage is accessible
12. **Configure Your Domain**

    Add your custom domain to Klutch.sh:

    - In the Klutch.sh dashboard, go to Domains
    - Click “Add Custom Domain”
    - Enter your domain (e.g., `api.convert.example.com`)
    - Update DNS with a CNAME record pointing to `example-app.klutch.sh`
    - Wait for DNS propagation and SSL certificate provisioning

    Then update your application configuration:

    - Update webhooks and API URLs in client applications
    - Update CORS_ORIGINS if needed
    - Test conversions from your custom domain
13. **Verify Installation**

    After deployment, confirm everything works end to end.

    API accessibility:

    - Test that endpoints respond
    - Health check returns 200 OK
    - No authentication errors

    File storage:

    - Upload a test file
    - Verify it appears in the uploads directory
    - Check that disk usage is being tracked

    Conversions:

    - Submit a test conversion job
    - Monitor the job status
    - Verify the output file is created
    - Download the converted file successfully

    Concurrent processing:

    - Submit multiple conversion jobs
    - Verify they process in parallel
    - Check CPU usage under load
    - Monitor memory usage

    Error handling:

    - Test with invalid file formats
    - Submit oversized files
    - Test with missing required parameters
    - Verify error responses are appropriate
## File Format Support

ConvertX supports conversion across multiple format categories.

### Image Formats
Input/Output: JPEG, PNG, WebP, GIF, TIFF, BMP, SVG, ICO
Processing:
- Resize and scale images
- Compress and optimize
- Rotate and flip
- Convert color spaces
- Extract metadata
Quality settings:

```bash
# 1-100, default 85
IMAGEMAGICK_QUALITY=85

# DPI used when rasterizing (density)
IMAGEMAGICK_DENSITY=150
```
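As a concrete example of the processing operations listed above, here is a sketch that resizes and converts an image by invoking the ImageMagick `convert` binary installed in the Dockerfile. `resizeAndConvert` is a hypothetical helper, not part of ConvertX itself:

```javascript
const { execFile } = require('child_process');
const { promisify } = require('util');
const execFileAsync = promisify(execFile);

// ImageMagick 7 also exposes the same options via the `magick` entrypoint.
async function resizeAndConvert(inputPath, outputPath, { width, height } = {}) {
  const args = [inputPath];
  if (width && height) {
    args.push('-resize', `${width}x${height}`); // fit within the box, keep aspect ratio
  }
  args.push('-quality', process.env.IMAGEMAGICK_QUALITY || '85');
  args.push(outputPath); // output format is inferred from the extension
  await execFileAsync('convert', args);
  return outputPath;
}

// Example: resizeAndConvert('/app/uploads/photo.jpg', '/app/converted/photo.webp', { width: 800, height: 600 });
```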
### Video Formats

Input: MP4, WebM, MKV, AVI, MOV, FLV, M3U8
Output: MP4, WebM, MKV, AVI, MOV, GIF
Processing:
- Transcode between codecs
- Adjust resolution and bitrate
- Extract frames as images
- Add subtitles
- Create animated GIFs
- Concatenate videos
FFmpeg configuration:

```bash
# Number of threads for encoding
FFMPEG_THREADS=4

# Log level
FFMPEG_LOG_LEVEL=error
```

### Audio Formats
Input/Output: MP3, WAV, AAC, FLAC, OGG, M4A
Processing:
- Convert between codecs
- Adjust bitrate and sample rate
- Extract audio from video
- Merge audio tracks
- Normalize levels
### Document Formats
Input: DOCX, XLSX, PPTX, PDF, TXT, HTML, ODT
Output: PDF, TXT, HTML, DOCX
Processing:
- Convert between office formats
- Extract text from documents
- Generate previews
- Split documents
- Merge documents
## Getting Started with the ConvertX API

### Authentication

All API requests require authentication:

```bash
curl -H "Authorization: Bearer YOUR_API_KEY" \
  https://example-app.klutch.sh/api/v1/formats
```

Generate API keys in the ConvertX dashboard under Settings → API Keys.

### Basic Conversion Flow
1. Submit a file:

   ```bash
   curl -X POST \
     -H "Authorization: Bearer YOUR_API_KEY" \
     -F "file=@input.jpg" \
     -F "format=png" \
     -F "quality=90" \
     https://example-app.klutch.sh/api/v1/convert
   ```

2. Get the job ID from the response.

3. Check the status:

   ```bash
   curl https://example-app.klutch.sh/api/v1/jobs/JOB_ID
   ```

4. Download the result:

   ```bash
   curl https://example-app.klutch.sh/api/v1/download/JOB_ID > output.png
   ```
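If you are driving this flow from your own Node.js service, the whole submit/poll/download cycle fits in a short helper. A sketch using the axios dependency from package.json (the `form-data` package is a transitive dependency of axios; the 2-second poll interval and `CONVERTX_API_KEY` variable are illustrative):

```javascript
const axios = require('axios');
const fs = require('fs');
const FormData = require('form-data');

const API = 'https://example-app.klutch.sh/api/v1';
const headers = { Authorization: `Bearer ${process.env.CONVERTX_API_KEY}` };

async function convert(inputPath, format) {
  // 1. Submit the file
  const form = new FormData();
  form.append('file', fs.createReadStream(inputPath));
  form.append('format', format);
  const { data: job } = await axios.post(`${API}/convert`, form, {
    headers: { ...headers, ...form.getHeaders() }
  });

  // 2. Poll until the job completes
  let status;
  do {
    await new Promise((r) => setTimeout(r, 2000));
    ({ data: status } = await axios.get(`${API}/jobs/${job.jobId}`, { headers }));
  } while (status.status !== 'completed' && status.status !== 'failed');

  if (status.status === 'failed') throw new Error(`Conversion ${job.jobId} failed`);

  // 3. Download the result
  const out = fs.createWriteStream(`output.${format}`);
  const res = await axios.get(`${API}/download/${job.jobId}`, { headers, responseType: 'stream' });
  res.data.pipe(out);
  return new Promise((resolve, reject) => out.on('finish', resolve).on('error', reject));
}
```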
### Webhook Notifications
Receive real-time updates when conversions complete:
```bash
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "event": "conversion.complete",
    "url": "https://yourapp.com/webhooks/conversion",
    "retries": 3
  }' \
  https://example-app.klutch.sh/api/v1/webhooks
```
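On the receiving side, your application needs an endpoint that accepts these notifications. A minimal sketch (the payload fields mirror the job JSON returned by the status endpoint above and should be treated as illustrative):

```javascript
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhooks/conversion', (req, res) => {
  // Acknowledge quickly; do heavy work (downloads, DB writes) asynchronously
  // so the sender's WEBHOOK_TIMEOUT is never hit.
  res.sendStatus(200);

  const { jobId, status, output } = req.body;
  if (status === 'completed') {
    console.log(`Job ${jobId} finished:`, output?.url);
  } else {
    console.error(`Job ${jobId} ended with status ${status}`);
  }
});

app.listen(4000);
```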
## Security Best Practices

### API Security
Authentication:
- Use strong API keys (minimum 32 characters)
- Rotate keys regularly
- Use different keys for different environments
- Never commit API keys to version control
Rate Limiting:
```bash
API_RATE_LIMIT=1000  # requests per hour
```

HTTPS only:
- All API calls use HTTPS (automatic with Klutch.sh)
- Enable HSTS headers
- Verify SSL certificates
### File Security
File Validation:
- Validate file types on upload
- Check file signatures (magic bytes), not just extensions (see the sketch after this list)
- Scan uploaded files for malware
- Enforce maximum file sizes
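A sketch of what signature checking can look like: read the file's leading bytes and compare them against known magic numbers rather than trusting the extension. The table below covers only a handful of formats; production code would use a maintained detection library.

```javascript
const fs = require('fs');

const MAGIC_NUMBERS = {
  'ffd8ff': 'jpeg',      // JPEG
  '89504e47': 'png',     // PNG
  '25504446': 'pdf',     // %PDF
  '52494646': 'riff',    // RIFF container (WebP, WAV, AVI)
  '1a45dfa3': 'matroska' // MKV / WebM
};

function detectFileType(filePath) {
  // Read only the first 8 bytes of the file.
  const buf = Buffer.alloc(8);
  const fd = fs.openSync(filePath, 'r');
  fs.readSync(fd, buf, 0, 8, 0);
  fs.closeSync(fd);

  const hex = buf.toString('hex');
  for (const [magic, type] of Object.entries(MAGIC_NUMBERS)) {
    if (hex.startsWith(magic)) return type;
  }
  return null; // unknown: reject, or fall back to deeper inspection
}
```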
Safe Processing:
```bash
MAX_FILE_SIZE=1073741824  # 1 GB
JOB_TIMEOUT=3600          # 1 hour max
```

Isolated execution:
- Run conversions in sandboxed containers
- Use resource limits to prevent DoS
- Set CPU and memory constraints
Cleanup:
```bash
AUTO_CLEANUP_ENABLED=true
AUTO_CLEANUP_AGE_DAYS=7  # delete after 7 days
```
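Behind settings like these usually sits a simple age-based sweep. A hedged sketch of such a job (ConvertX's actual cleanup implementation may differ):

```javascript
const fs = require('fs/promises');
const path = require('path');

const RETENTION_MS = (parseInt(process.env.AUTO_CLEANUP_AGE_DAYS) || 7) * 24 * 60 * 60 * 1000;
const dir = process.env.CONVERTED_DIR || '/app/converted';

async function cleanupOldFiles() {
  const now = Date.now();
  for (const name of await fs.readdir(dir)) {
    const filePath = path.join(dir, name);
    const stats = await fs.stat(filePath);
    if (stats.isFile() && now - stats.mtimeMs > RETENTION_MS) {
      await fs.unlink(filePath); // past the retention window: delete
    }
  }
}

// Run hourly alongside the server.
setInterval(() => cleanupOldFiles().catch(console.error), 60 * 60 * 1000);
```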
### Access Control

Role-based access:
- Restrict who can submit conversions
- Control format availability per user
- Track all conversion requests
- Audit file access
CORS Configuration:
```bash
CORS_ORIGINS=https://myapp.com,https://app.myapp.com
```

Limit cross-origin requests to authorized domains.
### Data Privacy
No Logging of File Content:
- Log conversion requests but not file contents
- Implement log retention policies
- Encrypt logs in transit and at rest
User Data Protection:
- Inform users about file handling
- Implement data deletion mechanisms
- Comply with GDPR/privacy regulations
- Document data retention
### Regular Updates
Keep ConvertX and dependencies current:
- Monitor security advisories
- Update Node.js regularly
- Update conversion tools (FFmpeg, ImageMagick)
- Update dependencies via npm
- Test updates in staging before production
## Performance Optimization

### Parallel Processing
Process multiple conversions simultaneously:
```bash
CONCURRENT_JOBS=4  # adjust based on CPU cores
```

Monitor and adjust based on actual CPU usage.
### Resource Management
Memory Usage:
- Monitor heap usage during heavy conversions
- Implement garbage collection tuning
- Set Node.js memory limits:
  ```bash
  NODE_OPTIONS=--max-old-space-size=2048
  ```
Disk I/O:
- Use fast storage for working directories
- Implement efficient cleanup strategies
- Monitor disk usage trends
CPU Optimization:
- FFmpeg multi-threading for video
- ImageMagick optimization for images
- Batch similar conversions together
### Caching
Cache converted files for repeated requests:
```bash
CACHE_ENABLED=true
CACHE_TTL=604800  # 7 days
```

Caching reduces computational load for popular conversions.
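A common way to implement this is to key the cache on a hash of the input bytes plus the conversion options, so an identical request can reuse the previous output. A sketch (the function names are illustrative):

```javascript
const crypto = require('crypto');
const fs = require('fs');
const path = require('path');

function cacheKey(inputPath, format, options = {}) {
  const hash = crypto.createHash('sha256');
  // For very large files, stream into the hash instead of reading fully.
  hash.update(fs.readFileSync(inputPath));   // file contents
  hash.update(format);                       // target format
  hash.update(JSON.stringify(options));      // quality, resize, ...
  return hash.digest('hex');
}

function findCached(inputPath, format, options) {
  const cached = path.join(
    process.env.CONVERTED_DIR || '/app/converted',
    `${cacheKey(inputPath, format, options)}.${format}`
  );
  return fs.existsSync(cached) ? cached : null; // hit: skip the conversion
}
```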
### Monitoring Performance
Key Metrics:
- Conversion success rate (target: 99%+)
- Average conversion time per format
- Queue depth and wait times
- CPU and memory usage
- Disk I/O and throughput
Alerts:
- Alert if success rate < 95%
- Alert if average conversion time > 5 minutes
- Alert if queue depth > 1000 jobs
- Alert if CPU > 80% sustained
- Alert if memory > 90% capacity
## Troubleshooting

### Issue 1: Conversions Timing Out
Symptoms: Jobs stuck in “processing” or marked as failed after JOB_TIMEOUT
Solutions:
1. Increase the timeout:

   ```bash
   JOB_TIMEOUT=7200  # increase to 2 hours
   ```

2. Check resource constraints:

   - Monitor CPU usage during conversion
   - Check whether memory is the bottleneck
   - Upgrade Klutch.sh resources if needed

3. Optimize conversion settings:

   - Lower quality settings for faster processing
   - Reduce resolution for images/video
   - Use faster codecs where applicable

4. Review logs:

   - Check application logs for errors
   - Look for conversion tool failures
   - Verify the file system is accessible
### Issue 2: Disk Space Running Out
Symptoms: “No space left on device” errors, conversions failing
Solutions:
1. Enable automatic cleanup:

   ```bash
   AUTO_CLEANUP_ENABLED=true
   AUTO_CLEANUP_AGE_DAYS=3  # reduce retention
   ```

2. Increase persistent volumes:

   - Expand the upload directory volume
   - Expand the converted files volume
   - Monitor disk usage trends

3. Implement S3 storage:

   - Enable S3 integration for long-term storage
   - Keep only recent files on local disk
   - Archive old conversions to S3

4. Monitor space usage:

   - Track disk usage via the dashboard
   - Set up alerts for high disk usage
   - Regularly review file retention policies
### Issue 3: High Memory Usage
Symptoms: Out of memory errors, conversion failures, server slowness
Solutions:
1. Reduce concurrent jobs:

   ```bash
   CONCURRENT_JOBS=2  # process fewer jobs in parallel
   ```

2. Increase available memory:

   - Upgrade the Klutch.sh resource allocation
   - Set a Node.js memory limit:

     ```bash
     NODE_OPTIONS=--max-old-space-size=4096
     ```

3. Optimize conversion settings:

   - Reduce image density for ImageMagick
   - Use a lower bitrate for video conversions
   - Implement streaming for large files

4. Monitor memory:

   - Check memory usage during peak times
   - Profile for memory leaks
   - Review conversion logs
### Issue 4: Failed Conversions
Symptoms: Conversion jobs marked as failed, error messages in logs
Solutions:
1. Check file format support:

   - Verify the input format is supported
   - Check the output format is valid
   - Review conversion tool compatibility

2. Validate input files:

   - Confirm the file is not corrupted
   - Check the file size is within limits
   - Verify file headers match the extension

3. Review tool versions:

   - Check the FFmpeg version is up to date
   - Verify ImageMagick supports the required format
   - Update conversion tools if needed

4. Enable debug logging:

   ```bash
   LOG_LEVEL=debug
   FFMPEG_LOG_LEVEL=verbose
   ```

   Review the detailed logs to identify the failure.
### Issue 5: API Not Responding
Symptoms: Timeouts, “connection refused”, 503 errors
Solutions:
1. Check application status:

   - Verify the app is running in Klutch.sh
   - Check deployment logs for errors
   - Test the health endpoint:

     ```bash
     curl https://example-app.klutch.sh/health
     ```

2. Review resource usage:

   - Check that CPU is not maxed out
   - Monitor memory usage
   - Check disk space availability

3. Check the queue:

   - Monitor job queue depth
   - Reduce CONCURRENT_JOBS if needed
   - Clear stuck jobs if necessary

4. Check network connectivity:

   - Verify DNS resolves correctly
   - Test from a different network
   - Check firewall rules
### Issue 6: Authentication Failures
Symptoms: “Unauthorized”, “Invalid API key” errors
Solutions:
1. Verify the API key:

   - Confirm the API key is correct
   - Check the authorization header format:

     ```
     Authorization: Bearer YOUR_API_KEY
     ```

2. Check key permissions:

   - Verify the key has conversion permissions
   - Check the key has not been revoked
   - Generate a new key if needed

3. Review security settings:

   - Verify ENABLE_AUTH is true
   - Check authentication is not bypassed
   - Review access logs
## Custom Domains
Using a custom domain makes your API professional and branded.
### Step 1: Add Domain in Klutch.sh

- Go to your ConvertX app in the Klutch.sh dashboard
- Navigate to Domains
- Click “Add Custom Domain”
- Enter your domain (e.g., `api.convert.example.com`)
- Save
### Step 2: Configure DNS
Update your domain provider DNS records:
```
Type:  CNAME
Name:  api.convert
Value: example-app.klutch.sh
TTL:   3600
```

### Step 3: Update Configuration
If browser clients call the API from your own web origins, make sure CORS allows them (CORS_ORIGINS lists the sites that call the API, not the API's own domain):

```bash
CORS_ORIGINS=https://app.example.com
```

### Step 4: Verify Setup
- Wait for DNS propagation (up to 1 hour)
- Test domain resolution:

  ```bash
  nslookup api.convert.example.com
  ```

- Verify HTTPS works:

  ```bash
  curl https://api.convert.example.com/health
  ```
## Production Best Practices

### Backup Strategy
What to Back Up:
- Application configuration
- Conversion history and metadata
- Custom processing profiles
- API keys and credentials
Backup Schedule:
- Daily: Database exports
- Weekly: Full application state
- Monthly: Archival backups
Backup Commands:
```bash
# Back up the database
pg_dump postgresql://convertx:password@localhost/convertx | gzip > /backups/convertx-db-$(date +%Y%m%d).sql.gz

# Back up configuration
tar -czf /backups/convertx-config-$(date +%Y%m%d).tar.gz /app/config

# Store backups in a secure location
aws s3 cp /backups/ s3://your-backup-bucket/convertx/ --recursive
```

### Monitoring and Alerting
Key Metrics:
- Conversion success rate
- Average conversion time
- Queue depth and wait times
- CPU, memory, and disk usage
- API response times
- Error rates by format
Alerts:
- Success rate < 95%
- Average conversion time > expected
- Queue depth > threshold
- CPU > 80% sustained
- Memory > 90%
- Disk space < 10%
### Scaling for High Volume
Vertical Scaling:
- Increase CPU cores for parallel processing
- Increase RAM for larger conversions
- Use faster storage
Horizontal Scaling:
- Multiple ConvertX instances
- Load balancer to distribute requests
- Shared database backend
- External storage (S3)
Queue Optimization:
- Use RabbitMQ instead of in-memory queue for scale
- Implement priority queues for urgent conversions (see the sketch after this list)
- Distribute processing across workers
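For the priority-queue idea above, Bull (already in package.json) supports per-job priorities, where a lower number means higher priority. A sketch, again assuming a Redis-backed queue and an illustrative `REDIS_URL` variable:

```javascript
const Queue = require('bull');

const conversionQueue = new Queue('conversions', process.env.REDIS_URL);

async function enqueueWithPriority() {
  // Urgent, user-facing conversion jumps ahead of batch work.
  await conversionQueue.add(
    { inputPath: '/app/uploads/report.docx', targetFormat: 'pdf' },
    { priority: 1 } // 1 = highest priority in Bull
  );

  // Bulk re-encode runs once higher-priority work drains.
  await conversionQueue.add(
    { inputPath: '/app/uploads/archive.mov', targetFormat: 'mp4' },
    { priority: 10 }
  );
}
```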
### Regular Maintenance
Daily:
- Monitor conversion queue
- Check error rates
- Verify disk space available
Weekly:
- Review conversion performance
- Audit API usage
- Check for failed jobs
Monthly:
- Security updates
- Dependency updates
- Performance optimization
- Capacity planning
## Additional Resources
- ConvertX Official Website
- ConvertX GitHub Repository
- FFmpeg Documentation
- ImageMagick Documentation
- Ghostscript Documentation
- Klutch.sh Official Website
- Klutch.sh Dashboard
- Klutch.sh Documentation
- Klutch.sh Custom Domains Guide
## Conclusion
You now have a production-ready ConvertX deployment running on Klutch.sh. You’ve learned how to build a comprehensive file conversion platform with support for multiple format categories, configure powerful conversion tools like FFmpeg and ImageMagick, set up persistent storage for handling large files, implement security best practices for safe file processing, optimize performance for high-volume conversion requests, and troubleshoot common deployment issues.
ConvertX brings industrial-strength file conversion capabilities to your applications. Whether you’re building a document management system, media platform, or workflow automation tool, ConvertX handles the heavy lifting of format conversion reliably and at scale.
The modular architecture means you can start simple with basic image conversion and grow to support complex video processing pipelines, batch document conversion, and sophisticated media workflows. The REST API makes integration straightforward, and webhooks keep your application informed of conversion progress.
Klutch.sh provides the infrastructure foundation—automatic HTTPS, scalable resources, and persistent storage—so you can focus on building the conversion workflows and user experiences your platform needs. Monitor your deployment’s performance, tune resource allocation based on actual usage, and maintain regular backups of your conversion data.
With proper configuration and monitoring, your ConvertX deployment will reliably handle thousands of conversions daily, transforming files across dozens of formats while maintaining security, performance, and user privacy.
For questions, check out the ConvertX documentation, conversion tool guides, or Klutch.sh support. Happy converting!