# Deploying a FlowiseAI App

## Introduction
FlowiseAI is an open-source platform that allows you to build customized LLM (Large Language Model) orchestration flows and AI agents with a visual, drag-and-drop interface. It enables teams to create sophisticated AI workflows without extensive coding, connecting various LLM models, vector databases, and tools to build powerful conversational AI applications.
Deploying FlowiseAI on Klutch.sh provides you with a scalable, managed environment for running your AI workflows in production. With support for persistent storage, environment variables for API keys, and automatic HTTPS routing, Klutch.sh makes it simple to deploy and maintain FlowiseAI applications. Whether you’re building chatbots, AI assistants, or complex workflow automation, this guide will walk you through the complete deployment process.
This guide covers installing FlowiseAI locally, deploying it on Klutch.sh using a Dockerfile, configuring persistent storage for your flows and data, and production best practices for running FlowiseAI reliably at scale.
## Prerequisites
- Node.js 18+ and npm installed locally for development
- Git and a GitHub account
- A Klutch.sh account (sign up here)
- Basic knowledge of Docker and Node.js
- API keys for any LLM providers you plan to use (OpenAI, Anthropic, etc.)
## Getting Started: Install FlowiseAI Locally
Before deploying to Klutch.sh, let’s set up FlowiseAI locally to understand how it works.
1. Create a new directory for your FlowiseAI project:

   ```bash
   mkdir my-flowise-app
   cd my-flowise-app
   npm init -y
   ```

2. Install FlowiseAI:

   ```bash
   npm install flowise
   ```

3. Create a start script (`start.sh`) for running FlowiseAI:

   ```bash
   #!/usr/bin/env bash
   # start.sh - Launch FlowiseAI
   set -euo pipefail

   # Start FlowiseAI on 0.0.0.0 so it can receive external traffic
   npx flowise start --port 3000
   ```

4. Make the script executable:

   ```bash
   chmod +x start.sh
   ```

5. Update your `package.json` to include a start script:

   ```json
   {
     "name": "my-flowise-app",
     "version": "1.0.0",
     "scripts": {
       "start": "npx flowise start --port 3000"
     },
     "dependencies": {
       "flowise": "^1.4.0"
     }
   }
   ```

6. Test FlowiseAI locally:

   ```bash
   npm start
   ```

   Visit http://localhost:3000 to access the FlowiseAI interface. You should see the visual flow builder where you can create AI workflows.

7. Create a simple test flow:

   - In the FlowiseAI UI, click "Add New Chatflow"
   - Drag a "Chat Model" node (e.g., ChatOpenAI) onto the canvas
   - Drag a "Conversation Chain" node and connect it to your Chat Model
   - Configure your API key in the Chat Model settings
   - Click "Save" to save your chatflow
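Once saved, you can exercise the flow from the command line as well as the UI. A minimal sketch of a local API call, mirroring the prediction endpoint shown later in this guide; `<chatflow-id>` is a placeholder for the ID shown in your chatflow's API dialog:

```bash
# Hypothetical local test; substitute your real chatflow ID
curl -X POST http://localhost:3000/api/v1/prediction/<chatflow-id> \
  -H "Content-Type: application/json" \
  -d '{"question": "Hello!"}'
```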
## Deploying Without a Dockerfile
You can deploy FlowiseAI using Klutch.sh’s automatic build detection with Nixpacks.
1. Push your FlowiseAI project to a GitHub repository:

   ```bash
   git init
   git add .
   git commit -m "Initial FlowiseAI setup"
   git remote add origin https://github.com/yourusername/my-flowise-app.git
   git push -u origin main
   ```

2. Log in to Klutch.sh.

3. Create a new project and give it a name like "FlowiseAI".

4. Create a new app in the project and configure it:

   - Select your FlowiseAI GitHub repository and branch
   - Select HTTP as the traffic type
   - Set the internal port to 3000 (FlowiseAI's default port)
   - Choose your preferred region and compute resources
   - Set the number of instances (start with 1 for testing)

5. Add environment variables for your LLM API keys and FlowiseAI configuration:

   - `FLOWISE_USERNAME` - Admin username for FlowiseAI
   - `FLOWISE_PASSWORD` - Admin password (mark as secret)
   - `OPENAI_API_KEY` - Your OpenAI API key (if using OpenAI; mark as secret)
   - `FLOWISE_SECRETKEY_OVERWRITE` - A secret key for encryption (mark as secret)
   - `DATABASE_PATH` - Set to `/app/data` for persistent storage

6. Attach a persistent volume:

   - Mount path: `/app/data`
   - Size: 10 GB (or larger depending on your needs)

7. Click "Create" to deploy. Klutch.sh will automatically detect your Node.js application, install dependencies, and deploy FlowiseAI. Your app will be available at a URL like `example-app.klutch.sh`.
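Once the deployment finishes, a quick reachability check from your own machine can confirm everything is wired up; this sketch assumes your app URL follows the `example-app.klutch.sh` pattern above:

```bash
# Expect an HTTP status line; 200 (or a redirect to the login page) means the app is up
curl -I https://example-app.klutch.sh
```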
**Customization Notes:**

- If you need to customize the start command, set the `NIXPACKS_START_CMD` environment variable to your custom command (e.g., `npx flowise start --port 3000`)
- For custom build commands, use the `NIXPACKS_BUILD_CMD` environment variable
- Configure file size limits and other settings using environment variables like `FLOWISE_FILE_SIZE_LIMIT=50mb`
## Deploying With a Dockerfile (Recommended)
Using a Dockerfile provides greater control over the build process and ensures reproducible deployments. Klutch.sh automatically detects a Dockerfile in your repository’s root directory.
1. Create a `Dockerfile` in your project root:

   ```dockerfile
   # Use Node.js 18 LTS Alpine for a smaller image
   FROM node:18-alpine

   # Set working directory
   WORKDIR /app

   # Install system dependencies that may be needed
   RUN apk add --no-cache python3 make g++

   # Copy package files
   COPY package*.json ./

   # Install FlowiseAI and dependencies
   RUN npm install --production

   # Copy application files
   COPY . .

   # Create data directory for persistent storage
   RUN mkdir -p /app/data

   # Expose FlowiseAI port
   EXPOSE 3000

   # Start FlowiseAI
   CMD ["npx", "flowise", "start", "--port", "3000"]
   ```

2. Optionally, create a `.dockerignore` file to exclude unnecessary files:

   ```
   node_modules
   npm-debug.log
   .git
   .gitignore
   README.md
   .env
   .DS_Store
   ```

3. For a more production-ready setup, create a multi-stage Dockerfile for a smaller image:

   ```dockerfile
   # Build stage
   FROM node:18-alpine AS builder
   WORKDIR /app

   # Install dependencies
   COPY package*.json ./
   RUN npm ci --production

   # Production stage
   FROM node:18-alpine
   WORKDIR /app

   # Install runtime dependencies only
   RUN apk add --no-cache python3 make g++

   # Copy installed dependencies from builder
   COPY --from=builder /app/node_modules ./node_modules
   COPY package*.json ./
   COPY . .

   # Create data directory
   RUN mkdir -p /app/data && \
       chown -R node:node /app

   # Run as non-root user
   USER node

   EXPOSE 3000
   CMD ["npx", "flowise", "start", "--port", "3000"]
   ```
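Before pushing, it can be worth building and running the image locally as a smoke test. A sketch, assuming Docker is installed; the image name and throwaway credentials are placeholders:

```bash
# Build the image and run it with a local data directory
docker build -t my-flowise-app .
docker run --rm -p 3000:3000 \
  -v "$(pwd)/flowise-data:/app/data" \
  -e FLOWISE_USERNAME=admin \
  -e FLOWISE_PASSWORD=test1234 \
  my-flowise-app
```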
4. Push your code with the Dockerfile to GitHub:

   ```bash
   git add Dockerfile .dockerignore
   git commit -m "Add Dockerfile for FlowiseAI deployment"
   git push
   ```

5. Follow the deployment steps from the previous section. Klutch.sh will automatically detect and use your Dockerfile to build the application.

6. Set the internal port to 3000 and select HTTP as the traffic type.

7. Configure environment variables and persistent volumes as described in the previous section.

8. Click "Create" to deploy.
## Persistent Storage Configuration
FlowiseAI stores chatflow configurations, credentials, and execution data locally. To persist this data across deployments and restarts, you must attach a persistent volume.
**Required Persistent Volume:**

- Mount Path: `/app/data` - This is where FlowiseAI stores all its data by default
- Recommended Size: 10 GB minimum (increase based on your expected data volume)
**Steps to Configure Persistent Storage:**

1. In your Klutch.sh app settings, navigate to the Volumes section.

2. Add a new persistent volume:

   - Mount path: `/app/data`
   - Size: 10 GB (or larger)

3. Ensure your `DATABASE_PATH` environment variable points to this location: `DATABASE_PATH=/app/data`

4. Verify the volume is mounted correctly after deployment by checking that your chatflows persist after restarting the app.
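If you have shell access to the running container, you can also verify the mount directly. A minimal sketch, assuming standard Linux tooling is present:

```bash
# Confirm /app/data is backed by the mounted volume
df -h /app/data

# Confirm the FlowiseAI process can write to it
touch /app/data/.write-test && rm /app/data/.write-test && echo "volume is writable"
```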
**What Gets Stored:**
- Chatflow configurations and nodes
- API credentials (encrypted)
- Conversation history
- Uploaded files and documents
- Vector store embeddings (if using local storage)
- Application logs and cache
**Important Notes:**

- Without persistent storage, all your chatflows and data will be lost when the container restarts
- Make sure file permissions allow the FlowiseAI process to write to the mounted directory
- Consider implementing regular backups of your volume for production deployments (one approach is sketched below)
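As one approach to those backups, a small cron-friendly script can archive the data directory. This is a sketch, not an official tool; the backup destination and the 14-archive retention are assumptions to adapt:

```bash
#!/usr/bin/env bash
# backup-flowise.sh - archive the FlowiseAI data directory (assumed paths)
set -euo pipefail

BACKUP_DIR=/backups                      # assumed destination; adjust to your setup
STAMP=$(date +%Y%m%d-%H%M%S)

mkdir -p "$BACKUP_DIR"
tar -czf "$BACKUP_DIR/flowise-data-$STAMP.tar.gz" -C /app data

# Keep only the 14 most recent archives (assumed retention policy)
ls -1t "$BACKUP_DIR"/flowise-data-*.tar.gz | tail -n +15 | xargs -r rm
```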
## Environment Variables and Secrets
FlowiseAI relies on environment variables for configuration and API key management. Always store sensitive values as secrets in Klutch.sh (marked as secret to prevent logging).
**Required Environment Variables:**

```bash
# Admin Authentication
FLOWISE_USERNAME=admin
FLOWISE_PASSWORD=your-secure-password                        # Mark as secret

# Security
FLOWISE_SECRETKEY_OVERWRITE=your-secret-key-for-encryption   # Mark as secret

# Storage
DATABASE_PATH=/app/data
APIKEY_PATH=/app/data
SECRETKEY_PATH=/app/data
LOG_PATH=/app/data/logs
```

**Optional Environment Variables:**
```bash
# LLM Provider API Keys (mark all as secrets)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
COHERE_API_KEY=...
HUGGINGFACE_API_KEY=...

# Application Configuration
PORT=3000
FLOWISE_FILE_SIZE_LIMIT=50mb
CORS_ORIGINS=*
IFRAME_ORIGINS=*

# Database (if using external PostgreSQL instead of SQLite)
DATABASE_TYPE=postgres
DATABASE_HOST=your-postgres-host
DATABASE_PORT=5432
DATABASE_NAME=flowise
DATABASE_USER=flowise_user
DATABASE_PASSWORD=...                                        # Mark as secret

# Tool/Integration API Keys (mark all as secrets)
SERPER_API_KEY=...
SERPAPI_API_KEY=...
LANGCHAIN_API_KEY=...
```

**Security Best Practices:**
- Never commit API keys or secrets to your Git repository
- Always mark sensitive environment variables as "secret" in the Klutch.sh dashboard
- Use strong, randomly generated values for `FLOWISE_PASSWORD` and `FLOWISE_SECRETKEY_OVERWRITE` (see below for one way to generate them)
- Rotate API keys periodically
- Use environment-specific keys (separate keys for development and production)
- Consider using a secrets management service for enterprise deployments
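One common way to generate those random values is with `openssl`, which is available on most systems:

```bash
# 32 hex characters, suitable for FLOWISE_PASSWORD
openssl rand -hex 16

# 64 hex characters, suitable for FLOWISE_SECRETKEY_OVERWRITE
openssl rand -hex 32
```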
## Sample FlowiseAI Configuration
Here’s a complete example of how to structure your FlowiseAI project for deployment:
**Project Structure:**

```
my-flowise-app/
├── Dockerfile
├── .dockerignore
├── package.json
├── package-lock.json
├── README.md
└── start.sh
```

**Complete package.json Example:**

```json
{
  "name": "my-flowise-app",
  "version": "1.0.0",
  "description": "FlowiseAI deployment on Klutch.sh",
  "main": "index.js",
  "scripts": {
    "start": "npx flowise start --port 3000",
    "dev": "npx flowise start --port 3000"
  },
  "keywords": ["flowise", "ai", "llm", "workflow"],
  "author": "Your Name",
  "license": "MIT",
  "dependencies": {
    "flowise": "^1.4.0"
  }
}
```

**Environment Variables Template (`.env.example`):**
```bash
# Copy this to .env and fill in your values (DO NOT commit .env to git)

# Admin credentials
FLOWISE_USERNAME=admin
FLOWISE_PASSWORD=change-this-password

# Security
FLOWISE_SECRETKEY_OVERWRITE=generate-a-random-secret-key

# Storage paths
DATABASE_PATH=/app/data
APIKEY_PATH=/app/data
SECRETKEY_PATH=/app/data
LOG_PATH=/app/data/logs

# LLM Provider Keys
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key

# App configuration
PORT=3000
CORS_ORIGINS=*
FLOWISE_FILE_SIZE_LIMIT=50mb
```
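For local runs, one way to use this template is to copy it and export its values before starting the app. A quick sketch, assuming a bash-compatible shell:

```bash
# Copy the template, fill in real values, then load it for a local run
cp .env.example .env
set -a; source .env; set +a
npm start
```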
## Getting Started with FlowiseAI
Once deployed, you can start building AI workflows:
1. Access your FlowiseAI instance:

   - Navigate to your app URL: `https://example-app.klutch.sh`
   - Log in with your `FLOWISE_USERNAME` and `FLOWISE_PASSWORD`

2. Create your first chatflow:

   - Click "Add New Chatflow" from the dashboard
   - Give your chatflow a name (e.g., "Customer Support Bot")
3. Build a simple conversational AI:

   - Drag a "ChatOpenAI" node onto the canvas
   - Configure it with your model preference (e.g., gpt-4)
   - Drag a "Conversation Chain" node
   - Connect the ChatOpenAI output to the Conversation Chain's LLM input
   - Add a "Buffer Memory" node to maintain conversation history
   - Connect it to the Conversation Chain's memory input

4. Test your chatflow:

   - Click the "Save Chatflow" button
   - Use the built-in chat interface on the right side to test
   - Ask questions and verify the AI responds appropriately

5. Get the API endpoint:

   - Click "API" in the top right to view your chatflow's API endpoint
   - Use this endpoint to integrate FlowiseAI into your applications

   ```bash
   # Example API call
   curl -X POST https://example-app.klutch.sh/api/v1/prediction/your-chatflow-id \
     -H "Content-Type: application/json" \
     -d '{"question": "Hello, how can you help me?"}'
   ```
6. Build more advanced flows:

   - Add document loaders for RAG (Retrieval-Augmented Generation)
   - Connect to vector databases (Pinecone, Weaviate, etc.)
   - Integrate tools and APIs (web scraping, calculators, etc.)
   - Use agents for complex, multi-step reasoning
## Docker Compose for Local Development
For local development and testing, you can use Docker Compose. Note: Docker Compose is not supported by Klutch.sh for deployment; use this only for local development.
**docker-compose.yml Example:**

```yaml
version: '3.8'

services:
  flowise:
    build: .
    container_name: flowise
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - flowise-data:/app/data
    environment:
      - PORT=3000
      - FLOWISE_USERNAME=admin
      - FLOWISE_PASSWORD=test1234
      - FLOWISE_SECRETKEY_OVERWRITE=mySecretKey123
      - DATABASE_PATH=/app/data
      - APIKEY_PATH=/app/data
      - SECRETKEY_PATH=/app/data
      - LOG_PATH=/app/data/logs
      - CORS_ORIGINS=*
      - FLOWISE_FILE_SIZE_LIMIT=50mb

volumes:
  flowise-data:
```

**To run locally with Docker Compose:**

```bash
# Start FlowiseAI
docker-compose up -d

# View logs
docker-compose logs -f

# Stop FlowiseAI
docker-compose down
```

Access FlowiseAI at http://localhost:3000.
## Production Best Practices
**Performance and Scaling:**
1. Resource Allocation:

   - Start with at least 1 GB RAM and 1 vCPU
   - Monitor resource usage and scale up as needed
   - For production workloads with multiple concurrent users, consider 2+ GB RAM

2. Database Considerations:

   - The default SQLite database is suitable for small to medium deployments
   - For high-traffic production apps, use PostgreSQL:

   ```bash
   DATABASE_TYPE=postgres
   DATABASE_HOST=your-postgres.klutch.sh
   DATABASE_PORT=5432
   DATABASE_NAME=flowise
   DATABASE_USER=flowise_user
   DATABASE_PASSWORD=secure-password
   ```
3. Caching and Performance:

   - Enable caching for frequently used embeddings
   - Use vector databases with built-in caching (Pinecone, Weaviate)
   - Implement rate limiting for public-facing APIs

4. Multiple Instances:

   - Deploy multiple instances for high availability
   - Use session affinity if maintaining stateful conversations
   - Ensure the persistent volume is shared across instances, or use an external database
**Security:**
1. Authentication:

   - Always enable FlowiseAI username/password protection
   - Use strong passwords and rotate them regularly
   - Consider adding an authentication proxy for additional security

2. API Security:

   - Implement API key authentication for production endpoints
   - Use CORS configuration to restrict origins
   - Rate limit API requests to prevent abuse

3. Data Protection:

   - Encrypt sensitive data at rest
   - Use HTTPS for all communications (provided by Klutch.sh)
   - Regularly back up your persistent volume
   - Implement access controls for admin functions

4. Network Security:

   - Use HTTP traffic type in Klutch.sh (HTTPS is handled automatically)
   - Restrict access to sensitive endpoints
   - Monitor for suspicious activity
**Monitoring and Maintenance:**
1. Health Monitoring:

   - Monitor application logs in the Klutch.sh dashboard
   - Set up alerts for errors and performance issues
   - Track API response times and error rates

2. Logging:

   - Configure `LOG_PATH` to write to persistent storage
   - Implement log rotation to prevent disk space issues (a stopgap is sketched below)
   - Review logs regularly for errors and security issues
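If the container has no log-rotation tool, a small scheduled cleanup script can serve as that stopgap. A sketch, assuming logs land in `LOG_PATH=/app/data/logs` as configured earlier and a 14-day retention window:

```bash
#!/usr/bin/env bash
# prune-logs.sh - delete FlowiseAI log files older than 14 days (assumed retention)
set -euo pipefail

LOG_DIR=/app/data/logs
find "$LOG_DIR" -type f -name "*.log" -mtime +14 -delete
```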
3. Backups:

   - Regularly back up your persistent volume
   - Export critical chatflows and configurations
   - Test restoration procedures

4. Updates:

   - Keep FlowiseAI updated to the latest version
   - Test updates in a staging environment first
   - Review changelogs for breaking changes
   - Pin specific versions in production for stability (see the commands below)
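Plain npm commands cover the pinning and upgrade steps. A sketch, reusing the `flowise` version from the sample `package.json`:

```bash
# Pin an exact version in production (writes "flowise": "1.4.0", no caret)
npm install --save-exact flowise@1.4.0

# Upgrade deliberately after testing in staging
npm install --save-exact flowise@latest
```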
5. Cost Optimization:

   - Monitor LLM API usage and costs
   - Implement caching to reduce redundant API calls
   - Use cheaper models for non-critical operations
   - Set up billing alerts with your LLM providers
**Development Workflow:**
- Use Git for version control of your configuration
- Maintain separate environments (development, staging, production)
- Test chatflows thoroughly before deploying to production
- Document your chatflows and integrations
- Use environment variables for environment-specific configuration
## Troubleshooting Common Issues
**Issue: Application won't start**

- Check that environment variables are set correctly
- Verify `DATABASE_PATH` points to a writable directory
- Review startup logs in the Klutch.sh dashboard

**Issue: Chatflows not persisting**

- Ensure the persistent volume is attached at `/app/data`
- Verify the `DATABASE_PATH` environment variable is set
- Check file permissions on the mounted volume (quick checks are sketched below)
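If you can open a shell in the container, these one-liners cover the common causes of both issues above. A sketch, assuming standard Linux tooling:

```bash
# Is the volume actually mounted at /app/data?
mount | grep /app/data

# Is DATABASE_PATH set, and is the data directory writable by the app user?
echo "DATABASE_PATH=$DATABASE_PATH"
touch /app/data/.write-test && rm /app/data/.write-test && echo "writable"
```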
**Issue: API key errors**

- Verify API keys are correctly set as environment variables
- Ensure keys are marked as secrets in Klutch.sh
- Check that the keys have sufficient permissions/credits

**Issue: Out of memory**

- Increase compute resources in Klutch.sh
- Optimize chatflows to reduce memory usage
- Consider using external vector databases instead of in-memory stores

**Issue: Slow response times**

- Check LLM provider API response times
- Enable caching for embeddings and responses
- Optimize chatflow complexity
- Consider using faster LLM models
## Resources
- FlowiseAI Official Documentation
- FlowiseAI GitHub Repository
- Klutch.sh Quick Start Guide
- Klutch.sh Volumes Guide
- Klutch.sh Builds Guide
- Klutch.sh Deployments Guide
Deploying FlowiseAI on Klutch.sh provides you with a powerful, scalable platform for building and running AI workflows in production. With persistent storage, secure environment variable management, and automatic HTTPS, you can focus on building amazing AI applications without worrying about infrastructure. Start building your AI workflows today!