Deploying a FlowiseAI App

Introduction

FlowiseAI is an open-source platform that allows you to build customized LLM (Large Language Model) orchestration flows and AI agents with a visual, drag-and-drop interface. It enables teams to create sophisticated AI workflows without extensive coding, connecting various LLM models, vector databases, and tools to build powerful conversational AI applications.

Deploying FlowiseAI on Klutch.sh provides you with a scalable, managed environment for running your AI workflows in production. With support for persistent storage, environment variables for API keys, and automatic HTTPS routing, Klutch.sh makes it simple to deploy and maintain FlowiseAI applications. Whether you’re building chatbots, AI assistants, or complex workflow automation, this guide will walk you through the complete deployment process.

This guide covers installing FlowiseAI locally, deploying it on Klutch.sh using a Dockerfile, configuring persistent storage for your flows and data, and production best practices for running FlowiseAI reliably at scale.


Prerequisites

  • Node.js 18+ and npm installed locally for development
  • Git and a GitHub account
  • A Klutch.sh account (sign up here)
  • Basic knowledge of Docker and Node.js
  • API keys for any LLM providers you plan to use (OpenAI, Anthropic, etc.)

Getting Started: Install FlowiseAI Locally

Before deploying to Klutch.sh, let’s set up FlowiseAI locally to understand how it works.

    1. Create a new directory for your FlowiseAI project:

      Terminal window
      mkdir my-flowise-app
      cd my-flowise-app
      npm init -y
    2. Install FlowiseAI:

      Terminal window
      npm install flowise
    3. Create a start script (start.sh) for running FlowiseAI:

      #!/usr/bin/env bash
      # start.sh - Launch FlowiseAI
      set -euo pipefail
      # Start FlowiseAI on port 3000 (it listens on all interfaces by default, so it can receive external traffic)
      npx flowise start --port 3000
    4. Make the script executable:

      Terminal window
      chmod +x start.sh
    5. Update your package.json to include a start script:

      {
        "name": "my-flowise-app",
        "version": "1.0.0",
        "scripts": {
          "start": "npx flowise start --port 3000"
        },
        "dependencies": {
          "flowise": "^1.4.0"
        }
      }
    6. Test FlowiseAI locally:

      Terminal window
      npm start

      Visit http://localhost:3000 to access the FlowiseAI interface. You should see the visual flow builder where you can create AI workflows. (A command-line check appears after this list.)

    7. Create a simple test flow:

      • In the FlowiseAI UI, click “Add New Chatflow”
      • Drag a “Chat Model” node (e.g., ChatOpenAI) onto the canvas
      • Drag a “Conversation Chain” node and connect it to your Chat Model
      • Configure your API key in the Chat Model settings
      • Click “Save” to save your chatflow
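
As noted in step 6, you can also confirm from the command line that the local server is up:

Terminal window
# Quick smoke test of the local FlowiseAI server
curl -I http://localhost:3000
# An HTTP 200 response indicates the UI is being served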

Deploying Without a Dockerfile

You can deploy FlowiseAI using Klutch.sh’s automatic build detection with Nixpacks.

    1. Push your FlowiseAI project to a GitHub repository:

      Terminal window
      git init
      git add .
      git commit -m "Initial FlowiseAI setup"
      git remote add origin https://github.com/yourusername/my-flowise-app.git
      git push -u origin main
    2. Log in to Klutch.sh.

    3. Create a new project and give it a name like “FlowiseAI”.

    4. Create a new app:

      • Select your FlowiseAI GitHub repository and branch
      • Select HTTP as the traffic type
      • Set the internal port to 3000 (FlowiseAI’s default port)
      • Choose your preferred region and compute resources
      • Set the number of instances (start with 1 for testing)
    5. Add environment variables for your LLM API keys and FlowiseAI configuration:

      • FLOWISE_USERNAME - Admin username for FlowiseAI
      • FLOWISE_PASSWORD - Admin password (mark as secret)
      • OPENAI_API_KEY - Your OpenAI API key (if using OpenAI, mark as secret)
      • FLOWISE_SECRETKEY_OVERWRITE - A secret key for encryption (mark as secret)
      • DATABASE_PATH - Set to /app/data for persistent storage
    6. Attach a persistent volume:

      • Mount path: /app/data
      • Size: 10 GB (or larger depending on your needs)
    7. Click “Create” to deploy. Klutch.sh will automatically detect your Node.js application, install dependencies, and deploy FlowiseAI.

Your app will be available at a URL like example-app.klutch.sh.
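
As a quick smoke test, confirm the deployed instance responds (example-app.klutch.sh is a placeholder for your actual app URL):

Terminal window
# Check that FlowiseAI answers over HTTPS
curl -I https://example-app.klutch.sh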

Customization Notes:

  • If you need to customize the start command, set the NIXPACKS_START_CMD environment variable to your custom command (e.g., npx flowise start --port 3000)
  • For custom build commands, use the NIXPACKS_BUILD_CMD environment variable
  • Configure file size limits and other settings using environment variables like FLOWISE_FILE_SIZE_LIMIT=50mb

Deploying With a Dockerfile

Using a Dockerfile provides greater control over the build process and ensures reproducible deployments. Klutch.sh automatically detects a Dockerfile in your repository’s root directory.

    1. Create a Dockerfile in your project root:

      # Use Node.js 18 LTS Alpine for a smaller image
      FROM node:18-alpine
      # Set working directory
      WORKDIR /app
      # Install system dependencies that may be needed
      RUN apk add --no-cache python3 make g++
      # Copy package files
      COPY package*.json ./
      # Install FlowiseAI and dependencies
      RUN npm install --production
      # Copy application files
      COPY . .
      # Create data directory for persistent storage
      RUN mkdir -p /app/data
      # Expose FlowiseAI port
      EXPOSE 3000
      # Start FlowiseAI
      CMD ["npx", "flowise", "start", "--port", "3000"]
    2. Optionally, create a .dockerignore file to exclude unnecessary files:

      node_modules
      npm-debug.log
      .git
      .gitignore
      README.md
      .env
      .DS_Store
    3. For a more production-ready setup, create a multi-stage Dockerfile for a smaller image:

      # Build stage
      FROM node:18-alpine AS builder
      WORKDIR /app
      # Build tools are needed here in case native modules are compiled during install
      RUN apk add --no-cache python3 make g++
      COPY package*.json ./
      RUN npm ci --omit=dev
      # Production stage
      FROM node:18-alpine
      WORKDIR /app
      # Copy installed dependencies from the builder; the build tools stay behind
      COPY --from=builder /app/node_modules ./node_modules
      COPY package*.json ./
      COPY . .
      # Create the data directory and hand ownership to the unprivileged user
      RUN mkdir -p /app/data && \
          chown -R node:node /app
      # Run as non-root user
      USER node
      EXPOSE 3000
      CMD ["npx", "flowise", "start", "--port", "3000"]
    4. Push your code with the Dockerfile to GitHub:

      Terminal window
      git add Dockerfile .dockerignore
      git commit -m "Add Dockerfile for FlowiseAI deployment"
      git push
    5. Follow the deployment steps from the previous section. Klutch.sh will automatically detect and use your Dockerfile to build the application.

    6. Set the internal port to 3000 and select HTTP as the traffic type.

    7. Configure environment variables and persistent volumes as described in the previous section.

    8. Click “Create” to deploy.


Persistent Storage Configuration

FlowiseAI stores chatflow configurations, credentials, and execution data locally. To persist this data across deployments and restarts, you must attach a persistent volume.

Required Persistent Volume:

  • Mount Path: /app/data - With DATABASE_PATH pointed here, this is where FlowiseAI stores all of its data (out of the box it writes to the ~/.flowise directory instead)
  • Recommended Size: 10 GB minimum (increase based on your expected data volume)

Steps to Configure Persistent Storage:

    1. In your Klutch.sh app settings, navigate to the Volumes section.

    2. Add a new persistent volume:

      • Mount path: /app/data
      • Size: 10 GB (or larger)
    3. Ensure your DATABASE_PATH environment variable points to this location:

      DATABASE_PATH=/app/data
    4. Verify the volume is mounted correctly after deployment by checking that your chatflows persist after restarting the app.

What Gets Stored:

  • Chatflow configurations and nodes
  • API credentials (encrypted)
  • Conversation history
  • Uploaded files and documents
  • Vector store embeddings (if using local storage)
  • Application logs and cache

Important Notes:

  • Without persistent storage, all your chatflows and data will be lost when the container restarts
  • Make sure file permissions allow the FlowiseAI process to write to the mounted directory
  • Consider implementing regular backups of your volume for production deployments (see the sketch below)
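
One minimal backup approach, assuming shell access to an environment where /app/data is mounted (paths and destinations are illustrative):

Terminal window
# Archive the FlowiseAI data directory with a date-stamped filename
tar -czf "flowise-backup-$(date +%Y%m%d).tar.gz" -C /app data
# Copy the archive somewhere durable (object storage, another host, etc.)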

Environment Variables and Secrets

FlowiseAI relies on environment variables for configuration and API key management. Always store sensitive values as secrets in Klutch.sh (marked as secret to prevent logging).

Required Environment Variables:

# Admin Authentication
FLOWISE_USERNAME=admin
FLOWISE_PASSWORD=your-secure-password # Mark as secret
# Security
FLOWISE_SECRETKEY_OVERWRITE=your-secret-key-for-encryption # Mark as secret
# Storage
DATABASE_PATH=/app/data
APIKEY_PATH=/app/data
SECRETKEY_PATH=/app/data
LOG_PATH=/app/data/logs

Optional Environment Variables:

# LLM Provider API Keys (mark all as secrets)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
COHERE_API_KEY=...
HUGGINGFACE_API_KEY=...
# Application Configuration
PORT=3000
FLOWISE_FILE_SIZE_LIMIT=50mb
CORS_ORIGINS=*
IFRAME_ORIGINS=*
# Database (if using external PostgreSQL instead of SQLite)
DATABASE_TYPE=postgres
DATABASE_HOST=your-postgres-host
DATABASE_PORT=5432
DATABASE_NAME=flowise
DATABASE_USER=flowise_user
DATABASE_PASSWORD=... # Mark as secret
# Tool/Integration API Keys (mark all as secrets)
SERPER_API_KEY=...
SERPAPI_API_KEY=...
LANGCHAIN_API_KEY=...

Security Best Practices:

    1. Never commit API keys or secrets to your Git repository
    2. Always mark sensitive environment variables as “secret” in the Klutch.sh dashboard
    3. Use strong, randomly generated passwords for FLOWISE_PASSWORD and FLOWISE_SECRETKEY_OVERWRITE (see the snippet after this list)
    4. Rotate API keys periodically
    5. Use environment-specific keys (separate keys for development and production)
    6. Consider using a secrets management service for enterprise deployments
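
A strong value for either secret can be generated with OpenSSL:

Terminal window
# 32-byte random hex string, suitable for FLOWISE_SECRETKEY_OVERWRITE
openssl rand -hex 32
# Random password, suitable for FLOWISE_PASSWORD
openssl rand -base64 24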

Sample FlowiseAI Configuration

Here’s a complete example of how to structure your FlowiseAI project for deployment:

Project Structure:

my-flowise-app/
├── Dockerfile
├── .dockerignore
├── package.json
├── package-lock.json
├── README.md
└── start.sh

Complete package.json Example:

{
  "name": "my-flowise-app",
  "version": "1.0.0",
  "description": "FlowiseAI deployment on Klutch.sh",
  "main": "index.js",
  "scripts": {
    "start": "npx flowise start --port 3000",
    "dev": "npx flowise start --port 3000"
  },
  "keywords": ["flowise", "ai", "llm", "workflow"],
  "author": "Your Name",
  "license": "MIT",
  "dependencies": {
    "flowise": "^1.4.0"
  }
}

Environment Variables Template (.env.example):

# Copy this to .env and fill in your values (DO NOT commit .env to git)
# Admin credentials
FLOWISE_USERNAME=admin
FLOWISE_PASSWORD=change-this-password
# Security
FLOWISE_SECRETKEY_OVERWRITE=generate-a-random-secret-key
# Storage paths
DATABASE_PATH=/app/data
APIKEY_PATH=/app/data
SECRETKEY_PATH=/app/data
LOG_PATH=/app/data/logs
# LLM Provider Keys
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
# App configuration
PORT=3000
CORS_ORIGINS=*
FLOWISE_FILE_SIZE_LIMIT=50mb
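
To load this template into your shell for local testing, one common pattern (it assumes simple values with no spaces or quoting) is:

Terminal window
# Export every non-comment line of .env into the current shell, then start
export $(grep -v '^#' .env | xargs)
npm start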

Getting Started with FlowiseAI

Once deployed, you can start building AI workflows:

    1. Access your FlowiseAI instance:

      • Navigate to your app URL: https://example-app.klutch.sh
      • Log in with your FLOWISE_USERNAME and FLOWISE_PASSWORD
    2. Create your first chatflow:

      • Click “Add New Chatflow” from the dashboard
      • Give your chatflow a name (e.g., “Customer Support Bot”)
    3. Build a simple conversational AI:

      • Drag a “ChatOpenAI” node onto the canvas
      • Configure it with your model preference (e.g., gpt-4)
      • Drag a “Conversation Chain” node
      • Connect the ChatOpenAI output to the Conversation Chain’s LLM input
      • Add a “Buffer Memory” node to maintain conversation history
      • Connect it to the Conversation Chain’s memory input
    4. Test your chatflow:

      • Click the “Save Chatflow” button
      • Use the built-in chat interface on the right side to test
      • Ask questions and verify the AI responds appropriately
    5. Get the API endpoint:

      • Click “API” in the top right to view your chatflow’s API endpoint
      • Use this endpoint to integrate FlowiseAI into your applications (an authenticated variant appears after this list)
      Terminal window
      # Example API call
      curl -X POST https://example-app.klutch.sh/api/v1/prediction/your-chatflow-id \
        -H "Content-Type: application/json" \
        -d '{"question": "Hello, how can you help me?"}'
    6. Build more advanced flows:

      • Add document loaders for RAG (Retrieval-Augmented Generation)
      • Connect to vector databases (Pinecone, Weaviate, etc.)
      • Integrate tools and APIs (web scraping, calculators, etc.)
      • Use agents for complex, multi-step reasoning
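
If you assign an API key to a chatflow (via its API dialog), requests must present it as a bearer token. A sketch with placeholder IDs and keys:

Terminal window
# Call an API-key-protected chatflow (all values are placeholders)
curl -X POST https://example-app.klutch.sh/api/v1/prediction/your-chatflow-id \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-flowise-api-key" \
  -d '{"question": "Summarize our refund policy."}'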

Docker Compose for Local Development

For local development and testing, you can use Docker Compose. Note: Docker Compose is not supported by Klutch.sh for deployment; use this only for local development.

docker-compose.yml Example:

version: '3.8'

services:
  flowise:
    build: .
    container_name: flowise
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - flowise-data:/app/data
    environment:
      - PORT=3000
      - FLOWISE_USERNAME=admin
      - FLOWISE_PASSWORD=test1234
      - FLOWISE_SECRETKEY_OVERWRITE=mySecretKey123
      - DATABASE_PATH=/app/data
      - APIKEY_PATH=/app/data
      - SECRETKEY_PATH=/app/data
      - LOG_PATH=/app/data/logs
      - CORS_ORIGINS=*
      - FLOWISE_FILE_SIZE_LIMIT=50mb

volumes:
  flowise-data:

To run locally with Docker Compose:

Terminal window
# Start FlowiseAI
docker-compose up -d
# View logs
docker-compose logs -f
# Stop FlowiseAI
docker-compose down

Access FlowiseAI at http://localhost:3000.


Production Best Practices

Performance and Scaling:

    1. Resource Allocation:

      • Start with at least 1 GB RAM and 1 vCPU
      • Monitor resource usage and scale up as needed
      • For production workloads with multiple concurrent users, consider 2+ GB RAM
    2. Database Considerations:

      • Default SQLite database is suitable for small to medium deployments
      • For high-traffic production apps, use PostgreSQL (a quick connectivity check follows this list):
        DATABASE_TYPE=postgres
        DATABASE_HOST=your-postgres.klutch.sh
        DATABASE_PORT=5432
        DATABASE_NAME=flowise
        DATABASE_USER=flowise_user
        DATABASE_PASSWORD=secure-password
    3. Caching and Performance:

      • Enable caching for frequently used embeddings
      • Use vector databases with built-in caching (Pinecone, Weaviate)
      • Implement rate limiting for public-facing APIs
    4. Multiple Instances:

      • Deploy multiple instances for high availability
      • Use session affinity if maintaining stateful conversations
      • When running multiple instances, prefer an external database such as PostgreSQL; SQLite on a shared volume is prone to locking issues under concurrent writes
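
As referenced above, a quick way to confirm connectivity to your PostgreSQL instance (assumes the psql client is installed; host and credentials are placeholders):

Terminal window
# Verify PostgreSQL connectivity with the credentials FlowiseAI will use
# (set PGPASSWORD or a ~/.pgpass entry to supply the password)
psql "host=your-postgres-host port=5432 dbname=flowise user=flowise_user" \
  -c "SELECT version();"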

Security:

    1. Authentication:

      • Always enable FlowiseAI username/password protection
      • Use strong passwords and rotate them regularly
      • Consider adding an authentication proxy for additional security
    2. API Security:

      • Implement API key authentication for production endpoints
      • Use CORS configuration to restrict origins
      • Rate limit API requests to prevent abuse
    3. Data Protection:

      • Encrypt sensitive data at rest
      • Use HTTPS for all communications (provided by Klutch.sh)
      • Regularly backup your persistent volume
      • Implement access controls for admin functions
    4. Network Security:

      • Use HTTP traffic type in Klutch.sh (HTTPS is handled automatically)
      • Restrict access to sensitive endpoints
      • Monitor for suspicious activity

Monitoring and Maintenance:

    1. Health Monitoring:

      • Monitor application logs in the Klutch.sh dashboard
      • Set up alerts for errors and performance issues
      • Track API response times and error rates (a minimal polling script follows these lists)
    2. Logging:

      • Configure LOG_PATH to write to persistent storage
      • Implement log rotation to prevent disk space issues
      • Review logs regularly for errors and security issues
    3. Backups:

      • Regularly backup your persistent volume
      • Export critical chatflows and configurations
      • Test restoration procedures
    4. Updates:

      • Keep FlowiseAI updated to the latest version
      • Test updates in a staging environment first
      • Review changelogs for breaking changes
      • Pin specific versions in production for stability
    5. Cost Optimization:

      • Monitor LLM API usage and costs
      • Implement caching to reduce redundant API calls
      • Use cheaper models for non-critical operations
      • Set up billing alerts with your LLM providers
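
As mentioned under Health Monitoring, a minimal external polling script might look like this sketch (the URL and alerting hook are placeholders); run it from cron or a scheduler:

#!/usr/bin/env bash
# healthcheck.sh - poll the deployed FlowiseAI instance and report failures
URL="https://example-app.klutch.sh"  # placeholder: your app URL
if ! curl -fsS --max-time 10 "$URL" > /dev/null; then
  echo "FlowiseAI at $URL is unreachable at $(date)" >&2
  # Hook your alerting in here (Slack webhook, email, pager, etc.)
fi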

Development Workflow:

    1. Use Git for version control of your configuration
    2. Maintain separate environments (development, staging, production)
    3. Test chatflows thoroughly before deploying to production
    4. Document your chatflows and integrations
    5. Use environment variables for environment-specific configuration

Troubleshooting Common Issues

Issue: Application won’t start

  • Check environment variables are set correctly
  • Verify DATABASE_PATH points to a writable directory
  • Review startup logs in the Klutch.sh dashboard

Issue: Chatflows not persisting

  • Ensure persistent volume is attached at /app/data
  • Verify DATABASE_PATH environment variable is set
  • Check file permissions on the mounted volume
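
If your platform provides a shell into the running container, you can verify the last two points directly:

Terminal window
# Confirm the data directory exists and is writable by the FlowiseAI process
ls -ld /app/data
touch /app/data/.write-test && rm /app/data/.write-test && echo "writable"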

Issue: API key errors

  • Verify API keys are correctly set as environment variables
  • Ensure keys are marked as secrets in Klutch.sh
  • Check that keys have sufficient permissions/credits

Issue: Out of memory

  • Increase compute resources in Klutch.sh
  • Optimize chatflows to reduce memory usage
  • Consider using external vector databases instead of in-memory

Issue: Slow response times

  • Check LLM provider API response times
  • Enable caching for embeddings and responses
  • Optimize chatflow complexity
  • Consider using faster LLM models

Deploying FlowiseAI on Klutch.sh provides you with a powerful, scalable platform for building and running AI workflows in production. With persistent storage, secure environment variable management, and automatic HTTPS, you can focus on building amazing AI applications without worrying about infrastructure. Start building your AI workflows today!