Deploying Leon

Introduction

Leon is an open-source personal assistant that runs on your own server, giving you complete control over your AI assistant without relying on cloud services from major tech companies. Inspired by commercial assistants like Alexa and Google Assistant, Leon provides voice recognition, natural language understanding, and extensible skill modules while keeping your data private.

Built with Node.js and Python, Leon uses a modular architecture that allows developers to create custom skills for virtually any task. The assistant can understand voice commands, execute actions, provide spoken responses, and learn from interactions to improve over time.

Key highlights of Leon:

  • Privacy-Focused: All processing happens on your server; no data is sent to external services
  • Voice Recognition: Built-in speech-to-text and text-to-speech capabilities
  • Natural Language Understanding: Interprets intent from conversational commands
  • Modular Skills: Extensible architecture for adding custom functionality
  • Multi-Language Support: Supports English, French, and other languages
  • Offline Capable: Core functionality works without internet connectivity
  • RESTful API: Integrate Leon with other applications and services
  • Web Interface: Browser-based client for interacting with your assistant
  • Memory System: Remembers context and learns from conversations
  • Open Source: MIT licensed with active community development

This guide walks through deploying Leon on Klutch.sh using Docker, configuring the assistant, and creating your first custom skills.

Why Deploy Leon on Klutch.sh

Deploying Leon on Klutch.sh provides several advantages:

Simplified Deployment: Klutch.sh automatically detects your Dockerfile and builds Leon without complex setup. Push to GitHub, and your assistant deploys automatically.

Persistent Storage: Attach persistent volumes for Leon’s brain data, skills, and configuration. Your assistant’s memory survives restarts.

HTTPS by Default: Klutch.sh provides automatic SSL certificates for secure access to your assistant from anywhere.

GitHub Integration: Connect your repository directly from GitHub. Updates trigger automatic redeployments.

Scalable Resources: Allocate CPU and memory based on your usage patterns. Voice processing benefits from additional resources.

Environment Variable Management: Securely store API keys and configuration without exposing them in your repository.

Custom Domains: Assign a custom domain for accessing your personal assistant.

Always-On Availability: Your assistant remains available 24/7 without managing hardware.

Prerequisites

Before deploying Leon on Klutch.sh, ensure you have:

  • A Klutch.sh account
  • A GitHub account with a repository for your Leon configuration
  • Basic familiarity with Docker and containerization concepts
  • (Optional) API keys for enhanced skills (weather, calendar, etc.)
  • (Optional) A custom domain for your Leon instance

Understanding Leon Architecture

Leon is built on a multi-component architecture:

Core Server: Node.js server handling HTTP requests, WebSocket connections, and skill orchestration.

NLU Engine: Natural Language Understanding component that processes text input to determine intent and extract entities.

Python Bridge: Python runtime for executing skill actions and complex computations.

Skills Packages: Modular skill packages organized by domain (calendar, weather, utilities, etc.).

Brain Storage: SQLite database storing conversation history, learned patterns, and skill data.

TTS/STT: Text-to-speech and speech-to-text engines for voice interaction.
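To make the brain-storage idea concrete, here is a minimal sketch of a conversation-history table in SQLite. The schema and intent names are purely illustrative and do not match Leon's internal format.

```python
import sqlite3

# Illustrative only: Leon's actual brain schema differs.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE history (id INTEGER PRIMARY KEY, utterance TEXT, intent TEXT)"
)
conn.execute(
    "INSERT INTO history (utterance, intent) VALUES (?, ?)",
    ("What time is it?", "clock.current_time"),
)
row = conn.execute("SELECT utterance, intent FROM history").fetchone()
print(row)  # ('What time is it?', 'clock.current_time')
conn.close()
```

The point is simply that conversation history is ordinary relational data on disk, which is why backing up the data directory (covered later) preserves the assistant's memory.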

Preparing Your Repository

Repository Structure

leon-deploy/
├── Dockerfile
├── README.md
└── .dockerignore

Creating the Dockerfile

FROM leonai/leon:latest
# Set environment variables
ENV LEON_LANG=en
ENV LEON_HOST=0.0.0.0
ENV LEON_PORT=1337
ENV LEON_TIME_ZONE=America/New_York
# Create data directories
RUN mkdir -p /leon/data /leon/skills
# Expose the web interface port
EXPOSE 1337
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
  CMD curl -f http://localhost:1337/api/info || exit 1
# Start Leon
CMD ["npm", "start"]

Creating the .dockerignore File

.git
.github
*.md
LICENSE
.gitignore
*.log
.DS_Store
.env
.env.local
node_modules/

Environment Variables Reference

Variable            Required  Default  Description
LEON_LANG           No        en       Primary language (en, fr)
LEON_HOST           No        0.0.0.0  Server binding address
LEON_PORT           No        1337     HTTP server port
LEON_TIME_ZONE      No        UTC      Timezone for scheduling
LEON_AFTER_SPEECH   No        true     Enable text-to-speech responses
LEON_STT_PROVIDER   No        coqui    Speech-to-text provider
LEON_TTS_PROVIDER   No        flite    Text-to-speech provider
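The snippet below shows how an application typically reads variables like these, falling back to the documented defaults when they are unset. It is an illustration of the pattern, not Leon's actual configuration code.

```python
import os

# Read Leon-style settings, falling back to the documented defaults.
config = {
    "lang": os.environ.get("LEON_LANG", "en"),
    "host": os.environ.get("LEON_HOST", "0.0.0.0"),
    "port": int(os.environ.get("LEON_PORT", "1337")),
    "time_zone": os.environ.get("LEON_TIME_ZONE", "UTC"),
}
print(config)
```

Because every variable has a sensible default, you only need to set the ones you want to override in the Klutch.sh dashboard.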

Deploying Leon on Klutch.sh

    Push Your Repository to GitHub

    Terminal window
    git init
    git add Dockerfile .dockerignore README.md
    git commit -m "Initial Leon deployment configuration"
    git remote add origin https://github.com/yourusername/leon-deploy.git
    git push -u origin main

    Create a New Project on Klutch.sh

    Navigate to the Klutch.sh dashboard and create a new project named “leon” or “assistant”.

    Create a New App

    Within your project, create a new app. Connect your GitHub account and select your Leon repository.

    Configure HTTP Traffic

    In the deployment settings:

    • Select HTTP as the traffic type
    • Set the internal port to 1337 (Leon’s default port)

    Set Environment Variables

    Add the following environment variables:

    Variable        Value
    LEON_LANG       en (or your preferred language)
    LEON_TIME_ZONE  Your timezone (e.g., America/New_York)
    LEON_HOST       0.0.0.0

    Attach Persistent Volumes

    Mount Path    Recommended Size  Purpose
    /leon/data    5 GB              Brain data, conversation history, and learned patterns
    /leon/skills  2 GB              Custom skills and configurations

    Deploy Your Application

    Click Deploy to start the build process. Klutch.sh will build the container, attach volumes, and provision an HTTPS certificate.

    Access Leon

    Once deployment completes, access Leon at https://your-app.klutch.sh. The web interface allows you to interact with your assistant via text or voice.

Initial Configuration

First Interaction

After accessing Leon:

  1. Type or speak “Hello Leon” to test connectivity
  2. Try built-in commands like “What time is it?” or “Tell me a joke”
  3. Explore available skills by asking “What can you do?”

Configuring Skills

Enable and configure skill packages:

  1. Review available skills in the Leon documentation
  2. Configure API keys for skills requiring external services
  3. Set up calendar, weather, and other integrations

Voice Configuration

For voice interaction:

  1. Configure your preferred STT/TTS providers
  2. Test microphone input through the web interface
  3. Adjust sensitivity and language settings

Working with Skills

Built-in Skills

Leon includes several skill packages:

  • Utilities: Calculator, timer, unit conversion
  • Information: Weather, Wikipedia lookups
  • Productivity: Todo lists, reminders, calendar
  • Entertainment: Jokes, music control

Creating Custom Skills

Develop skills specific to your needs:

  1. Create a skill package directory
  2. Define intents and responses
  3. Implement action handlers in Python
  4. Test and deploy your skill
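The steps above can be sketched as a tiny action handler. The function signature and response format here are hypothetical: they illustrate the general shape of a Python skill action, not Leon's actual skill SDK.

```python
def run(params):
    """Hypothetical skill action: greet a named user."""
    name = params.get("name", "there")
    return {"speech": f"Hello, {name}!", "success": True}

# Example invocation, roughly as a Python bridge might call it
# after the NLU engine extracts the "name" entity:
result = run({"name": "Ada"})
print(result["speech"])  # Hello, Ada!
```

In a real skill, the intent definitions map user utterances to this handler, and the returned text is passed to the TTS engine for the spoken response.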

Skill Configuration

Configure skills through environment variables or configuration files:

{
  "weather": {
    "api_key": "your-api-key",
    "default_city": "New York"
  }
}
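A skill can then load such a file with the standard json module. The keys below are the hypothetical ones from the example above; the actual location of skill config files depends on your Leon version.

```python
import json

# Parse the example configuration (inlined here; on disk it would live
# in the skill's configuration directory).
raw = '{"weather": {"api_key": "your-api-key", "default_city": "New York"}}'
config = json.loads(raw)
print(config["weather"]["default_city"])  # New York
```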

Production Best Practices

Security Recommendations

  • Use strong authentication for the web interface
  • Keep Leon updated to receive security patches
  • Limit network access if you only use Leon locally
  • Review skill permissions before enabling

Performance Optimization

  • Allocate sufficient memory for voice processing
  • Disable unused skills to reduce resource usage
  • Use efficient TTS/STT providers
  • Configure appropriate timeout values

Backup Strategy

  1. Back up the /leon/data directory regularly
  2. Export skill configurations
  3. Version control custom skills
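A minimal backup sketch using only the standard library, assuming the data directory is mounted at /leon/data as configured earlier. shutil.make_archive produces a dated tarball you can copy off the host; the demo below runs against a throwaway directory instead of the real volume.

```python
import os
import shutil
import tempfile
from datetime import date

def backup(data_dir, out_dir):
    """Archive data_dir into out_dir as a dated .tar.gz and return its path."""
    name = os.path.join(out_dir, f"leon-backup-{date.today():%Y%m%d}")
    return shutil.make_archive(name, "gztar", root_dir=data_dir)

# Demo against a temporary directory instead of /leon/data:
with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, "data"))
    archive = backup(os.path.join(tmp, "data"), tmp)
    print(os.path.basename(archive))
```

Schedule something like this (via cron or a sidecar job) and ship the archives to storage outside the container, so a volume failure does not erase the assistant's memory.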

Troubleshooting

Voice Recognition Issues

  • Check microphone permissions in the browser
  • Verify STT provider is properly configured
  • Test with text input to isolate the issue
  • Review audio quality and background noise

Skills Not Responding

  • Check skill configurations for errors
  • Verify required API keys are set
  • Review Leon logs for error messages
  • Test skills individually

Connection Problems

  • Verify the deployment is running
  • Check HTTP traffic configuration
  • Ensure port 1337 is correctly mapped
  • Test WebSocket connectivity
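A quick reachability probe helps separate deployment problems from skill problems. The /api/info path below is the one used by this guide's Dockerfile health check; if your Leon version exposes a different endpoint, adjust accordingly.

```python
import urllib.error
import urllib.request

def leon_reachable(base_url, timeout=5):
    """Return True if the Leon HTTP endpoint answers, False otherwise."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/info", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Example: probe a local instance (False if nothing listens on 1337).
print(leon_reachable("http://localhost:1337"))
```

If this returns False for your deployed URL, the problem is at the HTTP layer (port mapping, build failure, crashed container) rather than in Leon's skills or NLU.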

Conclusion

Deploying Leon on Klutch.sh gives you a powerful, privacy-focused personal assistant with automatic builds and secure HTTPS access. Unlike cloud-based alternatives, Leon keeps your conversations and data on your own infrastructure.

With extensible skills, voice capabilities, and an active open-source community, Leon provides a foundation for building a truly personalized AI assistant that respects your privacy.