Deploying Leon
Introduction
Leon is an open-source personal assistant that runs on your own server, giving you complete control over your AI assistant without relying on cloud services from major tech companies. Inspired by commercial assistants like Alexa and Google Assistant, Leon provides voice recognition, natural language understanding, and extensible skill modules while keeping your data private.
Built with Node.js and Python, Leon uses a modular architecture that allows developers to create custom skills for virtually any task. The assistant can understand voice commands, execute actions, provide spoken responses, and learn from interactions to improve over time.
Key highlights of Leon:
- Privacy-Focused: All processing happens on your server; no data is sent to external services
- Voice Recognition: Built-in speech-to-text and text-to-speech capabilities
- Natural Language Understanding: Interprets intent from conversational commands
- Modular Skills: Extensible architecture for adding custom functionality
- Multi-Language Support: Supports English, French, and other languages
- Offline Capable: Core functionality works without internet connectivity
- RESTful API: Integrate Leon with other applications and services
- Web Interface: Browser-based client for interacting with your assistant
- Memory System: Remembers context and learns from conversations
- Open Source: MIT licensed with active community development
This guide walks through deploying Leon on Klutch.sh using Docker, configuring the assistant, and creating your first custom skills.
Why Deploy Leon on Klutch.sh
Deploying Leon on Klutch.sh provides several advantages:
Simplified Deployment: Klutch.sh automatically detects your Dockerfile and builds Leon without complex setup. Push to GitHub, and your assistant deploys automatically.
Persistent Storage: Attach persistent volumes for Leon’s brain data, skills, and configuration. Your assistant’s memory survives restarts.
HTTPS by Default: Klutch.sh provides automatic SSL certificates for secure access to your assistant from anywhere.
GitHub Integration: Connect your repository directly from GitHub. Updates trigger automatic redeployments.
Scalable Resources: Allocate CPU and memory based on your usage patterns. Voice processing benefits from additional resources.
Environment Variable Management: Securely store API keys and configuration without exposing them in your repository.
Custom Domains: Assign a custom domain for accessing your personal assistant.
Always-On Availability: Your assistant remains available 24/7 without managing hardware.
Prerequisites
Before deploying Leon on Klutch.sh, ensure you have:
- A Klutch.sh account
- A GitHub account with a repository for your Leon configuration
- Basic familiarity with Docker and containerization concepts
- (Optional) API keys for enhanced skills (weather, calendar, etc.)
- (Optional) A custom domain for your Leon instance
Understanding Leon Architecture
Leon is built on a sophisticated multi-component architecture:
Core Server: Node.js server handling HTTP requests, WebSocket connections, and skill orchestration.
NLU Engine: Natural Language Understanding component that processes text input to determine intent and extract entities.
Python Bridge: Python runtime for executing skill actions and complex computations.
Skills Packages: Modular skill packages organized by domain (calendar, weather, utilities, etc.).
Brain Storage: SQLite database storing conversation history, learned patterns, and skill data.
TTS/STT: Text-to-speech and speech-to-text engines for voice interaction.
Preparing Your Repository
Repository Structure
```text
leon-deploy/
├── Dockerfile
├── README.md
└── .dockerignore
```

Creating the Dockerfile
```dockerfile
FROM leonai/leon:latest

# Set environment variables
ENV LEON_LANG=en
ENV LEON_HOST=0.0.0.0
ENV LEON_PORT=1337
ENV LEON_TIME_ZONE=America/New_York

# Create data directories
RUN mkdir -p /leon/data /leon/skills

# Expose the web interface port
EXPOSE 1337

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
  CMD curl -f http://localhost:1337/api/info || exit 1

# Start Leon
CMD ["npm", "start"]
```

Creating the .dockerignore File
```text
.git
.github
*.md
LICENSE
.gitignore
*.log
.DS_Store
.env
.env.local
node_modules/
```

Environment Variables Reference
| Variable | Required | Default | Description |
|---|---|---|---|
| `LEON_LANG` | No | `en` | Primary language (`en`, `fr`) |
| `LEON_HOST` | No | `0.0.0.0` | Server binding address |
| `LEON_PORT` | No | `1337` | HTTP server port |
| `LEON_TIME_ZONE` | No | `UTC` | Timezone for scheduling |
| `LEON_AFTER_SPEECH` | No | `true` | Enable text-to-speech responses |
| `LEON_STT_PROVIDER` | No | `coqui` | Speech-to-text provider |
| `LEON_TTS_PROVIDER` | No | `flite` | Text-to-speech provider |
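As an illustration of how these variables interact with their defaults, the helper below reads them the way a startup script or custom skill might. It is a sketch, not part of Leon's codebase; the function name `leon_config` is ours:

```python
import os

def leon_config() -> dict:
    """Read Leon's environment variables, falling back to the documented defaults."""
    return {
        "lang": os.getenv("LEON_LANG", "en"),
        "host": os.getenv("LEON_HOST", "0.0.0.0"),
        "port": int(os.getenv("LEON_PORT", "1337")),
        "time_zone": os.getenv("LEON_TIME_ZONE", "UTC"),
        # Boolean-like variables arrive as strings, so compare explicitly
        "after_speech": os.getenv("LEON_AFTER_SPEECH", "true").lower() == "true",
        "stt_provider": os.getenv("LEON_STT_PROVIDER", "coqui"),
        "tts_provider": os.getenv("LEON_TTS_PROVIDER", "flite"),
    }
```

Any variable you leave unset in the Klutch.sh dashboard simply falls through to its default.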
Deploying Leon on Klutch.sh
Push Your Repository to GitHub

```shell
git init
git add Dockerfile .dockerignore README.md
git commit -m "Initial Leon deployment configuration"
git remote add origin https://github.com/yourusername/leon-deploy.git
git push -u origin main
```

Create a New Project on Klutch.sh
Navigate to the Klutch.sh dashboard and create a new project named “leon” or “assistant”.
Create a New App
Within your project, create a new app. Connect your GitHub account and select your Leon repository.
Configure HTTP Traffic
In the deployment settings:
- Select HTTP as the traffic type
- Set the internal port to 1337 (Leon’s default port)
Set Environment Variables
Add the following environment variables:
| Variable | Value |
|---|---|
| `LEON_LANG` | `en` (or your preferred language) |
| `LEON_TIME_ZONE` | Your timezone (e.g., `America/New_York`) |
| `LEON_HOST` | `0.0.0.0` |
Attach Persistent Volumes
| Mount Path | Recommended Size | Purpose |
|---|---|---|
| `/leon/data` | 5 GB | Brain data, conversation history, and learned patterns |
| `/leon/skills` | 2 GB | Custom skills and configurations |
Deploy Your Application
Click Deploy to start the build process. Klutch.sh will build the container, attach volumes, and provision an HTTPS certificate.
Access Leon
Once deployment completes, access Leon at https://your-app.klutch.sh. The web interface allows you to interact with your assistant via text or voice.
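You can also confirm the deployment from a script by polling the same `/api/info` endpoint the Dockerfile's health check uses. A minimal sketch using only Python's standard library (the hostname below is a placeholder for your app's URL):

```python
import urllib.request

def check_leon(base_url: str, timeout: float = 10.0) -> bool:
    """Return True if Leon's info endpoint responds with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/info", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection refused, DNS failures, and timeouts (URLError is an OSError)
        return False

# Example usage with a placeholder hostname:
# check_leon("https://your-app.klutch.sh")
```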
Initial Configuration
First Interaction
After accessing Leon:
- Type or speak “Hello Leon” to test connectivity
- Try built-in commands like “What time is it?” or “Tell me a joke”
- Explore available skills by asking “What can you do?”
Configuring Skills
Enable and configure skill packages:
- Review available skills in the Leon documentation
- Configure API keys for skills requiring external services
- Set up calendar, weather, and other integrations
Voice Configuration
For voice interaction:
- Configure your preferred STT/TTS providers
- Test microphone input through the web interface
- Adjust sensitivity and language settings
Working with Skills
Built-in Skills
Leon includes several skill packages:
- Utilities: Calculator, timer, unit conversion
- Information: Weather, Wikipedia lookups
- Productivity: Todo lists, reminders, calendar
- Entertainment: Jokes, music control
Creating Custom Skills
Develop skills specific to your needs:
- Create a skill package directory
- Define intents and responses
- Implement action handlers in Python
- Test and deploy your skill
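The action handler in step 3 is the heart of a skill: it receives the entities the NLU engine extracted and returns what Leon should say. The sketch below shows that shape only; Leon's actual Python SDK differs, so treat the `run` signature and the return dictionary as hypothetical illustrations of the pattern, not Leon's API:

```python
# Illustrative action handler shape; the `params` dict and return keys
# are hypothetical, not Leon's real SDK.
def run(params: dict) -> dict:
    """Greet the user by name if the NLU extracted one, else generically."""
    name = params.get("name")
    if name:
        return {"speech": f"Hello, {name}! How can I help?", "success": True}
    return {"speech": "Hello! How can I help?", "success": True}
```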
Skill Configuration
Configure skills through environment variables or configuration files:
```json
{
  "weather": {
    "api_key": "your-api-key",
    "default_city": "New York"
  }
}
```

Production Best Practices
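Keeping secrets out of the config file is usually preferable: store the API key as a Klutch.sh environment variable and let it override the file at load time. The loader below is a sketch; the `WEATHER_API_KEY`-style variable naming is our own convention, not something Leon defines:

```python
import json
import os

def load_skill_config(path: str, skill: str) -> dict:
    """Load one skill's config block, letting an environment variable
    (e.g. WEATHER_API_KEY -- an illustrative convention) override the file."""
    with open(path, encoding="utf-8") as fh:
        config = json.load(fh).get(skill, {})
    env_key = os.getenv(f"{skill.upper()}_API_KEY")
    if env_key:
        config["api_key"] = env_key
    return config
```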
Security Recommendations
- Use strong authentication for the web interface
- Keep Leon updated to receive security patches
- Restrict network access if you only use Leon locally
- Review skill permissions before enabling
Performance Optimization
- Allocate sufficient memory for voice processing
- Disable unused skills to reduce resource usage
- Use efficient TTS/STT providers
- Configure appropriate timeout values
Backup Strategy
- Back up the /leon/data directory regularly
- Export skill configurations
- Version control custom skills
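One simple way to implement the first point is a timestamped archive of the data volume. The helper below is a sketch assuming you run it (for example, on a schedule) with read access to /leon/data; the function name and destination layout are ours:

```python
import tarfile
import time
from pathlib import Path

def backup_dir(source: str, dest_dir: str) -> Path:
    """Archive a directory (e.g. /leon/data) into a timestamped .tar.gz."""
    src = Path(source)
    out = Path(dest_dir) / f"{src.name}-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    out.parent.mkdir(parents=True, exist_ok=True)
    with tarfile.open(out, "w:gz") as tar:
        # arcname keeps paths inside the archive relative to the source dir
        tar.add(src, arcname=src.name)
    return out
```

Copy the resulting archives off the server so a volume failure cannot take your backups with it.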
Troubleshooting
Voice Recognition Issues
- Check microphone permissions in browser
- Verify STT provider is properly configured
- Test with text input to isolate the issue
- Review audio quality and background noise
Skills Not Responding
- Check skill configurations for errors
- Verify required API keys are set
- Review Leon logs for error messages
- Test skills individually
Connection Problems
- Verify the deployment is running
- Check HTTP traffic configuration
- Ensure port 1337 is correctly mapped
- Test WebSocket connectivity
Additional Resources
- Official Leon Website
- Leon Documentation
- Leon GitHub Repository
- Leon Discord Community
- Klutch.sh Persistent Volumes
- Klutch.sh Deployments
Conclusion
Deploying Leon on Klutch.sh gives you a powerful, privacy-focused personal assistant with automatic builds and secure HTTPS access. Unlike cloud-based alternatives, Leon keeps your conversations and data on your own infrastructure.
With extensible skills, voice capabilities, and an active open-source community, Leon provides a foundation for building a truly personalized AI assistant that respects your privacy.