
Deploying a Streamlit App

What is Streamlit?

Streamlit is an open-source Python library that turns data scripts into shareable web apps with minimal code. Built specifically for data scientists and machine learning engineers, Streamlit eliminates the need for front-end expertise by providing a simple, intuitive API for creating interactive dashboards, data visualizations, and machine learning applications.

Key features include:

  • Simple, Pythonic API for building web interfaces
  • Automatic rerun on code changes for rapid development
  • Support for interactive widgets (sliders, buttons, text inputs, file uploads)
  • Built-in charting and visualization libraries (Plotly, Altair, Matplotlib)
  • Session state management for stateful applications
  • Caching mechanisms for performance optimization
  • Multi-page app support for larger applications
  • File upload and download capabilities
  • Real-time data streaming
  • Database integration (SQLite, PostgreSQL, MongoDB)
  • Machine learning model integration
  • Custom CSS and HTML support
  • Theme customization
  • Secrets management for sensitive data
  • Analytics and usage tracking
  • Share and deployment capabilities
  • Docker containerization support

Streamlit is ideal for creating data dashboards, building machine learning demos, prototyping data applications, visualizing analysis results, creating interactive reports, monitoring real-time data, deploying ML models, and building data processing tools.

Prerequisites

Before deploying a Streamlit application to Klutch.sh, ensure you have:

  • Python 3.9+ installed on your local machine
  • pip or conda for dependency management
  • Git and a GitHub account
  • A Klutch.sh account with dashboard access
  • Basic understanding of Python programming
  • Optional: Data files or databases for your application
  • Optional: Machine learning models

Getting Started with Streamlit

Step 1: Create Your Project Directory and Virtual Environment

Terminal window
mkdir my-streamlit-app
cd my-streamlit-app
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate

Step 2: Install Streamlit and Dependencies

Terminal window
pip install streamlit pandas numpy plotly altair requests

Key packages:

  • streamlit: The web framework for data apps
  • pandas: Data manipulation and analysis
  • numpy: Numerical computing
  • plotly: Interactive visualization library
  • altair: Statistical visualization
  • requests: HTTP library for API calls

Step 3: Create Your Streamlit Application

Create app.py:

import streamlit as st
import pandas as pd
import numpy as np
import plotly.express as px
from datetime import datetime, timedelta
import os

# Page configuration
st.set_page_config(
    page_title="Data Dashboard",
    page_icon="📊",
    layout="wide",
    initial_sidebar_state="expanded"
)

# Custom theme
st.markdown("""
<style>
.main {
    padding: 2rem;
}
</style>
""", unsafe_allow_html=True)

# Initialize session state
if 'count' not in st.session_state:
    st.session_state.count = 0
if 'data' not in st.session_state:
    st.session_state.data = None

# Sidebar navigation
st.sidebar.title("Navigation")
app_mode = st.sidebar.radio(
    "Select Page",
    ["📊 Dashboard", "📈 Analytics", "⚙️ Settings"]
)

# Main title
st.title("Streamlit App on Klutch.sh")
st.markdown("---")

# Dashboard Page
if app_mode == "📊 Dashboard":
    st.header("Dashboard Overview")

    # Metrics
    col1, col2, col3, col4 = st.columns(4)
    with col1:
        st.metric(label="Total Users", value="1,234", delta="+12")
    with col2:
        st.metric(label="Revenue", value="$45,231", delta="+$2,312")
    with col3:
        st.metric(label="Growth Rate", value="23.5%", delta="+1.2%")
    with col4:
        st.metric(label="Active Sessions", value="567", delta="-3")

    st.markdown("---")

    # Generate sample data
    dates = pd.date_range(start='2024-01-01', periods=100, freq='D')
    data = pd.DataFrame({
        'Date': dates,
        'Sales': np.random.randint(100, 1000, 100),
        'Visits': np.random.randint(1000, 5000, 100),
        'Conversion': np.random.uniform(0.01, 0.1, 100)
    })

    # Charts
    col1, col2 = st.columns(2)
    with col1:
        st.subheader("Sales Trend")
        fig = px.line(data, x='Date', y='Sales', title='Sales Over Time')
        st.plotly_chart(fig, use_container_width=True)
    with col2:
        st.subheader("Website Visits")
        fig = px.bar(data, x='Date', y='Visits', title='Daily Visits')
        st.plotly_chart(fig, use_container_width=True)

    # Data table
    st.subheader("Recent Data")
    st.dataframe(data.head(10), use_container_width=True)

# Analytics Page
elif app_mode == "📈 Analytics":
    st.header("Analytics & Insights")

    # Filters
    col1, col2, col3 = st.columns(3)
    with col1:
        metric = st.selectbox("Select Metric", ["Sales", "Visits", "Conversion"])
    with col2:
        period = st.selectbox("Time Period", ["Daily", "Weekly", "Monthly"])
    with col3:
        date_range = st.date_input(
            "Select Date Range",
            value=(datetime.now() - timedelta(days=30), datetime.now())
        )

    # Summary statistics
    st.subheader("Summary Statistics")
    summary_data = {
        'Metric': ['Mean', 'Median', 'Std Dev', 'Min', 'Max'],
        'Value': [
            f"${np.random.randint(100, 1000)}",
            f"${np.random.randint(100, 1000)}",
            f"${np.random.randint(10, 100)}",
            f"${np.random.randint(10, 100)}",
            f"${np.random.randint(1000, 5000)}"
        ]
    }
    st.dataframe(pd.DataFrame(summary_data), use_container_width=True)

    # Distribution chart
    st.subheader("Distribution Analysis")
    values = np.random.normal(500, 100, 1000)
    fig = px.histogram(x=values, nbins=30, title='Value Distribution')
    st.plotly_chart(fig, use_container_width=True)

# Settings Page
elif app_mode == "⚙️ Settings":
    st.header("Application Settings")

    col1, col2 = st.columns(2)
    with col1:
        st.subheader("Display Settings")
        theme = st.selectbox("Theme", ["Light", "Dark", "Auto"])
        language = st.selectbox("Language", ["English", "Spanish", "French"])
    with col2:
        st.subheader("Data Settings")
        csv_file = st.file_uploader("Upload CSV File", type=['csv'])
        if csv_file is not None:
            df = pd.read_csv(csv_file)
            st.session_state.data = df
            st.success("File uploaded successfully!")
            st.dataframe(df.head())

    st.markdown("---")

    # Counter example
    st.subheader("Interactive Counter")
    col1, col2, col3 = st.columns(3)
    with col1:
        if st.button("Increment"):
            st.session_state.count += 1
    with col2:
        st.metric("Count", st.session_state.count)
    with col3:
        if st.button("Reset"):
            st.session_state.count = 0

# Footer
st.markdown("---")
st.markdown("""
<div style="text-align: center; color: gray; font-size: small;">
<p>Deployed on Klutch.sh with Streamlit</p>
<p>Last updated: {}</p>
</div>
""".format(datetime.now().strftime("%Y-%m-%d %H:%M:%S")), unsafe_allow_html=True)

Step 4: Create a Requirements File

Terminal window
pip freeze > requirements.txt

Your requirements.txt should contain:

streamlit==1.28.1
pandas==2.1.3
numpy==1.24.3
plotly==5.18.0
altair==5.1.2
requests==2.31.0

Step 5: Create Streamlit Configuration

Create .streamlit/config.toml for production settings:

[theme]
primaryColor = "#FF6B6B"
backgroundColor = "#FFFFFF"
secondaryBackgroundColor = "#F0F2F6"
textColor = "#262730"
font = "sans serif"

[client]
showErrorDetails = false
toolbarMode = "minimal"

[server]
headless = true
port = 8501
runOnSave = true
enableXsrfProtection = true
maxUploadSize = 200

[logger]
level = "info"

Step 6: Test Locally

Create a .env file for local development:

STREAMLIT_SERVER_PORT=8501
STREAMLIT_SERVER_HEADLESS=true
PYTHONUNBUFFERED=1

Run the application:

Terminal window
streamlit run app.py

Access the app at http://localhost:8501 in your browser. You should see the dashboard with the sidebar navigation (Dashboard, Analytics, Settings).


Deploying Without a Dockerfile

Klutch.sh uses Nixpacks to automatically detect and build your Streamlit application from your source code.

Prepare Your Repository

  1. Initialize a Git repository and commit your code:
Terminal window
git init
git add .
git commit -m "Initial Streamlit app commit"
  2. Create a .gitignore file:
venv/
__pycache__/
*.pyc
*.pyo
*.egg-info/
.env
.DS_Store
.streamlit/secrets.toml
.streamlit/cache/
.pytest_cache/
*.db
*.sqlite3
uploads/
logs/
  3. Push to GitHub:
Terminal window
git remote add origin https://github.com/YOUR_USERNAME/my-streamlit-app.git
git branch -M main
git push -u origin main

Deploy to Klutch.sh

  1. Log in to Klutch.sh dashboard.

  2. Click “Create a new project” and provide a project name.

  3. Inside your project, click “Create a new app”.

  4. Repository Configuration:

    • Select your GitHub repository containing the Streamlit app
    • Select the branch to deploy (typically main)
  5. Traffic Settings:

    • Select “HTTP” as the traffic type
  6. Port Configuration:

    • Set the internal port to 8501 (the default Streamlit port)
  7. Environment Variables: Set the following environment variables in the Klutch.sh dashboard:

    • STREAMLIT_SERVER_PORT: Set to 8501
    • STREAMLIT_SERVER_HEADLESS: Set to true
    • STREAMLIT_SERVER_ENABLE_XSRF_PROTECTION: Set to true
    • PYTHONUNBUFFERED: Set to 1 to ensure Python output is logged immediately
    • STREAMLIT_LOGGER_LEVEL: Set to info for logging
  8. Build and Start Commands (Optional): If you need to customize the build or start command, set these environment variables:

    • BUILD_COMMAND: Default runs pip install -r requirements.txt
    • START_COMMAND: Default is streamlit run app.py --server.port=$PORT --server.address=0.0.0.0
  9. Region, Compute, and Instances:

    • Choose your desired region for optimal latency
    • Select compute resources (Starter for prototypes, Pro/Premium for production)
    • Set the number of instances (start with 1-2, scale as needed based on traffic)
  10. Click “Create” to deploy. Klutch.sh will automatically build your application using Nixpacks and deploy it.

  11. Once deployment completes, your app will be accessible at example-app.klutch.sh.

Verifying the Deployment

Navigate to your deployed app:

https://example-app.klutch.sh

You should see the Streamlit dashboard with all three pages (Dashboard, Analytics, Settings) and be able to interact with all the interactive elements.


Deploying With a Dockerfile

If you prefer more control over your build environment, you can provide a custom Dockerfile. Klutch.sh automatically detects and uses a Dockerfile in your repository’s root directory.

Create a Multi-Stage Dockerfile

Create a Dockerfile in your project root:

# Build stage
FROM python:3.11-slim AS builder

WORKDIR /app

# Install system dependencies needed to compile wheels
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    git \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Runtime stage
FROM python:3.11-slim

WORKDIR /app

# Install runtime dependencies (curl is used by the health check)
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Create non-root user for security
RUN useradd -m -u 1000 streamlit_user

# Copy Python dependencies from the builder into the non-root user's home
# (the runtime user cannot read /root, so the packages must not stay there)
COPY --from=builder --chown=streamlit_user:streamlit_user /root/.local /home/streamlit_user/.local

# Set PATH to pick up the packages installed with --user
ENV PATH=/home/streamlit_user/.local/bin:$PATH
ENV PYTHONUNBUFFERED=1
ENV STREAMLIT_SERVER_HEADLESS=true
ENV STREAMLIT_SERVER_PORT=8501

# Copy application code and create Streamlit directories
COPY --chown=streamlit_user:streamlit_user . .
RUN mkdir -p /app/.streamlit /app/uploads /app/logs && \
    chown -R streamlit_user:streamlit_user /app

USER streamlit_user

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
    CMD curl -f http://localhost:8501/_stcore/health || exit 1

# Expose port
EXPOSE 8501

# Start the application
CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]

Deploy the Dockerfile Version

  1. Push your code with the Dockerfile to GitHub:
Terminal window
git add Dockerfile
git commit -m "Add Dockerfile for custom build"
git push
  2. Log in to Klutch.sh dashboard.

  3. Create a new app:

    • Select your GitHub repository and branch
    • Set traffic type to “HTTP”
    • Set the internal port to 8501
    • Add environment variables (same as Nixpacks deployment)
    • Click “Create”
  4. Klutch.sh will automatically detect your Dockerfile and use it for building and deployment.


Building Multi-Page Applications

Project Structure

Organize your multi-page Streamlit app:

my-streamlit-app/
├── app.py
├── requirements.txt
├── .streamlit/
│   └── config.toml
└── pages/
    ├── 1_dashboard.py
    ├── 2_analytics.py
    ├── 3_reports.py
    └── 4_settings.py

Main App Entry Point

Update app.py for multi-page support:

import streamlit as st

st.set_page_config(
    page_title="Multi-Page App",
    page_icon="🏠",
    layout="wide"
)

st.title("Welcome to Multi-Page Streamlit App")

st.markdown("""
This application demonstrates a multi-page Streamlit structure.
Select a page from the sidebar to navigate between different sections:

- **Dashboard**: Overview and key metrics
- **Analytics**: Detailed analysis and insights
- **Reports**: Generated reports and exports
- **Settings**: Application configuration

Pages are automatically detected from the `pages/` directory.
""")

# Sidebar info
st.sidebar.markdown("---")
st.sidebar.info("This is a multi-page Streamlit application deployed on Klutch.sh")

Example Dashboard Page

Create pages/1_dashboard.py:

import streamlit as st
import pandas as pd
import numpy as np
import plotly.express as px

st.set_page_config(page_title="Dashboard", page_icon="📊")

st.title("📊 Dashboard")

# Sample data
data = pd.DataFrame({
    'Date': pd.date_range('2024-01-01', periods=100),
    'Revenue': np.random.randint(1000, 5000, 100),
    'Users': np.random.randint(100, 1000, 100)
})

# Key metrics
col1, col2, col3 = st.columns(3)
with col1:
    st.metric("Total Revenue", "$45,231", "+$2,312")
with col2:
    st.metric("Total Users", "1,234", "+12")
with col3:
    st.metric("Avg Session", "4m 23s", "-1m 2s")

# Charts
st.subheader("Revenue Trend")
fig = px.line(data, x='Date', y='Revenue')
st.plotly_chart(fig, use_container_width=True)

st.subheader("User Growth")
fig = px.bar(data, x='Date', y='Users')
st.plotly_chart(fig, use_container_width=True)

Session State and Caching

Using Session State for Stateful Apps

import streamlit as st

# Initialize session state
if 'page' not in st.session_state:
    st.session_state.page = 'home'
if 'user_data' not in st.session_state:
    st.session_state.user_data = {}

# Use session state
st.title("Stateful Application")

# Navigation
page = st.radio("Select Page", ["Home", "Profile", "Settings"])
st.session_state.page = page

if page == "Home":
    st.write("Welcome home!")
elif page == "Profile":
    name = st.text_input("Name", value=st.session_state.user_data.get('name', ''))
    email = st.text_input("Email", value=st.session_state.user_data.get('email', ''))
    if st.button("Save Profile"):
        st.session_state.user_data['name'] = name
        st.session_state.user_data['email'] = email
        st.success("Profile saved!")
elif page == "Settings":
    st.write("Settings page")
    if st.button("Clear Data"):
        st.session_state.user_data = {}
        st.success("Data cleared!")

st.write("Current user data:", st.session_state.user_data)

Using Caching for Performance

import streamlit as st
import pandas as pd
import time

# Cache data loading
@st.cache_data
def load_data():
    """Load data with caching."""
    time.sleep(2)  # Simulate slow data load
    return pd.DataFrame({
        'x': range(100),
        'y': range(100, 200)
    })

# Cache resource-heavy objects
@st.cache_resource
def create_model():
    """Create and cache an ML model (requires scikit-learn in requirements.txt)."""
    from sklearn.ensemble import RandomForestClassifier
    return RandomForestClassifier()

st.title("Cached Operations")

# Load cached data
data = load_data()
st.write("Data loaded with cache:", data.head())

# Use cached model
model = create_model()
st.write("Model created and cached")

# Clear cache
if st.button("Clear Cache"):
    st.cache_data.clear()
    st.cache_resource.clear()
    st.success("Cache cleared!")

Data Visualization and Interactivity

Interactive Dashboards

import streamlit as st
import pandas as pd
import plotly.express as px

st.title("Interactive Dashboard")

# Sidebar filters
st.sidebar.header("Filters")
category = st.sidebar.selectbox("Select Category", ["All", "A", "B", "C"])
date_range = st.sidebar.slider("Date Range", 1, 100, (1, 100))

# Generate filtered data
data = pd.DataFrame({
    'Date': range(1, 101),
    'Sales': range(100, 200),
    'Category': ['A', 'B', 'C'] * 33 + ['A']
})
if category != "All":
    data = data[data['Category'] == category]
data = data[(data['Date'] >= date_range[0]) & (data['Date'] <= date_range[1])]

# Display metrics
col1, col2, col3 = st.columns(3)
with col1:
    st.metric("Total Sales", f"${data['Sales'].sum():,}")
with col2:
    st.metric("Avg Sale", f"${data['Sales'].mean():.2f}")
with col3:
    st.metric("Records", len(data))

# Visualizations
st.subheader("Sales Trend")
fig = px.line(data, x='Date', y='Sales', title='Sales Over Time')
st.plotly_chart(fig, use_container_width=True)

st.subheader("Data Table")
st.dataframe(data, use_container_width=True)

Database Integration

SQLite Database Operations

import streamlit as st
import sqlite3
import pandas as pd

# Initialize database
def init_db():
    conn = sqlite3.connect('/app/data.db')
    c = conn.cursor()
    c.execute('''CREATE TABLE IF NOT EXISTS users
                 (id INTEGER PRIMARY KEY, name TEXT, email TEXT)''')
    conn.commit()
    conn.close()

init_db()

st.title("Database Integration")

# Sidebar menu
menu = st.sidebar.radio("Select Action", ["View Users", "Add User", "Delete User"])

if menu == "View Users":
    conn = sqlite3.connect('/app/data.db')
    df = pd.read_sql_query("SELECT * FROM users", conn)
    conn.close()
    st.dataframe(df)
elif menu == "Add User":
    with st.form("add_user"):
        name = st.text_input("Name")
        email = st.text_input("Email")
        submit = st.form_submit_button("Add User")
        if submit:
            conn = sqlite3.connect('/app/data.db')
            c = conn.cursor()
            c.execute("INSERT INTO users (name, email) VALUES (?, ?)", (name, email))
            conn.commit()
            conn.close()
            st.success("User added!")
elif menu == "Delete User":
    conn = sqlite3.connect('/app/data.db')
    df = pd.read_sql_query("SELECT * FROM users", conn)
    conn.close()
    user_id = st.selectbox("Select User to Delete", df['id'])
    if st.button("Delete"):
        conn = sqlite3.connect('/app/data.db')
        c = conn.cursor()
        # Cast to int so sqlite3 accepts the value even if pandas returns a numpy integer
        c.execute("DELETE FROM users WHERE id = ?", (int(user_id),))
        conn.commit()
        conn.close()
        st.success("User deleted!")

File Upload and Download

Handling File Operations

import streamlit as st
import pandas as pd
import io

st.title("File Upload & Download")

# File upload
uploaded_file = st.file_uploader("Choose a CSV file", type=['csv'])

if uploaded_file is not None:
    df = pd.read_csv(uploaded_file)

    st.subheader("File Preview")
    st.dataframe(df)

    st.subheader("File Statistics")
    st.write(f"Shape: {df.shape}")
    st.write(f"Columns: {list(df.columns)}")

    # Processing options
    if st.checkbox("Show Summary Statistics"):
        st.write(df.describe())

    # Download modified file
    st.subheader("Download Options")

    # CSV download
    csv = df.to_csv(index=False)
    st.download_button(
        label="Download as CSV",
        data=csv,
        file_name="output.csv",
        mime="text/csv"
    )

    # Excel download (requires openpyxl in requirements.txt)
    if st.button("Generate Excel"):
        with io.BytesIO() as buffer:
            with pd.ExcelWriter(buffer, engine='openpyxl') as writer:
                df.to_excel(writer, index=False)
            st.download_button(
                label="Download as Excel",
                data=buffer.getvalue(),
                file_name="output.xlsx",
                mime="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
            )

Environment Variables and Configuration

Essential Environment Variables

Configure these variables in the Klutch.sh dashboard:

Variable                                       Description                Example
STREAMLIT_SERVER_PORT                          Application port           8501
STREAMLIT_SERVER_HEADLESS                      Run in headless mode       true
STREAMLIT_SERVER_ENABLE_XSRF_PROTECTION        Enable XSRF protection     true
PYTHONUNBUFFERED                               Unbuffered Python output   1
STREAMLIT_LOGGER_LEVEL                         Logging level              info
STREAMLIT_CLIENT_SHOW_WARNING_ON_PAGE_CHANGE   Show page-change warnings  false

Customization Environment Variables (Nixpacks)

For Nixpacks deployments:

Variable        Purpose        Example
BUILD_COMMAND   Build command  pip install -r requirements.txt
START_COMMAND   Start command  streamlit run app.py --server.port=$PORT --server.address=0.0.0.0

Persistent Storage for Data and Uploads

Adding Persistent Volume

  1. In the Klutch.sh app dashboard, navigate to “Persistent Storage” or “Volumes”
  2. Click “Add Volume”
  3. Set the mount path: /app/data (for databases) or /app/uploads (for user uploads)
  4. Set the size based on your needs (e.g., 20 GB for databases, 50 GB for uploads)
  5. Save and redeploy

Organizing Data Storage

Update your app.py to use a persistent data directory:

import streamlit as st
import os
from pathlib import Path

# Set up data directory
DATA_DIR = os.getenv('DATA_DIR', '/app/data')
Path(DATA_DIR).mkdir(parents=True, exist_ok=True)

# Use persistent storage for databases
DB_PATH = os.path.join(DATA_DIR, 'app.db')

st.title("Data Persistence")

# Demonstrate data storage
if st.button("Save Data"):
    with open(os.path.join(DATA_DIR, 'sample.txt'), 'w') as f:
        f.write('Sample data persisted to volume')
    st.success(f"Data saved to {DATA_DIR}")

# List stored files
st.subheader("Stored Files")
if os.path.exists(DATA_DIR):
    files = os.listdir(DATA_DIR)
    st.write(files if files else "No files stored yet")

Custom Domains

To serve your Streamlit application from a custom domain:

  1. In the Klutch.sh app dashboard, navigate to “Custom Domains”
  2. Click “Add Custom Domain”
  3. Enter your domain (e.g., dashboard.example.com)
  4. Follow the DNS configuration instructions provided
  5. Update any CORS or allowed origins settings

Example DNS configuration:

dashboard.example.com CNAME example-app.klutch.sh

Monitoring and Logging

Application Logging

Configure logging in your Streamlit app:

import streamlit as st
import logging
import os
from pathlib import Path

# Create logs directory
log_dir = os.getenv('LOG_DIR', '/app/logs')
Path(log_dir).mkdir(parents=True, exist_ok=True)

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler(os.path.join(log_dir, 'streamlit.log')),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

st.title("Application with Logging")

def process_data(data):
    """Process data with logging."""
    logger.info(f"Processing data of length: {len(data)}")
    result = data.upper()
    logger.info("Data processing completed")
    return result

text = st.text_input("Enter text")
if st.button("Process"):
    logger.info(f"User input received: {text}")
    result = process_data(text)
    st.write(f"Result: {result}")

Performance Monitoring

Monitor your app through the Klutch.sh dashboard:

  • CPU and memory usage
  • Request count and response times
  • Application errors and exceptions
  • Uptime and availability
  • Traffic patterns

Security Best Practices

  1. Secrets Management: Use Streamlit secrets for sensitive data
  2. Environment Variables: Store secrets in environment variables
  3. HTTPS Only: Always use HTTPS in production
  4. Input Validation: Validate all user inputs
  5. File Upload Security: Validate file types and sizes
  6. Database Security: Use parameterized queries
  7. Authentication: Implement proper user authentication
  8. CORS Configuration: Restrict origins appropriately
  9. Rate Limiting: Implement request rate limiting
  10. Dependency Updates: Keep packages updated for security patches

Example security implementation:

import streamlit as st
import os

# Secrets management
def get_secret(key):
    """Retrieve a secret from Streamlit secrets, falling back to the environment."""
    try:
        return st.secrets[key]
    except (KeyError, FileNotFoundError):
        # Fall back to environment variable (also covers a missing secrets.toml)
        return os.getenv(key)

# File upload validation
MAX_UPLOAD_SIZE = 100 * 1024 * 1024  # 100 MB
ALLOWED_EXTENSIONS = {'.csv', '.txt', '.xlsx', '.json'}

def validate_file(uploaded_file):
    """Validate an uploaded file's presence, size, and extension."""
    if uploaded_file is None:
        return False, "No file selected"
    if uploaded_file.size > MAX_UPLOAD_SIZE:
        return False, "File size exceeds limit"
    ext = os.path.splitext(uploaded_file.name)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False, "File type not allowed"
    return True, "File valid"

st.title("Secure Streamlit App")

# Example: retrieve a database connection string from secrets
database_url = get_secret("DATABASE_URL")

# File upload with validation
uploaded_file = st.file_uploader("Upload file")
if uploaded_file:
    is_valid, message = validate_file(uploaded_file)
    if is_valid:
        st.success(message)
    else:
        st.error(message)

Troubleshooting

Issue 1: App Takes a Long Time to Load

Problem: Application loads slowly on first access.

Solution:

  • Use Streamlit caching decorators (@st.cache_data, @st.cache_resource)
  • Optimize data loading and processing
  • Move expensive operations to startup
  • Monitor and profile slow functions
  • Consider async operations for long-running tasks

Issue 2: Memory Issues

Problem: Application crashes with out-of-memory errors.

Solution:

  • Reduce data size or use pagination
  • Clear cache when not needed
  • Use generators for large datasets
  • Monitor memory usage in dashboard
  • Scale to instances with more memory
  • Implement data streaming
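Pagination can be as simple as slicing before rendering. The helper below is a plain-pandas sketch; in an app the `page` value would come from a widget such as `st.number_input`, and only the returned slice would be passed to `st.dataframe`:

```python
import pandas as pd
import numpy as np

def paginate(df: pd.DataFrame, page: int, page_size: int = 50) -> pd.DataFrame:
    """Return a single page of rows so only page_size rows are rendered at once."""
    start = page * page_size
    return df.iloc[start:start + page_size]

df = pd.DataFrame({"value": np.arange(1000)})
first_page = paginate(df, page=0)    # rows 0-49
last_page = paginate(df, page=19)    # rows 950-999
```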

Issue 3: File Upload Failures

Problem: File uploads fail or timeout.

Solution:

  • Check file size limits in configuration
  • Verify disk space on persistent storage
  • Increase timeout settings
  • Optimize file processing logic
  • Monitor upload performance
  • Test with various file sizes
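If uploads fail because of the size cap, raise `maxUploadSize` (in megabytes) in `.streamlit/config.toml`; 400 MB here is just an example value:

```toml
[server]
maxUploadSize = 400
```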

Issue 4: Session State Issues

Problem: Session state not persisting correctly.

Solution:

  • Ensure session state is initialized
  • Use proper data structures for state
  • Avoid mutable objects in state
  • Test locally before deployment
  • Monitor session in dashboard
  • Check for connection issues

Issue 5: Dashboard Display Issues

Problem: Charts or components not displaying correctly.

Solution:

  • Clear browser cache
  • Check data format and validity
  • Verify visualization library versions
  • Test with different Streamlit versions
  • Check console for JavaScript errors
  • Use responsive design patterns

Best Practices for Production Deployment

  1. Enable Logging: Track application behavior

    logging.basicConfig(level=logging.INFO)
  2. Use Caching: Improve performance with caching

    @st.cache_data
    def expensive_function(): pass
  3. Validate Inputs: Check all user inputs

    if not user_input or len(user_input) > 1000:
        st.error("Invalid input")
  4. Persistent Storage: Store data in volumes

    DATA_DIR = os.getenv('DATA_DIR', '/app/data')
  5. Error Handling: Graceful error recovery

    try:
        result = process(data)
    except Exception as e:
        st.error(f"Error: {str(e)}")
  6. Configuration: Use environment variables

    PORT = int(os.getenv('STREAMLIT_SERVER_PORT', '8501'))
  7. Multi-Page Apps: Organize large applications

    pages/
    ├── 1_page1.py
    └── 2_page2.py
  8. Performance: Monitor and optimize

    • Use @st.cache_data for data loading
    • Use @st.cache_resource for resource-heavy objects
    • Profile slow functions
    • Monitor memory usage
  9. Security: Implement security measures

    • Validate user inputs
    • Use HTTPS in production
    • Manage secrets properly
    • Restrict file uploads
    • Enable CSRF protection
  10. Testing: Test thoroughly before deployment

    Terminal window
    streamlit run app.py --logger.level=debug

Conclusion

Deploying Streamlit applications to Klutch.sh provides a fast, scalable platform for sharing data applications, dashboards, and machine learning tools. Streamlit’s simple API combined with Klutch.sh’s infrastructure makes it easy to transform Python scripts into interactive web applications.

Key takeaways:

  • Use Nixpacks for quick deployments with automatic Python detection
  • Use Docker for complete control over dependencies
  • Implement caching strategies for optimal performance
  • Use session state for stateful, interactive applications
  • Organize large apps with multi-page structure
  • Leverage persistent storage for databases and uploads
  • Configure logging and monitoring for production
  • Implement security measures and input validation
  • Monitor application performance through dashboard
  • Keep dependencies updated for security and performance

For additional help, refer to the Streamlit documentation or Klutch.sh support resources.