Deploying a Streamlit App
What is Streamlit?
Streamlit is an open-source Python library that turns data scripts into shareable web apps with minimal code. Built specifically for data scientists and machine learning engineers, Streamlit eliminates the need for front-end expertise by providing a simple, intuitive API for creating interactive dashboards, data visualizations, and machine learning applications.
Key features include:
- Simple, Pythonic API for building web interfaces
- Automatic rerun on code changes for rapid development
- Support for interactive widgets (sliders, buttons, text inputs, file uploads)
- Built-in charting and visualization libraries (Plotly, Altair, Matplotlib)
- Session state management for stateful applications
- Caching mechanisms for performance optimization
- Multi-page app support for larger applications
- File upload and download capabilities
- Real-time data streaming
- Database integration (SQLite, PostgreSQL, MongoDB)
- Machine learning model integration
- Custom CSS and HTML support
- Theme customization
- Secrets management for sensitive data
- Analytics and usage tracking
- Share and deployment capabilities
- Docker containerization support
Streamlit is ideal for creating data dashboards, building machine learning demos, prototyping data applications, visualizing analysis results, creating interactive reports, monitoring real-time data, deploying ML models, and building data processing tools.
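To give a feel for how little code a basic app needs, here is a minimal, self-contained example (a hypothetical `hello.py`, separate from the dashboard app built later in this guide):

```python
# hello.py (run with: streamlit run hello.py)
import streamlit as st

st.title("Hello, Streamlit")
name = st.text_input("What's your name?")
if name:
    st.write(f"Nice to meet you, {name}!")
```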
Prerequisites
Before deploying a Streamlit application to Klutch.sh, ensure you have the following (a quick verification snippet follows the list):
- Python 3.9+ installed on your local machine
- pip or conda for dependency management
- Git and a GitHub account
- A Klutch.sh account with dashboard access
- Basic understanding of Python programming
- Optional: Data files or databases for your application
- Optional: Machine learning models
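To quickly confirm the core tooling is available locally, these commands print the installed versions (output will vary by machine):

```bash
python3 --version   # should report 3.9 or newer
pip --version
git --version
```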
Getting Started with Streamlit
Step 1: Create Your Project Directory and Virtual Environment
```bash
mkdir my-streamlit-app
cd my-streamlit-app
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
Step 2: Install Streamlit and Dependencies
```bash
pip install streamlit pandas numpy plotly altair requests
```
Key packages:
- `streamlit`: The web framework for data apps
- `pandas`: Data manipulation and analysis
- `numpy`: Numerical computing
- `plotly`: Interactive visualization library
- `altair`: Statistical visualization
- `requests`: HTTP library for API calls
Step 3: Create Your Streamlit Application
Create `app.py`:

```python
import streamlit as st
import pandas as pd
import numpy as np
import plotly.express as px
from datetime import datetime, timedelta
import os

# Page configuration
st.set_page_config(
    page_title="Data Dashboard",
    page_icon="📊",
    layout="wide",
    initial_sidebar_state="expanded"
)

# Custom theme
st.markdown("""
<style>
    .main { padding: 2rem; }
</style>
""", unsafe_allow_html=True)

# Initialize session state
if 'count' not in st.session_state:
    st.session_state.count = 0

if 'data' not in st.session_state:
    st.session_state.data = None

# Sidebar navigation
st.sidebar.title("Navigation")
app_mode = st.sidebar.radio(
    "Select Page",
    ["📊 Dashboard", "📈 Analytics", "⚙️ Settings"]
)

# Main title
st.title("Streamlit App on Klutch.sh")
st.markdown("---")

# Dashboard Page
if app_mode == "📊 Dashboard":
    st.header("Dashboard Overview")

    # Metrics
    col1, col2, col3, col4 = st.columns(4)

    with col1:
        st.metric(label="Total Users", value="1,234", delta="+12")
    with col2:
        st.metric(label="Revenue", value="$45,231", delta="+$2,312")
    with col3:
        st.metric(label="Growth Rate", value="23.5%", delta="+1.2%")
    with col4:
        st.metric(label="Active Sessions", value="567", delta="-3")

    st.markdown("---")

    # Generate sample data
    dates = pd.date_range(start='2024-01-01', periods=100, freq='D')
    data = pd.DataFrame({
        'Date': dates,
        'Sales': np.random.randint(100, 1000, 100),
        'Visits': np.random.randint(1000, 5000, 100),
        'Conversion': np.random.uniform(0.01, 0.1, 100)
    })

    # Charts
    col1, col2 = st.columns(2)

    with col1:
        st.subheader("Sales Trend")
        fig = px.line(data, x='Date', y='Sales', title='Sales Over Time')
        st.plotly_chart(fig, use_container_width=True)

    with col2:
        st.subheader("Website Visits")
        fig = px.bar(data, x='Date', y='Visits', title='Daily Visits')
        st.plotly_chart(fig, use_container_width=True)

    # Data table
    st.subheader("Recent Data")
    st.dataframe(data.head(10), use_container_width=True)

# Analytics Page
elif app_mode == "📈 Analytics":
    st.header("Analytics & Insights")

    # Filters
    col1, col2, col3 = st.columns(3)

    with col1:
        metric = st.selectbox("Select Metric", ["Sales", "Visits", "Conversion"])
    with col2:
        period = st.selectbox("Time Period", ["Daily", "Weekly", "Monthly"])
    with col3:
        date_range = st.date_input(
            "Select Date Range",
            value=(datetime.now() - timedelta(days=30), datetime.now())
        )

    # Summary statistics
    st.subheader("Summary Statistics")
    summary_data = {
        'Metric': ['Mean', 'Median', 'Std Dev', 'Min', 'Max'],
        'Value': [
            f"${np.random.randint(100, 1000)}",
            f"${np.random.randint(100, 1000)}",
            f"${np.random.randint(10, 100)}",
            f"${np.random.randint(10, 100)}",
            f"${np.random.randint(1000, 5000)}"
        ]
    }
    st.dataframe(pd.DataFrame(summary_data), use_container_width=True)

    # Distribution chart
    st.subheader("Distribution Analysis")
    values = np.random.normal(500, 100, 1000)
    fig = px.histogram(x=values, nbins=30, title='Value Distribution')
    st.plotly_chart(fig, use_container_width=True)

# Settings Page
elif app_mode == "⚙️ Settings":
    st.header("Application Settings")

    col1, col2 = st.columns(2)

    with col1:
        st.subheader("Display Settings")
        theme = st.selectbox("Theme", ["Light", "Dark", "Auto"])
        language = st.selectbox("Language", ["English", "Spanish", "French"])

    with col2:
        st.subheader("Data Settings")
        csv_file = st.file_uploader("Upload CSV File", type=['csv'])
        if csv_file is not None:
            df = pd.read_csv(csv_file)
            st.session_state.data = df
            st.success("File uploaded successfully!")
            st.dataframe(df.head())

    st.markdown("---")

    # Counter example
    st.subheader("Interactive Counter")
    col1, col2, col3 = st.columns(3)

    with col1:
        if st.button("Increment"):
            st.session_state.count += 1
    with col2:
        st.metric("Count", st.session_state.count)
    with col3:
        if st.button("Reset"):
            st.session_state.count = 0

# Footer
st.markdown("---")
st.markdown("""
<div style="text-align: center; color: gray; font-size: small;">
    <p>Deployed on Klutch.sh with Streamlit</p>
    <p>Last updated: {}</p>
</div>
""".format(datetime.now().strftime("%Y-%m-%d %H:%M:%S")), unsafe_allow_html=True)
```
Step 4: Create a Requirements File
```bash
pip freeze > requirements.txt
```
Your `requirements.txt` should contain:
```
streamlit==1.28.1
pandas==2.1.3
numpy==1.24.3
plotly==5.18.0
altair==5.1.2
requests==2.31.0
```
Step 5: Create Streamlit Configuration
Create `.streamlit/config.toml` for production settings:
```toml
[theme]
primaryColor = "#FF6B6B"
backgroundColor = "#FFFFFF"
secondaryBackgroundColor = "#F0F2F6"
textColor = "#262730"
font = "sans serif"

[client]
showErrorDetails = false
toolbarMode = "minimal"

[server]
headless = true
port = 8501
runOnSave = true
enableXsrfProtection = true
maxUploadSize = 200

[logger]
level = "info"
```
Step 6: Test Locally
Create a `.env` file for local development:
```
STREAMLIT_SERVER_PORT=8501
STREAMLIT_SERVER_HEADLESS=true
PYTHONUNBUFFERED=1
```
Run the application:
```bash
streamlit run app.py
```
Access the interface at http://localhost:8501 in your browser. You should see the dashboard with the sidebar navigation (Dashboard, Analytics, Settings).
Deploying Without a Dockerfile
Klutch.sh uses Nixpacks to automatically detect and build your Streamlit application from your source code.
Prepare Your Repository
- Initialize a Git repository and commit your code:
```bash
git init
git add .
git commit -m "Initial Streamlit app commit"
```
- Create a `.gitignore` file:
```
venv/
__pycache__/
*.pyc
*.pyo
*.egg-info/
.env
.DS_Store
.streamlit/secrets.toml
.streamlit/cache/
.pytest_cache/
*.db
*.sqlite3
uploads/
logs/
```
- Push to GitHub:
```bash
git remote add origin https://github.com/YOUR_USERNAME/my-streamlit-app.git
git branch -M main
git push -u origin main
```
Deploy to Klutch.sh
1. Log in to Klutch.sh dashboard.
2. Click “Create a new project” and provide a project name.
3. Inside your project, click “Create a new app”.
4. Repository Configuration:
   - Select your GitHub repository containing the Streamlit app
   - Select the branch to deploy (typically `main`)
5. Traffic Settings:
   - Select “HTTP” as the traffic type
6. Port Configuration:
   - Set the internal port to 8501 (the default Streamlit port)
7. Environment Variables: Set the following environment variables in the Klutch.sh dashboard:
   - `STREAMLIT_SERVER_PORT`: Set to `8501`
   - `STREAMLIT_SERVER_HEADLESS`: Set to `true`
   - `STREAMLIT_SERVER_ENABLEXSRFPROTECTION`: Set to `true`
   - `PYTHONUNBUFFERED`: Set to `1` to ensure Python output is logged immediately
   - `STREAMLIT_LOGGER_LEVEL`: Set to `info` for logging
8. Build and Start Commands (Optional): If you need to customize the build or start command, set these environment variables:
   - `BUILD_COMMAND`: Default runs `pip install -r requirements.txt`
   - `START_COMMAND`: Default is `streamlit run app.py --server.port=$PORT --server.address=0.0.0.0`
9. Region, Compute, and Instances:
   - Choose your desired region for optimal latency
   - Select compute resources (Starter for prototypes, Pro/Premium for production)
   - Set the number of instances (start with 1-2, scale as needed based on traffic)
10. Click “Create” to deploy. Klutch.sh will automatically build your application using Nixpacks and deploy it.
11. Once deployment completes, your app will be accessible at `example-app.klutch.sh`.
Verifying the Deployment
Navigate to your deployed app:
```
https://example-app.klutch.sh
```
You should see the Streamlit dashboard with all three pages (Dashboard, Analytics, Settings) and be able to interact with all the interactive elements.
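As an optional command-line check, you can also query Streamlit's built-in health endpoint (the hostname below assumes the example URL above):

```bash
curl -I https://example-app.klutch.sh
curl https://example-app.klutch.sh/_stcore/health   # a healthy app responds with HTTP 200
```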
Deploying With a Dockerfile
If you prefer more control over your build environment, you can provide a custom Dockerfile. Klutch.sh automatically detects and uses a Dockerfile in your repository’s root directory.
Create a Multi-Stage Dockerfile
Create a `Dockerfile` in your project root:
```dockerfile
# Build stage
FROM python:3.11-slim AS builder

WORKDIR /app

# Install build-time system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    git \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies into the user site
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Runtime stage
FROM python:3.11-slim

WORKDIR /app

# Install runtime dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Create non-root user for security
RUN useradd -m -u 1000 streamlit_user

# Copy Python dependencies from the builder into the non-root user's home
# (leaving them under /root would make them unreadable after dropping privileges)
COPY --from=builder --chown=streamlit_user:streamlit_user /root/.local /home/streamlit_user/.local

# Make the user-installed packages and console scripts available
ENV PATH=/home/streamlit_user/.local/bin:$PATH
ENV PYTHONUNBUFFERED=1
ENV STREAMLIT_SERVER_HEADLESS=true
ENV STREAMLIT_SERVER_PORT=8501

# Copy application code and create Streamlit directories
COPY --chown=streamlit_user:streamlit_user . .
RUN mkdir -p /app/.streamlit /app/uploads /app/logs && \
    chown -R streamlit_user:streamlit_user /app

USER streamlit_user

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
    CMD curl -f http://localhost:8501/_stcore/health || exit 1

# Expose port
EXPOSE 8501

# Start the application
CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
```
Deploy the Dockerfile Version
1. Push your code with the Dockerfile to GitHub:

   ```bash
   git add Dockerfile
   git commit -m "Add Dockerfile for custom build"
   git push
   ```

2. Log in to the Klutch.sh dashboard.
3. Create a new app in your project and configure it as before:
   - Select your GitHub repository and branch
   - Set traffic type to “HTTP”
   - Set the internal port to 8501
   - Add environment variables (same as the Nixpacks deployment)
   - Click “Create”
4. Klutch.sh will automatically detect your Dockerfile and use it for building and deployment.
Building Multi-Page Applications
Project Structure
Organize your multi-page Streamlit app:
```
my-streamlit-app/
├── app.py
├── requirements.txt
├── .streamlit/
│   └── config.toml
└── pages/
    ├── 1_dashboard.py
    ├── 2_analytics.py
    ├── 3_reports.py
    └── 4_settings.py
```
Main App Entry Point
Update `app.py` for multi-page support:
```python
import streamlit as st

st.set_page_config(
    page_title="Multi-Page App",
    page_icon="🏠",
    layout="wide"
)

st.title("Welcome to Multi-Page Streamlit App")

st.markdown("""
This application demonstrates a multi-page Streamlit structure.

Select a page from the sidebar to navigate between different sections:
- **Dashboard**: Overview and key metrics
- **Analytics**: Detailed analysis and insights
- **Reports**: Generated reports and exports
- **Settings**: Application configuration

Pages are automatically detected from the `pages/` directory.
""")

# Sidebar info
st.sidebar.markdown("---")
st.sidebar.info("This is a multi-page Streamlit application deployed on Klutch.sh")
```
Example Dashboard Page
Create `pages/1_dashboard.py`:
```python
import streamlit as st
import pandas as pd
import numpy as np
import plotly.express as px

st.set_page_config(page_title="Dashboard", page_icon="📊")

st.title("📊 Dashboard")

# Sample data
data = pd.DataFrame({
    'Date': pd.date_range('2024-01-01', periods=100),
    'Revenue': np.random.randint(1000, 5000, 100),
    'Users': np.random.randint(100, 1000, 100)
})

# Key metrics
col1, col2, col3 = st.columns(3)
with col1:
    st.metric("Total Revenue", "$45,231", "+$2,312")
with col2:
    st.metric("Total Users", "1,234", "+12")
with col3:
    st.metric("Avg Session", "4m 23s", "-1m 2s")

# Charts
st.subheader("Revenue Trend")
fig = px.line(data, x='Date', y='Revenue')
st.plotly_chart(fig, use_container_width=True)

st.subheader("User Growth")
fig = px.bar(data, x='Date', y='Users')
st.plotly_chart(fig, use_container_width=True)
```
Session State and Caching
Using Session State for Stateful Apps
```python
import streamlit as st

# Initialize session state
if 'page' not in st.session_state:
    st.session_state.page = 'home'

if 'user_data' not in st.session_state:
    st.session_state.user_data = {}

# Use session state
st.title("Stateful Application")

# Navigation
page = st.radio("Select Page", ["Home", "Profile", "Settings"])
st.session_state.page = page

if page == "Home":
    st.write("Welcome home!")
elif page == "Profile":
    name = st.text_input("Name", value=st.session_state.user_data.get('name', ''))
    email = st.text_input("Email", value=st.session_state.user_data.get('email', ''))

    if st.button("Save Profile"):
        st.session_state.user_data['name'] = name
        st.session_state.user_data['email'] = email
        st.success("Profile saved!")
elif page == "Settings":
    st.write("Settings page")
    if st.button("Clear Data"):
        st.session_state.user_data = {}
        st.success("Data cleared!")

st.write("Current user data:", st.session_state.user_data)
```
Using Caching for Performance
```python
import streamlit as st
import pandas as pd
import time

# Cache data loading
@st.cache_data
def load_data():
    """Load data with caching."""
    time.sleep(2)  # Simulate slow data load
    return pd.DataFrame({
        'x': range(100),
        'y': range(100, 200)
    })

# Cache computations
@st.cache_resource
def create_model():
    """Create and cache ML model."""
    from sklearn.ensemble import RandomForestClassifier
    return RandomForestClassifier()

st.title("Cached Operations")

# Load cached data
data = load_data()
st.write("Data loaded with cache:", data.head())

# Use cached model
model = create_model()
st.write("Model created and cached")

# Clear cache
if st.button("Clear Cache"):
    st.cache_data.clear()
    st.cache_resource.clear()
    st.success("Cache cleared!")
```
Data Visualization and Interactivity
Interactive Dashboards
```python
import streamlit as st
import pandas as pd
import plotly.express as px

st.title("Interactive Dashboard")

# Sidebar filters
st.sidebar.header("Filters")
category = st.sidebar.selectbox("Select Category", ["All", "A", "B", "C"])
date_range = st.sidebar.slider("Date Range", 1, 100, (1, 100))

# Generate filtered data
data = pd.DataFrame({
    'Date': range(1, 101),
    'Sales': range(100, 200),
    'Category': ['A', 'B', 'C'] * 33 + ['A']
})

if category != "All":
    data = data[data['Category'] == category]

data = data[(data['Date'] >= date_range[0]) & (data['Date'] <= date_range[1])]

# Display metrics
col1, col2, col3 = st.columns(3)
with col1:
    st.metric("Total Sales", f"${data['Sales'].sum():,}")
with col2:
    st.metric("Avg Sale", f"${data['Sales'].mean():.2f}")
with col3:
    st.metric("Records", len(data))

# Visualizations
st.subheader("Sales Trend")
fig = px.line(data, x='Date', y='Sales', title='Sales Over Time')
st.plotly_chart(fig, use_container_width=True)

st.subheader("Data Table")
st.dataframe(data, use_container_width=True)
```
Database Integration
SQLite Database Operations
```python
import streamlit as st
import sqlite3
import pandas as pd

# Initialize database
def init_db():
    conn = sqlite3.connect('/app/data.db')
    c = conn.cursor()
    c.execute('''CREATE TABLE IF NOT EXISTS users
                 (id INTEGER PRIMARY KEY, name TEXT, email TEXT)''')
    conn.commit()
    conn.close()

init_db()

st.title("Database Integration")

# Sidebar menu
menu = st.sidebar.radio("Select Action", ["View Users", "Add User", "Delete User"])

if menu == "View Users":
    conn = sqlite3.connect('/app/data.db')
    df = pd.read_sql_query("SELECT * FROM users", conn)
    conn.close()
    st.dataframe(df)

elif menu == "Add User":
    with st.form("add_user"):
        name = st.text_input("Name")
        email = st.text_input("Email")
        submit = st.form_submit_button("Add User")

    if submit:
        conn = sqlite3.connect('/app/data.db')
        c = conn.cursor()
        c.execute("INSERT INTO users (name, email) VALUES (?, ?)", (name, email))
        conn.commit()
        conn.close()
        st.success("User added!")

elif menu == "Delete User":
    conn = sqlite3.connect('/app/data.db')
    df = pd.read_sql_query("SELECT * FROM users", conn)
    conn.close()

    user_id = st.selectbox("Select User to Delete", df['id'])

    if st.button("Delete"):
        conn = sqlite3.connect('/app/data.db')
        c = conn.cursor()
        c.execute("DELETE FROM users WHERE id = ?", (user_id,))
        conn.commit()
        conn.close()
        st.success("User deleted!")
```
File Upload and Download
Handling File Operations
```python
import streamlit as st
import pandas as pd
import io

st.title("File Upload & Download")

# File upload
uploaded_file = st.file_uploader("Choose a CSV file", type=['csv'])

if uploaded_file is not None:
    df = pd.read_csv(uploaded_file)

    st.subheader("File Preview")
    st.dataframe(df)

    st.subheader("File Statistics")
    st.write(f"Shape: {df.shape}")
    st.write(f"Columns: {list(df.columns)}")

    # Processing options
    if st.checkbox("Show Summary Statistics"):
        st.write(df.describe())

    # Download modified file
    st.subheader("Download Options")

    # CSV download
    csv = df.to_csv(index=False)
    st.download_button(
        label="Download as CSV",
        data=csv,
        file_name="output.csv",
        mime="text/csv"
    )

    # Excel download (requires the openpyxl package)
    if st.button("Generate Excel"):
        with io.BytesIO() as buffer:
            with pd.ExcelWriter(buffer, engine='openpyxl') as writer:
                df.to_excel(writer, index=False)
            st.download_button(
                label="Download as Excel",
                data=buffer.getvalue(),
                file_name="output.xlsx",
                mime="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
            )
```
Environment Variables and Configuration
Essential Environment Variables
Configure these variables in the Klutch.sh dashboard:
| Variable | Description | Example |
|---|---|---|
| `STREAMLIT_SERVER_PORT` | Application port | `8501` |
| `STREAMLIT_SERVER_HEADLESS` | Run in headless mode | `true` |
| `STREAMLIT_SERVER_ENABLEXSRFPROTECTION` | Enable CSRF protection | `true` |
| `PYTHONUNBUFFERED` | Unbuffered Python output | `1` |
| `STREAMLIT_LOGGER_LEVEL` | Logging level | `info` |
| `STREAMLIT_CLIENT_SHOWWARNINGONPAGECHANGE` | Show warnings | `false` |
Customization Environment Variables (Nixpacks)
For Nixpacks deployments:
| Variable | Purpose | Example |
|---|---|---|
| `BUILD_COMMAND` | Build command | `pip install -r requirements.txt` |
| `START_COMMAND` | Start command | `streamlit run app.py --server.port=$PORT --server.address=0.0.0.0` |
Persistent Storage for Data and Uploads
Adding Persistent Volume
- In the Klutch.sh app dashboard, navigate to “Persistent Storage” or “Volumes”
- Click “Add Volume”
- Set the mount path: `/app/data` (for databases) or `/app/uploads` (for user uploads)
- Set the size based on your needs (e.g., 20 GB for databases, 50 GB for uploads)
- Save and redeploy
Organizing Data Storage
Update your `app.py` to use a persistent data directory:
```python
import streamlit as st
import os
from pathlib import Path

# Set up data directory
DATA_DIR = os.getenv('DATA_DIR', '/app/data')
Path(DATA_DIR).mkdir(parents=True, exist_ok=True)

# Use persistent storage for databases
DB_PATH = os.path.join(DATA_DIR, 'app.db')

st.title("Data Persistence")

# Demonstrate data storage
if st.button("Save Data"):
    with open(os.path.join(DATA_DIR, 'sample.txt'), 'w') as f:
        f.write('Sample data persisted to volume')
    st.success(f"Data saved to {DATA_DIR}")

# List stored files
st.subheader("Stored Files")
if os.path.exists(DATA_DIR):
    files = os.listdir(DATA_DIR)
    st.write(files if files else "No files stored yet")
```
Custom Domains
To serve your Streamlit application from a custom domain:
- In the Klutch.sh app dashboard, navigate to “Custom Domains”
- Click “Add Custom Domain”
- Enter your domain (e.g., `dashboard.example.com`)
- Follow the DNS configuration instructions provided
- Update any CORS or allowed origins settings
Example DNS configuration:
```
dashboard.example.com CNAME example-app.klutch.sh
```
Monitoring and Logging
Application Logging
Configure logging in your Streamlit app:
```python
import streamlit as st
import logging
import os
from pathlib import Path

# Create logs directory
log_dir = os.getenv('LOG_DIR', '/app/logs')
Path(log_dir).mkdir(parents=True, exist_ok=True)

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler(os.path.join(log_dir, 'streamlit.log')),
        logging.StreamHandler()
    ]
)

logger = logging.getLogger(__name__)

st.title("Application with Logging")

def process_data(data):
    """Process data with logging."""
    logger.info(f"Processing data of length: {len(data)}")
    result = data.upper()
    logger.info("Data processing completed")
    return result

text = st.text_input("Enter text")
if st.button("Process"):
    logger.info(f"User input received: {text}")
    result = process_data(text)
    st.write(f"Result: {result}")
```
Performance Monitoring
Monitor your app through the Klutch.sh dashboard:
- CPU and memory usage
- Request count and response times
- Application errors and exceptions
- Uptime and availability
- Traffic patterns
Security Best Practices
- Secrets Management: Use Streamlit secrets for sensitive data (see the secrets.toml sketch after this list)
- Environment Variables: Store secrets in environment variables
- HTTPS Only: Always use HTTPS in production
- Input Validation: Validate all user inputs
- File Upload Security: Validate file types and sizes
- Database Security: Use parameterized queries
- Authentication: Implement proper user authentication
- CORS Configuration: Restrict origins appropriately
- Rate Limiting: Implement request rate limiting
- Dependency Updates: Keep packages updated for security patches
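As a sketch of the secrets-management point above, local secrets live in `.streamlit/secrets.toml` (already excluded by the `.gitignore` earlier); the keys below are placeholders, and on Klutch.sh the same values would typically be provided as environment variables instead:

```toml
# .streamlit/secrets.toml: placeholder values only, never commit real credentials
DATABASE_URL = "postgresql://user:password@db-host:5432/appdb"
API_KEY = "replace-me"
```

In code these are read via `st.secrets["DATABASE_URL"]`, which is what the `get_secret()` helper in the example below tries first before falling back to environment variables.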
Example security implementation:
```python
import streamlit as st
import os

# Secrets management
def get_secret(key):
    """Retrieve secret from Streamlit secrets."""
    try:
        return st.secrets[key]
    except KeyError:
        # Fall back to environment variable
        return os.getenv(key)

# File upload validation
MAX_UPLOAD_SIZE = 100 * 1024 * 1024  # 100 MB
ALLOWED_EXTENSIONS = {'.csv', '.txt', '.xlsx', '.json'}

def validate_file(uploaded_file):
    """Validate uploaded file."""
    if uploaded_file is None:
        return False, "No file selected"

    if uploaded_file.size > MAX_UPLOAD_SIZE:
        return False, "File size exceeds limit"

    ext = os.path.splitext(uploaded_file.name)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False, "File type not allowed"

    return True, "File valid"

st.title("Secure Streamlit App")

# Database API key example
api_key = get_secret("DATABASE_URL")

# File upload with validation
uploaded_file = st.file_uploader("Upload file")
if uploaded_file:
    is_valid, message = validate_file(uploaded_file)
    if is_valid:
        st.success(message)
    else:
        st.error(message)
```
Troubleshooting
Issue 1: App Takes Long Time to Load
Problem: Application loads slowly on first access.
Solution:
- Use Streamlit caching decorators (`@st.cache_data`, `@st.cache_resource`)
- Optimize data loading and processing
- Move expensive operations to startup
- Monitor and profile slow functions
- Consider async operations for long-running tasks
Issue 2: Memory Issues
Problem: Application crashes with out-of-memory errors.
Solution:
- Reduce data size or use pagination (see the pagination sketch after this list)
- Clear cache when not needed
- Use generators for large datasets
- Monitor memory usage in dashboard
- Scale to instances with more memory
- Implement data streaming
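One way to apply the pagination suggestion above is to render the data one slice at a time; this is a minimal sketch with a synthetic DataFrame standing in for your real data:

```python
import numpy as np
import pandas as pd
import streamlit as st

# Synthetic stand-in for a large dataset
df = pd.DataFrame(np.random.randn(10_000, 5), columns=list("ABCDE"))

page_size = 100
total_pages = (len(df) - 1) // page_size + 1
page = st.number_input("Page", min_value=1, max_value=total_pages, value=1)

# Render only the current slice instead of the full table
start = (page - 1) * page_size
st.dataframe(df.iloc[start:start + page_size], use_container_width=True)
```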
Issue 3: File Upload Failures
Problem: File uploads fail or timeout.
Solution:
- Check file size limits in configuration (see the config snippet after this list)
- Verify disk space on persistent storage
- Increase timeout settings
- Optimize file processing logic
- Monitor upload performance
- Test with various file sizes
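For example, the upload limit can be raised (or lowered) via `maxUploadSize` in `.streamlit/config.toml`; the value is in megabytes, and 500 below is only an illustration:

```toml
[server]
maxUploadSize = 500
```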
Issue 4: Session State Issues
Problem: Session state not persisting correctly.
Solution:
- Ensure session state is initialized
- Use proper data structures for state
- Avoid mutable objects in state
- Test locally before deployment
- Monitor session in dashboard
- Check for connection issues
Issue 5: Dashboard Display Issues
Problem: Charts or components not displaying correctly.
Solution:
- Clear browser cache
- Check data format and validity
- Verify visualization library versions
- Test with different Streamlit versions
- Check console for JavaScript errors
- Use responsive design patterns
Best Practices for Production Deployment
- Enable Logging: Track application behavior

  ```python
  logging.basicConfig(level=logging.INFO)
  ```

- Use Caching: Improve performance with caching

  ```python
  @st.cache_data
  def expensive_function():
      pass
  ```

- Validate Inputs: Check all user inputs

  ```python
  if not user_input or len(user_input) > 1000:
      st.error("Invalid input")
  ```

- Persistent Storage: Store data in volumes

  ```python
  DATA_DIR = os.getenv('DATA_DIR', '/app/data')
  ```

- Error Handling: Graceful error recovery

  ```python
  try:
      result = process(data)
  except Exception as e:
      st.error(f"Error: {str(e)}")
  ```

- Configuration: Use environment variables

  ```python
  PORT = os.getenv('STREAMLIT_SERVER_PORT', 8501)
  ```

- Multi-Page Apps: Organize large applications

  ```
  pages/
  ├── 1_page1.py
  └── 2_page2.py
  ```

- Performance: Monitor and optimize
  - Use `@st.cache_data` for data loading
  - Use `@st.cache_resource` for resource-heavy objects
  - Profile slow functions
  - Monitor memory usage

- Security: Implement security measures
  - Validate user inputs
  - Use HTTPS in production
  - Manage secrets properly
  - Restrict file uploads
  - Enable CSRF protection

- Testing: Test thoroughly before deployment

  ```bash
  streamlit run app.py --logger.level=debug
  ```
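Beyond running with debug logging, you can add an automated smoke test with Streamlit's built-in `streamlit.testing.v1.AppTest` utility (available in recent Streamlit releases); the sketch below assumes the `app.py` from this guide and a pytest-style runner:

```python
# test_app.py (run with: pytest test_app.py)
from streamlit.testing.v1 import AppTest

def test_app_runs_without_exceptions():
    at = AppTest.from_file("app.py")
    at.run()
    # The script should render without raising any exceptions
    assert not at.exception
```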
Resources
- Streamlit Official Documentation
- Streamlit Quick Start
- Streamlit Advanced Features
- Streamlit API Reference
- Plotly Python Visualization
- Pandas Data Analysis
- NumPy Numerical Computing
Conclusion
Deploying Streamlit applications to Klutch.sh provides a fast, scalable platform for sharing data applications, dashboards, and machine learning tools. Streamlit’s simple API combined with Klutch.sh’s infrastructure makes it easy to transform Python scripts into interactive web applications.
Key takeaways:
- Use Nixpacks for quick deployments with automatic Python detection
- Use Docker for complete control over dependencies
- Implement caching strategies for optimal performance
- Use session state for stateful, interactive applications
- Organize large apps with multi-page structure
- Leverage persistent storage for databases and uploads
- Configure logging and monitoring for production
- Implement security measures and input validation
- Monitor application performance through dashboard
- Keep dependencies updated for security and performance
For additional help, refer to the Streamlit documentation or Klutch.sh support resources.