Deploying Evidence
Introduction
Evidence is an open-source, code-based business intelligence tool that lets you build reports and dashboards using SQL and markdown. Unlike traditional drag-and-drop BI tools, Evidence treats your analytics as code, enabling version control, code review, and collaboration through familiar development workflows. Evidence generates a static website from markdown files containing SQL queries, charts, and components, making it perfect for data teams who want to deliver beautiful, fast, and maintainable analytics applications.
With Evidence, you write SQL statements directly in markdown files to query your data sources, then use built-in components to render charts, tables, and visualizations. The platform supports templated pages, loops, conditional rendering, and a rich component library that makes creating professional data products straightforward and enjoyable. Evidence is built on modern web technologies including SvelteKit, making it incredibly fast and responsive.
This guide walks you through deploying Evidence on Klutch.sh with Docker, from repository setup through production deployment. You’ll learn how to connect data sources, create your first report, and implement best practices for production analytics workloads.
Why Deploy Evidence on Klutch.sh
- Automatic Dockerfile Detection - Klutch.sh detects and builds your Dockerfile automatically without manual configuration
- GitHub Integration - Direct integration with GitHub for continuous deployment from your repository
- Persistent Storage - Attach volumes for DuckDB files, cache, and static assets
- Environment Variables - Securely manage database credentials and API keys for data sources
- Custom Domains - Connect your own domain with automatic HTTPS certificates
- HTTP Routing - Built-in load balancing and SSL termination for web access
- Scalable Infrastructure - Start small and scale as your analytics needs grow
- Zero Downtime Deployments - Rolling updates keep your reports accessible during deployments
- Version Control Workflow - Deploy from Git branches with full version history
Prerequisites
Before deploying Evidence on Klutch.sh, ensure you have:
- A Klutch.sh account
- A GitHub account with a repository for your Evidence project
- Basic understanding of SQL and markdown
- Access to data sources (PostgreSQL, MySQL, DuckDB, BigQuery, Snowflake, etc.)
- Node.js 18 or higher installed locally for development
- Familiarity with command-line tools
Understanding Evidence Architecture
Evidence is built on several key technologies and concepts:
Core Technologies
- SvelteKit - Modern JavaScript framework for building the frontend
- Node.js - Server runtime for the Evidence application
- DuckDB - Embedded database for local data processing (optional)
- SQL - Primary interface for querying data sources
- Markdown - Content format for reports and documentation
How Evidence Works
- Markdown Files - Write reports in markdown with embedded SQL queries
- SQL Queries - Queries run against configured data sources (PostgreSQL, BigQuery, etc.)
- Components - Built-in components render query results as charts and tables
- Static Generation - Evidence builds a static website from your markdown files
- Templating - Create dynamic pages from data using loops and conditionals
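To make this flow concrete before the full walkthrough below, here is a minimal sketch of a page (the orders table is a placeholder for one of your own tables):

```markdown
## Orders by Month

\`\`\`sql orders_by_month
SELECT DATE_TRUNC('month', order_date) as month,
       COUNT(*) as order_count
FROM orders
GROUP BY 1
ORDER BY 1
\`\`\`

<LineChart data={orders_by_month} x=month y=order_count />
```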
Data Source Support
Evidence connects to multiple data source types:
- PostgreSQL - Relational database queries
- MySQL/MariaDB - MySQL-compatible databases
- DuckDB - Embedded analytics database
- BigQuery - Google Cloud data warehouse
- Snowflake - Cloud data platform
- Redshift - AWS data warehouse
- SQLite - Lightweight embedded database
- CSV Files - Local data files
- Parquet Files - Columnar data files
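For the file-based options, a source can be as simple as a folder of files plus a small config. Here is a sketch of a CSV source, assuming Evidence's CSV connector (the mydata directory and sales.csv file are placeholders; check the Evidence docs for the exact layout your version expects):

```yaml
# sources/mydata/connection.yaml — place CSV files in the same directory
name: mydata
type: csv
```

With a layout like this, a file such as sources/mydata/sales.csv would typically be queryable as mydata.sales in your pages.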
Creating Your Evidence Project
Step 1: Initialize a New Evidence Project
Start by creating a new Evidence project locally:
```bash
# Create project directory
mkdir my-evidence-app
cd my-evidence-app

# Initialize npm project
npm init -y

# Install Evidence
npm install --save-exact @evidence-dev/evidence
```
Step 2: Initialize Evidence
Create the Evidence project structure:
```bash
# Initialize Evidence project
npx evidence init
```

This creates the following structure:
```
my-evidence-app/
├── pages/
│   └── index.md              # Your first Evidence page
├── sources/
│   └── connection.yaml       # Data source configuration
├── components/
│   └── custom/               # Custom Svelte components
├── static/
│   └── assets/               # Static assets (images, fonts)
├── evidence.config.yaml      # Evidence configuration
├── package.json
└── .gitignore
```
Step 3: Create Your First Page
Edit pages/index.md with a simple query:
```markdown
# My First Evidence Report

Welcome to your Evidence dashboard!

## Sample Data Query

\`\`\`sql demo_data
SELECT 'January' as month, 1000 as revenue, 100 as customers
UNION ALL
SELECT 'February', 1200, 120
UNION ALL
SELECT 'March', 1400, 140
\`\`\`

## Revenue Chart

<BarChart
    data={demo_data}
    x=month
    y=revenue
    title="Monthly Revenue"
/>

## Data Table

<DataTable data={demo_data} />
```
Step 4: Test Locally
Run Evidence locally to verify setup:
```bash
npm run dev
```

Visit http://localhost:3000 to see your report. You should see a bar chart and data table with the sample data.
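The commands above rely on npm scripts from Evidence's project template. Since the Dockerfiles later in this guide run npm run build and npm run start, make sure your package.json defines them. Here is a sketch; the evidence CLI commands, the build output directory, and the use of http-server as a static file server are assumptions to verify against your Evidence version:

```json
{
  "scripts": {
    "dev": "evidence dev",
    "sources": "evidence sources",
    "build": "evidence build",
    "start": "npx http-server ./build -p 3000"
  }
}
```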
Configuring Data Sources
PostgreSQL Connection
Create sources/postgres.connection.yaml:
```yaml
name: postgres
type: postgres
options:
  host: ${POSTGRES_HOST}
  port: ${POSTGRES_PORT}
  database: ${POSTGRES_DATABASE}
  user: ${POSTGRES_USER}
  password: ${POSTGRES_PASSWORD}
  ssl: true
```

For PostgreSQL deployment on Klutch.sh, see our PostgreSQL guide.
DuckDB Connection
For local analytics with DuckDB:
```yaml
name: duckdb
type: duckdb
options:
  filename: data/analytics.duckdb
```
BigQuery Connection
Create sources/bigquery.connection.yaml:
```yaml
name: bigquery
type: bigquery
options:
  project_id: ${BIGQUERY_PROJECT_ID}
  credentials: ${BIGQUERY_CREDENTIALS}
```
Snowflake Connection
Create sources/snowflake.connection.yaml:
```yaml
name: snowflake
type: snowflake
options:
  account: ${SNOWFLAKE_ACCOUNT}
  username: ${SNOWFLAKE_USERNAME}
  password: ${SNOWFLAKE_PASSWORD}
  database: ${SNOWFLAKE_DATABASE}
  warehouse: ${SNOWFLAKE_WAREHOUSE}
  schema: ${SNOWFLAKE_SCHEMA}
```
Creating the Dockerfile
Create a production-ready Dockerfile at your repository root:
```dockerfile
FROM node:20-alpine

WORKDIR /app

# Install system dependencies
RUN apk add --no-cache \
    python3 \
    make \
    g++ \
    git

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm ci --production=false

# Copy application files
COPY . .

# Build Evidence project
RUN npm run build

# Remove dev dependencies
RUN npm prune --production

# Create directory for DuckDB files
RUN mkdir -p /app/data

# Set environment
ENV NODE_ENV=production
ENV PORT=3000

# Expose port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \
    CMD node -e "require('http').get('http://localhost:3000/', (r) => {process.exit(r.statusCode === 200 ? 0 : 1)})"

# Start Evidence
CMD ["npm", "run", "start"]
```
Dockerfile with Custom Build
For advanced configurations:
```dockerfile
FROM node:20-alpine AS builder

WORKDIR /app

# Install build dependencies
RUN apk add --no-cache python3 make g++ git

# Copy package files
COPY package*.json ./
RUN npm ci

# Copy source files
COPY . .

# Build application
RUN npm run build

# Production stage
FROM node:20-alpine

WORKDIR /app

# Install runtime dependencies only
RUN apk add --no-cache dumb-init

# Copy built application
COPY --from=builder /app/.evidence ./.evidence
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/node_modules ./node_modules

# Create data directory
RUN mkdir -p /app/data && chown -R node:node /app

USER node

ENV NODE_ENV=production
ENV PORT=3000

EXPOSE 3000

HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \
    CMD node -e "require('http').get('http://localhost:3000/', (r) => {process.exit(r.statusCode === 200 ? 0 : 1)})"

ENTRYPOINT ["dumb-init", "--"]
CMD ["npm", "run", "start"]
```
Environment Configuration
Create a .env.example file to document required environment variables:
```bash
# Evidence Configuration
NODE_ENV=production
PORT=3000
EVIDENCE_HOST=0.0.0.0

# PostgreSQL Data Source
POSTGRES_HOST=postgres-app.klutch.sh
POSTGRES_PORT=8000
POSTGRES_DATABASE=analytics
POSTGRES_USER=analytics_user
POSTGRES_PASSWORD=your_secure_password

# BigQuery Data Source (Optional)
BIGQUERY_PROJECT_ID=your-project-id
BIGQUERY_CREDENTIALS={"type":"service_account","project_id":"..."}

# Snowflake Data Source (Optional)
SNOWFLAKE_ACCOUNT=your-account
SNOWFLAKE_USERNAME=your_username
SNOWFLAKE_PASSWORD=your_password
SNOWFLAKE_DATABASE=ANALYTICS
SNOWFLAKE_WAREHOUSE=COMPUTE_WH
SNOWFLAKE_SCHEMA=PUBLIC

# DuckDB Configuration
DUCKDB_PATH=/app/data/analytics.duckdb

# Evidence Settings
EVIDENCE_BUILD_DEV=false
EVIDENCE_STRICT=true
```
Deploying to Klutch.sh
1. Prepare Your Repository

Commit all files to Git:

```bash
git init
git add .
git commit -m "Initial Evidence deployment"
git branch -M main
git remote add origin https://github.com/your-username/evidence-app.git
git push -u origin main
```

2. Log in to Klutch.sh

Navigate to klutch.sh/app and sign in to your account.

3. Create a New Project

- Click “New Project”
- Enter a project name (e.g., “Evidence Analytics”)
- Select your organization or personal account

4. Create a New App

- Within your project, click “New App”
- Give your app a name (e.g., “evidence-reports”)

5. Connect Your GitHub Repository

- Select GitHub as your Git source
- Authorize Klutch.sh to access your repositories if prompted
- Choose the repository containing your Evidence project
- Select the branch to deploy (typically main)

6. Configure Network Settings

Klutch.sh will automatically detect your Dockerfile. Configure the following:

- Traffic Type: Select HTTP (Evidence is a web application)
- Internal Port: Set to 3000 (Evidence’s default port)

7. Set Environment Variables

Add the environment variables for your data sources in the Klutch.sh dashboard.

Required Variables:

```bash
NODE_ENV=production
PORT=3000
EVIDENCE_HOST=0.0.0.0
```

Data Source Variables (PostgreSQL example):

```bash
POSTGRES_HOST=your-postgres-app.klutch.sh
POSTGRES_PORT=8000
POSTGRES_DATABASE=analytics
POSTGRES_USER=analytics_user
POSTGRES_PASSWORD=your_secure_password
```

Important: Mark sensitive variables like POSTGRES_PASSWORD and BIGQUERY_CREDENTIALS as secret.

8. Attach Persistent Volume (Optional)

If using DuckDB or storing cache files:

- Click “Add Volume” in the storage section
- Mount Path: /app/data
- Size: Start with 5GB (adjust based on data volume)

9. Configure Additional Settings

- Region: Choose the region closest to your users
- Compute Resources: Minimum 512MB RAM, 1GB+ recommended
- Instances: Start with 1 instance

10. Deploy the Application

Click “Create” or “Deploy” to start the deployment. Klutch.sh will:

- Detect your Dockerfile automatically
- Build the Docker image
- Install dependencies
- Build the Evidence project
- Start the application server
- Assign a URL (e.g., https://evidence-app.klutch.sh)

11. Wait for Deployment

Monitor the build logs. The initial deployment may take 3-5 minutes as it:

- Installs Node.js dependencies
- Builds the Evidence application
- Compiles SvelteKit assets
- Starts the server

12. Verify Deployment

Once deployed, visit your application URL (https://your-app.klutch.sh). You should see your Evidence reports and dashboards.
Building Your First Report
Creating a Sales Dashboard
Create pages/sales-dashboard.md:
```markdown
# Sales Dashboard

## Overview Metrics

\`\`\`sql summary
SELECT
    SUM(revenue) as total_revenue,
    COUNT(DISTINCT customer_id) as total_customers,
    AVG(order_value) as avg_order_value,
    COUNT(*) as total_orders
FROM sales
WHERE order_date >= CURRENT_DATE - INTERVAL '30 days'
\`\`\`

<BigValue
    data={summary}
    value=total_revenue
    title="Total Revenue (30d)"
    fmt="$#,##0"
/>

<BigValue
    data={summary}
    value=total_customers
    title="Total Customers"
/>

## Revenue Trend

\`\`\`sql revenue_by_day
SELECT
    DATE(order_date) as date,
    SUM(revenue) as daily_revenue
FROM sales
WHERE order_date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY DATE(order_date)
ORDER BY date
\`\`\`

<LineChart
    data={revenue_by_day}
    x=date
    y=daily_revenue
    title="Daily Revenue (90 days)"
    yFmt="$#,##0"
/>

## Top Products

\`\`\`sql top_products
SELECT
    product_name,
    SUM(quantity) as units_sold,
    SUM(revenue) as product_revenue
FROM sales
WHERE order_date >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY product_name
ORDER BY product_revenue DESC
LIMIT 10
\`\`\`

<DataTable data={top_products} rows=10>
    <Column id=product_name title="Product" />
    <Column id=units_sold title="Units Sold" fmt="#,##0" />
    <Column id=product_revenue title="Revenue" fmt="$#,##0.00" />
</DataTable>
```
Adding Customer Analysis
Create pages/customer-analysis.md:
```markdown
# Customer Analysis

## Customer Segmentation

\`\`\`sql customer_segments
SELECT
    CASE
        WHEN total_spent >= 10000 THEN 'Premium'
        WHEN total_spent >= 5000 THEN 'Gold'
        WHEN total_spent >= 1000 THEN 'Silver'
        ELSE 'Bronze'
    END as segment,
    COUNT(*) as customer_count,
    AVG(total_spent) as avg_customer_value
FROM (
    SELECT customer_id, SUM(order_value) as total_spent
    FROM sales
    GROUP BY customer_id
) customer_totals
GROUP BY segment
ORDER BY avg_customer_value DESC
\`\`\`

<BarChart
    data={customer_segments}
    x=segment
    y=customer_count
    title="Customers by Segment"
/>

## Cohort Analysis

\`\`\`sql cohort_data
SELECT
    DATE_TRUNC('month', first_purchase_date) as cohort_month,
    DATE_TRUNC('month', order_date) as order_month,
    COUNT(DISTINCT customer_id) as active_customers
FROM (
    SELECT
        customer_id,
        order_date,
        MIN(order_date) OVER (PARTITION BY customer_id) as first_purchase_date
    FROM sales
) cohort_base
WHERE first_purchase_date >= CURRENT_DATE - INTERVAL '12 months'
GROUP BY cohort_month, order_month
ORDER BY cohort_month, order_month
\`\`\`

<Heatmap
    data={cohort_data}
    x=order_month
    y=cohort_month
    value=active_customers
    title="Customer Retention Cohorts"
/>
```
Using Templated Pages
Create pages/[product_id].md for dynamic product pages:
```markdown
# {params.product_id} Performance

\`\`\`sql product_details
SELECT
    product_name,
    category,
    SUM(quantity) as units_sold,
    SUM(revenue) as total_revenue,
    AVG(rating) as avg_rating
FROM sales
WHERE product_id = '${params.product_id}'
GROUP BY product_name, category
\`\`\`

## Sales Performance

<BigValue
    data={product_details}
    value=total_revenue
    title="Total Revenue"
    fmt="$#,##0"
/>

## Daily Sales Trend

\`\`\`sql daily_sales
SELECT
    DATE(order_date) as date,
    SUM(quantity) as units,
    SUM(revenue) as revenue
FROM sales
WHERE product_id = '${params.product_id}'
  AND order_date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY DATE(order_date)
ORDER BY date
\`\`\`

<LineChart
    data={daily_sales}
    x=date
    y=revenue
    y2=units
    title="Sales Trend"
/>
```
Advanced Features
Custom Components
Create custom Svelte components in components/custom/:
```svelte
<script>
    export let title;
    export let value;
    export let change;
    export let format = "#,##0";
</script>

<div class="metric-card">
    <div class="title">{title}</div>
    <div class="value">{value}</div>
    {#if change}
        <div class="change" class:positive={change > 0} class:negative={change < 0}>
            {change > 0 ? '↑' : '↓'} {Math.abs(change)}%
        </div>
    {/if}
</div>

<style>
    .metric-card {
        padding: 1.5rem;
        border-radius: 8px;
        background: white;
        box-shadow: 0 2px 8px rgba(0,0,0,0.1);
    }

    .title {
        font-size: 0.875rem;
        color: #666;
        margin-bottom: 0.5rem;
    }

    .value {
        font-size: 2rem;
        font-weight: bold;
        color: #333;
    }

    .change {
        font-size: 0.875rem;
        margin-top: 0.5rem;
    }

    .positive { color: #10b981; }
    .negative { color: #ef4444; }
</style>
```

Use it in your markdown:

```markdown
<MetricCard
    title="Monthly Revenue"
    value="$125,430"
    change={15.3}
/>
```
Loops and Conditionals
Use loops to generate dynamic content:
```markdown
## Product Performance

\`\`\`sql products
SELECT
    product_id,
    product_name,
    SUM(revenue) as revenue
FROM sales
GROUP BY product_id, product_name
ORDER BY revenue DESC
LIMIT 5
\`\`\`

{#each products as product}

### {product.product_name}

Revenue: ${product.revenue}

{/each}
```

Use conditionals to show/hide content:
```markdown
\`\`\`sql sales_target
SELECT
    SUM(revenue) as actual,
    1000000 as target
FROM sales
WHERE EXTRACT(MONTH FROM order_date) = EXTRACT(MONTH FROM CURRENT_DATE)
\`\`\`

{#if sales_target[0].actual >= sales_target[0].target}

## 🎉 Target Achieved!

Great work! We've exceeded our monthly target.

{:else}

## Keep Pushing

We're at {Math.round(sales_target[0].actual / sales_target[0].target * 100)}% of our target.

{/if}
```
Parameterized Reports
Add date pickers and filters:
```markdown
---
title: Sales Report
---

<DateRange
    name=date_range
    start='2024-01-01'
    end='2024-12-31'
/>

\`\`\`sql filtered_sales
SELECT
    DATE(order_date) as date,
    SUM(revenue) as revenue
FROM sales
WHERE order_date BETWEEN '${inputs.date_range.start}' AND '${inputs.date_range.end}'
GROUP BY DATE(order_date)
ORDER BY date
\`\`\`

<LineChart data={filtered_sales} x=date y=revenue />
```
Production Best Practices
Security Configuration
1. Secure Database Credentials
Never commit credentials to Git. Use Klutch.sh environment variables:
```yaml
name: postgres
type: postgres
options:
  host: ${POSTGRES_HOST}
  port: ${POSTGRES_PORT}
  database: ${POSTGRES_DATABASE}
  user: ${POSTGRES_USER}
  password: ${POSTGRES_PASSWORD}
  ssl: true
  ssl_reject_unauthorized: true
```
2. Use Read-Only Database Users
Create dedicated read-only users for Evidence:
```sql
-- PostgreSQL
CREATE USER evidence_readonly WITH PASSWORD 'secure_password';
GRANT CONNECT ON DATABASE analytics TO evidence_readonly;
GRANT USAGE ON SCHEMA public TO evidence_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO evidence_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO evidence_readonly;
```
3. Implement Row-Level Security
Use database views or row-level security to restrict data access:
```sql
-- Create view with filtered data
CREATE VIEW sales_filtered AS
SELECT * FROM sales
WHERE region = current_setting('app.user_region');

-- Grant access to view only
GRANT SELECT ON sales_filtered TO evidence_readonly;
```
Performance Optimization
1. Query Optimization
Write efficient SQL queries:
```sql
-- Bad: Selecting all columns and filtering in JavaScript
SELECT * FROM sales;

-- Good: Select only needed columns and filter in SQL
SELECT
    product_id,
    SUM(revenue) as total_revenue
FROM sales
WHERE order_date >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY product_id;
```
2. Use Materialized Views
Pre-compute expensive queries:
```sql
-- Create materialized view
CREATE MATERIALIZED VIEW daily_sales_summary AS
SELECT
    DATE(order_date) as date,
    product_category,
    COUNT(*) as order_count,
    SUM(revenue) as total_revenue
FROM sales
GROUP BY DATE(order_date), product_category;

-- Refresh periodically
REFRESH MATERIALIZED VIEW daily_sales_summary;
```
3. Enable Caching
Configure Evidence caching in evidence.config.yaml:
```yaml
build:
  cache: true
  cacheDuration: 3600  # 1 hour in seconds

queries:
  defaultCacheDuration: 300  # 5 minutes
```
4. Optimize Data Sources
Use connection pooling for databases:
```yaml
name: postgres
type: postgres
options:
  host: ${POSTGRES_HOST}
  port: ${POSTGRES_PORT}
  database: ${POSTGRES_DATABASE}
  user: ${POSTGRES_USER}
  password: ${POSTGRES_PASSWORD}
  pool:
    min: 2
    max: 10
    idle_timeout: 30000
```
Monitoring and Logging
1. Application Monitoring
Monitor Evidence application health:
```bash
# Check application status
curl -f https://evidence-app.klutch.sh/ || echo "Application down"

# Monitor response times
time curl -s -o /dev/null https://evidence-app.klutch.sh/
```
2. Query Performance Monitoring
Log slow queries in your database:
```sql
-- PostgreSQL: Enable slow query logging
ALTER DATABASE analytics SET log_min_duration_statement = 1000; -- 1 second

-- View slow queries
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```
3. Error Tracking
Implement error tracking in Evidence:
```javascript
// Add to a custom component or layout (runs client-side in the browser)
window.addEventListener('error', (event) => {
  fetch('/api/log-error', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      message: event.message,
      filename: event.filename,
      lineno: event.lineno,
      timestamp: new Date().toISOString()
    })
  });
});
```
Backup and Disaster Recovery
1. Version Control
Keep all Evidence code in Git:
```bash
# Regular commits
git add .
git commit -m "Update sales dashboard with new metrics"
git push origin main

# Tag releases
git tag -a v1.0.0 -m "Initial production release"
git push origin v1.0.0
```
2. Data Source Backups
Backup database data regularly:
```bash
#!/bin/bash
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backups"

pg_dump -h postgres-app.klutch.sh -p 8000 -U analytics_user -d analytics \
    | gzip > ${BACKUP_DIR}/analytics_${DATE}.sql.gz

# Keep last 7 days
find ${BACKUP_DIR} -name "*.sql.gz" -mtime +7 -delete

echo "Backup completed: ${DATE}"
```
3. DuckDB Data Backup
If using DuckDB with persistent volumes:
```bash
# Backup DuckDB file
docker cp evidence-container:/app/data/analytics.duckdb ./backups/analytics_$(date +%Y%m%d).duckdb

# Compress backup
gzip ./backups/analytics_$(date +%Y%m%d).duckdb
```
Scaling Strategies
1. Read Replicas
Use database read replicas for analytics queries:
```yaml
name: postgres_replica
type: postgres
options:
  host: ${POSTGRES_REPLICA_HOST}
  port: ${POSTGRES_REPLICA_PORT}
  database: ${POSTGRES_DATABASE}
  user: ${POSTGRES_USER}
  password: ${POSTGRES_PASSWORD}
  ssl: true
```
2. Horizontal Scaling
Increase Evidence instances in Klutch.sh dashboard:
- Navigate to your app settings
- Increase instance count to 2-3 instances
- Klutch.sh automatically load balances traffic
3. CDN Integration
Serve static assets through a CDN:
```yaml
build:
  staticPath: /static
  assetHost: https://cdn.example.com
```
Troubleshooting
Build Failures
Problem: Build fails with dependency errors
```
npm ERR! Could not resolve dependency
```

Solutions:
1. Delete package-lock.json and node_modules:

```bash
rm package-lock.json
rm -rf node_modules
npm install
```

2. Update Evidence to the latest version:

```bash
npm install @evidence-dev/evidence@latest
```

3. Check that the Node.js version in your Dockerfile matches your local version.
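A quick way to compare the two versions:

```bash
# Local Node.js version
node --version

# Node.js base image pinned in the Dockerfile
grep '^FROM' Dockerfile
```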
Data Source Connection Errors
Problem: Cannot connect to PostgreSQL
```
Error connecting to data source: Connection refused
```

Solutions:
1. Verify environment variables are set correctly:

```bash
echo $POSTGRES_HOST
echo $POSTGRES_PORT
```

2. Check the database is accessible from Klutch.sh:

```bash
psql -h postgres-app.klutch.sh -p 8000 -U analytics_user -d analytics
```

3. Verify SSL settings match database requirements:

```yaml
options:
  ssl: true
  ssl_reject_unauthorized: false  # Only for testing
```

4. Check firewall rules allow connections from Klutch.sh
Query Errors
Problem: SQL query fails with syntax error
```
SQL Error: syntax error at or near "SELECT"
```

Solutions:
1. Validate SQL syntax in your database client first

2. Check the data source type matches the query dialect:
   - PostgreSQL vs MySQL have different syntax
   - DuckDB has unique functions

3. Escape special characters in queries:

```sql
SELECT * FROM products WHERE name LIKE '%O''Brien%'
```

4. Use proper date formatting for your database:

```sql
-- PostgreSQL
WHERE order_date >= CURRENT_DATE - INTERVAL '30 days'

-- MySQL
WHERE order_date >= DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY)
```
Performance Issues
Problem: Pages load slowly
Solutions:
1. Check query execution time:

```sql
EXPLAIN ANALYZE
SELECT * FROM large_table;
```

2. Add indexes to frequently queried columns:

```sql
CREATE INDEX idx_order_date ON sales(order_date);
CREATE INDEX idx_customer_id ON sales(customer_id);
```

3. Enable query result caching:

```yaml
queries:
  defaultCacheDuration: 300
```

4. Use aggregated tables or materialized views for heavy queries
Chart Rendering Issues
Problem: Charts don’t display correctly
Solutions:
1. Verify the query returns the expected data structure:

```markdown
<DataTable data={my_query} /> <!-- Debug with table first -->
```

2. Check column names match the chart configuration:

```markdown
<BarChart data={sales} x=date y=revenue /> <!-- Must match query columns -->
```

3. Ensure data types are correct:

```sql
SELECT
    CAST(order_date AS DATE) as date,          -- Ensure date type
    CAST(revenue AS DECIMAL(10,2)) as revenue  -- Ensure numeric type
FROM sales
```
Updating Evidence
To update your Evidence deployment:
1. Update Dependencies Locally

```bash
cd my-evidence-app
npm update @evidence-dev/evidence
npm test
```

2. Test Changes

```bash
npm run dev
# Verify reports work correctly
```

3. Commit and Push

```bash
git add package.json package-lock.json
git commit -m "Update Evidence to v2.0.0"
git push origin main
```

4. Monitor Deployment

Klutch.sh automatically rebuilds and deploys the updated application. Monitor the deployment logs for any issues.

5. Verify Production

- Test all reports and dashboards
- Check data source connections
- Verify charts render correctly
- Test any custom components
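These checks can be partly automated with a small smoke-test script. Here is a sketch, where the base URL and page paths are placeholders for your own deployment:

```bash
#!/bin/bash
# Smoke-test key Evidence pages after a deployment.
BASE_URL="https://your-evidence-app.klutch.sh"        # replace with your app URL
PAGES=("/" "/sales-dashboard" "/customer-analysis")   # replace with your page paths

for page in "${PAGES[@]}"; do
    status=$(curl -s -o /dev/null -w "%{http_code}" "${BASE_URL}${page}")
    if [ "${status}" -eq 200 ]; then
        echo "OK   ${page}"
    else
        echo "FAIL ${page} (HTTP ${status})"
    fi
done
```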
Sample Code Examples
Creating a KPI Dashboard
```markdown
# Executive Dashboard

## Key Performance Indicators

\`\`\`sql kpis
SELECT
    (SELECT SUM(revenue) FROM sales
     WHERE order_date >= CURRENT_DATE - INTERVAL '30 days') as revenue_30d,
    (SELECT SUM(revenue) FROM sales
     WHERE order_date >= CURRENT_DATE - INTERVAL '60 days'
       AND order_date < CURRENT_DATE - INTERVAL '30 days') as revenue_prev_30d,
    (SELECT COUNT(DISTINCT customer_id) FROM sales
     WHERE order_date >= CURRENT_DATE - INTERVAL '30 days') as customers_30d,
    (SELECT AVG(order_value) FROM sales
     WHERE order_date >= CURRENT_DATE - INTERVAL '30 days') as avg_order_30d
\`\`\`

\`\`\`sql calculations
SELECT
    *,
    ROUND(((revenue_30d - revenue_prev_30d) / revenue_prev_30d * 100), 1) as revenue_change
FROM ${kpis}
\`\`\`

<Grid cols=3>

<BigValue
    data={calculations}
    value=revenue_30d
    title="Revenue (30d)"
    fmt="$#,##0"
    comparison=revenue_prev_30d
    comparisonTitle="vs Previous Period"
/>

<BigValue
    data={calculations}
    value=customers_30d
    title="Active Customers"
    fmt="#,##0"
/>

<BigValue
    data={calculations}
    value=avg_order_30d
    title="Avg Order Value"
    fmt="$#,##0.00"
/>

</Grid>
```
Geographic Sales Analysis
```markdown
# Geographic Performance

\`\`\`sql sales_by_region
SELECT
    region,
    state,
    SUM(revenue) as total_revenue,
    COUNT(*) as order_count,
    COUNT(DISTINCT customer_id) as customer_count
FROM sales
WHERE order_date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY region, state
ORDER BY total_revenue DESC
\`\`\`

<AreaMap
    data={sales_by_region}
    areaCol=state
    value=total_revenue
    title="Revenue by State"
    fmt="$#,##0"
/>

## Regional Breakdown

<BarChart
    data={sales_by_region}
    x=region
    y=total_revenue
    title="Revenue by Region"
    swapXY=true
/>
```
Funnel Analysis
```markdown
# Conversion Funnel

\`\`\`sql funnel_stages
SELECT 'Website Visits' as stage, 1 as stage_order, COUNT(*) as users
FROM website_events
WHERE event_date >= CURRENT_DATE - INTERVAL '30 days'

UNION ALL

SELECT 'Product Views' as stage, 2 as stage_order, COUNT(DISTINCT user_id) as users
FROM website_events
WHERE event_type = 'product_view'
  AND event_date >= CURRENT_DATE - INTERVAL '30 days'

UNION ALL

SELECT 'Add to Cart' as stage, 3 as stage_order, COUNT(DISTINCT user_id) as users
FROM website_events
WHERE event_type = 'add_to_cart'
  AND event_date >= CURRENT_DATE - INTERVAL '30 days'

UNION ALL

SELECT 'Purchase' as stage, 4 as stage_order, COUNT(DISTINCT customer_id) as users
FROM sales
WHERE order_date >= CURRENT_DATE - INTERVAL '30 days'

ORDER BY stage_order
\`\`\`

<FunnelChart
    data={funnel_stages}
    nameCol=stage
    valueCol=users
    title="User Conversion Funnel"
/>
```
Advanced Configuration
Custom Domain Setup
1. Add Domain in Klutch.sh

- Go to your app settings
- Add your custom domain (e.g., analytics.yourcompany.com)

2. Configure DNS

Add a CNAME record pointing to your Klutch.sh app:

```
analytics.yourcompany.com CNAME your-evidence-app.klutch.sh
```

3. Update Evidence Configuration

Update the base URL in evidence.config.yaml:

```yaml
deployment:
  url: https://analytics.yourcompany.com
```
Authentication Integration
While Evidence doesn’t include built-in auth, you can add it with a reverse proxy:
```nginx
server {
    listen 80;
    server_name analytics.example.com;

    auth_basic "Analytics Access";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Or use an external authentication service like Auth0 or Clerk.
Multi-Environment Setup
Create separate configurations for development, staging, and production:
```yaml
environments:
  development:
    build:
      cache: false

  staging:
    build:
      cache: true
      cacheDuration: 600

  production:
    build:
      cache: true
      cacheDuration: 3600
```
Conclusion
You’ve successfully deployed Evidence on Klutch.sh! Your code-based business intelligence platform is now ready to create beautiful, fast, and maintainable reports from your data. Evidence combines the power of SQL with the simplicity of markdown, enabling you to build sophisticated analytics applications that integrate seamlessly with your development workflow.
With Evidence running on Klutch.sh, you benefit from automatic deployments, secure environment variable management, persistent storage options, and scalable infrastructure. Your reports are version controlled, code-reviewed, and deployed through the same workflows you use for application code.
Start creating reports by writing SQL queries in markdown files, use the rich component library to visualize your data, and leverage templating for dynamic pages. As your analytics needs grow, you can add custom components, implement caching strategies, and scale your deployment to handle increasing traffic.
For questions or issues, join the Evidence community on Slack, consult the documentation, or explore the example projects to see what’s possible with code-based BI.