
Deploying Datasette

Datasette is an open-source tool for exploring and publishing data that transforms SQLite databases into interactive websites and JSON APIs. Created by Simon Willison, Datasette makes it effortless to share datasets with the world, enabling anyone to search, filter, visualize, and download data without writing SQL queries. What sets Datasette apart is its plugin ecosystem, which extends functionality with features like authentication, full-text search, geographic visualizations, and custom output formats.

Beyond being a simple database viewer, Datasette serves as a complete data publishing platform. You can use it to create read-only APIs from your databases, build interactive data dashboards, publish research datasets, create documentation sites backed by structured data, or power data journalism projects. The platform’s focus on read-only access, performance optimization through caching, and extensive customization through plugins and metadata files makes it ideal for scenarios where you need to share data safely and efficiently.

Why Deploy Datasette on Klutch.sh?

Deploying Datasette on Klutch.sh offers several advantages for hosting your data publishing platform:

  • Automatic Docker Detection: Klutch.sh recognizes your Dockerfile and handles containerization without manual configuration
  • Persistent Storage: Built-in volume management ensures your SQLite databases and uploaded files persist across deployments
  • HTTPS by Default: Access your data APIs and web interface through secure HTTPS with automatic SSL certificates
  • Simple Scaling: Easily adjust resources as your dataset size and traffic grow
  • Environment Management: Securely configure authentication tokens, API keys, and database connections through environment variables
  • Rapid Deployment: Go from database to live API in minutes with GitHub integration
  • Plugin Support: Install and configure Datasette plugins to extend functionality without infrastructure complexity

Prerequisites

Before deploying Datasette to Klutch.sh, ensure you have:

  • A Klutch.sh account (sign up here)
  • A GitHub account with a repository for your Datasette deployment
  • Basic understanding of Docker and containerization
  • Familiarity with SQLite databases and SQL queries
  • One or more SQLite database files to publish (or sample data to get started)
  • Git installed on your local development machine
  • Optional: Python installed locally for testing Datasette before deployment

Understanding Datasette Architecture

Datasette follows a lightweight architecture designed for efficient data publishing:

Core Components

Python Web Application

Datasette is written in Python on the ASGI standard, using asyncio to handle concurrent connections efficiently. The web application serves both HTML pages for human interaction and JSON endpoints for programmatic access. This dual interface means every database table and query result is simultaneously a web page and an API endpoint.

SQLite Database Engine

At its core, Datasette uses SQLite as the database engine. SQLite’s file-based architecture makes it perfect for read-heavy workloads and simplifies deployment since there’s no separate database server to manage. Datasette can work with multiple SQLite databases simultaneously, each appearing as a separate section in the interface.

Metadata Configuration

Datasette uses JSON or YAML metadata files to configure database presentation, custom queries, permissions, and caching rules. Metadata files let you add human-readable titles, descriptions, source information, and licenses to your databases and tables without modifying the actual database files.

Plugin System

The plugin architecture allows extending Datasette’s functionality through Python packages. Popular plugins add features like authentication (datasette-auth-github), full-text search (datasette-search-all), geographic visualizations (datasette-cluster-map), and export formats (datasette-export-notebook). Plugins hook into Datasette’s event system to modify behavior at various points in the request lifecycle.
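To make the hook mechanism concrete, here is a minimal one-file plugin sketch. The hook names (extra_css_urls and prepare_connection) are real Datasette plugin hooks; the custom SQL function name reverse_text is purely illustrative, and the file would be loaded locally with the --plugins-dir option or packaged as a Python module:

from datasette import hookimpl

@hookimpl
def extra_css_urls():
    # Ask Datasette to include an extra stylesheet on every page
    return ["/static/custom.css"]

@hookimpl
def prepare_connection(conn):
    # Register a custom SQL function usable in any query
    conn.create_function("reverse_text", 1, lambda s: s[::-1] if s else s)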

Template System

Datasette uses Jinja2 templates for rendering HTML. You can customize the appearance of your Datasette instance by providing custom templates and static files. This makes it possible to brand your data portal or integrate it with existing design systems.

Caching Layer

Datasette relies heavily on HTTP caching to keep response times fast. Responses are served with cache headers so browsers, proxies, and CDNs can reuse them, and databases opened in immutable mode can be cached far more aggressively because Datasette knows their contents will not change.

Request Flow

  1. User requests data through web browser or API client
  2. Datasette receives request and checks authentication/permissions
  3. Request is parsed to determine database, table, and query parameters
  4. Query is executed against SQLite database (or retrieved from cache)
  5. Results are formatted as HTML (for browsers) or JSON (for API clients)
  6. Plugins can intercept and modify any step in this process
  7. Response is returned with appropriate caching headers
  8. Audit logs capture access patterns if logging plugins are enabled

Storage Requirements

Datasette requires persistent storage for:

  • SQLite Databases: Your actual data files (size varies by dataset)
  • Uploaded Content: If using plugins that accept file uploads
  • Plugin Data: Some plugins store additional data in separate databases
  • Static Files: Custom CSS, JavaScript, and images for branding
  • Cache Data: Optional persistent cache storage

A typical deployment might start with 100MB-1GB for databases, but storage needs depend entirely on your dataset size. Plan for growth if you expect to add more data over time.

Installation and Setup

Let’s walk through setting up Datasette for deployment on Klutch.sh.

Step 1: Create the Project Structure

First, create a new directory for your Datasette deployment:

mkdir datasette-deployment
cd datasette-deployment
git init

Step 2: Prepare Sample Data

Create a sample SQLite database to get started. Create a file called create_sample_db.py:

import sqlite3

# Create a sample database
conn = sqlite3.connect('data/sample.db')
cursor = conn.cursor()

# Create a books table
cursor.execute('''
    CREATE TABLE books (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        author TEXT NOT NULL,
        published_year INTEGER,
        genre TEXT,
        rating REAL,
        pages INTEGER,
        isbn TEXT,
        description TEXT
    )
''')

# Sample book data
books = [
    (1, 'The Great Gatsby', 'F. Scott Fitzgerald', 1925, 'Classic', 4.2, 180, '978-0743273565', 'A story of decadence and excess in the Jazz Age'),
    (2, 'To Kill a Mockingbird', 'Harper Lee', 1960, 'Classic', 4.5, 324, '978-0060935467', 'A gripping tale of racial injustice and childhood innocence'),
    (3, '1984', 'George Orwell', 1949, 'Science Fiction', 4.4, 328, '978-0451524935', 'A dystopian social science fiction novel'),
    (4, 'Pride and Prejudice', 'Jane Austen', 1813, 'Romance', 4.3, 432, '978-0141439518', 'A romantic novel of manners'),
    (5, 'The Catcher in the Rye', 'J.D. Salinger', 1951, 'Classic', 3.9, 277, '978-0316769488', 'A story about teenage rebellion and angst'),
    (6, 'The Hobbit', 'J.R.R. Tolkien', 1937, 'Fantasy', 4.6, 310, '978-0547928227', 'A fantasy novel about a hobbit\'s unexpected journey'),
    (7, 'Fahrenheit 451', 'Ray Bradbury', 1953, 'Science Fiction', 4.1, 249, '978-1451673319', 'A dystopian novel about book burning'),
    (8, 'Jane Eyre', 'Charlotte Brontë', 1847, 'Classic', 4.2, 507, '978-0141441146', 'A bildungsroman following the experiences of its eponymous heroine'),
    (9, 'The Lord of the Rings', 'J.R.R. Tolkien', 1954, 'Fantasy', 4.7, 1178, '978-0544003415', 'An epic high-fantasy novel'),
    (10, 'Animal Farm', 'George Orwell', 1945, 'Satire', 4.3, 112, '978-0452284241', 'An allegorical novella about revolution and power'),
]
cursor.executemany('INSERT INTO books VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)', books)

# Create an authors table
cursor.execute('''
    CREATE TABLE authors (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        birth_year INTEGER,
        death_year INTEGER,
        nationality TEXT,
        biography TEXT
    )
''')

authors = [
    (1, 'F. Scott Fitzgerald', 1896, 1940, 'American', 'American novelist and short story writer'),
    (2, 'Harper Lee', 1926, 2016, 'American', 'American novelist known for To Kill a Mockingbird'),
    (3, 'George Orwell', 1903, 1950, 'British', 'English novelist and essayist, journalist and critic'),
    (4, 'Jane Austen', 1775, 1817, 'British', 'English novelist known for romantic fiction'),
    (5, 'J.D. Salinger', 1919, 2010, 'American', 'American writer known for The Catcher in the Rye'),
]
cursor.executemany('INSERT INTO authors VALUES (?, ?, ?, ?, ?, ?)', authors)

conn.commit()
conn.close()
print("Sample database created successfully!")

Run the script to create your database:

mkdir -p data
python create_sample_db.py

Step 3: Create Metadata Configuration

Create a metadata.json file to configure your Datasette instance:

{
    "title": "My Data Collection",
    "description": "A curated collection of datasets published with Datasette",
    "license": "CC BY 4.0",
    "license_url": "https://creativecommons.org/licenses/by/4.0/",
    "source": "Sample Data Collection",
    "source_url": "https://example-app.klutch.sh",
    "databases": {
        "sample": {
            "title": "Books Database",
            "description": "A collection of classic literature and information about books",
            "tables": {
                "books": {
                    "title": "Books",
                    "description": "Information about books including title, author, and ratings",
                    "sort_desc": "rating"
                },
                "authors": {
                    "title": "Authors",
                    "description": "Information about book authors"
                }
            },
            "queries": {
                "highly_rated_books": {
                    "title": "Highly Rated Books",
                    "description": "Books with rating above 4.0",
                    "sql": "SELECT title, author, published_year, rating FROM books WHERE rating > 4.0 ORDER BY rating DESC"
                },
                "books_by_decade": {
                    "title": "Books by Decade",
                    "description": "Count of books published in each decade",
                    "sql": "SELECT (published_year / 10) * 10 as decade, COUNT(*) as count FROM books GROUP BY decade ORDER BY decade"
                }
            }
        }
    }
}

Step 4: Create Plugins Configuration

Create a plugins.txt file to list Datasette plugins you want to install:

datasette-cluster-map
datasette-vega
datasette-copyable
datasette-json-html
datasette-pretty-json
datasette-search-all
datasette-export-notebook

Step 5: Create the Dockerfile

Create a Dockerfile in the root directory:

FROM python:3.11-slim

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

# Install system dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc && \
    rm -rf /var/lib/apt/lists/*

# Create app directory
WORKDIR /app

# Create datasette user
RUN useradd -m -u 1000 datasette && \
    chown -R datasette:datasette /app

# Copy requirements
COPY plugins.txt .

# Install Datasette and plugins
RUN pip install --no-cache-dir datasette && \
    if [ -s plugins.txt ]; then \
        pip install --no-cache-dir -r plugins.txt; \
    fi

# Create data directory
RUN mkdir -p /app/data && \
    chown -R datasette:datasette /app/data

# Copy database files and metadata
COPY --chown=datasette:datasette data/ /app/data/
COPY --chown=datasette:datasette metadata.json /app/

# Switch to datasette user
USER datasette

# Expose port
EXPOSE 8001

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8001/-/health').read()" || exit 1

# Start Datasette (shell form so the *.db glob is expanded by the shell)
CMD datasette serve /app/data/*.db \
    --host 0.0.0.0 \
    --port 8001 \
    --metadata /app/metadata.json \
    --cors \
    --setting sql_time_limit_ms 5000 \
    --setting max_returned_rows 1000

Step 6: Create Environment Configuration

Create a .env.example file with configuration options:

Terminal window
# Datasette Configuration
DATASETTE_PORT=8001
# Security Settings (uncomment for production)
# DATASETTE_SECRET=your-secret-key-here
# CORS Settings
# DATASETTE_CORS_ORIGINS=https://yourdomain.com
# Performance Settings
DATASETTE_SQL_TIME_LIMIT_MS=5000
DATASETTE_MAX_RETURNED_ROWS=1000
# Cache Settings
# DATASETTE_CACHE_SIZE_KB=10000
# Plugin Configuration
# For datasette-auth-github (if using authentication)
# DATASETTE_AUTH_GITHUB_CLIENT_ID=your-client-id
# DATASETTE_AUTH_GITHUB_CLIENT_SECRET=your-client-secret
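Note that Datasette does not read these variables on its own; the Dockerfile above bakes the equivalent settings directly into the CMD. If you would rather drive the settings from Klutch.sh environment variables, an optional entrypoint script can translate them into CLI flags. The sketch below assumes a hypothetical start.py in the repository root:

# start.py -- optional entrypoint that maps environment variables to Datasette flags
import glob
import os

port = os.environ.get("DATASETTE_PORT", "8001")
sql_limit = os.environ.get("DATASETTE_SQL_TIME_LIMIT_MS", "5000")
max_rows = os.environ.get("DATASETTE_MAX_RETURNED_ROWS", "1000")

# Expand the database glob ourselves since no shell is involved
databases = sorted(glob.glob("/app/data/*.db"))

args = [
    "datasette", "serve", *databases,
    "--host", "0.0.0.0",
    "--port", port,
    "--metadata", "/app/metadata.json",
    "--cors",
    "--setting", "sql_time_limit_ms", sql_limit,
    "--setting", "max_returned_rows", max_rows,
]
os.execvp("datasette", args)  # replace the Python process with Datasette

If you use this approach, copy start.py into the image and change the Dockerfile CMD to CMD ["python", "start.py"].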

Step 7: Create .dockerignore

Create a .dockerignore file:

.git
.gitignore
.env
.env.local
*.md
README.md
.DS_Store
Thumbs.db
__pycache__
*.pyc
*.pyo
*.pyd
.pytest_cache
.coverage
htmlcov/
create_sample_db.py

Step 8: Create Documentation

Create README.md:

# Datasette Deployment

This repository contains a Datasette deployment configured for Klutch.sh.

## Features

- SQLite database publishing
- Interactive web interface
- JSON API for programmatic access
- Custom queries and metadata
- Plugin ecosystem support

## Local Development

Install dependencies:

    pip install datasette
    pip install -r plugins.txt

Run locally:

    datasette serve data/*.db --metadata metadata.json --reload

Visit http://localhost:8001

## Deployment

This application is configured to deploy on Klutch.sh with automatic Docker detection.

## Adding Data

Place SQLite database files in the data/ directory and update metadata.json accordingly.

Step 9: Initialize Git Repository

Commit your files and push them to your GitHub repository:

git add .
git commit -m "Initial Datasette setup for Klutch.sh deployment"
git branch -M master
git remote add origin https://github.com/yourusername/datasette-deployment.git
git push -u origin master

Deploying to Klutch.sh

Now that your Datasette application is configured, let’s deploy it to Klutch.sh.

  1. Log in to Klutch.sh

    Navigate to klutch.sh/app and sign in with your GitHub account.

  2. Create a New Project

    Click “New Project” and select “Import from GitHub”. Choose the repository containing your Datasette deployment.

  3. Configure Build Settings

    Klutch.sh will automatically detect the Dockerfile in your repository. The platform will use this for building your container.

  4. Configure Traffic Settings

    Select “HTTP” as the traffic type. Datasette serves on port 8001, and Klutch.sh will route HTTPS traffic to this port.

  5. Set Environment Variables

    In the project settings, add these environment variables:

    • DATASETTE_PORT: 8001
    • DATASETTE_SQL_TIME_LIMIT_MS: 5000 (limits query execution time)
    • DATASETTE_MAX_RETURNED_ROWS: 1000 (limits result set size)

    For production deployments with authentication, also add:

    • DATASETTE_SECRET: Generate a strong random string using python -c "import secrets; print(secrets.token_hex(32))"
  6. Configure Persistent Storage

    If your databases will be updated or you plan to allow uploads through plugins, add a persistent volume:

    • Data Volume:
      • Mount path: /app/data
      • Size: Start with 5GB (adjust based on your dataset size)

    For read-only deployments with databases baked into the Docker image, persistent storage is optional but recommended for caching and plugin data.

  7. Deploy the Application

    Click “Deploy” to start the build process. Klutch.sh will:

    • Clone your repository
    • Build the Docker image using your Dockerfile
    • Install Datasette and all plugins
    • Deploy the container with your databases
    • Provision an HTTPS endpoint

    The build process typically takes 1-2 minutes.

  8. Access Your Datasette Instance

    Once deployment completes, Klutch.sh will provide a URL like example-app.klutch.sh. Your Datasette interface and API will be available at this URL.

Getting Started with Datasette

Once your Datasette instance is deployed, here’s how to use it:

Exploring the Web Interface

Home Page

Visit your deployed URL to see all available databases. Each database is listed with its title, description, and table count.

Browsing Tables

Click on a database to see its tables. Click on a table name to view the data. You can:

  • Sort by clicking column headers
  • Filter using the faceting interface
  • Search within columns (if full-text search is enabled)
  • Export data as CSV or JSON
  • View table schema and indexes

Running Queries

Navigate to the SQL editor by clicking the database name and scrolling to the “Run SQL query” section:

SELECT
    title,
    author,
    rating
FROM books
WHERE rating >= 4.5
ORDER BY published_year DESC

Click “Run SQL” to execute the query. Results appear in a table with export options.

Using the JSON API

Every page in Datasette has a JSON equivalent. Add .json to any URL to get the API version.

Get all tables in a database:

curl https://example-app.klutch.sh/sample.json

Get data from a table:

curl https://example-app.klutch.sh/sample/books.json

Filter and sort:

curl "https://example-app.klutch.sh/sample/books.json?rating__gt=4.0&_sort_desc=rating"

Run a saved query:

curl https://example-app.klutch.sh/sample/highly_rated_books.json

Execute custom SQL (Datasette accepts arbitrary read-only queries through the sql query-string parameter):

curl "https://example-app.klutch.sh/sample.json?sql=SELECT+genre,+COUNT(*)+as+count+FROM+books+GROUP+BY+genre+ORDER+BY+count+DESC"

Query Parameters

Datasette supports powerful filtering through URL parameters (a combined Python example follows this list):

Exact match:

?author=George+Orwell

Greater than:

?rating__gt=4.0

Less than:

?pages__lt=300

Contains (for text search):

?title__contains=the

Starts with:

?author__startswith=J

Multiple filters:

?genre=Fantasy&rating__gte=4.5

Sorting:

?_sort=published_year
?_sort_desc=rating

Pagination:

?_size=20&_next=10
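To show how these parameters combine in practice, here is a small client sketch using only the Python standard library. The base URL is the example deployment from this guide; _shape=objects asks Datasette to return each row as a JSON object:

import json
import urllib.parse
import urllib.request

BASE = "https://example-app.klutch.sh"  # replace with your deployment URL

# Combine a filter, a comparison operator, sorting, and a page size
params = urllib.parse.urlencode({
    "genre": "Fantasy",
    "rating__gte": 4.5,
    "_sort_desc": "rating",
    "_size": 20,
    "_shape": "objects",
})
with urllib.request.urlopen(f"{BASE}/sample/books.json?{params}") as resp:
    data = json.load(resp)

for row in data["rows"]:
    print(row["title"], row["rating"])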

Faceting Data

Facets help you explore data distributions. Add _facet parameters to see value counts:

curl "https://example-app.klutch.sh/sample/books.json?_facet=genre&_facet=author"

Response includes facet information:

{
    "rows": [...],
    "facets": {
        "results": {
            "genre": {
                "results": [
                    {"value": "Classic", "count": 5},
                    {"value": "Fantasy", "count": 2},
                    {"value": "Science Fiction", "count": 2}
                ]
            }
        }
    }
}

Creating Custom Queries

Add custom queries to your metadata.json to create reusable, documented SQL queries:

{
    "databases": {
        "sample": {
            "queries": {
                "recent_high_rated": {
                    "title": "Recent Highly-Rated Books",
                    "description": "Books published after 1950 with ratings above 4.0",
                    "sql": "SELECT title, author, published_year, rating FROM books WHERE published_year > 1950 AND rating > 4.0 ORDER BY published_year DESC, rating DESC"
                },
                "books_by_author": {
                    "title": "Books by Author",
                    "description": "Search for books by a specific author",
                    "sql": "SELECT title, published_year, genre, rating FROM books WHERE author = :author ORDER BY published_year",
                    "params": {
                        "author": "J.R.R. Tolkien"
                    }
                }
            }
        }
    }
}

Access custom queries at:

https://example-app.klutch.sh/sample/recent_high_rated
https://example-app.klutch.sh/sample/books_by_author?author=George+Orwell

Working with Plugins

Cluster Map (datasette-cluster-map)

Visualize geographic data on a map. If your table has latitude and longitude columns:

CREATE TABLE locations (
    id INTEGER PRIMARY KEY,
    name TEXT,
    latitude REAL,
    longitude REAL
);

The plugin automatically adds a map view to the table page.
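For example, you could add a small table of illustrative coordinates to the sample database. This is a sketch; the table name matches the schema above and the rows below are approximate coordinates chosen purely for demonstration:

import sqlite3

conn = sqlite3.connect("data/sample.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS locations (
        id INTEGER PRIMARY KEY,
        name TEXT,
        latitude REAL,
        longitude REAL
    )
""")
conn.executemany(
    "INSERT INTO locations (name, latitude, longitude) VALUES (?, ?, ?)",
    [
        ("British Library, London", 51.5299, -0.1276),
        ("New York Public Library", 40.7532, -73.9822),
    ],
)
conn.commit()
conn.close()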

Vega Charts (datasette-vega)

Create visualizations using Vega specifications in your metadata:

{
    "databases": {
        "sample": {
            "tables": {
                "books": {
                    "plugins": {
                        "datasette-vega": {
                            "rating_distribution": {
                                "title": "Rating Distribution",
                                "mark": "bar",
                                "encoding": {
                                    "x": {"field": "rating", "type": "quantitative", "bin": true},
                                    "y": {"aggregate": "count", "type": "quantitative"}
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

Search All (datasette-search-all)

Add full-text search across all tables:

https://example-app.klutch.sh/-/search?q=orwell

Returns results from any table containing “orwell”.

Export Notebook (datasette-export-notebook)

Export query results as Jupyter notebooks for analysis:

https://example-app.klutch.sh/sample/books.notebook

Advanced Configuration

Custom Templates

Customize the appearance by creating custom templates. Update your Dockerfile:

# Create templates directory
RUN mkdir -p /app/templates
# Copy custom templates
COPY --chown=datasette:datasette templates/ /app/templates/

Add to the CMD:

CMD datasette serve /app/data/*.db \
    --host 0.0.0.0 \
    --port 8001 \
    --metadata /app/metadata.json \
    --template-dir /app/templates

Create templates/index.html to customize the home page:

{% extends "default:index.html" %}
{% block content %}
<div class="custom-header">
<h1>Welcome to My Data Portal</h1>
<p>Explore our curated datasets below.</p>
</div>
{{ super() }}
{% endblock %}

Static Files

Add custom CSS and JavaScript:

# Create static directory
RUN mkdir -p /app/static
# Copy static files
COPY --chown=datasette:datasette static/ /app/static/

Add to the CMD:

CMD datasette serve /app/data/*.db \
    --host 0.0.0.0 \
    --port 8001 \
    --metadata /app/metadata.json \
    --static static:/app/static

Create static/custom.css:

.custom-header {
  background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
  color: white;
  padding: 2rem;
  border-radius: 8px;
  margin-bottom: 2rem;
}

.custom-header h1 {
  margin: 0 0 0.5rem 0;
}

Reference it from your custom template (the --static static:/app/static mount above serves files under the /static/ URL prefix):

{% block extra_head %}
<link rel="stylesheet" href="/static/custom.css">
{% endblock %}

Authentication

Add authentication using datasette-auth-github plugin. Update plugins.txt:

datasette-auth-github

Add to metadata.json:

{
    "plugins": {
        "datasette-auth-github": {
            "client_id": {"$env": "DATASETTE_AUTH_GITHUB_CLIENT_ID"},
            "client_secret": {"$env": "DATASETTE_AUTH_GITHUB_CLIENT_SECRET"}
        }
    },
    "databases": {
        "sample": {
            "allow": {
                "id": "*"
            }
        }
    }
}

Set environment variables in Klutch.sh:

  • DATASETTE_AUTH_GITHUB_CLIENT_ID: Your GitHub OAuth app client ID
  • DATASETTE_AUTH_GITHUB_CLIENT_SECRET: Your GitHub OAuth app client secret

Write Access with datasette-write

For applications that need to modify data, install the datasette-write plugin, which adds a browser interface for running write SQL against your databases:

datasette-write

Restrict write access in metadata.json. The exact permission keys vary between plugin and Datasette versions, so check the plugin documentation; the block below is illustrative:

{
    "databases": {
        "sample": {
            "tables": {
                "books": {
                    "allow_write": {
                        "id": "admin_user"
                    }
                }
            }
        }
    }
}

For programmatic inserts, Datasette's JSON write API (available in the Datasette 1.0 alpha releases) accepts rows over HTTP with an API token:

curl -X POST https://example-app.klutch.sh/sample/books/-/insert \
    -H "Authorization: Bearer YOUR_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{
        "row": {
            "title": "New Book",
            "author": "New Author",
            "published_year": 2024,
            "rating": 4.5
        }
    }'

Full-Text Search

Enable SQLite FTS5 full-text search on specific columns:

import sqlite3
conn = sqlite3.connect('data/sample.db')
cursor = conn.cursor()
# Create FTS virtual table
cursor.execute('''
CREATE VIRTUAL TABLE books_fts USING fts5(
title,
author,
description,
content=books,
content_rowid=id
)
''')
# Populate FTS table
cursor.execute('''
INSERT INTO books_fts(rowid, title, author, description)
SELECT id, title, author, description FROM books
''')
conn.commit()
conn.close()
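If the underlying books table changes after the index is built, the FTS table will drift out of date. For a read-only deployment the one-time population above is enough; otherwise, a common pattern (a sketch using standard FTS5 external-content triggers) keeps the index in sync automatically:

import sqlite3

conn = sqlite3.connect("data/sample.db")
conn.executescript("""
-- Keep books_fts in sync with the books table
CREATE TRIGGER IF NOT EXISTS books_ai AFTER INSERT ON books BEGIN
  INSERT INTO books_fts(rowid, title, author, description)
  VALUES (new.id, new.title, new.author, new.description);
END;
CREATE TRIGGER IF NOT EXISTS books_ad AFTER DELETE ON books BEGIN
  INSERT INTO books_fts(books_fts, rowid, title, author, description)
  VALUES ('delete', old.id, old.title, old.author, old.description);
END;
CREATE TRIGGER IF NOT EXISTS books_au AFTER UPDATE ON books BEGIN
  INSERT INTO books_fts(books_fts, rowid, title, author, description)
  VALUES ('delete', old.id, old.title, old.author, old.description);
  INSERT INTO books_fts(rowid, title, author, description)
  VALUES (new.id, new.title, new.author, new.description);
END;
""")
conn.commit()
conn.close()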

Configure in metadata.json:

{
    "databases": {
        "sample": {
            "tables": {
                "books": {
                    "fts_table": "books_fts",
                    "fts_pk": "rowid"
                }
            }
        }
    }
}

Search using the _search parameter:

curl "https://example-app.klutch.sh/sample/books.json?_search=dystopian"

Canned Queries with Parameters

Create parameterized queries for dynamic filtering:

{
    "databases": {
        "sample": {
            "queries": {
                "books_by_year_range": {
                    "title": "Books Published Between Years",
                    "sql": "SELECT * FROM books WHERE published_year BETWEEN :start_year AND :end_year ORDER BY published_year",
                    "params": {
                        "start_year": 1900,
                        "end_year": 2000
                    }
                },
                "books_with_rating": {
                    "title": "Books with Minimum Rating",
                    "sql": "SELECT title, author, rating FROM books WHERE rating >= :min_rating ORDER BY rating DESC",
                    "params": {
                        "min_rating": 4.0
                    }
                }
            }
        }
    }
}

Use from the API:

curl "https://example-app.klutch.sh/sample/books_by_year_range.json?start_year=1950&end_year=1960"

Database Immutability

For read-only deployments, mark databases as immutable so Datasette can cache far more aggressively. Immutability is set on the command line with the -i (or --immutable) flag, for example in your Dockerfile:

CMD datasette serve \
    -i /app/data/sample.db \
    --host 0.0.0.0 \
    --port 8001

Immutable databases enable aggressive caching and can significantly improve performance.

Production Best Practices

Follow these recommendations for running Datasette in production:

Performance Optimization

Query Time Limits

Prevent slow queries from consuming resources:

CMD datasette serve /app/data/*.db \
    --setting sql_time_limit_ms 5000

Row Limits

Limit result set sizes:

CMD datasette serve /app/data/*.db \
    --setting max_returned_rows 1000

Database Optimization

Optimize your SQLite databases before deployment:

import sqlite3
conn = sqlite3.connect('data/sample.db')
# Analyze tables to update query planner statistics
conn.execute('ANALYZE')
# Vacuum to reclaim unused space and defragment
conn.execute('VACUUM')
# Enable WAL mode for better concurrency (if writable)
conn.execute('PRAGMA journal_mode=WAL')
conn.commit()
conn.close()

Indexing

Add indexes for frequently queried columns:

CREATE INDEX idx_books_author ON books(author);
CREATE INDEX idx_books_rating ON books(rating);
CREATE INDEX idx_books_published_year ON books(published_year);
CREATE INDEX idx_books_genre ON books(genre);

Caching

Tune the SQLite page cache with the cache_size_kb setting, and combine it with immutable mode (-i) so responses can be cached aggressively:

CMD datasette serve \
    -i /app/data/sample.db \
    --setting cache_size_kb 10000

Security Best Practices

CORS Configuration

Enable CORS with the --cors flag, which sends permissive headers (Access-Control-Allow-Origin: *). To restrict access to specific origins, use a CORS plugin such as datasette-cors rather than a core setting:

CMD datasette serve /app/data/*.db \
    --cors

SQL Injection Prevention

Datasette automatically uses parameterized queries. Always use named parameters in custom queries:

{
"sql": "SELECT * FROM books WHERE author = :author",
"params": {"author": "George Orwell"}
}

Never concatenate user input into SQL strings.
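The same named-parameter style applies when you prepare or query data with Python's sqlite3 module outside Datasette. A minimal sketch against the sample database:

import sqlite3

conn = sqlite3.connect("data/sample.db")
# Safe: the value is bound as a named parameter, never interpolated into the SQL string
rows = conn.execute(
    "SELECT title, rating FROM books WHERE author = :author",
    {"author": "George Orwell"},
).fetchall()
print(rows)
conn.close()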

Rate Limiting

While Datasette doesn’t have built-in rate limiting, use a reverse proxy or Klutch.sh features to prevent abuse.

Authentication for Sensitive Data

Always use authentication plugins for non-public datasets:

{
    "databases": {
        "private_data": {
            "allow": {
                "id": ["allowed_user_1", "allowed_user_2"]
            }
        }
    }
}

Data Management

Regular Backups

Back up your SQLite databases regularly:

# Using SQLite backup command
sqlite3 data/sample.db ".backup data/sample_backup.db"
# Or use volume snapshots in Klutch.sh
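If you prefer to script backups in Python, the standard library's sqlite3 backup API produces a consistent copy even while the database is being read. A sketch using the same file names as above:

import sqlite3

source = sqlite3.connect("data/sample.db")
destination = sqlite3.connect("data/sample_backup.db")
with destination:
    source.backup(destination)  # online, page-by-page copy of the live database
destination.close()
source.close()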

Database Versioning

Keep track of database schema versions:

CREATE TABLE schema_version (
    version INTEGER PRIMARY KEY,
    applied_at TEXT DEFAULT CURRENT_TIMESTAMP,
    description TEXT
);

INSERT INTO schema_version (version, description)
VALUES (1, 'Initial schema');
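A small migration runner can use this table to apply only the changes that have not run yet. The sketch below is illustrative: the MIGRATIONS list and the notes and tags tables are hypothetical placeholders for your own schema changes.

import sqlite3

# Hypothetical migration list: (version, description, SQL)
MIGRATIONS = [
    (1, "Initial schema", "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"),
    (2, "Add tags table", "CREATE TABLE IF NOT EXISTS tags (id INTEGER PRIMARY KEY, name TEXT)"),
]

conn = sqlite3.connect("data/sample.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS schema_version (
        version INTEGER PRIMARY KEY,
        applied_at TEXT DEFAULT CURRENT_TIMESTAMP,
        description TEXT
    )
""")
current = conn.execute("SELECT COALESCE(MAX(version), 0) FROM schema_version").fetchone()[0]
for version, description, sql in MIGRATIONS:
    if version > current:
        conn.execute(sql)
        conn.execute(
            "INSERT INTO schema_version (version, description) VALUES (?, ?)",
            (version, description),
        )
conn.commit()
conn.close()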

Data Validation

Use CHECK constraints to ensure data quality:

CREATE TABLE validated_books (
id INTEGER PRIMARY KEY,
title TEXT NOT NULL CHECK(length(title) > 0),
rating REAL CHECK(rating >= 0 AND rating <= 5),
published_year INTEGER CHECK(published_year > 1000 AND published_year <= 2100)
);

Monitoring

Health Checks

Datasette provides a health endpoint at /-/health. Monitor this endpoint:

curl https://example-app.klutch.sh/-/health

Metadata Endpoint

Check configuration and available databases:

curl https://example-app.klutch.sh/-/metadata.json

Performance Monitoring

Use the /-/threads endpoint to check for long-running queries:

curl https://example-app.klutch.sh/-/threads.json
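For a simple external check, a short Python poller can hit a couple of Datasette's built-in introspection endpoints (/-/versions.json and /-/databases.json) and report whether the instance responds. This is a sketch; swap in your own deployment URL and alerting:

import json
import urllib.request

BASE = "https://example-app.klutch.sh"  # your deployment URL

for path in ("/-/versions.json", "/-/databases.json"):
    try:
        with urllib.request.urlopen(f"{BASE}{path}", timeout=10) as resp:
            payload = json.load(resp)
            print(path, "OK")
    except Exception as exc:
        print(path, "FAILED:", exc)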

Logging

Datasette writes its request and error output to standard output and standard error, which Klutch.sh captures in your application logs. Review those logs when debugging slow queries, plugin errors, or failed deployments.

Scaling Considerations

Read Replicas

For high-traffic deployments, consider deploying multiple Datasette instances pointing to the same database files (read-only mode).

Database Partitioning

Split large datasets across multiple databases:

/app/data/
├── data_2020.db
├── data_2021.db
├── data_2022.db
└── data_2023.db

Datasette will serve all databases from a single interface.
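One way to produce such per-year files is to copy rows into new databases with SQLite's ATTACH. The sketch below assumes a hypothetical events table with a created_at column in a single large source database:

import sqlite3

source = sqlite3.connect("data/all_events.db")
years = [row[0] for row in source.execute(
    "SELECT DISTINCT strftime('%Y', created_at) AS year FROM events ORDER BY year"
)]
for year in years:
    # Create a separate database file per year and copy matching rows into it
    source.execute("ATTACH DATABASE ? AS part", (f"data/data_{year}.db",))
    source.execute("CREATE TABLE part.events AS SELECT * FROM events WHERE 0")
    source.execute(
        "INSERT INTO part.events SELECT * FROM events WHERE strftime('%Y', created_at) = ?",
        (year,),
    )
    source.commit()
    source.execute("DETACH DATABASE part")
source.close()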

CDN Integration

Since Datasette returns appropriate cache headers, deploy behind a CDN for optimal performance:

  • Immutable databases: Cache for extended periods
  • API responses: Cache with shorter TTLs
  • Static assets: Cache aggressively

Troubleshooting

Database Issues

Problem: Database file not found

Solutions:

  • Verify database files are in the /app/data directory
  • Check file permissions (should be readable by datasette user)
  • Ensure databases are copied in Dockerfile with correct paths
  • Test locally with datasette serve data/*.db

Problem: Database locked errors

Solutions:

  • Enable WAL mode: PRAGMA journal_mode=WAL
  • Ensure only one process writes to the database
  • Use immutable mode for read-only deployments
  • Check that persistent volume is properly mounted

Performance Issues

Problem: Slow query responses

Solutions:

  • Add indexes to frequently queried columns
  • Reduce max_returned_rows setting
  • Lower sql_time_limit_ms to prevent long queries
  • Use EXPLAIN QUERY PLAN to analyze query performance
  • Enable immutable mode and caching for static data
  • Consider pre-computing expensive queries and storing results

Problem: High memory usage

Solutions:

  • Reduce cache size: --setting cache_size_kb 5000
  • Limit result set sizes
  • Optimize database with VACUUM
  • Monitor query complexity
  • Restart container if memory doesn’t stabilize

Plugin Issues

Problem: Plugin not loading

Solutions:

  • Verify plugin is listed in plugins.txt
  • Check plugin compatibility with your Datasette version
  • Review Docker build logs for installation errors
  • Test plugin locally before deployment
  • Check plugin documentation for configuration requirements

Problem: Plugin configuration not working

Solutions:

  • Verify metadata.json syntax is correct
  • Check that environment variables are set correctly
  • Review plugin documentation for required settings
  • Check logs for plugin-specific error messages

API Issues

Problem: CORS errors in browser

Solutions:

  • Add --cors flag to datasette command
  • Restrict origins with a CORS plugin such as datasette-cors (the --cors flag alone allows all origins)
  • Verify HTTPS is being used
  • Check browser console for specific CORS errors

Problem: Authentication not working

Solutions:

  • Verify OAuth app credentials are correct
  • Check callback URLs match your deployment URL
  • Ensure authentication plugin is installed
  • Review metadata.json permissions configuration
  • Test with curl to isolate browser issues

Deployment Issues

Problem: Container fails to start

Solutions:

  • Check Klutch.sh logs for error messages
  • Verify Dockerfile syntax is correct
  • Ensure all COPY paths exist in repository
  • Test Docker build locally: docker build -t datasette-test .
  • Check that port 8001 is exposed and matches CMD

Problem: Data not persisting

Solutions:

  • Verify persistent volume is configured in Klutch.sh
  • Check mount path is correct (/app/data)
  • Ensure volume size is adequate
  • Test write permissions in container
  • For read-only deployments, ensure databases are in Docker image


Conclusion

Datasette transforms the way you share and explore data by turning SQLite databases into fully-featured web applications and APIs with minimal configuration. Its plugin ecosystem, powerful query interface, and focus on simplicity make it an excellent choice for data publishing, API development, and building data-driven applications.

Deploying on Klutch.sh gives you the benefits of containerized hosting with automatic HTTPS, persistent storage, and straightforward deployment workflows. Whether you’re publishing research data, building internal data tools, creating public APIs, or developing data journalism projects, Datasette on Klutch.sh provides the infrastructure you need to make your data accessible and useful.

Start sharing your data with the world today and experience the power of effortless data publishing.