Deploying a ClickHouse Database

Introduction

ClickHouse is an open-source, column-oriented database management system (DBMS) designed for online analytical processing (OLAP). Developed at Yandex and open-sourced in 2016, ClickHouse has rapidly become one of the fastest and most efficient databases for real-time analytics, capable of processing billions of rows and handling petabytes of data with exceptional query performance.

ClickHouse is renowned for its:

  • Blazing Fast Query Performance: Processes billions of rows per second with sub-second query response times
  • Column-Oriented Storage: Optimized for analytical queries, reading only necessary columns for tremendous speed gains
  • Data Compression: Achieves 10x-100x compression ratios, significantly reducing storage requirements
  • Distributed Architecture: Native support for sharding and replication across multiple nodes
  • SQL Support: A familiar, largely ANSI-compatible SQL dialect with powerful extensions for analytics and time-series data
  • Real-Time Ingestion: Handles millions of inserts per second with immediate data availability for queries
  • Cost-Effective: Exceptional performance-to-cost ratio compared to traditional data warehouses

Common use cases include web and application analytics, log analysis, time-series data processing, business intelligence dashboards, real-time monitoring, and data warehousing for petabyte-scale datasets.

This comprehensive guide walks you through deploying ClickHouse on Klutch.sh using Docker, including detailed installation steps, sample configurations, and production-ready best practices for persistent storage and optimal performance.

Prerequisites

Before you begin, ensure you have the following:

  • A Klutch.sh account
  • A GitHub account and a repository for your deployment files
  • Git installed on your local machine
  • Docker installed locally (optional, for testing the image before deploying)
  • Basic familiarity with SQL and Docker

Installation and Setup

Step 1: Create Your Project Directory

First, create a new directory for your ClickHouse deployment project:

Terminal window
mkdir clickhouse-klutch
cd clickhouse-klutch
git init

Step 2: Create the Dockerfile

Create a Dockerfile in your project root directory. This will define your ClickHouse container configuration:

FROM clickhouse/clickhouse-server:24-alpine
# Set default environment variables
# These can be overridden in the Klutch.sh dashboard
ENV CLICKHOUSE_DB=default
ENV CLICKHOUSE_USER=default
ENV CLICKHOUSE_PASSWORD=ClickHouse123!
# Expose the HTTP interface port (primary)
EXPOSE 8123
# Expose the native protocol port (for clickhouse-client)
EXPOSE 9000
# Optional: Copy custom configuration files
# Configuration files placed in /etc/clickhouse-server/config.d/
# will be merged with the default configuration
# COPY ./config.xml /etc/clickhouse-server/config.d/custom-config.xml
# COPY ./users.xml /etc/clickhouse-server/users.d/custom-users.xml

Note: The ClickHouse Alpine image is lightweight and recommended for production deployments. ClickHouse exposes several ports: 8123 for the HTTP interface (REST API), 9000 for the native protocol (CLI and drivers), 9009 for interserver communication, and 8443/9440 for the TLS-encrypted HTTP and native interfaces.

Step 3: (Optional) Create Custom Configuration

For production deployments, you can create custom configuration files. Create a file named config.xml:

<!-- config.xml - Custom ClickHouse Configuration -->
<clickhouse>
    <!-- Listen on all interfaces -->
    <listen_host>::</listen_host>

    <!-- Logging configuration -->
    <logger>
        <level>information</level>
        <console>true</console>
    </logger>

    <!-- Query settings -->
    <max_concurrent_queries>100</max_concurrent_queries>
    <max_server_memory_usage_to_ram_ratio>0.9</max_server_memory_usage_to_ram_ratio>

    <!-- Performance tuning -->
    <mark_cache_size>5368709120</mark_cache_size>
    <max_table_size_to_drop>0</max_table_size_to_drop>

    <!-- Enable query log for monitoring -->
    <query_log>
        <database>system</database>
        <table>query_log</table>
    </query_log>
</clickhouse>

You can also create a custom users configuration file named users.xml:

<!-- users.xml - Custom User Configuration -->
<clickhouse>
    <users>
        <default>
            <password_sha256_hex><!-- SHA256 hash of your password --></password_sha256_hex>
            <networks>
                <ip>::/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </default>
    </users>
</clickhouse>

If you create custom configuration files, uncomment the COPY lines in your Dockerfile to include them.
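
To generate the password_sha256_hex value for users.xml, hash your password from a shell. This is a minimal sketch using the standard sha256sum utility; substitute your real password for the sample one:

Terminal window
# Print the SHA256 hex digest of the password (-n avoids hashing a trailing newline)
echo -n 'ClickHouse123!' | sha256sum | awk '{print $1}'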

Step 4: (Optional) Create Initialization Scripts

ClickHouse supports initialization scripts that run when the database first starts. Create a file named init.sql:

-- init.sql - Database initialization script
-- This script runs automatically on first startup

-- Create a database for analytics
CREATE DATABASE IF NOT EXISTS analytics;

-- Create a sample events table with MergeTree engine
CREATE TABLE IF NOT EXISTS analytics.events (
    event_time DateTime,
    event_date Date DEFAULT toDate(event_time),
    user_id UInt32,
    event_type String,
    page_url String,
    country_code FixedString(2),
    device_type LowCardinality(String),
    session_id String
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, user_id, event_time)
SETTINGS index_granularity = 8192;

-- Create a materialized view for aggregated statistics
CREATE MATERIALIZED VIEW IF NOT EXISTS analytics.daily_stats
ENGINE = SummingMergeTree()
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, event_type)
AS SELECT
    event_date,
    event_type,
    count() AS event_count,
    uniq(user_id) AS unique_users
FROM analytics.events
GROUP BY event_date, event_type;

-- Create an index for faster queries
CREATE INDEX IF NOT EXISTS idx_user_id ON analytics.events (user_id) TYPE minmax GRANULARITY 1;

-- Insert sample data
INSERT INTO analytics.events (event_time, user_id, event_type, page_url, country_code, device_type, session_id) VALUES
    (now(), 1001, 'page_view', '/home', 'US', 'desktop', 'session_abc123'),
    (now(), 1002, 'click', '/products', 'UK', 'mobile', 'session_def456'),
    (now(), 1003, 'page_view', '/about', 'CA', 'tablet', 'session_ghi789');

To include initialization scripts in your deployment, update your Dockerfile:

FROM clickhouse/clickhouse-server:24-alpine
ENV CLICKHOUSE_DB=default
ENV CLICKHOUSE_USER=default
ENV CLICKHOUSE_PASSWORD=ClickHouse123!
EXPOSE 8123
EXPOSE 9000
# Copy initialization scripts
COPY ./init.sql /docker-entrypoint-initdb.d/

Step 5: Test Locally (Optional)

Before deploying to Klutch.sh, you can test your ClickHouse setup locally:

Terminal window
# Build the Docker image
docker build -t my-clickhouse .

# Run the container
docker run -d \
  --name clickhouse-test \
  -p 8123:8123 \
  -p 9000:9000 \
  -e CLICKHOUSE_PASSWORD=mysecretpassword \
  my-clickhouse

# Wait a few seconds for ClickHouse to start
sleep 5

# Test the HTTP interface (authenticate with the password set above)
curl -u default:mysecretpassword \
  'http://localhost:8123/?query=SELECT%20version()'

# Or connect with clickhouse-client (if installed)
clickhouse-client --host localhost --port 9000 --user default --password mysecretpassword --query "SELECT version()"

# Stop and remove the test container when done
docker stop clickhouse-test
docker rm clickhouse-test

Step 6: Push to GitHub

Commit your Dockerfile and any configuration files to your GitHub repository:

Terminal window
git add Dockerfile init.sql config.xml users.xml
git commit -m "Add ClickHouse Dockerfile and configuration files"
git remote add origin https://github.com/yourusername/clickhouse-klutch.git
git push -u origin main

Connecting to ClickHouse

Once deployed, you can connect to your ClickHouse database from any application using the HTTP interface or native protocol. Since Klutch.sh routes TCP traffic through port 8000, use the following connection methods:

HTTP Interface (REST API)

The HTTP interface is the simplest way to interact with ClickHouse and works with any HTTP client:

Terminal window
# Query using curl (pass credentials with -u whenever a password is set)
curl -u default:ClickHouse123! \
  'http://example-app.klutch.sh:8000/?query=SELECT%20version()'

# Insert data from a CSV file
curl -u default:ClickHouse123! \
  'http://example-app.klutch.sh:8000/?query=INSERT%20INTO%20analytics.events%20FORMAT%20CSV' \
  --data-binary @data.csv

# Count rows
curl -u default:ClickHouse123! \
  'http://example-app.klutch.sh:8000/?query=SELECT%20count()%20FROM%20analytics.events'

Connection String Format

For TCP connections (native protocol):

clickhouse://default:ClickHouse123!@example-app.klutch.sh:8000/default

Replace:

  • default with your database username
  • ClickHouse123! with your database password
  • example-app.klutch.sh with your actual Klutch.sh app URL
  • default at the end with your database name

Example Connection Code

Node.js (using @clickhouse/client):

const { createClient } = require('@clickhouse/client');

const client = createClient({
  host: 'http://example-app.klutch.sh:8000',
  username: 'default',
  password: 'ClickHouse123!',
  database: 'default',
});

async function query() {
  const resultSet = await client.query({
    query: 'SELECT version()',
    format: 'JSONEachRow',
  });
  const data = await resultSet.json();
  console.log('ClickHouse version:', data);
}

query().catch(console.error);

Python (using clickhouse-driver):

from datetime import datetime

from clickhouse_driver import Client

client = Client(
    host='example-app.klutch.sh',
    port=8000,
    user='default',
    password='ClickHouse123!',
    database='default'
)

# Execute query
result = client.execute('SELECT version()')
print('ClickHouse version:', result)

# Insert data
client.execute(
    'INSERT INTO analytics.events (event_time, user_id, event_type, page_url, country_code, device_type, session_id) VALUES',
    [(datetime.now(), 1004, 'page_view', '/products', 'US', 'mobile', 'session_xyz')]
)

# Query with parameters
events = client.execute(
    'SELECT * FROM analytics.events WHERE user_id = %(user_id)s',
    {'user_id': 1001}
)

Python (using HTTP client - requests):

import requests
import json

# Define connection parameters
base_url = 'http://example-app.klutch.sh:8000'
auth = ('default', 'ClickHouse123!')

# Execute query
response = requests.get(
    f'{base_url}/',
    params={'query': 'SELECT version()'},
    auth=auth
)
print('ClickHouse version:', response.text)

# Insert JSON data
data = [
    {'event_time': '2024-01-15 10:00:00', 'user_id': 1005, 'event_type': 'click', 'page_url': '/checkout', 'country_code': 'FR', 'device_type': 'desktop', 'session_id': 'session_123'}
]
response = requests.post(
    f'{base_url}/',
    params={'query': 'INSERT INTO analytics.events FORMAT JSONEachRow'},
    data='\n'.join(json.dumps(row) for row in data),
    auth=auth
)

Go (using clickhouse-go):

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/ClickHouse/clickhouse-go/v2"
)

func main() {
    conn, err := clickhouse.Open(&clickhouse.Options{
        Addr: []string{"example-app.klutch.sh:8000"},
        Auth: clickhouse.Auth{
            Database: "default",
            Username: "default",
            Password: "ClickHouse123!",
        },
        Protocol: clickhouse.HTTP,
    })
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // Query data
    var version string
    if err := conn.QueryRow(context.Background(), "SELECT version()").Scan(&version); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("ClickHouse version: %s\n", version)

    // Insert data (list the columns so event_date can use its DEFAULT)
    batch, err := conn.PrepareBatch(context.Background(),
        "INSERT INTO analytics.events (event_time, user_id, event_type, page_url, country_code, device_type, session_id)")
    if err != nil {
        log.Fatal(err)
    }
    err = batch.Append(
        time.Now(),
        uint32(1006),
        "page_view",
        "/products",
        "US",
        "mobile",
        "session_abc",
    )
    if err != nil {
        log.Fatal(err)
    }
    if err := batch.Send(); err != nil {
        log.Fatal(err)
    }
}

Java (using clickhouse-jdbc):

import com.clickhouse.jdbc.ClickHouseDataSource;

import java.sql.*;
import java.util.Properties;

public class ClickHouseExample {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:clickhouse://example-app.klutch.sh:8000/default";
        Properties properties = new Properties();
        properties.setProperty("user", "default");
        properties.setProperty("password", "ClickHouse123!");
        ClickHouseDataSource dataSource = new ClickHouseDataSource(url, properties);

        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            // Query data
            ResultSet rs = stmt.executeQuery("SELECT version()");
            if (rs.next()) {
                System.out.println("ClickHouse version: " + rs.getString(1));
            }

            // Insert data
            PreparedStatement pstmt = conn.prepareStatement(
                "INSERT INTO analytics.events (event_time, user_id, event_type, page_url, country_code, device_type, session_id) VALUES (?, ?, ?, ?, ?, ?, ?)"
            );
            pstmt.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            pstmt.setInt(2, 1007);
            pstmt.setString(3, "click");
            pstmt.setString(4, "/products");
            pstmt.setString(5, "US");
            pstmt.setString(6, "desktop");
            pstmt.setString(7, "session_xyz");
            pstmt.executeUpdate();
        }
    }
}

Deploying to Klutch.sh

Now that your ClickHouse project is ready and pushed to GitHub, follow these steps to deploy it on Klutch.sh with persistent storage for optimal performance.

Deployment Steps

    1. Log in to Klutch.sh

      Navigate to klutch.sh/app and sign in to your account.

    2. Create a New Project

      Go to Create Project and give your project a meaningful name (e.g., “ClickHouse Analytics Database”).

    3. Create a New App

      Navigate to Create App and configure the following settings:

    4. Select Your Repository

      • Choose GitHub as your Git source
      • Select the repository containing your Dockerfile
      • Choose the branch you want to deploy (usually main or master)
    5. Configure Traffic Type

      • Traffic Type: Select TCP (ClickHouse requires TCP traffic for database connections)
      • Internal Port: Set to 8123 (the default ClickHouse HTTP interface port that your container listens on)

      Note: While ClickHouse also uses port 9000 for native protocol, port 8123 (HTTP interface) is recommended as it’s more universally compatible and easier to work with for most applications.

    6. Set Environment Variables

      Add the following environment variables for your ClickHouse configuration:

      • CLICKHOUSE_DB: The name of your database (e.g., default or analytics)
      • CLICKHOUSE_USER: The database username (e.g., default)
      • CLICKHOUSE_PASSWORD: A strong password for your database user (use a secure password generator)
      • CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: (Optional) Set to 1 to enable SQL-based user management

      Security Note: Always use strong, unique passwords for production databases. ClickHouse passwords should be at least 12 characters with mixed case, numbers, and special characters.

    7. Attach a Persistent Volume

      This is critical for ensuring your database data persists across deployments and restarts:

      • In the Volumes section, click “Add Volume”
      • Mount Path: Enter /var/lib/clickhouse (this is where ClickHouse stores its data files, tables, and metadata)
      • Size: Choose an appropriate size based on your expected data volume (e.g., 20GB, 50GB, 100GB, etc.)

      Important: ClickHouse requires persistent storage to maintain your data between container restarts. The storage requirements depend on your data volume, but plan for 3-10x compression of your raw data size.

    8. Configure Additional Settings

      • Region: Select the region closest to your users or data sources for optimal latency
      • Compute Resources: Choose CPU and memory based on your workload
        • Minimum: 1 vCPU, 2GB RAM (for development/testing)
        • Recommended: 2+ vCPUs, 4GB+ RAM (for production workloads)
        • Heavy Analytics: 4+ vCPUs, 8GB+ RAM (for high-throughput analytics)
      • Instances: Start with 1 instance (ClickHouse can be scaled horizontally later with clustering)
    9. Deploy Your Database

      Click “Create” to start the deployment. Klutch.sh will:

      • Automatically detect your Dockerfile in the repository root
      • Build the Docker image with ClickHouse
      • Attach the persistent volume
      • Start your ClickHouse container
      • Assign a URL for external connections
    10. Access Your Database

      Once deployment is complete, you’ll receive a URL like example-app.klutch.sh. You can connect to your ClickHouse database using this URL on port 8000:

      HTTP Interface:

      Terminal window
      curl -u default:ClickHouse123! \
      'http://example-app.klutch.sh:8000/?query=SELECT%20version()'

      Connection String:

      clickhouse://default:ClickHouse123!@example-app.klutch.sh:8000/default

Working with ClickHouse

Creating Tables

ClickHouse offers several table engines optimized for different use cases. The most common is MergeTree:

-- Time-series events table
CREATE TABLE events (
    timestamp DateTime,
    date Date DEFAULT toDate(timestamp),
    user_id UInt64,
    event_name String,
    properties String
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(date)
ORDER BY (date, user_id, timestamp)
SETTINGS index_granularity = 8192;

-- Aggregated metrics table with SummingMergeTree
CREATE TABLE metrics (
    date Date,
    metric_name String,
    value Float64
) ENGINE = SummingMergeTree()
PARTITION BY toYYYYMM(date)
ORDER BY (date, metric_name);

-- Replicated table for high availability
CREATE TABLE distributed_events ON CLUSTER my_cluster (
    timestamp DateTime,
    user_id UInt64,
    event_name String
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/distributed_events', '{replica}')
PARTITION BY toYYYYMM(toDate(timestamp))
ORDER BY (toDate(timestamp), user_id, timestamp);

Inserting Data

-- Insert single row
INSERT INTO events (timestamp, user_id, event_name, properties)
VALUES (now(), 12345, 'page_view', '{"page": "/home"}');

-- Insert multiple rows
INSERT INTO events (timestamp, user_id, event_name, properties) VALUES
    (now(), 12345, 'page_view', '{"page": "/home"}'),
    (now(), 67890, 'click', '{"button": "signup"}'),
    (now(), 12345, 'page_view', '{"page": "/products"}');

-- Insert from file (CSV, TSV, JSON) via HTTP:
curl -u default:ClickHouse123! \
  'http://example-app.klutch.sh:8000/?query=INSERT%20INTO%20events%20FORMAT%20CSV' \
  --data-binary @data.csv

Querying Data

ClickHouse SQL supports advanced analytics functions:

-- Basic aggregation
SELECT
    toDate(timestamp) AS date,
    event_name,
    count() AS event_count,
    uniq(user_id) AS unique_users
FROM events
WHERE date >= today() - 7
GROUP BY date, event_name
ORDER BY date DESC, event_count DESC;

-- Time-series analysis with window functions
SELECT
    toStartOfInterval(timestamp, INTERVAL 1 HOUR) AS hour,
    event_name,
    count() AS events,
    uniq(user_id) AS unique_users,
    avg(events) OVER (PARTITION BY event_name ORDER BY hour ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS moving_avg
FROM events
WHERE timestamp >= now() - INTERVAL 24 HOUR
GROUP BY hour, event_name
ORDER BY hour DESC;

-- User retention cohort analysis
SELECT
    toStartOfWeek(first_seen) AS cohort_week,
    dateDiff('week', cohort_week, toStartOfWeek(timestamp)) AS weeks_since_signup,
    uniq(user_id) AS users
FROM (
    SELECT
        user_id,
        timestamp,
        min(timestamp) OVER (PARTITION BY user_id) AS first_seen
    FROM events
)
GROUP BY cohort_week, weeks_since_signup
ORDER BY cohort_week DESC, weeks_since_signup;

-- Top N query with LIMIT BY
SELECT
    date,
    user_id,
    count() AS events
FROM events
WHERE date >= today() - 30
GROUP BY date, user_id
ORDER BY date DESC, events DESC
LIMIT 10 BY date;

Materialized Views

Materialized views in ClickHouse automatically aggregate data as it’s inserted:

-- Create a materialized view for hourly statistics
CREATE MATERIALIZED VIEW hourly_stats
ENGINE = SummingMergeTree()
PARTITION BY toYYYYMM(hour)
ORDER BY (hour, event_name)
AS SELECT
    toStartOfHour(timestamp) AS hour,
    event_name,
    count() AS event_count,
    uniq(user_id) AS unique_users
FROM events
GROUP BY hour, event_name;

-- Query the materialized view (much faster than the base table)
SELECT * FROM hourly_stats
WHERE hour >= now() - INTERVAL 7 DAY
ORDER BY hour DESC;

Production Best Practices

Security Recommendations

  • Use Strong Passwords: Never use default passwords in production. Generate strong, random passwords with at least 12 characters.
  • Environment Variables: Store sensitive credentials as environment variables in Klutch.sh, never in your Dockerfile or configuration files.
  • User Management: Create separate users with limited permissions for different applications and use cases (a sketch follows this list).
  • Network Security: Consider using HTTPS/TLS for encrypted connections (requires custom configuration).
  • Access Control: Implement row-level security and column-level permissions for sensitive data.
  • Regular Security Updates: Keep your ClickHouse version updated with the latest security patches.
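
As a minimal sketch of SQL-based user management, assuming access management is enabled (for example via CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1 from the deployment steps); the usernames and passwords below are placeholders:

-- Read-only user for a dashboard application
CREATE USER IF NOT EXISTS dashboard_reader IDENTIFIED WITH sha256_password BY 'UseAStrongPassword1!';
GRANT SELECT ON analytics.* TO dashboard_reader;

-- Ingest-only user for a data pipeline
CREATE USER IF NOT EXISTS pipeline_writer IDENTIFIED WITH sha256_password BY 'UseAStrongPassword2!';
GRANT INSERT ON analytics.events TO pipeline_writer;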

Performance Optimization

  • Proper Table Design: Choose the right table engine (MergeTree, SummingMergeTree, AggregatingMergeTree) based on your use case.
  • Partitioning Strategy: Partition tables by time periods (daily, monthly) for efficient data management and faster queries.
  • Sorting Key: Choose an optimal ORDER BY clause based on your most common query patterns.
  • Compression: ClickHouse automatically compresses data, but you can tune compression codecs for specific columns (see the sketch after this list).
  • Materialized Views: Pre-aggregate frequently queried data with materialized views to reduce query latency.
  • Sampling: Use sampling for approximate queries on large datasets (SAMPLE 0.1 for 10% sample).
  • Dictionaries: Use dictionaries for dimension tables and lookups to improve join performance.
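
A brief sketch of the codec and sampling bullets above. The table and column names are illustrative; note that SAMPLE queries require a SAMPLE BY clause whose expression is part of the sorting key:

-- Per-column compression codecs (Delta + ZSTD suit monotonic timestamps)
CREATE TABLE sensor_readings (
    timestamp DateTime CODEC(Delta, ZSTD(3)),
    sensor_id UInt32,
    value Float64 CODEC(Gorilla)
) ENGINE = MergeTree()
ORDER BY (sensor_id, timestamp);

-- A table declared with SAMPLE BY supports approximate queries
CREATE TABLE events_sampled (
    timestamp DateTime,
    user_id UInt64,
    event_name String
) ENGINE = MergeTree()
ORDER BY (event_name, intHash32(user_id))
SAMPLE BY intHash32(user_id);

-- Fast approximate distinct count over roughly 10% of the rows
SELECT uniq(user_id) FROM events_sampled SAMPLE 0.1;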

Data Management

  • Partitioning: Use time-based partitioning to efficiently manage data lifecycle:

    -- Drop old partitions to free space
    ALTER TABLE events DROP PARTITION '202301';
    -- Optimize tables regularly to merge parts
    OPTIMIZE TABLE events FINAL;
  • TTL (Time To Live): Automatically delete old data:

    CREATE TABLE events (
    timestamp DateTime,
    event_name String,
    user_id UInt64
    ) ENGINE = MergeTree()
    PARTITION BY toYYYYMM(timestamp)
    ORDER BY (timestamp, user_id)
    TTL timestamp + INTERVAL 90 DAY; -- Delete data older than 90 days
  • Backups: Implement a regular backup strategy (the Disk('backups', ...) target below must be defined in the server configuration):

    -- Backup table
    BACKUP TABLE events TO Disk('backups', 'events_backup.zip');
    -- Restore table
    RESTORE TABLE events FROM Disk('backups', 'events_backup.zip');

Resource Allocation

  • Memory: Allocate sufficient memory for your workload. ClickHouse can use most available RAM for caching.
  • CPU: More CPU cores improve query parallelization and overall throughput.
  • Storage: Use fast SSD storage for best performance. Plan for compression (typically 3-10x).
  • Connection Pooling: Use connection pooling in your applications to manage database connections efficiently.

Monitoring

Monitor your ClickHouse database for:

  • Query Performance: Track slow queries using system.query_log table
  • Resource Usage: Monitor CPU, memory, disk I/O, and network usage
  • Insert Rate: Monitor data ingestion rate and batch sizes
  • Merge Activity: Track background merge operations
  • Replication Lag: If using replication, monitor lag between replicas (a sample query appears after the examples below)
  • Disk Space: Monitor partition sizes and overall storage usage

-- View recent slow queries
SELECT
    query_duration_ms,
    query,
    user,
    read_rows,
    read_bytes
FROM system.query_log
WHERE event_time >= now() - INTERVAL 1 HOUR
    AND query_duration_ms > 1000
ORDER BY query_duration_ms DESC
LIMIT 10;

-- Check table sizes
SELECT
    database,
    table,
    formatReadableSize(sum(bytes)) AS size,
    sum(rows) AS rows,
    max(modification_time) AS latest_modification
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY sum(bytes) DESC;
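
For the replication-lag bullet above, the system.replicas table reports per-table lag; it is only populated for Replicated* engines:

-- Check replication health and lag
SELECT database, table, is_readonly, absolute_delay, queue_size
FROM system.replicas
WHERE absolute_delay > 0 OR queue_size > 100;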

Troubleshooting

Cannot Connect to Database

  • Verify that you’re using the correct connection string with port 8000
  • Ensure your environment variables (username/password) are set correctly in Klutch.sh
  • Check that the internal port is set to 8123 in your app configuration
  • Verify the ClickHouse container is running and healthy in the Klutch.sh dashboard (a quick check is shown below)
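
A quick liveness check: ClickHouse's HTTP /ping endpoint requires no authentication and returns "Ok." when the server is up (replace the hostname with your actual app URL):

Terminal window
# Should print "Ok." if ClickHouse is reachable
curl http://example-app.klutch.sh:8000/ping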

Database Not Persisting Data

  • Verify that the persistent volume is correctly attached at /var/lib/clickhouse (the query below can confirm the data path)
  • Check that the volume has sufficient space allocated
  • Ensure the ClickHouse container has proper permissions to write to the volume
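
You can confirm the data path and remaining space from inside ClickHouse; with the default layout, the default disk should point at /var/lib/clickhouse:

-- Data path and free space per disk
SELECT name, path, formatReadableSize(free_space) AS free, formatReadableSize(total_space) AS total
FROM system.disks;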

Performance Issues

  • Review your table schema and ensure proper ORDER BY clause for your query patterns
  • Consider creating materialized views for frequently aggregated queries
  • Check if queries are using indexes effectively (use EXPLAIN statement)
  • Monitor resource usage and consider increasing compute resources (CPU/memory)
  • Optimize partition size (avoid too many small partitions or too few large ones)

Out of Memory Errors

  • Increase the memory allocation for your ClickHouse container
  • Reduce max_memory_usage or max_bytes_before_external_group_by settings (illustrative values shown below)
  • Optimize queries to process less data (use better filters, sampling, or materialized views)
  • Add LIMIT clauses to queries that return too many rows
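
Illustrative session-level settings for the memory bullets above; the exact values should be tuned to your container's RAM:

-- Cap per-query memory at ~4 GB and spill large GROUP BYs to disk past ~2 GB
SET max_memory_usage = 4000000000;
SET max_bytes_before_external_group_by = 2000000000;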

Slow Queries

  • Use EXPLAIN to analyze query execution plan
  • Check if proper indexes and sorting keys are being used
  • Consider creating materialized views for complex aggregations
  • Optimize WHERE clause conditions to filter data early
  • Use the PREWHERE clause to filter before reading all columns (see the sketch below)
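
A short sketch of the EXPLAIN and PREWHERE bullets, reusing the events table from earlier examples:

-- Inspect which indexes and parts a query touches
EXPLAIN indexes = 1
SELECT count() FROM events WHERE date = today();

-- PREWHERE filters rows before the remaining columns are read
SELECT user_id, event_name
FROM events
PREWHERE event_name = 'page_view'
WHERE date >= today() - 7;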

Disk Space Issues

  • Drop old partitions that are no longer needed (the query below shows per-partition sizes)
  • Implement TTL policies to automatically delete old data
  • Run OPTIMIZE TABLE ... FINAL to merge small parts and reclaim space
  • Increase the persistent volume size in Klutch.sh if needed
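
To see which partitions are worth dropping, system.parts reports per-partition sizes (the events table from earlier examples is assumed):

-- Per-partition disk usage for the events table
SELECT partition, formatReadableSize(sum(bytes_on_disk)) AS size, sum(rows) AS rows
FROM system.parts
WHERE active AND table = 'events'
GROUP BY partition
ORDER BY partition;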

Additional Resources

  • ClickHouse Documentation: https://clickhouse.com/docs
  • ClickHouse GitHub Repository: https://github.com/ClickHouse/ClickHouse
  • Official Docker Image: https://hub.docker.com/r/clickhouse/clickhouse-server

Conclusion

Deploying ClickHouse to Klutch.sh with Docker provides a powerful, scalable analytics database solution with exceptional query performance and persistent storage. By following this guide, you’ve set up a production-ready ClickHouse database optimized for real-time analytics, with proper data persistence, security configurations, and connection capabilities. Your database is now ready to handle billions of rows and support high-performance analytical workloads for your applications, from real-time dashboards to complex data warehousing tasks.