Deploy a simple multi-node ClickHouse cluster with docker-compose in minutes.

Overview

Simple Multi-Node ClickHouse Cluster

I hate single-node ClickHouse "clusters" and manual installation. Why should we set all of that up by hand? It's just weird!

So this repo tries to solve these problems.

Note

  • This is a simplified model of a multi-node ClickHouse cluster; it lacks load balancer configuration, automated failover, and multi-shard config generation.
  • All ClickHouse data is persisted under the event-data directory. If you need to move ClickHouse somewhere else, just move the directory (the one containing docker-compose.yml) and run docker-compose up -d to fire it up again.
  • Host network mode is used to simplify the whole deploy procedure, so you might need to create additional firewall rules if you are running this on a publicly accessible machine (see the sketch below).
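
On Ubuntu with ufw, a minimal firewall sketch might look like the following; it assumes the default ClickHouse and ZooKeeper ports and the 192.168.33.0/24 subnet from the example later in this README, so adjust both to your environment:

# Allow cluster-internal traffic to ClickHouse (9000 native, 8123 HTTP, 9009 interserver)
ufw allow from 192.168.33.0/24 to any port 9000,8123,9009 proto tcp
# Allow cluster-internal traffic to ZooKeeper (2181 client, 2888/3888 quorum)
ufw allow from 192.168.33.0/24 to any port 2181,2888,3888 proto tcp
ufw enable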

Prerequisites

To use this, you need docker and docker-compose installed. The recommended OS is Ubuntu, and it's also recommended to install clickhouse-client on the machine, so on a typical Ubuntu server, running the following should be sufficient:

apt update
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh && rm -f get-docker.sh
apt install docker-compose clickhouse-client -y
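
To confirm everything is in place before proceeding, you can check the versions; these are standard flags for the respective CLIs:

docker --version
docker-compose --version
clickhouse-client --version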

Usage

  1. Clone this repo
  2. Edit the necessary server info in topo.yml
  3. Run python3 generate.py
  4. Your cluster info should be in the cluster directory now
  5. Sync those files to the corresponding nodes and run docker-compose up -d on them
  6. Your cluster is ready to go

If the steps above are still unclear, see the example below.

Example Usage

Edit information

I've cloned the repo and would like to set up a 3-node ClickHouse cluster with the following specs:

  • 3 replicas (one replica on each node)
  • 1 shard only

So I need to edit the topo.yml as follows:

global:
  clickhouse_image: "yandex/clickhouse-server:21.3.2.5"
  zookeeper_image: "bitnami/zookeeper:3.6.1"

zookeeper_servers:
  - host: 192.168.33.101
  - host: 192.168.33.102
  - host: 192.168.33.103

clickhouse_servers:
  - host: 192.168.33.101
  - host: 192.168.33.102
  - host: 192.168.33.103

clickhouse_topology:
  - clusters:
      - name: "novakwok_cluster"
        shards:
          - name: "novakwok_shard"
            servers:
              - host: 192.168.33.101
              - host: 192.168.33.102
              - host: 192.168.33.103

Generate Config

After running python3 generate.py, a structure is generated under the cluster directory that looks like this:

➜  simple-multinode-clickhouse-cluster git:(master) ✗ python3 generate.py 
Write clickhouse-config.xml to cluster/192.168.33.101/clickhouse-config.xml
Write clickhouse-config.xml to cluster/192.168.33.102/clickhouse-config.xml
Write clickhouse-config.xml to cluster/192.168.33.103/clickhouse-config.xml

➜  simple-multinode-clickhouse-cluster git:(master) ✗ tree cluster 
cluster
├── 192.168.33.101
│   ├── clickhouse-config.xml
│   ├── clickhouse-user-config.xml
│   └── docker-compose.yml
├── 192.168.33.102
│   ├── clickhouse-config.xml
│   ├── clickhouse-user-config.xml
│   └── docker-compose.yml
└── 192.168.33.103
    ├── clickhouse-config.xml
    ├── clickhouse-user-config.xml
    └── docker-compose.yml

3 directories, 9 files

Sync Config

Now we need to sync those files to the corresponding hosts (of course, you can use Ansible here):

rsync -aP ./cluster/192.168.33.101/ [email protected]:/root/ch/
rsync -aP ./cluster/192.168.33.102/ [email protected]:/root/ch/
rsync -aP ./cluster/192.168.33.103/ [email protected]:/root/ch/
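
If the host list grows, a simple shell loop over the same commands keeps this manageable; a minimal sketch using the example hosts:

# Sync each host's generated config to /root/ch/ on that host
for host in 192.168.33.101 192.168.33.102 192.168.33.103; do
  rsync -aP "./cluster/${host}/" "root@${host}:/root/ch/"
done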

Start Cluster

Now run docker-compose up -d in the /root/ch/ directory on every host.
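
If you have SSH access from your workstation, a loop sketch works here too (assuming root SSH logins, as in the rsync commands above):

# Start the containers on every host without logging in manually
for host in 192.168.33.101 192.168.33.102 192.168.33.103; do
  ssh "root@${host}" "cd /root/ch && docker-compose up -d"
done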

Validation

On 192.168.33.101, use clickhouse-client to connect to the local instance and check that the cluster is there.

[email protected]:~/ch# clickhouse-client 
ClickHouse client version 18.16.1.
Connecting to localhost:9000.
Connected to ClickHouse server version 21.3.2 revision 54447.

192-168-33-101 :) SELECT * FROM system.clusters;

SELECT *
FROM system.clusters 

┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name──────┬─host_address───┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ novakwok_cluster                             │         1 │            1 │           1 │ 192.168.33.101 │ 192.168.33.101 │ 9000 │        1 │ default │                  │            0 │                       0 │
│ novakwok_cluster                             │         1 │            1 │           2 │ 192.168.33.102 │ 192.168.33.102 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ novakwok_cluster                             │         1 │            1 │           3 │ 192.168.33.103 │ 192.168.33.103 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards                      │         1 │            1 │           1 │ 127.0.0.1      │ 127.0.0.1      │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards                      │         2 │            1 │           1 │ 127.0.0.2      │ 127.0.0.2      │ 9000 │        0 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards_internal_replication │         1 │            1 │           1 │ 127.0.0.1      │ 127.0.0.1      │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards_internal_replication │         2 │            1 │           1 │ 127.0.0.2      │ 127.0.0.2      │ 9000 │        0 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards_localhost            │         1 │            1 │           1 │ localhost      │ 127.0.0.1      │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards_localhost            │         2 │            1 │           1 │ localhost      │ 127.0.0.1      │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_shard_localhost                         │         1 │            1 │           1 │ localhost      │ 127.0.0.1      │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_shard_localhost_secure                  │         1 │            1 │           1 │ localhost      │ 127.0.0.1      │ 9440 │        0 │ default │                  │            0 │                       0 │
│ test_unavailable_shard                       │         1 │            1 │           1 │ localhost      │ 127.0.0.1      │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_unavailable_shard                       │         2 │            1 │           1 │ localhost      │ 127.0.0.1      │    1 │        0 │ default │                  │            0 │                       0 │
└──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴────────────────┴────────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘
↘ Progress: 13.00 rows, 1.58 KB (4.39 thousand rows/s., 532.47 KB/s.) 
13 rows in set. Elapsed: 0.003 sec. 
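
The output above also includes ClickHouse's built-in test_* clusters; to see only our cluster, you can filter system.clusters, for example non-interactively:

clickhouse-client --query "SELECT cluster, shard_num, replica_num, host_name, is_local FROM system.clusters WHERE cluster = 'novakwok_cluster'"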

Let's create a database on the cluster:

192-168-33-101 :) create database novakwok_test on cluster novakwok_cluster;

CREATE DATABASE novakwok_test ON CLUSTER novakwok_cluster

┌─host───────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 192.168.33.103 │ 9000 │      0 │       │                   2 │                0 │
│ 192.168.33.101 │ 9000 │      0 │       │                   1 │                0 │
│ 192.168.33.102 │ 9000 │      0 │       │                   0 │                0 │
└────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
← Progress: 3.00 rows, 174.00 B (16.07 rows/s., 931.99 B/s.)  99%
3 rows in set. Elapsed: 0.187 sec. 

192-168-33-101 :) show databases;

SHOW DATABASES

┌─name──────────┐
│ default       │
│ novakwok_test │
│ system        │
└───────────────┘
↑ Progress: 3.00 rows, 479.00 B (855.61 rows/s., 136.61 KB/s.) 
3 rows in set. Elapsed: 0.004 sec. 

Connect to another host to verify that it's really working.

[email protected]:~/ch# clickhouse-client -h 192.168.33.102
ClickHouse client version 18.16.1.
Connecting to 192.168.33.102:9000.
Connected to ClickHouse server version 21.3.2 revision 54447.

192-168-33-102 :) show databases;

SHOW DATABASES

┌─name──────────┐
│ default       │
│ novakwok_test │
│ system        │
└───────────────┘
↘ Progress: 3.00 rows, 479.00 B (623.17 rows/s., 99.50 KB/s.) 
3 rows in set. Elapsed: 0.005 sec. 
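
To exercise replication end to end, you could go one step further and create a replicated table. A minimal sketch: the events table and its columns are made up for illustration, and it assumes the generated configs define the standard {replica} macro (if they don't, substitute each host's own address as the second ReplicatedMergeTree argument):

# Create a replicated table on all three replicas at once
clickhouse-client --query "
CREATE TABLE novakwok_test.events ON CLUSTER novakwok_cluster
(
    id UInt64,
    ts DateTime
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/novakwok_shard/events', '{replica}')
ORDER BY id"

# Insert on one node...
clickhouse-client --host 192.168.33.101 --query "INSERT INTO novakwok_test.events VALUES (1, now())"

# ...and read it back from another
clickhouse-client --host 192.168.33.102 --query "SELECT * FROM novakwok_test.events"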

License

GPL
