This guide walks you through installing Docker, creating a Ferrosa cluster, and verifying it’s working — from a completely blank machine.
| Time estimate: About 5 minutes if Docker is already installed, 10-15 minutes if you’re starting from scratch. No prior database experience required. |
Prerequisites
You don’t need any database knowledge to follow this guide. Here’s what you do need:
- A computer running Linux, macOS, or Windows with WSL2
- At least 4 GB of RAM available (the three Ferrosa nodes plus the object store will use about 2-3 GB total)
- An internet connection to download Docker images on first run
- A terminal — Terminal.app on macOS, any terminal emulator on Linux, or Windows Terminal with WSL2
That’s it. Everything else gets installed in the steps below.
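Not sure how much RAM you have? On Linux or WSL2 a one-liner can read it from `/proc/meminfo` (macOS users can run `sysctl -n hw.memsize` instead):

```shell
# Total RAM visible to the OS (Linux/WSL2); you want at least 4 GB available
awk '/MemTotal/ {printf "%.1f GB total RAM\n", $2 / 1024 / 1024}' /proc/meminfo
```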
Install Docker
Docker packages your Ferrosa cluster into containers so you don’t have to install Rust, compile anything, or worry about dependencies. If you already have Docker installed, skip to Create the Cluster Files.
Step 1: Install Docker Desktop or Docker Engine
Follow the official Docker installation instructions for your operating system: Docker Desktop for macOS and Windows, or Docker Engine for Linux.
Step 2: Verify Docker is working
Open a terminal and run both of these commands. If they print version numbers, you’re ready to go.
docker --version
# Docker version 27.x.x, build xxxxxxx
docker compose version
# Docker Compose version v2.x.x
| On Linux, if you see a "permission denied" error, you may need to add your user to the docker group: sudo usermod -aG docker $USER, then log out and back in. |
Create the Cluster Files
You only need one file: a Docker Compose configuration that defines all three Ferrosa nodes and a local S3-compatible object store. Let’s create a working directory first.
Step 3: Create a project directory
Pick any location you like. We’ll use your home directory.
mkdir ferrosa-cluster && cd ferrosa-cluster
Step 4: Create the docker-compose.yml file
Copy the entire block below into a new file called docker-compose.yml. You can use any text editor — VS Code, nano, vim, or even Notepad.
# docker-compose.yml — 3-node Ferrosa cluster with S3-compatible storage
# Run with: docker compose up -d
services:
  # ── Volume permissions (RustFS runs as UID 10001) ──
  volume-permissions:
    image: alpine
    volumes:
      - rustfs-data:/data
    command: >
      sh -c "chown -R 10001:10001 /data && echo 'permissions fixed'"
    restart: "no"

  # ── S3-compatible object store (RustFS) ──
  rustfs:
    image: rustfs/rustfs:latest
    depends_on:
      volume-permissions:
        condition: service_completed_successfully
    environment:
      RUSTFS_VOLUMES: /data/rustfs0
      RUSTFS_ADDRESS: "0.0.0.0:9000"
      RUSTFS_CONSOLE_ADDRESS: "0.0.0.0:9001"
      RUSTFS_CONSOLE_ENABLE: "true"
      RUSTFS_ACCESS_KEY: rustfsadmin
      RUSTFS_SECRET_KEY: rustfsadmin # pragma: allowlist secret
    volumes:
      - rustfs-data:/data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://127.0.0.1:9000/health"]
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 10s

  # ── Create the S3 bucket on first run ──
  rustfs-init:
    image: minio/mc
    depends_on:
      rustfs:
        condition: service_healthy
    entrypoint: >
      sh -c "mc alias set local http://rustfs:9000 rustfsadmin rustfsadmin &&
      mc mb local/ferrosa --ignore-existing &&
      echo 'bucket ferrosa ready'"
    restart: "no"

  # ── Node 1 (seed node) ──
  node1:
    build:
      context: ../../
      dockerfile: Dockerfile
    depends_on:
      rustfs-init:
        condition: service_completed_successfully
    ports:
      - "9042:9042" # CQL
      - "9090:9090" # Web console
      - "7474:7474" # Graph
    environment:
      FERROSA_HOST_ID: 11111111-1111-1111-1111-111111111111
      FERROSA_DATA_DIR: /var/lib/ferrosa
      FERROSA_AUTH_DISABLED: "true"
      FERROSA_CQL_BIND: 0.0.0.0:9042
      FERROSA_WEB_BIND: 0.0.0.0:9090
      FERROSA_CLUSTER_NAME: ferrosa-tutorial
      FERROSA_CLUSTER_MODE: standalone
      FERROSA_GRAPH_ENABLED: "true"
      FERROSA_S3_ENDPOINT: http://rustfs:9000
      FERROSA_S3_BUCKET: ferrosa
      FERROSA_S3_REGION: us-east-1
      FERROSA_S3_ACCESS_KEY_ID: rustfsadmin
      FERROSA_S3_SECRET_ACCESS_KEY: rustfsadmin # pragma: allowlist secret
      FERROSA_S3_ALLOW_HTTP: "true"
      FERROSA_S3_PREFIX: node1
    volumes:
      - node1-data:/var/lib/ferrosa
    healthcheck:
      test: ["CMD-SHELL", "bash -c '</dev/tcp/127.0.0.1/9042' || exit 1"]
      interval: 5s
      timeout: 5s
      retries: 30
      start_period: 15s

  # ── Node 2 ──
  node2:
    build:
      context: ../../
      dockerfile: Dockerfile
    depends_on:
      node1:
        condition: service_healthy
    ports:
      - "9043:9042" # CQL
      - "9091:9090" # Web console
      - "7475:7474" # Graph
    environment:
      FERROSA_HOST_ID: 22222222-2222-2222-2222-222222222222
      FERROSA_DATA_DIR: /var/lib/ferrosa
      FERROSA_AUTH_DISABLED: "true"
      FERROSA_CQL_BIND: 0.0.0.0:9042
      FERROSA_WEB_BIND: 0.0.0.0:9090
      FERROSA_CLUSTER_NAME: ferrosa-tutorial
      FERROSA_CLUSTER_MODE: pair
      FERROSA_SEED: node1:7000
      FERROSA_GRAPH_ENABLED: "true"
      FERROSA_S3_ENDPOINT: http://rustfs:9000
      FERROSA_S3_BUCKET: ferrosa
      FERROSA_S3_REGION: us-east-1
      FERROSA_S3_ACCESS_KEY_ID: rustfsadmin
      FERROSA_S3_SECRET_ACCESS_KEY: rustfsadmin # pragma: allowlist secret
      FERROSA_S3_ALLOW_HTTP: "true"
      FERROSA_S3_PREFIX: node2
    volumes:
      - node2-data:/var/lib/ferrosa
    healthcheck:
      test: ["CMD-SHELL", "bash -c '</dev/tcp/127.0.0.1/9042' || exit 1"]
      interval: 5s
      timeout: 5s
      retries: 30
      start_period: 15s

  # ── Node 3 ──
  node3:
    build:
      context: ../../
      dockerfile: Dockerfile
    depends_on:
      node1:
        condition: service_healthy
      node2:
        condition: service_healthy
    ports:
      - "9044:9042" # CQL
      - "9092:9090" # Web console
      - "7476:7474" # Graph
    environment:
      FERROSA_HOST_ID: 33333333-3333-3333-3333-333333333333
      FERROSA_DATA_DIR: /var/lib/ferrosa
      FERROSA_AUTH_DISABLED: "true"
      FERROSA_CQL_BIND: 0.0.0.0:9042
      FERROSA_WEB_BIND: 0.0.0.0:9090
      FERROSA_CLUSTER_NAME: ferrosa-tutorial
      FERROSA_CLUSTER_MODE: cluster
      FERROSA_SEED: node1:7000
      FERROSA_GRAPH_ENABLED: "true"
      FERROSA_S3_ENDPOINT: http://rustfs:9000
      FERROSA_S3_BUCKET: ferrosa
      FERROSA_S3_REGION: us-east-1
      FERROSA_S3_ACCESS_KEY_ID: rustfsadmin
      FERROSA_S3_SECRET_ACCESS_KEY: rustfsadmin # pragma: allowlist secret
      FERROSA_S3_ALLOW_HTTP: "true"
      FERROSA_S3_PREFIX: node3
    volumes:
      - node3-data:/var/lib/ferrosa
    healthcheck:
      test: ["CMD-SHELL", "bash -c '</dev/tcp/127.0.0.1/9042' || exit 1"]
      interval: 5s
      timeout: 5s
      retries: 30
      start_period: 15s

volumes:
  rustfs-data:
  node1-data:
  node2-data:
  node3-data:
Let’s break down what this file does:
- rustfs — A local S3-compatible object store. Ferrosa uses S3 as its durable storage backend, and RustFS gives you a lightweight local version for development.
- rustfs-init — A one-time helper that creates the ferrosa storage bucket. It runs once and exits.
- node1 — The seed node. It starts first in standalone mode and acts as the initial contact point for the other nodes.
- node2 — Joins node1 in pair mode, forming a two-node high-availability pair.
- node3 — The third node triggers the transition to full cluster mode with Raft consensus.
| About the host IDs: Each Ferrosa node needs a unique UUID. The simple values used here (11111111-…, 22222222-…, 33333333-…) are just for the tutorial. In production, you would use randomly generated UUIDs. |
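For production host IDs, you can generate random version-4 UUIDs straight from the shell. This sketch uses Python's standard-library uuid module, but any UUID generator works:

```shell
# Print one fresh random UUID per node; paste these into FERROSA_HOST_ID
for node in 1 2 3; do
  python3 -c "import uuid; print(uuid.uuid4())"
done
```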
Start the Cluster
Step 5: Launch all services
From your ferrosa-cluster directory, run:
docker compose up -d
The -d flag runs everything in the background. Docker will download the images on the first run, which takes a minute or two depending on your internet connection. After that, subsequent starts take just a few seconds.
Step 6: Watch the startup logs
Follow the logs to see the cluster come online:
docker compose logs -f node1 node2 node3
Here’s what you should see, in roughly this order:
- Node 1 starts in standalone mode — you’ll see it bind to port 9042 and report "CQL server listening"
- Node 2 connects to node 1 — the logs will show a peer handshake and pair formation: "Pair mode established with node1"
- Node 3 triggers the cluster transition — you’ll see Raft leader election messages and "Cluster mode active, 3 nodes in ring"
- All three nodes report healthy — heartbeat messages confirm the cluster is stable
Press Ctrl+C to stop following the logs. The cluster keeps running in the background.
| How long does startup take? Node 1 is usually ready in 2-3 seconds. Node 2 joins within 5 seconds after node 1 is healthy. Node 3 joins and the Raft election completes within another 5-10 seconds. Total: about 15-20 seconds. |
Verify the Cluster
Let’s make sure all three nodes are up and talking to each other.
Step 7: Check cluster status via the web API
Ferrosa’s web console exposes a REST API. Query it to see the cluster topology:
curl http://localhost:9090/api/cluster/status | python3 -m json.tool
You should see a JSON response listing all three nodes with their status, host IDs, and token assignments. Every node should show "status": "UP".
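If you want a scriptable pass/fail check instead of eyeballing JSON, a few lines of Python can filter the response. The sketch below feeds a hard-coded sample document so it runs anywhere; against a live cluster you would pipe curl -s http://localhost:9090/api/cluster/status into it instead. The nodes/status field names are assumptions based on the response shape described above.

```shell
# Verify every node in a cluster-status document reports "UP".
# A sample response stands in for the live API here; swap in the curl
# pipe once your cluster is running.
cat <<'EOF' | python3 -c '
import json, sys
doc = json.load(sys.stdin)
down = [n["host_id"] for n in doc["nodes"] if n["status"] != "UP"]
print("all UP" if not down else "DOWN: " + ", ".join(down))
'
{"cluster_name": "ferrosa-tutorial",
 "nodes": [{"host_id": "11111111-1111-1111-1111-111111111111", "status": "UP"},
           {"host_id": "22222222-2222-2222-2222-222222222222", "status": "UP"},
           {"host_id": "33333333-3333-3333-3333-333333333333", "status": "UP"}]}
EOF
```

With all three sample nodes UP, this prints `all UP`; a check like this drops neatly into a CI smoke test.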
Step 8: Confirm all containers are running
A quick check that nothing crashed:
docker compose ps
You should see six services: rustfs running; volume-permissions and rustfs-init exited (that’s normal: each runs once and stops); and node1, node2, node3 all showing "healthy". Depending on your Compose version, exited one-shot services may be hidden; run docker compose ps -a to list them.
| Service | Host Port | Purpose |
|---|---|---|
| node1 | 9042 | CQL (primary contact point) |
| node1 | 9090 | Web console & API |
| node1 | 7474 | Graph endpoint |
| node2 | 9043 | CQL |
| node2 | 9091 | Web console & API |
| node2 | 7475 | Graph endpoint |
| node3 | 9044 | CQL |
| node3 | 9092 | Web console & API |
| node3 | 7476 | Graph endpoint |
| rustfs | 9000 | S3 API |
| rustfs | 9001 | RustFS web console |
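With those port mappings in hand, one loop can probe every node's web API. This assumes the /api/cluster/status path from Step 7; a dead or unmapped port shows up as HTTP 000 rather than 200:

```shell
# Probe each node's web console port; HTTP 200 means that node's API is up
for port in 9090 9091 9092; do
  curl -s -o /dev/null -w "port $port: HTTP %{http_code}\n" \
    "http://localhost:$port/api/cluster/status" || true
done
```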
Connect with cqlsh
Now for the fun part: talking to your cluster. cqlsh is the standard CQL shell that works with both Apache Cassandra and Ferrosa.
Step 9: Install cqlsh
cqlsh is a Python tool. Install it with pip:
pip install cqlsh
Step 10: Connect to node 1
Open a CQL session on the first node:
cqlsh localhost 9042
You should see a welcome banner and a cqlsh> prompt. You’re now connected to your Ferrosa cluster. Let’s create some data to confirm everything works end to end.
Step 11: Create a keyspace and table
Type (or paste) each command at the cqlsh> prompt:
-- Verify cluster is responsive
SELECT cluster_name, data_center FROM system.local;
SELECT * FROM system.peers;
-- Create test keyspace and verify replication
CREATE KEYSPACE IF NOT EXISTS setup_test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
CREATE TABLE IF NOT EXISTS setup_test.hello (id int PRIMARY KEY, message text);
INSERT INTO setup_test.hello (id, message) VALUES (1, 'Hello from Ferrosa!');
SELECT * FROM setup_test.hello;
You should see your row:
id | message
----+---------------------
1 | Hello from Ferrosa!
(1 rows)
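If you prefer not to paste statements interactively, you can save the same setup to a file and replay it with cqlsh's -f flag (the file name here is just an example):

```shell
# Write the tutorial statements to a file, then replay them with:
#   cqlsh localhost 9042 -f setup_test.cql   (requires the running cluster)
cat > setup_test.cql <<'EOF'
CREATE KEYSPACE IF NOT EXISTS setup_test
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
CREATE TABLE IF NOT EXISTS setup_test.hello (id int PRIMARY KEY, message text);
INSERT INTO setup_test.hello (id, message) VALUES (1, 'Hello from Ferrosa!');
SELECT * FROM setup_test.hello;
EOF
grep -c ';' setup_test.cql   # count the statements written
```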
Step 12: Verify replication across nodes
Open a second terminal and connect to node 2 on port 9043:
cqlsh localhost 9043
Now query the same table. Because you set replication_factor: 3, every node has a copy of the data:
SELECT * FROM setup_test.hello;
You should see the exact same row. Your data is replicated across all three nodes.
| What just happened? When you inserted data on node 1, Ferrosa’s coordinator forwarded the write to all three replicas. The write succeeded after a quorum (2 out of 3) acknowledged it. When you queried node 2, it served the data from its own local copy. This is distributed database replication in action. |
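The quorum size in that callout is plain integer arithmetic: a quorum is the smallest majority of replicas, floor(RF / 2) + 1, which is why a write with replication factor 3 needs only 2 acknowledgments:

```shell
# Smallest majority of replicas for a few replication factors
for rf in 1 2 3 5; do
  echo "RF=$rf -> quorum=$(( rf / 2 + 1 ))"
done
```

Note that RF 2 yields a quorum of 2, so a two-node pair cannot lose a node and still satisfy quorum writes; three replicas is the first size that tolerates a failure.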
Next Steps
Your cluster is up and running. Now pick a tutorial that matches what you want to build:
- IoT Sensor Data — Ingest millions of sensor readings with time-series tables.
- Real-Time Analytics — Aggregate financial market data for dashboards.
- E-Commerce — Product catalog, shopping cart, and order processing.
- Messaging & Chat — Store and retrieve conversation history at scale.
- Fraud Detection — Score transactions in real time with pattern matching.
- Graph Joins & Traversals — Relationship queries using Ferrosa’s built-in graph engine.
| Leave this cluster running as you work through the tutorials. Each tutorial creates its own keyspace so they don’t interfere with each other. |
Useful commands for managing your cluster
# Stop the cluster (preserves data)
docker compose stop
# Start it again
docker compose start
# Stop and remove everything (deletes all data)
docker compose down -v
# View logs for a specific node
docker compose logs -f node2
# Restart a single node
docker compose restart node3