Wiber is a fully functional, multi-node, fault‑tolerant messaging system with a FastAPI gateway and a simple web UI. It runs locally without Kafka or Docker and includes cluster lifecycle controls (start/kill/restart), message history, and real‑time streaming over WebSockets.
- Fault tolerance: failure detection, replication, failover, recovery.
- Replication & consistency: leader-based or quorum, dedup, fast reads.
- Time sync: physical clock sync, Lamport clocks, bounded reordering.
- Consensus: leader election and log replication (e.g., Raft).
- Integration: coherent, testable system across modules.
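To make the time-sync item concrete, a Lamport logical clock can be sketched in a few lines (an illustrative sketch only; the class and method names below are not taken from the codebase):

```python
class LamportClock:
    """Minimal Lamport logical clock (illustrative; not the project's actual class)."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event: advance the clock by one.
        self.time += 1
        return self.time

    def send(self):
        # Stamp an outgoing message with the current logical time.
        return self.tick()

    def receive(self, remote_time):
        # Merge rule: take the max of local and remote stamps, then tick.
        self.time = max(self.time, remote_time) + 1
        return self.time
```

Timestamps produced this way give a total order that respects causality, which is what makes bounded reordering of messages possible.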
- Multi-node cluster with start/kill/restart controls
- FastAPI gateway with REST + WebSocket endpoints
- Simple web UI to manage nodes and view logs live
- Message publish, subscribe, and history retrieval
- Runs locally; no Kafka or Docker required
- Create a virtual environment (optional but recommended):

  ```shell
  python3 -m venv .venv
  ```

- Activate it:

  ```shell
  source .venv/bin/activate
  ```

  Windows PowerShell:

  ```shell
  .venv\Scripts\Activate.ps1
  ```

- Install dependencies:

  ```shell
  pip install -r docs/requirements.txt
  ```

- Launch the FastAPI gateway + web UI:

  ```shell
  python3 scripts/dm_gateway.py
  ```

- Open the UI in your browser (default port 8080): http://0.0.0.0:8080/

  For Windows: http://localhost:8081/
From the UI you can:
- Click Start the cluster to spawn all nodes defined in `config/cluster.yaml`.
- Use the per-node Kill / Restart buttons to manage individual nodes.
- Watch each node’s console output live in the three terminals.
Behind the scenes the gateway shells out to scripts/run_node.py for each node and tracks their subprocesses so the UI always knows which nodes are running.
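That bookkeeping can be sketched roughly as follows (a hypothetical `NodeSupervisor` helper; the gateway's actual code may differ):

```python
import subprocess
import sys

class NodeSupervisor:
    """Tracks node subprocesses so the UI can report which nodes are running.
    Illustrative sketch only; the gateway's real implementation may differ."""

    def __init__(self):
        self.procs = {}  # node_id -> subprocess.Popen

    def start(self, node_id, cmd=None):
        # Spawn a node process; by default shell out to scripts/run_node.py.
        if self.running(node_id):
            return
        if cmd is None:
            cmd = [sys.executable, "scripts/run_node.py", "--id", node_id]
        self.procs[node_id] = subprocess.Popen(cmd)

    def kill(self, node_id):
        # Terminate the tracked process and reap it.
        proc = self.procs.pop(node_id, None)
        if proc is not None:
            proc.terminate()
            proc.wait()

    def running(self, node_id):
        # A node counts as running if its process exists and has not exited.
        proc = self.procs.get(node_id)
        return proc is not None and proc.poll() is None
```

Keeping the `Popen` handle lets `running()` distinguish a live node from one that has exited, which is exactly what the UI needs to render accurate per-node state.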
```shell
# Replace n1 with the node id from config/cluster.yaml
python3 scripts/run_node.py --id n1

# Optional flags
# --config path/to/cluster.yaml (defaults to config/cluster.yaml)
# --data-root /custom/data/dir (defaults to ./.data)
```

Each node writes its console output to `.data/<node_id>/logs/console.log`, which is the same file the gateway terminals stream in real time.
Start the whole cluster with one command:

```shell
python3 scripts/run_cluster.py
```

Then, view each node's console output in separate terminals:
```shell
# Terminal A
tail -f ./.data/n1/logs/console.log

# Terminal B
tail -f ./.data/n2/logs/console.log

# Terminal C
tail -f ./.data/n3/logs/console.log
```
- Member 1 - IT23632332 (Suhasna Ranatunga): Fault tolerance (node lifecycle, failover, recovery)
- Member 2 - IT23585284 (Praveen Hewage): Replication & consistency (`replication/`, dedup, commit flow)
- Member 3 - IT23631724 (Luchitha Jayawardena): Time sync (`time/`, reorder strategy)
- Member 4 - IT23651388 (Sanuk Ratnayake): Consensus (`cluster/raft.py`, elections, AppendEntries)
- Member 5 - IT23750760 (Dulain Gunawardhana): Integration and testing
Coordinate interfaces via:
- Append entries API (leader → followers)
- Commit index propagation
- Client PUB/SUB/HISTORY semantics over committed entries
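The way commit index propagation gates delivery can be pictured with a minimal sketch (illustrative only; the project's `cluster/raft.py` will differ in detail):

```python
class FollowerLog:
    """Sketch of a follower applying only committed entries.
    Illustrative only; not the project's actual Raft implementation."""

    def __init__(self):
        self.entries = []      # replicated but possibly uncommitted entries
        self.commit_index = 0  # number of entries known to be committed
        self.applied = []      # entries delivered to PUB/SUB/HISTORY

    def append_entries(self, entries, leader_commit):
        # Leader -> follower: replicate new entries, then advance the
        # commit index to what the leader reports, bounded by our own log.
        self.entries.extend(entries)
        self.commit_index = min(leader_commit, len(self.entries))
        # Only committed entries become visible to clients.
        while len(self.applied) < self.commit_index:
            self.applied.append(self.entries[len(self.applied)])
```

The key invariant is that subscribers and history reads only ever see `applied`, so an uncommitted entry can never be observed and later rolled back.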
- Docker and Kafka assets removed.
- Keep `scenario3.docx` for requirements and evaluation.
- The gateway listens on `0.0.0.0:8080` by default; override with `GATEWAY_PORT`.
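For illustration, the gateway presumably resolves the port along these lines (a sketch, not the actual gateway code; the `gateway_port` helper is hypothetical):

```python
import os

def gateway_port(default=8080):
    """Read GATEWAY_PORT from the environment, falling back to the default.
    Hypothetical helper for illustration; not the gateway's actual code."""
    return int(os.environ.get("GATEWAY_PORT", default))
```

So running `GATEWAY_PORT=9090 python3 scripts/dm_gateway.py` would serve the UI on port 9090 instead.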

