An IoT streaming platform for simulating large fleets of devices (10k+ concurrent connections) and visualizing the resulting data flow with dashboards.
The earlier version, logcore-v1, ran on Docker Swarm across local virtual machines. This version adds support for:
- An MQTT cluster (VerneMQ) to handle far higher traffic and more concurrent connections
- Load testing with K6
- Migration from a local homelab to GCP's GKE (Google Kubernetes Engine) for better scaling
- TimescaleDB (used earlier) replaced with Bigtable
- Support for running locally as well (with K3s and local GCP emulators)
- Helm chart configs for easy Kubernetes setup
- K6: mock devices for load testing (sending MQTT messages to the message broker, VerneMQ)
- The Kubernetes cluster has the following namespaces:
  - observe: observability stack, with Prometheus for metrics, Alloy & Loki for logs, and Grafana for visualization
  - verne: MQTT broker pod, plus a listener that forwards messages from MQTT -> Pub/Sub
  - Pub/Sub: holds messages in the source/ topic
  - Dataflow: streaming ETL pipeline that transforms and pushes data to the data stores
  - Bigtable: holds the full data; highly scalable, no schema required
  - Firestore: device shadows (list of active devices)
- K3s used in place of K8s (Kubernetes); it's more lightweight and runs on local nodes (even low-power devices like a Raspberry Pi)
- Dataflow replaced with the DirectRunner from the Apache Beam SDK (internally the same pipeline code)
- GCP emulators (beta) for Pub/Sub & Bigtable (they expose the same APIs as the hosted services)
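To make the MQTT -> Pub/Sub hop in the pipeline above concrete, here is a minimal, dependency-free sketch of what the verne-namespace listener conceptually does. The payload fields and the `devices/<id>/telemetry` topic layout are assumptions for illustration, not the project's actual schema:

```python
import json
import time

def make_telemetry(device_id: str) -> bytes:
    """Build a mock telemetry payload like the ones the simulated
    devices publish. Field names here are illustrative only."""
    payload = {
        "device_id": device_id,
        "ts": int(time.time() * 1000),   # epoch millis
        "temperature_c": 21.5,
        "humidity_pct": 40.2,
    }
    return json.dumps(payload).encode("utf-8")

def mqtt_to_pubsub(topic: str, payload: bytes) -> dict:
    """Reshape an MQTT message into a Pub/Sub-style publish request
    (data bytes + routing attributes). The topic layout
    'devices/<id>/telemetry' is an assumption."""
    parts = topic.split("/")
    device_id = parts[1] if len(parts) >= 2 else "unknown"
    return {
        "data": payload,              # Pub/Sub message body (bytes)
        "attributes": {               # metadata the downstream pipeline can route on
            "device_id": device_id,
            "mqtt_topic": topic,
        },
    }

msg = mqtt_to_pubsub("devices/dev-42/telemetry", make_telemetry("dev-42"))
print(msg["attributes"]["device_id"])   # → dev-42
```

In the real cluster, the listener would use an MQTT client and the Pub/Sub client library instead of returning a dict; this only shows the shape of the transformation.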
Tools needed to run this locally:
- K3s
- Kubectl
- Taskfile
- WSL (Windows only)
- Helm
- Docker (with Compose support; the Desktop version is recommended, though the CLI works too as a separate install)
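For reference, the GCP emulators mentioned above are started via gcloud's standard beta emulator commands. A minimal sketch, assuming example ports and a placeholder project ID (the repo's setup script may already handle this for you):

```shell
# Start the Pub/Sub emulator in the background (port is an example)
gcloud beta emulators pubsub start --project=logcore-local --host-port=localhost:8085 &

# Export PUBSUB_EMULATOR_HOST so client libraries talk to the emulator
$(gcloud beta emulators pubsub env-init)

# Same pattern for Bigtable
gcloud beta emulators bigtable start --host-port=localhost:8086 &
$(gcloud beta emulators bigtable env-init)   # exports BIGTABLE_EMULATOR_HOST
```

With these environment variables set, the official client libraries transparently connect to the emulators instead of the hosted GCP services.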
- Clone the repo:

```
git clone https://github.com/ShubhamTiwary914/logcore-v2.git
cd logcore-v2
```

- Run the setup script:

```
task _run-local
```

Preview of setup logs:
```
task: [_run-local] bash -c "./run-local.sh"
[INFO] Setting up Kubernetes namespaces...
[INFO] Creating namespace 'verne'...
namespace/verne created
[INFO] Creating namespace 'observe'...
namespace/observe created
.
.
[INFO] Setup complete!
[INFO] Grafana Dashboard URL: http://10.43.120.70:3045
[INFO] Username: admin
[INFO] Password: *******
```

Open this URL in a browser to view the Grafana dashboard with the credentials above.
Note

The Grafana dashboard may take a few seconds to come up; you can check whether it's ready via `kubectl get pods -n observe | grep grafana`, which shows:

```
grafana-66bd6889cf-b4f7t   1/1   Running   0   2m10s
```

The dashboard section shows custom metrics; previews:

And in case you forget or clear the logs, you can get back the Grafana address & credentials with:

```
task observe-grafana-access
```

Finally, let's use K6 to run a load test with virtual users:

```
task _run-k6-test -- <duration(seconds)> <users>
```

Example:

```
task _run-k6-test -- 60 100   # runs 100 virtual users for 60 seconds
```

Logs for the example:
```
█ TOTAL RESULTS

CUSTOM
mqtt_calls....................: 6771     112.778573/s
mqtt_concurrent_connections...: 111      1.848829/s
mqtt_message_duration.........: avg=0 min=0 med=0 max=0 p(90)=0 p(95)=0
mqtt_messages_sent............: 6549     109.080915/s

EXECUTION
iteration_duration............: avg=54.07s min=1.79ms med=1m0s max=1m0s p(90)=1m0s p(95)=1m0s
iterations....................: 111      1.848829/s
vus...........................: 100      min=100 max=100
vus_max.......................: 100      min=100 max=100

NETWORK
data_received.................: 0 B      0 B/s
data_sent.....................: 1.2 MB   20 kB/s
```

Upcoming:
- Firestore support for device shadows
- Migrating the cluster from K3s and GCP emulators to GCP
- Benchmarking comparison between local nodes & GKE after final deployments
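As a side note, the virtual-user model behind `task _run-k6-test` can be approximated in plain Python for a quick smoke test without k6. This is only a sketch: the publish function here is a stub (a real run would publish over MQTT, e.g. via paho-mqtt), and the counts mirror k6's `mqtt_messages_sent` counter conceptually:

```python
import threading

def run_virtual_users(n_users: int, msgs_per_user: int, publish) -> int:
    """Spawn n_users threads, each calling publish() msgs_per_user times,
    mimicking k6's VUs. Returns the total number of messages sent
    (the analogue of the mqtt_messages_sent counter)."""
    sent = 0
    lock = threading.Lock()

    def user(device_id: str) -> None:
        nonlocal sent
        for _ in range(msgs_per_user):
            publish(device_id, b'{"ping": 1}')   # stand-in for an MQTT publish
            with lock:
                sent += 1

    threads = [threading.Thread(target=user, args=(f"dev-{i}",))
               for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sent

# Stubbed broker: just count calls instead of hitting VerneMQ.
total = run_virtual_users(n_users=100, msgs_per_user=10,
                          publish=lambda device, payload: None)
print(total)   # → 1000
```

k6 remains the right tool for real load numbers (connection handling, latency percentiles); this sketch only illustrates the fan-out pattern.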