A progressive example project demonstrating the evolution from monolithic architecture to event-driven microservices using Apache Kafka.
This repository contains three implementations of the same e-commerce system, showcasing architectural evolution:
| Version | Architecture | Description |
|---|---|---|
| v1 | Monolithic | Single backend handling all operations sequentially |
| v2 | Microservices + Single Kafka | Services communicate via Kafka message broker |
| v3 | Microservices + Kafka Cluster | Production-ready setup with 3 Kafka brokers |
Version 1 (monolithic) flow:

```
Client → Backend → [Payment → Order → Email → Analytics] → Response
```
All operations run sequentially (~10 seconds total response time).
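A minimal sketch of what this sequential handler could look like in Express (the route path, helper names, and simulated delays below are illustrative assumptions, not the repository's exact code):

```js
const express = require("express");
const { randomUUID } = require("crypto");

const app = express();
app.use(express.json());

// Simulated work: each step blocks the request until it finishes.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const processPayment = async (cart) => { await sleep(3000); return "success"; };
const createOrder = async (cart) => { await sleep(3000); return randomUUID(); };
const sendEmail = async (orderId) => { await sleep(2000); return "success"; };
const recordAnalytics = async (event) => { await sleep(2000); };

app.post("/payment", async (req, res) => {
  const { cart } = req.body;

  // Everything runs in sequence, so the client waits for the sum of all steps (~10s).
  const paymentResult = await processPayment(cart);
  const orderId = await createOrder(cart);
  const emailResult = await sendEmail(orderId);
  await recordAnalytics({ orderId, cart });

  res.json({ orderId, paymentResult, emailResult });
});

app.listen(8000, () => console.log("Monolithic backend listening on :8000"));
```

Because every `await` completes before the response is sent, the client's wait time is the sum of all the steps.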
Versions 2 and 3 (event-driven) flow:

```
Client → Payment Service (HTTP)
          ↓
Kafka: "payment-successful"
          ↓
Order Service
          ↓
Kafka: "order-successful"
          ↓
Email Service
          ↓
Kafka: "email-successful"
          ↓
Analytics Service (monitors all events)
```
Async processing - the client receives a response in ~3 seconds while the background services continue processing independently.
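A hedged sketch of the Payment Service side of this hand-off with KafkaJS (the broker address, route path, and payload shape are assumptions):

```js
const express = require("express");
const { Kafka } = require("kafkajs");
const { randomUUID } = require("crypto");

const kafka = new Kafka({ clientId: "payment-service", brokers: ["localhost:9094"] });
const producer = kafka.producer();

const app = express();
app.use(express.json());

app.post("/payment", async (req, res) => {
  const { cart } = req.body;

  // Do only the payment work here, then hand off via an event.
  const paymentId = randomUUID();
  await producer.send({
    topic: "payment-successful",
    messages: [{ value: JSON.stringify({ paymentId, cart }) }],
  });

  // Respond immediately; order creation, email, and analytics happen downstream.
  res.json({ paymentId, status: "Payment successful" });
});

const start = async () => {
  await producer.connect();
  app.listen(8000, () => console.log("Payment service listening on :8000"));
};

start();
```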
| Service | Port | Responsibility |
|---|---|---|
| Payment Service | 8000 | Processes payments, publishes payment-successful events |
| Order Service | - | Consumes payment events, creates orders, publishes order-successful |
| Email Service | - | Consumes order events, sends notifications, publishes email-successful |
| Analytics Service | - | Consumes all events for business metrics |
| Client | 3000 | Next.js shopping cart UI |
| Kafka UI | 8080 | Web interface for Kafka management |
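As an example of the consume-then-produce pattern in the middle of the chain, the Order Service might look roughly like this (group id, payload shape, and the simulated order creation are assumptions):

```js
const { Kafka } = require("kafkajs");
const { randomUUID } = require("crypto");

const kafka = new Kafka({ clientId: "order-service", brokers: ["localhost:9094"] });
const consumer = kafka.consumer({ groupId: "order-service" });
const producer = kafka.producer();

const run = async () => {
  await consumer.connect();
  await producer.connect();
  await consumer.subscribe({ topic: "payment-successful", fromBeginning: true });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const payment = JSON.parse(message.value.toString());

      // Simulate persisting the order, then emit the next event in the chain.
      const orderId = randomUUID();
      await producer.send({
        topic: "order-successful",
        messages: [{ value: JSON.stringify({ orderId, payment }) }],
      });
    },
  });
};

run();
```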
- Node.js - Runtime environment
- Express.js - HTTP API framework
- KafkaJS - Kafka client library
- Next.js 15 - React framework
- React 19 - UI library
- React Query - Server state management
- TailwindCSS - Styling
- Axios - HTTP client
- Apache Kafka - Event streaming platform
- Docker & Docker Compose - Container orchestration
- Kafka UI - Kafka monitoring dashboard
| Topic | Producer | Consumers |
|---|---|---|
| payment-successful | Payment Service | Order Service, Analytics |
| order-successful | Order Service | Email Service, Analytics |
| email-successful | Email Service | Analytics |
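The quick start below runs admin.js to create these topics up front; a minimal sketch of such a script with the KafkaJS admin client (partition and replication settings are assumptions):

```js
const { Kafka } = require("kafkajs");

const kafka = new Kafka({ clientId: "admin", brokers: ["localhost:9094"] });
const admin = kafka.admin();

const run = async () => {
  await admin.connect();

  // Create the three pipeline topics if they don't already exist.
  await admin.createTopics({
    topics: [
      { topic: "payment-successful", numPartitions: 1, replicationFactor: 1 },
      { topic: "order-successful", numPartitions: 1, replicationFactor: 1 },
      { topic: "email-successful", numPartitions: 1, replicationFactor: 1 },
    ],
  });

  await admin.disconnect();
};

run();
```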
- Node.js (v18+)
- Docker & Docker Compose
- npm or yarn
Version 2 (microservices with a single Kafka broker):

```bash
# 1. Start Kafka infrastructure
cd 2-microservices-single-kafka-server/services/kafka
docker-compose up -d
# 2. Wait for Kafka to be ready, then create topics
npm install && node admin.js
# 3. Start services (each in separate terminal)
# Payment Service
cd ../payment-service
npm install && node index.js
# Order Service
cd ../order-service
npm install && node index.js
# Email Service
cd ../email-service
npm install && node index.js
# Analytics Service
cd ../analytic-service
npm install && node index.js
# Client
cd ../client
npm install && npm run dev
```

Version 3 uses the same steps as Version 2 - its docker-compose automatically configures 3 Kafka brokers:

```bash
cd 3-microservices-kafka-cluster/services/kafka
docker-compose up -d
# Continue with the same service startup steps as Version 2...
```

Version 1 (the monolith) needs only the backend and frontend:

```bash
# Backend
cd 1-without-microservices/backend
npm install && node index.js
# Frontend (separate terminal)
cd ../frontend
npm install && npm run dev
```

Once everything is running, the system is reachable at:

| Service | URL |
|---|---|
| Frontend | http://localhost:3000 |
| Payment API | http://localhost:8000 |
| Kafka UI | http://localhost:8080 |
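From the Next.js client, the checkout call might be as simple as the following axios sketch (the /payment path and the exported helper are assumptions for illustration):

```js
import axios from "axios";

// Hypothetical checkout helper used by the shopping cart page.
export async function checkout(cart) {
  const { data } = await axios.post("http://localhost:8000/payment", { cart });
  return data; // either a quick acknowledgment (v2/v3) or the full order summary (v1)
}
```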
Payment Service endpoint (Versions 2 and 3):

```
// Request
{
"cart": [
{
"id": 1,
"name": "Nike Air Max",
"price": 129.9,
"image": "/product1.png",
"description": "Classic sneakers"
}
]
}
// Response
"Payment successful"// Request
{
"cart": [...]
}
// Response
{
"orderId": "uuid",
"paymentResult": "success",
"emailResult": "success"
}
```

Project structure:

```
Ecommerce-Microservices-Kafka-/
├── 1-without-microservices/
│ ├── backend/ # Express.js monolithic server
│ └── frontend/ # Next.js client
│
├── 2-microservices-single-kafka-server/
│ └── services/
│ ├── payment-service/ # HTTP + Kafka producer
│ ├── order-service/ # Kafka consumer/producer
│ ├── email-service/ # Kafka consumer/producer
│ ├── analytic-service/ # Kafka consumer
│ ├── kafka/ # Docker compose + admin scripts
│ └── client/ # Next.js frontend
│
└── 3-microservices-kafka-cluster/
└── services/
├── [Same services as v2]
└── kafka/ # 3-broker cluster config
```

Version 2 (single Kafka server):

- Broker: localhost:9094
- Uses KRaft (no Zookeeper)

Version 3 (Kafka cluster):

- Broker 1: localhost:9094
- Broker 2: localhost:9095
- Broker 3: localhost:9096
- KRaft cluster with 3 controller/broker nodes
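With the cluster, services can list all three brokers in their KafkaJS config so the client discovers the full cluster and keeps working if a single broker goes down (a sketch; the clientId is arbitrary):

```js
const { Kafka } = require("kafkajs");

// Listing every broker lets KafkaJS discover the cluster and fail over
// if any single broker is unavailable.
const kafka = new Kafka({
  clientId: "payment-service",
  brokers: ["localhost:9094", "localhost:9095", "localhost:9096"],
});
```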
- Event-Driven Architecture - Services communicate through events, not direct calls
- Loose Coupling - Services are independent and can be deployed separately
- Async Processing - Long operations don't block the user
- Scalability - Easy to add consumers or scale individual services
- Fault Tolerance - Kafka persists messages; failed services can recover
- Observability - Analytics service monitors all system events
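The Analytics Service illustrates the last point: because it uses its own consumer group, it receives every event without taking messages away from the other services. A hedged sketch (group id and the console logging are assumptions):

```js
const { Kafka } = require("kafkajs");

const kafka = new Kafka({ clientId: "analytic-service", brokers: ["localhost:9094"] });
const consumer = kafka.consumer({ groupId: "analytic-service" });

const run = async () => {
  await consumer.connect();

  // Subscribe to every topic in the pipeline; a separate group id means these
  // reads don't compete with the Order or Email services' consumers.
  for (const topic of ["payment-successful", "order-successful", "email-successful"]) {
    await consumer.subscribe({ topic, fromBeginning: true });
  }

  await consumer.run({
    eachMessage: async ({ topic, message }) => {
      console.log(`[analytics] ${topic}:`, message.value.toString());
    },
  });
};

run();
```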
- This is an educational project with simulated database operations
- Authentication middleware is a placeholder (always allows access)
- User IDs and other values are hardcoded for simplicity
- Don't run multiple versions simultaneously (port conflicts)
MIT