An octopus has three hearts! One heart circulates blood around the body, while the other two pump it past the gills to pick up oxygen.
Octopus is a scalable microservices template built with NestJS, RabbitMQ, PostgreSQL, and Redis. It provides an efficient and developer-friendly foundation for building distributed systems, supporting both Docker and Kubernetes deployments. The system now leverages MinIO for object storage, offering a reliable and scalable solution for handling files across services.
1- Kafka → RabbitMQ Migration:
- The stack now uses RabbitMQ instead of Kafka.
- This makes local development easier.
git clone https://github.com/MahdadGhasemian/octopus.git
cd octopus
pnpm i
docker-compose up --build # add --build on the first run
Note: The following ports (8087, 15679 and 5549) are defined in the docker-compose file.
- URL: http://localhost:8087/
- Authentication:
username: [email protected]
password: randompassword2
connection:
host-name_address: postgres
port: 5432
username: postgres
password: randompassword
- URL: http://localhost:15679/
- Authentication:
username: user
password: randompassword
- URL: http://localhost:5549/
- Authentication:
username: user
password: randompassword
connection:
host: redis
port: 6379
username: none
password: none
- URL: http://localhost:9101/
- Authentication:
Note: connect to MinIO on port 9101 and create an Access Key
username: admin
password: randompassword
mcli alias set octopus http://localhost:9100 admin randompassword
octopus
│
├── apps
│ ├── auth
│ │ ├── Dockerfile
│ │ ├── Dockerfile.dev
│ │ ├── package.json
│ │ ├── src
│ │ ├── test
│ │ ├── tsconfig.app.json
│ │ └── .env
│ ├── storage
│ │ ├── Dockerfile
│ │ ├── Dockerfile.dev
│ │ ├── package.json
│ │ ├── src
│ │ ├── test
│ │ ├── tsconfig.app.json
│ │ └── .env
│ └── store
│ ├── Dockerfile
│ ├── Dockerfile.dev
│ ├── package.json
│ ├── src
│ ├── test
│ ├── tsconfig.app.json
│ └── .env
├── docker-compose-test.yaml
├── docker-compose.yaml
├── init-scripts
│ └── seed-data.sql
├── init-scripts-test
├── libs
│ └── common
│ ├── src
│ └── tsconfig.lib.json
├── migrations
│ ├── developing
│ │ ├── auth
│ │ ├── storage
│ │ └── store
│ ├── production
│ │ ├── auth
│ │ ├── storage
│ │ └── store
│ └── stage
│ ├── auth
│ ├── storage
│ └── store
├── package.json
├── tsconfig.build.json
├── tsconfig.json
├── .env
├── .env.test
├── .env.migration.developing
├── .env.migration.stage
└── .env.migration.production
- Support dynamic access (role)
- Support auto Caching
- Based on MinIO (S3 Object Storage)
- Support multiple file formats:
- Images: jpg, jpeg, png, bmp, tiff, gif, webp
- Documents: doc, docx, xlsx, pdf, txt, rtf
- Media: mp3, wav, mp4, avi, mkv
- Compressed: zip, rar, tar, 7z, gz
- Support public and private files
- Support resizing and changing the quality of images on download routes
- Support caching on the download routes
- Unique route to upload all files
- Unique route to download all files (if the file is an image, the system automatically applies caching and image-editing utilities)
- Full pagination support
- Support auto Caching
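The download route above accepts image-editing options. As a rough sketch of how a client might build such a request URL — the route shape and the query parameter names (`width`, `height`, `quality`) are illustrative assumptions here, not the storage service's documented API:

```typescript
// Sketch: building a download URL for a single shared download route.
// The /files/:id path and the width/height/quality parameter names are
// assumptions for illustration; check the service's Swagger docs for the
// actual route and parameters.
function buildDownloadUrl(
  baseUrl: string,
  fileId: string,
  opts: { width?: number; height?: number; quality?: number } = {},
): string {
  const params = new URLSearchParams();
  if (opts.width !== undefined) params.set('width', String(opts.width));
  if (opts.height !== undefined) params.set('height', String(opts.height));
  if (opts.quality !== undefined) params.set('quality', String(opts.quality));
  const query = params.toString();
  return `${baseUrl}/files/${fileId}${query ? `?${query}` : ''}`;
}

// Example: request a 200px-wide, quality-70 version of an image file.
const url = buildDownloadUrl('http://localhost:3002', 'abc123', {
  width: 200,
  quality: 70,
});
```

For non-image files the same route would simply ignore the resizing options and stream the stored object as-is.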
- Auth Service: http://localhost:3000/docs#/
- Store Service: http://localhost:3001/docs#/
- Storage Service: http://localhost:3002/docs#/
- Collections
- Admin User Environment
- Internal User Environment
- Regular User 1 Environment
- Regular User 2 Environment
Migration files can be generated and run separately per branch (developing, stage, production).
- Create the environment files: .env.migration.developing, .env.migration.stage, .env.migration.production. Example:
POSTGRES_HOST=localhost
POSTGRES_PORT=5436
POSTGRES_USERNAME=postgres
POSTGRES_PASSWORD=randompassword
# POSTGRES_SYNCHRONIZE=true
POSTGRES_SYNCHRONIZE=false
POSTGRES_AUTO_LOAD_ENTITIES=true
- Edit the 'POSTGRES_ENTITIES' parameter inside the package.json file according to your entities
- Generate and run the migrations
# Developing
npm run migration:generate:developing
npm run migration:run:developing
# Stage
npm run migration:generate:stage
npm run migration:run:stage
# Production
npm run migration:generate:production
npm run migration:run:production
- Only GET endpoints are cached.
- Use the @NoCache() decorator to bypass the caching system for specific endpoints.
- Use the @GeneralCache() decorator to cache the endpoint without including the user's token in the cache key.
- Services caching status:
| Service Name | Module | Cache Status | Decorator | Note |
|---|---|---|---|---|
| Auth | auth | not cached | @NoCache() | |
| Auth | users | cached | | cached per user's token |
| Auth | accesses | cached | | cached per user's token |
| Store | categories | cached | @GeneralCache() | |
| Store | products | cached | @GeneralCache() | |
| Store | orders | not cached | @NoCache() | |
| Store | payments | not cached | @NoCache() | |
| Storage | | not cached | | |
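The three caching modes in the table differ mainly in how a Redis key is derived. The sketch below illustrates the concept only — the actual key format in the template is not shown here, and the branch-prefix idea comes from the changelog entry about separating multiple production branches:

```typescript
// Illustrative sketch of the three caching modes (not the template's actual
// key format): @NoCache() skips Redis entirely, @GeneralCache() keys by route
// alone, and the default mode also mixes in the caller's token so one user's
// cached response is never served to another.
type CacheMode = 'no-cache' | 'general' | 'per-user';

function buildCacheKey(
  prefix: string, // e.g. a branch prefix such as "production"
  route: string, // e.g. "GET:/categories"
  mode: CacheMode,
  userToken?: string,
): string | null {
  if (mode === 'no-cache') return null; // bypass the cache entirely
  if (mode === 'general') return `${prefix}:${route}`; // shared across users
  return `${prefix}:${route}:${userToken ?? 'anonymous'}`; // token-scoped
}
```

This is why categories and products can safely use @GeneralCache() (the response is the same for everyone), while users and accesses are cached per token.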
Run unit tests:
pnpm run test
1️⃣ Start required services (database, Redis, etc.) in Terminal 1:
docker-compose -f ./docker-compose-test.yaml up
2️⃣ Run E2E tests in Terminal 2:
pnpm run test:e2e
- App microservices
- Common libraries
- Logger
- Communication between microservices
- Authentication (JWT, Cookie, Passport)
- Dynamic roles (Access)
- TypeORM Postgresql
- Entities
- Migrations on every branch separately
- Docker-compose
- Env
- Document
- GitHub Readme
- Postman
- Auto generated swagger
- Test
- Cache Manager (Redis)
- K8S
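The "Dynamic roles (Access)" item above, together with the has_full_access field mentioned in the changelog, can be sketched as a simple guard check. The Access shape and the "METHOD /path" matching below are illustrative assumptions, not the template's actual entities or guard logic:

```typescript
// Illustrative sketch of a dynamic-access check. The real template stores
// accesses in PostgreSQL and enforces them in a NestJS guard; the shapes and
// matching rule here are assumptions for the sake of a runnable example.
interface Access {
  has_full_access: boolean;
  // Allowed "METHOD /path" pairs when full access is not granted (assumption).
  allowed_endpoints: string[];
}

function canAccess(access: Access, method: string, path: string): boolean {
  if (access.has_full_access) return true; // e.g. an admin role
  return access.allowed_endpoints.includes(`${method} ${path}`);
}
```

Because accesses are rows in the database rather than hard-coded roles, an admin can grant or revoke an endpoint at runtime (see the /users/{id}/access route in the changelog) without redeploying a service.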
- Fix Get OTP to expire its session
- full_name nullable
- Category Tree
- List endpoints support pagination
- Refresh Token
Contributions are welcome! Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests.
This project is licensed under the MIT License - see the LICENSE.md file for details.
- Supporting Pagination for list retrieval endpoints.
- Sending EVENT_NAME_USER_CREATED and EVENT_NAME_USER_UPDATED from the 'auth' service to the 'store' service to update users.
- Transition from JWT_SECRET into JWT_PUBLIC_KEY and JWT_PRIVATE_KEY
- Fixed access guard
- Fixed cache manager and added the ForceToClearCache decorator
- Added new API route to edit user access: /users/{id}/access
- Improved the Health Check API to monitor infrastructure connections, including RabbitMQ, PostgreSQL and Redis.
- Moved entity files into their respective service directories.
- Fixed the migration script
- Added some unit and e2e tests
- Added database seed data during initialization (docker-compose)
- Renamed the cannotBeDeleted field to cannot_be_deleted
- Added downloadable Postman files
- Migrated from saving files on disk to leveraging MinIO for object storage.
- Migrated from Kafka to RabbitMQ.
- Changed the 'hasFullAccess' field to 'has_full_access' in the access entity.
- Added a caching prefix to support separation of multiple branches in production.
- Added Redis Insight to the docker-compose file to provide a GUI for Redis.
- Initial release.



