Distributed E-Commerce Microservices Engine
An event-driven microservices architecture for e-commerce, with ten independent services communicating via RabbitMQ, backed by a MongoDB replica set, and scaled with the Node.js cluster module.
Overview
A reference architecture for distributed e-commerce, decomposing the platform into ten independent microservices that each own their data and communicate asynchronously through RabbitMQ. The project demonstrates production-ready patterns for service isolation, message-driven coordination, and horizontal scaling — built as an architectural foundation that can be extended with full business logic.
A Next.js 14 storefront provides the consumer-facing frontend, communicating with backend services through the message queue.
Service Architecture
Ten services run independently, each with its own process, database connection, and message queue consumer:
- Authentication — user registration and login with bcrypt password hashing, session-based auth via NextAuth
- Product — product catalogue management with relationships to attributes and categories
- Category — hierarchical category trees for product organisation
- Attribute — configurable product attribute definitions and attribute groups
- Option — product option groups and selectable values for variant generation
- Cart — shopping cart management with add, remove, and update operations
- Filter — dynamic product filtering based on attributes and categories
- Article — editorial content and blog post management with metadata
- SEO — metadata management for products, categories, and content pages
- Seeder — database population utility that bootstraps development environments with sample data across all collections
Message-Driven Communication
All inter-service communication flows through RabbitMQ using a request-reply pattern with correlation IDs. When the frontend needs to authenticate a user, it publishes a message to the authentication queue and awaits a correlated response — the services never call each other directly over HTTP.
This pattern is implemented in a shared utility library that handles connection management, message serialisation, and response routing. Each service consumes from its own named queue, processes the request, and publishes a response back to the caller’s reply queue.
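A minimal sketch of the request-reply flow described above. The channel interface, queue name (`authentication`), and JSON payload shape are assumptions for illustration, not the project's actual shared utility; the channel is injected so the sketch mirrors the amqplib surface without depending on a live broker.

```typescript
import { randomUUID } from "node:crypto";

// Minimal structural types matching the amqplib channel surface used here
// (defined locally so the sketch stands alone; real code would import amqplib).
interface QueueMessage {
  content: Buffer;
  properties: { correlationId?: string };
}
interface Channel {
  assertQueue(name: string, opts: { exclusive: boolean }): Promise<{ queue: string }>;
  consume(
    queue: string,
    onMessage: (msg: QueueMessage | null) => void,
    opts: { noAck: boolean },
  ): Promise<unknown>;
  sendToQueue(
    queue: string,
    content: Buffer,
    opts: { correlationId: string; replyTo: string },
  ): boolean;
}

export async function rpcRequest(
  ch: Channel,
  queue: string,
  payload: unknown,
): Promise<unknown> {
  // Exclusive, server-named reply queue for this caller.
  const { queue: replyQueue } = await ch.assertQueue("", { exclusive: true });
  const correlationId = randomUUID();

  let settle!: (value: unknown) => void;
  const reply = new Promise<unknown>((resolve) => (settle = resolve));

  // Register the reply consumer before publishing, so the response
  // cannot arrive on an unwatched queue. Only the message whose
  // correlation ID matches this request resolves the promise.
  await ch.consume(
    replyQueue,
    (msg) => {
      if (msg && msg.properties.correlationId === correlationId) {
        settle(JSON.parse(msg.content.toString()));
      }
    },
    { noAck: true },
  );

  ch.sendToQueue(queue, Buffer.from(JSON.stringify(payload)), {
    correlationId,
    replyTo: replyQueue,
  });
  return reply;
}
```

Injecting the channel also makes the pattern easy to exercise against a fake in tests, since correlation matching is the only logic the caller needs to trust.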
This decoupling lets services be deployed, restarted, and scaled independently without affecting the rest of the system. If the SEO service goes down, product browsing continues uninterrupted.
Cluster-Based Scaling
Every service uses the Node.js cluster module with a primary/worker pattern. The primary process forks one worker per available CPU core, monitors worker health, and automatically respawns failed workers with configurable rate limiting — a maximum of five restart attempts within a sixty-second window prevents crash loops.
Worker count is configurable via environment variable, allowing resource allocation to be tuned per service based on traffic patterns.
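The primary/worker pattern with the restart limits described above (five attempts per sixty seconds) can be sketched as follows; the `CLUSTER_WORKERS` variable name and function names are assumptions, and the rate limiter is factored out as a pure function.

```typescript
import cluster from "node:cluster";
import { cpus } from "node:os";

const MAX_RESTARTS = 5;
const WINDOW_MS = 60_000;

// Pure sliding-window rate limiter: permit a respawn only if fewer than
// MAX_RESTARTS restarts occurred within the last WINDOW_MS milliseconds.
// Returns the pruned restart history alongside the decision.
export function recordRestart(
  history: number[],
  now: number,
): { allowed: boolean; history: number[] } {
  const recent = history.filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_RESTARTS) {
    return { allowed: false, history: recent };
  }
  return { allowed: true, history: [...recent, now] };
}

export function startCluster(startWorker: () => void): void {
  if (cluster.isPrimary) {
    // Worker count from the environment, defaulting to one per CPU core.
    const count = Number(process.env.CLUSTER_WORKERS) || cpus().length;
    let history: number[] = [];
    for (let i = 0; i < count; i++) cluster.fork();

    cluster.on("exit", () => {
      const result = recordRestart(history, Date.now());
      history = result.history;
      if (result.allowed) cluster.fork();
      else console.error("crash loop detected; not respawning");
    });
  } else {
    // Worker: connect to MongoDB/RabbitMQ and begin consuming.
    startWorker();
  }
}
```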
Shared Type Library
A shared internal package provides TypeScript interfaces for all domain entities — users, products, articles, categories, attributes, carts, filters, options, and SEO metadata. It also exports utility functions for MongoDB connections, RabbitMQ channel management, message sending with correlation IDs, and UUID generation.
All services depend on this shared package, ensuring consistent type contracts across the entire architecture. Changes to message formats are caught at compile time rather than at runtime.
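To illustrate the kind of contract the shared package enforces, here is a hedged sketch of one domain interface and a correlated-message helper. The interface fields, type names, and helper are assumptions, not the package's actual exports.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical domain entity; field names are illustrative.
export interface Product {
  id: string;
  name: string;
  categoryIds: string[];
  attributes: Record<string, string>;
}

// Generic envelope so every queue message carries a correlation ID
// and the compiler checks the payload type at each call site.
export interface ServiceMessage<T> {
  correlationId: string;
  payload: T;
}

export function createMessage<T>(payload: T): ServiceMessage<T> {
  return { correlationId: randomUUID(), payload };
}
```

Because both publisher and consumer import the same `ServiceMessage<Product>` type, a change to the product shape fails compilation in every affected service rather than surfacing as a runtime parse error.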
Data Layer
Each service connects to its own database within a three-node MongoDB replica set, enforcing data sovereignty at the service boundary. The replica set configuration provides read redundancy and automatic failover.
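As a sketch of how each service might derive its own connection string against the shared replica set — the hostnames (`mongo1`–`mongo3`), port, and set name `rs0` are assumptions, not the project's actual configuration:

```typescript
// Build a replica-set connection string for a service-owned database.
// Each service passes only its own database name, keeping data
// sovereignty at the service boundary while sharing the cluster.
export function mongoUri(
  db: string,
  hosts: string[] = ["mongo1:27017", "mongo2:27017", "mongo3:27017"],
  replicaSet = "rs0",
): string {
  return `mongodb://${hosts.join(",")}/${db}?replicaSet=${replicaSet}`;
}
```

Listing all three hosts lets the driver discover the current primary and fail over automatically when a node drops out.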
The seeder service handles cross-service data bootstrapping, populating all collections with consistent sample data for development — articles, categories, attribute groups, and their relationships.
Infrastructure
Docker Compose orchestrates the full stack: ten service containers, a three-node MongoDB replica set, and a RabbitMQ instance. Each service builds from its own Dockerfile, with environment variables controlling database names, queue connections, and cluster worker counts.
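A fragment of what such a Compose file might look like, shown for one service and one replica-set member only; the service paths, image tags, and environment variable names are assumptions, and replica-set initiation is omitted.

```yaml
services:
  product:
    build: ./services/product        # each service has its own Dockerfile
    environment:
      DB_NAME: product               # service-owned database name
      RABBITMQ_URL: amqp://rabbitmq:5672
      CLUSTER_WORKERS: "2"           # tune worker count per service
    depends_on: [rabbitmq, mongo1]

  rabbitmq:
    image: rabbitmq:3-management

  mongo1:                            # one of three replica-set members
    image: mongo:7
    command: ["mongod", "--replSet", "rs0"]
```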
Frontend
The Next.js 14 storefront connects directly to RabbitMQ from its API routes, publishing messages to service queues and returning responses to the browser. Authentication uses NextAuth with a credentials provider that delegates to the authentication service via the message queue. Dynamic routes handle product, category, and article pages through a catch-all slug pattern.
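A hedged sketch of such an API route, using the Web-standard `Request`/`Response` objects that Next.js App Router handlers receive. The route, queue name, and payload shape are assumptions; the message-queue helper is injected as a plain function so the handler stays broker-agnostic.

```typescript
// Stand-in for the shared request-reply helper over RabbitMQ.
type Rpc = (queue: string, payload: unknown) => Promise<unknown>;

// Factory returning a POST handler that delegates login to the
// authentication service via the message queue, never over HTTP.
export function makeLoginHandler(rpc: Rpc) {
  return async function POST(req: Request): Promise<Response> {
    const credentials = await req.json();
    const result = await rpc("authentication", { action: "login", ...credentials });
    return new Response(JSON.stringify(result), {
      headers: { "content-type": "application/json" },
    });
  };
}
```

The NextAuth credentials provider can call the same `rpc` helper from its `authorize` callback, so browser login and API login share one path to the authentication service.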