The splicing strategy can be either functional or UX based. Let’s take a hybrid approach to splicing the monolith and say we splice the application into 4 modules –

  • User services – This module would take care of user login, registration and any profile updates.
  • Catalog services – This module would take care of catalog management, such as CRUD operations on items in a catalog.
  • Order services – This module would take care of order management, such as CRUD operations on orders placed by a user.
  • StoreFront UI – This module would handle the entire StoreFront UI along with Shopping Cart management, since the shopping cart is scoped largely to a user session.

One of the key considerations I made was that each of these modules would own its own data store, and any communication between them to maintain data relations would happen by passing the related UID (Unique Identifier). Let me clear the cloud a little: every order is associated with a user, and this association is maintained in the Order module’s repository by storing the associated user’s UID. Similarly, every order has a set of associated LineItems, each of which refers to the associated item from the Catalog service’s repository along with other parameters such as quantity. Here too, the associated item’s UID is used to create the relation.
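As a minimal sketch of this UID-based relation (the class and field names here are my own, not from the actual codebase), the Order module’s model would hold plain UID strings rather than object references into the other services’ data stores:

```java
import java.util.List;

// Order service's own model: it stores only the user's UID,
// never a reference to an object owned by the User service.
class Order {
    final String orderUid;
    final String userUid;          // relation to User service, by UID only
    final List<LineItem> lineItems;

    Order(String orderUid, String userUid, List<LineItem> lineItems) {
        this.orderUid = orderUid;
        this.userUid = userUid;
        this.lineItems = lineItems;
    }
}

// Each line item points to a Catalog item by UID, plus local
// parameters like quantity that belong to the Order service.
class LineItem {
    final String itemUid;          // relation to Catalog service, by UID only
    final int quantity;

    LineItem(String itemUid, int quantity) {
        this.itemUid = itemUid;
        this.quantity = quantity;
    }
}
```

Resolving a UID into the full user or item record is then an explicit call to the owning service, which keeps each module free to evolve its schema independently.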

Each of the services exposes REST APIs, which allows the other services to integrate with it cleanly. The other advantage is that since the services exist independently of one another, each can be hosted on its own isolated infra. This lets us provision the required level of computation for each service individually, rather than scaling computation uniformly for the entire system as in its previous monolithic setup. The image below depicts the new design.
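As an illustrative sketch of such a REST endpoint (the path and JSON payload are hypothetical, and a real build would use a framework like Spring MVC rather than the raw JDK server), the Order service could expose an order look-up by UID like this:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class OrderServiceSketch {
    // Starts a tiny HTTP server exposing GET /orders/{uid}.
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/orders/", exchange -> {
            // Extract the order UID from the request path.
            String uid = exchange.getRequestURI().getPath()
                    .substring("/orders/".length());
            // Hypothetical payload: the order plus the related user's UID.
            byte[] body = ("{\"orderUid\":\"" + uid + "\",\"userUid\":\"u-42\"}")
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

Note that the response carries the related user only as a UID; a consumer such as the StoreFront UI would call the User service separately to resolve it.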

estore microservice


Hmm, it seems we have achieved a certain level of isolation, and we can now scale the application as the need arises. Well, there are still some glitches. Let’s say that during a peak season there are too many orders per minute, and we know which component of the system needs extra computation, so we decide to increase the computational power of our Order service by spinning up 5-10 more instances of the same service. Technically, if they were on the same host we would have to run them on different ports; remember, you can’t run multiple apps on the same port of a host. So let’s say we run them on different ports. How would our StoreFront-UI service know which host:port combination to hit? Hmm, how about we put a load-balancer in front of our Order service and register all the host:port entries with the load-balancer? That would solve our look-up problem. Then there is the other question: should we expose our Order service API directly to external entities? Shouldn’t we put up an API Gateway that gives fine-grained control over API requests? As we progress, we would keep running into more and more such questions. This is where we switch over to Spring Cloud.
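To make the look-up problem concrete, here is a toy round-robin resolver of the kind a load-balancer performs for us (the host:port values are made up for illustration; Spring Cloud provides this out of the box, as we will see):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Toy round-robin resolver: each call hands out the next registered
// instance, so callers never hard-code individual host:port pairs.
class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    String nextInstance() {
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```

The StoreFront UI would then ask the balancer for the next Order service instance instead of knowing any of them directly; adding a new instance is just another entry in the list.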
