Before we proceed any further, let’s talk a little about the concept of session stickiness. When a user visits the eStore, an lb-session cookie is created by the front-ending load balancer (most often a web server), and the request is forwarded to one of the app-server nodes in the farm based on the configured load-balancing algorithm (round-robin, load-based, and so on). The app-server instance then checks whether an application-specific app-session cookie already exists and either reuses it or creates a new one. The lb-session cookie created by the load balancer ensures that subsequent requests from the same user are forwarded to the same app-server node, since only that node can honour the previously created app-session cookie. This is called session stickiness. The problem is that if distributed session management is not set up on the app-server farm, the failure of one or more app-server nodes voids all the sessions created on those nodes and exposes users to frequent session resets. On the flip side, if we do set up distributed session management, the sessions are replicated either to all the other servers or to a central server. Either way, as more objects are added to the session, even this solution grows out of control.
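The sticky-routing behaviour described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not a real load balancer: the server names, the round-robin state, and the cookie handling are all assumptions made for the example.

```python
# Hypothetical app-server farm; the names are illustrative only.
APP_SERVERS = ["app-server-1", "app-server-2", "app-server-3"]


def route_request(cookies, round_robin_state):
    """Route a request the way a sticky load balancer might.

    If the request carries an lb-session cookie, it stays pinned to the
    server recorded in that cookie; otherwise a server is picked
    round-robin and a new lb-session cookie is issued.
    """
    if "lb-session" in cookies:
        # Sticky path: honour the existing lb-session cookie.
        return cookies["lb-session"], cookies
    # First request from this user: pick the next server round-robin.
    index = round_robin_state["next"] % len(APP_SERVERS)
    round_robin_state["next"] += 1
    server = APP_SERVERS[index]
    # Issue the lb-session cookie so later requests stick to this node.
    cookies = dict(cookies, **{"lb-session": server})
    return server, cookies
```

The sketch also makes the failure mode visible: the cookie pins the user to one node, so if that node dies, the app-session state it held dies with it unless sessions are replicated elsewhere.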
So, an alternate way of scaling the application is to slice it into multiple deployables. Hmm, multiple deployables: that means we split the large application into smaller, more manageable deployable artefacts. Welcome to the world of microservices. Many strategies have evolved around this idea, such as virtualization, containerization, OSGi, SOA, cloud-native microservices, and PaaS & IaaS, to name a few. I will try to shed some light on these strategies.
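To make “smaller deployable artefacts” concrete, here is a minimal sketch of one such artefact: a single-purpose catalogue service that could be deployed and scaled independently of the rest of the eStore. The service name, endpoint, and data are hypothetical illustrations, not anything from the original application.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative in-memory data; a real service would own its own store.
CATALOGUE = {"1": "Laptop", "2": "Phone"}


class CatalogueHandler(BaseHTTPRequestHandler):
    """Serves GET /products/<id> as JSON; everything else is a 404."""

    def do_GET(self):
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "products" and parts[1] in CATALOGUE:
            body = json.dumps({"id": parts[1], "name": CATALOGUE[parts[1]]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Keep the sketch quiet; a real service would log properly.
        pass


def serve(port: int = 8080) -> HTTPServer:
    """Start the service on a background thread; caller can shut it down."""
    server = HTTPServer(("localhost", port), CatalogueHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The point of the sketch is the shape, not the code: each microservice owns one narrow responsibility behind an HTTP contract, so it can be versioned, deployed, and scaled on its own.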
According to Wikipedia (https://en.wikipedia.org/wiki/Virtualization)
In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something, including virtual computer hardware platforms, storage devices, and computer network resources. Virtualization began in the 1960s, as a method of logically dividing the system resources provided by mainframe computers between different applications. Since then, the meaning of the term has broadened.
According to Wikipedia (https://en.wikipedia.org/wiki/Operating-system-level_virtualization)
Operating-system-level virtualization, also known as containerization, refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances. Such instances, called containers, partitions, virtualization engines (VEs) or jails (FreeBSD jail or chroot jail), may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. However, programs running inside a container can only see the container’s contents and devices assigned to the container. On Unix-like operating systems, this feature can be seen as an advanced implementation of the standard chroot mechanism, which changes the apparent root folder for the current running process and its children. In addition to isolation mechanisms, the kernel often provides resource-management features to limit the impact of one container’s activities on other containers.
I will try to cover OSGi and SOA in a separate post. Let’s proceed with the microservices strategy, starting with simple microservices and moving later to cloud-native microservices.