Think #Microservices

April 12, 2015


Decomposition is a well-understood software design principle. The idea of breaking a business problem down into smaller parts that promote not only "separation of concerns" but also reuse is indeed a worthy one.

In his highly acclaimed book, "Domain-Driven Design," Eric Evans talks about designing software by creating models that mimic the business problem. If you are interested in a detailed treatment of domain-driven design, its terminology, and its language, I highly recommend reading the book or watching recordings of Eric's talks on InfoQ.

Once you have modeled the business problem that you are trying to solve into objects and entities, you have a number of decomposition techniques available to you, depending on the language and platform of your choice.

One of the obvious choices is to start with shared libraries, which are available in almost any language. Shared libraries give you the ability to place common code that represents an aspect of your solution into a library. This technique has been used extensively over the years. However, one downside of a shared library is tight coupling: because the library is linked (statically or dynamically) into the process space of the caller, the two invariably become tightly coupled. This is not to say you cannot achieve loose coupling with libraries. You can, but it is largely up to you to maintain that discipline – there are no built-in language protections to safeguard against tighter coupling. As a result, we often see such solutions degenerate into tightly coupled, brittle systems.
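To make the coupling risk concrete, here is a minimal Python sketch (the module and function names are purely illustrative, not from any real library). Nothing in the language stops a caller from depending on the library's internals, and once it does, refactoring the library breaks the caller:

```python
# A hypothetical shared "pricing" library (all names are illustrative).

def _apply_regional_tax(amount):
    # Internal helper. The leading underscore signals "private",
    # but nothing in the language prevents callers from using it.
    return amount * 1.07

def quote(amount):
    # The intended public interface of the library.
    return _apply_regional_tax(amount)

# --- caller code, linked into the same process ---

# Disciplined caller: depends only on the public interface, so the
# library is free to change its internals.
print(quote(100.0))

# Undisciplined caller: reaches into an internal detail. If the library
# renames or removes _apply_regional_tax, this call breaks -- the tight
# coupling described above.
print(_apply_regional_tax(100.0))
```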

The other downside of a shared library is that the language used to develop the library and the language of its consumer(s) must usually be the same.

Component-based design (CBD) was developed as an enhancement to the idea of libraries; COM and CORBA are examples. CBD introduced the notion of a binary protocol that serves as a "firewall" between the caller and the component, thus enforcing more discipline around sharing. Additionally, this firewall makes reuse across languages easier. For example, a COM component written in C++ could be invoked from a program written in VB.

The other important benefit of component-based design is process isolation – the ability to host a component in its own process space. This process decoupling can improve modularity and makes it possible to manage the lifecycle of a shared module independently, i.e., to deploy newer versions without impacting the callers.
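As a rough illustration of the idea – not COM or CORBA themselves, just out-of-process invocation sketched with Python's standard-library XML-RPC support (host, port, and function names are arbitrary) – a component can be hosted in its own process and called only across a well-defined boundary:

```python
# component_server.py -- hosts a "component" in its own process.
from xmlrpc.server import SimpleXMLRPCServer

def to_upper(text):
    # The component's single published operation.
    return text.upper()

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(to_upper, "to_upper")
print("Component hosted at http://localhost:8000/")
server.serve_forever()
```

```python
# caller.py -- runs in a separate process; only the wire protocol
# is shared, so the component can be redeployed independently.
from xmlrpc.client import ServerProxy

proxy = ServerProxy("http://localhost:8000/")
print(proxy.to_upper("hello"))  # prints: HELLO
```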

While component-based design was certainly an improvement over shared libraries, it is not without its own downsides. The binary protocol used for communication between components (often proprietary, due to the lack of industry standards) was not Web/Internet friendly and was widely considered too complex. Additionally, despite the process separation, the caller and the component often continued to run in the same security context. Finally, notions such as scale-out, state management, and discovery were not first-class concepts, let alone industry-standard, interoperable ones.

SOA was considered an improvement over CBD. In SOA, services – rather than components – became the fundamental elements for developing software. However, it is important to note that SOA and CBD are not competing ideas: SOA can be thought of as a way to leverage the artifacts of CBD, components, in an attempt to make them easier to consume. The biggest advancement was support for multiple protocols, including ones that work well over the Internet. Additionally, SOA came with a well-defined way to describe a service's capabilities and its functional and non-functional characteristics, making services easier to discover and use. And unlike the mere process isolation of CBD, the service boundary was explicit and well-defined. SOA ultimately engendered a series of specifications (referred to as the WS-* specs) that addressed many aspects, including transactions, coordination, management, and quality of service.

Despite the promise, it is safe to say that SOA has not lived up to its potential. We will leave that debate – why SOA has or has not lived up to its promise – to others.

In the last two to three years, a subset of SOA principles, referred to as "microservices," has gained popularity. Companies like Netflix and Twitter embody the principles of SOA at the service level, but the focus is on "micro"-sized services as opposed to more coarse-grained ones. The idea of microservices is to apply the principle of single responsibility (one of Robert Martin's five principles of object-oriented design) to a service.
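As a sketch of what single responsibility at the service level can look like, here is a minimal Python service using only the standard library. The endpoint, port, and tax-rate logic are illustrative assumptions; the point is that the service owns exactly one capability and exposes nothing else:

```python
# tax_service.py -- a hypothetical single-responsibility microservice.
# It answers exactly one question: what is the tax for a given amount?
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

TAX_RATE = 0.07  # the one piece of domain logic this service owns

class TaxHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/tax":
            self.send_error(404)
            return
        amount = float(parse_qs(url.query).get("amount", ["0"])[0])
        body = json.dumps({"amount": amount, "tax": amount * TAX_RATE})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # The service is autonomous: it can be deployed, scaled, and
    # updated independently of anything that calls it.
    HTTPServer(("localhost", 8081), TaxHandler).serve_forever()
```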

Since each microservice exhibits strong cohesion, the result is a loosely coupled system, which yields a number of benefits:

i) Failure of one microservice does not cascade to other parts of the system

ii) Each microservice is autonomous – it can be deployed into a separate execution context, have its own data store of choice, and have its own caching tier

iii) Each microservice can be scaled independently

iv) Each microservice can be easily deployed or updated independently

v) Each microservice can, in theory, be implemented in a different underlying technology of choice (within reason, of course)

vi) Given their granularity, microservices are composable – in other words, it is possible to compose a large business process from a collection of microservices (see the sketch after this list)
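To illustrate that last point, here is a hedged sketch of composition: an order-pricing flow assembled from calls to two microservices. The URLs and payload shapes are invented for illustration (the tax service echoes the earlier sketch; the shipping service is hypothetical):

```python
# compose_order.py -- composing a business process from two
# hypothetical microservices. URLs and payload shapes are invented.
import json
from urllib.request import urlopen

def get_json(url):
    # Minimal helper: call a microservice endpoint, parse its JSON reply.
    with urlopen(url) as resp:
        return json.load(resp)

def price_order(amount):
    # Step 1: the tax service (single responsibility: tax calculation).
    tax = get_json("http://localhost:8081/tax?amount=%s" % amount)["tax"]
    # Step 2: a separate, hypothetical shipping service.
    shipping = get_json("http://localhost:8082/shipping?amount=%s" % amount)["cost"]
    # The composition itself is just ordinary orchestration code.
    return amount + tax + shipping

if __name__ == "__main__":
    print(price_order(100.0))
```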

All this sounds good, but microservices are not a "free lunch." As you probably guessed from the earlier description, the "micro" nature of these services can add significant operational overhead due to the proliferation of instances – execution environments (PaaS containers, VMs), networking stacks, and even data stores. The only way to be successful in deploying a large number of instances and keeping them in sync is through automation. Automation is also key to making microservices cost-effective by spinning things up and down as needed.

So how do we deal with the overhead associated with microservices? It turns out that the cloud offers perhaps the only viable approach for dealing with these overheads while still enabling us to tap into the advantages of microservices. Here is how:

1. Infrastructure as code, leading up to automated deployment pipelines (coupled with continuous integration), offers a manageable approach to dealing with a potentially very large number of microservices.

2. The ability to spin up execution contexts – from the smallest micro-sized VM, to the largest VM, to a multi-tenant compute service where you simply drop in your code and have it executed in response to a trigger – is perfectly suited to microservices (see the sketch after this list). Add to this the support for containers such as Docker, and you now have a way to package and deploy a microservice (granted, Docker support is not limited to the cloud).

3. The ability to tap into the elasticity of the networking stack makes it simple to provision a large number of microservice endpoints.

4. The ability to tap into database-as-a-service offerings (SQL and NoSQL stores) promotes the autonomy of each microservice.

5. An emphasis on a DevOps mindset in the cloud offers a cost-effective approach to managing microservices.
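As an example of the "drop your code and have it run on a trigger" model from item 2, here is a minimal handler in the style of AWS Lambda's Python runtime. The event shape is an assumption (each real trigger defines its own), and the tax logic simply echoes the earlier sketch:

```python
# handler.py -- a microservice as a trigger-driven function, in the
# style of AWS Lambda's Python runtime. The event shape is assumed;
# a real trigger (HTTP gateway, queue message, etc.) defines its own.
TAX_RATE = 0.07

def handler(event, context):
    # The platform invokes this function in response to a trigger and
    # handles provisioning, scaling, and teardown -- no server to manage.
    amount = float(event.get("amount", 0))
    return {"amount": amount, "tax": amount * TAX_RATE}

if __name__ == "__main__":
    # Local smoke test; in the cloud the platform supplies event/context.
    print(handler({"amount": 100.0}, None))
```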

Summary

  • Decompose the business problem into models – objects within a bounded context
  • Shared libraries, components, and services are decomposability options. All have some limitations associated with them. Microservices seem promising.
  • The cloud offers a way to implement microservices cost-effectively through VMs and Docker containers
