- The Need for Transition
- Creating a New Application with Microservices
- Migrating a Monolithic Application to Microservices
- A Hybrid Approach
Creating a New Application with Microservices
Before we begin, let me say that I have not seen many real-world scenarios of building a microservices-based application from scratch. Typically, an application is already in place, and most applications I have worked on are more of a transition to a microservices architecture from a monolithic architecture. In these cases, the intention of architects and developers has always been to reuse some of the existing implementation. As skills become readily available in the market and some successful implementations are published, we will see more examples of building microservices-based applications from scratch, so it is certainly worthwhile to discuss this scenario.
Let’s say you have all the requirements figured out and ready to go into the architecture design of the application you are going to build. There are many common best practices you need to think about as you get started, which are covered in the following sections.
As we discussed in Chapter 2, “Switching to Microservices,” the first question you have to ask yourself is whether your organization is ready to transition to microservices. That means the various departments of your organization now need to think differently about building and releasing software in the following ways:
Team structure. The monolithic application team (if one exists) needs to be broken down into several small, high-performance teams that are aware of, or trained in, microservices best practices. As you saw in Figure 4.3, the new system will consist of a set of independent services, each responsible for a specific piece of functionality. This is one key advantage of the microservices paradigm: it reduces communication overhead, including the endless stream of meetings. Teams should be organized by the business problems or areas they are trying to address. Communication then becomes a matter of timing and the set of standards/protocols to follow so that these microservices can work with each other as one platform.
Agility. Each team must be prepared to function independently of others. They should be the size of a standard scrum team; otherwise, communication will become an issue again. Execution is the key, and each team should be able to address the changing business needs.
Tools and training. One of the key needs is the organization’s readiness to invest in new tools and in training people. In most cases, the existing tools and processes will need to be retired and new ones adopted. This requires a large capital investment as well as investment in hiring people with new skills and retraining existing staff members. In the long term, if moving to microservices is the right decision, organizations will see savings and recoup the investment.
Unlike with monolithic applications, with microservices you need to take a self-sustained, services-based approach. Think of your application as a set of loosely coupled services that communicate with each other to provide complete application functionality. Each service must be thought of as an independent, self-contained service with its own lifecycle that can be developed and maintained by an independent team. These teams may select from a variety of technologies, including the languages or databases that best suit their services’ needs. For example, for an e-commerce site, one team might build a completely independent shopping cart microservice backed by an in-memory database, while another builds an ordering microservice backed by a relational database. A real-world application may employ microservices for basic functions such as authentication, account, user registration, and notification, with the business logic encapsulated in an API gateway that calls these microservices based on client and external requests.
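To make the gateway idea concrete, here is a minimal in-process sketch. The service names and handlers are hypothetical, and a real gateway would route requests over HTTP or gRPC to separately deployed services rather than call local functions.

```python
# Minimal sketch of an API gateway dispatching client requests to
# independent microservices. Handlers are hypothetical stand-ins for
# separately deployed services.

def auth_service(request):
    # Stand-in for an authentication microservice.
    return {"user": request.get("user"), "authenticated": True}

def cart_service(request):
    # Stand-in for a shopping cart microservice.
    return {"items": request.get("items", [])}

# The gateway's routing table maps external paths to services.
ROUTES = {
    "/auth": auth_service,
    "/cart": cart_service,
}

def gateway(path, request):
    """Dispatch an external request to the matching microservice."""
    handler = ROUTES.get(path)
    if handler is None:
        return {"error": 404}
    return handler(request)
```

Each service behind the gateway can then evolve, scale, and be deployed independently, as long as the routing contract stays stable.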
Just a reminder: a microservice may be a small service implemented by a single developer or a complex service requiring a few developers. With microservices, size does not matter; what matters is that each service provides a single, well-defined function.
Other aspects that must be considered at this point are scaling, performance, and security. Scaling needs can differ and be provided on an as-needed basis at each microservice level. Security should be thought of at all levels, including data at rest, data in motion, interprocess communication, and so on.
Interprocess (Service-to-Service) Communication
We discussed the topic of interprocess communication in depth in Chapter 3, “Interprocess Communication.” The key aspects to consider are security and the communication protocol. Asynchronous communication is the way to go, as it keeps all requests on track and does not tie up resources for extended periods of time.
Using a message bus such as RabbitMQ may prove beneficial for this kind of communication. It is simple and can scale to hundreds of thousands of messages per second. To keep the messaging system from becoming a single point of failure, the message bus must be designed for high availability. Other options include ActiveMQ, another lightweight messaging platform.
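The asynchronous style can be illustrated with a minimal in-process sketch. The standard-library queue stands in for a broker such as RabbitMQ, and the sentinel-based shutdown is an assumption of this toy example; a real broker handles delivery, acknowledgment, and high availability for you.

```python
# In-process stand-in for a message broker: a producer publishes to a
# queue and a consumer processes messages asynchronously on its own
# thread, so the producer never blocks waiting for the work to finish.
import queue
import threading

bus = queue.Queue()
processed = []

def consumer():
    while True:
        msg = bus.get()
        if msg is None:              # sentinel: shut the consumer down
            break
        processed.append(msg.upper())  # stand-in for real message handling
        bus.task_done()

worker = threading.Thread(target=consumer)
worker.start()

# The producer publishes and returns immediately; it holds no resources
# while the consumer works through the backlog.
for text in ["order placed", "payment received"]:
    bus.put(text)

bus.put(None)
worker.join()
```

After the worker drains the queue, `processed` holds the handled messages in publication order.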
Security is key at this stage. In addition to selecting the right communication protocol, industry standard tools such as AppDynamics may be used to monitor and benchmark the interprocess communication. Any anomalies must be reported automatically to the security team.
When there are thousands of microservices, it does become complex to handle everything. We already discussed how to address such issues through discovery services and API gateways in Chapter 3.
The biggest advantage of transitioning to microservices is that it enables choices. Each team can independently select the language, technology, database, and so on that is the best fit for the given microservice. In a monolithic approach, the team usually does not have this flexibility, so make sure you do not overlook this opportunity.
Even if a team is handling multiple microservices, each microservice must be looked at as a self-contained service and analyzed on its own. Scalability, deployment, build time, integration and plugin compatibility, and so on must be kept in mind when choosing the technology for each microservice. For microservices with smaller data sets but faster access requirements, an in-memory database may be most suitable, whereas others may share the same relational or NoSQL databases.
Implementation is the critical phase; this is where all the training and best practices knowledge comes in handy. Some of the critical aspects to keep in mind include the following:
Independence. Each microservice should be highly autonomous, with its own lifecycle, and treated as such. It needs to be developed and maintained without any dependencies on other microservices.
Source control. A proper version control system must be put in place, and each microservice must follow the agreed standards. Standardizing on a single repository platform is also helpful: it ensures all teams use the same source control and simplifies activities such as code review by keeping all the code accessible in one place. In the long term, it makes sense to have all the services under the same source control.
Environments. All the different environments, such as dev, test, stage, and production, must be properly secured and automated. Automation here includes the build process, so that code can be integrated as required, ideally on at least a daily basis. Several tools are available; Jenkins, for example, is a widely used open source tool that automates the software build and release process, including continuous integration and continuous delivery.
Failsafe. Things can go wrong, and software failure is inevitable. Handling failures of downstream services must be addressed within the microservice itself. Failures of other services must be handled gracefully, ideally so the failure is invisible to the end user. This includes managing service response times (timeouts), handling API changes in downstream services, and limiting the number of automatic retries.
Reuse. With microservices, don’t be shy about reusing code by copy and paste, but do it within limits. This causes some code duplication, but that is better than sharing code that ends up coupling services; in microservices, we want loose coupling, not tight coupling. For example, you will write code to consume the output response from a service, and you can copy this code each time you call that service from a different client. Another way to reuse code is to create common libraries. Multiple clients can use the same library, but each client should then be responsible for maintaining its copy. This can become challenging when you create too many libraries and each client maintains a different version: you may have to include multiple versions of the same library, and the build process may become difficult due to backward-compatibility and similar concerns. Depending on your needs, you can go either way, as long as you control the number of libraries and versions per client and put a tight process around them. This will save you from a lot of code duplication.
Tagging. Given the sheer number of microservices, debugging a problem may become difficult, so you need some kind of instrumentation at this stage. One best practice is to tag each request with a unique request ID and log every one of them. This unique ID identifies the originating request and should be passed by each service to any downstream requests. When you see issues, you can trace back through the logs and identify the problematic service. This solution is most effective if you establish a centralized logging system: all services should log their messages to this shared system in a standardized format, so that teams can replay events as required, all from one place, from infrastructure to application. A shared library for centralized logging is worth looking into, as we previously discussed. Several log management and aggregation tools on the market, such as ELK (Elasticsearch, Logstash, Kibana) and Splunk, are well suited to this.
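The tagging practice can be sketched as follows. The service names and the list standing in for a centralized log store are hypothetical; in a real system each service would attach the ID to an HTTP header or message property and ship its logs to a shared aggregator.

```python
# Sketch of request tagging: the edge service generates a unique request
# ID, and every downstream call logs and forwards that same ID, so one
# request can be traced across services in a centralized log.
import uuid

log = []  # stand-in for a centralized logging system

def log_event(service, request_id, message):
    # Standardized format: every entry carries the originating request ID.
    log.append(f"[{request_id}] {service}: {message}")

def inventory_service(request_id):   # hypothetical downstream service
    log_event("inventory", request_id, "stock checked")

def order_service(request_id):       # hypothetical downstream service
    log_event("order", request_id, "order created")
    inventory_service(request_id)    # propagate the same ID downstream

def handle_request():
    request_id = str(uuid.uuid4())   # tag the originating request
    log_event("gateway", request_id, "request received")
    order_service(request_id)
    return request_id
```

Filtering the centralized log by one request ID then reconstructs that request's entire path through the system.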
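The failsafe practices discussed above (timeouts and bounded retries) can be sketched like this. The `flaky_service` stub and the retry counts are illustrative assumptions, and in real code the timeout would be passed through to the underlying HTTP or messaging client.

```python
# Sketch of a failsafe wrapper: bound each downstream call by a timeout
# and cap the number of automatic retries, so a failing service cannot
# exhaust the caller's resources.

def call_with_retries(operation, max_retries=3, timeout=1.0):
    """Run operation(timeout); retry on failure up to max_retries times."""
    last_error = None
    for attempt in range(max_retries):
        try:
            return operation(timeout)
        except Exception as err:
            last_error = err
    # Retries exhausted: fail fast instead of hanging the caller.
    raise RuntimeError("downstream service unavailable") from last_error

# Hypothetical downstream call that fails twice, then succeeds.
attempts = []
def flaky_service(timeout):
    attempts.append(timeout)
    if len(attempts) < 3:
        raise ConnectionError("service timed out")
    return "ok"
```

Here `call_with_retries(flaky_service)` succeeds on the third attempt; with an always-failing service it would raise after three tries, a point where a circuit breaker (discussed later) would take over.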
Automation is key during deployment. Without it, success with the microservices paradigm would be almost impossible. As we discussed, there may be hundreds to thousands of microservices, and for agile delivery, automation is a must.
Think of deploying thousands of microservices and maintaining them. What happens when one of the microservices goes down? How do you know which machine has enough resources to run your microservices? It becomes very complicated to manage this without automation in place. Various tools, such as Kubernetes and Docker Swarm, can be used to automate the deployment process. Given the importance of this topic, a whole chapter, Chapter 9, “Container Orchestration,” is dedicated to deployment.
The operations part of the process needs to be automated as well. Again, we are talking about hundreds to thousands of microservices—organizational capabilities need to mature enough to handle this level of complexity. You’ll need a support system, including the following:
Monitoring. From infrastructure to application APIs to last-mile performance, everything should be monitored, and automatic alerts with proper thresholds should be put in place. Consider building live dashboards with data and alerts that pop up during issues.
On-demand scalability. With microservices, scaling is one of the simplest tasks: provision another instance of the microservice you want to scale, put it behind the existing load balancer, and you are all set. But in a scaled environment, this too needs to be automated. As we will discuss later, it is a matter of setting an integer value that specifies the number of instances you want to run for a particular microservice.
API exposure. In most cases, you will want to expose APIs to external consumers. This is best done using an edge server, which can handle all the external requests. It can utilize the API gateway and discovery service to do its job, and you can use one edge server per device type (e.g., mobile, browser) or per use case. Zuul, an open source application created by Netflix, can be used for this function and beyond.
Circuit breaker. Sending a request to a failed service is pointless. Hence, you can build a circuit breaker that tracks the success and failure of every request made to each service. After multiple failures, all requests to that particular service should be blocked (the circuit opens) for a set time. After the set time expires, another attempt should be made, and so on. Once a response is successful, close the circuit again. This should be done at the service-instance level. Netflix’s Hystrix provides an open source circuit-breaker implementation.
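The scaling model described under on-demand scalability can be sketched as a toy round-robin balancer. In practice an orchestrator and a real load balancer handle instance registration and health checks, but the core idea carries over: scaling out is just adding another instance to the pool.

```python
# Toy round-robin load balancer: scaling a microservice out amounts to
# registering one more instance address in the pool.

class LoadBalancer:
    def __init__(self):
        self.instances = []
        self._next = 0

    def add_instance(self, address):
        # "Provision another instance and put it behind the balancer."
        self.instances.append(address)

    def route(self):
        # Pick instances in round-robin order.
        if not self.instances:
            raise RuntimeError("no instances available")
        address = self.instances[self._next % len(self.instances)]
        self._next += 1
        return address
```

With two registered instances, successive calls to `route()` alternate between them, spreading the load evenly.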
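The circuit-breaker behavior just described can be sketched as follows, assuming a configurable failure threshold and cool-down period. Production implementations such as Hystrix add half-open trial probes, metrics, and thread isolation on top of this basic state machine.

```python
# Sketch of a per-instance circuit breaker: after a threshold of
# consecutive failures the circuit opens and calls are blocked for a
# cool-down period; a later successful call closes it again.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0,
                 clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.clock = clock                # injectable for testing
        self.failures = 0
        self.opened_at = None             # None means the circuit is closed

    def call(self, operation):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_timeout:
                # Circuit open: fail fast without hitting the service.
                raise RuntimeError("circuit open: request blocked")
            # Cool-down expired: let one trial request through.
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = self.clock()   # open the circuit
            raise
        self.failures = 0                 # success closes the circuit
        self.opened_at = None
        return result
```

While the circuit is open, callers get an immediate error instead of waiting on a dead service; after the cool-down, one successful trial call restores normal traffic.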