
Tech Primer: Microservices

By Kong Yang  /  November 6, 2017


Enterprise applications are focusing more on giving customers the best possible end-user experience. As such, lightweight, scalable, and cost-effective technologies, such as Functions as a Service (FaaS), microservices, and containers, have stepped in to help. In this tech primer, I’ll discuss the role that microservices play in helping organizations achieve scalability, availability, and enhanced agility.

Microservices are a method of developing software applications that are made up of independently deployable, modular services. Each microservice runs a unique process and communicates through a well-defined, lightweight mechanism (such as an HTTP-based API) to serve a business goal.
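To make that definition concrete, here is a minimal sketch in Python of a single-purpose service that does one thing (returns a greeting) and communicates over a lightweight HTTP API. The service name and payload are invented for illustration; a real deployment would use a proper web framework.

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

class GreetingHandler(BaseHTTPRequestHandler):
    """A single-purpose microservice: it serves greetings and nothing else."""
    def do_GET(self):
        body = json.dumps({"service": "greeting", "message": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to port 0 so the OS picks any free port, and serve in the background.
server = HTTPServer(("127.0.0.1", 0), GreetingHandler)
Thread(target=server.serve_forever, daemon=True).start()

# A consumer talks to the service only through its well-defined HTTP interface.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
print(payload["message"])  # -> hello
server.shutdown()
```

Because the consumer depends only on the HTTP contract, the service behind it can be redeployed or rewritten independently, which is the point of the architecture.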

What to Consider Before You Deploy

Before deploying any microservice, you must first know when to use it. Ideally, you would implement microservices when a monolithic, singular application has become too complicated to manage by itself. Microservices are also helpful when you need stateless compute on demand. Primary use cases for microservices include CPU- and memory-intensive application parts. Essentially, this is when you want the microservice to handle “spike and die” behavior: a sharp spike in CPU to manage a query or action, followed by the removal of the service. Examples of such CPU- and memory-intensive application parts include text analysis tools, natural language processing (NLP), search functionality, and streaming functions.
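The “spike and die” pattern can be sketched in a few lines: capacity is provisioned only when a burst of work arrives and is torn down as soon as the burst is handled. The word-count task below is a stand-in for a heavier job such as text analysis.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(text):
    """Stand-in for a CPU-/memory-intensive task such as text analysis."""
    return len(text.split())

burst = ["one small query", "a slightly longer query arrives", "another one"]

# "Spike": workers are provisioned only when the burst of queries arrives...
with ThreadPoolExecutor(max_workers=len(burst)) as pool:
    word_counts = list(pool.map(analyze, burst))
# "...and die": leaving the with-block shuts the workers down again.

print(word_counts)  # -> [3, 5, 2]
```

In production the "workers" would be container instances scaled by an orchestrator rather than threads, but the lifecycle is the same: scale up for the spike, do the work, remove the capacity.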

While many applications seemingly include these memory-intensive application requirements, it’s important to keep in mind that these lightweight tech constructs are not a silver bullet for all enterprises. As your monolithic, singular application becomes too complex to manage as one unit, you should begin to explore how microservices can help you manage complexities in the following categories:

  • People: When you have very large and/or distributed teams contributing to a particular piece of code, too many cooks in the kitchen can invite difficulties.
  • Multitenancy: For efficiency, there are typically multiple tenants consuming the application. The application then becomes a pool of shared resources, which increases the likelihood that you’ll experience performance degradation or other “noisy neighbor” effects.
  • Multiple user interaction models: If your application has multiple forks depending on what the end-user does, microservices can be a great solution. For example, when you open Netflix®, you’re presented with different content recommendations based on your previous consumption.
  • Ongoing app evolution: Microservices can also step in when you want to break up application functionalities so that each piece can evolve independently. This is particularly useful if an application is used across business units in an organization. A loosely coupled architecture allows you to build a more agile and scalable experience for end-users.
  • Scalability: Microservice architecture allows you to take a service that’s already been coded and continue to apply it across different applications. Because microservices are best-suited to complete one application part and spin down, you can continue to leverage the code in the development cycle without worrying about upgrades or downtime.

What Does It Mean for IT Professionals?

These considerations may seem intricate and multifaceted, but microservices are, at their core, simply another delivery mechanism, one that can be used to scale and to improve time to market. Rather than months-long or even years-long application life cycles, microservices can shorten release cadence to multiple deployments per day, with individual microservices living for only micro- to milliseconds.

On top of the faster time to market, you also need to provision compute and memory resources for your microservice to execute on. Enter lightweight containers. Because microservices are so short-lived, you’ll want to run them in containers; virtual machines (VMs) are overprovisioned for these needs. In other words, the symbiotic nature of microservices and containers means you can quickly provision infrastructure services, let the microservice run, and then de-provision the container so it retires cleanly.
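That provision–run–retire lifecycle maps naturally onto a context-manager pattern. The sketch below models it with invented names (an event log stands in for an orchestrator); the key property is that the “retire” step runs even if the service inside fails, so the container always exits cleanly.

```python
from contextlib import contextmanager

events = []  # stands in for an orchestrator's event log

@contextmanager
def container(image):
    """Hypothetical lifecycle: provision, hand control to the microservice, retire."""
    events.append(f"provision {image}")
    try:
        yield
    finally:
        events.append(f"retire {image}")  # runs even if the service inside fails

with container("report-service:1.0"):
    events.append("run report-service")

print(events)
```

With a real engine the equivalent would be launching a container with an auto-remove flag so the runtime de-provisions it as soon as the process exits.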

The best way to manage these interactions is to fortify your automation skills and orchestrate these workflows. Further, you need visibility into your resources to help ensure that the workflows are running smoothly. Even though the resources are lightweight, they still require network and server infrastructure services, so visibility is key. This is especially the case if you’re working with availability, redundancy, resiliency, and interconnectivity, all of which will allow you to measure the performance and health of the application that’s formed from these loosely coupled services.

Best Practices for Working with Microservices

As you consider whether your application is a candidate for service-oriented architecture, the following best practices can help you effectively deploy and manage microservices in your application.

  • Deploy Containers: If the microservice runs in a container (Docker®, for example), the toolset would be Docker and the Docker Engine. This toolset lets you focus on your application’s continuous delivery, and with continuous integration, you can ignore dependencies on hardware and firmware.
  • Maintain Code Maturity; Avoid Frankenstein Apps: It can take a tremendous number of microservices to form an application, but you need to maintain a similar level of code maturity across all microservices. This means that if you want to add code to a microservice that’s been deployed and is working well, the best practice is to create a new microservice for the new code and leave the existing microservice in place. This process—known as the immutable infrastructure principle—allows you to avoid what are affectionately called “Frankenstein applications.” This principle works well because it allows you to continuously test your new functionality in a different environment. If it fails, you can simply test it again without affecting the existing microservice’s performance. When it’s finally bug-free, you can expose it to your end-users and maintain consistency and availability.
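The immutable-infrastructure step above can be sketched as code. All names here (the service registry, versions, smoke test) are invented for illustration; the point is that the running version is never patched in place — a new version is deployed alongside it, tested in isolation, and only then given traffic.

```python
# A toy service registry: version tag -> handler. search:v1 is the live service.
deployed = {"search:v1": lambda q: f"v1 results for {q}"}
live_route = "search:v1"

def deploy_immutably(name, handler, smoke_test):
    """Deploy a NEW service next to the old one; never modify the running version."""
    global live_route
    deployed[name] = handler       # the old version stays untouched
    if smoke_test(handler):        # exercise the new version in isolation first
        live_route = name          # only then expose it to end-users
    else:
        del deployed[name]         # a failed build is discarded, not patched

deploy_immutably(
    "search:v2",
    lambda q: f"v2 results for {q}",
    smoke_test=lambda h: "results" in h("ping"),
)

print(live_route)                   # traffic now goes to search:v2
print(deployed[live_route]("cats"))
```

Note that `search:v1` is still deployed and untouched after the cutover, so rolling back is just pointing the route back at it.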
  • Maintain Separate Builds: Similar to consistent code maturity levels, maintaining separate builds allows you to have multiple microservices that rely on component files from a repository under your control. This also allows you to more easily decommission code bases.
  • Create a Separate Data Store; Treat Servers as Stateless: These two best practices have come under debate in the industry and are continually evolving as discussions take place. Some startups advocate for creating a separate data store for each microservice. In other words, each microservice owns its own back-end storage; while that store is a single point of failure for the service, a hardware failure there won’t impact the rest of the application. The other best practice is to treat each of the servers as stateless. Because microservices and containers are so lightweight and loosely coupled, a failure won’t take down the application; instead, the user simply triggers the interaction again. Other startups advocate for moving the industry away from ephemeral (stateless) containers toward persistent ones, which are essentially VMs. The problem with this is that it requires one container to run multiple microservices, which can create confusion during any troubleshooting exercise.
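The stateless point in the last bullet — a failed instance is recovered by simply triggering the interaction again — can be shown with a small retry sketch. The failing service and its failure sequence are invented for the example; because no session state lives on the instance, a retry is a clean restart rather than a recovery procedure.

```python
# Scripted failure sequence: the first two calls hit an instance that has died.
failures = iter([True, True, False])

def flaky_stateless_service(request):
    """Stateless: no session survives between calls, so a retry is a clean restart."""
    if next(failures):
        raise ConnectionError("instance lost")
    return f"handled {request}"

def call_with_retry(request, attempts=5):
    for _ in range(attempts):
        try:
            return flaky_stateless_service(request)
        except ConnectionError:
            continue  # just trigger the interaction again on a fresh instance
    raise RuntimeError("service unavailable")

result = call_with_retry("GET /orders/42")
print(result)  # -> handled GET /orders/42
```

If the service held state on the instance (a persistent container), this simple retry would not be safe: the caller would first have to reconstruct or migrate whatever the lost instance was holding.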

Microservices introduce the possibility of bringing new functionalities and improvements to your applications. They also offer the opportunity for IT professionals to expand their skill set in a rapidly changing landscape. Being able to see into these components is critical to the performance and health of your applications, and a healthy application means that business goals are able to move forward unencumbered.

About the author

Kong Yang is Head Geek at SolarWinds. Kong has over 20 years of IT experience specializing in virtualization and cloud management. He is a VMware vExpert™, Cisco® Champion, and active contributing thought leader within the virtualization community. Yang’s industry expertise includes performance tuning and troubleshooting enterprise stacks, virtualization sizing and capacity planning best practices, community engagement, and technology evangelism. Yang is passionate about understanding the behavior of the entire application ecosystem — the analytics of the interdependencies as well as qualifying and quantifying the results to support the organization’s bottom line. He focuses on virtualization and cloud technologies; application performance; hybrid cloud best practices; vehicles for IT application stacks such as containers, hypervisors, and cloud-native best practices; DevOps conversations; converged infrastructure technologies; and data analytics. Follow Kong Yang and SolarWinds on Twitter and LinkedIn.
