
Blog May 4, 2018

Application Routing 0-to-60: A quick dive into microservices, Spring Cloud Gateway and Istio. Let's go!

Bradley Barrett

Whether you are new to microservices or a veteran of distributed applications, I hope this post adds value. Even if you have no knowledge of microservices, read on! We'll build up to the concepts needed to understand the latest routing technology. If this is not your first microservices rodeo, feel free to read on for a refresher or skip ahead to the Spring Cloud Gateway and Istio content later in the post. With that out of the way, let's dive in!

Contents

  1. Intro to Microservices and Application Routing
  2. Current State of Application Routing Technology
     2.1 Netflix microservice tools
     2.2 Areas of improvement for routing with Netflix tools
  3. Spring Cloud Gateway for External Routing
  4. Istio Side-car Mesh for Internal Routing
     4.1 A quick aside on containers and orchestration
     4.2 Side-car mesh pattern and Istio
  5. Future Routing Architecture and Further Reading
  6. Appendix

1. Intro to Microservices and Application Routing

A microservices architecture is a distributed architecture. One large codebase, known as a monolith, is broken up into many individual pieces. These individual pieces are known as microservices. Each microservice is independent of the other application pieces and has its own data store. Microservices communicate with one another over the network via HTTP REST calls or queues. In a microservices architecture, the monolith becomes a distributed network of pieces as shown below.

Figure 1. Monolith versus distributed architecture.

With the right cloud infrastructure in place, the number of running instances for each microservice can independently increase and decrease with changes in demand. As a result, the application could have multiple instances of each microservice up and running as shown in Figure 2 below.

Figure 2. Distributed with multiple instances of each microservice application.

However, adding multiple instances of each service introduces some additional complexity. The application must now determine how to route networking requests to and within the distributed application. In this case, there are two types of routing: external (north-south) and internal (east-west). External routing directs traffic between the external world and the distributed application. Internal routing directs traffic between microservices that occurs within the distributed application. In the diagram below, the gateway sits at the edge of the distributed application and routes external requests to each cluster of microservices. The load balancers sit in front of each microservice cluster and determine which application instance should receive the incoming request.

Figure 3. Gateway for external routing and load balancers for internal routing.

Note that the diagram in Figure 3 effectively communicates the load balancing concept, but it is not entirely accurate when tracing the flow of communication for a single request from an end user. See Figure 4 in the appendix for a diagram which shows a more accurate flow of communication for an example request.

2. Current State of Application Routing Technology

Before diving into the latest tools in routing, let's take a quick look at the current technology and identify areas for potential improvement. Netflix is the pioneer of the modern-day microservices architecture and has open-sourced a number of tools for addressing the challenges that come with building effective microservices. We will begin evaluating the current state of application routing by looking at some of the Netflix microservice tools. It should be noted that routing is only one of many challenges that come with creating an effective microservices architecture, but those other exciting challenges will be saved for further reading. Let's take a look at the Netflix microservice tools: Eureka, Ribbon, and Zuul.

2.1 Netflix microservice tools

Eureka acts as a phone-book, or registry, for the running instances of each microservice. Once a service registers with Eureka, other services can discover its address and send requests to the service.
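The register-then-discover flow can be sketched with a plain map from service name to instance addresses. This is a toy stand-in for the concept only; the class and method names below are illustrative, not Eureka's actual API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy service registry illustrating the Eureka register/discover flow.
class ServiceRegistry {
    private final Map<String, List<String>> instances = new HashMap<>();

    // A service instance announces itself under its service name.
    void register(String serviceName, String address) {
        instances.computeIfAbsent(serviceName, k -> new ArrayList<>()).add(address);
    }

    // A client looks up all known instances for a service name.
    List<String> discover(String serviceName) {
        return instances.getOrDefault(serviceName, List.of());
    }
}
```

A caller would then pick one of the discovered addresses; that selection step is exactly what Ribbon handles.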

Ribbon acts as a load balancer for services registered in Eureka. The load balancer can be configured with a number of different selection algorithms for a set of registered services. Some examples of simple selection strategies include round-robin or random selection.
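Round-robin, the simplest of these strategies, can be sketched in a few lines of plain Java. This is illustrative only, not Ribbon's actual interfaces.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin selection over a list of instance addresses,
// in the spirit of Ribbon's simplest strategy.
class RoundRobinSelector {
    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinSelector(List<String> instances) {
        this.instances = instances;
    }

    // Each call hands out the next instance, wrapping around at the end.
    String choose() {
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```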

Zuul acts as the gateway for the distributed application and determines which cluster of load-balanced copies should receive the incoming, external request. The big difference between Ribbon and Zuul is the degree to which the selection strategies can be customized. Zuul can look at the data within the request and allows routing rules to be configured using that data. For example, all incoming traffic from mobile devices could be routed to one cluster of microservices and all traffic from web browsers could be routed to another.
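The mobile-versus-browser split described above boils down to a rule that inspects a request header and picks a target cluster. The sketch below illustrates that idea in plain Java; the cluster names are hypothetical and this is not Zuul's actual filter API.

```java
// Illustrative gateway rule: route by User-Agent header, a sketch of
// Zuul-style content-based routing.
class DeviceRouter {
    // Hypothetical cluster names for the two microservice groups.
    static final String MOBILE_CLUSTER = "mobile-cluster";
    static final String WEB_CLUSTER = "web-cluster";

    static String routeFor(String userAgent) {
        // Real gateways can match on richer request data; a header check
        // is enough to show the idea.
        if (userAgent != null && userAgent.contains("Mobile")) {
            return MOBILE_CLUSTER;
        }
        return WEB_CLUSTER;
    }
}
```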

Combined with the proper Eureka configuration, this allows for things like canary or shadow deployments. In a canary deployment, a certain percentage of user traffic is routed to an additional microservice cluster with new code changes. The development team can test out new code changes on a small set of users in production and observe performance before updating all microservice instances with the changes. Similarly, a shadow deployment replicates incoming requests and sends them to the service instances with the latest code changes. In a shadow deployment, the code under test can experience the full load of production traffic without affecting user experience.
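One common way to implement the "certain percentage of user traffic" part of a canary deployment is to hash a stable user identifier into a percentage bucket, so each user lands consistently on either the canary or the stable cluster. The sketch below shows the idea; it is not how any particular gateway implements it.

```java
// Sketch of a canary split: hash a stable user id into a bucket in
// [0, 100) so each user consistently hits the canary or stable cluster.
class CanarySplit {
    private final int canaryPercent; // e.g. 5 means roughly 5% of users

    CanarySplit(int canaryPercent) {
        this.canaryPercent = canaryPercent;
    }

    boolean routeToCanary(String userId) {
        // floorMod keeps the bucket non-negative even for negative hash codes.
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < canaryPercent;
    }
}
```

Because the decision is a pure function of the user id, the same user always sees the same version, which keeps canary sessions consistent.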

2.2 Areas of improvement for routing with Netflix tools

The Spring framework did a great job tying all the Netflix tools into a single Java application framework. If the microservice application code is written using Spring and Java, then all three of these Netflix tools can be easily integrated and configured within the microservices application.

Eureka, Ribbon, and Zuul combined with the Spring framework form a solid routing solution for a microservices architecture. However, there are still some areas for improvement:

  1. The thread management strategy used by the current version of Zuul (Zuul 1) is implemented with a blocking API. Threads requesting resources in use must wait until those resources become available. Such an implementation does not work well with long-lived connections. It would be great to have a non-blocking gateway implementation that is also incorporated in the Spring Framework.
  2. The Spring framework and Java need to be used to avoid integration overhead when using Netflix tools. As a result, Zuul, Ribbon, and Eureka are somewhat application code dependent. If your microservice applications are not Spring based with Java, then integrating with Netflix tools is more challenging. The Netflix tools are not exactly plug-and-play out of the box. If we want to have services written in other languages, then integration with the Netflix tools can be tricky.
  3. The Ribbon load balancer has a number of different selection strategies to choose from, but what if more precise internal routing rules are desired to make canary and shadow deployments easier? We could try to place a Zuul gateway in front of every microservice cluster, but that would be cumbersome and heavyweight. It would be great if there was something lightweight that can run as part of the service application code or alongside it.
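The blocking-versus-non-blocking distinction in point 1 can be illustrated with plain JDK classes: a blocking call parks the calling thread until the response arrives, while a non-blocking call registers a callback and frees the thread for other work. This CompletableFuture sketch shows the contrast; it is not Zuul or gateway internals.

```java
import java.util.concurrent.CompletableFuture;

// Contrast between blocking and non-blocking styles of handling a
// slow downstream call.
class NonBlockingSketch {
    // Stand-in for a slow downstream service call.
    static CompletableFuture<String> fetchAsync() {
        return CompletableFuture.supplyAsync(() -> "payload");
    }

    static String blockingStyle() {
        // The calling thread is parked here until the result is ready.
        return fetchAsync().join();
    }

    static CompletableFuture<String> nonBlockingStyle() {
        // The transformation runs whenever the result arrives;
        // no thread sits idle waiting for it.
        return fetchAsync().thenApply(body -> "wrapped:" + body);
    }
}
```

With long-lived connections, the blocking style pins a thread per open connection, which is exactly the limitation of Zuul 1 noted above.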

3. Spring Cloud Gateway for External Routing

Spring Cloud Gateway addresses issue #1 for external routing, along with offering other benefits over Zuul 1. The tool was created by Pivotal, the same team that produces the Spring Framework. The ultimate goal of Spring Cloud Gateway is to be an improved replacement for Zuul 1 for external routing in the Spring Framework. The improvements offered by Spring Cloud Gateway include:

  • It is implemented with a non-blocking API built on top of Project Reactor and performs well with long-lived connections by returning response payloads as they become available.
  • Routing rules that have access to the entire request: header, URL, path parameters, body, etc. With routing rules specified as Java application code, the entire request payload can be parsed and modified as determined by the configured rules.
  • Routing rules can also be dynamically changed at runtime through Spring's Actuator API. The ability to change routing rules on the fly accommodates the addition of new services and makes it easier to set up production tests like canary deployments.

An additional benefit is that Spring Cloud Gateway is already part of the Spring Framework. Applications currently built on top of the Spring Framework can easily adopt the new external routing implementation. Both the precise filtering capability and the ability to customize routing rules in Java code are impressive. Additional details on how to integrate with Spring Cloud Gateway and configure rules can be found in the links in section five of this post.
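To give a flavor of what routing rules as Java code look like, the configuration sketch below assumes the spring-cloud-starter-gateway dependency and uses hypothetical cluster URIs. It sends requests whose User-Agent header looks mobile to one cluster and everything else to another, mirroring the earlier Zuul example.

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class GatewayRoutes {
    // The cluster URIs below are hypothetical placeholders.
    @Bean
    RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
            // Requests whose User-Agent matches the regex go to the mobile cluster.
            .route("mobile", r -> r.header("User-Agent", ".*Mobile.*")
                .uri("http://mobile-cluster:8080"))
            // Everything else falls through to the web cluster.
            .route("web", r -> r.path("/**")
                .uri("http://web-cluster:8080"))
            .build();
    }
}
```

Because predicates and filters are ordinary Java, a route can inspect or modify any part of the request before forwarding it.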

4. Istio Side-car Mesh for Internal Routing

Istio addresses issues #2 and #3 from the routing areas for improvement identified earlier. Istio was created by Google, IBM, and Lyft, and implements a side-car pattern built on top of Google's container orchestration tool, Kubernetes, and Lyft's container proxy, Envoy. One of Istio's goals is to bring fine-grained routing controls to internal application traffic, but to do so with a design that is independent of application code. That independence allows Istio to integrate easily with microservices written in many different programming languages. Before we dive into the side-car mesh pattern, let's take a quick look at containers and container orchestration.

4.1 A quick aside on containers and orchestration

At a high level, think of a container as a crate with a label explaining how the crate should be handled. Both the label and the crate have a well-defined format. As long as the shipper has the proper equipment to handle the crate and knows how to interpret the label, the sender can be confident that the contents of the crate will be handled correctly. There is some set-up work required for the handler to have all the appropriate resources to handle the crate. However, as long as this set-up is easy and takes advantage of resources already available to most handlers, then almost anyone can handle the crate.

Containers provide operating system (OS) level virtualization with a container file format (the crate and label) and an engine to run the container file on the host operating system (the equipment for handling the crate). Application code is packaged up in a container file which can be run on any computer with the right container engine, operating system and available resources as defined on the crate label. Containers are great for running applications in a cloud environment where cloud providers have a lot of available computers. These computers also come with popular operating systems and support for easily installing container engines. With the cloud and the right container technology, we can run our application code almost anywhere, regardless of the implementation language. In particular, Docker has become the preferred industry tool for containers and is a good place to start to learn more.

However, once our application containers are all up and running in the cloud, how do we control them? When we want to deploy new microservices or horizontally scale existing services, how can these processes be managed and automated for our distributed application cluster? If a container fails, can we seamlessly start up a new one to take its place? These types of container management tasks are handled by orchestration tools. Just as Docker has become the go-to tool for containerization, Kubernetes is becoming the industry-preferred tool for Docker container orchestration. While the current version of Istio does depend on both Docker and Kubernetes to achieve its application code independence, the burden of this dependency looks to be minimal as both tools are becoming the industry standard.

To quickly recap, containers are labeled crates which allow applications written in many programming languages to run on the cloud. Container orchestration tools manage the containers running in the cloud. With the intro to containers and container orchestration complete, let's return to Istio and its side-car mesh pattern for controlling internal application routing.

4.2 Side-car mesh pattern and Istio

In a side-car pattern, each microservice container (the primary-car) is provided one additional container that runs alongside it (the side-car). In the case of Istio, the side-car is a Docker container which is provisioned and assigned by the orchestration tool Kubernetes. The code running in each side-car is an instance of Lyft's Envoy proxy. The proxy allows the side-car to intercept all the inbound and outbound network traffic of the primary-car. With all the network requests and responses in hand, the side-car proxy can use the header information to determine where to route each request. The side-cars all know how to find one another and form a connected mesh where all traffic passes through the side-car proxies before reaching the primary-car microservices. With all the side-car proxies in place, routing between the microservices can easily be configured by changing the routing rules used by each side-car.

Istio is a tool built on top of Kubernetes that makes it easy to: 1) add Envoy side-car proxies to a group of Dockerized applications and 2) manage the routing rules used by each side-car. Side-car configuration is handled by Pilot within the Istio Control Plane. With Pilot, routing rules can be updated on the fly by modifying simple configuration files. The Control Plane has three pieces in total: Pilot, Mixer, and Istio-Auth, but Pilot is the most relevant for internal routing rules. The addition of side-car proxies to every microservice does impact the response time for requests. In the current version of Istio, response times are increased by 2-3%. However, Istio is still pre-1.0, and the Istio team hopes to reduce that performance hit to less than 1% in later release versions.
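The effect Pilot provides, changing where traffic goes without touching service code, can be sketched as a proxy that consults a mutable rules table on every request. This is a conceptual Java sketch only, not Istio's or Envoy's actual interfaces, and the service and cluster names are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Conceptual side-car routing: every request passes through a proxy
// that looks up its destination in a rules table. Updating the table
// (as Pilot does from configuration files) changes routing at runtime
// without touching the service itself.
class SidecarProxy {
    // service name -> upstream cluster
    private final Map<String, String> rules = new ConcurrentHashMap<>();

    // A Pilot-style dynamic rule update.
    void updateRule(String service, String cluster) {
        rules.put(service, cluster);
    }

    // Route a request for a service; fall back to a default cluster.
    String route(String service) {
        return rules.getOrDefault(service, service + "-default");
    }
}
```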

Finally, let's revisit the routing improvements offered by Istio and its side-car mesh pattern.

  • The side-car proxy pattern provides both lightweight and application code independent routing for internal traffic.
  • As a Docker container, the side-car can integrate with any microservice running inside a Docker container.
  • The side-car is an Envoy proxy that can route traffic based on information in the request header. Header-based routing is less fine-grained than Spring Cloud Gateway, which has access to the entire request, but it covers most routing needs and keeps the Envoy proxy lightweight.

5. Future Routing Architecture and Further Reading

The teams behind Spring Cloud Gateway and Istio are in conversation, meaning that a microservices architecture combining the two could be coming in the future. Routing is a challenging problem for microservices, but there are powerful tools to tackle both internal and external routing for distributed applications. It is exciting to see tools like Spring Cloud Gateway and Istio deliver powerful routing rules, increasingly seamless integration, and real-time configuration. I hope you have enjoyed this initial dive into microservices and routing. See the links below to learn more and read some of the great source material that fueled this post.

Spring Cloud Gateway

Istio

Microservice Challenges and Netflix Tools

6. Appendix

Figure 4 traces the communication flow for a single request from an end user. In this example, User 3 makes a request to a service of App 1. As part of its implementation, the service in App 1 makes a call to another service in App 3 for information. The communication flow is as follows:

  1. User 3 makes a request to the App 1 service.
  2. The Gateway routes the request to the appropriate load-balanced cluster for App 1.
  3. Load Balancer 1 routes the request to a particular instance of App 1.
  4. App 1 reaches out to its data source, DB 1, for information needed to service the request.
  5. App 1 makes a request to an App 3 service for additional information needed to service the request.
  6. Load Balancer 3 routes the request from App 1 to a particular instance of App 3.
  7. App 3 reaches out to its data source, DB 3, for information needed to service the request.
  8. The App 3 service returns the information requested by App 1.
  9. App 1 uses the data received from DB 1 and App 3 to create the response for User 3. App 1 returns the response for the request from User 3.

Figure 4. Communication flow for a single request from an end user.