Explaining Microservices and Service Mesh with Istio

When an application is broken down into multiple smaller service components, those components are known as Microservices. Compared to the traditional Monolithic approach, a Microservice Architecture treats each microservice as a standalone entity/module, which eases the maintenance of its code and related infrastructure. Each microservice of an application can be written in a different technology stack, and can be deployed, optimized and managed independently.

Though in theory a Microservice Architecture particularly benefits the building of complex, large-scale applications, it is also widely used for small-scale applications (for example, a simple shopping cart) that are built with an eye to scaling further.

Benefits of a Microservice Architecture

  • Individual microservices within an application can be developed and deployed through different technology stacks.
  • Each microservice can be optimized, deployed or scaled independently.
  • Better fault handling and error detection.

Components of a Microservice Architecture

A modern cloud-native application running on Microservice Architecture relies on the following critical components –

  • Containerization (through platforms like Docker) – for effective management and deployment of services by breaking them into multiple processes.
  • Orchestration (through platforms like Kubernetes) – for configuration, assignment and management of available system resources to services.
  • Service Mesh (through platforms like Istio) – for inter-service communication through a mesh of service proxies to connect, manage and secure microservices.

The above three are the most important components of a Microservice Architecture which allow applications in a cloud-native stack to scale under load and perform even during partial failures of the cloud environment.
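
To make the first two components concrete, here is a minimal sketch of a single containerized microservice handed to Kubernetes for orchestration. The service name, image and ports are hypothetical; only the Deployment and Service resource types come from Kubernetes itself. The Service Mesh layer is added on top of this in the sections that follow.

```yaml
# Minimal sketch: a hypothetical "payments" microservice packaged as a
# Docker image and deployed on Kubernetes. Names, image and ports are
# illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 2                       # Kubernetes keeps two copies running
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: registry.example.com/payments:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
# A Service gives the microservice a stable name other services can call.
apiVersion: v1
kind: Service
metadata:
  name: payments
spec:
  selector:
    app: payments
  ports:
    - port: 80
      targetPort: 8080
```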

Complexities of a Microservice Architecture

A large application broken down into multiple microservices, each using a different technology stack (language, database, etc.) and requiring its own environment, forms a complex architecture to manage. Though Docker containerization helps manage and deploy individual microservices by running each as one or more processes in separate Docker containers, inter-service communication remains critically complicated: you still have to deal with overall system health, fault tolerance and multiple points of failure.

Let us understand this by looking at how a shopping cart works on a Microservice Architecture. The microservices here would include the inventory database, the payment gateway service, the product suggestion algorithm based on the customer’s access history, and so on. While each of these services is theoretically a stand-alone mini-module, they still need to interact with each other. It is important to note that service-to-service communication is what makes microservices possible.

Why do we need a Service Mesh?

Now that you know the importance of service-to-service communication in a Microservice Architecture, it becomes apparent that the communication channel must remain fault-free, secure, highly available and robust. This is where a Service Mesh comes in as an infrastructure component: it ensures controlled service-to-service communication by implementing multiple service proxies. A Service Mesh is responsible for fine-tuning communication among different services rather than adding new functionality.

In a Service Mesh, deploying proxies alongside individual services to enable inter-service communication is widely known as the Sidecar Pattern. The sidecars (proxies) can be designed to handle any functionality critical to inter-service communication, such as load balancing, circuit breaking, service discovery, etc.
Service Mesh Sidecar Pattern
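
As a rough sketch of the Sidecar Pattern on Kubernetes, the pod below runs the application container next to an Envoy-based proxy container. The names, images and ports are assumptions, and in a real mesh such as Istio the proxy container is normally injected automatically rather than written by hand.

```yaml
# Illustrative only: one pod, two containers, namely the microservice and
# its sidecar proxy. A real proxy would also carry its own configuration.
apiVersion: v1
kind: Pod
metadata:
  name: payments-with-sidecar
spec:
  containers:
    - name: payments                      # the microservice itself
      image: registry.example.com/payments:1.0
      ports:
        - containerPort: 8080
    - name: sidecar-proxy                 # the service mesh proxy (e.g. Envoy)
      image: envoyproxy/envoy:v1.27.0     # image tag is an assumption
      ports:
        - containerPort: 15001            # traffic in and out of the pod flows via the proxy
```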

Through a Service Mesh, you can –

  • Maintain, configure and secure all service-to-service communication among all or selected Microservices of an Application.
  • Configure and perform network functions within Microservices, such as network resiliency, load balancing, circuit breaking, service discovery, etc. (see the sketch after this list).
  • Keep network functions implemented and maintained as a separate entity from the Business Logic, fulfilling the need for a dedicated service-to-service communication layer decoupled from the application code.
  • Let developers focus on the Application’s Business Logic, while all or most of the work related to network communication is handled by the Service Mesh.
  • Use any technology to develop individual services, since communication between a Microservice and its Service Mesh proxy always runs on top of standard protocols such as HTTP/1.x, HTTP/2, gRPC, etc.
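
As an illustration of such network functions being configured outside the application code, the sketch below uses Istio’s DestinationRule resource to set a load-balancing policy and circuit-breaker-style outlier detection for the hypothetical payments service from earlier. The host name and the numeric values are examples, not recommendations.

```yaml
# Sketch of mesh-level traffic policy for a hypothetical "payments" service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: payments
spec:
  host: payments                    # the Kubernetes Service the policy applies to
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN           # load balancing handled by the sidecar proxies
    connectionPool:
      http:
        http1MaxPendingRequests: 100
    outlierDetection:               # circuit breaking: eject hosts that keep failing
      consecutive5xxErrors: 5
      interval: 10s
      baseEjectionTime: 30s
```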

Components of a Service Mesh Architecture

Business Logic
This contains the core application logic and the underlying code of a microservice. The business logic also holds the application’s computation as well as the service-to-service integration logic. Thanks to the Microservice Architecture, the business logic can be written on any platform and remains completely independent of the other services.

Primitive Network Functions
This includes the basic network functions a microservice uses to initiate a network call and connect with the service mesh sidecar proxy. Though the major network functions among Microservices are handled by the Service Mesh, a given service must still contain the basic network functions needed to connect with its sidecar proxy.

Application Network Functions
Unlike the Primitive Network Functions, this component maintains and manages critical network functions through a service proxy, including circuit breaking, load balancing, service discovery, etc.

Service Mesh Control Plane
All service mesh proxies are centrally managed and controlled by a Control Plane. Through the Control Plane, you can specify authentication policies, set up metrics generation and configure service proxies across the entire mesh.
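
For instance, one kind of policy a control plane can push to every proxy is an authentication requirement. The sketch below uses Istio’s PeerAuthentication resource to require mutual TLS for all workloads in a hypothetical namespace; no individual service has to change for this to take effect.

```yaml
# Sketch: require mutual TLS between sidecars in the "shop" namespace
# (the namespace name is hypothetical).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: shop
spec:
  mtls:
    mode: STRICT                    # reject plain-text service-to-service traffic
```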

Implementing Service Mesh with Istio

While there are several Service Mesh implementations, we will explore the most popular one, Istio, and how it can be used to implement a Service Mesh architecture for a cloud-native application.

In a Microservice Architecture, Istio forms the infrastructure layer that connects, secures and controls communication among distributed services, as described in the sections above. Istio deploys an Istio proxy (called an Istio sidecar) next to each service, with few or no code changes to the service itself. All inter-service traffic is directed through these proxies, which use policies to control inter-service communication while also implementing essential policies for deployments, fault injection and circuit breaking.
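
As a sketch of such policies, the example below uses Istio’s VirtualService resource to split traffic between two hypothetical versions of a service and to inject an artificial delay into a small share of requests. The host and the v1/v2 subsets are assumptions and would be defined in an accompanying DestinationRule.

```yaml
# Sketch: routing rules plus fault injection for a hypothetical "payments" service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: payments
spec:
  hosts:
    - payments
  http:
    - fault:
        delay:
          percentage:
            value: 5                # delay 5% of requests to test resiliency
          fixedDelay: 2s
      route:
        - destination:
            host: payments
            subset: v1
          weight: 90                # routing rule: 90% of traffic to v1
        - destination:
            host: payments
            subset: v2
          weight: 10                # canary the remaining 10% to v2
```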

Core Capabilities of Istio

  • Secure service-to-service communication through authentication and authorization.
  • Implement policy layers supporting access controls, quotas and resource allocation.
  • Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
  • Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
  • Configure and control inter-service communication through failovers, fault injection and routing rules.

Istio is platform-independent and can run in a variety of environments, including cloud, on-premises, Kubernetes, and more. Istio currently supports:

  • Service deployment on Kubernetes
  • Services registered with Consul
  • Services running on individual virtual machines

Core Istio Components

Core Istio Components (Image Source: istio.io)

An Istio service mesh consists of a data plane and a control plane.

  • The data plane consists of the sidecar service proxies (built on Envoy), which mediate all communication among microservices; policy checks and telemetry for that traffic are handled through a policy and telemetry hub (Mixer).
  • The control plane manages and configures communication among all the sidecar proxies through Pilot, Citadel and Galley. Pilot handles routing rules and service discovery for the Envoy proxies, Citadel acts as the authentication and authorization component, and Galley acts as the Service Mesh’s configuration validation, ingestion, processing and distribution component.

The control plane manages and maintains the components of the data plane, and hence forms the most important layer of the Istio Service Mesh.


In this article, we looked at why a Service Mesh is critical to the implementation of a Microservice Architecture, and how Istio serves that purpose.
Taking a step further, in the next article we will go through the steps involved in installing Istio on different platforms, including Kubernetes.
