Service Mesh — A dedicated infrastructure layer
It is a dedicated infrastructure layer that facilitates service-to-service communication between microservices, using proxies.
Why do we need Service Mesh?
Services within a microservices-based architecture are modular in nature and therefore difficult to manage. Whenever one microservice calls another, it is hard for teams to infer or debug what is happening inside those networked service calls.
If problems are not detected early and handled properly, this can lead to bigger issues. Performance degradation, security gaps, load-balancing problems, and a lack of tracing or observability of service calls are some of the issues that can occur while operating microservices.
Service Mesh:
A service mesh binds all the services in a Kubernetes cluster together so they can communicate with each other. It enables secure service-to-service communication by encrypting traffic with Transport Layer Security (TLS). Service mesh features include traffic management, observability, and security.
For example, consider an e-commerce application, which typically has a microservices architecture with front-end and back-end components. Its services, such as the shopping cart and shipping services, need to communicate securely to support customer transactions.
Now, let’s discuss what we need for this microservices setup, i.e. what the configuration requirements are.
Business Logic
First of all, each microservice implements its own business logic, and these services need to talk to each other. For example, when a user puts something in the shopping cart, the request goes to the web server, which hands it over to the respective microservice; that service then writes the required data to the database. So how do these services know how to communicate with each other, and what the endpoints of each service are?
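The call chain above can be sketched in plain Python. All names here are hypothetical, and an in-memory dict stands in for the real database; the point is only the hand-off from web server to microservice to persistence layer.

```python
# Minimal sketch of the call chain: web server -> cart microservice -> database.
# The "database" is an in-memory dict standing in for a real datastore.

DATABASE = {}  # stands in for the persistence layer

def cart_service_add(user_id: str, item: str) -> dict:
    """Business logic of the (hypothetical) shopping-cart microservice."""
    cart = DATABASE.setdefault(user_id, [])
    cart.append(item)
    return {"user": user_id, "cart": cart}

def web_server_handle(request: dict) -> dict:
    """The web server forwards the request to the respective microservice."""
    if request["action"] == "add_to_cart":
        return cart_service_add(request["user"], request["item"])
    raise ValueError("unknown action")

result = web_server_handle({"action": "add_to_cart", "user": "u1", "item": "book"})
print(result)  # {'user': 'u1', 'cart': ['book']}
```

In a real deployment each of these functions would live in a separate process, and the hand-off would be a network call — which is exactly where the endpoint question comes in.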
Service Endpoints
Second, all the service endpoints that the web server talks to must be configured for the web server. So when we add a new microservice, we need to add the endpoint of that new service to every microservice that needs to talk to it. That information becomes part of the application’s deployment code. Another question: what about security?
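A common way this endpoint configuration ends up in deployment code is via environment variables. The variable names, hostnames, and ports below are assumptions for illustration; the takeaway is that every new microservice means another entry like this in every caller.

```python
import os

# Hypothetical endpoint configuration: each service learns its peers'
# addresses from deployment configuration (environment variables here,
# with in-cluster DNS names as fallbacks).
ENDPOINTS = {
    "cart": os.environ.get("CART_SERVICE_URL", "http://cart-service:8080"),
    "shipping": os.environ.get("SHIPPING_SERVICE_URL", "http://shipping-service:8080"),
}

def endpoint_for(service: str) -> str:
    """Look up the configured endpoint for a peer service."""
    try:
        return ENDPOINTS[service]
    except KeyError:
        raise KeyError(f"no endpoint configured for service {service!r}")

print(endpoint_for("cart"))
```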
Security
Generally, a common setup in many projects has firewall rules around the Kubernetes cluster, or a proxy as the entry point that receives requests first so that the cluster cannot be accessed directly. So we have security around the cluster. However, once a request gets inside the cluster, the communication is insecure.
Communication
Microservices talk to each other over HTTP or some other insecure protocol, and every service inside the cluster can freely talk to any other service without restriction. From a security perspective, this means that once attackers get inside the cluster, they can do anything, because there is no additional security inside. That may be acceptable for small applications that don’t hold any sensitive user data. But for more critical applications, such as online banking or apps that store a lot of personal data, a higher level of security is essential. We want everything to be as secure as possible, and for that we need additional configuration inside each application to secure communication between services within the cluster.
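As a sketch of what "additional configuration inside each application" means, here is a mutual-TLS server context using Python's standard `ssl` module. The certificate paths in the comments are placeholders; in a mesh, the sidecar proxy performs this setup so the application code stays TLS-free.

```python
import ssl

def make_mtls_server_context() -> ssl.SSLContext:
    """Build a server-side TLS context that also requires a client cert (mTLS)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED          # mutual TLS: client must present a cert
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2 # refuse outdated protocol versions
    # In a real service (placeholder file names):
    #   ctx.load_cert_chain("service.crt", "service.key")
    #   ctx.load_verify_locations("mesh-ca.crt")
    return ctx

ctx = make_mtls_server_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Multiply this by every service in the cluster, plus certificate issuance and rotation, and the appeal of moving it out of application code becomes clear.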
We also need retry logic in each microservice to make the whole application more robust: if one microservice is unreachable, we want to retry the connection, so developers would add this retry logic to the services as well.
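The retry logic each service would otherwise carry can be sketched as a small helper with exponential backoff (the attempt counts and delays below are arbitrary illustrative values):

```python
import time

def call_with_retry(call, attempts: int = 3, base_delay: float = 0.01):
    """Retry a flaky call with exponential backoff; re-raise on final failure."""
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Usage: a fake call that fails twice, then succeeds on the third attempt.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("service unreachable")
    return "ok"

print(call_with_retry(flaky))  # ok
```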
Monitoring
Lastly, we want to be able to monitor how the services are performing: for example, what HTTP errors they return, how many requests each microservice receives or sends, and how long a request takes, so we can identify bottlenecks in the application. So the development team may add monitoring logic as well, for example with Prometheus.
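A plain-Python sketch of that monitoring logic follows. A real setup would use a client library such as Prometheus's, exporting metrics over HTTP; here we only count requests per status and record latencies to show what each team would have to bolt on.

```python
import time
from collections import Counter

REQUEST_COUNT = Counter()  # requests per (simulated) HTTP status code
LATENCIES = []             # request durations in seconds

def observe(handler, *args):
    """Wrap a request handler, recording outcome and latency."""
    start = time.perf_counter()
    try:
        result = handler(*args)
        REQUEST_COUNT["200"] += 1
        return result
    except Exception:
        REQUEST_COUNT["500"] += 1
        raise
    finally:
        LATENCIES.append(time.perf_counter() - start)

observe(lambda: "ok")
print(dict(REQUEST_COUNT))  # {'200': 1}
```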
In conclusion, the development team of each microservice needs to add all this logic to each service, and perhaps configure additional pieces in the cluster, to handle these important concerns. This means the developers are not working on the actual service logic but are busy adding network logic for metrics, security, communication, and so on to every microservice. It also adds complexity to the services instead of keeping them simple and lightweight.
Service mesh architecture is implemented by software products such as Istio and Linkerd. Many service meshes use Envoy Proxy on the data plane.
Service Mesh Architecture:
A service mesh consists of network proxies paired with each service in an application, plus a set of management processes.
The proxies are called the “data plane”; the management processes are called the “control plane”.
The data plane intercepts calls between different services and processes them.
The control plane is the brain of the mesh: it coordinates the behaviour of the proxies and provides APIs for operations and maintenance personnel to manipulate and observe the entire network. It holds policies, certificates, configurations, etc.
The proxy (or sidecar proxy) handles the networking logic. It is a third-party application that cluster operators can configure easily.
*The control plane injects the sidecar proxy alongside each service.
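Conceptually, the sidecar proxy intercepts every service call and applies the networking concerns discussed earlier (retries, metrics, encryption), so the service keeps only its business logic. The class below is a hypothetical sketch, not a real proxy; in practice the control plane pushes settings like the retry count down to each proxy.

```python
class SidecarProxy:
    """Toy model of a data-plane proxy intercepting service-to-service calls."""

    def __init__(self, retries: int = 2):
        self.retries = retries      # configuration pushed down by the control plane
        self.requests_total = 0     # observability: attempts made through this proxy

    def forward(self, call):
        """Intercept an outbound call and apply mesh logic around it."""
        for attempt in range(self.retries + 1):
            self.requests_total += 1
            try:
                return call()       # real proxies also encrypt the hop with mTLS here
            except ConnectionError:
                if attempt == self.retries:
                    raise

proxy = SidecarProxy()
print(proxy.forward(lambda: "response"))  # response
```

The application calls its neighbour as before; the proxy transparently adds retries, counts requests, and (in a real mesh) handles TLS, which is exactly the logic the earlier sections showed each team re-implementing.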
Features:
Routing
Resilience
Security
Observability
Dashboard
Preconfigured Prometheus, Grafana and Jaeger
Tracing Support
Access Logs