Dive-In Microservices #4 – Some more patterns to look at!

Health Checks

Health checks are becoming an essential part of a modern microservices setup. Every service is expected to expose a health check endpoint that can be accessed by a monitoring tool or by the orchestrator. Health checks matter because they allow the process responsible for running the application to restart or kill it when it starts to misbehave or fail. The design does, however, need to be careful: a check that is too aggressive, or that burns significant cycles of its own, can do more harm than good.

What you record in a health check is entirely your choice; however, the following attributes are commonly recommended:

– Data store connection status (general connection state, connection pool status)

– Current response time (rolling average)

– Current connections

– Bad requests (running average)

How to determine what constitutes an unhealthy state needs to be part of the discussion during the design of the service. For example, no connectivity to the database means the service is completely inoperable; it should report itself unhealthy and allow the orchestrator to recycle the container. An exhausted connection pool, on the other hand, could just mean that the service is under high load; while it is not completely inoperable, it could be suffering degraded performance and should only raise a warning.

The same goes for the current response time. When you load test your service once it has been deployed to production, you can build up a picture of the thresholds of healthy operation. These numbers can be stored in the config and used by the health check. For example, suppose you know that your service handles an average request with 50 milliseconds of latency at 4,000 concurrent users, but at 5,000 users this grows to 500 milliseconds because you have exhausted the connection pool. If you set your SLA upper boundary to 100 milliseconds, you would then start reporting degraded performance from your health check. This should, however, be a rolling average based on the normal distribution. It is always possible for one or two requests to fall well outside the standard deviation of normal operation, and you do not want those to skew the average and cause the service to report unhealthy when the slow responses were actually caused by an upstream service with slow network connectivity, not your internal state.
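As a rough illustration, here is a minimal Go sketch of a rolling-average response timer that a health check could read. The type name, window size, and threshold handling are assumptions rather than a prescribed implementation, and a production version would also trim the outliers mentioned above.

```go
package health

import (
	"sync"
	"time"
)

// ResponseTimer keeps a rolling window of recent request durations and
// compares their mean against a configured SLA threshold.
type ResponseTimer struct {
	mu        sync.Mutex
	window    []time.Duration // fixed-size ring buffer of samples
	next      int
	filled    bool
	threshold time.Duration // e.g. 100 * time.Millisecond, read from config
}

func NewResponseTimer(windowSize int, threshold time.Duration) *ResponseTimer {
	return &ResponseTimer{window: make([]time.Duration, windowSize), threshold: threshold}
}

// Observe records the duration of a completed request.
func (r *ResponseTimer) Observe(d time.Duration) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.window[r.next] = d
	r.next = (r.next + 1) % len(r.window)
	if r.next == 0 {
		r.filled = true
	}
}

// Healthy reports false when the rolling average breaches the SLA threshold.
func (r *ResponseTimer) Healthy() bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	n := r.next
	if r.filled {
		n = len(r.window)
	}
	if n == 0 {
		return true // no data yet, assume healthy
	}
	var total time.Duration
	for i := 0; i < n; i++ {
		total += r.window[i]
	}
	return total/time.Duration(n) <= r.threshold
}
```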

When discussing health checks, the handshake pattern often comes up: each client sends a handshake request to the downstream service before connecting, to check whether it is capable of receiving the request. Under normal operating conditions this adds an enormous amount of chatter to your application and is overkill most of the time. It also implies that you are using client-side load balancing, since with a server-side approach you have no guarantee that the service you handshake with is the one you actually connect to. The underlying idea, however, that the downstream service decides whether it can handle a request, is a valid one. Why not instead call your internal health check as the first operation before processing a request? That way you can fail immediately and give the client the opportunity to attempt another endpoint in the cluster. This call adds almost no overhead to your processing time, since all you are doing is reading the state exposed by the health endpoint, not processing any data.
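A minimal sketch of that idea in Go, assuming a hypothetical healthy() function backed by the same state the /health endpoint reads:

```go
package main

import "net/http"

// healthy is a placeholder for the state the /health endpoint exposes; in a
// real service it would be backed by rolling metrics like the ones above.
var healthy = func() bool { return true }

// withHealthCheck fails fast when the instance reports itself unhealthy,
// giving the caller the chance to retry against another node in the cluster.
func withHealthCheck(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !healthy() {
			http.Error(w, "service degraded, try another instance", http.StatusServiceUnavailable)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.Handle("/recommendations", withHealthCheck(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok")) // normal request processing would go here
	})))
	http.ListenAndServe(":8080", mux)
}
```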

Load balancing

When we discussed service discovery, we examined the concepts of server-side and client-side discovery. For many years server-side discovery was the only option, and there was also a preference for terminating SSL on the load balancer because of the performance cost of encryption; even so, it is a good idea to use TLS for internal connections as well. But what about sophisticated traffic distribution? That can only be achieved if you have a central source of knowledge about the backends. There could be a benefit to only sending a certain number of connections to a particular host, but then how do you measure health? You can balance at layer 4 or layer 7, yet as we have seen with smart health checks, a service that is too busy can simply reject a connection itself. With a client-side load balancer you can implement multiple strategies, such as round-robin, random, or more sophisticated approaches based on statistics gathered across instances, and you can define your own strategy.
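To make the idea of pluggable strategies concrete, here is a hedged Go sketch of a client-side strategy interface with round-robin and random implementations; the names are illustrative, not a particular library's API.

```go
package loadbalance

import (
	"errors"
	"math/rand"
	"sync"
)

// Strategy picks the next endpoint from a list of healthy instances.
type Strategy interface {
	Next(endpoints []string) (string, error)
}

// RoundRobin cycles through the endpoints in order.
type RoundRobin struct {
	mu sync.Mutex
	i  int
}

func (r *RoundRobin) Next(endpoints []string) (string, error) {
	if len(endpoints) == 0 {
		return "", errors.New("no endpoints available")
	}
	r.mu.Lock()
	defer r.mu.Unlock()
	e := endpoints[r.i%len(endpoints)]
	r.i++
	return e, nil
}

// Random picks an endpoint at random.
type Random struct{}

func (Random) Next(endpoints []string) (string, error) {
	if len(endpoints) == 0 {
		return "", errors.New("no endpoints available")
	}
	return endpoints[rand.Intn(len(endpoints))], nil
}
```

A strategy based on gathered statistics would implement the same interface, which is what makes the approach easy to swap per call site.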

Caching

One way you can improve the performance of a service is by caching results from databases and other downstream calls in an in-memory cache or a side cache like Redis, rather than hitting the database every time. Caches are designed to deliver massive throughput by storing precomputed objects in a fast-access data store, frequently keyed by a hash. We know from looking at algorithm performance that a hash table has an average lookup cost of O(1); that is about as fast as it gets. Without going too deep into Big O notation, this means the time to find an item in the collection stays constant regardless of its size. The consequence is that caching not only reduces the load on the database, it can also reduce your infrastructure costs. Typically, a database is limited by the amount of data that can be read from and written to disk and the time it takes the CPU to process that data. An in-memory cache removes much of this limitation by serving pre-aggregated data from fast memory rather than from a stateful device like a disk. This comes at the cost of consistency, because you cannot guarantee that all clients will see the same information at the same time.
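As an illustration of the hash-keyed idea, here is a minimal in-memory cache sketch in Go with a per-entry expiry; the type and method names are my own, and a side cache such as Redis would replace the map in practice.

```go
package cache

import (
	"sync"
	"time"
)

type entry struct {
	value     interface{}
	expiresAt time.Time
}

// Cache is a minimal in-memory, hash-keyed cache with per-entry expiry.
type Cache struct {
	mu    sync.RWMutex
	items map[string]entry
	ttl   time.Duration
}

func New(ttl time.Duration) *Cache {
	return &Cache{items: make(map[string]entry), ttl: ttl}
}

func (c *Cache) Set(key string, value interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = entry{value: value, expiresAt: time.Now().Add(c.ttl)}
}

// Get returns the cached value and whether it is still fresh; Go's map gives
// the O(1) average lookup described above.
func (c *Cache) Get(key string) (interface{}, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	e, ok := c.items[key]
	if !ok || time.Now().After(e.expiresAt) {
		return nil, false
	}
	return e.value, true
}
```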

Caching strategies should be chosen based on your requirements for this consistency. In theory, the longer the cache expiry, the greater the cost saving and the faster the system, at the expense of reduced consistency. So when planning a feature, you should be talking about consistency and its trade-offs with performance and cost, and documenting the decision; doing so will greatly help you create a more successful implementation.

You have probably heard the phrase premature optimization, so does that mean you should not implement caching until you need it? No; it means you should be attempting to predict the initial load your system will be under at design time, and the growth in capacity over time, as you consider the application lifecycle. When creating this design you will be putting that data together, and you will not be able to reliably predict the speed at which the service will run. However, you do know that a cache will be cheaper to operate than a data store; so, where possible, you should design around the smallest and cheapest data store possible, and make provision to extend the service by introducing caching at a later date. This way you only do the work necessary to get the service out of the door, but you have done the design up front to be able to extend the service when it needs to scale.

The cache will normally have an expiry on it. However, if you implement the cache in such a way that your code decides when to invalidate it, you can potentially avoid problems if a downstream service or database disappears. Again, this comes back to thinking about failure states and asking what is better: the user seeing slightly out-of-date information, or an error page? If your cache has expired and the call to the downstream service fails, you can always decide to serve the stale cached data back to the calling client. In many instances this will be better than returning a 50x error.
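Here is a small, self-contained Go sketch of that trade-off, with hypothetical names and a stubbed downstream call that always fails, to show stale data being served instead of an error:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type staleEntry struct {
	value   string
	savedAt time.Time
}

var (
	ttl   = 30 * time.Second
	store = map[string]staleEntry{} // stand-in for a real cache
)

// fetchRecommendations is a stand-in for the real downstream call.
func fetchRecommendations(userID string) (string, error) {
	return "", errors.New("recommendations service unavailable")
}

// recommendations serves fresh data when it can, but falls back to a stale
// cache entry when the downstream call fails, rather than returning a 50x.
func recommendations(userID string) (string, error) {
	if e, ok := store[userID]; ok && time.Since(e.savedAt) < ttl {
		return e.value, nil // fresh hit
	}
	v, err := fetchRecommendations(userID)
	if err == nil {
		store[userID] = staleEntry{value: v, savedAt: time.Now()}
		return v, nil
	}
	if e, ok := store[userID]; ok {
		return e.value, nil // stale, but often better than an error page
	}
	return "", err
}

func main() {
	store["42"] = staleEntry{value: "cached recommendations", savedAt: time.Now().Add(-time.Hour)}
	fmt.Println(recommendations("42")) // prints the stale value, no error
}
```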


Dive-In Microservices #3 – Patterns – Circuit Breaking

Before we delve deeper into the circuit breaking pattern, let us look at a couple of patterns that will help us understand it better.

Pattern – Timeouts

A timeout is an incredibly useful pattern when communicating with other services or data stores. The idea is that you set a limit on how long you wait for a response from a server and, if you do not receive a response in that time, you run business logic to deal with the failure, such as retrying or sending a failure message back to the upstream service. A timeout could be the only way of detecting a fault in a downstream service; however, the absence of a reply does not necessarily mean the server has not received and processed the message, or even that it does not exist. The key feature of a timeout is to fail fast and to notify the caller of this failure.

There are many reasons why this is good practice, not only from the perspective of returning early to the client and not keeping them waiting indefinitely, but also from the point of view of load and capacity. Timeouts are an effective hygiene factor in large distributed systems, where many small instances of a service are often clustered to achieve high throughput and redundancy. If one of these instances is malfunctioning and you, unfortunately, connect to it, then it can block an otherwise entirely functional service. The correct approach is to wait for a response for a set time and, if there is no response in this period, cancel the call and try the next service in the list. The question of what duration your timeouts should be set to does not have a simple answer. We also need to consider the different types of timeout which can occur in a network request; for example, you have:

Connection Timeout – The time it takes to open a network connection to the server

Request Timeout – The time it takes for a server to process a request

The request timeout is almost always going to be the longer of the two, and I recommend that the timeout is defined in the configuration of the service. While you might initially set it to an arbitrary value of, say, 10 seconds, you can modify it after the system has been running in production and you have a decent data set of transaction times to look at.
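For illustration, here is how the two timeouts might be separated with Go's standard net/http client; the values shown are arbitrary and would normally come from configuration.

```go
package main

import (
	"net"
	"net/http"
	"time"
)

// newHTTPClient separates the two timeouts discussed above: the dialer's
// Timeout bounds how long we wait to open the connection, while the client's
// Timeout bounds the whole request, including reading the response.
func newHTTPClient(connTimeout, reqTimeout time.Duration) *http.Client {
	return &http.Client{
		Timeout: reqTimeout, // request timeout, ideally read from config
		Transport: &http.Transport{
			DialContext: (&net.Dialer{
				Timeout: connTimeout, // connection timeout
			}).DialContext,
		},
	}
}

func main() {
	client := newHTTPClient(2*time.Second, 10*time.Second)
	_, _ = client.Get("http://localhost:8080/health")
}
```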

Pattern – Back off

Typically, once a connection has failed, you do not want to retry immediately, to avoid flooding the network or the server with requests. To allow this, it's necessary to implement a back-off approach in your retry strategy. A back-off algorithm waits for a set period before retrying after the first failure; this period then increases with subsequent failures, up to a maximum duration.
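A minimal sketch of such an exponential back-off in Go; the function shape and parameter names are assumptions, not a specific library's API.

```go
package retry

import "time"

// WithBackoff retries fn, waiting base, 2*base, 4*base... between attempts,
// capped at max, and gives up after the given number of attempts.
func WithBackoff(attempts int, base, max time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break // no point sleeping after the final attempt
		}
		time.Sleep(delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
	return err
}
```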

Using this strategy inside an API that a client is actively waiting on might not be desirable, as it contravenes the requirement to fail fast. However, if we have a worker process that is only processing a queue of messages, then this could be exactly the right strategy to add a little protection to your system.

Pattern – Circuit breaking

We have looked at patterns like timeouts and back-offs, which help protect our systems from cascading failure in the event of an outage. Now it's time to introduce another pattern that complements this duo. Circuit breaking is all about failing fast: it is a way to automatically degrade functionality when the system is under stress.

Let us consider an example of a frontend web application that is dependent on a downstream service to provide recommendations for services the user can use. Because this call is synchronous with the main page load, the web server will not return the page until it has successfully retrieved recommendations. You have designed for failure and have introduced a timeout of five seconds for this call. However, because there is an issue with the recommendations system, a call which would ordinarily take 20 milliseconds is now taking 5,000 milliseconds to fail. Every user who looks at a service is waiting five seconds longer than usual; your application is not processing requests and releasing resources as quickly as normal, and its capacity is significantly reduced. In addition, the number of concurrent connections to the main website has increased due to the length of time it now takes to process a single page request; this adds load to the front end, which starts to slow down. The net effect is that, if the recommendations service does not start responding, the whole site is headed for an outage.

There is a simple solution to this: you should stop attempting to call the recommendations service, return the website back to normal operating speeds, and slightly degrade the functionality of the services page. This has three effects:

– You restore the browsing experience to other users on the site.

– You slightly degrade the experience in one area.

– You need to have a conversation with your stakeholders before you implement this feature as it has a direct impact on the system’s business.

Now in this instance it should be a relatively simple sell. Let's assume that recommendations increase conversion by 1%, while slow page loads reduce it by 90%; isn't it better to degrade by 1% instead of 90%? This example is clear cut, but what if the downstream service were a stock checking system: should you accept an order if there is a chance you do not have the stock to fulfill it?

So how does it work?

Under normal operation, like a circuit breaker in your electricity switch box, the breaker is closed and traffic flows normally. However, once the predetermined error threshold has been exceeded, the breaker enters the open state and all requests immediately fail without even being attempted. After a period, a further request is allowed through and the circuit enters a half-open state; in this state a failure immediately returns the breaker to the open state, regardless of the error threshold. Once some requests have been processed without any error, the circuit returns to the closed state, and only if the number of failures again exceeds the error threshold will the circuit open once more.
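Below is a hedged Go sketch of that state machine; for brevity it closes again after a single successful trial request rather than several, and the names are illustrative only.

```go
package breaker

import (
	"errors"
	"sync"
	"time"
)

var ErrOpen = errors.New("circuit open: failing fast")

type state int

const (
	closed state = iota
	open
	halfOpen
)

// Breaker is a minimal circuit breaker: it opens after maxFailures
// consecutive errors, waits for cooldown, then lets a trial request through.
type Breaker struct {
	mu          sync.Mutex
	state       state
	failures    int
	maxFailures int
	cooldown    time.Duration
	openedAt    time.Time
}

func New(maxFailures int, cooldown time.Duration) *Breaker {
	return &Breaker{maxFailures: maxFailures, cooldown: cooldown}
}

func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.state == open {
		if time.Since(b.openedAt) < b.cooldown {
			b.mu.Unlock()
			return ErrOpen // fail fast, do not even attempt the call
		}
		b.state = halfOpen // allow a trial request through
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		// a failure in half-open, or too many failures in closed, opens the circuit
		if b.state == halfOpen || b.failures >= b.maxFailures {
			b.state = open
			b.openedAt = time.Now()
		}
		return err
	}
	b.failures = 0
	b.state = closed
	return nil
}
```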

Error behaviour is not a question that software engineering can answer on its own; business stakeholders need to be involved in this decision. When you are planning the design of your systems, talk about failure as part of your non-functional requirements and decide ahead of time what you will do when a downstream service fails.

Dive-In Microservices #2 – Patterns – Service discovery

With monolithic applications, services invoke one another through language-level method or procedure calls. This was relatively straightforward and predictable behavior. As application complexity increased, we realized that monolithic applications were not suitable for the scale and demand of modern software, so we moved towards SOA, or service-oriented architecture. Monoliths were broken into smaller chunks that each typically served a particular purpose. But SOA brought its own caveats for inter-service calls: SOA services ran at well-known, fixed locations, which meant static service locations and IP addresses, and reconfiguration headaches with every deployment, to name a few.

Microservices are easy; building microservice systems is hard

With microservices all this changes. The application typically runs in a virtualized or containerized environment where the number of instances of a service, and their locations, can change dynamically, minute by minute. This gives us the ability to scale the application dynamically in response to the forces applied to it, but the flexibility does not come without its own share of problems. One of the main ones is knowing where your services are so you can contact them. Without the right patterns, it can be almost impossible, and one of the first problems you will most likely stumble upon, even before you get your service into production, is service discovery.

With service discovery, services register with a dynamic service registry upon startup and, in addition to the IP address and port they are running on, will often also provide metadata, such as a service version or other environmental parameters, that a client can use when querying the registry. Popular examples of service registries are Consul and etcd. These systems are highly scalable and have strongly consistent methods for storing the location of your services. In addition, Consul has the capability to perform health checks on the service to ensure its availability. If the service fails a health check, it is marked as unavailable in the registry and will not be returned by any queries.
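As an example, registration with Consul's Go client (github.com/hashicorp/consul/api) might look roughly like the following sketch; the service name, address, and health check path are made up for illustration.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent using its default address.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register this instance along with an HTTP health check; Consul marks
	// the instance unavailable when the check fails, so it stops being
	// returned from queries.
	registration := &api.AgentServiceRegistration{
		ID:      "search-1",
		Name:    "search",
		Address: "10.0.0.12",
		Port:    8080,
		Tags:    []string{"v1"}, // example of metadata usable by clients
		Check: &api.AgentServiceCheck{
			HTTP:     "http://10.0.0.12:8080/health",
			Interval: "10s",
			Timeout:  "1s",
		},
	}

	if err := client.Agent().ServiceRegister(registration); err != nil {
		log.Fatal(err)
	}
}
```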

There are two main patterns for service discovery:

Server-side service discovery

For inter-service calls within the same application, server-side service discovery can be considered a microservice antipattern; it is the method we used to call services in an SOA environment. Typically, there will be a reverse proxy which acts as a gateway to your services. It contacts the dynamic service registry and forwards your request on to the backend services. The client accesses the backend services through a known URI, using either a subdomain or a path as a differentiator.

Server-side discovery eventually runs into some well-known issues, one of them being the reverse proxy becoming a bottleneck. The backend services can be scaled quickly enough, but the proxy itself now has to be scaled and monitored as well. It also introduces latency, which increases the cost of running and maintaining the application.

Server-side discovery also potentially multiplies the failure patterns across downstream calls, internal services, and external services. With server-side discovery you also need centralized failure logic on the server side, which abstracts most of the API knowledge away from the client, handles failures internally, keeps retrying internally, and keeps the client completely out of the loop until the call either succeeds or fails catastrophically.

Client-side service discovery

While server-side service discovery might be an acceptable choice for your public APIs, for internal inter-service communication I prefer the client-side pattern. This gives you greater control over what happens when a failure occurs. You can implement the business logic for retrying a failure on a case-by-case basis, and this will also protect you against cascading failure.

This pattern is similar to its server-side partner. However, the client is responsible for the service discovery and load balancing. You still hook into a dynamic service registry to get the information for the services you are going to call. This logic is localized in each client, so it is possible to handle the failure logic on a case-by-case basis.
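A rough sketch of that lookup against Consul's Go client, picking a random healthy instance; the service name is hypothetical, and a real client would plug in the load-balancing strategies and failure handling discussed earlier.

```go
package main

import (
	"fmt"
	"log"
	"math/rand"

	"github.com/hashicorp/consul/api"
)

// lookup asks Consul for healthy instances of a service and picks one at
// random; only instances whose health checks pass are returned.
func lookup(client *api.Client, service string) (string, error) {
	entries, _, err := client.Health().Service(service, "", true, nil) // passing only
	if err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no healthy instances of %s", service)
	}
	e := entries[rand.Intn(len(entries))]
	// Note: Service.Address can be empty if the service registered without
	// one, in which case the node address would be used instead.
	return fmt.Sprintf("http://%s:%d", e.Service.Address, e.Service.Port), nil
}

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	addr, err := lookup(client, "search")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("calling", addr)
}
```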

Dive-In Microservices #1 – Patterns – Event Processing

Event processing is a model which allows you to decouple your microservices by using a message queue. Rather than connecting directly to a service which may or may not be at a known location, you broadcast and listen to events which exist on a queue, such as Redis, Amazon SQS, RabbitMQ, Apache Kafka, or a whole host of other options.

The message queue is a highly distributed and scalable system, and it should be capable of processing millions of messages, so we do not need to worry about it being unavailable. At the other end of the queue there will be a worker listening for new messages it is responsible for. When it receives such a message, it processes it and then removes it from the queue.

Due to the asynchronous nature of the event processing pattern, we need a way to handle failures programmatically.

Event processing with at least once delivery

One of the first and most basic synchronization mechanisms is to request confirmation of delivery: we add the message to the queue and then wait for an ACK from the queue to let us know that the message has been received. Of course, this does not tell us that the message has been processed by the receiver, but the ACK should be enough for us to notify the user and proceed. There is always the possibility that the receiving service cannot process the message, either because of a direct failure or bug in the receiving service, or because the message added to the queue is not in a format the receiving service can read. We need to deal with both of these issues independently; handling errors is discussed next.

Handling Errors

It is not uncommon for things to go wrong in distributed systems, and planning for failure is an essential factor in microservice-based software design. In the scenario above, if a valid message cannot be processed, one standard approach is to retry processing the message, normally with a delay. It is important to append the error to the message every time we fail to process it, as this gives us a history of what went wrong; it also lets us see how many times we have tried to process the message, because once we exceed that threshold we do not want to continue retrying and instead need to move the message to a second queue, a dead letter queue, which we discuss next.
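A sketch of that retry-and-dead-letter flow in Go, using hypothetical Message and Queue types and queue names standing in for your broker's client:

```go
package worker

import "time"

// Message and Queue are hypothetical types standing in for your broker's client.
type Message struct {
	ID      string
	Body    []byte
	Errors  []string // appended on every failed attempt, giving us a history
	Retries int
}

type Queue interface {
	Publish(queueName string, m Message) error
}

const maxRetries = 5

// handleFailure records the error on the message, requeues it with a delay,
// and moves it to a dead letter queue once the retry threshold is exceeded.
func handleFailure(q Queue, m Message, err error) error {
	m.Errors = append(m.Errors, time.Now().Format(time.RFC3339)+": "+err.Error())
	m.Retries++

	if m.Retries > maxRetries {
		return q.Publish("orders_deadletter", m) // park it for later debugging
	}

	time.Sleep(time.Duration(m.Retries) * time.Second) // simple delay; back-off also works
	return q.Publish("orders", m)                      // requeue for another attempt
}
```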

Debugging failures with a dead letter queue

It is common practice to remove a message from the queue once it has been processed; the purpose of the dead letter queue is to keep failed messages around so we can examine them and debug the system. Since we append the error details to the message body, we know what the error was, and we have the full history if we need it.

Working with idempotent transactions

While many message queues nowadays offer at-most-once delivery in addition to at-least-once, the latter option is still the best for a large throughput of messages. To deal with the fact that the receiving service may receive a message twice, it needs to be able to handle this in its own logic. One common method for ensuring that a message is not processed twice is to log the message ID in a transactions table. If the message has already been processed, it can simply be disposed of.
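A minimal sketch of such a check in Go using database/sql; the processed_messages table, its columns, and the Postgres-style placeholders are assumptions for illustration.

```go
package worker

import "database/sql"

// alreadyProcessed consults a transactions table keyed by message ID; ideally
// inserting the ID and processing the message happen in the same DB transaction.
func alreadyProcessed(db *sql.DB, messageID string) (bool, error) {
	var count int
	err := db.QueryRow(
		"SELECT COUNT(1) FROM processed_messages WHERE message_id = $1", messageID).Scan(&count)
	if err != nil {
		return false, err
	}
	return count > 0, nil
}

// markProcessed records the message ID so a duplicate delivery can be discarded.
func markProcessed(db *sql.DB, messageID string) error {
	_, err := db.Exec(
		"INSERT INTO processed_messages (message_id, processed_at) VALUES ($1, NOW())", messageID)
	return err
}
```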

Working with the ordering of messages

One common issue when handling failures with retries is receiving a message out of sequence, or in an incorrect order, which can leave inconsistent data in the database. One potential way to avoid this is to again leverage the transactions table and to store the message dispatch_date in addition to the ID. When the receiving service receives a message, it can then not only check whether the current message has been processed, it can also check that it is the most recent message and, if not, discard it.
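Extending the hypothetical transactions table above with entity_id and dispatch_date columns, the staleness check might look roughly like this:

```go
package worker

import (
	"database/sql"
	"time"
)

// isStale reports whether a newer message for the same entity has already been
// processed; stale messages are discarded instead of overwriting newer data.
// Table and column names are, again, assumptions for illustration.
func isStale(db *sql.DB, entityID string, dispatchedAt time.Time) (bool, error) {
	var count int
	err := db.QueryRow(
		"SELECT COUNT(1) FROM processed_messages WHERE entity_id = $1 AND dispatch_date >= $2",
		entityID, dispatchedAt).Scan(&count)
	if err != nil {
		return false, err
	}
	return count > 0, nil
}
```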

Working with atomic transactions

This is a common issue when moving legacy systems to microservices. Within a single database, operations can be atomic: either all of them occur or none do. Distributed transactions do not give us the same kind of guarantee that is found in a database, where a failed part of a transaction lets us roll back the other parts. With messaging, we instead only remove the message from the queue if processing succeeded, and when something fails we keep retrying. This gives us a kind of eventually consistent transaction.
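A small sketch of that acknowledge-on-success behaviour, using a hypothetical Delivery interface in place of a real broker client:

```go
package worker

import "log"

// Delivery is a hypothetical handle to a received message that can be acked
// (removed from the queue) or nacked (left on the queue for redelivery).
type Delivery interface {
	Body() []byte
	Ack() error
	Nack() error
}

// consume removes the message from the queue only when processing succeeds;
// failures leave it to be redelivered, giving an eventually consistent outcome.
func consume(d Delivery, process func([]byte) error) {
	if err := process(d.Body()); err != nil {
		log.Printf("processing failed, message will be retried: %v", err)
		_ = d.Nack()
		return
	}
	_ = d.Ack()
}
```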

Unfortunately, there is no one-size-fits-all solution with messaging; we need to tailor the solution to the operating conditions of the service.