The Sidecar Pattern for Application Developers
Here is a brief introduction to the sidecar and ambassador patterns.
Sidecar — A piece of functionality that extends or augments your main application and resides in a separate process. For example, your main application writes logs to stdout / stderr while the sidecar streams the logs from the filesystem into a sink. This way, your application focuses on its business logic while the sidecar encapsulates a reusable function that deals with cross-cutting concerns and can be consumed by multiple teams within an organization.
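To make that concrete, here's a minimal sketch of what a log-shipping sidecar could look like in Python. The log path and sink URL are made up for illustration; in practice you'd likely reach for an off-the-shelf shipper like Fluent Bit, but the shape is the same: follow a file, forward each new line.

```python
import time
import urllib.request

# Hypothetical path and endpoint -- in a real setup these would come from
# the sidecar's own configuration, not from the application's code.
LOG_FILE = "/var/log/app/app.log"          # where the runtime persists stdout
SINK_URL = "http://logs.example.com/ingest"

def ship(line: str) -> None:
    """Forward a single log line to the sink."""
    req = urllib.request.Request(SINK_URL, data=line.encode("utf-8"), method="POST")
    urllib.request.urlopen(req)

def tail_forever(path: str) -> None:
    """Follow the file like `tail -f` and ship each new line."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at the end; only ship lines written from now on
        while True:
            line = f.readline()
            if line:
                ship(line.rstrip("\n"))
            else:
                time.sleep(0.5)  # no new data yet, poll again shortly

if __name__ == "__main__":
    tail_forever(LOG_FILE)
```

Note that the application itself knows nothing about this code; it just keeps writing logs.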
Ambassador — Similar to the sidecar pattern, the ambassador pattern is usually concerned with the networking stack and is very often used to proxy connections to and/or from the main application. For example, as a developer you can always target localhost while the proxy takes care of routing the request to its destination based on either pre-defined configuration or dynamic service discovery. This can greatly accelerate development, as the code remains the same on the local dev machine and in production while the proxy configuration is set separately based on the environment the code is running in.
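The application side of that arrangement can be as simple as the sketch below (the port, route, and service name are hypothetical). The point is that this code never changes between environments; only the proxy's routing configuration does.

```python
import json
import urllib.request

# The app always talks to its local ambassador. Whether "orders" resolves to
# a local stub, a staging cluster, or production is the proxy's problem.
AMBASSADOR = "http://localhost:9000"

def get_order(order_id: str) -> dict:
    """Fetch an order via the ambassador; no environment-specific logic here."""
    with urllib.request.urlopen(f"{AMBASSADOR}/orders/{order_id}") as resp:
        return json.load(resp)
```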
From here on I'll use the term sidecar to describe both patterns, as it is usually the deployment model of choice for the ambassador pattern.
Now that we have a basic understanding of what a sidecar is, let’s get back to the tweet and see how sidecars and libraries are not mutually exclusive concepts as portrayed.
Consider the following data points:
Many sidecar-based solutions do not interact with user code at all, and thus do not replace libraries. Common examples include service meshes that rely on network-level interception of application traffic to perform mutual authentication, retries, and telemetry collection.
Sidecar-based solutions that do interact with user code, like Hazelcast and Dapr, provide libraries for developers to consume from popular package managers like npm, pip, NuGet, and others. These libraries are then used to interact with the sidecar.
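For example, here's a rough sketch of what those SDKs do under the hood when talking to a Dapr sidecar, using Dapr's localhost HTTP API (default port 3500). The app id and method name are invented for illustration:

```python
import urllib.request

# Dapr exposes its API on localhost; the SDKs on npm/pip/NuGet are
# essentially typed wrappers around calls like this one.
DAPR = "http://localhost:3500/v1.0"

def reserve_item(item_id: str) -> bytes:
    """Invoke a hypothetical 'reserve' method on a hypothetical 'inventory' app."""
    req = urllib.request.Request(
        f"{DAPR}/invoke/inventory/method/reserve",
        data=item_id.encode("utf-8"),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```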
In other words, developers do not have to pick sidecars over libraries and can enjoy the benefits of sidecars while using client SDKs and frameworks. We could just stop here.
However, I believe the discussion here is really about in-process vs out-of-process functionality, and in this post I’ll explain the rationale behind splitting application code into separate processes that run on the same machine or network namespace (a Kubernetes pod, for example).
Putting functionality, whether infrastructure-related or business domain logic, into separate processes / containers benefits both developers and infrastructure teams greatly. In fact, it helps create the clarity that's so often sorely needed between developers and ops teams when things break down. For SREs, it makes life a whole lot easier by allowing individual, tailored control over configuration, resource allocation, and monitoring for the different pieces of code that make up the system.
Let’s take a look at the benefits of the sidecar model and how they pertain to both application developers and infrastructure teams. In the end, we’ll have a clear picture of how this pattern not only helps these two different personas individually, but also of how it drives technical alignment and shared responsibilities.
Reliability
There are several immutable facts underlying the very fabric of our reality. The earth being a globe and Star Wars being the greatest space opera of all time are just two examples. A third would be that introducing, debugging, and fixing bugs in rinse-and-repeat fashion is a natural part of an application developer's life.
Software bugs are the #1 threat to the stability of any application. Developers go to great lengths to avoid them while writing code, while infrastructure teams accept them as inevitable, focusing mostly on reactive measures to make sure any errors surface and are remediated as soon as possible.
Using the sidecar pattern allows developers to put guardrails around the critical pieces of their system via process-level isolation. For example, assume we have a piece of business logic that uses a library to read files from the filesystem and then sends an email once processing is done. Any fatal error in the filesystem-processing code might crash the process and incur downtime, stopping the sending of emails altogether. Our non-business-essential filesystem code has a direct and immediate impact on the most crucial part of our system.
Separating it out allows our business code to remain functional and running even if reading files from the filesystem fails. There's an additional benefit here, which is the ability to provide this code as a modular, composable component to other parts of the system or to different teams in our organization, but we'll touch more on that later.
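As a sketch, assuming the filesystem logic now lives behind a hypothetical localhost endpoint, the business-critical email loop can treat its failures as just another error:

```python
import urllib.error
import urllib.request

# The file-processing logic now runs in its own process, reached over
# localhost. The port, route, and batch ids are hypothetical.
PROCESSOR = "http://localhost:8081/process"

def process_batch(batch_id: str) -> bool:
    """Ask the sidecar to process a batch; a sidecar crash is just an error here."""
    try:
        with urllib.request.urlopen(f"{PROCESSOR}?batch={batch_id}", timeout=5):
            return True
    except (urllib.error.URLError, TimeoutError):
        # The filesystem code failed or is restarting; our process survives.
        return False

def send_email(batch_id: str) -> None:
    print(f"email sent for {batch_id}")   # placeholder for the real mailer

def main_loop() -> None:
    for batch_id in ["a1", "a2", "a3"]:   # stand-in for real work items
        if process_batch(batch_id):
            send_email(batch_id)          # business-critical path keeps running

if __name__ == "__main__":
    main_loop()
```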
So now developers are happy because their codebase is lean and focused on the task at hand, with reduced chances of failure.
The infrastructure team gets improved uptime and better monitoring, as failures in either the filesystem processing or the business logic code are isolated, making it much easier to pinpoint the area of failure.
This mindset of process isolation to allow parts of the system to remain operational when others fail is crucial in de-risking applications.
Good for: Developers, Infrastructure
Security
Software supply chain attacks are a much-discussed topic of late, and for good reason. With the advent of sophisticated security mechanisms to protect distributed systems, attackers are growing more creative in finding weak spots in your defenses and are able to launch attacks several degrees removed from the actual target: in our case, the user code.
A very common and dangerous form of supply chain attack is to compromise an application's dependencies. As developers use more external libraries, the attack surface grows: each library has its own dependencies, those dependencies have their own dependencies, and so on. Sometimes the risk is even one of self-sabotage, as when maintainers deliberately break their own packages. These attacks have a greater surface to operate on in cloud-native and microservices architectures, where deep dependency graphs mix open-source and private libraries.
Securing the supply chain is an area of focus for many organizations and open-source projects. However, the last line of defense will always be your code. This is something important to remember, and software architecture has a major role to play here.
Once an attacker is able to compromise a dependency and gains remote code execution privileges inside your application process, you can expect them to have access to pretty much everything in your memory space. This includes taking memory dumps, invoking functions at will (easier with interpreted languages like JavaScript and Python, but very much doable in languages like C# and Java via reflection), and sniffing incoming/outgoing network traffic. Worse, the attacker can assume the identity of the process and use it to gain unauthorized access to other parts of your system.
In order to reduce the attack surface as much as possible, developers can utilize the sidecar pattern to create clear boundaries between their different business units and/or infrastructure code. This allows developers to achieve several things:
- Dependencies are isolated per business unit. A breach in a dependency used by an API server cannot compromise the code and dependencies of an authentication business unit.
- Developers now have a much broader set of defense mechanisms in the form of HTTP/1.1 and gRPC servers, with a wide array of transports to choose from (Unix domain sockets, named pipes, TCP, etc.).
Securing internal servers now becomes an explicit, auditable goal with a focus on the critical security requirements of the business unit at hand. Usually that would involve only accepting requests from localhost, using API tokens, and/or verifying message integrity.
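Here's a minimal sketch of such a server in Python: it binds to 127.0.0.1 so nothing outside the machine (or pod) can reach it, and it checks an API token on every request. The header and environment variable names are assumptions for illustration.

```python
import hmac
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# The token is injected into the sidecar's environment; the variable name
# is hypothetical.
API_TOKEN = os.environ.get("SIDECAR_API_TOKEN", "")

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        token = self.headers.get("X-Api-Token", "")
        # Constant-time comparison to avoid leaking the token via timing.
        if not hmac.compare_digest(token, API_TOKEN):
            self.send_response(401)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    # Binding to 127.0.0.1 means only local clients can connect at all.
    HTTPServer(("127.0.0.1", 8081), Handler).serve_forever()
```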
If a library used to fetch messages from RabbitMQ gets compromised, it's much harder for the malicious actor to reach the code that uses those messages to authenticate to a remote system or save them to a database, if that part of the system is isolated in a different process altogether.
Application code that is segregated correctly based not only on resource allocation or domain expertise but also on sensitive data boundaries makes both developers and infrastructure teams happy. If a breach occurs in one process, developers and infrastructure teams can identify the scope of the breach much quicker and remediate faster by pushing a new version of the code that touches only the affected part.
Good for: Developers, Infrastructure
Code Reusability
As illustrated in the above meme, the sidecar pattern is a great way for developers to maintain a single codebase in a language that’s best positioned to solve the problem at hand, instead of maintaining multiple codebases when using a polyglot environment with services written in different languages.
Write your shared code once and reuse it across environments, teams, and projects. This has multiple benefits: it removes boilerplate from the different applications, reduces the risk of language- or platform-specific bugs, and greatly reduces both the number of dependencies used and the operational overhead of maintaining multiple codebases.
Good for: Developers
Resource Allocation
Remember the part from the Reliability section about keeping the application safe in case some parts of the code crash? Well guess what: we can do the same when it comes to resource utilization.
Let’s assume we have an application that reads messages off a queue, does some processing and writes the processed events to a database. We can, at a high level, break down the app’s architecture into the following stages:
- Consume events (I/O bound)
- Process (CPU bound)
- Write to database (I/O bound)
On the surface, it seems like a great idea to keep everything in the same process. And in some cases, where the volume of incoming events is low and predictable and the database is always sufficiently scaled in terms of resource allocation, that may well be the best way to go.
However, in high-scale scenarios we can expect different usage patterns, much more dynamic and unpredictable in nature. As engineers we want to ensure our code operates reliably under load, but in the case described here the different stages act as noisy neighbors to one another. Slowness in the database can delay thread execution, which can slow processing from the queue and cause CPU spikes.
In another case, our HTTP server will likely want strict CPU/memory limits to contain abnormal behavior, while a cache sitting in the same process would show a usage pattern of memory growing over time. Co-locating them means tight resource limits are not an option, and that means less tolerance for failures. By splitting these pieces of logic into distinct processes, we can apply fine-grained resource allocation, which gives developers a clear boundary between processing functions and gives infrastructure teams a better way to ensure ongoing reliability.
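To sketch what fine-grained allocation means at the process level: in Kubernetes you'd express this as per-container requests and limits, while the raw equivalent on a Linux box is an rlimit like the one below. The numbers are illustrative only, and the `resource` module is Unix-only.

```python
import resource  # Unix-only; on Kubernetes use per-container limits instead

def cap_memory(max_bytes: int) -> None:
    """Cap this process's address space; allocations beyond it fail (MemoryError)."""
    _soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))

# e.g. in the CPU-bound processing stage, tuned independently of the
# I/O-bound consumer and writer processes:
cap_memory(512 * 1024 * 1024)  # 512 MiB, an illustrative number
```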
Good for: Developers, Infrastructure
Sidecar detractors like to wield the performance-penalty argument to try to discredit the pattern, but that argument is very often made in an out-of-context, absolute sense — and only a Sith deals in absolutes.
Do sidecars have performance overhead? Most likely, but not always. Does it actually affect your application overall? That question depends entirely on your performance requirements. In the majority of cases, unless you're building a real-time trading application or an embedded real-time system, the overhead of a sidecar sitting next to your application is negligible. Unix domain sockets, shared memory, named pipes, or even plain old TCP over the local loopback provide excellent performance for the majority of use cases. Sure, it's not as fast as an in-process method call, but do you really care about those additional 0.5 ms? After all, you did sign up for the big sidecar in the sky when you chose that cloud-managed cache service over the high-bandwidth VM running in your VPC. And what about that ingress controller adding double-digit milliseconds of latency to your calls?
As with everything, assess your performance requirements, test your applications and sidecars, and make an informed decision about whether the pattern works for you. But don't rule something out just because someone on the internet says it might have "performance issues".
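Measuring is cheap, too. Here's a rough micro-benchmark sketch that times round trips over loopback TCP, a reasonable stand-in for the app-to-sidecar hop (the port is arbitrary):

```python
import socket
import statistics
import threading
import time

HOST, PORT = "127.0.0.1", 9099  # arbitrary loopback port for the sketch

def echo_server() -> None:
    """Accept one connection and echo everything back, standing in for a sidecar."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(64):
                conn.sendall(data)

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to bind

samples = []
with socket.create_connection((HOST, PORT)) as sock:
    for _ in range(1000):
        start = time.perf_counter()
        sock.sendall(b"ping")
        sock.recv(64)
        samples.append(time.perf_counter() - start)

print(f"median round trip: {statistics.median(samples) * 1e6:.1f} microseconds")
```

On a typical machine this lands in the tens of microseconds, which puts the "sidecar tax" in perspective against the rest of your request path.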
Conclusion
The sidecar pattern, like other architectural patterns, should be evaluated based on the use case at hand. If you're coming from a legacy, monolithic world where the servers are big and static, utilization is predictable, and the codebase is relatively low on external dependencies, then the sidecar pattern most likely doesn't make much sense. In the cloud-native world, where fast release cycles carry code that deals with unpredictable, high-scale, and highly fluctuating traffic and depends on open-source libraries, the sidecar pattern greatly helps keep things secure, reliable, and reusable. The ability to isolate code into distinct processes with fine-grained resource allocation and independent versioning is extremely important when writing distributed systems. Oh wait, didn't I just describe the main benefits of microservices, or their predecessor, SOA? And if that's the case, then aren't sidecars just a localized, more performant variant of the microservices architecture you might be using anyway? Well what do you know, I guess they are.
Containers as a packaging format and Kubernetes as a container orchestrator are great tools that make it easy to implement the sidecar pattern, but they're not the only means to run sidecars. Whether you orchestrate processes with Chef or Puppet on VMs or run on a multi-container serverless platform, sidecars can be configured to run next to and enhance your application. Frameworks like Dapr that utilize the sidecar pattern provide you with features for cloud-native workloads so you can focus on your business logic and keep your code secure, reliable, and portable, with out-of-the-box security features like pub/sub authorization, localhost-bound API tokens, authorization middleware, and secret management, among others. Other technologies, like service meshes, enable network-level resiliency and security features by utilizing the sidecar pattern.
Are you using sidecars? Let me know in the comments below.