Sidecar Containers in Kubernetes

Introduction

In Kubernetes, the concept of sidecar containers revolves around extending and complementing the main application's functionality within a Pod. Such a supporting container provides essential services around logging, security, monitoring, or data synchronization for the main application without touching the code of the primary application.

This article will outline how to leverage sidecar containers in your Kubernetes environment to make your applications highly scalable and maintainable.

Steps we will cover in this article:

  • Why Use Sidecar Containers?
  • Setup of K8s Sidecar Container
  • Understanding Pod Lifecycle with Sidecars
  • Considering Sidecars with Kubernetes Jobs
  • Roles of Containers in a Pod
  • Resource Allocation and Quotas

Why Use Sidecar Containers?

Sidecar containers are of great value in a Kubernetes environment because they give me an easy way to integrate additional features without actually touching the code of the main application. For example, to add logging or monitoring, I can run a sidecar alongside my main application container in the same Pod. This is very useful when a logging sidecar captures the logs coming from the main application for further analysis or storage.

Consider a web application that needs to forward logs to an external server. Rather than modifying my core application for this, I can implement a sidecar that listens for log entries and forwards them. This keeps the main application clean and focused on its primary tasks while still adding the new capability.

Another common use of sidecar containers is in service mesh configurations, where they typically handle routing or telemetry without requiring any changes to the application logic itself. Overall, sidecar containers give me the flexibility and modularity to extend my application's functionality efficiently.

Setup of K8s Sidecar Container

One can configure a Kubernetes Pod with a sidecar container by writing a YAML definition that includes both the main application container and the sidecar. Let me take a practical approach with a logging sidecar. Here is an example Pod that pairs an NGINX server with a log-tailing sidecar:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-logging
spec:
  containers:
  - name: myapp
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs
      mountPath: /var/log/myapp
  - name: logsidecar
    image: alpine:latest
    command: ["sh", "-c", "tail -F /var/log/myapp/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/myapp
  volumes:
  - name: logs
    emptyDir: {}

In this configuration:

  • I've defined a Pod named myapp-logging running an NGINX server as the main application.
  • The first container is myapp, the main application, responsible for serving web traffic. It logs the requests to access.log.
  • The second container, logsidecar, serves as the sidecar; it constantly tails the log file without interfering with the app.
  • The volume shared by the two containers, logs, lets them read and write common log data. This setup keeps my application code clean and logging running smoothly.

Understanding Pod Lifecycle with Sidecars

Another important feature of sidecar containers is that they interact very closely with the Pod lifecycle. A sidecar container is an additional container running alongside the main application container in the same Pod, and it runs for the entire life of the Pod, actively providing support until its services are no longer required. For example, a logging sidecar logs constantly while the main application is up and running, without getting in its way.

On Pod startup, native sidecar containers (declared as init containers with restartPolicy: Always) are started before the main application container, so a sidecar is ready to support, log, or monitor the application the moment it comes up. In a plain multi-container Pod, by contrast, the containers are simply started together in the order they are listed, with no readiness guarantee between them.

At termination, Kubernetes ensures that sidecar containers do not shut down abruptly. They are kept running until the main application container has finished shutting down, so they can keep supporting whatever its last tasks might be. This ordering, with sidecars outliving the main container at shutdown, has a real impact on resource utilization and functionality.

In other words, the lifecycle of sidecar containers is designed to supplement the main application, providing continuous function and greater overall service stability.
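Since Kubernetes v1.29, this start-before/stop-after ordering is built into the platform: a sidecar is declared as an init container carrying restartPolicy: Always, which starts before the app containers and is terminated after them. A sketch of the earlier logging Pod rewritten in that style (reusing the same names; nothing else changes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-logging
spec:
  initContainers:
  - name: logsidecar
    image: alpine:latest
    restartPolicy: Always   # marks this init container as a native sidecar
    command: ["sh", "-c", "tail -F /var/log/myapp/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/myapp
  containers:
  - name: myapp
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs
      mountPath: /var/log/myapp
  volumes:
  - name: logs
    emptyDir: {}
```

With this form, Kubernetes guarantees the sidecar is running before myapp starts and shuts it down only after myapp has terminated.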

Considering Sidecars with Kubernetes Jobs

To put the use of sidecars with Kubernetes Jobs in perspective, I'm going to take you through a Job configuration that uses a sidecar container to extend the Job's functionality. A sidecar can handle tasks such as logging or monitoring while the main job logic runs, without disturbing that logic.

Below is a sample Job configuration:

apiVersion: batch/v1
kind: Job
metadata:
  name: data-processing-job
spec:
  template:
    spec:
      containers:
      - name: processor
        image: my-processing-image
        command: ['sh', '-c', 'process data']
        volumeMounts:
        - name: data-volume
          mountPath: /logs
      - name: log-shipper
        image: alpine
        command: ['sh', '-c', 'tail -F /logs/output.log']
        volumeMounts:
        - name: data-volume
          mountPath: /logs
      restartPolicy: Never
      volumes:
      - name: data-volume
        emptyDir: {}

In this setting:

  • The main container, processor, does the heavy lifting of processing the data.
  • The log-shipper sidecar captures logs and enables monitoring without any modification to the main container's logic.

This example shows that sidecars provide additional functionality in the context of a Job, whose lifecycle can be much shorter than that of a typical long-running Pod. Note one caveat: a regular second container like this keeps the Pod alive indefinitely, so for the Job to complete once processor exits, the sidecar should be declared as a native sidecar (an init container with restartPolicy: Always), which Kubernetes terminates automatically after the main container finishes.
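With native sidecar support (Kubernetes 1.29+), the log shipper can keep running for the life of the Pod while the Job still completes when processor exits. A sketch reusing the names above; my-processing-image remains a placeholder from the earlier example:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: data-processing-job
spec:
  template:
    spec:
      restartPolicy: Never          # Job Pods must use Never or OnFailure
      initContainers:
      - name: log-shipper
        image: alpine
        restartPolicy: Always       # marks this init container as a sidecar
        command: ['sh', '-c', 'tail -F /logs/output.log']
        volumeMounts:
        - name: data-volume
          mountPath: /logs
      containers:
      - name: processor
        image: my-processing-image  # placeholder image
        command: ['sh', '-c', 'process data']
        volumeMounts:
        - name: data-volume
          mountPath: /logs
      volumes:
      - name: data-volume
        emptyDir: {}
```

When processor exits successfully, Kubernetes terminates the log-shipper sidecar and marks the Job complete.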

Roles of Containers in a Pod

Each container in a Kubernetes Pod takes on a certain role: sidecar container, init container, or application container. Understanding these distinct roles clarifies how sidecars remain independent and function differently from app and init containers.

Sidecar Containers

These extra containers, called sidecars, run in the same Pod as the main application. A sidecar is there to extend the functionality of the primary app without touching its code; for instance, alongside a web application, the sidecar might perform logging or monitoring. Sidecars share the same network and storage namespaces as the app container, enabling them to work together tightly and provide smooth support. They also have their own lifecycle, meaning they can be independently started, stopped, or restarted.

Init Containers

Init containers differ completely from sidecars. They run only during the setup phase of a Pod's lifecycle and perform initialization tasks. Unlike sidecars, they do not run concurrently with the app container; each must run to completion before the main application starts. There is no continued interaction after they finish, which makes them well suited to one-time setup work such as database migrations or configuration preparation.
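A minimal sketch of the init container pattern, assuming a hypothetical migration image and command (substitute your own tooling):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-with-init
spec:
  initContainers:
  - name: db-migrate
    # hypothetical image and command; runs to completion before myapp starts
    image: myorg/db-migrate:latest
    command: ['sh', '-c', 'run-migrations']
  containers:
  - name: myapp
    image: nginx:latest
    ports:
    - containerPort: 80
```

If db-migrate fails, Kubernetes restarts it according to the Pod's restartPolicy, and myapp never starts until the init container succeeds.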

Comparison with App Containers

App containers run the primary logic of an application. Unlike sidecars, they do not offer supporting services. While an app container's lifecycle is tied directly to the main application's functionality, a sidecar keeps running alongside it to provide continued services.

Examples

  1. Sidecar Example: A logging sidecar which would capture the logs produced by the main application and forward them to an external storage system.
  2. Init Container Example: A database migration init container prepares the schema of a database before the main application container starts.
  3. App Container Example: An NGINX container serving web content to clients.

Understanding these different container roles within a Pod helps with better design and implementation in Kubernetes, since each container's purpose is then well defined and appropriately executed.

Resource Allocation and Quotas

Kubernetes manages resource allocation for sidecar containers, init containers, and application containers within a Pod quite effectively. Mastering the rules that govern resource requests and limits is key to optimizing overall resource consumption and scheduling Pods effectively.

Resource Requests and Limits

Each container in a Pod can declare resource requests and limits to ensure the resources it needs are available without compromising other workloads. Requests specify the minimum resources guaranteed to the container, while limits cap its maximum resource utilization:

For example:

resources:
  requests:
    memory: "64Mi"
    cpu: "500m"
  limits:
    memory: "128Mi"
    cpu: "1"

Here, the container requests 64 MiB of memory and 500 millicores of CPU, and can use up to 128 MiB of memory and 1 CPU core. Clear definitions like these make it easier for Kubernetes to schedule Pods efficiently.

Impact on Pod Resource Consumption

In a Pod with more than one container, including sidecars and init containers, resource requests and limits take effect cumulatively across the regular containers. For scheduling, the Pod's effective request is the higher of the sum of the regular containers' requests and the largest single init container request, since init containers run sequentially before the app starts. This lets init containers reserve the resources they need for their execution without inflating the Pod's steady-state footprint.

Example of Resource Sharing

Consider, for example, a Pod hosting an app container and a logging sidecar. The app container requests 500m of CPU, while the sidecar requests 250m; the total effective request is therefore 750m. This matters for scheduling, since the scheduler must ensure that the sum of all requests fits within available node capacity.
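That 750m arithmetic looks like this in a Pod spec (container names and the sidecar's sleep command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: nginx:latest
    resources:
      requests:
        cpu: "500m"
  - name: logging-sidecar
    image: alpine:latest
    command: ["sh", "-c", "sleep infinity"]
    resources:
      requests:
        cpu: "250m"
  # effective Pod CPU request seen by the scheduler: 500m + 250m = 750m
```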

QoS Tiers

Kubernetes also classifies Pods into QoS tiers based on their resource requests and limits. This classification drives the eviction policy during memory pressure, so that critical services remain available. In this way, Kubernetes allocates resources efficiently and lets sidecars, init containers, and application containers coexist without resource contention or degraded system performance.
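The three QoS classes are Guaranteed (every container sets limits equal to requests for both CPU and memory), Burstable (at least one request or limit is set, but the Pod does not qualify as Guaranteed), and BestEffort (no requests or limits at all). A minimal sketch of a Guaranteed Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-app
spec:
  containers:
  - name: app
    image: nginx:latest
    resources:
      requests:
        memory: "128Mi"
        cpu: "500m"
      limits:               # limits match requests => Guaranteed QoS
        memory: "128Mi"
        cpu: "500m"
```

Guaranteed Pods are the last to be evicted under node memory pressure, which is why requests and limits are usually pinned equal for critical services.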

Conclusion

Sidecar containers are one of the most versatile tools in Kubernetes, letting us extend our application Pods with minimal perturbation. By deploying sidecars, we can handle logs, monitor applications, and much more without touching the core application code. Understanding how sidecars work internally empowers DevOps teams to create systems that are truly scalable, reliable, and maintainable, and mastering these concepts will let you take full advantage of what Kubernetes has to offer for your application deployments.