DaemonSets in Kubernetes

Introduction

Modern cloud environments are dynamic, and node-level resource management is key to frictionless operations. Kubernetes DaemonSets provide a powerful way to run specific Pods on every node, or a selected subset of nodes, ensuring uniformity and availability across the cluster. This article covers DaemonSet essentials and offers practical insight for DevOps professionals who want to streamline their work with Kubernetes.

Steps we will cover in this article:

  • How to Set Up a DaemonSet in Kubernetes
  • Running Pods Across Cluster Nodes
  • Managing Resource Allocation with DaemonSets
  • Using Taints and Tolerations Effectively
  • Rolling Updates and Managing DaemonSet Changes
  • Communicating with Daemon Pods
  • Alternatives to DaemonSets

How to Set Up a DaemonSet in Kubernetes

Creating a DaemonSet in Kubernetes is straightforward: most of the work happens in a YAML manifest. The required fields include apiVersion, kind, and metadata. Below is a simple template for a DaemonSet:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemon
spec:
  selector:
    matchLabels:
      app: my-daemon
  template:
    metadata:
      labels:
        app: my-daemon
    spec:
      containers:
      - name: my-daemon-container
        image: my-daemon-image

In this YAML, the spec.selector field specifies how the DaemonSet identifies the Pods it manages. The template section defines what those Pods look like: their labels, among other things, and the container images they run. The Pod template's labels must match the selector so the controller can find and manage the Pods it creates.

The nodeSelector and affinity fields let you pin the Pods to particular nodes. For example, your configuration may look similar to this:

spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
        environment: production

With this configuration, only nodes carrying both labels will run the Pods, ensuring efficient resource use and good performance.

Running Pods Across Cluster Nodes

DaemonSets in Kubernetes run Pods on all, or a selected subset of, nodes in the cluster. The DaemonSet controller creates one Pod on each eligible node. This happens automatically: when new nodes join, the controller launches the defined Pods on them, and when nodes are removed, the Pods that ran on them are garbage-collected, keeping everything tidy and resource-efficient.
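You can verify this behavior once the DaemonSet exists; assuming the my-daemon example from earlier, the following commands show the DaemonSet's desired/ready counts and list one Pod per eligible node:

kubectl get daemonset my-daemon
kubectl get pods -l app=my-daemon -o wide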

Node selectors and affinities let you control where the Pods get scheduled. Node selectors are straightforward: they are simple key-value pairs that identify the nodes eligible to run DaemonSet Pods. For instance, a nodeSelector in the YAML can target particular nodes based on their labels.

Node affinity, on the other hand, is a more expressive mechanism that matches node labels with rule expressions. That flexibility matters when you have more complex node-selection needs. The DaemonSet controller takes these configurations into account when making scheduling decisions, so the Pods are deployed appropriately for the current state of the cluster.
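As a sketch of what such an affinity rule might look like in the DaemonSet's Pod template (reusing the disktype label from earlier; the nvme value is hypothetical), this requires nodes whose disktype is either ssd or nvme:

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd
                - nvme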

The DaemonSet controller continuously watches the cluster and reconciles its state, ensuring Pods run on the specified hosts while adhering to any affinity or selector rules defined in the DaemonSet configuration. This dynamic management makes DaemonSets a powerful tool for node-level functions in a Kubernetes environment.

Managing Resource Allocation with DaemonSets

An important aspect of DaemonSet Pods is managing their resource allocation so the cluster's resource utilization stays balanced. Each DaemonSet's Pod template can specify resource requests and limits, which control how much CPU and memory each Pod may use, via the resources field in the YAML. Here's a simplified example:

resources:
  limits:
    memory: "200Mi"
    cpu: "100m"
  requests:
    memory: "200Mi"
    cpu: "100m"

In the above example, each Pod requests and is limited to 200Mi of memory and 100m of CPU. Allocating resources effectively requires understanding your application's normal usage patterns, along with the number of nodes, the workloads hosted on them, and what your application's functionality requires. Consider a logging agent running on every node: such an application must be constrained so it does not consume excessive resources and starve other services.

These limits also need ongoing monitoring and tuning as your application evolves or the cluster grows. Start with conservative estimates for requests and limits, then iterate based on performance data and the demands of your workload. Monitoring tools can give you visibility into actual resource utilization and inform that tuning.
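For instance, assuming the metrics-server add-on is installed in the cluster, you can compare the actual consumption of the DaemonSet's Pods against the requests and limits you set:

kubectl top pods -l app=my-daemon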

This proactive approach goes a long way toward maintaining an efficient Kubernetes cluster.

Using Taints and Tolerations Effectively

Kubernetes uses taints and tolerations to control which Pods may run on which nodes. DaemonSets rely on tolerations so their Pods get placed properly, enabling node-level functions to run wherever they are needed. A common pattern is to give DaemonSet Pods tolerations that allow them to run even on control plane nodes (called master nodes in older releases):

tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule

These tolerations allow DaemonSet Pods to schedule onto nodes whose role-based taints would otherwise repel them.

You will often want to add custom tolerations for specific use cases, especially to account for particular conditions on a node. For example, if nodes can experience high disk pressure, you might add a toleration so your logging DaemonSet keeps running there:

tolerations:
- key: node.kubernetes.io/disk-pressure
  operator: Exists
  effect: NoSchedule

That way, your logging service can still gather information on nodes where other Pods are prevented from scheduling. Used well, taints and tolerations ensure your resources are fully utilized while important node-level functions keep running smoothly across your cluster.
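Note that condition taints such as disk-pressure are applied by Kubernetes itself, but you can also taint nodes manually and pair the taint with a matching toleration in your DaemonSet. A sketch, using a hypothetical dedicated=logging taint on a hypothetical node-1:

kubectl taint nodes node-1 dedicated=logging:NoSchedule

tolerations:
- key: dedicated
  operator: Equal
  value: logging
  effect: NoSchedule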

Rolling Updates and Managing DaemonSet Changes

Rolling updates for DaemonSets in Kubernetes are important for keeping an application up and running while its Pods are replaced. They let you update Pods in steps rather than taking the whole service down at once, reducing possible disruption. Performing a rolling update only requires changing the Pod template in the DaemonSet's YAML configuration. Suppose you want to update the image version of your logging agent; you would update the Pod template as shown here:

spec:
  template:
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.6.0

With the specification changed, apply your changes with kubectl apply. The DaemonSet controller manages the update by replacing old Pods with new ones built from the updated configuration, node by node, so your services keep running throughout.
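For example, assuming the manifest is saved as fluentd-daemonset.yaml (a hypothetical filename):

kubectl apply -f fluentd-daemonset.yaml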

It is important to track the rollout status, which you can do with commands such as kubectl rollout status daemonset/fluentd-elasticsearch -n kube-system. You can limit disruption during updates by setting the maxUnavailable field, which controls how many Pods may be unavailable at once while the update proceeds. Adding a readiness probe also helps: a new Pod is not counted as available until it is actually ready to serve. With a deliberate approach to rolling updates, your cluster remains robust and functional during a change.
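The maxUnavailable setting lives under the DaemonSet's updateStrategy field; a minimal sketch:

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1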

Communicating with Daemon Pods

For a Kubernetes environment to work properly, clients must be able to communicate with the Pods a DaemonSet manages. Several interaction patterns exist, and which one fits depends on what the application needs.

Push Notifications

A simple approach is the push model, wherein Pods are configured to push their updates to some central service, such as a stats database. A log-collecting DaemonSet can periodically push log entries to a remote logging service. This keeps log data continuously aggregated and doesn't rely on external clients polling for data.

NodeIP and Known Port

Another common approach is to use the node IP along with a known port. Pods running under a DaemonSet can expose services on host ports, so clients can connect using the node's IP and the predefined port. For instance, if each logging agent in a DaemonSet listens on port 8080, clients can simply address requests to their respective node IPs at that port.
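A sketch of the host-port pattern in the DaemonSet's Pod template, assuming a hypothetical logging agent image that serves on port 8080:

spec:
  template:
    spec:
      containers:
      - name: logging-agent
        image: my-logging-agent   # hypothetical image
        ports:
        - containerPort: 8080
          hostPort: 8080          # exposed on every node's IP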

DNS-based Discovery

You can enable DNS-based communication by creating a headless service that uses the same Pod selector as the DaemonSet. With Kubernetes' DNS capabilities, clients can discover the available Pods and reach them through the service name. Continuing the DaemonSet example, if you create such a service for your DaemonSet Pods, clients can resolve the service name to reach specific instances running in the cluster.
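A sketch of such a headless Service for the my-daemon example; clusterIP: None is what makes it headless, so a DNS lookup of the service name returns the individual Pod IPs:

apiVersion: v1
kind: Service
metadata:
  name: my-daemon-headless   # hypothetical name
spec:
  clusterIP: None            # headless: DNS returns Pod IPs directly
  selector:
    app: my-daemon
  ports:
  - port: 8080
    targetPort: 8080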

Using Services for Access

Finally, a regular Kubernetes Service can be created that selects the DaemonSet's Pods. This makes interaction much easier, since clients can reach a random Pod of the DaemonSet without knowing specific node addresses. It neatly abstracts the underlying infrastructure and adds fault tolerance through load balancing. Together, these communication patterns ensure that your DaemonSet Pods can share vital information and services inside a Kubernetes cluster.
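The regular Service is nearly identical to the headless sketch above, just without clusterIP: None, so Kubernetes assigns a virtual IP and load-balances across the DaemonSet's Pods:

apiVersion: v1
kind: Service
metadata:
  name: my-daemon-svc   # hypothetical name
spec:
  selector:
    app: my-daemon
  ports:
  - port: 8080
    targetPort: 8080

A client Pod can then reach a random instance at my-daemon-svc.default.svc.cluster.local:8080, assuming the Service lives in the default namespace.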

Alternatives to DaemonSets

When deciding on the best approach for node-level tasks in Kubernetes, DaemonSets sit alongside alternatives such as static Pods, init scripts, and Deployments. Each has pros and cons that may make it more suitable for a given use case.

Static Pods

Static Pods are Pods that the kubelet manages directly on a node, defined by manifest files placed in a directory on that node. They are simple and effective for guaranteeing that core services run, but they lack flexibility and offer little control through the Kubernetes API; a minimal example follows the trade-offs below.

Pros:

  • Direct management by Kubelet for rapid execution.
  • Ideal for critical services that must operate on a specific node.

Cons:

  • No Kubernetes API management or scaling.
  • Harder to monitor and debug than DaemonSets.
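As a minimal sketch of a static Pod, an ordinary Pod manifest is simply placed in the kubelet's staticPodPath (which defaults to /etc/kubernetes/manifests on kubeadm-provisioned nodes), and the kubelet starts it without any API server involvement:

apiVersion: v1
kind: Pod
metadata:
  name: node-critical-agent   # hypothetical name
spec:
  containers:
  - name: agent
    image: my-node-agent      # hypothetical image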

Init Scripts

Init scripts let users run daemons directly on a node using more 'traditional' process managers such as systemd or init.d.

Pros:

  • Direct system integration, very little overhead needed.
  • Useful for legacy applications or specific node initialization tasks.

Cons:

  • Lack the control and monitoring capabilities provided by Kubernetes.
  • More difficult to maintain and manage consistency across nodes in large clusters.

Deployments

Deployments suit stateless applications that need to scale quickly and have Kubernetes manage rolling updates. They fit applications that don't have to run on every node but do require high availability and load balancing.

Pros:

  • Easy scaling with automated rollout and rollback capabilities.
  • Managed by the Kubernetes API, which can be dynamically adjusted.

Cons:

  • Not designed for node-local processes; doesn't guarantee Pods run on every node.
  • Makes it harder to ensure that necessary components are operating where they are most needed.

When to Use a DaemonSet

Choose a DaemonSet when:

  • You need to ensure that a certain Pod runs on all or specific nodes, such as log collectors or networking agents; these node-local tasks are essential for efficient cluster operation.
  • You want automated management of Pods as nodes are added or removed, without manual oversight.

In all, DaemonSets provide functionality critical for node-level tasks; understanding where they stand relative to other Kubernetes options goes a long way toward architectural decisions tailored to your specific operational needs.

Conclusion

DaemonSets are a significant way to extend node-level functionality across a Kubernetes cluster. Understanding how to configure and schedule them, and how their update mechanisms work, lets DevOps teams leverage DaemonSets to maintain consistent operation and handle resources efficiently. Whether deploying monitoring tools or network plugins, DaemonSets give modern cloud infrastructures the flexibility and control they need. As you navigate the Kubernetes landscape, incorporating DaemonSets strategically will make your deployment workflows more efficient and your systems more reliable.