Introduction
In the dynamic world of Kubernetes (k8s), keeping applications running optimally can be both complicated and critical. This article dives into how to configure liveness, readiness, and startup probes in a k8s environment. These probes are key to monitoring application health and improving availability through efficient traffic management. It offers practical, expert-level guidance on operating containerized applications, with hands-on examples.
Steps we will cover in this article:
- How to Configure Liveness Probes in K8s
- Defining a Readiness Probe in Traffic Management
- Implementing Startup Probes for Slow Starting Containers
- How to Configure a gRPC Liveness Probe
- Combining Probes for Optimum Performance of the Application
How to Configure Liveness Probes in K8s
Liveness probes verify that the containers in a Kubernetes environment are healthy. If a container gets into a bad state, Kubernetes can restart it automatically based on the liveness probe, which in turn increases the availability of your application. To implement a liveness probe, we will use a BusyBox container that runs a periodic command-based check. Here is an example liveness probe configuration for a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
In this configuration, Kubernetes runs the command cat /tmp/healthy every 5 seconds, starting after an initial 5-second delay. For the first 30 seconds the file /tmp/healthy exists, so the probe returns success. After 30 seconds the container deletes the file, the command begins to fail, and Kubernetes restarts the container. You can watch this happen in the Pod's events while you test:
kubectl describe pod liveness-exec
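To reproduce the behavior end to end, you can apply the manifest and watch the RESTARTS counter climb as the probe starts failing. This sketch assumes the manifest above is saved as liveness-exec.yaml:

# Create the Pod from the manifest above
kubectl apply -f liveness-exec.yaml

# Watch the Pod; after roughly 35 seconds the probe fails and RESTARTS increments
kubectl get pod liveness-exec -w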
Note that liveness probes differ from readiness probes: the former determine whether an application is healthy, while the latter determine whether it is ready to receive traffic. This separation lets Kubernetes route traffic intelligently, since an application can be alive but not yet ready.
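Exec commands are only one option for liveness checks. If your application already exposes a health endpoint over HTTP, an httpGet liveness probe is usually simpler. Here is a minimal sketch, assuming the application serves /healthz on port 8080; the image name my-web-app:latest is a placeholder:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: web
    image: my-web-app:latest  # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz  # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5

Kubernetes treats any HTTP status code from 200 up to but not including 400 as success; any other status code or a connection failure counts as a failed check.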
Defining a Readiness Probe in Traffic Management
Readiness probes control whether a Pod receives traffic, ensuring that incoming requests are not routed to a Pod before it has finished its startup tasks and is fully ready. This is important for high availability and performance. A readiness probe can use a TCP socket check to gate the flow of traffic. The following example runs a goproxy container with this setup:
apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: registry.k8s.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
In the above configuration, the readiness probe checks whether the goproxy service is available by establishing a TCP connection to port 8080. If it can connect, the Pod is considered ready to serve traffic; if not, it is marked as unready. The kubelet starts checking after a delay of 15 seconds and repeats the check every 10 seconds. Keep in mind that the liveness probe serves a different purpose: it restarts containers that are in an unhealthy state rather than gating traffic. Using both probes together lets Kubernetes manage traffic and recovery under real-world conditions.
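You can observe the readiness gate in action by watching the Pod: the READY column stays at 0/1 until the TCP check first succeeds, then flips to 1/1, at which point any Service selecting the Pod starts sending it traffic:

kubectl get pod goproxy -w

# Probe failures, if any, show up as events on the Pod
kubectl describe pod goproxy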
Implementing Startup Probes for Slow Starting Containers
Startup probes are useful for applications that take a long time to come up. Kubernetes holds off liveness and readiness checks until the startup probe succeeds, which prevents the liveness probe from killing containers that are simply slow to start. The following configuration applies a startup probe to a Pod in Kubernetes:
apiVersion: v1
kind: Pod
metadata:
  name: slow-start-app
spec:
  containers:
  - name: slow-start-container
    image: myapp:latest
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 20
      periodSeconds: 10
      failureThreshold: 30
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
In this example, the startupProbe checks the /healthz endpoint, starting 20 seconds after the container starts. It probes every 10 seconds and gives up after 30 consecutive failures, so the application gets up to 20 + 30 × 10 = 320 seconds to start before Kubernetes considers it failed. Liveness and readiness checks do not begin until the startup probe succeeds. Startup probes give containers sufficient time to come up without unnecessary interruptions, which is especially valuable for applications with inconsistent startup times, such as services that load large resources or wait on other services.
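If the application exposes no HTTP endpoint while it is still booting, the startup probe can use a TCP or exec check instead. Here is a minimal TCP variant, as a sketch under the same assumption that the application listens on port 8080:

startupProbe:
  tcpSocket:
    port: 8080          # assumed application port
  periodSeconds: 10
  failureThreshold: 30  # up to 30 x 10 = 300 seconds to start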
How to Configure a gRPC Liveness Probe
Kubernetes supports configuring gRPC liveness probes, provided the application implements the gRPC Health Checking Protocol. This is useful for applications where traditional HTTP or TCP health checks are impractical or insufficient. gRPC probes have an advantage over other probe types in that they health-check gRPC services directly.
Setup
Here’s how you can set it up using an etcd container as an example:
apiVersion: v1
kind: Pod
metadata:
  name: etcd-with-grpc
spec:
  containers:
  - name: etcd
    image: registry.k8s.io/etcd:3.5.1-0
    command: ["/usr/local/bin/etcd", "--data-dir", "/var/lib/etcd", "--listen-client-urls", "http://0.0.0.0:2379", "--advertise-client-urls", "http://127.0.0.1:2379", "--log-level", "debug"]
    ports:
    - containerPort: 2379
    livenessProbe:
      grpc:
        port: 2379
      initialDelaySeconds: 10
Here, we configure the livenessProbe to use the gRPC protocol and specify the port on which the etcd service listens. The probe waits 10 seconds after the container starts before checking the health status. If the liveness check fails for any reason, such as the service being down, Kubernetes restarts the container.
Be sure to specify the correct port, since an incorrect setting will cause probe failures. Unlike HTTP and TCP probes, gRPC probes do not support named ports, so the actual port number must be used. gRPC liveness probes verify that your gRPC services are alive and able to serve requests, improving the overall resiliency and stability of the application.
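To check the same health endpoint manually, one option is to port-forward to the Pod and query it with the open-source grpc_health_probe CLI. This sketch assumes grpc_health_probe is installed locally; etcd implements the standard gRPC health service:

# Forward local port 2379 to the Pod
kubectl port-forward pod/etcd-with-grpc 2379:2379 &

# Query the gRPC health endpoint; a healthy service reports "status: SERVING"
grpc_health_probe -addr=localhost:2379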
Combining Probes for Optimum Performance of the Application
When combined properly, liveness, readiness, and startup probes improve the uptime and reliability of an application running in Kubernetes. By deploying these probes strategically to address different operational scenarios, your application can serve traffic effectively.
Combining Probes Strategies
- Liveness Probes: Detect when an application has entered a bad state and should be restarted.
- Readiness Probes: Ensure that only healthy instances of your application receive traffic. Readiness probes let you hold traffic back while your application is bootstrapping or undergoing maintenance.
- Startup Probes: Used alongside liveness and readiness probes, these avoid premature liveness checks on applications that take longer to initialize, giving them adequate time to become fully operational.
Configuration Example
Here is an example of how you might put these probes together in effective ways in your Kubernetes deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: my-app-image:latest
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
        startupProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 30
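Assuming the manifest is saved as my-app-deployment.yaml, you can roll it out and confirm that all replicas pass their probes; kubectl rollout status does not report success until the desired number of replicas are ready:

kubectl apply -f my-app-deployment.yaml
kubectl rollout status deployment/my-app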
Best Practices
- Use Appropriate Delays: Set initialDelaySeconds to accommodate the expected startup times of your application components.
- Monitor Probe Failures: Because most modern systems retry automatically, logging probe failures lets you spot issues early and react before they grow into significant problems (see the example command after this list).
- Test Configurations: Regularly test your probe configurations to confirm they behave as expected under production-like conditions.

Using all of these probe types together yields a resilient application architecture that minimizes downtime and keeps your application responsive.
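Probe failures surface as events with the reason Unhealthy, so one quick way to spot them across a namespace is:

# List recent probe-failure events emitted by the kubelet
kubectl get events --field-selector reason=Unhealthy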
Conclusions
Effectively using the probes provided by Kubernetes can give a big boost to an application's resiliency and reliability. Liveness probes catch crashes and recover from them, readiness probes regulate the flow of incoming traffic, and startup probes buffer slow starts. Combined, they form a strong toolkit for managing application health and optimizing resource utilization in k8s environments. With the right mix of probes, you can achieve stable, performant application deployments.