
Mastering kubectl logs - A DevOps Engineer's Guide

8 min read
Author: James Smith
Senior Kubernetes Engineer
Kubernetes is my thingβ€”I love building scalable systems and fine-tuning container workflows for better performance.

This article was last updated on December 21, 2024, to include advanced techniques for working with kubectl logs, such as handling multiple pod logs, debugging crash loops with previous container logs, and managing large log outputs, along with simplified explanations to enhance clarity.

Introduction

TL;DR

What is kubectl logs?
kubectl logs fetches logs from containers in Kubernetes pods so you can debug and monitor applications. The output comes directly from a container's stdout and stderr streams, which makes it an essential troubleshooting tool.

How to use kubectl logs to debug Kubernetes pods?
kubectl logs retrieves logs from a pod in Kubernetes. If the pod contains multiple containers, specify the container with -c <container-name>.
kubectl logs -f <pod-name> streams logs in real time; kubectl logs --since=1h or kubectl logs --since-time=<timestamp> filters them by time. It's a must-have tool for monitoring and debugging.

Having debugged numerous Kubernetes clusters, I can confirm that the first command I reach for every day is kubectl logs. Whether I'm debugging a failed pod, tracking application behavior, or simply trying to understand why a deployment didn't behave as expected, this command has saved me hours of sleep on more than one occasion.

Now, let me explain why this is such an important command: When running applications in Kubernetes, you don't have direct access to your containers like you do with Docker on your local machine. The kubectl logs command is your window into what's happening inside those containers. I use it dozens of times daily for:

  • Debugging application crashes
  • Monitoring application startup
  • Investigating performance issues
  • Verifying configuration changes
  • Troubleshooting network issues


Understanding kubectl logs

kubectl logs is the one command that never leaves my tool belt for inspecting container logs in Kubernetes. It works much like docker logs, but adds features that make it a better fit for a distributed environment.

Here is the basic syntax I use:

kubectl logs <pod-name> [-c container-name] [flags]

This command fetches logs from the container runtime (such as containerd or Docker) and streams them into my terminal. The logs are taken directly from the container's stdout and stderr streams.

Getting Started with Basic kubectl Commands

Single Container Logs

For simple pods with just one container, I use:

kubectl logs nginx-pod

This command is quite straightforward, but let's see what happens behind the scenes:

  1. Kubernetes identifies the pod.
  2. Since the pod has only one container, it selects that container automatically.
  3. It streams the container's stdout/stderr logs.
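
Under the hood, kubectl logs is just calling the pod's log subresource on the API server. If you're curious, you can hit that endpoint yourself; here's a rough sketch, assuming a pod named nginx-pod in the default namespace:

# Query the log subresource directly (pod name and namespace are assumptions)
kubectl get --raw "/api/v1/namespaces/default/pods/nginx-pod/log?tailLines=10"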

Working with Multiple Containers

When working with pods that have multiple containers, as is common in production environments, you need to specify the container name:

kubectl logs web-pod -c nginx-container

A mistake I made early in my career, and one I still see regularly, is forgetting to specify the container name in a multi-container pod. You will receive an error like:

Error from server (BadRequest): a container name must be specified for pod web-pod, choose one of: [nginx-container sidecar-container]

Live Log Streaming

One of my favorite features is streaming logs in real time with -f:

kubectl logs -f api-pod

I use this constantly during deployments to watch for startup issues.
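
When following a busy pod, I rarely want its entire history replayed first. A small sketch combining -f with --tail (the pod and container names here are placeholders):

# Follow new lines, starting from the last 20
kubectl logs -f --tail=20 api-pod -c app

Starting from a small tail keeps the terminal readable while still giving recent context; Ctrl+C stops the stream.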

Checking Previous Container Logs

If a container crashed and restarted, I view the previous container's logs with:

kubectl logs --previous nginx-pod

This has saved me many times when debugging crash loops.
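
Before reaching for --previous, I like to confirm the container actually restarted. A quick sketch, assuming a single-container pod named nginx-pod:

# Check the restart count of the pod's first container
kubectl get pod nginx-pod -o jsonpath='{.status.containerStatuses[0].restartCount}'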

Working with Log Output

Time-Based Filtering

In incident investigations, I often need logs from specific time windows:

# Logs of the last hour
kubectl logs --since=1h nginx-pod

# Logs since a specific timestamp
kubectl logs --since-time=2024-01-01T10:00:00Z nginx-pod
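
Note that --since-time expects an RFC3339 timestamp. On Linux I often compute it with GNU date; this is a sketch, since the -d flag below is GNU-specific (macOS/BSD date uses -v instead):

# Logs from the last 30 minutes, with a computed RFC3339 timestamp
kubectl logs --since-time="$(date -u -d '30 minutes ago' +%Y-%m-%dT%H:%M:%SZ)" nginx-pod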

Managing Large Log Outputs

For chatty applications, I usually limit the output:

# Show only the last 100 lines
kubectl logs --tail=100 nginx-pod

# Only show recent logs with timestamps
kubectl logs --timestamps=true --tail=50 nginx-pod
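
Because kubectl logs writes to stdout, standard shell tools work on it too. A couple of combinations I lean on to cut through the noise:

# Pull a bounded window and filter it down to errors
kubectl logs --tail=1000 nginx-pod | grep -i error

# Or page through it interactively
kubectl logs --tail=1000 nginx-pod | less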

Handling Multiple Pod Logs

When working with distributed applications, I frequently need to collect logs from several pods at once:

# Logs from all pods with label app=nginx
kubectl logs -l app=nginx --all-containers=true

# Logs from all containers in a pod
kubectl logs nginx-pod --all-containers=true
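
One caveat worth knowing: when you select pods by label, kubectl defaults to the last 10 lines per container and caps concurrent streams at 5. Flags I usually add in that situation (the values below are just examples):

# Raise the per-container tail and tag each line with its pod/container source
kubectl logs -l app=nginx --all-containers=true --tail=100 --prefix

# When following many pods, raise the concurrent stream cap
kubectl logs -f -l app=nginx --max-log-requests=10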

When Things Go Wrong: A Debugging Guide

What to Check When Pods Won't Start

When a pod isn't starting up properly, here's my standard operating procedure:

# First check current logs
kubectl logs app-pod

# If pod is crash-looping, check previous container logs
kubectl logs --previous app-pod

# Follow logs during restart
kubectl logs -f app-pod
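
Logs only tell part of the story: scheduling failures and image pull errors happen before the container ever writes to stdout. So alongside the commands above, I check the pod's events; a quick sketch:

# Events explain failures that never reach the container's logs
kubectl describe pod app-pod

# Or filter events for the pod directly
kubectl get events --field-selector involvedObject.name=app-pod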

Finding Issues in Production Environments

In production, I frequently have to look at several containers:

# Check application logs
kubectl logs app-pod -c application-container

# Check sidecar logs
kubectl logs app-pod -c istio-proxy

# Save logs for later analysis
kubectl logs app-pod --all-containers=true > debug_logs.txt
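
When saving logs for later analysis, I find it pays to include timestamps and a dated filename so captures don't overwrite each other. A small sketch of how I capture them:

# Capture all containers, with timestamps, into a uniquely named file
kubectl logs app-pod --all-containers=true --timestamps > "app-pod-$(date +%Y%m%d-%H%M%S).log"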

Lessons I've Learned the Hard Way

Over the years I've collected quite a few tips that make life with kubectl logs easier. These aren't things you'll find in the official documentation; they're lessons learned after hours and hours of debugging production environments. Here are some of my favorite techniques, the ones I wish someone had told me when I was starting out:

  • Smart Use of Labels
    One of Kubernetes' biggest strengths is its label system, and it changes how you manage logs. Being able to quickly pull logs from specific components makes a big difference for me. Instead of maintaining long lists of pod names or writing complex scripts, I use labels:

    # Instead of pod names, use labels
    kubectl logs -l app=backend,environment=prod
  • Command Chaining
    Sometimes you need the logs from the most recent pod in a deployment (during rolling updates, for example). Here's a neat trick I use to avoid hunting for the latest pod manually:

    # Get logs from the newest pod, sorted by creation time
    kubectl logs $(kubectl get pod -l app=nginx --sort-by=.metadata.creationTimestamp -o jsonpath='{.items[-1:].metadata.name}')
  • Save Time with Aliases
    When you are typing these commands hundreds of times a day, every keystroke counts. These aliases have probably saved me days of typing over the years, and they can grow into a small helper, sketched just after this list:

    alias kl='kubectl logs'
    alias klf='kubectl logs -f'
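
Going one step further, I sometimes wrap the label-plus-jsonpath trick from above into a small shell function. This is just a sketch; klnew is a made-up name, and it assumes your pods carry an app label:

# Hypothetical helper: follow logs from the newest pod with app=<name>
klnew() {
  kubectl logs -f "$(kubectl get pod -l "app=$1" \
    --sort-by=.metadata.creationTimestamp \
    -o jsonpath='{.items[-1:].metadata.name}')"
}

# Usage: klnew nginx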

Common kubectl logs Problems and How I Solve Them

After years of working with Kubernetes in various environments, I keep running into the same handful of issues. Here's how I handle each one; these solutions have become my go-to fixes for the most frustrating kubectl logs problems:

What to Do With Massive Log Files

One of the most common issues I deal with is containers generating gigabytes of logs. When your application is chatty or has been running for a while, fetching all the logs can overwhelm your terminal or even crash the session. Here's how I cap the output:

kubectl logs --limit-bytes=100000 large-log-pod

You may also see this error when asking for logs from before a restart:

Error from server (BadRequest): previous terminated container not found

This means there is no previous container instance to read from: either the container never restarted, or its earlier logs have already been cleaned up on the node. When a container has restarted and the previous run's logs are still around, the --previous flag is exactly how you reach them:

kubectl logs --previous large-log-pod

Can't Find Your Container?

This one used to drive me crazy: you know the container is there, but kubectl logs can't seem to find it. Usually this happens in pods with multiple containers or when container names don't match what you expect. Here's my debugging approach.

First, I make sure the pod exists:

kubectl get pod nginx-pod

Then I check the names of the containers:

kubectl get pod nginx-pod -o jsonpath='{.spec.containers[*].name}'
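
That jsonpath prints the names space-separated, which gets hard to scan in pods with many containers. A variant I use to print one name per line:

# One container name per line (init containers live under .spec.initContainers)
kubectl get pod nginx-pod -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'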

Common errors you might see:

Error from server (NotFound): pod "nginx-pod" not found

This usually means you're in the wrong namespace. I check with:

kubectl config get-contexts
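
If the pod does live in another namespace, point kubectl logs at it explicitly. A quick sketch for tracking the pod down (my-namespace is a placeholder):

# Search every namespace for the pod
kubectl get pods --all-namespaces | grep nginx-pod

# Then fetch logs from the right namespace
kubectl logs nginx-pod -n my-namespace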

Fixing Access and Permission Issues

RBAC issues are probably the most confusing to debug, especially in production clusters with strict security policies. Before diving into complex RBAC rules, I always start with a simple check:

kubectl auth can-i get pods --subresource=log

If that returns no, here's my troubleshooting sequence, followed by a sketch of how to grant the missing permission:

  1. Check the current context: kubectl config current-context
  2. Check the namespace being used: kubectl config view --minify | grep namespace
  3. List my roles: kubectl get roles,clusterroles
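
If the permission really is missing, read access to pod logs is granted through the pods/log subresource. Here's a minimal sketch using imperative commands; the role name, the user jane, and the default namespace are all assumptions, and in practice you'd likely manage this as reviewed YAML:

# Hypothetical Role that can read pods and their log subresource
kubectl create role pod-log-reader --verb=get,list --resource=pods,pods/log -n default

# Bind it to a user (jane is a placeholder)
kubectl create rolebinding jane-log-reader --role=pod-log-reader --user=jane -n default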

Conclusion

The kubectl logs command is a permanent fixture in my Kubernetes utility belt. It looks simple on the surface, but its many options make it genuinely powerful for debugging and monitoring applications in a Kubernetes cluster. I use it every day, and learning its nuances has made me far more effective at troubleshooting Kubernetes environments.

Good log management is key to keeping Kubernetes applications healthy. Mastering kubectl logs will make your life easier as a Kubernetes operator, whether you're debugging a problem in production or simply monitoring your application's behavior.