Parsing Kubernetes Pod Logs
Parse Kubernetes pod log output, including the klog format used by control plane components, which carries a severity-level prefix, a timestamp, and a source file identifier.
Detailed Explanation
Kubernetes Log Formats
Kubernetes components and applications running in pods produce logs in various formats. The two most common are the klog format (used by Kubernetes system components) and application-specific formats (often JSON).
klog Format
Kubernetes system components (kube-apiserver, kube-controller-manager, kubelet, kube-scheduler) use the klog library, which produces logs in this format:
```
Lmmdd hh:mm:ss.uuuuuu threadid file:line] message
```
Where L is the severity level letter:
- I = INFO
- W = WARNING
- E = ERROR
- F = FATAL
Example klog Lines
```
I0115 10:30:00.000000 1 controller.go:123] Starting reconciliation loop
W0115 10:30:01.000000 1 reflector.go:456] Watch channel was closed, restarting
E0115 10:30:02.000000 1 leaderelection.go:78] Failed to acquire lease default/my-controller
F0115 10:30:03.000000 1 server.go:234] Unable to start API server: port already in use
```
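The klog header described above can be pulled apart with a single regular expression. Below is a minimal sketch in Python using the standard `re` module; the function name `parse_klog_line` and the extracted field names (`severity`, `thread_id`, and so on) are illustrative choices, not part of any particular parser's API.

```python
import re

# Regex for the klog header: severity letter, MMDD date, HH:MM:SS.microseconds
# time, thread id, source file:line, then "] " and the free-form message.
KLOG_RE = re.compile(
    r"^(?P<severity>[IWEF])"
    r"(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<thread_id>\d+)\s+"
    r"(?P<file>[^:]+):(?P<line>\d+)\]\s"
    r"(?P<message>.*)$"
)

# Map the single-letter prefix to a full severity name.
SEVERITY = {"I": "INFO", "W": "WARNING", "E": "ERROR", "F": "FATAL"}

def parse_klog_line(line):
    """Parse one klog-formatted line into a dict of fields, or return None."""
    m = KLOG_RE.match(line)
    if m is None:
        return None
    fields = m.groupdict()
    fields["severity"] = SEVERITY[fields["severity"]]
    return fields
```

Running it against one of the example lines above yields a dict with `severity` set to `"ERROR"`, `file` set to `"leaderelection.go"`, and the message text in `message`; lines that do not match the klog header come back as `None`, which lets a caller fall through to other format handlers.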
Fields Extracted
| Field | Description |
|---|---|
| Severity | Single letter prefix: I, W, E, F |
| Date | MMDD (month and day without year) |
| Time | HH:MM:SS.microseconds |
| Thread ID | Process/goroutine identifier |
| Source File | Go source file and line number |
| Message | The log message content |
Application Logs in Pods
Application pods typically emit their own log format to stdout/stderr, which kubectl logs captures. These can be JSON structured logs, plain text, or any other format; the parser's auto-detect mode distinguishes between them automatically.
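Format auto-detection of this kind usually amounts to trying the most specific formats first. The sketch below is an illustrative heuristic, not the parser's actual detection logic: it checks for a JSON object, then for the klog header shape, and falls back to plain text.

```python
import json
import re

# Shape of a klog header: severity letter, MMDD, time with microseconds.
KLOG_HEADER_RE = re.compile(r"^[IWEF]\d{4} \d{2}:\d{2}:\d{2}\.\d{6} ")

def detect_format(line):
    """Classify a log line as 'json', 'klog', or 'plain' (illustrative heuristic)."""
    stripped = line.strip()
    if stripped.startswith("{"):
        try:
            json.loads(stripped)
            return "json"
        except json.JSONDecodeError:
            pass  # looked like JSON but was not; keep checking
    if KLOG_HEADER_RE.match(line):
        return "klog"
    return "plain"
```

Ordering matters here: a JSON log line could in principle start with a capital letter inside a string, but a klog line can never parse as a JSON object, so the JSON check safely runs first.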
Use Case
Debugging Kubernetes control plane issues, analyzing kubelet and controller-manager logs, investigating pod scheduling failures, monitoring for leader election problems, and troubleshooting CrashLoopBackOff containers by examining their log output.