# Redis Cache Deployment with EmptyDir Volume
Deploy Redis as an in-memory cache in Kubernetes using a Deployment with emptyDir volume for optional RDB persistence and TCP socket health probes.
## Detailed Explanation

### Redis Cache on Kubernetes
Redis as a cache does not need persistent storage: if the pod restarts, the cache can simply be rebuilt. However, mounting an emptyDir volume at `/data` for Redis's RDB dumps allows faster warm-up after a container restart, since the dump file survives as long as the pod itself does.
### Key Configuration
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: "redis"
    role: "cache"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "redis"
  template:
    metadata:
      labels:
        app: "redis"
        role: "cache"
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          ports:
            - name: redis
              containerPort: 6379
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "250m"
              memory: "256Mi"
          volumeMounts:
            - name: redis-data
              mountPath: /data
          livenessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: redis-data
          emptyDir: {}
```

Note that `spec.selector.matchLabels` and matching `template.metadata.labels` are required; without them the API server rejects the Deployment.
### EmptyDir vs PVC
| Feature | emptyDir | PVC |
|---|---|---|
| Persists across container restarts | Yes | Yes |
| Persists across pod restarts | No | Yes |
| Performance | RAM-speed (tmpfs) or disk | Depends on storage class |
| Use case | Cache, temp data | Persistent queue, session store |
For Redis as a pure cache, emptyDir is appropriate. If the data must survive pod deletion or rescheduling (e.g., Redis as a session store), use a PVC instead.
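The swap is a small change to the manifest. A minimal sketch, assuming the cluster has a default storage class; the claim name `redis-data-pvc` and 1Gi size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data-pvc        # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce           # a single Redis pod mounts the volume
  resources:
    requests:
      storage: 1Gi
```

In the Deployment, the `emptyDir: {}` volume would then be replaced with `persistentVolumeClaim: {claimName: redis-data-pvc}`; the `volumeMounts` entry stays the same.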
### TCP Socket Probe

Redis does not expose an HTTP endpoint by default, so we use a TCP socket probe on port 6379. This checks that the Redis process is listening and accepting connections. For a more thorough check, you could use an exec probe running `redis-cli ping`.
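A sketch of the exec variant, which confirms Redis actually answers commands rather than merely accepting connections (the timing values are illustrative, and an authenticated instance would also need `-a` or `REDISCLI_AUTH`):

```yaml
livenessProbe:
  exec:
    command:
      - redis-cli
      - ping              # a healthy server replies PONG and exits 0
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 2       # fail the probe if redis-cli hangs
```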
### Memory Management

Set the Redis `maxmemory` directive to slightly less than the container memory limit (e.g., 200mb if the limit is 256Mi) and configure an eviction policy such as `allkeys-lru`. This lets Redis evict keys itself instead of growing past the limit and being terminated by the kernel's OOM killer.
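One way to pass these settings without a custom config file, assuming the stock `redis:7-alpine` image (the values shown are illustrative for a 256Mi limit):

```yaml
containers:
  - name: redis
    image: redis:7-alpine
    args:                        # appended to the image's redis-server entrypoint
      - "--maxmemory"
      - "200mb"
      - "--maxmemory-policy"
      - "allkeys-lru"
```

For larger setups, the same directives can instead live in a `redis.conf` mounted from a ConfigMap.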
### Use Case
Deploying Redis as an ephemeral caching layer for web applications, session storage, or rate limiting in a Kubernetes cluster without requiring persistent data across pod rescheduling.