Kubernetes Pod CIDR Range Configuration

Convert Kubernetes pod and service IP ranges to CIDR notation. Understand cluster CIDR, service CIDR, and node allocation for CNI plugins.


Kubernetes CIDR Configuration

Kubernetes clusters require CIDR blocks for three separate address spaces: Pod CIDR (addresses assigned to pods), Service CIDR (virtual IPs for services), and Node CIDR (the underlying host network).

Typical Kubernetes CIDR Allocation

Pod CIDR:     10.244.0.0 - 10.244.255.255 → 10.244.0.0/16
Service CIDR: 10.96.0.0 - 10.96.127.255   → 10.96.0.0/17
Node Network: 10.0.0.0 - 10.0.0.255       → 10.0.0.0/24

Pod CIDR Sizing

Each node gets a slice of the pod CIDR. The kube-controller-manager's --node-cidr-mask-size flag controls the size of that per-node slice:

Cluster CIDR   Node Mask   Addresses/Node   Max Nodes
/16            /24         256              256
/14            /24         256              1,024
/12            /24         256              4,096
/16            /25         128              512
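The table follows from simple powers of two: the number of nodes is 2^(node mask − cluster prefix), and the addresses per node are 2^(32 − node mask). A small sketch reproducing the rows:

```python
import ipaddress

def node_capacity(cluster_cidr: str, node_mask: int):
    """Max nodes and addresses per node for a cluster CIDR and --node-cidr-mask-size."""
    cluster = ipaddress.ip_network(cluster_cidr)
    max_nodes = 2 ** (node_mask - cluster.prefixlen)
    addrs_per_node = 2 ** (32 - node_mask)
    return max_nodes, addrs_per_node

# Example cluster CIDRs chosen to match the table above
for cidr, mask in [("10.244.0.0/16", 24), ("10.244.0.0/14", 24),
                   ("10.240.0.0/12", 24), ("10.244.0.0/16", 25)]:
    nodes, addrs = node_capacity(cidr, mask)
    print(f"{cidr:>16}  /{mask}  {addrs:>4} addrs/node  {nodes:>5} max nodes")
```

Note that "addresses per node" is an upper bound on pods; the kubelet's own max-pods setting (110 by default) usually caps the practical number lower.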

CNI Plugin Considerations

Different CNI plugins handle CIDR differently:

  • Calico: Supports custom CIDR, uses BGP for routing
  • Flannel: Allocates /24 per node from the cluster CIDR
  • Cilium: Supports CIDR-based IPAM or CRD-based allocation
  • AWS VPC CNI: Uses the VPC subnet directly (no overlay)
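Flannel's per-node allocation, for example, amounts to carving the cluster CIDR into consecutive /24 subnets and leasing one to each node. A minimal illustration (the node names are hypothetical):

```python
import ipaddress
from itertools import islice

cluster = ipaddress.ip_network("10.244.0.0/16")

# The first few /24 leases Flannel-style allocation would hand out, one per node
for i, subnet in enumerate(islice(cluster.subnets(new_prefix=24), 4)):
    print(f"node-{i}: {subnet}")
```

A /16 cluster CIDR yields exactly 256 such /24 subnets, matching the "Max Nodes" column above.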

kubeadm Configuration

kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12
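Before running kubeadm init, it is worth sanity-checking that the two flags do not overlap, since overlapping pod and service ranges break service routing. A quick check with the `ipaddress` module:

```python
import ipaddress

pod = ipaddress.ip_network("10.244.0.0/16")   # --pod-network-cidr
svc = ipaddress.ip_network("10.96.0.0/12")    # --service-cidr (kubeadm's default)

# Overlapping pod and service CIDRs break kube-proxy/CNI routing
assert not pod.overlaps(svc), "pod and service CIDRs overlap"
print("CIDRs OK:", pod, svc)
```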

EKS/GKE/AKS Defaults

Provider   Pod CIDR           Service CIDR
EKS        VPC subnet range   172.20.0.0/16
GKE        10.0.0.0/14        10.4.0.0/19
AKS        10.244.0.0/16      10.0.0.0/16

Converting Non-Standard Ranges

If you're migrating from a non-standard setup where pod ranges were defined as start-end:

Range: 10.244.0.0 - 10.247.255.255
CIDR:  10.244.0.0/14

This /14 block supports up to 1,024 nodes with /24 per-node allocation.
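This start-end to CIDR conversion can be done programmatically with `ipaddress.summarize_address_range`, which returns the minimal list of CIDR blocks covering a range (a range that doesn't fall on a power-of-two boundary yields more than one block):

```python
import ipaddress

def range_to_cidrs(start: str, end: str):
    """Convert a start-end IP range to the minimal list of covering CIDR blocks."""
    return [str(n) for n in ipaddress.summarize_address_range(
        ipaddress.ip_address(start), ipaddress.ip_address(end))]

print(range_to_cidrs("10.244.0.0", "10.247.255.255"))  # → ['10.244.0.0/14']
```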

Use Case

A platform engineer is migrating a Kubernetes cluster to a new network. The existing pods use 10.244.0.0 - 10.245.255.255, which they convert to 10.244.0.0/15 for the new cluster's --pod-network-cidr flag in the kubeadm configuration.
