Validate nodeSelector, Tolerations, and Affinity

Check nodeSelector, tolerations, and affinity configurations in Helm values.yaml for correct types and common patterns.

Node Scheduling Configuration

Kubernetes offers three mechanisms for controlling pod placement: nodeSelector, tolerations, and affinity. Helm charts typically expose all three in values.yaml, and misconfiguring their types is a common source of deployment failures.

Correct Types

# nodeSelector: key-value pairs (object/mapping)
nodeSelector:
  kubernetes.io/os: linux
  node-type: compute

# tolerations: list of toleration objects (array)
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"

# affinity: nested object (mapping)
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-east-1a
                - us-east-1b

What Gets Validated

Field          Expected Type     Common Mistake
nodeSelector   object/mapping    Using an array or string
tolerations    array             Using an object instead of an array
affinity       object/mapping    Wrong nesting structure
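The type checks in the table can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the validator's actual implementation, and the function name is assumed:

```python
# Minimal sketch of the scheduling-field type checks (illustrative only).
def validate_scheduling_fields(values):
    """Return a list of error strings for mistyped scheduling fields."""
    errors = []
    checks = {
        "nodeSelector": dict,  # key-value mapping
        "tolerations": list,   # array of toleration objects
        "affinity": dict,      # nested mapping
    }
    for field, expected in checks.items():
        # Absent fields are fine; empty defaults ({} / []) also pass.
        if field in values and not isinstance(values[field], expected):
            errors.append(
                f"{field}: expected {expected.__name__}, "
                f"got {type(values[field]).__name__}"
            )
    return errors
```

For example, `validate_scheduling_fields({"tolerations": {"key": "dedicated"}})` returns `["tolerations: expected list, got dict"]`, catching the object-instead-of-array mistake before `helm install` runs.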

Default Values Pattern

Most charts use empty defaults that users can override:

nodeSelector: {}
tolerations: []
affinity: {}

These empty defaults are valid and impose no scheduling constraints (note that an empty tolerations list means the pod tolerates no taints, so it will not land on tainted nodes). The validator accepts these empty defaults without warnings.
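Chart templates typically render these values conditionally, so an empty default simply omits the field from the manifest. A common pattern looks roughly like the following sketch (field placement and indentation vary by chart):

```yaml
# Illustrative pod-spec template snippet, not from any specific chart.
spec:
  {{- with .Values.nodeSelector }}
  nodeSelector:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.tolerations }}
  tolerations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.affinity }}
  affinity:
    {{- toYaml . | nindent 4 }}
  {{- end }}
```

Because `with` skips its block for empty values, `{}` and `[]` defaults produce no output at all, while user-supplied values are copied through verbatim, which is exactly why their types must be correct in values.yaml.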

When Type Errors Cause Failures

If tolerations is accidentally set as an object instead of an array:

# WRONG - will cause template error
tolerations:
  key: "dedicated"
  value: "gpu"

# CORRECT - must be an array
tolerations:
  - key: "dedicated"
    value: "gpu"

The Helm template would render a mapping where the Kubernetes API expects an array, so the deployment fails with a cryptic API server validation error.

Use Case

Configuring a Helm chart for deployment on a mixed cluster with GPU nodes, spot instances, and dedicated node pools where correct scheduling constraints prevent pods from landing on wrong nodes.
