# Kubernetes Manifest Structure: apiVersion, kind, metadata, and spec
Every Kubernetes resource is defined by four top-level fields: apiVersion, kind, metadata, and spec. apiVersion determines which API group and version handles the resource. kind specifies the resource type. metadata sets the name, namespace, labels, and annotations. spec describes the desired state. Getting any of these wrong produces obscure 'no matches for kind' or validation errors.
## The four required fields
```yaml
apiVersion: <API group>/<version>   # e.g., apps/v1, v1, batch/v1
kind: <Resource type>               # e.g., Deployment, Service, Pod
metadata:
  name: my-resource                 # required
  namespace: default                # optional, defaults to "default"
  labels:
    app: my-app
    env: production
spec:
  # resource-specific desired state
```
Every Kubernetes resource follows this structure. Together, apiVersion and kind identify the exact resource type the API server should use. metadata provides identity and routing. spec describes what you want.
## apiVersion encodes the API group and stability level: the wrong version causes "no matches for kind" errors
> **Gotcha:** Kubernetes organizes resources into API groups. Core resources (Pod, Service, ConfigMap) live in the core group, written as plain v1 with no group prefix. Workload resources (Deployment, ReplicaSet) live in the named apps/v1 group, and other resources live in groups like batch/v1 and networking.k8s.io/v1. Using apps/v1 for a Pod or v1 for a Deployment fails: the resource type doesn't exist in that API group.
## Prerequisites
- kubectl basics
- Kubernetes API groups
## Key Points
- `v1` (core group): Pod, Service, ConfigMap, Secret, PersistentVolumeClaim, ServiceAccount.
- `apps/v1`: Deployment, ReplicaSet, StatefulSet, DaemonSet.
- `batch/v1`: Job, CronJob.
- `networking.k8s.io/v1`: Ingress, NetworkPolicy.
- Use `kubectl api-resources` to list all resource types with their correct API versions.
## Common resource examples
Deployment (apps/v1):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: production
  labels:
    app: api
    version: "2.1"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api          # must match pod template labels
  template:
    metadata:
      labels:
        app: api        # pods get these labels
    spec:
      containers:
        - name: api
          image: my-api:2.1
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
```
Service (v1):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: production
spec:
  selector:
    app: api            # routes to pods with this label
  ports:
    - name: http
      port: 80          # service port (what clients connect to)
      targetPort: 8080  # container port (where traffic goes)
  type: ClusterIP       # or LoadBalancer, NodePort
```
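targetPort can also reference a named container port instead of a number. A sketch, assuming the Deployment above adds `name: http` to its containerPort entry: the Service then keeps working even if the container's port number changes, because it follows the name.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: production
spec:
  selector:
    app: api
  ports:
    - name: http
      port: 80
      targetPort: http  # matches the container port's *name*, not its number
```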
ConfigMap (v1):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
  namespace: production
data:
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
  config.yaml: |
    server:
      port: 8080
      timeout: 30s
```
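A ConfigMap does nothing on its own until a Pod consumes it. A sketch of the two common patterns, assuming the api-config ConfigMap above (the Pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-demo              # illustrative name
  namespace: production
spec:
  containers:
    - name: api
      image: my-api:2.1
      envFrom:
        - configMapRef:
            name: api-config  # LOG_LEVEL and MAX_CONNECTIONS become env vars
      volumeMounts:
        - name: config
          mountPath: /etc/api # config.yaml appears as /etc/api/config.yaml
  volumes:
    - name: config
      configMap:
        name: api-config
```

envFrom suits flat key-value settings; the volume mount suits whole config files, and file content updates in place when the ConfigMap changes (environment variables do not).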
## Labels and selectors
Labels are key-value metadata applied to resources. Selectors filter resources by label. This is how Services route traffic to Pods, and how Deployments manage ReplicaSets.
```yaml
# Service selector must match pod labels exactly
kind: Service
spec:
  selector:
    app: api            # routes to pods where labels include app=api
    env: prod           # AND env=prod

# Deployment pod template labels must match deployment selector
kind: Deployment
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api        # must include all selector labels
        env: prod       # can have additional labels
        version: "2.1"
```
A selector mismatch between a Service and its target Pods means the Service has no endpoints. Nothing fails at apply time; clients simply get connection refused or timeouts. `kubectl get endpoints <service>` reveals whether any Pods actually matched.
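A minimal sketch of the failure mode, with illustrative names: the selector below says app: api-server while the pods from the Deployment above are labeled app: api, so the Service's endpoint list stays empty.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: production
spec:
  selector:
    app: api-server     # typo: no pod carries this label -> no endpoints
  ports:
    - port: 80
      targetPort: 8080
```

`kubectl describe service api-service` shows an empty Endpoints line when this happens.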
## Annotations vs labels: when to use each
Labels and annotations are both key-value maps in metadata, but they serve different purposes:
```yaml
metadata:
  labels:
    app: api            # used by selectors, kubectl get -l, etc.
    env: production
  annotations:
    deployment.kubernetes.io/revision: "5"  # informational metadata
    kubectl.kubernetes.io/last-applied-configuration: |
      {...}
    prometheus.io/scrape: "true"            # consumed by Prometheus operator
    prometheus.io/port: "9090"
```
Labels: used for selection, filtering, and grouping. Keep them concise. Labels become part of the resource's identity — they're queried by kubectl, operators, and monitoring systems.
Annotations: arbitrary metadata not used for selection. Documentation, tooling hints, operator configuration, debug information. Annotations can be large and frequently updated without affecting resource identity.
Common annotation use cases:
- Prometheus scraping config (`prometheus.io/scrape`, `prometheus.io/port`)
- AWS ALB Ingress Controller settings (`kubernetes.io/ingress.class`)
- Deployment rollout information (`deployment.kubernetes.io/revision`)
- IAM role for service accounts on EKS (`eks.amazonaws.com/role-arn`)
## Checking available API versions
```bash
# List all resource types with their API versions
kubectl api-resources

# Output includes:
# NAME          SHORTNAMES   APIVERSION             NAMESPACED   KIND
# pods          po           v1                     true         Pod
# deployments   deploy       apps/v1                true         Deployment
# services      svc          v1                     true         Service
# jobs                       batch/v1               true         Job
# cronjobs      cj           batch/v1               true         CronJob
# ingresses     ing          networking.k8s.io/v1   true         Ingress

# Check what fields are valid for a resource
kubectl explain deployment.spec.strategy
kubectl explain pod.spec.containers
```
`kubectl explain` documents every field in the spec without leaving the terminal.
**Quiz:** You apply a manifest with `kind: Deployment` and `apiVersion: v1`. `kubectl apply` returns "no matches for kind Deployment in version v1". Why? (The manifest is syntactically valid YAML, the cluster runs Kubernetes 1.29, and kubectl is authenticated to the cluster.)

Hint: which API group contains Deployment, and what apiVersion does it require?

A. Deployment requires a specific namespace; the default namespace doesn't support Deployments.
   Incorrect. Deployments are namespaced resources and work in all namespaces, including default. The namespace isn't the issue.

B. Deployment is in the apps API group, not the core v1 group; the correct apiVersion is apps/v1.
   Correct! The Kubernetes API is organized into groups. Core resources (Pod, Service, ConfigMap) use v1, the core group. Deployment, ReplicaSet, StatefulSet, and DaemonSet are in the apps group and require apps/v1. Kubernetes has no Deployment resource registered under the v1 API group, so kubectl reports "no matches". Change apiVersion from v1 to apps/v1.

C. Deployment was removed in Kubernetes 1.29; use StatefulSet instead.
   Incorrect. Deployment is a core workload resource and has not been removed.

D. kubectl needs the --all-namespaces flag to create Deployments.
   Incorrect. kubectl apply doesn't use --all-namespaces for resource creation; that flag is for listing resources across namespaces.