Kubeconfig and EKS Authentication: How aws eks get-token Works
Kubeconfig connects kubectl to your cluster. For EKS, authentication runs through AWS STS: kubectl calls aws eks get-token, which wraps a short-lived STS presigned URL in a bearer token that the cluster validates. Understanding this chain makes nearly every 'Unauthorized' error you'll encounter straightforward to diagnose.
The kubeconfig file structure
Kubeconfig is a YAML file at ~/.kube/config (or specified via KUBECONFIG env var) that tells kubectl how to connect to and authenticate with Kubernetes clusters. A single kubeconfig can contain multiple clusters, users, and contexts.
apiVersion: v1
kind: Config
current-context: production
clusters:
- name: production
  cluster:
    server: https://ABCDEF1234567890.gr7.us-east-1.eks.amazonaws.com
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...
users:
- name: arn:aws:eks:us-east-1:123456789012:cluster/production
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - eks
      - get-token
      - --cluster-name
      - production
      - --region
      - us-east-1
contexts:
- name: production
  context:
    cluster: production
    user: arn:aws:eks:us-east-1:123456789012:cluster/production
    namespace: default
The exec credential plugin is where EKS auth differs from standard Kubernetes. Instead of a static token or client certificate, kubectl runs an external command (aws eks get-token) to get a fresh token on each request.
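A minimal Python sketch of that exec credential protocol, using a stub subprocess in place of aws eks get-token (the token and timestamp are fake values for illustration):

```python
import json
import subprocess
import sys

# Stub standing in for "aws eks get-token": prints an ExecCredential
# document in the same shape the real command emits (values are fake).
stub_output = json.dumps({
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "status": {
        "expirationTimestamp": "2030-01-01T00:14:00Z",
        "token": "k8s-aws-v1.EXAMPLE",
    },
})

# kubectl runs the configured command and parses its stdout as JSON.
proc = subprocess.run(
    [sys.executable, "-c", f"print({stub_output!r})"],
    capture_output=True, text=True, check=True,
)
cred = json.loads(proc.stdout)

# Every API request then carries status.token as a bearer token; when
# expirationTimestamp passes, kubectl re-runs the command for a fresh one.
header = f"Bearer {cred['status']['token']}"
print(header)  # Bearer k8s-aws-v1.EXAMPLE
```

Because the token is fetched on demand, nothing secret is stored in the kubeconfig itself; revoking the IAM identity revokes cluster access.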
How EKS authentication works: STS presigned URLs
aws eks get-token generates an STS presigned URL for the GetCallerIdentity API call. The URL is base64-encoded and passed as a bearer token. The EKS API server sends this token to the AWS IAM authenticator, which calls STS to validate it and returns the caller's IAM identity. That identity is then mapped to a Kubernetes username via aws-auth or access entries.
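The token format itself is easy to inspect: the prefix k8s-aws-v1. followed by the base64url-encoded presigned URL with padding stripped. A sketch using a stand-in URL (not a real signed request):

```python
import base64

# Stand-in presigned STS URL; a real one carries full SigV4 query parameters.
presigned = (
    "https://sts.us-east-1.amazonaws.com/"
    "?Action=GetCallerIdentity&Version=2011-06-15"
    "&X-Amz-Signature=EXAMPLE"
)

# Encode the way aws eks get-token does: base64url, padding stripped.
token = "k8s-aws-v1." + base64.urlsafe_b64encode(presigned.encode()).decode().rstrip("=")

# The validating side strips the prefix, restores padding, and decodes,
# recovering the URL it will call to ask STS who signed the request.
payload = token[len("k8s-aws-v1."):]
payload += "=" * (-len(payload) % 4)
decoded = base64.urlsafe_b64decode(payload).decode()

assert decoded == presigned
```

Decoding a real token this way is a handy debugging trick: the query string shows exactly which credentials and region signed the request.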
Prerequisites
- AWS STS
- IAM roles
- Kubernetes RBAC
- Bearer tokens
Key Points
- Tokens expire after 15 minutes — kubectl re-runs aws eks get-token automatically when needed.
- The token encodes the IAM identity making the request, not a Kubernetes user directly.
- aws-auth ConfigMap (or EKS access entries) maps IAM ARN → Kubernetes username/groups.
- If your IAM role isn't in aws-auth, you get 'Unauthorized' even with valid AWS credentials.
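As a concrete illustration of that mapping, an aws-auth entry granting a role admin access might look like this (the role ARN, username, and group are placeholders for your own values):

```yaml
# Illustrative aws-auth ConfigMap; lives in kube-system.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/EKSAdminRole
      username: eks-admin
      groups:
        - system:masters
```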
Generating kubeconfig for EKS
# Add cluster to kubeconfig (creates or updates ~/.kube/config)
aws eks update-kubeconfig \
--name production \
--region us-east-1
# With a specific IAM role (cross-account or assuming a different role for EKS access)
aws eks update-kubeconfig \
--name production \
--region us-east-1 \
--role-arn arn:aws:iam::123456789012:role/EKSAdminRole
# Verify the connection
kubectl auth whoami # Kubernetes 1.28+
kubectl get nodes
When --role-arn is specified, the kubeconfig exec section adds --role-arn to the aws eks get-token command. kubectl will assume that role via STS before generating the token.
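For illustration, the resulting exec args gain two extra items; this is a sketch of the generated kubeconfig fragment, and the exact layout can vary by AWS CLI version:

```yaml
# Sketch of the exec section written by update-kubeconfig --role-arn;
# the role ARN is the example from above.
exec:
  apiVersion: client.authentication.k8s.io/v1beta1
  command: aws
  args:
  - eks
  - get-token
  - --cluster-name
  - production
  - --region
  - us-east-1
  - --role-arn
  - arn:aws:iam::123456789012:role/EKSAdminRole
```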
Working with multiple clusters
Keep multiple clusters in a single kubeconfig or use separate files with KUBECONFIG:
# Switch context (cluster + user + namespace combination)
kubectl config use-context staging
# List all available contexts
kubectl config get-contexts
# Run a single command against a different context
kubectl --context=staging get pods
# Merge multiple kubeconfig files
KUBECONFIG=~/.kube/config:~/.kube/staging-config kubectl config view --merge --flatten > ~/.kube/merged-config
kubectx and kubens are commonly used wrappers for faster context and namespace switching:
# kubectx: switch cluster context
kubectx production
kubectx - # switch back to previous context
# kubens: switch default namespace
kubens kube-system
IAM Roles for Service Accounts (IRSA): pod-level IAM
Standard Kubernetes doesn't have AWS IAM integration. Pods running on EC2 nodes inherit the node's IAM role — all pods on the same node share the same AWS credentials. IRSA gives each pod its own IAM role with minimal permissions.
IRSA uses OIDC: EKS exposes an OIDC provider. A service account annotated with a role ARN gets a projected token in the pod. The AWS SDK exchanges this token for temporary credentials via STS AssumeRoleWithWebIdentity.
# Associate OIDC provider with cluster
eksctl utils associate-iam-oidc-provider \
--cluster production \
--approve
# Create IAM role with trust policy for the service account
eksctl create iamserviceaccount \
--name s3-reader \
--namespace default \
--cluster production \
--attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
--approve
In Terraform:
data "aws_iam_openid_connect_provider" "eks" {
  url = aws_eks_cluster.main.identity[0].oidc[0].issuer
}

resource "aws_iam_role" "s3_reader" {
  name = "eks-s3-reader"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = data.aws_iam_openid_connect_provider.eks.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          # Condition keys use the issuer host/path without the "https://" scheme.
          "${replace(data.aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub" = "system:serviceaccount:default:s3-reader"
        }
      }
    }]
  })
}
The pod uses the service account, and the AWS SDK automatically picks up the projected token — no credential configuration in application code.
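To make the wiring concrete, here is an illustrative sketch of the Kubernetes side of IRSA; the annotation value matches the example role above, and the image and command are placeholders. EKS's pod identity webhook injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables into pods using the annotated service account, which is how the SDK finds the token:

```yaml
# The annotation links the service account to the IAM role;
# the pod only needs to reference the service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/eks-s3-reader
---
apiVersion: v1
kind: Pod
metadata:
  name: s3-app
spec:
  serviceAccountName: s3-reader
  containers:
  - name: app
    image: amazon/aws-cli
    command: ["aws", "s3", "ls"]
```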
Debugging authentication failures
Common EKS authentication errors and their causes:
error: You must be logged in to the server (Unauthorized)
- Your IAM identity is not in aws-auth ConfigMap (or EKS access entries)
- You're assuming the wrong IAM role (check with aws sts get-caller-identity)
- The aws-auth entry has a typo in the role ARN
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
- Outdated AWS CLI or aws-iam-authenticator. Update to v1beta1 and a recent AWS CLI version.
Unable to connect to the server: dial tcp: lookup ... no such host
- Private endpoint-only cluster accessed from outside VPC
- Wrong cluster name in kubeconfig
# Diagnose: what IAM identity is kubectl using?
aws sts get-caller-identity
# Diagnose: test a token directly
TOKEN=$(aws eks get-token --cluster-name production --region us-east-1 --output json | jq -r '.status.token')
curl -k -H "Authorization: Bearer $TOKEN" https://YOUR_EKS_ENDPOINT/api/v1/nodes
# View current aws-auth ConfigMap
kubectl get configmap aws-auth -n kube-system -o yaml
# View EKS access entries (new API)
aws eks list-access-entries --cluster-name production
Quiz: A CI/CD pipeline uses an IAM role to deploy to EKS. The pipeline runs `aws eks update-kubeconfig`, then `kubectl apply`. It fails with 'Unauthorized'. The IAM role has AmazonEKSClusterPolicy. Running kubectl as your personal IAM user works fine. What is the likely cause?

Setup: The cluster was created by your personal IAM user. The pipeline uses an assumed IAM role (arn:aws:iam::123456789012:role/CICDRole). The cluster endpoint is public.

A. AmazonEKSClusterPolicy doesn't include permission to call eks:DescribeCluster
Incorrect. AmazonEKSClusterPolicy is attached to the cluster's service role, not the caller. For kubectl access, the issue is Kubernetes authorization, not a missing IAM policy.

B. The CI/CD role is not in aws-auth ConfigMap — the cluster creator's IAM user is automatically an admin, but other IAM identities must be explicitly added
Correct! EKS automatically grants the IAM identity that created the cluster full admin access. No other IAM identity has access until explicitly added. The CI/CD role needs an entry in aws-auth: {rolearn: 'arn:aws:iam::123456789012:role/CICDRole', username: 'cicd', groups: ['system:masters']} (or a more restrictive group). Alternatively, use EKS access entries to associate the role with an access policy.

C. The kubeconfig generated by update-kubeconfig uses the wrong token expiry
Incorrect. EKS tokens expire after 15 minutes but are refreshed automatically. Token expiry doesn't cause the initial Unauthorized error.

D. The CI/CD role needs the eks:GetToken permission
Incorrect. eks:GetToken is needed to call the EKS API to generate tokens, but even with a valid token, if the IAM identity isn't in aws-auth, Kubernetes returns Unauthorized.

Hint: Who can access the cluster by default, and how do you grant access to other IAM identities?