The association is a two-sided thing — the AWS side links (namespace, serviceAccount) → IAM role, and the Kubernetes side is just which SA a pod runs as. There's no direct kubectl resource that shows the combined picture, but here's how to check each side:
- See what SA a running pod uses:
kubectl get pod <pod-name> -n product -o jsonpath='{.spec.serviceAccountName}'
# or for all pods at once:
kubectl get pods -n product -o custom-columns='NAME:.metadata.name,SA:.spec.serviceAccountName'
- List service accounts in the namespace:
kubectl get serviceaccount -n product
- Verify the EKS Pod Identity agent is running (the DaemonSet that provides credentials to pods):
kubectl get daemonset -n kube-system eks-pod-identity-agent
- Check the AWS side by listing all pod identity associations on the cluster:
aws eks list-pod-identity-associations --cluster-name <cluster-name> --profile <profile>
This shows every (namespace, serviceAccount) → role ARN mapping registered in AWS.
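If you want both sides in one terminal, a small loop can approximate the combined picture. This is a sketch, assuming the `<cluster-name>` and `<profile>` placeholders are filled in for your environment; note that `list-pod-identity-associations` returns only summaries, so each association has to be passed to `describe-pod-identity-association` to see its role ARN.

```shell
CLUSTER=<cluster-name>
PROFILE=<profile>

# AWS side: expand each association into (namespace, serviceAccount, roleArn)
for id in $(aws eks list-pod-identity-associations \
    --cluster-name "$CLUSTER" --profile "$PROFILE" \
    --query 'associations[].associationId' --output text); do
  aws eks describe-pod-identity-association \
    --cluster-name "$CLUSTER" --association-id "$id" --profile "$PROFILE" \
    --query 'association.[namespace,serviceAccount,roleArn]' --output text
done

# Kubernetes side: which pod runs as which service account
kubectl get pods -n product \
  -o custom-columns='NAME:.metadata.name,SA:.spec.serviceAccountName'
```

Matching the two outputs by (namespace, serviceAccount) gives you the pod → role mapping that no single kubectl resource shows.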
The full chain:
running pod (example: debug)
└─ spec.serviceAccountName: default
└─ AWS EKS pod identity association: (product, default) → iam-assume-role
└─ IAM policy: sts:AssumeRole → arn:aws:iam::xxxx:role/access-role
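The chain above can also be confirmed end-to-end from inside the pod. A sketch, using the `debug` pod from the example and assuming its image includes the AWS CLI:

```shell
# The pod identity agent injects these variables at pod start; if they're
# missing, the association didn't apply (e.g. the pod predates the
# association, in which case restart the pod).
kubectl exec -n product debug -- env | grep AWS_CONTAINER

# If credentials are wired up, this prints the assumed role's identity,
# which should correspond to the association's role (iam-assume-role here).
kubectl exec -n product debug -- aws sts get-caller-identity
```

Seeing `AWS_CONTAINER_CREDENTIALS_FULL_URI` and `AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE` in the environment is the quickest signal that the Kubernetes side picked up the AWS-side association.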