Install Velero on the source cluster
Point it to an S3 bucket in the new account (so backup data isn't tied to the old account)
Enable fs-backup mode (this copies PV data at the file level to S3, bypassing KMS-encrypted EBS snapshots entirely)
Velero backs up all Kubernetes resources (deployments, services, secrets, configmaps, CRDs, PVCs, etc.) automatically
Velero backs up all persistent volume data via file-level copy to S3
No manual kubectl get exports needed — Velero handles it
ECR images — replicate to new account's ECR
IAM roles / IRSA mappings — recreate in new account
aws-auth configmap — document and recreate with new account IAM ARNs
External dependencies (RDS, ElastiCache endpoints, etc.) — update references
Point to the same S3 bucket
Velero restores all resources and volume data onto the new cluster
Update ECR image references to new account
Update IRSA role ARNs
Test everything
# Install velero (primary backup tool for k8s)
# https://velero.io/
# First, capture the cluster config itself
aws eks describe-cluster --name <cluster-name> --output json > cluster-config.json
# Export all node group configurations
aws eks list-nodegroups --cluster-name <cluster-name> --output json
aws eks describe-nodegroup --cluster-name <cluster-name> \
--nodegroup-name <nodegroup-name> --output json > nodegroup-config.json
# Dump ALL namespaced resources (the brute force but thorough way)
kubectl get all --all-namespaces -o yaml > all-resources.yaml
# But that misses a lot — do this instead:
# Get every API resource type and export them
kubectl api-resources --verbs=list --namespaced -o name | \
xargs -n 1 -I {} sh -c 'kubectl get {} --all-namespaces -o yaml > {}.yaml 2>/dev/null'
# Get cluster-scoped resources too
kubectl api-resources --verbs=list --namespaced=false -o name | \
xargs -n 1 -I {} sh -c 'kubectl get {} -o yaml > cluster-{}.yaml 2>/dev/null'
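# NOTE: these raw dumps carry server-generated fields (uid, resourceVersion,
# creationTimestamp, status) that block a clean re-apply. A sketch of stripping
# them with yq (v4 syntax assumed) before reuse:
yq -i 'del(.items[].metadata.uid) | del(.items[].metadata.resourceVersion) |
       del(.items[].metadata.creationTimestamp) | del(.items[].status)' all-resources.yaml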
# Critical items people forget:
kubectl get configmaps --all-namespaces -o yaml > configmaps.yaml
kubectl get secrets --all-namespaces -o yaml > secrets.yaml
kubectl get pv -o yaml > persistent-volumes.yaml
kubectl get pvc --all-namespaces -o yaml > persistent-volume-claims.yaml
kubectl get storageclass -o yaml > storage-classes.yaml
kubectl get ingress --all-namespaces -o yaml > ingresses.yaml
kubectl get crds -o yaml > crds.yaml
kubectl get clusterroles -o yaml > clusterroles.yaml
kubectl get clusterrolebindings -o yaml > clusterrolebindings.yaml
# Identify all EBS volumes attached to PVs
kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.csi.volumeHandle}{"\n"}{end}'
# Snapshot each volume
for vol_id in $(kubectl get pv -o jsonpath='{.items[*].spec.csi.volumeHandle}'); do
aws ec2 create-snapshot \
--volume-id "$vol_id" \
--description "EKS migration - $(date +%Y%m%d)" \
--tag-specifications "ResourceType=snapshot,Tags=[{Key=Migration,Value=eks-backup}]"
done
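# create-snapshot returns immediately; a sketch of blocking until the
# snapshots finish before sharing them (the snapshot IDs are hypothetical):
aws ec2 wait snapshot-completed --snapshot-ids snap-xxxxx
# Or check everything tagged by the loop above:
aws ec2 describe-snapshots \
  --filters "Name=tag:Migration,Values=eks-backup" \
  --query 'Snapshots[].[SnapshotId,State,Progress]' --output table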
This is the most important step since your KMS key is going away:
# For EACH snapshot encrypted with the old KMS key:
# 1. Share snapshot with new account (if still encrypted with old key)
aws ec2 modify-snapshot-attribute \
--snapshot-id snap-xxxxx \
--attribute createVolumePermission \
--operation-type add \
--user-ids <new-account-id>
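# Sharing the snapshot alone is not enough: the new account also needs
# permission to use the OLD KMS key to read it. A sketch using a grant
# (a key-policy statement works too); the key ID is a placeholder:
aws kms create-grant \
  --key-id <old-kms-key-id> \
  --grantee-principal arn:aws:iam::<new-account-id>:root \
  --operations Decrypt DescribeKey CreateGrant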
# 2. FROM THE NEW ACCOUNT - Copy and re-encrypt with new KMS key
aws ec2 copy-snapshot \
--source-region <region> \
--source-snapshot-id snap-xxxxx \
--kms-key-id arn:aws:kms:<region>:<new-account-id>:key/<new-key-id> \
--encrypted \
--description "Re-encrypted EKS migration snapshot"
# ALTERNATIVELY: create volumes from the shared snapshots (this still requires
# access to the old key), copy the data at the file level onto fresh volumes
# encrypted with the new key, then snapshot those
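# Once the re-encrypted copy exists in the new account, you can carve volumes
# from it for the new cluster; a sketch (IDs and AZ are placeholders):
aws ec2 create-volume \
  --snapshot-id snap-yyyyy \
  --availability-zone <az> \
  --encrypted --kms-key-id <new-kms-key-id> \
  --tag-specifications "ResourceType=volume,Tags=[{Key=Migration,Value=eks-restore}]"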
# Use AWS DataSync or EFS-to-EFS backup
# Create a DataSync task from old account EFS to new account EFS
aws datasync create-task \
--source-location-arn arn:aws:datasync:<region>:<old-account-id>:location/loc-xxx \
--destination-location-arn arn:aws:datasync:<region>:<new-account-id>:location/loc-xxx \
--name "eks-efs-migration"
# Create S3 bucket in NEW account
aws s3 mb s3://eks-migration-backups-<new-account-id> --region <region>
# Create cross-account access policy so old account can write to new bucket
cat <<EOF > bucket-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam:::root"
},
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::eks-migration-backups-",
"arn:aws:s3:::eks-migration-backups-/*"
]
}
]
}
EOF
aws s3api put-bucket-policy \
--bucket eks-migration-backups-<new-account-id> \
--policy file://bucket-policy.json
# Install Velero on the source cluster
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.9.0 \
--bucket eks-migration-backups-<new-account-id> \
--backup-location-config region=<region> \
--snapshot-location-config region=<region> \
--secret-file ./velero-credentials \
--use-node-agent # needed for PV file-level backups
# Create a full backup
velero backup create full-cluster-backup \
--include-namespaces '*' \
--include-cluster-resources=true \
--snapshot-volumes=true \
--default-volumes-to-fs-backup=true \
--wait
# Verify backup completed
velero backup describe full-cluster-backup --details
velero backup logs full-cluster-backup
> --default-volumes-to-fs-backup=true is the key flag here: it uses Kopia/Restic to do file-level backups of PV contents to S3 rather than relying on EBS snapshots. That means the volume data lands in S3 untouched by the old KMS key (the Kopia/Restic repository applies its own encryption), which solves the KMS sunset problem for volume data.
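One way to confirm the file-level path actually ran (rather than quietly falling back to snapshots) is to check the PodVolumeBackup objects Velero creates, one per pod volume; the label selector here is an assumption based on Velero's standard labeling:

kubectl -n velero get podvolumebackups -l velero.io/backup-name=full-cluster-backup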
# List all images currently in use
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' | sort -u > images-in-use.txt
# If using ECR in old account — replicate to new account's ECR
while read image; do
# For ECR images in old account
if echo "$image" | grep -q "<old-account-id>.dkr.ecr"; then
repo_name=$(echo "$image" | sed 's|.*ecr.*.amazonaws.com/||' | cut -d: -f1)
tag=$(echo "$image" | cut -d: -f2)
# Create repo in new account
aws ecr create-repository --repository-name "$repo_name" \
--region <region> --profile new-account 2>/dev/null
# Pull, retag, push
docker pull "$image"
new_image="<new-account-id>.dkr.ecr.<region>.amazonaws.com/${repo_name}:${tag}"
docker tag "$image" "$new_image"
docker push "$new_image"
fi
done < images-in-use.txt
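# The pull/retag/push loop above assumes docker is already authenticated to
# BOTH registries; a sketch (the profile names are assumptions):
aws ecr get-login-password --region <region> | \
  docker login --username AWS --password-stdin <old-account-id>.dkr.ecr.<region>.amazonaws.com
aws ecr get-login-password --region <region> --profile new-account | \
  docker login --username AWS --password-stdin <new-account-id>.dkr.ecr.<region>.amazonaws.com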
# OR use ECR replication (easier for bulk)
# Configure in the OLD (source) account; replication pushes images to the new one:
aws ecr put-replication-configuration --replication-configuration '{
"rules": [{"destinations": [{"region":"<region>","registryId":"<new-account-id>"}]}]
}' --profile old-account
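# Cross-account replication also requires a registry policy in the NEW account
# that lets the old account replicate into it; a sketch:
aws ecr put-registry-policy --profile new-account --policy-text '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowReplication",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::<old-account-id>:root"},
    "Action": ["ecr:CreateRepository", "ecr:ReplicateImage"],
    "Resource": "arn:aws:ecr:<region>:<new-account-id>:repository/*"
  }]
}'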
⚠️ If secrets are encrypted with the old KMS key (EKS envelope encryption):
# Export all secrets in plaintext (k8s API server decrypts them for you)
kubectl get secrets --all-namespaces -o json > all-secrets-decrypted.json
# SECURE THIS FILE — it contains plaintext secrets
# Encrypt it immediately with a key you control
gpg --symmetric --cipher-algo AES256 all-secrets-decrypted.json
# Also backup any external secrets references
# (AWS Secrets Manager, SSM Parameter Store)
aws secretsmanager list-secrets --query 'SecretList[].Name' --output text | tr '\t' '\n' | \
xargs -I {} aws secretsmanager get-secret-value --secret-id {} \
--output json >> secrets-manager-backup.json
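# The comment above also mentions SSM Parameter Store; a sketch for those
# (SecureString values come back decrypted thanks to --with-decryption):
aws ssm describe-parameters --query 'Parameters[].Name' --output text | tr '\t' '\n' | \
  xargs -I {} aws ssm get-parameter --name {} --with-decryption \
  --output json >> ssm-parameters-backup.json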
# Export IRSA (IAM Roles for Service Accounts) mappings
kubectl get serviceaccounts --all-namespaces -o json | \
jq -r '.items[] | select(.metadata.annotations["eks.amazonaws.com/role-arn"]) |
  [.metadata.namespace, .metadata.name, .metadata.annotations["eks.amazonaws.com/role-arn"]] | @tsv' \
> irsa-mappings.txt
# Export aws-auth configmap (maps IAM to k8s RBAC)
kubectl get configmap aws-auth -n kube-system -o yaml > aws-auth-configmap.yaml
# Document OIDC provider config
aws eks describe-cluster --name <cluster-name> \
--query 'cluster.identity.oidc' --output json > oidc-config.json
# Export IAM policies attached to node roles and IRSA roles
for role_arn in $(cat irsa-mappings.txt | awk '{print $3}'); do
role_name=$(echo $role_arn | cut -d/ -f2)
aws iam list-attached-role-policies --role-name $role_name --output json
aws iam list-role-policies --role-name $role_name --output json
done > iam-policies-backup.json
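# list-*-policies only returns NAMES; a sketch of also capturing the actual
# policy documents (role/policy identifiers come from the loop above):
aws iam get-role-policy --role-name <role-name> --policy-name <policy-name> \
  --output json >> iam-inline-policy-docs.json
aws iam get-policy-version --policy-arn <policy-arn> \
  --version-id "$(aws iam get-policy --policy-arn <policy-arn> \
    --query 'Policy.DefaultVersionId' --output text)" \
  --output json >> iam-managed-policy-docs.json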
# List all EKS add-ons
aws eks list-addons --cluster-name <cluster-name> --output json > addons.json
for addon in $(aws eks list-addons --cluster-name <cluster-name> --output text --query 'addons[]'); do
aws eks describe-addon --cluster-name <cluster-name> --addon-name "$addon" --output json
done > addon-configs.json
# Backup all Helm releases
helm list --all-namespaces -o json > helm-releases.json
# For each release, get the values
for release in $(helm list --all-namespaces -o json | jq -r '.[] | "\(.name),\(.namespace)"'); do
name=$(echo $release | cut -d, -f1)
ns=$(echo $release | cut -d, -f2)
helm get values $name -n $ns -o yaml > "helm-values-${ns}-${name}.yaml"
helm get manifest $name -n $ns > "helm-manifest-${ns}-${name}.yaml"
done
# Use your cluster-config.json as reference
# Ideally use IaC (Terraform/CDK/CloudFormation)
eksctl create cluster -f new-cluster-config.yaml
# Or Terraform — recreate with the exported config as reference
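# A minimal sketch of new-cluster-config.yaml; names, version, and sizes are
# placeholders, so mirror what cluster-config.json / nodegroup-config.json say:
cat <<EOF > new-cluster-config.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <new-cluster-name>
  region: <region>
  version: "1.29"
iam:
  withOIDC: true   # needed to recreate IRSA in the new account
managedNodeGroups:
  - name: default
    instanceType: m5.large
    desiredCapacity: 3
EOF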
# Install Velero on new cluster pointing to same S3 bucket
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.9.0 \
--bucket eks-migration-backups-<new-account-id> \
--backup-location-config region=<region> \
--snapshot-location-config region=<region> \
--secret-file ./velero-credentials-new \
--use-node-agent
# Verify backup is visible
velero backup get
# Restore everything
velero restore create full-restore \
--from-backup full-cluster-backup \
--include-cluster-resources=true \
--wait
# Check restore status
velero restore describe full-restore --details
velero restore logs full-restore
# Update image references from old ECR to new ECR
find . -name "*.yaml" -exec sed -i \
"s/<old-account-id>\.dkr\.ecr/<new-account-id>\.dkr\.ecr/g" {} \;
# Recreate IRSA roles in new account with new OIDC provider
# Update service account annotations with new role ARNs
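# A sketch of the IRSA rewire, assuming eksctl and placeholder names:
eksctl utils associate-iam-oidc-provider --cluster <new-cluster-name> --region <region> --approve
# For each row in irsa-mappings.txt, recreate the role in the new account with a
# trust policy pinned to the NEW cluster's OIDC provider, then repoint the SA:
kubectl annotate serviceaccount <sa-name> -n <namespace> \
  eks.amazonaws.com/role-arn=arn:aws:iam::<new-account-id>:role/<new-role-name> \
  --overwrite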
# Update aws-auth configmap with new account IAM roles
kubectl edit configmap aws-auth -n kube-system
# Recreate any external dependencies (RDS endpoints, ElastiCache, etc.)
# Update ConfigMaps/Secrets with new endpoints
All EBS snapshots re-encrypted with new KMS key (or fs-level backed up via Velero)
All ECR images replicated to new account
Secrets exported and re-encrypted with key you own
Velero backup completed successfully with no errors
S3 backup bucket is in the NEW account
Helm values exported for all releases
IRSA role mappings documented
aws-auth configmap backed up
CRDs and custom resources exported
Test restore completed on new cluster BEFORE sunsetting old account
Application smoke tests pass on new cluster
DNS/ingress cutover plan ready