Running Containers on EKS Fargate in Private Subnets Behind an ALB
Introduction
In this guide, you’ll learn how to run containers on EKS Fargate within private subnets, securely managed behind an Application Load Balancer (ALB). We’ll cover setting up a Virtual Private Cloud (VPC), creating subnets, and deploying a sample application on AWS.
Prerequisites
To follow this tutorial, ensure you have the AWS CLI (v2), jq, and Docker installed and configured locally. kubectl, eksctl, and Helm will be installed on the bastion EC2 instance later in this guide.
Setting Up the VPC
Creating the VPC
Create a dedicated VPC with the following commands:
aws ec2 create-vpc \
--cidr-block 192.168.0.0/16 \
--tag-specifications "ResourceType=vpc,Tags=[{Key=Name,Value=eks-fargate-vpc}]"
aws ec2 modify-vpc-attribute \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--enable-dns-hostnames
If you use custom DNS domain names defined in a private hosted zone in Amazon Route 53, or use private DNS with interface VPC endpoints (AWS PrivateLink), you must set both the enableDnsHostnames and enableDnsSupport attributes to true.
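DNS support is enabled by default on newly created VPCs, so only the hostname attribute is changed above. If enableDnsSupport has been turned off for any reason, it can be re-enabled the same way (a minimal sketch using the same placeholder VPC ID):
aws ec2 modify-vpc-attribute \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--enable-dns-support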
Adding Subnets
Create private subnets for Fargate pods and a public subnet for the bastion EC2 instance.
aws ec2 create-subnet \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--availability-zone ap-northeast-1a \
--cidr-block 192.168.0.0/20 \
--tag-specifications "ResourceType=subnet,Tags=[{Key=Name,Value=eks-fargate-private-subnet-1a}]"
aws ec2 create-subnet \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--availability-zone ap-northeast-1c \
--cidr-block 192.168.16.0/20 \
--tag-specifications "ResourceType=subnet,Tags=[{Key=Name,Value=eks-fargate-private-subnet-1c}]"
aws ec2 create-subnet \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--availability-zone ap-northeast-1a \
--cidr-block 192.168.32.0/20 \
--tag-specifications "ResourceType=subnet,Tags=[{Key=Name,Value=eks-fargate-public-subnet-1a}]"
Adding Internet Gateway
To enable internet access for resources in the public subnet, create an Internet Gateway and attach it to your VPC:
aws ec2 create-internet-gateway \
--tag-specifications "ResourceType=internet-gateway,Tags=[{Key=Name,Value=igw-eks-fargate}]"
aws ec2 attach-internet-gateway \
--internet-gateway-id igw-xxxxxxxxxxxxxxxxx \
--vpc-id vpc-xxxxxxxxxxxxxxxxx
Next, create a route table and associate it with the Internet Gateway:
aws ec2 create-route-table \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--tag-specifications "ResourceType=route-table,Tags=[{Key=Name,Value=rtb-eks-fargate-public}]"
aws ec2 create-route \
--route-table-id rtb-xxxxxxxx \
--destination-cidr-block 0.0.0.0/0 \
--gateway-id igw-xxxxxxxxxxxxxxxxx
aws ec2 associate-route-table \
--route-table-id rtb-xxxxxxxx \
--subnet-id subnet-xxxxxxxxxxxxxxxxx
This setup ensures that resources in the public subnet have internet connectivity.
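If you want to confirm the default route was added, you can describe the route table (the rtb-xxxxxxxx placeholder stands for the route table ID created above):
aws ec2 describe-route-tables \
--route-table-ids rtb-xxxxxxxx \
--query 'RouteTables[0].Routes'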
Adding VPC Endpoints
To enable secure communication for an EKS private cluster, create the necessary VPC endpoints. Replace region-code with your AWS region in the commands.
Refer to the AWS Documentation for detailed information.
Required VPC Endpoints
| Type | Endpoint |
|---|---|
| Interface | com.amazonaws.region-code.ecr.api |
| Interface | com.amazonaws.region-code.ecr.dkr |
| Interface | com.amazonaws.region-code.ec2 |
| Interface | com.amazonaws.region-code.elasticloadbalancing |
| Interface | com.amazonaws.region-code.sts |
| Gateway | com.amazonaws.region-code.s3 |
The examples below use the ap-northeast-1 region.
Create a security group for the VPC endpoints:
aws ec2 create-security-group \
--description "VPC endpoints" \
--group-name eks-fargate-vpc-endpoints-sg \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--tag-specifications "ResourceType=security-group,Tags=[{Key=Name,Value=eks-fargate-vpc-endpoints-sg}]"
aws ec2 authorize-security-group-ingress \
--group-id sg-xxxxxxxxxxxxxxxxx \
--protocol tcp \
--port 443 \
--cidr 192.168.0.0/16
Create the Interface VPC Endpoints:
for name in com.amazonaws.ap-northeast-1.ecr.api com.amazonaws.ap-northeast-1.ecr.dkr com.amazonaws.ap-northeast-1.ec2 com.amazonaws.ap-northeast-1.elasticloadbalancing com.amazonaws.ap-northeast-1.sts; do \
aws ec2 create-vpc-endpoint \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--vpc-endpoint-type Interface \
--service-name $name \
--security-group-ids sg-xxxxxxxxxxxxxxxxx \
--subnet-ids subnet-xxxxxxxxxxxxxxxxx subnet-xxxxxxxxxxxxxxxxx;
done;
Create the Gateway VPC Endpoint for S3:
aws ec2 create-vpc-endpoint \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--service-name com.amazonaws.ap-northeast-1.s3 \
--route-table-ids rtb-xxxxxxxxxxxxxxxxx
By adding these endpoints, your private cluster can securely access AWS services such as ECR, S3, and Elastic Load Balancing.
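Once created, you can confirm that the endpoints have reached the available state before moving on:
aws ec2 describe-vpc-endpoints \
--filters Name=vpc-id,Values=vpc-xxxxxxxxxxxxxxxxx \
--query 'VpcEndpoints[].[ServiceName,State]' \
--output table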
Bastion EC2
To access an EKS private cluster, you can utilize a bastion EC2 instance. This bastion host allows secure interaction with your Kubernetes API server endpoint if public access is disabled.
If you have disabled public access for your cluster’s Kubernetes API server endpoint, you can only access the API server from within your VPC or a connected network.
Creating an Instance IAM Role
To enable the bastion instance to operate securely, create an IAM role and attach the AmazonSSMManagedInstanceCore managed policy for Session Manager access.
Step 1: Create an IAM Role
echo '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}' > policy.json
aws iam create-role \
--role-name eks-fargate-bastion-ec2-role \
--assume-role-policy-document file://./policy.json
Step 2: Create an Instance Profile
aws iam create-instance-profile \
--instance-profile-name eks-fargate-bastion-ec2-instance-profile
aws iam add-role-to-instance-profile \
--instance-profile-name eks-fargate-bastion-ec2-instance-profile \
--role-name eks-fargate-bastion-ec2-role
Step 3: Attach Policies to the Role
Attach the AmazonSSMManagedInstanceCore policy to allow Session Manager access:
aws iam attach-role-policy \
--role-name eks-fargate-bastion-ec2-role \
--policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
For broader permissions to set up and manage EKS, EC2, and VPC services, attach an additional policy. Refer to the AWS Service Authorization Reference for best practices on least-privilege permissions.
echo '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"cloudformation:CreateStack",
"cloudformation:DeleteStack",
"cloudformation:DescribeStacks",
"cloudformation:DescribeStackEvents",
"cloudformation:ListStacks",
"ec2:*",
"eks:*",
"iam:AttachRolePolicy",
"iam:CreateOpenIDConnectProvider",
"iam:CreateRole",
"iam:DetachRolePolicy",
"iam:DeleteOpenIDConnectProvider",
"iam:GetOpenIDConnectProvider",
"iam:GetRole",
"iam:ListPolicies",
"iam:PassRole",
"iam:PutRolePolicy",
"iam:TagOpenIDConnectProvider"
],
"Resource": "*"
}
]
}' > policy.json
aws iam put-role-policy \
--role-name eks-fargate-bastion-ec2-role \
--policy-name eks-cluster \
--policy-document file://./policy.json
Starting the Bastion EC2 Instance
Once the IAM role is configured, start the EC2 instance. Ensure that you use a valid AMI ID. Refer to the official documentation for the latest AMI details.
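If you need to look up a current AMI ID, one option is the public SSM parameter for Amazon Linux 2 (the parameter path below is the standard x86_64 one; the AMI ID in the command that follows is only an example):
aws ssm get-parameters \
--names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
--query 'Parameters[0].Value' \
--output text
Then launch the instance with the instance profile created above: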
instanceProfileRole=$( \
aws iam list-instance-profiles-for-role \
--role-name eks-fargate-bastion-ec2-role \
| jq -r '.InstanceProfiles[0].Arn')
aws ec2 run-instances \
--image-id ami-0bba69335379e17f8 \
--instance-type t2.micro \
--iam-instance-profile "Arn=$instanceProfileRole" \
--subnet-id subnet-xxxxxxxxxxxxxxxxx \
--associate-public-ip-address \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=eks-fargate-bastion-ec2}]"
The bastion EC2 instance is now ready to securely access your private EKS cluster.
Connecting to the Instance with Session Manager
To securely access the bastion EC2 instance, use AWS Session Manager. This eliminates the need for SSH key pairs and ensures secure, auditable access.
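You can connect from the EC2 console or from the CLI. A minimal sketch of a CLI session, assuming the placeholder is replaced with the instance ID returned by run-instances and the Session Manager plugin for the AWS CLI is installed locally:
aws ssm start-session \
--target i-xxxxxxxxxxxxxxxxx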
After connecting, switch to the ec2-user account using the following command:
sh-4.2$ sudo su - ec2-user
Updating AWS CLI to the Latest Version
To ensure compatibility with the latest AWS services, update the AWS CLI to its latest version:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update
Verify the installation:
aws --version
Installing kubectl
To manage your EKS cluster, install kubectl on the bastion instance. Follow these steps:
1. Download the kubectl binary for your EKS cluster version
curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.24.7/2022-10-31/bin/linux/amd64/kubectl
2. Make the binary executable
chmod +x ./kubectl
3. Add kubectl to your PATH
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
4. Verify the installation
kubectl version --short --client
Installing eksctl
Install eksctl to simplify the management of your EKS clusters:
1. Download and extract eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
2. Move the binary to a location in your PATH
sudo mv /tmp/eksctl /usr/local/bin
3. Verify the installation
eksctl version
Your bastion EC2 instance is now ready to manage and operate your EKS cluster with kubectl and eksctl installed.
EKS
Creating the EKS Cluster
Create an EKS cluster using eksctl with the --fargate option specified. This cluster will use Fargate to manage pods without requiring worker nodes.
Refer to the AWS Documentation for detailed instructions.
eksctl create cluster \
--name eks-fargate-cluster \
--region ap-northeast-1 \
--version 1.24 \
--vpc-private-subnets subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx \
--without-nodegroup \
--fargate
After creation, verify the cluster with the following command:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 20m
Troubleshooting Cluster Access
Issue 1: Credential Error
If you encounter the error below when running kubectl get svc:
Unable to connect to the server: getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1" in scheme "pkg/client/auth/exec/exec.go:62"
Update the AWS CLI to the latest version:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update
Retry the command:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 20m
Issue 2: Connection Refused
If you see the error below:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Update your Kubernetes configuration file (~/.kube/config) using the following command:
aws eks update-kubeconfig \
--region ap-northeast-1 \
--name eks-fargate-cluster
Retry the command:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 20m
Adding IAM Users and Roles
To avoid losing access to the cluster, grant access to additional IAM users or roles. By default, only the IAM entity that created the cluster has administrative access.
Refer to the official documentation for best practices.
The IAM user or role that created the cluster is the only IAM entity that has access to the cluster. Grant permissions to other IAM users or roles so they can access your cluster.
To add an IAM user to the system:masters group, use the following command:
eksctl create iamidentitymapping \
--cluster eks-fargate-cluster \
--region ap-northeast-1 \
--arn arn:aws:iam::000000000000:user/xxxxxx \
--group system:masters \
--no-duplicate-arns
This ensures that additional users or roles have administrative access to your EKS cluster.
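You can list the current mappings to verify the entry was added:
eksctl get iamidentitymapping \
--cluster eks-fargate-cluster \
--region ap-northeast-1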
Enabling Private Cluster Endpoint
Enable the private cluster endpoint to restrict Kubernetes API access to within the VPC.
aws eks update-cluster-config \
--region ap-northeast-1 \
--name eks-fargate-cluster \
--resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
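To confirm the change took effect, you can check the endpoint configuration with describe-cluster:
aws eks describe-cluster \
--name eks-fargate-cluster \
--query 'cluster.resourcesVpcConfig.{publicAccess:endpointPublicAccess,privateAccess:endpointPrivateAccess}'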
Adding Inbound Rules for HTTPS
Ensure that your Amazon EKS control plane security group allows ingress traffic on port 443 from your bastion EC2 instance.
You must ensure that your Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your bastion host.
sgId=$(aws eks describe-cluster --name eks-fargate-cluster | jq -r .cluster.resourcesVpcConfig.clusterSecurityGroupId)
aws ec2 authorize-security-group-ingress \
--group-id $sgId \
--protocol tcp \
--port 443 \
--cidr 192.168.0.0/16
Verifying Connectivity
Test the connectivity between the bastion EC2 instance and the EKS cluster:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 153m
Fargate Profile
Create a Fargate profile for your application namespace.
eksctl create fargateprofile \
--region ap-northeast-1 \
--cluster eks-fargate-cluster \
--name fargate-app-profile \
--namespace fargate-app
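To confirm the profile is active, list the cluster's Fargate profiles:
eksctl get fargateprofile \
--cluster eks-fargate-cluster \
--region ap-northeast-1 \
-o yaml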
Installing AWS Load Balancer Controller
Install the AWS Load Balancer Controller to run application containers behind an Application Load Balancer (ALB).
Creating IAM OIDC Provider
Create an IAM OIDC provider for the cluster if it does not already exist:
oidc_id=$(aws eks describe-cluster --name eks-fargate-cluster --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep $oidc_id
# If no response is returned, run the following:
eksctl utils associate-iam-oidc-provider \
--region ap-northeast-1 \
--cluster eks-fargate-cluster \
--approve
Creating IAM Service Account
1. Download the policy file for the AWS Load Balancer Controller
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json
2. Create the IAM policy
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
3. Create the IAM service account
eksctl create iamserviceaccount \
--region ap-northeast-1 \
--cluster=eks-fargate-cluster \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name "AmazonEKSLoadBalancerControllerRole" \
--attach-policy-arn=arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
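As a quick check, you can confirm the service account exists in kube-system and carries the eks.amazonaws.com/role-arn annotation pointing at the role created above:
kubectl get serviceaccount aws-load-balancer-controller \
-n kube-system \
-o yaml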
Installing Helm and Load Balancer Controller Add-on
1. Install Helm v3
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ helm version --short | cut -d + -f 1
v3.10.3
2. Install the Load Balancer Controller add-on
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set region=ap-northeast-1 \
--set vpcId=vpc-xxxxxxxxxxxxxxxxx \
--set image.repository=602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/amazon/aws-load-balancer-controller \
--set clusterName=eks-fargate-cluster \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set enableShield=false \
--set enableWaf=false \
--set enableWafv2=false
The flags enableShield=false, enableWaf=false, and enableWafv2=false are added to the command because VPC endpoints for these services are not currently provided. For more information, please refer to the official documentation.
When deploying it, you should use command line flags to set enable-shield, enable-waf, and enable-wafv2 to false. Certificate discovery with hostnames from Ingress objects isn’t supported. This is because the controller needs to reach AWS Certificate Manager, which doesn’t have a VPC interface endpoint.
3. Verify the deployment
$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 2/2 2 2 105s
With the AWS Load Balancer Controller installed, your application containers are ready to run securely behind an Application Load Balancer.
Tagging Subnets
Tag the private subnets to indicate their use for internal load balancers. This is required for Kubernetes and the AWS Load Balancer Controller to identify the subnets correctly.
aws ec2 create-tags \
--resources subnet-xxxxxxxxxxxxxxxxx subnet-xxxxxxxxxxxxxxxxx \
--tags Key=kubernetes.io/role/internal-elb,Value=1
Refer to the AWS Documentation for additional details.
Must be tagged in the following format. This is so that Kubernetes and the AWS load balancer controller know that the subnets can be used for internal load balancers.
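You can verify that the tag was applied to both private subnets:
aws ec2 describe-subnets \
--subnet-ids subnet-xxxxxxxxxxxxxxxxx subnet-xxxxxxxxxxxxxxxxx \
--query 'Subnets[].Tags'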
Deploying a Sample Application
FastAPI Sample Application
This guide uses FastAPI to create a simple API for demonstration purposes.
Directory Structure
Organize the files as follows:
/
├── src
│ ├── __init__.py
│ ├── main.py
│ └── requirements.txt
└── Dockerfile
requirements.txt
Define the necessary dependencies for the application:
anyio==3.6.2
click==8.1.3
fastapi==0.88.0
h11==0.14.0
httptools==0.5.0
idna==3.4
pydantic==1.10.2
python-dotenv==0.21.0
PyYAML==6.0
sniffio==1.3.0
starlette==0.22.0
typing_extensions==4.4.0
uvicorn==0.20.0
uvloop==0.17.0
watchfiles==0.18.1
websockets==10.4
main.py
Create a basic API endpoint:
from fastapi import FastAPI
app = FastAPI()
@app.get('/')
def read_root():
return {'message': 'Hello world!'}
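If you want to try the API locally before building the image, a minimal sketch, assuming Python 3.10 and the packages from requirements.txt are installed:
pip install -r src/requirements.txt
uvicorn main:app --app-dir src --port 8000
# In another terminal:
curl http://127.0.0.1:8000/
# {"message":"Hello world!"}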
Dockerfile
Define the Dockerfile to build the application container:
FROM python:3.10-alpine@sha256:d8a484baabf7d2337d34cdef6730413ea1feef4ba251784f9b7a8d7b642041b3
COPY ./src ./
RUN pip install --no-cache-dir -r requirements.txt
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
Pushing the Image to ECR
Build and push the application image to Amazon ECR:
1. Create an ECR repository
aws ecr create-repository --repository-name api
2. Retrieve the repository URI
uri=$(aws ecr describe-repositories | jq -r '.repositories[] | select(.repositoryName == "api") | .repositoryUri')
3. Authenticate Docker to ECR
aws ecr get-login-password --region ap-northeast-1 | docker login --username AWS --password-stdin 000000000000.dkr.ecr.ap-northeast-1.amazonaws.com
4. Build, tag, and push the image
docker build -t api .
docker tag api:latest $uri:latest
docker push $uri:latest
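You can confirm the image landed in the repository before deploying:
aws ecr describe-images \
--repository-name api \
--query 'imageDetails[].imageTags'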
Deploying to Fargate
1. Create a Kubernetes manifest file fargate-app.yaml. Replace 000000000000.dkr.ecr.ap-northeast-1.amazonaws.com/api:latest with the actual image URI.
---
apiVersion: v1
kind: Namespace
metadata:
name: fargate-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: fargate-app-deployment
namespace: fargate-app
labels:
app: api
spec:
replicas: 1
selector:
matchLabels:
app: api
template:
metadata:
labels:
app: api
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
containers:
- name: api
image: 000000000000.dkr.ecr.ap-northeast-1.amazonaws.com/api:latest
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
nodeSelector:
kubernetes.io/os: linux
---
apiVersion: v1
kind: Service
metadata:
name: fargate-app-service
namespace: fargate-app
labels:
app: api
spec:
selector:
app: api
ports:
- protocol: TCP
port: 80
targetPort: 80
type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: fargate-app-ingress
namespace: fargate-app
annotations:
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
spec:
ingressClassName: alb
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: fargate-app-service
port:
number: 80
For more information about the AWS Load Balancer Controller v2.4 specification, refer to the official documentation.
2. Apply the manifest file
kubectl apply -f fargate-app.yaml
3. Verify the deployed resources
$ kubectl get all -n fargate-app
NAME READY STATUS RESTARTS AGE
pod/fargate-app-deployment-6db55f9b7b-4hp8z 1/1 Running 0 55s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/fargate-app-service NodePort 10.100.190.97 <none> 80:31985/TCP 6m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/fargate-app-deployment 1/1 1 1 6m
NAME DESIRED CURRENT READY AGE
replicaset.apps/fargate-app-deployment-6db55f9b7b 1 1 1 6m
Testing the API
1. Retrieve the DNS name of the ALB
kubectl describe ingress -n fargate-app fargate-app-ingress
Example output:
Name: fargate-app-ingress
Labels: <none>
Namespace: fargate-app
Address: internal-k8s-fargatea-fargatea-0579eb4ce2-1731550123.ap-northeast-1.elb.amazonaws.com
Ingress Class: alb
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/ fargate-app-service:80 (192.168.4.97:80)
Annotations: alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfullyReconciled 4m17s ingress Successfully reconciled
2. Test the API endpoint
curl internal-k8s-fargatea-fargatea-xxxxxxxxxx-xxxxxxxxxx.ap-northeast-1.elb.amazonaws.com
{"message":"Hello world!"}
Your sample FastAPI application is now deployed and accessible through an internal Application Load Balancer.
Deleting the EKS Cluster
If you no longer require the EKS cluster or its associated resources, you can delete them using the steps outlined below. Be mindful of costs associated with VPC endpoints and other provisioned resources.
Steps to Delete the Cluster
1. Delete Application Resources
Remove the deployed application and uninstall the AWS Load Balancer Controller:
kubectl delete -f fargate-app.yaml
helm uninstall aws-load-balancer-controller -n kube-system
2. Detach IAM Policies
Retrieve the ARN of the AWSLoadBalancerControllerIAMPolicy and detach it:
arn=$(aws iam list-policies --scope Local \
| jq -r '.Policies[] | select(.PolicyName == "AWSLoadBalancerControllerIAMPolicy").Arn')
aws iam detach-role-policy \
--role-name AmazonEKSLoadBalancerControllerRole \
--policy-arn $arn
3. Delete IAM Service Account
Delete the service account associated with the AWS Load Balancer Controller:
eksctl delete iamserviceaccount \
--region ap-northeast-1 \
--cluster eks-fargate-cluster \
--namespace kube-system \
--name aws-load-balancer-controller
4. Delete Fargate Profiles
Remove Fargate profiles created during the setup:
aws eks delete-fargate-profile \
--cluster-name eks-fargate-cluster \
--fargate-profile-name fargate-app-profile
aws eks delete-fargate-profile \
--cluster-name eks-fargate-cluster \
--fargate-profile-name fp-default
5. Detach Pod Execution Role Policy
Retrieve and detach the AmazonEKSFargatePodExecutionRolePolicy:
arn=$(aws iam list-policies --scope AWS \
| jq -r '.Policies[] | select(.PolicyName == "AmazonEKSFargatePodExecutionRolePolicy").Arn')
aws iam detach-role-policy \
--role-name eksctl-eks-fargate-cluster-FargatePodExecutionRole-xxxxxxxxxxxxx \
--policy-arn $arn
6. Delete the EKS Cluster
Use eksctl
to delete the cluster:
eksctl delete cluster \
--region ap-northeast-1 \
--name eks-fargate-cluster
Troubleshooting Deletion Issues
If you encounter issues when deleting the AWS Load Balancer Controller ingress, you may need to remove its finalizers manually:
kubectl patch ingress fargate-app-ingress -n fargate-app -p '{"metadata":{"finalizers":[]}}' --type=merge
This command ensures that Kubernetes can finalize the ingress resource for deletion.
Conclusion
In this blog post, we explored the complete process of deploying containers on EKS Fargate within private subnets secured behind an Application Load Balancer (ALB). From setting up the VPC infrastructure and deploying a sample FastAPI application to leveraging the AWS Load Balancer Controller, this guide provides a detailed walkthrough for building a scalable and secure Kubernetes environment.
This setup not only simplifies application deployment on AWS but also ensures adherence to cloud security best practices. By following these steps, you can harness the full potential of AWS-managed Kubernetes while maintaining control and minimizing operational overhead.
Happy Coding! 🚀