Implementation of EKS using Terraform and CloudFormation. Fully functional templates to deploy your VPC and Kubernetes clusters together with all the essential tags. Worker nodes are part of an Auto Scaling group which consists of spot and on-demand instances.
In this repo you will find three folders: `terraform`, `terraform-aws` and `terraform-k8s`. Note that `terraform` contains configuration which uses the aws and kubernetes providers together; this makes upgrades very complicated. For this reason, please use the latest configurations, which are available in `terraform-aws` and `terraform-k8s`.
The latest configuration templates used by me can be found in `terraform-aws` for the aws provider and `terraform-k8s` for the kubernetes provider. Once you configure your environment variables in `./terraform-aws/vars` and `./terraform-k8s/vars`, you can use the Makefile commands to run your deployments (a sketch of the workflow follows the list below). The following resources will be created after applying the templates:
- VPC with public/private subnets, flow logs enabled and VPC endpoints for ECR and S3
- EKS control plane
- EKS worker nodes in private subnets (spot and on-demand instances, based on variables)
- Option to use Managed Node Groups
- Dynamic bastion host
- Automatically configured aws-auth ConfigMap so that worker nodes can join the cluster
- OpenID Connect provider which can be used to assign IAM roles to service accounts in k8s
- NodeDrainer lambda which drains worker nodes during a rolling update of the nodes (only applicable to spot worker nodes; managed node groups do not require this lambda). The node drainer lambda is maintained at https://github.com/marcincuber/tf-k8s-node-drainer
- IAM roles for service accounts such as aws-node, cluster-autoscaler, alb-ingress-controller and external-secrets (the role ARNs are used when you deploy Kubernetes addons with Service Accounts that make use of the OIDC provider)
- For spot termination handling, use aws-node-termination-handler from `k8s_templates/aws-node-termination-handler`
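A minimal sketch of the deployment workflow, assuming Makefile targets similar to the ones in the old `terraform` setup (the exact target names are assumptions, so check the Makefile in each folder):

# deploy the AWS resources first (VPC, EKS control plane, worker nodes)
$ cd terraform-aws
$ make tf-plan-test     # target names are assumptions borrowed from the old setup
$ make tf-apply-test

# then apply the kubernetes provider configuration
$ cd ../terraform-k8s
$ make tf-plan-test
$ make tf-apply-test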
Old configuration templates can be found in `terraform`. Make sure to reconfigure your backend as needed, together with the environment variables. Once you configure your environment variables in `./terraform/vars`, you can run `make tf-plan-test` or `make tf-apply-test` from the `terraform` directory. The following resources will be created after applying the templates (see the sketch after the list):
- VPC with public/private subnets, flow logs enabled and VPC endpoints for ECR
- EKS control plane
- EKS worker nodes in private subnets (spot and on-demand instances, based on variables)
- Dynamic bastion host
- Automatically configured aws-auth ConfigMap so that worker nodes can join the cluster
- OpenID Connect provider which can be used to assign IAM roles to service accounts in k8s
- NodeDrainer lambda which drains worker nodes during a rolling update of the nodes (only applicable to spot worker nodes). The node drainer lambda is maintained at https://github.com/marcincuber/tf-k8s-node-drainer
- IAM roles for service accounts such as aws-node, cluster-autoscaler, alb-ingress-controller and external-secrets (the role ARNs are used when you deploy Kubernetes addons with Service Accounts that make use of the OIDC provider)
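A minimal sketch of the old workflow (the make targets, judging by their names, wrap terraform plan and apply for the test environment):

# after reconfiguring the backend and ./terraform/vars
$ cd terraform
$ make tf-plan-test     # review the planned changes
$ make tf-apply-test    # apply them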
Amazon EKS upgrade journey from 1.17 to 1.18
Amazon EKS upgrade journey from 1.16 to 1.17
Amazon EKS upgrade journey from 1.15 to 1.16
More about my configuration can be found in the blog post I have written recently -> EKS design
Amazon EKS- RBAC with IAM access
Using OIDC provider to allow service accounts to assume IAM role
More about kube2iam configuration can be found in the blog post I have written recently -> EKS and kube2iam
Amazon EKS, setup external DNS with OIDC provider and kube2iam
Amazon EKS + managed node groups
The Terraform module written by me can be found at -> https://registry.terraform.io/modules/umotif-public/eks-node-group
Kubernetes GitLab Runners on Amazon EKS
All the templates for additional deployments/daemonsets can be found in k8s_templates.
To apply the templates, simply run `kubectl apply -f .` from the desired folder. Ensure you put the correct Role ARN in the service account configuration, and check that the environment variables are correct.
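As an example, a minimal sketch of deploying one of the addons; the folder, namespace and service account names below are assumptions, so adjust them to the actual templates. The `eks.amazonaws.com/role-arn` annotation on the Service Account is what links it to the IAM role created by the Terraform templates:

# folder, namespace and service account names are assumptions
$ cd k8s_templates/cluster-autoscaler
$ kubectl apply -f .

# the annotations should include eks.amazonaws.com/role-arn pointing at the role created by terraform
$ kubectl -n kube-system describe serviceaccount cluster-autoscaler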
EKS platforms information
Worker nodes upgrades
On the machine of a user who has been added to EKS, the .kube/config file can be configured using the following commands:
$ aws eks list-clusters
$ aws eks update-kubeconfig --name ${cluster_name}
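A quick sanity check after updating the kubeconfig:

$ kubectl get nodes    # should list the EKS worker nodes once they have joined the cluster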