Project author: holgerson97

Project description:
Easy to use and all in one EKS Fargate only cluster terraform module.
Primary language: HCL
Project URL: git://github.com/holgerson97/terraform-eks-fargate-cluster.git
Created: 2021-03-30T19:42:29Z
Project community: https://github.com/holgerson97/terraform-eks-fargate-cluster

License: GNU General Public License v3.0


Terraform EKS Fargate Cluster

This module provides a fully functional, Fargate-only EKS cluster and is intended for getting started with EKS Fargate and/or testing workloads.

To see which variables are worth changing, look at the “Variables” section.

After the deployment, you need to edit some settings inside your cluster manually, since some configurations can’t be changed with Terraform or aren’t supported by this module at this time. You can find instructions in the “Deployed? What now?” section.

NOTE: This module deploys network resources too. This isn’t the best-practice way to provision resources; in real-world scenarios you would use a dedicated networking module to stay as flexible as possible. If you use this module in production anyway, you should pin a tagged version instead of always pulling the latest one, which may be incompatible with your configuration. The main branch is always validated and ready to use.

Requirements

Software       Version
terraform      >= 0.14.8
kubectl        >= 1.19
provider/aws   >= 3.33.0
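
A minimal version-pinning block that matches these requirements could look like this (a sketch; pin to the exact versions you test with):

  terraform {
    required_version = ">= 0.14.8"

    required_providers {
      aws = {
        source  = "hashicorp/aws"
        version = ">= 3.33.0"
      }
    }
  }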

Getting started

You can use this deployment without changing any variables. The most common changes are adjusting the VPC/subnet CIDRs or adding/removing subnets. Just copy and paste this snippet to get started.

Basic usage:

  module "eks-fargate" {
    source = "github.com/holgerson97/terraform-eks-fargate-cluster//terraform-eks-fargate-cluster"
  }

Advanced usage:

  module "eks-fargate" {
    source             = "github.com/holgerson97/terraform-eks-fargate-cluster//terraform-eks-fargate-cluster"
    eks_cluster_name   = "eks-cluster-stage"
    kubernetes_version = "1.19"
    vpc_cidr           = "10.10.0.0/16"
    public_subnet      = "10.10.1.0/24"

    private_subnets = {
      "subnet-first"  = "10.10.2.0/24",
      "subnet-second" = "10.10.3.0/24",
      "subnet-third"  = "10.10.4.0/24"
    }
  }
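
As mentioned in the note above, for production use you should pin the module source to a release tag via the ref query parameter (the tag v0.1.0 below is hypothetical):

  module "eks-fargate" {
    source = "github.com/holgerson97/terraform-eks-fargate-cluster//terraform-eks-fargate-cluster?ref=v0.1.0"
  }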

Variables

Variable                   Type    Description
vpc_cidr                   string  CIDR of the VPC where EKS is going to be deployed.
private_subnets            map     Private subnets where pods are going to be deployed.
eks_cluster_name           string  Name of the EKS cluster that is going to be deployed.
resource_name_tag_prefix   string  Default prefix for all resource names. Will be prefix-resource-type.
kubernetes_version         string  Version of Kubernetes (kubelet) that is going to be deployed.
kubernetes_network_cidr    string  Pod CIDR for the Kubernetes cluster.
kubernetes_cluster_logs    string  List of control plane components that need to have active logging.
permissions_boundary       string  ARN of the policy that is used to set the permissions boundary for the role.
eks_cluster_iam_role_name  string  Name of the EKS cluster IAM role.
fargate_iam_role_name      string  Name of the Fargate pod execution IAM role.
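
For example, the naming- and IAM-related variables can be combined with the basic usage like this (all values below are illustrative placeholders):

  module "eks-fargate" {
    source = "github.com/holgerson97/terraform-eks-fargate-cluster//terraform-eks-fargate-cluster"

    resource_name_tag_prefix  = "stage"
    eks_cluster_iam_role_name = "eks-cluster-role"
    fargate_iam_role_name     = "fargate-pod-execution-role"
    permissions_boundary      = "arn:aws:iam::111122223333:policy/boundary" # placeholder ARN
  }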

Deployed? What now?

Getting access to the cluster

Since Terraform uses an IAM user to authenticate with AWS, the Kubernetes cluster administrator role is only applied to that IAM user’s ARN. To get access, log in with the user’s credentials in the AWS CLI. Then update your kubeconfig file to get access to the cluster.

  aws eks update-kubeconfig --name <cluster-name>

NOTE: You may also need to add the region in which you deployed your EKS cluster; this depends on the default region of your AWS CLI profile (--region). If you added the IAM user credentials as a separate named profile, you may also need to specify it (--profile).
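
For example (the cluster name, region, and profile below are placeholders):

  aws eks update-kubeconfig --name eks-cluster-stage --region eu-central-1 --profile terraform-user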

After that, you can change the aws-auth config map and add your roles, groups, and users. Just run the following command.

  kubectl edit configmap -n kube-system aws-auth
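
As a rough sketch, a mapUsers entry in the ConfigMap data follows this shape (the account ID and user name are placeholders; see the AWS reference below for the full format):

  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters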

Reference: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Updating CoreDNS deployment to run on Fargate

By default, the CoreDNS deployment is configured to run on worker nodes. Since we don’t attach any worker node pools to the EKS cluster, you need to patch the deployment. Currently, it’s not possible to patch a deployment via the Kubernetes Terraform provider, so you need to do it by hand.

  kubectl patch deployment coredns \
      -n kube-system \
      --type json \
      -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'

  kubectl rollout restart -n kube-system deployment coredns
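
To verify that the restarted CoreDNS pods end up on Fargate, you can check which nodes they are scheduled on (k8s-app=kube-dns is the label EKS applies to CoreDNS):

  kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide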

Reference: https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html

Adding more Fargate profiles

In case you want to add more deployments/replica sets/… to your EKS cluster that don’t run in the kube-system namespace and/or need a different selector (e.g. labels), you need to add more Fargate profiles. You can simply add them to your root configuration.

  resource "aws_eks_fargate_profile" "<resource_name>" {
    cluster_name           = module.<name_of_this_module>.cluster_name
    fargate_profile_name   = <name_of_fargate_profile>
    pod_execution_role_arn = module.<name_of_this_module>.pod_execution_role_arn
    subnet_ids             = <subnets_where_you_want_to_deploy>

    selector {
      namespace = <namespace_selector>
    }

    tags = {
      Name = <name_of_fargate_profile>
    }

    depends_on = [ module.<name_of_this_module> ]
  }
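
For illustration, a filled-in profile for an "applications" namespace might look like this (the module name eks-fargate matches the usage snippets above; the subnet ID is a placeholder):

  resource "aws_eks_fargate_profile" "applications" {
    cluster_name           = module.eks-fargate.cluster_name
    fargate_profile_name   = "applications"
    pod_execution_role_arn = module.eks-fargate.pod_execution_role_arn
    subnet_ids             = ["subnet-0123456789abcdef0"] # placeholder subnet ID

    selector {
      namespace = "applications"
    }

    tags = {
      Name = "applications"
    }

    depends_on = [ module.eks-fargate ]
  }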

Contributing

Feel free to create pull requests.