
pods

Uses Terraform and Helm to create cloud infrastructure and deploy applications.


Prerequisites

  • awscli
  • kubectl
  • eksctl
  • Required IAM permissions
  • IAM Role (AWS service) - Cluster Service Role

Create IAM user and Access Key

  • Create an IAM user with the AdministratorAccess permission policy
  • Create an Access Key and download the .csv (Access Key + Secret Key)

Install awscli

  • Windows

    • https://awscli.amazonaws.com/AWSCLIV2.msi
  • Linux

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

aws --version
  • Configure aws cli
# Configure the access key, secret key, default region, etc.
aws configure --profile [PROFILE_NAME]
aws configure --profile terraform

# To verify the user identity
aws sts get-caller-identity

Install kubectl

  • Windows
# Download kubectl for Kubernetes 1.30 on Windows
curl.exe -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.30.0/2024-05-12/bin/windows/amd64/kubectl.exe

# When first installing kubectl, it isn't yet configured to communicate with any server. 
kubectl version --client
  • Linux
# Note: this URL is the arm64 build; for x86_64 systems use .../bin/linux/amd64/kubectl
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.30.0/2024-05-12/bin/linux/arm64/kubectl

chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
kubectl version --client

  • When first installing kubectl, it isn't yet configured to communicate with any server.
  • After the cluster is created:
aws eks update-kubeconfig --region ap-northeast-2 --name my-cluster --profile terraform

Install eksctl

  • https://eksctl.io/installation/

  • Windows

    • Download https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_windows_amd64.zip
    • Make sure to unzip the archive to a folder in the PATH variable,
      • or move the unzipped folder into C:\Program Files
      • and edit the Windows PATH environment variable (System variables > Path) to include C:\Program Files\eksctl_Windows_amd64

  • Linux
    • install-eksctl.sh
# for ARM systems, set ARCH to: `arm64`, `armv6` or `armv7`
ARCH=amd64
PLATFORM=$(uname -s)_$ARCH

curl -sLO "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$PLATFORM.tar.gz"

# (Optional) Verify checksum
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_checksums.txt" | grep $PLATFORM | sha256sum --check

tar -xzf eksctl_$PLATFORM.tar.gz -C /tmp && rm eksctl_$PLATFORM.tar.gz

sudo mv /tmp/eksctl /usr/local/bin

Create EKS cluster

Create the EKS cluster using one of the following:

eksctl

  • eksctl uses CloudFormation under the hood
eksctl create cluster --name my-cluster --region ap-northeast-2
# eksctl create cluster --name my-cluster --region ap-northeast-2 --profile [PROFILE_NAME]
# eksctl delete cluster --name my-cluster --region ap-northeast-2

terraform

https://antonputra.com/terraform/how-to-create-eks-cluster-using-terraform/#create-private-and-public-subnets-in-aws-using-terraform

  • VPC, Subnet, igw, nat, route table
  • IAM role with assume-role-policy
    • attach the required Amazon EKS IAM managed policy to it.
  • Attach AmazonEKSClusterPolicy policy to IAM role: 6-eks.tf
  • TODO: Add-Ons

https://developer.hashicorp.com/terraform/install

  • Download terraform.exe
  • System variables > add to Path: C:\Program Files\terraform_1.8.5_windows_amd64

# Navigate to your Terraform configuration directory
cd path/to/your/terraform/configuration

# Initialize Terraform
terraform init

# Validate the configuration
terraform validate

# Format the Configuration
terraform fmt

# Plan the deployment
terraform plan

# Apply the configuration
terraform apply

# With DEBUGGING enabled
TF_LOG=DEBUG terraform apply

# Destroy
# terraform destroy

Check add-on compatibility

eksctl create iamserviceaccount --name aws-node --namespace kube-system \
    --cluster my-cluster --profile terraform --role-name eks-cni-role \
    --role-only --attach-policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
    --approve
  • https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html#workloads-add-ons-available-eks
  • https://docs.aws.amazon.com/eks/latest/userguide/managing-add-ons.html#addon-compat

If an add-on requires IAM permissions, then you must have an IAM OpenID Connect (OIDC) provider for your cluster.

aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version 1.30 --profile terraform | grep addonVersion
    # "addonVersion": "v1.18.2-eksbuild.1",
aws eks describe-addon-versions --addon-name coredns --kubernetes-version 1.30 --profile terraform | grep addonVersion
    # "addonVersion": "v1.11.1-eksbuild.9",
aws eks describe-addon-versions --addon-name kube-proxy --kubernetes-version 1.30 --profile terraform | grep addonVersion
    # "addonVersion": "v1.30.0-eksbuild.3",

aws eks describe-addon-versions --addon-name eks-pod-identity-agent --kubernetes-version 1.30 --profile terraform
    # "v1.3.0-eksbuild.1"
  • Check vpc-cni
# Check `aws-node` serviceaccount is associated with IAM role with AmazonEKS_CNI_Policy policy attached
k describe sa aws-node -n kube-system
    # Name:                aws-node
    # Namespace:           kube-system
    # Labels:              app.kubernetes.io/instance=aws-vpc-cni
    #                     app.kubernetes.io/managed-by=Helm
    #                     app.kubernetes.io/name=aws-node
    #                     app.kubernetes.io/version=v1.18.2
    #                     helm.sh/chart=aws-vpc-cni-1.18.2
    #                     k8s-app=aws-node
    # Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::094833749257:role/eks-cni-role
    # Image pull secrets:  <none>
    # Mountable secrets:   <none>
    # Tokens:              <none>
    # Events:              <none>

k describe sa aws-load-balancer-controller -n kube-system
  • Check whether each add-on was created
# Check the nodes have enough resources to schedule CoreDNS pods
kubectl describe nodes

# 1. vpc-cni
kubectl get daemonset aws-node -n kube-system

# 2. coredns
kubectl get deployment coredns -n kube-system

# 3. kube-proxy
kubectl get daemonset kube-proxy -n kube-system

Create add-on

https://docs.aws.amazon.com/eks/latest/userguide/eks-networking-add-ons.html

  • Prerequisites for vpc-cni

  • Create an OIDC provider for your EKS cluster.

    • With OIDC, you can associate an IAM role with a service account. This means that the Pods associated with that service account can assume the IAM role and access AWS services.
    • Create an IAM OIDC provider for your cluster.
    • Configure a Kubernetes service account to assume an IAM role
  • Create an IAM role with the required policy for the vpc-cni addon.

  • Attach the IAM role to the vpc-cni addon.

To access the Kubernetes API server you need an authentication token. Containers authenticate using a service account (service accounts represent pod identity). When a pod is created, the default service account is associated with it unless another is specified; that service account carries a token used to authenticate with the Kubernetes API server.

In simpler terms, EKS OIDC enables you to associate AWS IAM roles with Kubernetes service accounts, allowing Pods in your cluster to assume those roles and access AWS services securely.
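As a quick illustration of pod identity (a minimal sketch; the probe pod name is arbitrary), every container gets the token of its service account mounted at a well-known path:

# Inspect the default service account
kubectl get serviceaccount default -o yaml

# Run a throwaway pod and look at the mounted token
kubectl run probe --image=busybox --restart=Never -- sleep 3600
kubectl exec probe -- ls /var/run/secrets/kubernetes.io/serviceaccount
    # ca.crt  namespace  token
kubectl delete pod probe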

// ... existing code ...

resource "aws_iam_openid_connect_provider" "oidc_provider" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.aws_eks_cluster.my-cluster.identity[0].oidc[0].thumbprint]
  url             = data.aws_eks_cluster.my-cluster.identity[0].oidc[0].issuer
}

resource "aws_iam_role" "eks_cni_role" {
  name = "eks-cni-role"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "${aws_iam_openid_connect_provider.oidc_provider.arn}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${data.aws_eks_cluster.my-cluster.identity[0].oidc[0].issuer}:sub": "system:serviceaccount:kube-system:aws-node"
        }
      }
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "eks_cni_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_cni_role.name
}

resource "aws_eks_addon" "addons" {
  for_each                = { for addon in var.addons : addon.name => addon }
  cluster_name            = aws_eks_cluster.my-cluster.name
  addon_name              = each.value.name
  addon_version           = each.value.version
  resolve_conflicts_on_update = "OVERWRITE"

  provider {
    if each.value.name == "vpc-cni" {
      service_account_role_arn = aws_iam_role.eks_cni_role.arn
    }
  }
}

// ... existing code ...

AWS Management Console and AWS CLI

https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html

  1. Create an Amazon VPC
    • https://docs.aws.amazon.com/eks/latest/userguide/creating-a-vpc.html
    • public and private subnets that meet Amazon EKS requirements
    • the public route table has a route to an internet gateway
    • the private route table does not have a route to an internet gateway;
      instead it uses a NAT gateway, so Pods in a private subnet can reach the internet through the NAT gateway using IPv4 addresses
    • alternatives: public-only subnets, or private-only subnets
aws cloudformation create-stack \
  --region ap-northeast-2 \
  --stack-name my-eks-vpc-stack \
  --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-sample.yaml

# Create the stack with the CIDR changed to 172.16.0.0/16
aws cloudformation create-stack \
  --region ap-northeast-2 \
  --stack-name my-eks-vpc-stack \
  --template-body file://amazon-eks-vpc-sample.yaml
  2. EKS cluster IAM Role creation

https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html

cat > eks-cluster-role-trust-policy.json <<-EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role
aws iam create-role \
  --role-name myAmazonEKSClusterRole \
  --assume-role-policy-document file://"eks-cluster-role-trust-policy.json"

# Attach the required Amazon EKS managed IAM policy to the role
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy \
  --role-name myAmazonEKSClusterRole

aws iam list-roles | grep RoleName
  3. EKS cluster creation

  Create the EKS cluster using the AWS Management Console:

    1. Go to EKS > Add cluster > Create
    2. Enter a name
    3. Choose the Cluster Service Role
    4. Specify networking: VPC
    5. Add-ons (see 5. Add-ons below)
      • https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html
    6. Check EKS add-on versions such as vpc-cni, CoreDNS, and kube-proxy with the URLs below:
      • https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
      • https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
      • https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html

  4. EKS kubeconfig setting

aws eks update-kubeconfig --region ap-northeast-2 --name my-cluster --profile terraform
kubectl get svc
  5. Add-ons

  • EKS cluster OIDC registration

To use some Amazon EKS add-ons, or to enable individual Kubernetes workloads to have specific AWS Identity and Access Management (IAM) permissions, create an IAM OpenID Connect (OIDC) provider for your cluster. You only need to create an IAM OIDC provider for your cluster once. To learn more about Amazon EKS add-ons, see Amazon EKS add-ons. To learn more about assigning specific IAM permissions to your workloads, see IAM roles for service accounts.
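To check whether an OIDC provider is already registered for the cluster (assuming the my-cluster name and terraform profile used throughout these notes):

# Issuer URL of the cluster
aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text --profile terraform

# Is a matching IAM OIDC provider already registered in this account?
aws iam list-open-id-connect-providers --profile terraform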
  • https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html

  • Check EKS add-on versions such as vpc-cni, CoreDNS, and kube-proxy with the URLs below

  • https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
  • https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
  • https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html
# Determine available list of add-ons for specified EKS version
#   vpc-cni, coredns, kube-proxy are default add-ons
aws eks describe-addon-versions --kubernetes-version 1.28 \
    --query 'addons[].{MarketplaceProductUrl: marketplaceInformation.productUrl, Name: addonName, Owner: owner, Publisher: publisher, Type: type}' --output table

# You can see which versions are available for each add-on
#   vpc-cni
aws eks describe-addon-versions --kubernetes-version 1.28 --addon-name vpc-cni \
    --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
  ------------------------------------------
  |          DescribeAddonVersions         |
  +-----------------+----------------------+
  | Defaultversion  |       Version        |
  +-----------------+----------------------+
  |  False          |  v1.16.0-eksbuild.1  |
  |  False          |  v1.15.5-eksbuild.1  |
  |  True           |  v1.14.1-eksbuild.1  |
  ...
  +-----------------+----------------------+

#   coredns
aws eks describe-addon-versions --kubernetes-version 1.28 --addon-name coredns \
    --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table

  ------------------------------------------
  |          DescribeAddonVersions         |
  +-----------------+----------------------+
  | Defaultversion  |       Version        |
  +-----------------+----------------------+
  |  False          |  v1.10.1-eksbuild.6  |
  |  False          |  v1.10.1-eksbuild.5  |
  |  True           |  v1.10.1-eksbuild.2  |
  ...
  +-----------------+----------------------+

#   kube-proxy
aws eks describe-addon-versions --kubernetes-version 1.28 --addon-name kube-proxy \
    --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
  -------------------------------------------
  |          DescribeAddonVersions          |
  +-----------------+-----------------------+
  | Defaultversion  |        Version        |
  +-----------------+-----------------------+
  |  False          |  v1.28.4-eksbuild.1   |
  |  False          |  v1.28.2-eksbuild.2   |
  |  True           |  v1.28.1-eksbuild.1   |
  ...
  +-----------------+-----------------------+


# Find the configuration options for your chosen add-on by running the following command:
aws eks describe-addon-configuration --addon-name vpc-cni --addon-version v1.14.1-eksbuild.1
aws eks describe-addon-configuration --addon-name coredns --addon-version v1.10.1-eksbuild.2
aws eks describe-addon-configuration --addon-name kube-proxy --addon-version v1.28.1-eksbuild.1

# 1. vpc-cni
# Add-ons prerequisite
#   https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
#   This add-on utilizes the IAM roles for service accounts capability of Amazon EKS.
#   If your cluster uses the IPv4 family, the permissions in the AmazonEKS_CNI_Policy are required.


# This VPC has two public and two private subnets.
# A public subnet's associated route table has a route to an internet gateway.
# However, the route table of a private subnet doesn't have a route to an internet gateway.

# Pre1) Creating an IAM OIDC provider for a cluster
#   Your cluster has an OpenID Connect (OIDC) issuer URL associated with it.
#   To use AWS Identity and Access Management (IAM) roles for service accounts,
#   an IAM OIDC provider must exist for your cluster's OIDC issuer URL.
#   https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html
#   The official EKS feature "IAM roles for service accounts" (IRSA) is recommended:
#   it associates an IAM Role with a ServiceAccount and links that ServiceAccount
#   to a Pod, granting fine-grained, secure permissions.
#   https://emadam.tistory.com/110
#   https://www.blog-dreamus.com/post/flo-tech-aws-eks%EC%97%90%EC%84%9C%EC%9D%98-iam-%EC%97%AD%ED%95%A0-%EB%B6%84%EB%A6%AC

# service account
# https://kubernetes.io/docs/concepts/security/service-accounts/

# A ServiceAccount is a K8s resource just like a Pod or Service;
# its role is to grant permissions (IAM) to non-human resources such as Pods.

# The aws-load-balancer-controller likewise needs a ServiceAccount (see the
# related yaml) to perform actions such as creating an ALB. An EKS cluster can
# associate an IAM role with a ServiceAccount (related yaml). When the cluster
# integrates with AWS resources this way, an authentication handshake with AWS
# is required. OIDC, an authentication layer built on top of OAuth 2.0, is what
# makes this integration possible.


cluster_name=my-cluster
echo $cluster_name
oidc_id=$(aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
echo $oidc_id

# Check if OIDC provider is already in your account
aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4

eksctl utils associate-iam-oidc-provider --cluster $cluster_name --approve

# Pre2) IAM role with the AmazonEKS_CNI_Policy IAM policy
#   i.e. AmazonEKSVPCCNIRole
#   https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html
#   Prerequisite: create an IAM OIDC provider for your cluster.

#   Step 1: Create the Amazon VPC CNI plugin for Kubernetes IAM role

# Determine ip Family of your cluster
# "ipFamily": "ipv4"
aws eks describe-cluster --name my-cluster | grep ipFamily

# View OIDC provider url
aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text
    # Example output: https://oidc.eks.ap-northeast-2.amazonaws.com/id/0E9285829A984AEA6C96A20F16C10A84

cat > vpc-cni-trust-policy.json <<-EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::094833749257:oidc-provider/oidc.eks.ap-northeast-2.amazonaws.com/id/0E9285829A984AEA6C96A20F16C10A84"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.ap-northeast-2.amazonaws.com/id/0E9285829A984AEA6C96A20F16C10A84:aud": "sts.amazonaws.com",
                    "oidc.eks.ap-northeast-2.amazonaws.com/id/0E9285829A984AEA6C96A20F16C10A84:sub": "system:serviceaccount:kube-system:aws-node"
                }
            }
        }
    ]
}
EOF


aws iam create-role \
  --role-name AmazonEKSVPCCNIRole \
  --assume-role-policy-document file://"vpc-cni-trust-policy.json"

aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
  --role-name AmazonEKSVPCCNIRole

# annotate the aws-node service account with the ARN of the IAM role
# that you created previously. Replace the example values with your own values.
kubectl annotate serviceaccount \
    -n kube-system aws-node \
    eks.amazonaws.com/role-arn=arn:aws:iam::094833749257:role/AmazonEKSVPCCNIRole


#   Step 2: Re-deploy Amazon VPC CNI plugin for Kubernetes Pods
# Delete and re-create any existing Pods that are associated
# with the service account to apply the credential environment variables.

kubectl delete pods -n kube-system -l k8s-app=aws-node

kubectl get pods -n kube-system -l k8s-app=aws-node

# replace cpjw7 with the result of the above command
kubectl describe pod -n kube-system aws-node-cpjw7 | grep 'AWS_ROLE_ARN:\|AWS_WEB_IDENTITY_TOKEN_FILE:'

    # AWS_ROLE_ARN:                 arn:aws:iam::094833749257:role/AmazonEKSVPCCNIRole
    # AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    # AWS_ROLE_ARN:                 arn:aws:iam::094833749257:role/AmazonEKSVPCCNIRole
    # AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token


# Step 3: Remove the CNI policy from the node IAM role
aws iam detach-role-policy --role-name myAmazonEKSNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy


kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni --query addon.addonVersion --output text
kubectl get daemonset aws-node -n kube-system -o yaml > aws-k8s-cni-old.yaml

aws eks create-addon --cluster-name my-cluster \
  --addon-name vpc-cni --addon-version v1.14.1-eksbuild.1 \
  --service-account-role-arn arn:aws:iam::094833749257:role/AmazonEKSVPCCNIRole \
  --configuration-values '{"resources":{"limits":{"cpu":"100m"}}}' \
  --resolve-conflicts OVERWRITE

aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni --query addon.addonVersion --output text

# Note: coredns and kube-proxy do not need an IAM role; passing the VPC CNI role
# to them is unnecessary, so --service-account-role-arn is omitted below
aws eks create-addon --cluster-name my-cluster \
  --addon-name coredns --addon-version v1.10.1-eksbuild.2 \
  --configuration-values 'file://example.yaml' \
  --resolve-conflicts OVERWRITE

aws eks create-addon --cluster-name my-cluster \
  --addon-name kube-proxy --addon-version v1.28.1-eksbuild.1 \
  --configuration-values 'file://example.yaml' \
  --resolve-conflicts OVERWRITE

aws eks list-addons --cluster-name my-cluster

aws eks describe-addon --cluster-name my-cluster \
  --addon-name vpc-cni --query "addon.addonVersion" --output text
resource "aws_eks_addon" "vpc-cni" {
  cluster_name                = aws_eks_cluster.my-cluster.name
  addon_name                  = "vpc-cni"
  addon_version               = "v1.14.1-eksbuild.1"
  resolve_conflicts_on_update = "PRESERVE"
}

resource "aws_eks_addon" "coredns" {
  cluster_name                = aws_eks_cluster.my-cluster.name
  addon_name                  = "coredns"
  addon_version               = "1.10.1-eksbuild.2"
  resolve_conflicts_on_update = "PRESERVE"
}

resource "aws_eks_addon" "kube-proxy" {
  cluster_name                = aws_eks_cluster.my-cluster.name
  addon_name                  = "kube-proxy"
  addon_version               = "v1.28.1-eksbuild.1"
  resolve_conflicts_on_update = "PRESERVE"
}

https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html

If an add-on requires IAM permissions, then you must have an IAM OpenID Connect (OIDC) provider for your cluster.

To determine whether you have one, or to create one, see Creating an IAM OIDC provider for your cluster. You can update or delete an add-on once you've installed it.
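For example, a sketch using the add-on versions from above (flags per aws eks update-addon / delete-addon):

# Update an installed add-on to a specific version
aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni \
  --addon-version v1.14.1-eksbuild.1 --resolve-conflicts OVERWRITE --profile terraform

# Remove an add-on from the cluster
aws eks delete-addon --cluster-name my-cluster --addon-name vpc-cni --profile terraform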

OIDC

https://www.youtube.com/watch?v=nIIxexG7_a8&list=PLiMWaCMwGJXkeBzos8QuUxiYT6j8JYGE5&index=1

https://www.youtube.com/watch?v=ZfjpWOC5eoE

https://github.com/antonputra/tutorials/tree/main/lessons/006

  • A secure way to access AWS services from Kubernetes is to use an OpenID Connect (OIDC) provider, which establishes trust between the AWS IAM system and the Kubernetes RBAC system.

Configure your computer to communicate with your cluster

https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-configure-kubectl

  • After cluster creation
  • Configures kubectl so that you can connect to an Amazon EKS cluster.
  • Configuration file is created in ~/.kube/config
aws sts get-caller-identity

aws eks update-kubeconfig --region ap-northeast-2 --name my-cluster --profile terraform

# Edit SG for EKS inbound rules:
# All traffic 172.16.0.0/16

kubectl config current-context
    minikube
kubectl config get-contexts

kubectl config use-context arn:aws:eks:<region>:<account-id>:cluster/my-cluster
kubectl config use-context arn:aws:eks:ap-northeast-2:094833749257:cluster/my-cluster

kubectl config use-context minikube

kubectl get no -o wide
kubectl get pods -A -o wide

Create nodes

https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html https://docs.aws.amazon.com/eks/latest/userguide/eks-compute.html

A Kubernetes node is a machine that runs containerized applications. Each node has the following components:

  • Container runtime – software that's responsible for running the containers.
  • kubelet – makes sure that containers are healthy and running within their associated Pod.
  • kube-proxy – maintains network rules that allow communication to your Pods.
cat > node-role-trust-policy.json <<-EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the node IAM role
# https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html
aws iam create-role \
  --role-name myAmazonEKSNodeRole \
  --assume-role-policy-document file://"node-role-trust-policy.json"

# Attach the required managed IAM policies to the role
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
  --role-name myAmazonEKSNodeRole
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
  --role-name myAmazonEKSNodeRole
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
  --role-name myAmazonEKSNodeRole
  • Go to EKS console > cluster > Compute tab
  • Add Node Group: write name and choose Node IAM role name

Errors seen on the aws-node-wcr6m pod:

Failed to pull image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.0.2-eksbuild.1": failed to pull and unpack image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.0.2-eksbuild.1": failed to resolve reference "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.0.2-eksbuild.1": unexpected status from HEAD request to https://602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/v2/amazon/aws-network-policy-agent/manifests/v1.0.2-eksbuild.1: 401 Unauthorized

Readiness probe failed: {"level":"info","ts":"2024-01-11T14:21:20.132Z","caller":"/root/sdk/go1.20.4/src/runtime/proc.go:250","msg":"timeout: failed to connect service \":50051\" within 5s"}

Back-off restarting failed container aws-node in pod aws-node-wcr6m_kube-system(bd01d52d-8aea-4ef3-adb5-6dee126c63a1)

aws ec2 describe-subnets --region ap-northeast-2 | grep AvailabilityZone\"
    # "AvailabilityZone": "ap-northeast-2c",
    # "AvailabilityZone": "ap-northeast-2a",
    # "AvailabilityZone": "ap-northeast-2b",
    # "AvailabilityZone": "ap-northeast-2d",

aws ec2 describe-subnets --region us-east-1 | grep AvailabilityZone\"

Subnets: private-ap-northeast-2a, private-ap-northeast-2b, public-ap-northeast-2a, public-ap-northeast-2b

External access

  1. Expose Service (type: LoadBalancer)

Using a Service of type LoadBalancer, AWS will automatically create an NLB for you. You can get the external IP or DNS name of the load balancer. This is limited to basic load balancing features; it does not support advanced routing such as path-based or host-based routing.

apiVersion: v1
kind: Service
metadata:
  name: fe-nginx-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # only for a private (internal) load balancer
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  selector:
    app: fe-nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
  type: LoadBalancer
kubectl apply -f service.yaml
kubectl get svc
  2. Ingress + Service (type: ClusterIP)

    • https://youtu.be/uiuoNToeMFE?si=TijV0fhrfUTvJ4BL
    • https://youtu.be/5XpPiORNy1o?si=YwBVGrCUzEenlomB
    • AWS Load Balancer Controller
      • Requires a dedicated IAM role to interact with AWS (i.e. to create an ALB)
      • https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
      • https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
      • https://www.youtube.com/watch?v=ZfjpWOC5eoE

https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/how-it-works/

When you create a Kubernetes Service of type ClusterIP and an Ingress resource, the AWS Load Balancer Controller provisions an AWS Application Load Balancer (ALB) to manage the ingress traffic.

It supports advanced routing features, SSL termination, and integration with AWS WAF.

The Ingress resource handles the external exposure and routing of traffic to your services, and the ClusterIP type (reachable only from within the cluster or through an ingress controller) is sufficient for internal communication within the cluster.

Client
  |
  v
ALB (DNS Name)
  |
  v
AWS Load Balancer Controller (Pod in kube-system namespace)
  |
  v
Ingress
  |
  v
Service of type ClusterIP
  |
  v
Pods
# Create an IAM role with OIDC associated.
# It automates the process of creating the IAM role,
# setting up the trust policy for the OIDC provider,
# and annotating the Kubernetes service account with the IAM role ARN.
# This allows the service account to assume the IAM role and use its permissions.

# Note (assumption): in practice you also attach the controller's IAM policy here, e.g.
#   --attach-policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy
eksctl create iamserviceaccount \
  --cluster=my-cluster \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name aws-lbc \
  --approve
# get sg-id
aws ec2 describe-security-groups --filters "Name=group-name,Values=alb-sg" --query "SecurityGroups[*].GroupId" --output text
    sg-xxxxxx
  • Edit ingress-aws.yaml
    • Creating the Ingress resource will create an AWS ALB (Application Load Balancer)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fe-nginx-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-path: /health
spec:
  # This field is used to specify the class of the Ingress. The Ingress class is used by the Ingress Controller to determine whether or not it should act on a particular Ingress resource.
  ingressClassName: alb
  rules:
  # ...
# Configure kubectl to my-cluster
aws eks update-kubeconfig --region ap-northeast-2 --name my-cluster --profile terraform


# Check AWS Loadbalancer Controller is created
kubectl get ingressclass

kubectl apply -f ingress-aws.yaml

kubectl get ingress fe-nginx-ingress -w
   NAME              CLASS    HOSTS   ADDRESS                                         PORTS   AGE
   fe-nginx-ingress  <none>   *       abcdefg1234567890-1234567890.us-west-2.elb.amazonaws.com   80, 443   5m


curl http://abcdefg1234567890-1234567890.us-west-2.elb.amazonaws.com




Troubleshooting with terraform

kubectl get pods -n kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5b9dfbf96-phksv    0/1     Pending   0          6m34s
coredns-5d99cdb6c5-64j8n   0/1     Pending   0          5m56s
coredns-5d99cdb6c5-cbwfk   0/1     Pending   0          5m56s

k describe pod coredns-5b9dfbf96-phksv -n kube-system
k logs coredns-5b9dfbf96-phksv -n kube-system
  • EKS load balancer controller

https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/

https://three-beans.tistory.com/entry/AWSEKS-%EC%BD%98%EC%86%94%EB%A1%9C-%EC%83%9D%EC%84%B1%ED%95%98%EB%8A%94-EKS-%E2%91%A0-intro-network-%EA%B3%A0%EB%A0%A4%EC%82%AC%ED%95%AD

https://three-beans.tistory.com/entry/AWSEKS-%EC%BD%98%EC%86%94%EB%A1%9C-%EC%83%9D%EC%84%B1%ED%95%98%EB%8A%94-EKS-%E2%91%A3-ingress-AWS-LoadBalancer-Controller-%EA%B5%AC%EC%84%B1

https://velog.io/@holicme7/K8s-EKS-%ED%99%98%EA%B2%BD%EC%97%90%EC%84%9C-Istio-ALB-%EC%97%B0%EB%8F%99%ED%95%98%EA%B8%B0-AWS-LoadBalaner-Controller

Istio + Ingress + ALB + Gateway

https://wrynn.tistory.com/66
https://www.mirantis.com/blog/your-app-deserves-more-than-kubernetes-ingress-kubernetes-ingress-vs-istio-gateway-webinar/
https://rtfm.co.ua/en/aws-elastic-kubernetes-service-running-alb-ingress-controller/
https://journes.tistory.com/60

  • There are several ways to make an application reachable by external users:
    • NodePort and LoadBalancer (Kubernetes Service)
    • Ingress
    • Istio

  • Just as Nginx is the classic Ingress controller in an Ingress setup,
    in Istio the high-performance Envoy proxy container inside the Pod called istio-ingressgateway
    receives traffic entering the Kubernetes cluster first and routes it to each service.

  • ingress controller
    • https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
    • Istio Ingress is an Istio-based ingress controller.
    • https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/
  • ingress
    • https://kubernetes.io/docs/concepts/services-networking/ingress/
    • creating an Ingress creates an ALB
    • prerequisite: the AWS Load Balancer Controller add-on
    • the kubernetes.io/ingress.class annotation is required to tell the Istio gateway controller that it should handle this Ingress
  • aws load balancer controller add-on
    • https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
    • works with a Kubernetes Ingress
    • or a Kubernetes Service of the LoadBalancer type
  • istio ingress gateway
    • https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/

If istio-ingressgateway is Istio's counterpart to the Ingress controller, the counterparts to the rule-defining Ingress are the Kubernetes custom resources Gateway and VirtualService (a minimal sketch follows).
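A minimal sketch of that pair (hypothetical hostname; fe-nginx-service is the Service from the example above; the selector matches the istio: ingressgateway label mentioned below):

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: fe-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "fe.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: fe-vs
spec:
  hosts:
  - "fe.example.com"
  gateways:
  - fe-gateway
  http:
  - route:
    - destination:
        host: fe-nginx-service
        port:
          number: 8080
EOF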

In order for the Ingress resource to work, the cluster must have an ingress controller running. Unlike other types of controllers which run as part of the kube-controller-manager binary, Ingress controllers are not started automatically with a cluster.

https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
https://velog.io/@holicme7/K8s-EKS-%ED%99%98%EA%B2%BD%EC%97%90%EC%84%9C-Istio-ALB-%EC%97%B0%EB%8F%99%ED%95%98%EA%B8%B0-AWS-LoadBalaner-Controller

  • Ingress managed LoadBalancer -> Ingress -> Service -> Pod

https://rtfm.co.ua/en/istio-an-overview-and-running-service-mesh-in-kubernetes/
https://rtfm.co.ua/en/istio-external-aws-application-loadbalancer-and-istio-ingress-gateway/

  • Traffic flow:
    1. ALB with SSL
    2. EKS cluster
    3. EKS worker node
    4. Istio Ingress Gateway Service
    5. Istio Ingress Gateway Pod
    6. Application Service
    7. Application Pod

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

https://sixmen.com/ko/tech/2022-03-31-3-build-eks-cluster-with-terraform/

  • https://wrynn.tistory.com/66

NodePort

NodePort allocates a port on each node, one port per service. As the number of services grows, the port range (default: 30000-32767) can run out.
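For example (fe-nginx is the deployment name used elsewhere in these notes):

kubectl expose deployment fe-nginx --type=NodePort --port=80
kubectl get svc fe-nginx -o jsonpath='{.spec.ports[0].nodePort}'
    # e.g. 31234, allocated from the default 30000-32767 range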

Ingress

  • Unlike NodePort, Ingress needs only a single port on each node. In Kubernetes, Ingress is an object that manages external access to the services in a cluster.

In production you rarely run a single node, so it is common to place a load balancer in front of the node group. Configuring Ingress together with a load balancer lets a single load balancer serve as the entry point while Ingress host- and path-based routing (see the sketch below) serves multiple services.
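A quick way to see host-based routing in an Ingress manifest (hypothetical hosts and service names; --dry-run=client only prints the generated YAML):

kubectl create ingress multi-host --class=alb \
  --rule="app1.example.com/*=svc-app1:80" \
  --rule="app2.example.com/*=svc-app2:80" \
  --dry-run=client -o yaml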

Any Pods that are configured to use the service account
can then access any AWS service that the role has permissions to access.


Istio

  • Use with EKS

https://nyyang.tistory.com/158
https://dev.to/airoasis/eks-eseo-istio-wa-application-load-balancer-yeongyeol-2k2j
https://sarc.io/index.php/cloud/2282-istio-eks
https://rtfm.co.ua/en/istio-external-aws-application-loadbalancer-and-istio-ingress-gateway/

  • Prerequisite: AWS Load Balancer Controller

The AWS Load Balancer Controller is open source: simply deploying a YAML manifest
in Kubernetes is enough for the AWS Load Balancer Controller pod to detect it and
use the AWS API to create and configure a Load Balancer as requested.
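A typical installation via Helm (a sketch; it reuses the aws-load-balancer-controller service account created with eksctl earlier in these notes):

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller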

Just as Nginx is the classic Ingress controller in an Ingress setup, in Istio the
high-performance Envoy proxy container inside the Pod called istio-ingressgateway
receives traffic entering the Kubernetes cluster first and routes it to each service.

If istio-ingressgateway is Istio's counterpart to the Ingress controller, the
counterparts to the rule-defining Ingress are the Kubernetes custom resources
Gateway and VirtualService. The Envoy inside the istio-ingressgateway Pod routes
traffic according to the rules declared in the Gateway and VirtualService.

  • Install Istio with the Operator

istio-operator.yaml

istio labels

istio: ingressgateway
istio: ingressgateway-alb
istio: ingressgateway-ext-alb

AWS LB (the ALB is created by the Ingress, i.e. the rule spec) -> Istio Service -> Istio Pod (here the Istio Pod's label determines which Istio Gateway's rules/settings it will follow).

  • salat-stage-cwmp.myinfo.net -> myinfo-cwmp-stg-salat
  • salat-stage-mqtt.myinfo.net -> myinfo-nlb-stage-salat
  • salat-stage-xmpp.myinfo.net -> myinfo-nlb-stage-salat
  • salat-stage.myinfo.net -> myinfo-dashboard-stg-salat

kubectl label namespace <NAMESPACE_NAME> istio-injection=enabled
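To verify the label:

# Show which namespaces have sidecar injection enabled
kubectl get namespace -L istio-injection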


Helm Chart

  • Install helm
    • on the same client PC as kubectl
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
  • Configure kubectl
    • Check context : kubectl config current-context
    • Update .kube/config
# TO LOCAL
kubectl config use-context minikube

# TO EKS
aws eks update-kubeconfig --region ap-northeast-2 --name my-cluster --profile terraform
  • Create helm chart
helm create tst-chart
  • Check validity
cd simpledl/script
tree
    tst-chart
    ย ย  โ”œโ”€โ”€ Chart.yaml
    ย ย  โ”œโ”€โ”€ charts
    ย ย  โ”œโ”€โ”€ templates
    ย ย  โ”‚ย ย  โ”œโ”€โ”€ _helpers.tpl
    ย ย  โ”‚ย ย  โ”œโ”€โ”€ deployment.yaml
    ย ย  โ”‚ย ย  โ”œโ”€โ”€ hpa.yaml
    ย ย  โ”‚ย ย  โ”œโ”€โ”€ ingress.yaml
    ย ย  โ”‚ย ย  โ””โ”€โ”€ service.yaml
    ย ย  โ”œโ”€โ”€ values.dev.yaml
    ย ย  โ””โ”€โ”€ values.prd.yaml

helm lint tst-chart
helm template tst-chart --debug

# check results without installation
# helm install --dry-run tst-chart --generate-name
helm install --dry-run tst-release ./tst-chart -f ./tst-chart/values.dev.yaml
  • Install
helm install tst-release ./tst-chart -f ./tst-chart/values.dev.yaml
  • Upgrade
    • Helm will perform a rolling update for the affected resources (e.g., Deployments, StatefulSets).
    • Pods are replaced one by one, ensuring zero-downtime during the update.
    • Helm manages this process transparently.
    • Edit values.dev.yaml and run the helm upgrade command
services: 
  - name: fe-nginx
    replicaCount: 3
helm list
    NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART              APP VERSION
    tst-release     default         1               2024-07-09 14:25:50.410043621 +0900 KST deployed        tst-chart-0.1.0    1.16.0

helm upgrade tst-release ./tst-chart -f ./tst-chart/values.dev.yaml
    Release "tst-release" has been upgraded. Happy Helming!
    NAME: tst-release
    LAST DEPLOYED: Tue Jul  9 14:35:35 2024
    NAMESPACE: default
    STATUS: deployed
    REVISION: 2
    TEST SUITE: None

helm list
    NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART              APP VERSION
    tst-release     default         2               2024-07-09 14:35:35.404055879 +0900 KST deployed        tst-chart-0.1.0    1.16.0
  • Rollback
helm rollback tst-release VERSION_NO
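To find the revision number to roll back to:

helm history tst-release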
  • Uninstall (Helm v3)
    • (was helm delete --purge in Helm v2)
helm uninstall tst-release
  • Helm Repository
    • While you can deploy a Helm chart directly from the filesystem, it's recommended to use Helm repositories.
    • Helm repositories allow versioning, collaboration, and easy distribution of charts (a minimal flow is sketched below).
    • LINK
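A minimal publish-and-consume flow (the repository URL is a placeholder):

helm package tst-chart                           # produces tst-chart-0.1.0.tgz
helm repo add myrepo https://example.com/charts  # hypothetical chart repository
helm repo update
helm install tst-release myrepo/tst-chart -f values.dev.yaml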

fe: .init load fe: web/cat POST configmap /config/settings.ini in each pods


  • kubectl rollout restart deployment

    • restarts the pods associated with the specified deployment.
    • However, it doesn't directly remove the old ReplicaSets (RS);
    • the old RSs are retained for history and to support rollbacks if needed.
    • Here's what happens when you run kubectl rollout restart deployment:

  • New ReplicaSet (RS):

    • A new RS is created with the updated configuration (if any changes were made).
    • The new RS manages the desired number of pods according to the deployment's configuration.

  • Pods transition:

    • The pods transition from the old RS to the new RS.
    • The old pods are gradually terminated, and the new pods are created.

  • Old ReplicaSet (RS):

    • The old RS continues to exist, managing the old pods.
    • It is not removed immediately, to allow for a smooth transition.

  • Cleanup:

    • After the new pods are fully running and stable, you can manually delete the old RS if you no longer need it.

To clean up the old RSs, you can use the following command. Remember to verify that the new pods are functioning correctly before deleting the old RSs.

kubectl delete rs <old-replica-set-name>
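A typical sequence (using the fe-nginx deployment from the examples above):

kubectl rollout restart deployment fe-nginx
kubectl rollout status deployment fe-nginx
kubectl get rs -l app=fe-nginx
    # the old RS scales to DESIRED 0; delete it once the new pods are stable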
