
- Kubernetes Core concept

Date: June 1, 2024


Kubernetes Component

Kubernetes is an open-source container orchestration system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation (CNCF).

Kubernetes schedules the components of a distributed application onto individual computers in the underlying computer cluster and acts as an interface between the application and the cluster.

Control plane

  • manages the worker nodes and the Pods in the cluster
  • runs the control plane components, which monitor and manage the overall state of the cluster and its resources
  • schedules and starts pods
  • the following processes run on every control plane (master) node:
  • kube-apiserver
    • exposes the Kubernetes API and provides the front end to the Control Plane.
    • a single entrypoint (cluster gateway) for interacting with the k8s control plane
    • handles api requests, authentication, and authorization
    • acts as a gatekeeper for authentication
      • request → api server → validates request → other processes → pod creation
    • UI (dashboard), API(script,sdk), CLI (kubectl)
  • kube-scheduler
    • watches for newly created Pods with no assigned node, and selects a node for them to run on.
    • based on resource percentage of nodes being used - including resource requirements, hardware/software constraints and data locality.
    • schedule new pod → api server → scheduler: distribute workloads across worker nodes
  • kube-controller-manager
    • ensures the cluster remains in the desired state
    • run controllers which run loops to ensure the configuration matches actual state of the running cluster.
    • these controllers are as follows:
      • Node controller: checks and ensures nodes are up and running
      • Job Controller: manages one-off tasks
      • Endpoints Controller: populates endpoints and joins services and pods
      • Service Account and Token Controller: creation of accounts and API access tokens
    • detect cluster state changes(pods state)
    • Controller Manager (detects pod state) → Scheduler → Kubelet (on worker node)
  • cloud-controller-manager
    • The cloud controller manager lets you link your cluster into your cloud provider's API
  • etcd
    • consistent and highly-available key-value store that maintains cluster state and ensures data consistency
    • cluster brain!
    • key-value data store of cluster state
    • cluster changes get stored in the key value store!
    • Is the cluster healthy? What resources are available? Did the cluster state change?
    • NO application data is stored in etcd!
    • can be replicated
    • Multiple master nodes for secure storage
      • api server is load balanced
      • distributed storage across all the master nodes
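On a running cluster you can usually see these control-plane processes as Pods in the kube-system namespace (a quick check, assuming a kubeadm-style setup where they run as static Pods; managed clouds hide them):

```sh
# list control-plane components; exact names and suffixes differ per cluster
kubectl get pods -n kube-system -o wide | grep -E 'kube-apiserver|kube-scheduler|kube-controller-manager|etcd'
```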

Worker node

  • host multiple pods which are the components of the application workload
  • the following 3 processes must be installed on every node:
  • kubelet
    • kubelet agent runs on each worker node
    • makes sure that containers(described in PodSpecs) are running in a Pod.
    • starts and manages the pods and containers assigned to its node (cluster-level scheduling is done by the kube-scheduler)
    • interacts with both the container and node
    • starts the pod with a container inside
    • watches for changes in pod spec and takes action
    • ensures the pods running on the node are running and are healthy.
  • kube-proxy
    • forwards requests made to Services on to the backing Pods
    • intelligent and performant forwarding logic that distributes requests to pods with low network overhead
      • it can forward a request for a Service to a Pod on the same node instead of a Pod on another node, which lowers network overhead
    • a daemon on each node that applies network rules such as load balancing and routing
    • enables communication between pods and external clients
    • a proxy network running on the node that manages the network rules and the communication across pods, from networks inside or outside of the cluster
  • Container runtime

    • responsible for pulling images, creating containers, and managing the lifecycle of containers
    • e.g. containerd
  • https://kubernetes.io/docs/concepts/architecture/


  • Example of Cluster Set-up
    • 2 Master nodes, 3 Worker nodes
    • Master node : less resources
    • Worker node : more resources for running applications
    • can add more Master or Worker nodes
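A hedged sketch of how an extra Worker node is typically joined, assuming a kubeadm-provisioned cluster (managed services use their own node-group tooling; endpoint, token, and hash below are placeholders):

```sh
# on an existing control-plane node: print a join command with a fresh token
kubeadm token create --print-join-command

# on the new worker node: run the printed command (placeholder values shown)
sudo kubeadm join <control-plane-endpoint>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```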

↑ Back to top

Pod

  • Pods
  • Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
  • abstraction over container
  • usually 1 application(container) per pod
  • each pod gets its own ip address
  • ephemeral; a new (unique) IP for each re-creation

  • Pods and controllers

    • You can use workload resources to create and manage multiple Pods for you.
    • Deployment
    • StatefulSet
    • DaemonSet
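A minimal standalone Pod manifest for reference (in practice you usually let a Deployment, StatefulSet, or DaemonSet create Pods for you; the name and image here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
```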

↑ Back to top

Workload Management

Deployment

  • manage stateless application workloads on your cluster, where any Pod in the Deployment is interchangeable and can be replaced if needed.

  • Rollover (aka multiple updates in-flight)

    • Each time a new Deployment is observed by the Deployment controller, a ReplicaSet is created to bring up the desired Pods. If the Deployment is updated, existing ReplicaSets that control Pods whose labels match .spec.selector but whose template does not match .spec.template are scaled down. Eventually, the new ReplicaSet is scaled up to .spec.replicas and all old ReplicaSets are scaled to 0.
  • Rolling Back to a Previous Revision

# History
kubectl rollout history deployment/nginx-deployment
# History details
kubectl rollout history deployment/nginx-deployment --revision=2


# Rollback to previous version
kubectl rollout undo deployment/nginx-deployment

# Rollback to revision=2
kubectl rollout undo deployment/nginx-deployment --to-revision=2
  • Scaling a Deployment
kubectl scale deployment/nginx-deployment --replicas=10
  • ReplicaSet
  • StatefulSets
  • DaemonSet

  • Clean up Policy

    • You can set .spec.revisionHistoryLimit field in a Deployment to specify how many old ReplicaSets for this Deployment you want to retain.
    • The rest will be garbage-collected in the background. By default, it is 10.
  • Writing a Deployment Spec

    • .spec.template and .spec.selector are the only required fields of the .spec.
    • .spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.
    • .spec.selector must match .spec.template.metadata.labels, or it will be rejected by the API.
  • Strategy

    • .spec.strategy specifies the strategy used to replace old Pods by new ones.
    • .spec.strategy.type can be "Recreate" or "RollingUpdate"(default)
    • RollingUpdate: You can specify maxUnavailable and maxSurge to control the rolling update process.
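A sketch of how the strategy block fits into a Deployment spec (the name matches the rollout commands above; the numbers are examples, not defaults):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 Pod below the desired count during the update
      maxSurge: 1         # at most 1 extra Pod above the desired count
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```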

ReplicaSet

  • A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time.
  • As such, it is often used to guarantee the availability of a specified number of identical Pods.
  • Use a Deployment instead, which is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with many other useful features.

StatefulSet

  • StatefulSet is the workload API object used to manage stateful applications.
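A minimal StatefulSet sketch (it assumes a headless Service such as the mongodb-service-headless defined later on this page; storage via volumeClaimTemplates is omitted, and the image/replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb-service-headless   # headless Service that gives Pods stable network identities
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:6.0
        ports:
        - containerPort: 27017
```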

↑ Back to top

Service

  • Service, Load Balancing, and Networking
  • Service & Ingress
  • The Service API, part of Kubernetes, is an abstraction to help you expose groups of Pods over a network.
  • Services provide a stable (permanent) IP address. Each Pod has its own IP address, but Pods are ephemeral.
  • Load balancing
  • loose coupling
  • within & outside cluster
  • pods communicate with each other using services
  • external service
    • http://node-ip:port
  • internal service

    • http://db-service-ip:port
  • ClusterIP services

    • default type
    • microservice app deployed
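A minimal internal (ClusterIP) Service sketch; since ClusterIP is the default, the type can be omitted (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  type: ClusterIP        # default; shown for clarity
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
```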

↑ Back to top

Ingress

Ingress is an object that allows external traffic to reach services within a cluster. It acts as a single entry point for incoming traffic, routing it to the appropriate services based on rules defined via the Kubernetes API. Here are the key points about Ingress:

  • Purpose:
    • Expose Services: Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
    • Traffic Routing: Traffic routing is controlled by rules defined on the Ingress resource.
  • Capabilities:
    • URL Mapping: An Ingress can give services externally-reachable URLs.
    • Load Balancing: It can load balance traffic.
    • SSL/TLS Termination: Ingress can terminate SSL/TLS connections.
    • Virtual Hosting: It offers name-based virtual hosting.
  • Controller: An Ingress controller (usually backed by a load balancer) fulfills the Ingress rules.

  • The host: myapp.com must be a valid domain address

  • map domain name to Node's IP address, which is the entrypoint

  • ibm technology

  • youtube LINK1
  • youtube LINK2
  • youtube LINK3
  • youtube LINK4-Service & Ingreess
  • https://my-app.com (ingress can be configured with a secure https protocol and a domain name) → forwards traffic to an internal service
  • ingress by traefik.io
  • gke ingress
  • load balancer and ingress duo
  • load balancer vs. ingress

↑ Back to top

Traffic flow

  • Let's walk through the flow of traffic in a Kubernetes environment with:

    • Ingress
    • Ingress Controller
    • external Load Balancer (such as an Application Load Balancer, ALB):
  • Ingress Creation:

    • You start by creating an Ingress resource in your Kubernetes cluster.
    • The Ingress defines routing rules based on HTTP hostnames and URL paths.
  • External Load Balancer (ELB) Creation:

    • When you create an Ingress, the cloud environment (e.g., AWS) automatically provisions an external Load Balancer (e.g., ALB).
    • The ELB acts as the entry point for external traffic.
  • Traffic Flow:

    • Here's how the traffic flows:
      • External Client: Sends a request to the ALB (Load Balancer).
      • ALB: Receives the request and forwards it to the Ingress Controller.
      • Ingress Controller: Based on the Ingress rules, the controller routes the request to the appropriate Kubernetes Service.
      • Service: The Service forwards the request to the corresponding Pod(s).

So, the complete flow is: ALB → Ingress Controller → Ingress → Service → Pod.

Ingress allows fine-grained routing, and the Ingress Controller ensures that the load balancer routes requests correctly. If you need more complex routing based on HTTP criteria, Ingress is a powerful tool!

  1. Use External Service: http://my-node-ip:svcNodePort → Pod
    • service.spec.type=LoadBalancer, nodePort=30510
    • http://localhost:30510/
      • in VirtualBox port-forward 30510
  2. Use Ingress + Internal service: https://my-app.com
    • Ingress Controller Pod → Ingress (routing rule) → Service → Pod
    • using ingress, you can configure https connection

↑ Back to top

Ingress Explained

  1. External Service (without Ingress)
apiVersion: v1
kind: Service
metadata:
  name: myapp-external-service
spec:
  selector:
    app: myapp
  # LoadBalancer : opening to public
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30510
  2. Using Ingress → internal Service (e.g. myapp-internal-service)
    • internal service has no nodePort and the type should be type: ClusterIP
    • must be valid domain address
    • map domain name to Node's IP address, which is the entrypoint
      • (one of the nodes or could be a host machine outside the cluster)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.com
    http:
      # incoming requests are forwarded to the internal service
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-internal-service
            port:
              number: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: myapp-internal-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

↑ Back to top

Ingress controller

  • implementation of ingress, which is Ingress Controller (Pod)
  • evaluates and processes ingress rules
  • manages redirections
  • entrypoint to cluster
  • many third party implementations
    • e.g. k8s Nginx Ingress Controller
  • HAVE TO CONSIDER the environment where the k8s cluster is running
    • Cloud Service Provider (AWS, GCP, AZURE)
      • External request from the browser →
        • Cloud Load balancer →
        • Ingress Controller Pod →
        • Ingress →
        • Service →
        • Pod
      • using a cloud LB, you do not have to implement the load balancer yourself
    • Baremetal
      • you need to configure some kind of entrypoint (e.g. metallb)
      • either inside of cluster or outside as separate server
      • software or hardware solution can be used
      • must provide entrypoint
      • e.g. Proxy Server: public ip address and open ports
        • Proxy server → Ingress Controller Pod → Ingress (checks ingress rules) → Service → Pod
        • no server in k8s cluster is publicly accessible from outside
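One common way to install the NGINX Ingress Controller on a non-minikube cluster is the community Helm chart (a sketch; chart values differ per environment, e.g. a cloud LoadBalancer vs. MetalLB on bare metal):

```sh
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```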

↑ Back to top

Minikube ingress implementation

# nginx implementation of ingress controller
minikube addons enable ingress

k get pod -n kube-system
    nginx-ingress-controller-xxx

  • configure an ingress rule for the kubernetes dashboard component
    • minikube creates the dashboard service by default (minikube specific)
k get ns
    kubernetes-dashboard Active 17d

k get all -n kubernetes-dashboard
    pod
    svc
  • dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # forward to the service name created by minikube
            name: kubernetes-dashboard
            port:
              number: 80
  • create ingress rule for kubernetes-dashboard
k apply -f dashboard-ingress.yaml

k get ingress -n kubernetes-dashboard --watch
    NAME                             CLASS     HOSTS                         ADDRESS                PORTS     AGE
    dashboard-ingress    nginx     dashboard.com         192.168.49.2     80            42s

vim /etc/hosts
    192.168.49.2 dashboard.com

# check in chrome browser:
# http://dashboard.com

k describe ingress dashboard-ingress -n kubernetes-dashboard

    # whenever there's a request into the cluster, there's no rule for mapping the request to service, then
    # this backend is default to handle the request. e.g. 404 not found
    # one can define custom error page
    # SIMPLY CREATE A SERVICE WITH THE SAME NAME: default-http-backend
    Default backend: default-http-backend:80
  • Define custom default-http-backend
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  selector:
    app: default-response-app
  ports:
    - protocol: TCP
      # this is the port that receives the default backend response
      port: 80
      targetPort: 8080
  • ingress rules

  • multiple paths for the same host

    • http://myapp.com/analytics
    • http://myapp.com/shopping
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /analytics
        pathType: Prefix
        backend:
          service:
            name: analytics-service
            port:
              number: 3000
      - path: /shopping
        pathType: Prefix
        backend:
          service:
            name: shopping-service
            port:
              number: 8080
  • multiple hosts
    • http://analytics.myapp.com
    • http://shopping.myapp.com
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: analytics.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: analytics-service
            port:
              number: 3000
  - host: shopping.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shopping-service
            port:
              number: 8080
  • Ingress that includes configuration of TLS certificate
    • Secret component : define yaml to create one
      • tls.crt, tls.key : values are actual file contents, NOT file paths/locations
      • Secret must be in the same namespace as the Ingress component
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  ########## TLS SETTING ##########
  tls:
  - hosts:
    - myapp.com
    secretName: myapp-secret-tls
  ########## TLS SETTING ##########
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-internal-service
            port:
              number: 8080
---
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret-tls
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls
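Instead of writing the Secret YAML by hand, the base64-encoded kubernetes.io/tls Secret can also be generated from the certificate and key files (a sketch; the file paths are placeholders):

```sh
kubectl create secret tls myapp-secret-tls \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key \
  -n default
```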
  • ConfigMap

    • stored in plaintext format
    • external configuration of your application
    • DB_URL = mongo-db
  • Secret

    • Caution: "Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd)."
      • https://kubernetes.io/docs/concepts/configuration/secret/
    • stored in base64 encoded format
    • DB_USER = mongo-user
    • DB_PW = mongo-pw
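A sketch of how a container can consume such a ConfigMap and Secret as environment variables (resource names mirror the keys above and are illustrative; the env var names follow the mongo-express image conventions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  DB_URL: mongo-db
---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  DB_USER: bW9uZ28tdXNlcg==   # base64 of "mongo-user"
  DB_PW: bW9uZ28tcHc=         # base64 of "mongo-pw"
---
apiVersion: v1
kind: Pod
metadata:
  name: mongo-express
spec:
  containers:
  - name: mongo-express
    image: mongo-express
    env:
    - name: ME_CONFIG_MONGODB_SERVER
      valueFrom:
        configMapKeyRef:
          name: mongodb-configmap
          key: DB_URL
    - name: ME_CONFIG_MONGODB_ADMINUSERNAME
      valueFrom:
        secretKeyRef:
          name: mongodb-secret
          key: DB_USER
    - name: ME_CONFIG_MONGODB_ADMINPASSWORD
      valueFrom:
        secretKeyRef:
          name: mongodb-secret
          key: DB_PW
```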
  • Volume (like an external hdd plugged into cluster)

    • k8s does not manage data persistence out of the box
    • if a database pod dies, the data disappears
    • it requires persistent storage
    • storage could be:
      • local, on the node
      • remote, outside of the cluster
  • What if pod dies => downtime occurs.

    • use service as a load balancer which distributes traffic to multiple nodes
    • Use multi-node and pod replicas(deployment as abstraction for pods)
    • deployment specifies how many pods are deployed into multiple nodes
    • BUT database pods cannot simply be replicated this way; they need to write to a single, consistent storage
      • use StatefulSet for stateful apps such as databases
  • Deployment for stateless apps

    • Deployment is the abstraction of Pods
  • StatefulSet for stateful apps or databases

    • DBs can't be replicated via a Deployment.
    • Avoid data inconsistencies
    • => StatefulSet for STATEFUL apps, e.g. MySQL, MongoDB, Elasticsearch
    • But deploying a StatefulSet is not easy
    • NOTE: "DBs are often hosted outside of the k8s cluster"
  • Minikube

    • master and worker process run on a single node
    • usually via virtual box or other hypervisor
    • for testing purposes
  • deployment

    • blueprint for creating pods
    • most basic configuration for a deployment (name and image to use)
    • rest defaults
  • replicaset

    • another layer of abstraction
    • manages the replicas of a pod
k create deployment nginx-depl --image=nginx:alpine
k get replicaset
  • Layers of Abstraction
    • Deployment : manages a replicaset
    • ReplicaSet : manages replicas of pods
    • Pod : is an abstraction of containers
    • Container
# edit the deployment, not the pod directly
k edit deployment nginx-depl
  • YAML configuration file

    • Attributes of spec are specific to the kind
    • each configuration file has 3 parts
      • metadata
      • specification
      • status: automatically generated by k8s (desired ==? actual) self-healing feature
        • k8s gets this status from etcd!
    • Store the YAML config file with your code (git repository)
    • template also has its own metadata and spec: applies to Pod
      • blueprint for a Pod
  • Connecting the components

    • labels & selectors
    • metadata contains labels, spec contains selectors
      • metadata defines a key-value label which is matched by the spec selector for the pod
      • the pod gets the label through the spec.template blueprint
      • a pod belongs to a deployment by its label
      • deployment labels are connected to the service's spec.selector
      • the service's spec.selector uses the deployment's metadata labels to make the connection to the deployment (pods)
    • service exposes a port (accessible) → forwards to the service targetPort → deployment's containerPort
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fe-nginx-deployment
  # NOTE label for the deployment, used by the service to connect to the deployment (pods)
  labels:
    app: fe-nginx
spec:
  replicas: 2
  selector:
    # NOTE allows the `Deployment` to find and manage Pods with this matching label
    matchLabels:
      app: fe-nginx
  template:
    metadata:
      # NOTE sets the labels for the `Pods` created by the Deployment
      labels:
        app: fe-nginx
    spec:
      containers:
      - name: fe-nginx
        image: jnuho/fe-nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      #imagePullSecrets:
      #- name: regcred
k describe svc serviceName
    Endpoints: podIp:targetPort
    : this Endpoint IP matches the pod IP

k get pod podName -o wide
k get deploy nginx-depl -o yaml
    check the status is automatically generated by k8s
    retrieve result of status from etcd
  • Complete Application setup with Kubernetes components

    • mongodb (internal service; no external requests), mongo-express (web app)
    • mongo-express gets the URL, user, and password from a ConfigMap and Secret to connect to mongodb
    • mongo-express is accessible from the browser: NodeIp:PortOfExternalService
    • 2 Deployment / Pod
    • 2 Service
    • 1 ConfigMap
    • 1 Secret
    • Browser → mongo-express external service → mongo-express pod → mongodb internal service → mongodb pod
  • Namespace

    • kube-system: system processes, master and kubectl processes
    • kube-public: publicly accessible data, configmap, that contains cluster information k cluster-info
    • kube-node-lease: heartbeats of node, determines the availability of a node
    • default: resources you create
kubectl create namespace myNameSpace
  • Group applications into namespaces

    • e.g. database/ logging / monitoring/ nginx-ingress/elastic stack
    • no need to create namespaces for smaller projects with about 10 users
    • create namespaces if there are many teams, same application(same name)
    • staging and development environments can reuse shared resources deployed in certain namespaces
    • blue/green deployment using namespaces (Production green/blue)
    • access and resource limits on namespaces
  • Each NS must define own ConfigMap/Secret

    • suppose projectA, projectB namespaces
    • both namespace must have ConfigMap with exact same content
apiVersion: v1
kind: ConfigMap
metadata:
    name: mysql-configmap
data:
    db_url: mysql-service.database
  • Components, which can't be created within a namespace
    • persistent volume
    • node
k api-resources --namespaced=false
k api-resources --namespaced=true
  • You can change the active namespace with kubens
    • without a need to k get pod -n myNameSpace
brew install kubectx
kubens
kubens my-namespace
    Active namespace is "my-namespace"
  • Helm explained

    • package manager for Kubernetes (like apt)
  • Helm for Elastic Search Stack for Logging

  • requirement: yamls for
    • Stateful Set
    • ConfigMap
    • Secret
    • K8s User with permissions
    • Services
  • First, Helm provides packages (charts) of those YAMLs that can be used by anyone
    • install bundle of yamls
    • create your own helm charts with helm
    • push them to helm repository
    • download and use existing ones
    • e.g. database apps, monitoring apps(prometheus)
    • sharing helm charts is available
    • you can download and reuse that configuration
    • helm search <keyword>
    • public/private helm registries
  • Second, helm as a templating engine

    • for CI/CD: in your build, you can replace the values on the fly
    • Define a template with common attributes for many configurations
      • a common blueprint defined as a template YAML config
      • placeholders for dynamic values are filled in from values.yaml
        • values are injected into the template files
    • Same applications across different environments
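A tiny sketch of the templating idea (file names and values are illustrative): a values.yaml such as

```yaml
# values.yaml
appName: my-app
replicaCount: 2
image:
  repository: nginx
  tag: "1.25"
```

is injected into placeholders in a template file, which the Helm templating engine renders at install/upgrade time:

```yaml
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
      - name: {{ .Values.appName }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```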
  • Helm chart structure

tree

mychart/            # name of the chart
    Chart.yaml      # meta info about the chart: name, version, dependencies
    values.yaml     # values for the template files (can be overridden)
    charts/         # chart dependencies
    templates/      # template files
    README.md
    LICENSE

helm install <chartname>

# override values.yaml default values by:
# values.yaml + my-value.yaml => result
helm install --values=my-values.yaml <chartname>
helm install --set version=2.0.0

# BETTER TO HAVE my-values.yaml and values.yaml instead of `set`
  • helm release management

  • version2 vs. version3

  • version2:
    • client (cli)
    • server (tiller)
    • helm install (cli) → tiller executes the yaml and deploys to the cluster
    • helm install/upgrade/rollback → tiller creates history with revisions
      • revision 1,2... history is stored
      • downsides: tiller has too much power inside of k8s cluster
        • security risk
  • version3: removed tiller for such security risk

  • Volumes

    • Persistent Volume
    • Persistent Volume Claim
    • Storage class
  • need for volumes

    • k8s no data persistence out of the box!
    • requires storage that doesn't depend on the pod lifecycle
    • storage must be available on all nodes
    • need to survive even if the node/cluster crashes; highly available
      • outside of cluster?
    • writes/reads to directory
  • Persistent volume

    • a cluster resource used to store data
    • defined by YAML
    • spec: how much storage
    • needs actual physical storage:
    • the persistent volume does not care about your actual storage
      • a PV simply provides an interface to the actual storage
      • it's like an external plugin to your cluster
    • could be hybrid: multiple storage types
      • one application uses local disk / NFS server / cloud storage, etc.
    • in the YAML for a PV, specify in spec which physical storage to use
  • Check the types of volumes in the k8s documentation

  • gcp cloud storage

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
  labels:
    failure-domain.beta.kubernetes.io/zone: us-central1-a__us-central1-b
spec:
  capacity:
    storage: 400Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4
  • local storage
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
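Applications do not reference a PersistentVolume directly; they request storage through a PersistentVolumeClaim, which a Pod then mounts. A minimal sketch (storage class matches the local PV above; names and sizes are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  containers:
  - name: mongodb
    image: mongo:6.0
    volumeMounts:
    - name: data
      mountPath: /data/db    # where the container writes its data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mongodb-pvc
```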
  • Persistent Volumes are NOT namespaced

    • PVs live outside of the namespaces
    • accessible to the whole cluster
  • Local vs. Remote volume types

    • each volume type has it's own use case!
    • local volume types violate requirements 2 and 3 for data persistence:
      • (X) Being tied to 1 specific node
      • (X) Surviving cluster crashes
  • For DB persistence, almost always use REMOTE STORAGE!!!

  • StatefulSet

    • What is it and why it is used?
    • how StatefulSet differs from Deployment?
    • specifically for stateful applications!
      • e.g. stateful apps: database apps
      • e.g. stateless apps: don't keep record of state, each request is completely new
  • Stateful and stateless applications example

    • nodejs(stateless) + mongodb(stateful)
    • http request (doesn't depend on previous data to handle) → nodejs
      • handle it based on the payload of request
      • update/query from mongodb app
    • mongodb update data based on previous state / query data
      • depends on most up-to-date data/state
  • Stateless apps are deployed using Deployment component

  • Stateful apps are deployed using StatefulSet component
  • Both Deployment and StatefulSet manage pods based on container specification!

  • K8s Services

    • ClusterIP
    • NodePort
    • LoadBalancer
    • Headless
  • each pod has its own ip address

  • pods are ephemeral - they are destroyed frequently!
  • service provides stable ip address.
  • service does load balancing into pods
  • loose coupling
  • within & outside cluster

  • ClusterIP Services

    • default type
    • e.g. microservices app deployed
      • in pod : app container(3000)+sidecar container (9000: collects logs)
      • pod assigned an IP in the node's range: 10.2.2.5 (started on node2)
      • where node1: 10.2.1.x
      • where node2: 10.2.2.x
      • where node3: 10.2.3.x
      • k get pod -o wide to check pod ip
      • Ingress → Service (ClusterIP) → Pods
      • Service's spec.selector : which Pods it forwards to
      • Service's spec.ports.targetPort : which port it forwards to
      • Pods are identified via selectors
        • key-value pairs for the selector
        • Pod: spec.template.metadata.labels
        • Service: spec.selector
          • the service forwards requests to matching Pods
      • an Endpoints object is CREATED with the same name as the Service
      • it keeps track of which Pods are the members/endpoints of the Service
      • each time pods recreated, Endpoints are also updated to track that
      • Service spec.ports.port: can be arbitrary
      • Service spec.ports.targetPort: MUST MATCH deployment's Pod containerPort
      • Multi-port services (two container specified in deployment.yaml)
        • mongo-db application 27017
        • mongo-db exporter (Prometheus) 9216
          • Prometheus scrapes data from mongodb-exporter via port 9216
      • service have to handle two requests via two ports 27017, 9216
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-one
spec:
  replicas: 2
  # ...
  template:
    metadata:
      labels:
        app: microservice-one
    spec:
      containers:
      - name: ms-one
        image: my-private-repo/ms-one:latest
        ports:
        - containerPort: 3000
      - name: log-collector
        image: my-private-repo/log-collector:latest
        ports:
        - containerPort: 9000
  • Multi-port service
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - name: mongodb
      protocol: TCP
      port: 27017
      targetPort: 27017
    - name: mongodb-exporter
      protocol: TCP
      port: 9216
      targetPort: 9216
  • Headless Service

    • Client wants to communicate with 1 specific Pod directly
    • Pods want to talk directly with specific Pod
    • So, not randomly selected (no Load balancing)
    • Use case: Stateful applications
      • such as databases(mysql,mongodb, elasticsearch)
      • Pods replicas are not identical
      • Only Master Pod is allowed to write to DB (write/read)
      • Worker Pods are for only (read)
      • Worker Pods must connect to Master Pod to sync their data after Master Pods made changes to the data
      • When a Worker Pod is created, it must clone the most recent Worker Pod
    • Client need to figure out IP addresses of each Pod
      • Option 1: API call to k8s API Server?
        • list of pods and ip addresses
        • too tied to k8s api and inefficient
      • Option 2: DNS lookup
        • k8s allows client to discover Pod ip addresses
        • DNS lookup for service - returns single IP address which belongs to a Service (ClusterIP address)
      • BUT setting spec.clusterIP to None returns the Pod IP addresses instead!!!
  • Define headless Service:

    • NO CLUSTER IP address is assigned!!!
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service-headless
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
  • Two services exist alongside each other

    • mongodb-service
    • mongodb-service-headless
  • Use a headless service when a client needs to write to the mongodb Master Pod, or when Pods need to talk to each other directly for data synchronization

k get svc

NAME                                         TYPE             CLUSTER-IP EXTERNAL-IP PORT(S)
mongodb-service-headless ClusterIP    None             <none>            27017/TCP
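To see the difference, a DNS lookup from inside the cluster against the headless Service should return the individual Pod IPs rather than a single ClusterIP (a sketch using a throwaway busybox Pod; the Service name matches the example above, the IPs shown are illustrative):

```sh
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup mongodb-service-headless
# returns one A record per backing Pod, e.g. 10.2.1.7, 10.2.2.9, ...
```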
  • 3 Service type
    • ClusterIP: default, internal service, only accessible within cluster
      • no external traffic can directly address the ClusterIP service
    • NodePort: accessible on a static port on each worker node in cluster
      • External traffic has access to fixed port on each Worker Node
      • nodePort range should be: 30000 - 32767
      • http://ip-address-worker-node:nodePort
      • When you create NodePort Service, ClusterIP Service is also automatically created because nodePort has to be routed to port of Service
        • nodePort → port
        • e.g. port:3200, nodePort:30008
          • cluster-ip:3200
          • node-ip:30008
    • LoadBalancer
      • LoadBalancer(Cloud providers')
      • AWS, GCP, AZURE
      • When Service of type LoadBalancer is created,
        • NodePort and ClusterIP Service are created automatically!
        • nodeport is not accessible directly from external browser
          • instead via LoadBalancer!!!
apiVersion: v1
kind: Service
metadata:
  name: ms-service-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: microservice-one
  ports:
    - protocol: TCP
      port: 3200
      targetPort: 3000
      # only via LoadBalancer though!
      nodePort: 30010
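For comparison, the NodePort variant described in the bullets above (port 3200, nodePort 30008) would look roughly like this (the Service name is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ms-service-nodeport
spec:
  type: NodePort
  selector:
    app: microservice-one
  ports:
    - protocol: TCP
      port: 3200          # ClusterIP port inside the cluster
      targetPort: 3000    # containerPort of the Pod
      nodePort: 30008     # static port opened on every worker node (30000-32767)
```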
  • LoadBalancer > NodePort > ClusterIP

    • LoadBalancer Service is an extension of NodePort Service
    • NodePort Service is an extension of ClusterIP Service
  • Wrap-up

    • NodePort Service NOT for external connection! TEST-ONLY
    • two common practice:
      • Ingress → Service (ClusterIP)
      • LoadBalancer → Service (ClusterIP)

Kubernetes Networking

  • Medium Post

  • Kubernetes

    • a container orchestration platform that automates the processes of deploying, managing, and scaling containerized applications
    • Kubernetes clusters: group and manage Linux container hosts as a cluster
      • applicable on-premise and on public/private/hybrid clouds
      • a good fit for cloud-native applications that need rapid scaling
    • useful for optimization when developing cloud apps
    • can schedule and run containers on clusters of physical or virtual machines
    • cloud-native apps can be built with 'Kubernetes patterns', using Kubernetes as the runtime platform
    • additional capabilities:
      • orchestrate containers across multiple hosts
      • make better use of hardware by maximizing the resources needed to run enterprise apps
      • control and automate application deployments and updates
      • mount and add storage to run stateful apps
      • scale containerized applications and their resources
    • Kubernetes is used most effectively in combination with other projects:
      • Registry: Docker Registry
      • Networking
      • Telemetry
      • Security: LDAP, SELinux,RBAC, OAUTH with multitenancy layers
      • Automation
      • Services
  • Kubernetes Architecture

  • TERMS

    • Control Plane
      • the set of processes that control the Kubernetes nodes
      • all task assignments originate here
    • Node: a machine that performs the tasks assigned by the control plane
    • Pod: one or more containers deployed to a single node
      • containers in a pod share an IP address, IPC (inter-process communication), hostname, and resources
    • Replication controller: controls how many identical copies of a pod should run on the cluster
    • Service: decouples work definitions from the Pods.
      • Kubernetes service proxies automatically route service requests to the right pod
      • no matter where it moves in the cluster, or even if it has been replaced
    • Kubelet: this service runs on each node, reads the container manifests, and ensures the defined containers are started and running
  • How it works

    • Cluster: a running Kubernetes deployment is called a cluster.
      • a cluster can be divided into two parts: the control plane and the compute machines (nodes).
        • Control Plane + Worker nodes
      • each node is its own Linux environment and can be a physical or virtual machine.
      • each node runs pods, which are made up of containers.
      • the control plane manages the state of the cluster
        • e.g. which applications are running and which container images they use
        • the compute machines actually run the applications and workloads.
    • Kubernetes runs on top of an operating system and interacts with the pods of containers running on the nodes.
      • the control plane takes commands from an administrator and relays those instructions to the compute machines.
  • Youtube Tutorial (TechWorld with Nana)

- Deployment > ReplicaSet > Pod > Container
    - use kubectl command to manage deployment

```sh
k get pod

k get services

k create deployment nginx-depl --image=nginx
k get deployment
k get pod
k get replicaset

k edit deployment nginx-depl
k get pod
    NAME                                                    READY     STATUS        RESTARTS     AGE
    nginx-depl-8475696677-c4p24     1/1         Running     0                    3m33s
    mongo-depl-5ccf565747-xtp89     1/1         Running     0                    2m10s

k logs nginx-depl-56cb8b6d7-6z9w6

k exec -it [pod name] -- bin/bash

k exec -it mongo-depl-5ccf565747-xtp89 -- bin/bash
k delete deployment mongo-depl
```
  • microk8s environment
    • https://microk8s.io/docs/getting-started
    • https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s?&_ga=2.260194125.1119864663.1678939258-1273102176.1678684219#1-overview
sudo snap install microk8s --classic

# firewall configuration
# https://webdir.tistory.com/206

sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
su - $USER
microk8s status --wait-ready

vim .bashrc
    alias k='microk8s kubectl'
    alias helm='microk8s helm'

source .bashrc
  • Creating an external service with MicroK8s, Ingress, MetalLB, and the nginx controller

    • reference docs
      • https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
      • https://benbrougher.tech/posts/microk8s-ingress/
      • https://betterprogramming.pub/how-to-expose-your-services-with-kubernetes-ingress-7f34eb6c9b5a
  • Ingress lets Kubernetes receive traffic from outside and route it to internal services

    • define a host, and within that host use sub-routes
    • to route to different services under the same hostname
    • Ingress rules let all traffic come in through a single IP address
    • the Ingress Controller does the actual traffic routing, while the Ingress only defines the rules
  • Build the image → push to Dockerhub

# build the image
cd learn/yaml/helloworld/docker
docker build -t server-1:latest -f build/Dockerfile .
docker tag server-1 jnuho/server-1
docker push jnuho/server-1
  • simple-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hellok8s-deployment
  labels:
    app: hellok8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hellok8s
  template:
    metadata:
      labels:
        app: hellok8s
    spec:
      containers:
      - name: hellok8s
        image: jnuho/server-1
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: hellok8s-service
  # Use a specific ip from the metallb range
  annotations:
    metallb.universe.tf/loadBalancerIPs: 172.16.6.100
spec:
  type: LoadBalancer
  selector:
    app: hellok8s
  ports:
  - port: 8081
    targetPort: 8081
k apply -f simple-service.yaml
k get svc
    NAME                             TYPE                     CLUSTER-IP            EXTERNAL-IP        PORT(S)                    AGE
    kubernetes                 ClusterIP            10.152.183.1        <none>                 443/TCP                    5d19h
    hellok8s-service     LoadBalancer     10.152.183.58     <none>                 8081:31806/TCP     114s
# check whether the IPs (.100-.105) are already in use
ping 172.16.6.100

microk8s enable metallb:172.16.6.100-172.16.6.105

# the LoadBalancer service IP is assigned by metallb
# access the application at 172.16.6.100:8081

k get svc
    NAME                             TYPE                     CLUSTER-IP            EXTERNAL-IP        PORT(S)                    AGE
    kubernetes                 ClusterIP            10.152.183.1        <none>                 443/TCP                    5d19h
    hellok8s-service     LoadBalancer     10.152.183.58     172.16.6.100     8081:31806/TCP     114s

# access the application in a browser: 172.16.6.100:8081
curl 172.16.6.100:8081

↑ Back to top

VPC-CNI

https://www.youtube.com/watch?v=RBE3yk2UlYA

  • Each node has an eni that can attach multiple prefixes.
  • In certain cloud environments, such as AWS, ENIs can be associated with multiple prefixes.
  • This means you can assign IP addresses from different subnets to the same ENI.

VPC-CNI (Virtual Private Cloud Container Network Interface) is a Kubernetes networking plugin developed by Amazon Web Services (AWS) that integrates Kubernetes pods with the Amazon VPC networking model. It is specifically designed to provide high performance and flexibility for Kubernetes clusters running on AWS.

Key Features of VPC-CNI

  1. Pod Networking within VPC: The VPC-CNI plugin allows Kubernetes pods to receive IP addresses from the VPC's IP address space, enabling direct integration with AWS networking services and features.

  2. Elastic Network Interfaces (ENIs): VPC-CNI uses ENIs to assign IP addresses to pods. Each node in the Kubernetes cluster has one or more ENIs attached to it, and these ENIs provide IP addresses that are assigned to pods running on that node.

  3. Scalability: By leveraging ENIs and secondary IP addresses, VPC-CNI can scale to support large numbers of pods per node, limited only by the instance type and the number of ENIs it supports.

  4. Security Groups: Pods can be associated with AWS security groups, providing fine-grained control over network traffic to and from pods.

  5. AWS Integration: VPC-CNI integrates seamlessly with other AWS services, such as Elastic Load Balancers (ELBs), AWS PrivateLink, and VPC Peering.

  6. Performance: Direct integration with the VPC network ensures high performance, low latency, and high throughput for pod-to-pod and pod-to-service communication.

How VPC-CNI Works

  1. Node Initialization: When a new node joins the Kubernetes cluster, the VPC-CNI plugin automatically attaches ENIs to the node. These ENIs are allocated from the subnets defined in the VPC.

  2. IP Address Management: The plugin manages a pool of secondary IP addresses assigned to each ENI. When a pod is scheduled on a node, it receives an IP address from this pool.

  3. Pod Networking: Each pod is assigned an IP address from the VPC's IP address space, enabling it to communicate with other pods and services within the VPC using native VPC networking capabilities.

  4. ENI Allocation: VPC-CNI dynamically allocates and deallocates ENIs based on the number of pods running on a node. If a node requires more IP addresses than an ENI can provide, the plugin attaches additional ENIs to the node.

Configuration and Usage

To use VPC-CNI with your Amazon EKS (Elastic Kubernetes Service) cluster, you typically follow these steps:

  1. Create EKS Cluster: Use the AWS Management Console, CLI, or SDK to create an EKS cluster.

  2. Configure VPC and Subnets: Ensure your VPC and subnets are configured to meet the requirements for your cluster, including IP address ranges and route tables.

  3. Install VPC-CNI Plugin: The VPC-CNI plugin is usually installed by default when you create an EKS cluster. However, you can also install or upgrade it manually using kubectl and AWS-provided manifests.

  4. Manage ENI Configurations: You can configure the VPC-CNI plugin using Kubernetes ConfigMaps to set parameters like ENI allocation, IP address limits, and logging.

Example of ConfigMap for VPC-CNI

Here's an example of a ConfigMap used to configure the VPC-CNI plugin:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-node
  namespace: kube-system
data:
  enable-ipv4: "true"
  enable-ipv6: "false"
  eni-config: "default"
  eni-max-pods: "110"
  log-level: "DEBUG"
  ...

The VPC-CNI plugin for Kubernetes on AWS provides a robust and scalable networking solution that leverages native VPC capabilities. By using ENIs and integrating directly with AWS networking features, it offers high performance, security, and flexibility for managing Kubernetes pod networking within an Amazon VPC.

↑ Back to top

References

↑ Back to top