
Multi-Master Cluster Deployment

info
  • Minimum node requirements: 3 nodes.
  • Master nodes are deployed redundantly: the cluster continues to operate and run workloads normally even if one Master node fails.
  • Suitable for Standard Edition and Professional Edition clusters, typically used in environments with three microservice nodes.

This document covers Kubernetes cluster deployment based on CentOS 7.9 / Debian 12.

Server IP        Host Role
192.168.10.20    Kubernetes 01 (Master, Node)
192.168.10.21    Kubernetes 02 (Master, Node)
192.168.10.22    Kubernetes 03 (Master, Node)

Server Requirements

  • No network policy restrictions between cluster servers
  • Each node's hostname must be unique and cannot be changed after the cluster is deployed
  • The primary NIC MAC address must not be duplicated (run ip link to check)
  • product_uuid must not be duplicated (run cat /sys/class/dmi/id/product_uuid to check)
  • Port 6443 must not be in use (run nc -vz 127.0.0.1 6443 to verify)
  • swap memory must be disabled (run swapoff -a to temporarily disable it, and comment out the swap partition entry in /etc/fstab)
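The requirements above can be collected into a small read-only pre-flight sketch (the `preflight` helper is illustrative, not part of the installation package; run it on every node and compare the printed values across nodes):

```shell
# Read-only pre-flight sketch for the requirements above; it only reports, never changes anything.
preflight() {
  echo "hostname: $(hostname)"                                   # must be unique per node
  ip link 2>/dev/null | awk '/link\/ether/ {print "mac: " $2}'   # must be unique per node
  if [ -r /sys/class/dmi/id/product_uuid ]; then
    echo "product_uuid: $(cat /sys/class/dmi/id/product_uuid)"   # must be unique per node
  else
    echo "product_uuid: unreadable (run as root)"
  fi
  if swapon --show 2>/dev/null | grep -q .; then
    echo "swap: ENABLED - disable it"
  else
    echo "swap: off"
  fi
}
preflight
```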

Configure HOSTS

Add the following hosts entries to each node in the Kubernetes cluster, pointing k8s-master to the three Master nodes:

cat >> /etc/hosts << EOF
192.168.10.20 k8s-master
192.168.10.21 k8s-master
192.168.10.22 k8s-master
EOF
  • Every node in the cluster (including nodes added later) must have this configuration.
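To confirm the entries were applied on a node, the hosts file can be checked with a small sketch (the `check_hosts` helper is illustrative):

```shell
# Sketch: confirm every Master IP maps to k8s-master in a hosts file.
check_hosts() {
  # $1: hosts file content; remaining args: expected Master IPs
  content="$1"; shift
  for ip in "$@"; do
    printf '%s\n' "$content" | grep -q "^$ip k8s-master$" || { echo "missing $ip"; return 1; }
  done
  echo ok
}
# Real usage on a node:
# check_hosts "$(cat /etc/hosts)" 192.168.10.20 192.168.10.21 192.168.10.22
```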

Install CRI Container Runtime

Required on all nodes of the Kubernetes cluster.

  1. Download the Kubernetes cluster installation package

    wget https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.35.3/1.35-k8s-amd64-pkg.tar.gz
    • Extract the installation package

      tar xzvf 1.35-k8s-amd64-pkg.tar.gz
    • Verify the contents of the installation package

      ls -l 1.35-k8s-amd64-pkg
      Example output
      -rw-r--r-- 1 root 197121 331360 Apr 14 16:38 calico.yaml
      -rw-r--r-- 1 root 197121 33899693 Apr 13 12:06 containerd-static-2.2.2-linux-amd64.tar.gz
      -rw-r--r-- 1 root 197121 19185064 Apr 13 13:18 crictl-v1.35.0-linux-amd64.tar.gz
      -rw-r--r-- 1 root 197121 28576212 Apr 13 13:22 istio-1.29.1-linux-amd64.tar.gz
      -rw-r--r-- 1 root 197121 72372408 Apr 13 13:20 kubeadm
      -rw-r--r-- 1 root 197121 58601656 Apr 13 13:18 kubectl
      -rw-r--r-- 1 root 197121 58110244 Apr 13 13:20 kubelet
      -rw-r--r-- 1 root 197121 11373900 Apr 13 13:17 nerdctl-2.2.2-linux-amd64.tar.gz
      -rw-r--r-- 1 root 197121 12741088 Apr 13 12:12 runc.amd64
  2. Install containerd

    Run these commands; the first changes into the extracted 1.35-k8s-amd64-pkg directory

    cd 1.35-k8s-amd64-pkg
    tar -zxvf containerd-static-2.2.2-linux-amd64.tar.gz
    mv -f bin/* /usr/local/bin/
  3. Create the containerd configuration directory

    mkdir /etc/containerd
  4. Install runc

    Run these commands from within the 1.35-k8s-amd64-pkg directory

    mv runc.amd64 /usr/local/bin/runc
    chmod +x /usr/local/bin/runc
    runc -v
  5. Generate the containerd configuration file and modify the relevant parameters

    containerd config default > /etc/containerd/config.toml

    sed -i \
    -e 's|SystemdCgroup =.*|SystemdCgroup = true|g' \
    -e 's|bin_dirs =.*|bin_dirs = ["/usr/local/kubernetes/cni/bin"]|' \
    -e 's|sandbox =.*|sandbox = "127.0.0.1:5000/pause:3.10.1"|' \
    -e 's|^root =.*|root = "/data/containerd"|' \
    /etc/containerd/config.toml
    • Verify that the configuration has taken effect

      grep "SystemdCgroup\|bin_dirs\|sandbox =\|^root =" /etc/containerd/config.toml
      Example output
      root = "/data/containerd"
      sandbox_image = "127.0.0.1:5000/pause:3.10.1"
       bin_dirs = ["/usr/local/kubernetes/cni/bin"]
      SystemdCgroup = true
  6. Configure the containerd systemd service file

    cat > /etc/systemd/system/containerd.service <<EOF
    [Unit]
    Description=containerd
    After=network-online.target
    Wants=network-online.target
    [Service]
    Type=notify
    ExecStart=/usr/local/bin/containerd --config /etc/containerd/config.toml
    LimitNOFILE=1024000
    LimitNPROC=infinity
    LimitCORE=0
    TimeoutStartSec=0
    Delegate=yes
    KillMode=process
    Restart=on-failure
    StartLimitBurst=3
    StartLimitInterval=60s
    [Install]
    WantedBy=multi-user.target
    EOF
  7. Start containerd and enable it on boot

    systemctl daemon-reload && systemctl restart containerd && systemctl enable containerd
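After starting the service, its state can be interpreted with a small sketch (the `containerd_state` helper is illustrative; it only classifies `systemctl is-active` output):

```shell
# Post-start check sketch: classify the output of `systemctl is-active containerd`.
containerd_state() {
  case "$1" in
    active) echo "containerd: running" ;;
    *) echo "containerd: NOT running ($1)" ;;
  esac
}
# On a real node: containerd_state "$(systemctl is-active containerd)"
containerd_state active
```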

Install Required K8S Commands

Installs crictl / kubeadm / kubelet / kubectl. Required on all nodes of the Kubernetes cluster.

  1. Create the command installation directory

    mkdir -p /usr/local/kubernetes/bin
  2. Move command binaries to the installation directory

    Run these commands from within the 1.35-k8s-amd64-pkg directory

    tar -zxvf crictl-v1.35.0-linux-amd64.tar.gz -C /usr/local/kubernetes/bin

    \cp -rvf ./{kubeadm,kubelet,kubectl} /usr/local/kubernetes/bin/
  3. Grant executable permissions to the command binaries

    chmod +x /usr/local/kubernetes/bin/*
    chown $(id -un):$(id -gn) /usr/local/kubernetes/bin/*
  4. Configure systemd to manage kubelet

    cat > /etc/systemd/system/kubelet.service <<\EOF
    [Unit]
    Description=kubelet: The Kubernetes Node Agent
    Documentation=https://kubernetes.io/docs/home/
    Wants=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/usr/local/kubernetes/bin/kubelet
    Restart=always
    StartLimitInterval=0
    RestartSec=10

    [Install]
    WantedBy=multi-user.target
    EOF
  5. Configure kubeadm drop-in parameters for kubelet

    mkdir -p /etc/systemd/system/kubelet.service.d

    cat > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<\EOF
    # Note: This dropin only works with kubeadm and kubelet v1.11+
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
    # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/local/kubernetes/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
    EOF
  6. Start kubelet and enable it on boot

    systemctl daemon-reload && systemctl restart kubelet && systemctl enable kubelet
    • kubelet remains in a waiting state until kubeadm init / kubeadm join completes. There is no need to check the service status after restart; it will start automatically once the cluster is initialized.
  7. Add the K8S command path to the environment variables

    cat > /etc/profile.d/kubernetes.sh <<'EOF'
    export PATH=/usr/local/kubernetes/bin/:$PATH
    EOF
    source /etc/profile.d/kubernetes.sh
  8. Configure the crictl runtime endpoint

    crictl config runtime-endpoint unix:///run/containerd/containerd.sock
    • Verify the configuration

      crictl config --get runtime-endpoint
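The `crictl config` command persists the setting to /etc/crictl.yaml. Assuming crictl defaults, the file should contain roughly:

```yaml
runtime-endpoint: "unix:///run/containerd/containerd.sock"
```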

Install nerdctl (Optional)

nerdctl is a Docker-compatible CLI for containerd, providing a familiar interface for managing containers, images, and other resources.

  1. Extract and install nerdctl

    Run these commands from within the 1.35-k8s-amd64-pkg directory

    tar -zxvf nerdctl-2.2.2-linux-amd64.tar.gz
    rm -f containerd-rootless*.sh
    mv nerdctl /usr/local/kubernetes/bin/
  2. Configure the command alias and apply it

    echo 'alias nerdctl="nerdctl -n k8s.io"' >> ~/.bashrc

    source ~/.bashrc
    • Run nerdctl -v. Output of nerdctl version 2.2.2 confirms a successful installation.

Install Environment Dependencies

Required on all nodes of the Kubernetes cluster.

  1. Install socat / conntrack

    # CentOS / RedHat
    yum install -y socat conntrack-tools

    # Debian / Ubuntu
    apt install -y socat conntrack
  2. Verify all required commands are available

    crictl --version && containerd --version && runc -v | grep version && \
    kubeadm version && kubelet --version && kubectl version --client=true && \
    socat -V | grep 'socat version' && conntrack --version && echo ok || echo error
    • A final output of ok indicates success; error means one or more commands are missing and must be installed based on the error output.
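The same check can also be written as a loop that reports exactly which commands are missing (the `check_cmds` helper is illustrative):

```shell
# Loop-based variant of the verification one-liner: report any missing commands.
check_cmds() {
  missing=""
  for cmd in "$@"; do
    command -v "$cmd" > /dev/null 2>&1 || missing="$missing $cmd"
  done
  if [ -z "$missing" ]; then echo ok; else echo "missing:$missing"; fi
}
# On a cluster node:
# check_cmds crictl containerd runc kubeadm kubelet kubectl socat conntrack
```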

Configure Kernel Settings

Required on all nodes of the Kubernetes cluster.

  1. Persist the required kernel modules on boot

    cat > /etc/modules-load.d/kubernetes.conf <<EOF
    overlay
    br_netfilter
    ip_vs
    ip_vs_rr
    ip_vs_wrr
    ip_vs_sh
    EOF
  2. Load the kernel modules immediately

    modprobe overlay
    modprobe br_netfilter
    modprobe ip_vs
    modprobe ip_vs_rr
    modprobe ip_vs_wrr
    modprobe ip_vs_sh
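Whether the modules are actually loaded can be checked against `lsmod` output with a small sketch (the `mods_loaded` helper and sample data are illustrative):

```shell
# Verification sketch: confirm each required module appears in `lsmod` output.
mods_loaded() {
  # $1: `lsmod` output; remaining args: module names
  out="$1"; shift
  for m in "$@"; do
    printf '%s\n' "$out" | grep -q "^$m " || { echo "missing $m"; return 1; }
  done
  echo ok
}
# On a real node:
# mods_loaded "$(lsmod)" overlay br_netfilter ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
```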
  3. Append kernel parameters and apply them

    cp -rfp /etc/sysctl.d/99-sysctl.conf /etc/sysctl.d/99-sysctl.conf.backup-$(date +%Y%m%d%H%M%S)
    cat >> /etc/sysctl.d/99-sysctl.conf <<EOF
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward = 1
    vm.max_map_count = 262144

    # MD Config
    net.nf_conntrack_max = 524288
    net.ipv4.tcp_max_tw_buckets = 5000
    net.ipv4.tcp_window_scaling = 1
    net.ipv4.tcp_rmem = 8192 87380 16777216
    net.ipv4.tcp_wmem = 8192 65536 16777216
    net.ipv4.tcp_max_syn_backlog = 32768
    net.core.netdev_max_backlog = 32768
    net.core.netdev_budget = 600
    net.core.somaxconn = 32768
    net.core.wmem_default = 8388608
    net.core.rmem_default = 8388608
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_timestamps = 1
    net.ipv4.tcp_synack_retries = 2
    net.ipv4.tcp_syn_retries = 2
    net.ipv4.tcp_tw_recycle = 0
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_fin_timeout = 2
    net.ipv4.tcp_mem = 8388608 12582912 16777216
    net.ipv4.ip_local_port_range = 1024 65000
    net.ipv4.tcp_max_orphans = 16384
    net.ipv4.tcp_keepalive_intvl = 10
    net.ipv4.tcp_keepalive_probes = 3
    net.ipv4.tcp_keepalive_time = 600
    net.netfilter.nf_conntrack_tcp_be_liberal = 0
    net.netfilter.nf_conntrack_tcp_max_retrans = 3
    net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
    net.netfilter.nf_conntrack_tcp_timeout_established = 86400
    fs.inotify.max_user_watches=10485760
    fs.inotify.max_user_instances=10240
    EOF

    sysctl --system

K8S Environment Image Preparation

Required on all nodes of the Kubernetes cluster.

  1. Download and import offline images

    wget https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.35.3/kubeadm-1.35.3-images-amd64.tar.gz
    • Extract and import the images

      gunzip -d kubeadm-1.35.3-images-amd64.tar.gz

      ctr -n k8s.io image import kubeadm-1.35.3-images-amd64.tar
    • After import, run crictl images to verify

      Example output
      IMAGE TAG IMAGE ID SIZE
      127.0.0.1:5000/cni v3.31.4 c433a27dd94ce 72.2MB
      127.0.0.1:5000/coredns v1.13.1 aa5e3ebc0dfed 23.6MB
      127.0.0.1:5000/etcd 3.6.6-0 0a108f7189562 23.6MB
      127.0.0.1:5000/kube-apiserver v1.35.3 0f2b96c93465f 27.6MB
      127.0.0.1:5000/kube-controller-manager v1.35.3 0eb506280f9bc 23MB
      127.0.0.1:5000/kube-controllers v3.31.4 ff033cc89dab5 54MB
      127.0.0.1:5000/kube-proxy v1.35.3 53ed370019059 25.7MB
      127.0.0.1:5000/kube-scheduler v1.35.3 87c9b0e4f80d3 17.1MB
      127.0.0.1:5000/node v3.31.4 e6536b93706ed 160MB
      127.0.0.1:5000/pause 3.10.1 cd073f4c5f6a8 320kB
      127.0.0.1:5000/pilot 1.29.1 cd8219a164d79 73.4MB
      127.0.0.1:5000/proxyv2 1.29.1 599aea7eee05d 89.1MB
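The presence of the key images can also be checked in a script (the `images_present` helper and sample data are illustrative):

```shell
# Sketch: confirm expected local-registry images appear in `crictl images` output.
images_present() {
  # $1: `crictl images` output; remaining args: expected image names
  out="$1"; shift
  for img in "$@"; do
    printf '%s\n' "$out" | grep -q "^127.0.0.1:5000/$img " || { echo "missing $img"; return 1; }
  done
  echo ok
}
# On a real node:
# images_present "$(crictl images)" pause etcd coredns kube-apiserver kube-proxy
```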

Initialize the First Master Node

Perform operations on Kubernetes 01 only.

  1. Initialize the Master Node

    kubeadm init --control-plane-endpoint "k8s-master:6443" --upload-certs --cri-socket unix:///var/run/containerd/containerd.sock -v 5 --kubernetes-version=1.35.3 --image-repository=127.0.0.1:5000 --pod-network-cidr=10.244.0.0/16
    • After successful initialization, the output will contain two types of kubeadm join commands. Save this output:
      • The command with --control-plane --certificate-key parameters: used for adding other Master nodes to the cluster.
      • The command without those parameters: used for adding worker nodes to the cluster.
    • If initialization fails, run kubeadm reset -f to reset and retry.
    Output example
    You can now join any number of control-plane nodes by running the following command on each as root:
    kubeadm join k8s-master:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07

    Then you can join any number of worker nodes by running the following on each as root:
    kubeadm join k8s-master:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
  2. Expand the NodePort available port range

    sed -i '/- kube-apiserver/a\ \ \ \ - --service-node-port-range=1024-32767' /etc/kubernetes/manifests/kube-apiserver.yaml
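What the sed does can be sanity-checked against a two-line sample snippet before touching the real manifest (sample data only; GNU sed is assumed):

```shell
# Demonstration of the NodePort-range sed on a sample snippet, not the real manifest.
sample='    - kube-apiserver
    - --advertise-address=192.168.10.20'
patched=$(printf '%s\n' "$sample" | sed '/- kube-apiserver/a\ \ \ \ - --service-node-port-range=1024-32767')
printf '%s\n' "$patched"
```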
  3. Configure kubectl Access Credentials on the Master Node

    echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile.d/kubernetes.sh
    source /etc/profile.d/kubernetes.sh
  4. Increase the Pod limit on the Master node

    echo "maxPods: 300" >> /var/lib/kubelet/config.yaml
    systemctl restart kubelet
  5. Allow the Master node to schedule Pods

    kubectl taint node $(kubectl get node | grep control-plane | awk '{print $1}') node-role.kubernetes.io/control-plane:NoSchedule-
    • Wait approximately 2 minutes after initialization before running this command.

    • Expected output:

      • First run: Output includes <node-name> untainted, indicating the taint was removed and the node can now schedule Pods.
      • Subsequent runs: If a node shows error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found, the taint on that node no longer exists and no further action is required.
      • Note: If the output is unexpected, system components may not yet be fully ready. Wait a moment and retry.
  6. Configure kubectl command completion (optional)

    If the server has internet access or an internal yum/apt repository, install bash-completion to enable tab completion for kubectl subcommands.

    # Install bash-completion
    Debian-based: apt -y install bash-completion
    CentOS-based: yum -y install bash-completion

    # Add Bash completion configuration
    grep -q "kubectl completion" ~/.bashrc || echo "source <(kubectl completion bash)" >> ~/.bashrc && source ~/.bashrc
  7. Install the Calico Network Plugin

    1. Move calico.yaml to the installation directory

      Run this command from within the 1.35-k8s-amd64-pkg directory

      mv calico.yaml /usr/local/kubernetes/
    2. Modify the configuration file

      • Replace the image registry address with the local registry

        sed -ri 's|image: quay.io/calico|image: 127.0.0.1:5000|g' /usr/local/kubernetes/calico.yaml

        grep image: /usr/local/kubernetes/calico.yaml
        Example output
        image: 127.0.0.1:5000/cni:v3.31.4
        image: 127.0.0.1:5000/cni:v3.31.4
        image: 127.0.0.1:5000/node:v3.31.4
        image: 127.0.0.1:5000/node:v3.31.4
        image: 127.0.0.1:5000/kube-controllers:v3.31.4
      • Configure the Pod CIDR

        sed -ri '/# - name: CALICO_IPV4POOL_CIDR/,/# value: ".*"/ {
        s/# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/
        s/# value: ".*"/ value: "10.244.0.0\/16"/
        }' /usr/local/kubernetes/calico.yaml

        grep -C 2 CALICO_IPV4POOL_CIDR /usr/local/kubernetes/calico.yaml
        Example output
        # chosen from this range. Changing this value after installation will have
        # no effect. This should fall within `--cluster-cidr`.
        - name: CALICO_IPV4POOL_CIDR
        value: "10.244.0.0/16"
        # Disable file logging so `kubectl logs` works.
      • Configure the CNI binary path

        sed -i '/- name: cni-bin-dir/,/type:/s|path: .*|path: /usr/local/kubernetes/cni/bin|' /usr/local/kubernetes/calico.yaml

        grep -C 2 cni-bin-dir /usr/local/kubernetes/calico.yaml
        Example output
        name: host-local-net-dir
        - mountPath: /host/opt/cni/bin
        name: cni-bin-dir
        securityContext:
        privileged: true
        --
        volumeMounts:
        - mountPath: /host/opt/cni/bin
        name: cni-bin-dir
        - mountPath: /host/etc/cni/net.d
        name: cni-net-dir
        --
        path: /proc
        # Used to install CNI.
        - name: cni-bin-dir
        hostPath:
        path: /usr/local/kubernetes/cni/bin
    3. Deploy Calico

      kubectl apply -f /usr/local/kubernetes/calico.yaml
    4. Check Calico service status

      kubectl get pod -n kube-system -l k8s-app=calico-node
      kubectl get pod -n kube-system -l k8s-app=calico-kube-controllers

      Under normal conditions, all Pods should show READY as 1/1 and STATUS as Running:

      NAME READY STATUS RESTARTS AGE
      calico-node-4xbtk 1/1 Running 0 2m
      calico-node-9fzwp 1/1 Running 0 2m
      calico-node-kq7rx 1/1 Running 0 2m
      calico-kube-controllers-7d9d9b7b9c-x2kqt 1/1 Running 0 2m
      • If STATUS remains Init or Pending for an extended period, run kubectl describe pod <pod-name> -n kube-system to view detailed events and diagnose the issue.
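Pods that are not yet Running can be filtered out of the `kubectl get pod` output with a short sketch (the `not_running` helper and sample data are illustrative):

```shell
# Sketch: list Pods whose STATUS column is not "Running".
not_running() {
  # $1: output of `kubectl get pod ...`
  printf '%s\n' "$1" | awk 'NR > 1 && $3 != "Running" { print $1 }'
}
# Real usage:
# not_running "$(kubectl get pod -n kube-system -l k8s-app=calico-node)"
```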

Join Other Master Nodes to the Cluster

Perform operations on Kubernetes 02 / 03 nodes respectively.

  1. Join the Kubernetes cluster

    kubeadm join k8s-master:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
    • The command above is an example. Use the actual Master join command from the first Master node's kubeadm init output.

    • If the command has been lost, regenerate it on the first Master node using the following steps:

      1. Regenerate the join command

        kubeadm token create --print-join-command
      2. Re-upload certificates and generate a new decryption key

        kubeadm init phase upload-certs --upload-certs
      3. Append --control-plane --certificate-key <key from step 2> to the output of step 1

        kubeadm join k8s-master:6443 --token 1b6i9d.0qqufwsjrjpuhkwo --discovery-token-ca-cert-hash sha256:3d28faa49e9cac7dd96aded0bef33a6af1ced57e45f0b12c6190f3d4e1055456 --control-plane --certificate-key 57a0f0e9be1d9f1c74bab54a52faa143ee9fd9c26a60f1b3b816b17b93ecaf6f
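The three regeneration steps can be glued together in a sketch (the `master_join_cmd` helper is illustrative; it assumes the certificate key is the last line that `kubeadm init phase upload-certs --upload-certs` prints):

```shell
# Sketch: assemble the Master join command from the two regenerated pieces.
master_join_cmd() {
  # $1: output of `kubeadm token create --print-join-command`
  # $2: certificate key from `kubeadm init phase upload-certs --upload-certs`
  echo "$1 --control-plane --certificate-key $2"
}
# On the first Master:
# master_join_cmd "$(kubeadm token create --print-join-command)" \
#   "$(kubeadm init phase upload-certs --upload-certs 2>/dev/null | tail -n1)"
```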
  2. Expand the NodePort available port range

    sed -i '/- kube-apiserver/a\ \ \ \ - --service-node-port-range=1024-32767' /etc/kubernetes/manifests/kube-apiserver.yaml
  3. Configure kubectl Access Credentials on the Master Node

    echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile.d/kubernetes.sh
    source /etc/profile.d/kubernetes.sh
  4. Increase the Pod limit on the current node

    echo "maxPods: 300" >> /var/lib/kubelet/config.yaml
    systemctl restart kubelet
  5. Allow the Master node to schedule Pods

    kubectl taint node $(kubectl get node | grep control-plane | awk '{print $1}') node-role.kubernetes.io/control-plane:NoSchedule-
    • Wait approximately 2 minutes after initialization before running this command.

    • Expected output:

      • First run: Output includes <node-name> untainted, indicating the taint was removed and the node can now schedule Pods.
      • Already removed: If a node shows error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found, the taint on that node no longer exists and no further action is required.
      • Note: If the output is unexpected, system components may not yet be fully ready. Wait a moment and retry.

Add Worker Nodes to the Cluster

For example, Flink nodes or subsequently added microservice nodes all join the cluster as worker nodes.

  1. Join the Kubernetes cluster

    kubeadm join 192.168.10.20:6443 --token 3nwjzw.pdod3r27lnqqhi0x \
    --discovery-token-ca-cert-hash sha256:a84445303a0f8249e7eae3059cb99d46038dc275b2dc2043a022de187a1175a2
    • The command above is an example. Use the actual Worker join command from the first Master node's kubeadm init output.
    • If the command has been lost, run kubeadm token create --print-join-command on the Master node to retrieve it.
  2. Increase the Pod limit on the worker node

    echo "maxPods: 300" >> /var/lib/kubelet/config.yaml
    systemctl restart kubelet

Cluster Status Check

  1. Check node and Pod status

    kubectl get pod -n kube-system # READY column should show "1/1"
    kubectl get node # STATUS column should show "Ready"
  2. Download and import the base image (required on all K8s cluster nodes)

    Image download link:

    https://pdpublic.mingdao.com/private-deployment/offline/common/centos7.9.2009.tar.gz

    Distribute the image file centos7.9.2009.tar.gz to all K8s cluster nodes and run the following to import it:

    gunzip -d centos7.9.2009.tar.gz
    ctr -n k8s.io image import centos7.9.2009.tar
  3. Deploy a test application on the first Master node

    cat > /usr/local/kubernetes/test.yaml <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test
      namespace: default
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: test
      template:
        metadata:
          labels:
            app: test
          annotations:
            md-update: '20200517104741'
        spec:
          containers:
            - name: test
              image: centos:7.9.2009
              command:
                - sh
                - -c
                - |
                  echo $(hostname) > hostname.txt
                  python -m SimpleHTTPServer
              resources:
                limits:
                  memory: 512Mi
                  cpu: 1
                requests:
                  memory: 64Mi
                  cpu: 0.01
              volumeMounts:
                - name: tz-config
                  mountPath: /etc/localtime
          volumes:
            - name: tz-config
              hostPath:
                path: /usr/share/zoneinfo/Etc/GMT-8

    ---

    apiVersion: v1
    kind: Service
    metadata:
      name: test
      namespace: default
    spec:
      selector:
        app: test
      ports:
        - name: external-test
          port: 8000
          targetPort: 8000
          nodePort: 8000
      type: NodePort
    EOF
  4. Start the service

    kubectl apply -f /usr/local/kubernetes/test.yaml
  5. Check Pod running status

    kubectl get pod -o wide
  6. Verify service access

    curl 127.0.0.1:8000/hostname.txt
    • Multiple executions should return different Pod hostnames, confirming that load balancing is working correctly.
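The check can be automated by counting distinct hostnames over several requests (the `distinct_hosts` helper is illustrative; the real usage assumes curl and the running test Service):

```shell
# Sketch: count distinct Pod hostnames returned by repeated requests.
distinct_hosts() {
  # $1: newline-separated hostnames collected from repeated curls
  printf '%s\n' "$1" | sort -u | grep -c .
}
# Real usage (a count > 1 indicates load balancing across Pods):
# distinct_hosts "$(for i in $(seq 1 10); do curl -s 127.0.0.1:8000/hostname.txt; done)"
```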
  7. Once testing is complete, delete the test service

    kubectl delete -f /usr/local/kubernetes/test.yaml