Multi-Master Cluster Deployment
- Minimum node requirements: 3 nodes.
- Master nodes support redundancy. The cluster can still operate and run workloads normally even if one Master node fails.
- Suitable for Standard Edition and Professional Edition clusters, typically used in environments with three microservice nodes.
This document covers Kubernetes cluster deployment based on CentOS 7.9 / Debian 12.
| Server IP | Host Role |
|---|---|
| 192.168.10.20 | Kubernetes 01 (Master, Node) |
| 192.168.10.21 | Kubernetes 02 (Master, Node) |
| 192.168.10.22 | Kubernetes 03 (Master, Node) |
Server Requirements
- No network policy restrictions between cluster servers
- Each node's hostname must be unique and cannot be changed after the cluster is deployed
- The primary NIC MAC address must be unique across nodes (run `ip link` to check)
- `product_uuid` must be unique across nodes (run `cat /sys/class/dmi/id/product_uuid` to check)
- Port 6443 must not be in use (run `nc -vz 127.0.0.1 6443` to verify)
- Swap must be disabled (run `swapoff -a` to disable it temporarily, and comment out the swap partition entry in `/etc/fstab` to make it permanent)
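The single-node checks above can be bundled into a small report-only script. This is a sketch, not part of the official installer; cross-node uniqueness (hostname, MAC, `product_uuid`) still has to be compared manually, and the paths assume a typical Linux host.

```shell
#!/bin/sh
# Report-only sketch of the single-node requirement checks; prints OK/FAIL/SKIP
# per item instead of aborting, so it is safe to run repeatedly.
report() { printf '%-5s%s\n' "$1" "$2"; }

# Swap: /proc/swaps contains only its header line when swap is fully disabled.
if [ "$(wc -l < /proc/swaps)" -le 1 ]; then
  report OK "swap disabled"
else
  report FAIL "swap still enabled (run swapoff -a and edit /etc/fstab)"
fi

# Port 6443 must be free for the API server.
if command -v nc >/dev/null 2>&1; then
  if nc -z -w 1 127.0.0.1 6443 2>/dev/null; then
    report FAIL "port 6443 already in use"
  else
    report OK "port 6443 free"
  fi
else
  report SKIP "nc not installed, port check skipped"
fi

# product_uuid should be readable so it can be compared across nodes.
if [ -r /sys/class/dmi/id/product_uuid ]; then
  report OK "product_uuid: $(cat /sys/class/dmi/id/product_uuid)"
else
  report SKIP "product_uuid not readable on this system"
fi
```

Run it on every node before deployment and compare the `product_uuid` lines across nodes by hand.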
Configure HOSTS
Add the following hosts entries to each node in the Kubernetes cluster, pointing k8s-master to the three Master nodes:
```shell
cat >> /etc/hosts << EOF
192.168.10.20 k8s-master
192.168.10.21 k8s-master
192.168.10.22 k8s-master
EOF
```
- Every node in the cluster (including nodes added later) must have this configuration.
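As a sanity check, you can count how many IPs map to k8s-master. The sketch below runs against a sample string (the example addresses from this guide) so it works anywhere; on a real node, point the `awk` at `/etc/hosts` instead.

```shell
# Count how many IPs map to k8s-master in hosts-format input; with the
# three-Master layout above the answer should be 3.
hosts='192.168.10.20 k8s-master
192.168.10.21 k8s-master
192.168.10.22 k8s-master'
count=$(printf '%s\n' "$hosts" | awk '$2 == "k8s-master"' | wc -l)
echo "k8s-master entries: $count"
```

On a real node the equivalent check is `awk '$2 == "k8s-master"' /etc/hosts | wc -l`.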
Install CRI Container Runtime
Required on all nodes of the Kubernetes cluster.
- Download the Kubernetes cluster installation package
  - Internet-accessible server:

    ```shell
    wget https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.35.3/1.35-k8s-amd64-pkg.tar.gz
    ```

  - Internet-restricted server: download the installation package via the link below and upload it to the target server:

    ```
    https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.35.3/1.35-k8s-amd64-pkg.tar.gz
    ```
- Extract the installation package

  ```shell
  tar xzvf 1.35-k8s-amd64-pkg.tar.gz
  ```
- Verify the contents of the installation package

  ```shell
  ls -l 1.35-k8s-amd64-pkg
  ```

  Example output:

  ```
  -rw-r--r-- 1 root 197121   331360 Apr 14 16:38 calico.yaml
  -rw-r--r-- 1 root 197121 33899693 Apr 13 12:06 containerd-static-2.2.2-linux-amd64.tar.gz
  -rw-r--r-- 1 root 197121 19185064 Apr 13 13:18 crictl-v1.35.0-linux-amd64.tar.gz
  -rw-r--r-- 1 root 197121 28576212 Apr 13 13:22 istio-1.29.1-linux-amd64.tar.gz
  -rw-r--r-- 1 root 197121 72372408 Apr 13 13:20 kubeadm
  -rw-r--r-- 1 root 197121 58601656 Apr 13 13:18 kubectl
  -rw-r--r-- 1 root 197121 58110244 Apr 13 13:20 kubelet
  -rw-r--r-- 1 root 197121 11373900 Apr 13 13:17 nerdctl-2.2.2-linux-amd64.tar.gz
  -rw-r--r-- 1 root 197121 12741088 Apr 13 12:12 runc.amd64
  ```
- Install containerd

  Run these commands from within the `1.35-k8s-amd64-pkg` directory:

  ```shell
  cd 1.35-k8s-amd64-pkg
  tar -zxvf containerd-static-2.2.2-linux-amd64.tar.gz
  mv -f bin/* /usr/local/bin/
  ```
- Create the containerd configuration directory

  ```shell
  mkdir /etc/containerd
  ```
- Install runc

  Run these commands from within the `1.35-k8s-amd64-pkg` directory:

  ```shell
  mv runc.amd64 /usr/local/bin/runc
  chmod +x /usr/local/bin/runc
  runc -v
  ```
- Generate the containerd configuration file and modify the relevant parameters

  ```shell
  containerd config default > /etc/containerd/config.toml
  sed -i \
    -e 's|SystemdCgroup =.*|SystemdCgroup = true|g' \
    -e 's|bin_dirs =.*|bin_dirs = ["/usr/local/kubernetes/cni/bin"]|' \
    -e 's|sandbox =.*|sandbox = "127.0.0.1:5000/pause:3.10.1"|' \
    -e 's|^root =.*|root = "/data/containerd"|' \
    /etc/containerd/config.toml
  ```

- Verify that the configuration has taken effect

  ```shell
  grep "SystemdCgroup\|bin_dirs\|sandbox =\|^root =" /etc/containerd/config.toml
  ```

  Example output:

  ```
  root = "/data/containerd"
  sandbox = "127.0.0.1:5000/pause:3.10.1"
  bin_dirs = ["/usr/local/kubernetes/cni/bin"]
  SystemdCgroup = true
  ```
- Configure the containerd systemd service file

  ```shell
  cat > /etc/systemd/system/containerd.service <<EOF
  [Unit]
  Description=containerd
  After=network-online.target
  Wants=network-online.target

  [Service]
  Type=notify
  ExecStart=/usr/local/bin/containerd --config /etc/containerd/config.toml
  LimitNOFILE=1024000
  LimitNPROC=infinity
  LimitCORE=0
  TimeoutStartSec=0
  Delegate=yes
  KillMode=process
  Restart=on-failure
  StartLimitBurst=3
  StartLimitInterval=60s

  [Install]
  WantedBy=multi-user.target
  EOF
  ```
- Start containerd and enable it on boot

  ```shell
  systemctl daemon-reload && systemctl restart containerd && systemctl enable containerd
  ```
Install Required K8S Commands
Installs crictl / kubeadm / kubelet / kubectl. Required on all nodes of the Kubernetes cluster.
- Create the command installation directory

  ```shell
  mkdir -p /usr/local/kubernetes/bin
  ```
- Move the command binaries to the installation directory

  Run these commands from within the `1.35-k8s-amd64-pkg` directory:

  ```shell
  tar -zxvf crictl-v1.35.0-linux-amd64.tar.gz -C /usr/local/kubernetes/bin
  \cp -rvf ./{kubeadm,kubelet,kubectl} /usr/local/kubernetes/bin/
  ```
- Grant executable permissions to the command binaries

  ```shell
  chmod +x /usr/local/kubernetes/bin/*
  chown $(whoami):$(groups) /usr/local/kubernetes/bin/*
  ```
- Configure systemd to manage kubelet

  ```shell
  cat > /etc/systemd/system/kubelet.service <<\EOF
  [Unit]
  Description=kubelet: The Kubernetes Node Agent
  Documentation=https://kubernetes.io/docs/home/
  Wants=network-online.target
  After=network-online.target

  [Service]
  ExecStart=/usr/local/kubernetes/bin/kubelet
  Restart=always
  StartLimitInterval=0
  RestartSec=10

  [Install]
  WantedBy=multi-user.target
  EOF
  ```
- Configure kubeadm drop-in parameters for kubelet

  ```shell
  mkdir -p /etc/systemd/system/kubelet.service.d
  cat > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<\EOF
  # Note: This dropin only works with kubeadm and kubelet v1.11+
  [Service]
  Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
  Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
  EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
  # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
  EnvironmentFile=-/etc/default/kubelet
  ExecStart=
  ExecStart=/usr/local/kubernetes/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
  EOF
  ```
- Start kubelet and enable it on boot

  ```shell
  systemctl daemon-reload && systemctl restart kubelet && systemctl enable kubelet
  ```

  - kubelet remains in a waiting state until `kubeadm init` / `kubeadm join` completes. There is no need to check the service status after `restart`; it will start automatically once the cluster is initialized.
- Add the K8S command path to the environment variables

  ```shell
  cat > /etc/profile.d/kubernetes.sh <<'EOF'
  export PATH=/usr/local/kubernetes/bin/:$PATH
  EOF
  source /etc/profile.d/kubernetes.sh
  ```
- Configure the crictl runtime endpoint

  ```shell
  crictl config runtime-endpoint unix:///run/containerd/containerd.sock
  ```

- Verify the configuration

  ```shell
  crictl config --get runtime-endpoint
  ```
Install nerdctl (Optional)
nerdctl is a Docker-compatible CLI for containerd, providing a familiar interface for managing containers, images, and other resources.
- Extract and install nerdctl

  Run these commands from within the `1.35-k8s-amd64-pkg` directory:

  ```shell
  tar -zxvf nerdctl-2.2.2-linux-amd64.tar.gz
  rm -f containerd-rootless*.sh
  mv nerdctl /usr/local/kubernetes/bin/
  ```
- Configure the command alias and apply it

  ```shell
  echo 'alias nerdctl="nerdctl -n k8s.io"' >> ~/.bashrc
  source ~/.bashrc
  ```

  - Run `nerdctl -v`. Output of `nerdctl version 2.2.2` confirms a successful installation.
Install Environment Dependencies
Required on all nodes of the Kubernetes cluster.
- Install socat / conntrack
  - Internet-accessible server:

    ```shell
    # CentOS / RedHat
    yum install -y socat conntrack-tools
    # Debian / Ubuntu
    apt install -y socat conntrack
    ```

  - Internet-restricted server:

    ```shell
    # socat offline package download link (for CentOS 7.9; re-download for other OS versions as needed)
    # https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/socat-deps-centos7.tar.gz
    # Extract and install
    tar -zxvf socat-deps-centos7.tar.gz
    rpm -Uvh --nodeps socat-deps-centos7/*.rpm

    # conntrack offline package download link (for CentOS 7.9; re-download for other OS versions as needed)
    # https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/conntrack-tools-deps-centos7.tar.gz
    # Extract and install
    tar -zxvf conntrack-tools-deps-centos7.tar.gz
    rpm -Uvh --nodeps conntrack-tools-deps-centos7/*.rpm
    ```
- Verify all required commands are available

  ```shell
  crictl --version && containerd --version && runc -v | grep version && kubeadm version && kubelet --version && kubectl version --client=true && socat -V | grep 'socat version' && conntrack --version && echo ok || echo error
  ```

  - A final output of `ok` indicates success; `error` means one or more commands are missing and must be installed based on the error output.
Configure Kernel Settings
Required on all nodes of the Kubernetes cluster.
- Persist the required kernel modules on boot

  ```shell
  cat > /etc/modules-load.d/kubernetes.conf <<EOF
  overlay
  br_netfilter
  ip_vs
  ip_vs_rr
  ip_vs_wrr
  ip_vs_sh
  EOF
  ```
- Load the kernel modules immediately

  ```shell
  modprobe overlay
  modprobe br_netfilter
  modprobe ip_vs
  modprobe ip_vs_rr
  modprobe ip_vs_wrr
  modprobe ip_vs_sh
  ```
- Append kernel parameters and apply them

  ```shell
  cp -rfp /etc/sysctl.d/99-sysctl.conf /etc/sysctl.d/99-sysctl.conf.backup-$(date +%Y%m%d%H%M%S)
  cat >> /etc/sysctl.d/99-sysctl.conf <<EOF
  net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  net.ipv4.ip_forward = 1
  vm.max_map_count = 262144
  # MD Config
  net.nf_conntrack_max = 524288
  net.ipv4.tcp_max_tw_buckets = 5000
  net.ipv4.tcp_window_scaling = 1
  net.ipv4.tcp_rmem = 8192 87380 16777216
  net.ipv4.tcp_wmem = 8192 65536 16777216
  net.ipv4.tcp_max_syn_backlog = 32768
  net.core.netdev_max_backlog = 32768
  net.core.netdev_budget = 600
  net.core.somaxconn = 32768
  net.core.wmem_default = 8388608
  net.core.rmem_default = 8388608
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.ipv4.tcp_timestamps = 1
  net.ipv4.tcp_synack_retries = 2
  net.ipv4.tcp_syn_retries = 2
  net.ipv4.tcp_tw_recycle = 0
  net.ipv4.tcp_tw_reuse = 1
  net.ipv4.tcp_fin_timeout = 2
  net.ipv4.tcp_mem = 8388608 12582912 16777216
  net.ipv4.ip_local_port_range = 1024 65000
  net.ipv4.tcp_max_orphans = 16384
  net.ipv4.tcp_keepalive_intvl = 10
  net.ipv4.tcp_keepalive_probes = 3
  net.ipv4.tcp_keepalive_time = 600
  net.netfilter.nf_conntrack_tcp_be_liberal = 0
  net.netfilter.nf_conntrack_tcp_max_retrans = 3
  net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
  net.netfilter.nf_conntrack_tcp_timeout_established = 86400
  fs.inotify.max_user_watches = 10485760
  fs.inotify.max_user_instances = 10240
  EOF
  sysctl --system
  ```
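Because a single malformed line makes `sysctl --system` log errors, it can be worth validating a fragment before appending it: every line should be a comment, blank, or a `key = value` pair. This hedged sketch demonstrates the check on a sample string (a subset of the parameters above) rather than the live file.

```shell
# Count lines that are neither comments/blank nor "key = value" pairs;
# a well-formed sysctl fragment yields 0. The sample is a subset of the
# parameters in this section.
frag='net.ipv4.ip_forward = 1
# MD Config
vm.max_map_count = 262144
fs.inotify.max_user_watches = 10485760'
bad=$(printf '%s\n' "$frag" \
  | grep -cvE '^[[:space:]]*(#|$)|^[a-z0-9._-]+[[:space:]]*=[[:space:]]*[^#]+' || true)
echo "malformed lines: $bad"    # prints: malformed lines: 0
```

To check the real file before applying it, pipe `/etc/sysctl.d/99-sysctl.conf` through the same `grep`.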
K8S Environment Image Preparation
Required on all nodes of the Kubernetes cluster.
- Download and import offline images
  - Internet-accessible server:

    ```shell
    wget https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.35.3/kubeadm-1.35.3-images-amd64.tar.gz
    ```

  - Internet-restricted server: download the image package via the link below and upload it to the target server:

    ```
    https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.35.3/kubeadm-1.35.3-images-amd64.tar.gz
    ```
- Extract and import the images

  ```shell
  gunzip -d kubeadm-1.35.3-images-amd64.tar.gz
  ctr -n k8s.io image import kubeadm-1.35.3-images-amd64.tar
  ```

- After import, run `crictl images` to verify

  Example output:

  ```
  IMAGE                                    TAG       IMAGE ID        SIZE
  127.0.0.1:5000/cni                       v3.31.4   c433a27dd94ce   72.2MB
  127.0.0.1:5000/coredns                   v1.13.1   aa5e3ebc0dfed   23.6MB
  127.0.0.1:5000/etcd                      3.6.6-0   0a108f7189562   23.6MB
  127.0.0.1:5000/kube-apiserver            v1.35.3   0f2b96c93465f   27.6MB
  127.0.0.1:5000/kube-controller-manager   v1.35.3   0eb506280f9bc   23MB
  127.0.0.1:5000/kube-controllers          v3.31.4   ff033cc89dab5   54MB
  127.0.0.1:5000/kube-proxy                v1.35.3   53ed370019059   25.7MB
  127.0.0.1:5000/kube-scheduler            v1.35.3   87c9b0e4f80d3   17.1MB
  127.0.0.1:5000/node                      v3.31.4   e6536b93706ed   160MB
  127.0.0.1:5000/pause                     3.10.1    cd073f4c5f6a8   320kB
  127.0.0.1:5000/pilot                     1.29.1    cd8219a164d79   73.4MB
  127.0.0.1:5000/proxyv2                   1.29.1    599aea7eee05d   89.1MB
  ```
Initialize the First Master Node
Perform operations on Kubernetes 01 only.
- Initialize the Master node
  - Command-line initialization:

    ```shell
    kubeadm init --control-plane-endpoint "k8s-master:6443" --upload-certs --cri-socket unix:///var/run/containerd/containerd.sock -v 5 --kubernetes-version=1.35.3 --image-repository=127.0.0.1:5000 --pod-network-cidr=10.244.0.0/16
    ```

  - kubeadm-config.yaml initialization:
    - Generate the kubeadm-config.yaml configuration file

      ```shell
      cd /usr/local/kubernetes/
      kubeadm config print init-defaults > /usr/local/kubernetes/kubeadm-config.yaml
      ```

    - Modify the configuration file

      ```shell
      # Set the image registry address
      sed -ri 's|imageRepository.*|imageRepository: 127.0.0.1:5000|' /usr/local/kubernetes/kubeadm-config.yaml
      # Configure the Pod CIDR
      sed -ri '/serviceSubnet/a \ \ podSubnet: 10.244.0.0\/16' /usr/local/kubernetes/kubeadm-config.yaml
      # Set the node IP address (auto-detected)
      sed -ri 's|advertiseAddress.*|advertiseAddress: '$(hostname -I | awk '{print $1}')'|' /usr/local/kubernetes/kubeadm-config.yaml
      # Set the etcd data directory
      sed -ri 's|dataDir:.*|dataDir: /data/etcd|' /usr/local/kubernetes/kubeadm-config.yaml
      # Set the node name (auto-detected)
      sed -ri 's|name: node|name: '$(hostname)'|' /usr/local/kubernetes/kubeadm-config.yaml
      # Set the Kubernetes version
      sed -ri 's|kubernetesVersion.*|kubernetesVersion: 1.35.3|' /usr/local/kubernetes/kubeadm-config.yaml
      # Add the HA control plane endpoint
      sed -i '/apiServer:/i controlPlaneEndpoint: "k8s-master:6443"' /usr/local/kubernetes/kubeadm-config.yaml
      # Verify the changes
      grep -E 'advertiseAddress|controlPlaneEndpoint|name|imageRepository|dataDir|podSubnet|kubernetesVersion' /usr/local/kubernetes/kubeadm-config.yaml
      ```

      Example output:

      ```
      advertiseAddress: 192.168.10.20
      name: service01
      controlPlaneEndpoint: "k8s-master:6443"
      dataDir: /data/etcd
      imageRepository: 127.0.0.1:5000
      kubernetesVersion: 1.35.3
      podSubnet: 10.244.0.0/16
      ```

      Complete kubeadm-config.yaml example:

      ```yaml
      apiVersion: kubeadm.k8s.io/v1beta3
      bootstrapTokens:
      - groups:
        - system:bootstrappers:kubeadm:default-node-token
        token: abcdef.0123456789abcdef
        ttl: 24h0m0s
        usages:
        - signing
        - authentication
      kind: InitConfiguration
      localAPIEndpoint:
        advertiseAddress: 192.168.10.20 # master ip
        bindPort: 6443
      nodeRegistration:
        criSocket: unix:///var/run/containerd/containerd.sock
        imagePullPolicy: IfNotPresent
        name: service01 # master hostname
        taints: null
      ---
      controlPlaneEndpoint: "k8s-master:6443"
      apiServer:
        timeoutForControlPlane: 4m0s
      apiVersion: kubeadm.k8s.io/v1beta3
      certificatesDir: /etc/kubernetes/pki
      clusterName: kubernetes
      controllerManager: {}
      dns: {}
      etcd:
        local:
          dataDir: /data/etcd
      imageRepository: 127.0.0.1:5000
      kind: ClusterConfiguration
      kubernetesVersion: 1.35.3
      networking:
        dnsDomain: cluster.local
        serviceSubnet: 10.96.0.0/12
        podSubnet: 10.244.0.0/16
      scheduler: {}
      ```

    - Initialize the Master node

      ```shell
      # List required images
      kubeadm config images list --config /usr/local/kubernetes/kubeadm-config.yaml
      # Pull images
      kubeadm config images pull --config /usr/local/kubernetes/kubeadm-config.yaml
      # Preflight check
      kubeadm init phase preflight --config=/usr/local/kubernetes/kubeadm-config.yaml
      # Run initialization
      kubeadm init --config=/usr/local/kubernetes/kubeadm-config.yaml --upload-certs --v=6
      ```

  - After successful initialization, the output will contain two types of `kubeadm join` commands. Save this output:
    - The command with the `--control-plane --certificate-key` parameters: used for adding other Master nodes to the cluster.
    - The command without those parameters: used for adding worker nodes to the cluster.
  - If initialization fails, run `kubeadm reset -f` to reset and retry.

  Output example:

  ```
  You can now join any number of control-plane node by running the following command on each as a root:

    kubeadm join k8s-master:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07

  Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join k8s-master:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
  ```
- Expand the NodePort available port range

  ```shell
  sed -i '/- kube-apiserver/a\ \ \ \ - --service-node-port-range=1024-32767' /etc/kubernetes/manifests/kube-apiserver.yaml
  ```
- Configure kubectl access credentials on the Master node

  ```shell
  echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile.d/kubernetes.sh
  source /etc/profile.d/kubernetes.sh
  ```
- Increase the Pod limit on the Master node

  ```shell
  echo "maxPods: 300" >> /var/lib/kubelet/config.yaml
  systemctl restart kubelet
  ```
- Allow the Master node to schedule Pods

  ```shell
  kubectl taint node $(kubectl get node | grep control-plane | awk '{print $1}') node-role.kubernetes.io/control-plane:NoSchedule-
  ```

  - Wait approximately 2 minutes after initialization before running this command.
  - Expected output:
    - First run: output includes `<node-name> untainted`, indicating the taint was removed and the node can now schedule Pods.
    - Subsequent runs: if a node shows `error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found`, the taint on that node no longer exists and no further action is required.
    - Note: if the output is unexpected, system components may not yet be fully ready. Wait a moment and retry.
- Configure kubectl command completion (optional)

  If the server has internet access or an internal yum/apt repository, install bash-completion to enable tab completion for kubectl subcommands.

  ```shell
  # Install bash-completion
  # Debian-based:
  apt -y install bash-completion
  # CentOS-based:
  yum -y install bash-completion
  # Add Bash completion configuration
  grep -q "kubectl completion" ~/.bashrc || echo "source <(kubectl completion bash)" >> ~/.bashrc && source ~/.bashrc
  ```
Install the Calico Network Plugin
- Move calico.yaml to the installation directory

  Run this command from within the `1.35-k8s-amd64-pkg` directory:

  ```shell
  mv calico.yaml /usr/local/kubernetes/
  ```
- Modify the configuration file
  - Replace the image registry address with the local registry

    ```shell
    sed -ri 's|image: quay.io/calico|image: 127.0.0.1:5000|g' /usr/local/kubernetes/calico.yaml
    grep image: /usr/local/kubernetes/calico.yaml
    ```

    Example output:

    ```
    image: 127.0.0.1:5000/cni:v3.31.4
    image: 127.0.0.1:5000/cni:v3.31.4
    image: 127.0.0.1:5000/node:v3.31.4
    image: 127.0.0.1:5000/node:v3.31.4
    image: 127.0.0.1:5000/kube-controllers:v3.31.4
    ```

  - Configure the Pod CIDR

    ```shell
    sed -ri '/# - name: CALICO_IPV4POOL_CIDR/,/# value: ".*"/ {
      s/# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/
      s/# value: ".*"/ value: "10.244.0.0\/16"/
    }' /usr/local/kubernetes/calico.yaml
    grep -C 2 CALICO_IPV4POOL_CIDR /usr/local/kubernetes/calico.yaml
    ```

    Example output:

    ```
    # chosen from this range. Changing this value after installation will have
    # no effect. This should fall within `--cluster-cidr`.
    - name: CALICO_IPV4POOL_CIDR
      value: "10.244.0.0/16"
    # Disable file logging so `kubectl logs` works.
    ```

  - Configure the CNI binary path

    ```shell
    sed -i '/- name: cni-bin-dir/,/type:/s|path: .*|path: /usr/local/kubernetes/cni/bin|' /usr/local/kubernetes/calico.yaml
    grep -C 2 cni-bin-dir /usr/local/kubernetes/calico.yaml
    ```

    Example output:

    ```
          name: host-local-net-dir
        - mountPath: /host/opt/cni/bin
          name: cni-bin-dir
        securityContext:
          privileged: true
    --
        volumeMounts:
        - mountPath: /host/opt/cni/bin
          name: cni-bin-dir
        - mountPath: /host/etc/cni/net.d
          name: cni-net-dir
    --
            path: /proc
      # Used to install CNI.
      - name: cni-bin-dir
        hostPath:
          path: /usr/local/kubernetes/cni/bin
    ```
- Deploy Calico

  ```shell
  kubectl apply -f /usr/local/kubernetes/calico.yaml
  ```

- Check Calico service status

  ```shell
  kubectl get pod -n kube-system -l k8s-app=calico-node
  kubectl get pod -n kube-system -l k8s-app=calico-kube-controllers
  ```

  Under normal conditions, all Pods should show `READY` as `1/1` and `STATUS` as `Running`:

  ```
  NAME                                       READY   STATUS    RESTARTS   AGE
  calico-node-4xbtk                          1/1     Running   0          2m
  calico-node-9fzwp                          1/1     Running   0          2m
  calico-node-kq7rx                          1/1     Running   0          2m
  calico-kube-controllers-7d9d9b7b9c-x2kqt   1/1     Running   0          2m
  ```

  - If `STATUS` remains `Init` or `Pending` for an extended period, run `kubectl describe pod <pod-name> -n kube-system` to view detailed events and diagnose the issue.
Join Other Master Nodes to the Cluster
Perform operations on Kubernetes 02 / 03 nodes respectively.
- Join the Kubernetes cluster

  ```shell
  kubeadm join k8s-master:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
  ```

  - The command above is an example. Use the actual Master join command from the first Master node's `kubeadm init` output.
  - If the command has been lost, regenerate it on the first Master node using the following steps:
    - Regenerate the `join` command

      ```shell
      kubeadm token create --print-join-command
      ```

    - Re-upload certificates and generate a new decryption key

      ```shell
      kubeadm init phase upload-certs --upload-certs
      ```

    - Append `--control-plane --certificate-key <key from step 2>` to the output of step 1

      ```shell
      kubeadm join k8s-master:6443 --token 1b6i9d.0qqufwsjrjpuhkwo --discovery-token-ca-cert-hash sha256:3d28faa49e9cac7dd96aded0bef33a6af1ced57e45f0b12c6190f3d4e1055456 --control-plane --certificate-key 57a0f0e9be1d9f1c74bab54a52faa143ee9fd9c26a60f1b3b816b17b93ecaf6f
      ```
- Expand the NodePort available port range

  ```shell
  sed -i '/- kube-apiserver/a\ \ \ \ - --service-node-port-range=1024-32767' /etc/kubernetes/manifests/kube-apiserver.yaml
  ```
- Configure kubectl access credentials on the current node

  ```shell
  echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile.d/kubernetes.sh
  source /etc/profile.d/kubernetes.sh
  ```
- Increase the Pod limit on the current node

  ```shell
  echo "maxPods: 300" >> /var/lib/kubelet/config.yaml
  systemctl restart kubelet
  ```
- Allow the Master node to schedule Pods

  ```shell
  kubectl taint node $(kubectl get node | grep control-plane | awk '{print $1}') node-role.kubernetes.io/control-plane:NoSchedule-
  ```

  - Wait approximately 2 minutes after initialization before running this command.
  - Expected output:
    - First run: output includes `<node-name> untainted`, indicating the taint was removed and the node can now schedule Pods.
    - Already removed: if a node shows `error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found`, the taint on that node no longer exists and no further action is required.
    - Note: if the output is unexpected, system components may not yet be fully ready. Wait a moment and retry.
Add Worker Nodes to the Cluster
For example, Flink nodes or subsequently added microservice nodes all join the cluster as worker nodes.
- Join the Kubernetes cluster

  ```shell
  kubeadm join 192.168.10.20:6443 --token 3nwjzw.pdod3r27lnqqhi0x \
    --discovery-token-ca-cert-hash sha256:a84445303a0f8249e7eae3059cb99d46038dc275b2dc2043a022de187a1175a2
  ```

  - The command above is an example. Use the actual Worker join command from the first Master node's `kubeadm init` output.
  - If the command has been lost, run `kubeadm token create --print-join-command` on the Master node to retrieve it.
- Increase the Pod limit on the worker node

  ```shell
  echo "maxPods: 300" >> /var/lib/kubelet/config.yaml
  systemctl restart kubelet
  ```
Cluster Status Check
- Check node and Pod status

  ```shell
  kubectl get pod -n kube-system   # READY column should show "1/1"
  kubectl get node                 # STATUS column should show "Ready"
  ```
- Download and import the base image (required on all K8s cluster nodes)

  Image download link:

  ```
  https://pdpublic.mingdao.com/private-deployment/offline/common/centos7.9.2009.tar.gz
  ```

  Distribute the image file `centos7.9.2009.tar.gz` to all K8s cluster nodes and run the following to import it:

  ```shell
  gunzip -d centos7.9.2009.tar.gz
  ctr -n k8s.io image import centos7.9.2009.tar
  ```
- Deploy a test application on the first Master node

  ```shell
  cat > /usr/local/kubernetes/test.yaml <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test
    namespace: default
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: test
    template:
      metadata:
        labels:
          app: test
        annotations:
          md-update: '20200517104741'
      spec:
        containers:
        - name: test
          image: centos:7.9.2009
          command:
          - sh
          - -c
          - |
            echo $(hostname) > hostname.txt
            python -m SimpleHTTPServer
          resources:
            limits:
              memory: 512Mi
              cpu: 1
            requests:
              memory: 64Mi
              cpu: 0.01
          volumeMounts:
          - name: tz-config
            mountPath: /etc/localtime
        volumes:
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Etc/GMT-8
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: test
    namespace: default
  spec:
    selector:
      app: test
    ports:
    - name: external-test
      port: 8000
      targetPort: 8000
      nodePort: 8000
    type: NodePort
  EOF
  ```
- Start the service

  ```shell
  kubectl apply -f /usr/local/kubernetes/test.yaml
  ```

- Check Pod running status

  ```shell
  kubectl get pod -o wide
  ```

- Verify service access

  ```shell
  curl 127.0.0.1:8000/hostname.txt
  ```

  - Multiple executions should return different Pod hostnames, confirming that load balancing is working correctly.
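One way to quantify the load-balancing check is to count distinct hostnames across repeated requests. The helper below is a sketch (not part of the original guide); since exercising it for real requires the running cluster, the demonstration uses sample input standing in for Pod hostnames.

```shell
# Count distinct lines on stdin; piping repeated curl responses through it
# shows how many different Pods answered.
distinct() { sort | uniq | wc -l; }

# Against the test service (requires the cluster from the steps above):
#   for i in 1 2 3 4 5 6; do curl -s 127.0.0.1:8000/hostname.txt; done | distinct
# Demonstration on sample input (three responses from two distinct Pods):
printf 'test-7f9c-abcde\ntest-7f9c-fghij\ntest-7f9c-abcde\n' | distinct
```

With `replicas: 3`, a healthy Service should normally report more than one distinct backend over several requests.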
- Once testing is complete, delete the test service

  ```shell
  kubectl delete -f /usr/local/kubernetes/test.yaml
  ```