Multi-Master Cluster Deployment
- Minimum cluster size: 3 nodes.
- Master node redundancy is supported; if one Master fails, the cluster can still operate and run workloads normally.
- For both Standard and Professional clusters, a multi-Master deployment is used when there are three microservice nodes.
This document details the deployment of a Kubernetes cluster based on the CentOS 7.9 / Debian 12 operating system.
| Server IP | Host Role |
|---|---|
| 192.168.10.20 | Kubernetes 01 (Master, Node) |
| 192.168.10.21 | Kubernetes 02 (Master, Node) |
| 192.168.10.22 | Kubernetes 03 (Master, Node) |
Server Requirements
- No network policy restrictions between the cluster servers
- Hostnames must not be duplicated between cluster servers
- Primary network card MAC addresses must not be duplicated [Check using ip link]
- product_uuid must not be duplicated [Check using cat /sys/class/dmi/id/product_uuid]
- Port 6443 for the Kubernetes API server must not be occupied [Check using nc -vz 127.0.0.1 6443]
- Disable swap memory [Execute the swapoff -a command to disable it, and disable swap partition mounting in /etc/fstab]
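The checks above lend themselves to a quick script. A minimal read-only sketch (the report labels and layout are this sketch's own; it changes nothing on the server):

```shell
#!/bin/bash
# Read-only preflight sketch for the requirements listed above.
RESULTS=""
report() {
  line=$(printf '%-14s %s' "$1:" "$2")
  echo "$line"
  RESULTS="$RESULTS$line"$'\n'
}

# Swap must be disabled (swapon prints nothing when no swap is active)
if [ -z "$(swapon --summary 2>/dev/null)" ]; then
  report "swap" "off (OK)"
else
  report "swap" "ON - run swapoff -a and remove the swap entry from /etc/fstab"
fi

# Port 6443 must be free for the API server
if (exec 3<>/dev/tcp/127.0.0.1/6443) 2>/dev/null; then
  report "port 6443" "IN USE - find the listener with: ss -lntp | grep 6443"
else
  report "port 6443" "free (OK)"
fi

# product_uuid must be readable (compare the value across nodes by hand)
if [ -r /sys/class/dmi/id/product_uuid ]; then
  report "product_uuid" "$(cat /sys/class/dmi/id/product_uuid)"
else
  report "product_uuid" "not readable here (run as root)"
fi
```

Hostname and MAC uniqueness can only be checked by comparing values across nodes, so they are left to the manual checks listed above.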
Configure HOSTS
Add the following hosts information to each node in the Kubernetes cluster, pointing k8s-master to the three master nodes.
cat >> /etc/hosts << EOF
192.168.10.20 k8s-master
192.168.10.21 k8s-master
192.168.10.22 k8s-master
EOF
- Note: This hosts information must be added to every node in the Kubernetes cluster, including any nodes added later.
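After updating /etc/hosts, you can confirm that k8s-master resolves on each node. A small check (getent follows the same NSS lookup order that system components use; note it reports only the first matching entry):

```shell
# Confirm that k8s-master now resolves via /etc/hosts.
resolved=$(getent hosts k8s-master || true)
if [ -n "$resolved" ]; then
  msg="k8s-master resolves to: $resolved"
else
  msg="k8s-master does not resolve - re-check /etc/hosts"
fi
echo "$msg"
```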
Install the CRI Container Runtime Environment
Operations must be carried out on each node of the Kubernetes cluster.
1. Download the Docker installation package
- Server has internet access
wget https://pdpublic.mingdao.com/private-deployment/offline/common/docker-28.5.2.tgz
- Server does not have internet access
# Docker installation package download link; download and upload to the target server
https://pdpublic.mingdao.com/private-deployment/offline/common/docker-28.5.2.tgz
2. Install Docker
tar -zxvf docker-28.5.2.tgz
mv -f docker/* /usr/local/bin/
3. Create directories for the Docker and containerd configuration files
mkdir /etc/docker
mkdir /etc/containerd
4. Create the daemon.json file for Docker
cat > /etc/docker/daemon.json <<\EOF
{
  "registry-mirrors": ["https://uvlkeb6d.mirror.aliyuncs.com"],
  "data-root": "/data/docker",
  "max-concurrent-downloads": 10,
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "storage-driver": "overlay2",
  "default-address-pools": [{"base": "172.80.0.0/16", "size": 24}],
  "insecure-registries": ["127.0.0.1:5000"]
}
EOF
5. Create the config.toml file for containerd and modify the configuration
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup =.*/SystemdCgroup = true/g' /etc/containerd/config.toml
sed -i 's#bin_dir =.*#bin_dir = "/usr/local/kubernetes/cni/bin"#' /etc/containerd/config.toml
sed -i 's#sandbox_image =.*#sandbox_image = "127.0.0.1:5000/pause:3.8"#' /etc/containerd/config.toml
sed -i 's#^root =.*#root = "/data/containerd"#' /etc/containerd/config.toml
Check the containerd configuration file:
grep "SystemdCgroup\|bin_dir\|sandbox_image\|^root =" /etc/containerd/config.toml
Example output:
root = "/data/containerd"
sandbox_image = "127.0.0.1:5000/pause:3.8"
bin_dir = "/usr/local/kubernetes/cni/bin"
SystemdCgroup = true
6. Configure the systemd file for docker
cat > /etc/systemd/system/docker.service <<EOF
[Unit]
Description=Docker
After=network-online.target
Wants=network-online.target
Requires=containerd.service
[Service]
Type=notify
ExecStart=/usr/local/bin/dockerd --containerd /var/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=1024000
LimitNPROC=infinity
LimitCORE=0
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
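A side note on the `\$MAINPID` escaping above: the heredoc delimiter `<<EOF` is unquoted, so the shell expands `$…` while writing the file; the backslash keeps `$MAINPID` literal so that systemd can expand it at reload time. A quick illustration:

```shell
# Unquoted heredocs expand $VAR at write time; \$VAR survives literally.
MAINPID=""  # typically unset in the shell that writes the unit file

expanded=$(cat <<EOF
ExecReload=/bin/kill -s HUP $MAINPID
EOF
)
escaped=$(cat <<EOF
ExecReload=/bin/kill -s HUP \$MAINPID
EOF
)
echo "unescaped -> [$expanded]"  # $MAINPID is already gone at write time
echo "escaped   -> [$escaped]"   # literal \$MAINPID reaches the unit file
```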
7. Configure the systemd file for containerd
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/local/bin/containerd --config /etc/containerd/config.toml
LimitNOFILE=1024000
LimitNPROC=infinity
LimitCORE=0
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
8. Start containerd and docker and enable them to start at boot
systemctl daemon-reload && systemctl restart containerd && systemctl enable containerd
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
Install CNI Plugins
This operation is required on all nodes of the Kubernetes cluster
- Download the CNI plugin files
- Server with Internet Access
- Server without Internet Access
wget https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/cni-plugins-linux-amd64-v1.1.1.tgz
# CNI plugin package download link, download and upload to target server
https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/cni-plugins-linux-amd64-v1.1.1.tgz
- Create the CNI file installation directory
mkdir -p /usr/local/kubernetes/cni/bin
- Extract the CNI plugin to the installation directory
tar -zxvf cni-plugins-linux-amd64-v1.1.1.tgz -C /usr/local/kubernetes/cni/bin
Install K8S Cluster Commands
Install the crictl/kubeadm/kubelet/kubectl commands. This operation is required on all nodes of the Kubernetes cluster.
- Create the command installation directory
mkdir -p /usr/local/kubernetes/bin
- Download command files to the installation directory
- Server with Internet Access
- Server without Internet Access
wget https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/crictl-v1.25.0-linux-amd64.tar.gz
tar -zxvf crictl-v1.25.0-linux-amd64.tar.gz -C /usr/local/kubernetes/bin
curl -o /usr/local/kubernetes/bin/kubeadm https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/kubeadm
curl -o /usr/local/kubernetes/bin/kubelet https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/kubelet
curl -o /usr/local/kubernetes/bin/kubectl https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/kubectl
# crictl file download link, download and upload to target server, then extract to /usr/local/kubernetes/bin
https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/crictl-v1.25.0-linux-amd64.tar.gz
tar -zxvf crictl-v1.25.0-linux-amd64.tar.gz -C /usr/local/kubernetes/bin
# kubeadm file download link, download and upload to target server /usr/local/kubernetes/bin/
https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/kubeadm
# kubelet file download link, download and upload to target server /usr/local/kubernetes/bin/
https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/kubelet
# kubectl file download link, download and upload to target server /usr/local/kubernetes/bin/
https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/kubectl
- Grant executable permissions to the command files
chmod +x /usr/local/kubernetes/bin/*
chown $(whoami):$(id -gn) /usr/local/kubernetes/bin/*
- Configure systemd to manage kubelet
cat > /etc/systemd/system/kubelet.service <<\EOF
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/local/kubernetes/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
- Configure the kubeadm drop-in file for kubelet
mkdir -p /etc/systemd/system/kubelet.service.d
cat > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<\EOF
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/local/kubernetes/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
EOF
- Start kubelet and enable it to start at boot
systemctl daemon-reload && systemctl restart kubelet && systemctl enable kubelet
- There is no need to check the service status after restarting here, as the service will automatically start after subsequent kubeadm init and kubeadm join steps.
- Set the directory for K8S commands and add to environment variables
- CentOS
- Debian
export PATH=/usr/local/kubernetes/bin/:$PATH
echo 'export PATH=/usr/local/kubernetes/bin/:$PATH' >> /etc/bashrc
export PATH=/usr/local/kubernetes/bin/:$PATH
echo 'export PATH=/usr/local/kubernetes/bin/:$PATH' >> /etc/bash.bashrc
- Configure the crictl runtime endpoint to prevent errors when pulling images later
crictl config runtime-endpoint unix:///run/containerd/containerd.sock
Install the nerdctl Tool
- Download the nerdctl tool
wget https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/nerdctl-1.7.0-linux-amd64.tar.gz
tar -zxvf nerdctl-1.7.0-linux-amd64.tar.gz
rm -f containerd-rootless*.sh
mv nerdctl /usr/local/kubernetes/bin/
- Add to the environment variables
echo 'alias nerdctl="nerdctl -n k8s.io"' >> ~/.bashrc
source ~/.bashrc
nerdctl -v
If the output is "nerdctl version 1.7.0", the tool is working normally.
Install Environment Dependencies
This operation is required on all nodes of the Kubernetes cluster
- Install environment dependencies socat/conntrack
- Server with Internet Access
- Server without Internet Access
# Use yum to install on CentOS/Red Hat
yum install -y socat conntrack-tools
# Use apt to install on Debian/Ubuntu
apt install -y socat conntrack
# Socat package download link, download and upload to target server (using CentOS 7.9 here, re-download if dependencies do not match)
https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/socat-deps-centos7.tar.gz
# Extract and install
tar -zxvf socat-deps-centos7.tar.gz
rpm -Uvh --nodeps socat-deps-centos7/*.rpm
# Conntrack package download link, download and upload to target server (using CentOS 7.9 here, re-download if dependencies do not match)
https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/conntrack-tools-deps-centos7.tar.gz
# Extract and install
tar -zxvf conntrack-tools-deps-centos7.tar.gz
rpm -Uvh --nodeps conntrack-tools-deps-centos7/*.rpm
- Check if any command is missing
docker --version && dockerd --version && pgrep -f 'dockerd' && crictl --version && kubeadm version && kubelet --version && kubectl version --client=true && socat -V | grep 'socat version' && conntrack --version && echo ok || echo error
- Outputting ok means everything is correct; if error is shown, investigate and resolve the missing commands.
Modify Kernel Configuration
This operation is required on all nodes of the Kubernetes cluster
- Add kernel modules
cat > /etc/modules-load.d/kubernetes.conf <<EOF
overlay
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF
- Load the modules
modprobe overlay
modprobe br_netfilter
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
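You can confirm what actually loaded with a read-only check (module names can differ across kernel versions, so treat a miss as a prompt to re-run modprobe and investigate rather than a hard failure):

```shell
# Report which of the required modules are currently loaded.
loaded=$(lsmod 2>/dev/null | awk 'NR>1 {print $1}')
status=""
for m in overlay br_netfilter ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do
  if printf '%s\n' "$loaded" | grep -qx "$m"; then
    status="$status$m: loaded"$'\n'
  else
    status="$status$m: NOT loaded - run: modprobe $m"$'\n'
  fi
done
printf '%s' "$status"
```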
- Add kernel parameters
cat >> /etc/sysctl.d/99-sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
# MD Config
net.nf_conntrack_max = 524288
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.tcp_max_syn_backlog = 32768
net.core.netdev_max_backlog = 32768
net.core.netdev_budget = 600
net.core.somaxconn = 32768
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 2
net.ipv4.tcp_mem = 8388608 12582912 16777216
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_orphans = 16384
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 600
net.netfilter.nf_conntrack_tcp_be_liberal = 0
net.netfilter.nf_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_established = 86400
fs.inotify.max_user_watches=10485760
fs.inotify.max_user_instances=10240
EOF
sysctl --system
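After `sysctl --system`, it is worth spot-checking a few of the values that kubeadm's preflight depends on (read-only; the expected values are taken from the file above):

```shell
# Spot-check key kernel parameters after sysctl --system.
# Expected: ip_forward = 1, bridge-nf-call-iptables = 1, vm.max_map_count = 262144
checks=""
for key in net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables vm.max_map_count; do
  val=$(sysctl -n "$key" 2>/dev/null || echo "unavailable")
  checks="$checks$key = $val"$'\n'
done
printf '%s' "$checks"
```

Note that net.bridge.* keys only exist once the br_netfilter module is loaded.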
Prepare K8S Environment Images
This operation is required on all nodes of the Kubernetes cluster
- Load offline images
- Server with Internet Access
- Server without Internet Access
wget https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/kubeadm-1.25.4-images.tar.gz
docker load -i kubeadm-1.25.4-images.tar.gz
# Offline image package download link, download and upload to target server and load the image
https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/kubeadm-1.25.4-images.tar.gz
docker load -i kubeadm-1.25.4-images.tar.gz
- Start a local repository to tag the images
docker run -d -p 5000:5000 --restart always --name registry registry:2
for i in $(docker images | grep 'registry.k8s.io\|rancher' | awk 'NR!=0{print $1":"$2}');do docker tag $i $(echo $i | sed -e "s/registry.k8s.io/127.0.0.1:5000/" -e "s#coredns/##" -e "s/rancher/127.0.0.1:5000/");done
for i in $(docker images | grep :5000 | awk 'NR!=0{print $1":"$2}');do docker push $i;done
docker images | grep :5000
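The tagging loop above is dense; its heart is the sed pipeline, which maps upstream image names onto the local registry and flattens the coredns/ sub-path. The same mapping isolated for one name at a time (the sample image names below are only illustrative):

```shell
# The same sed pipeline as the tagging loop, applied to a single name.
retag() {
  echo "$1" | sed -e "s/registry.k8s.io/127.0.0.1:5000/" \
                  -e "s#coredns/##" \
                  -e "s/rancher/127.0.0.1:5000/"
}

retag "registry.k8s.io/kube-apiserver:v1.25.4"       # -> 127.0.0.1:5000/kube-apiserver:v1.25.4
retag "registry.k8s.io/coredns/coredns:v1.9.3"       # -> 127.0.0.1:5000/coredns:v1.9.3
retag "rancher/mirrored-flannelcni-flannel:v0.20.1"  # -> 127.0.0.1:5000/mirrored-flannelcni-flannel:v0.20.1
```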
Initialize the First Master Node
Only operate on the Kubernetes 01 node
- Initialize the master node
- Command Line Initialization
kubeadm init --control-plane-endpoint "k8s-master:6443" --upload-certs --cri-socket unix:///var/run/containerd/containerd.sock -v 5 --kubernetes-version=1.25.4 --image-repository=127.0.0.1:5000 --pod-network-cidr=10.244.0.0/16
- kubeadm-config.yaml Initialization
Generate the kubeadm-config.yaml configuration file:
cd /usr/local/kubernetes/
kubeadm config print init-defaults > /usr/local/kubernetes/kubeadm-config.yaml
Edit the configuration file:
# Modify the image repository
sed -ri 's#imageRepository.*#imageRepository: 127.0.0.1:5000#' /usr/local/kubernetes/kubeadm-config.yaml
# Configure the pod network CIDR
sed -ri '/serviceSubnet/a \ \ podSubnet: 10.244.0.0\/16' /usr/local/kubernetes/kubeadm-config.yaml
# Modify the node IP address
sed -ri 's#advertiseAddress.*#advertiseAddress: '$(hostname -I | awk '{print $1}')'#' /usr/local/kubernetes/kubeadm-config.yaml
# Modify the etcd data directory
sed -ri 's#dataDir:.*#dataDir: /data/etcd#' /usr/local/kubernetes/kubeadm-config.yaml
# Modify the node name
sed -ri 's#name: node#name: '$(hostname)'#' /usr/local/kubernetes/kubeadm-config.yaml
# Modify the Kubernetes version
sed -ri 's#kubernetesVersion.*#kubernetesVersion: 1.25.4#' /usr/local/kubernetes/kubeadm-config.yaml
# Add controlPlaneEndpoint: "k8s-master:6443"
sed -i '/apiServer:/i controlPlaneEndpoint: "k8s-master:6443"' /usr/local/kubernetes/kubeadm-config.yaml
# Check the modification results
grep 'advertiseAddress\|name\|imageRepository\|dataDir\|podSubnet\|kubernetesVersion\|controlPlaneEndpoint' /usr/local/kubernetes/kubeadm-config.yaml
Example output:
advertiseAddress: 192.168.10.20
name: service01
controlPlaneEndpoint: "k8s-master:6443"
dataDir: /data/etcd
imageRepository: 127.0.0.1:5000
kubernetesVersion: 1.25.4
podSubnet: 10.244.0.0/16
Example of kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.20 # master ip
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: service01 # master hostname
  taints: null
---
controlPlaneEndpoint: "k8s-master:6443"
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /data/etcd
imageRepository: 127.0.0.1:5000
kind: ClusterConfiguration
kubernetesVersion: 1.25.4
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
Initialize the master node:
# View the list of required images
kubeadm config images list --config /usr/local/kubernetes/kubeadm-config.yaml
# Pull the images
kubeadm config images pull --config /usr/local/kubernetes/kubeadm-config.yaml
# Check
kubeadm init phase preflight --config=/usr/local/kubernetes/kubeadm-config.yaml
# Start kubeadm initialization of k8s based on the configuration file
kubeadm init --config=/usr/local/kubernetes/kubeadm-config.yaml --upload-certs --v=6
The output will look similar to:
...
You can now join any number of control-plane nodes by running the following command on each as root:

kubeadm join k8s-master:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07

Please note that the certificate-key gives access to cluster sensitive data, keep it secret! As a safeguard, uploaded-certs will be deleted in two hours; if necessary, you can use "kubeadm init phase upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
- Copy this output to a text file. You'll need it later to join the master and worker nodes to the cluster.
- Modify the nodePort usable port range
sed -i '/- kube-apiserver/a\ \ \ \ - --service-node-port-range=1024-32767' /etc/kubernetes/manifests/kube-apiserver.yaml
- Set the configuration path
CentOS:
export KUBECONFIG=/etc/kubernetes/admin.conf
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/bashrc
Debian:
export KUBECONFIG=/etc/kubernetes/admin.conf
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/bash.bashrc
- Adjust the current node Pod limit
echo "maxPods: 300" >> /var/lib/kubelet/config.yaml
systemctl restart kubelet
- Allow the master to participate in scheduling
  - Wait about 1-2 minutes after initializing the master node before executing the following command
  - Before executing, check the status of the kubelet service with systemctl status kubelet to see if it is running
kubectl taint node $(kubectl get node | grep control-plane | awk '{print $1}') node-role.kubernetes.io/control-plane:NoSchedule-
  - After executing this command, the correct output should be "xxxx untainted". If the output does not match, wait a bit and execute it again to confirm.
- Install the network plugin
cat > /usr/local/kubernetes/kube-flannel.yml <<EOF
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: 127.0.0.1:5000/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: 127.0.0.1:5000/mirrored-flannelcni-flannel:v0.20.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: 127.0.0.1:5000/mirrored-flannelcni-flannel:v0.20.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /usr/local/kubernetes/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
EOF
kubectl apply -f /usr/local/kubernetes/kube-flannel.yml
Join Other Master Nodes to the Cluster
You need to perform operations on Kubernetes 02/03 nodes
- Join the Kubernetes cluster
kubeadm join k8s-master:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
  - This command comes from the successful kubeadm init output on the first master node. It is just an example; each cluster is different.
  - If forgotten, refer to the following steps to regenerate it on the first master node:
    - Regenerate the join command:
      kubeadm token create --print-join-command
    - Re-upload the certificates and generate a new decryption key:
      kubeadm init phase upload-certs --upload-certs
    - Concatenate the join command, adding the --control-plane and --certificate-key parameters, using the generated decryption key as the --certificate-key value:
      kubeadm join k8s-master:6443 --token 1b6i9d.0qqufwsjrjpuhkwo --discovery-token-ca-cert-hash sha256:3d28faa49e9cac7dd96aded0bef33a6af1ced57e45f0b12c6190f3d4e1055456 --control-plane --certificate-key 57a0f0e9be1d9f1c74bab54a52faa143ee9fd9c26a60f1b3b816b17b93ecaf6f
    - At this point, you have the join command to add master nodes to the cluster.
- Modify the nodePort usable port range
sed -i '/- kube-apiserver/a\ \ \ \ - --service-node-port-range=1024-32767' /etc/kubernetes/manifests/kube-apiserver.yaml
- Set the configuration path
CentOS:
export KUBECONFIG=/etc/kubernetes/admin.conf
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/bashrc
Debian:
export KUBECONFIG=/etc/kubernetes/admin.conf
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/bash.bashrc
- Adjust the current node Pod limit
echo "maxPods: 300" >> /var/lib/kubelet/config.yaml
systemctl restart kubelet
- Allow the master node to participate in scheduling
  - Wait about 1-2 minutes after finishing the current node initialization before executing the command below
  - Before executing, check the status of the kubelet service with systemctl status kubelet to see if it is running
kubectl taint node $(kubectl get node | grep control-plane | awk '{print $1}') node-role.kubernetes.io/control-plane:NoSchedule-
  - After executing this command, the correct output should be "xxxx untainted". If the output does not match, wait a bit and execute it again to confirm.
Add New Worker Nodes to the Cluster
For example, Flink nodes or subsequently added microservice nodes that join the current multi-master Kubernetes cluster as worker nodes
- Join the Kubernetes cluster
kubeadm join 192.168.10.20:6443 --token 3nwjzw.pdod3r27lnqqhi0x \
  --discovery-token-ca-cert-hash sha256:a84445303a0f8249e7eae3059cb99d46038dc275b2dc2043a022de187a1175a2
  - This command comes from the successful kubeadm init output on the master node and is only an example; each cluster is different.
  - If forgotten, you can re-obtain it by executing kubeadm token create --print-join-command on the master node.
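The --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA at any time, using the standard openssl recipe from the kubeadm documentation. The sketch below generates a throwaway certificate only so that it is self-contained; on a real master node, point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Recompute a discovery-token-ca-cert-hash (sha256 of the CA public key in DER form).
# /tmp/demo-ca.crt is a throwaway stand-in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "sha256:$hash"
```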
- Adjust the current node Pod limit
echo "maxPods: 300" >> /var/lib/kubelet/config.yaml
systemctl restart kubelet
Check Cluster Status
- Check node status
kubectl get pod -n kube-system   # The READY column needs to be "1/1"
kubectl get node                 # The STATUS column needs to be "Ready"
- Download the image (required on each microservice node)
Download and upload the centos:7.9.2009 image to each server in advance
Offline image download link: https://pdpublic.mingdao.com/private-deployment/offline/common/centos7.9.2009.tar.gz
Load offline image on each server:
gunzip -d centos7.9.2009.tar.gz
ctr -n k8s.io image import centos7.9.2009.tar
- Write the configuration on microservice node 01 only and start the test container
cat > /usr/local/kubernetes/test.yaml <<\EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
      annotations:
        md-update: '20200517104741'
    spec:
      containers:
      - name: test
        image: centos:7.9.2009
        command:
        - sh
        - -c
        - |
          echo $(hostname) > hostname.txt
          python -m SimpleHTTPServer
        resources:
          limits:
            memory: 512Mi
            cpu: 1
          requests:
            memory: 64Mi
            cpu: 0.01
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Etc/GMT-8
---
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
spec:
  selector:
    app: test
  ports:
  - name: external-test
    port: 8000
    targetPort: 8000
    nodePort: 8000
  type: NodePort
EOF
kubectl apply -f /usr/local/kubernetes/test.yaml
- Check Pod status
kubectl get pod -o wide
- Test access
curl 127.0.0.1:8000/hostname.txt
  - Multiple curls should normally return the hostnames of different Pods
- If curl reaches containers on other nodes, it may take about 1 second to return. In that case, disable the hardware offload function of the flannel.1 network interface (needs to be configured on every node in the Kubernetes cluster)
cat > /etc/systemd/system/disable-offload.service <<\EOF
[Unit]
Description=Disable offload for flannel.1
After=network-online.target flanneld.service
[Service]
Type=oneshot
ExecStartPre=/bin/bash -c 'while [ ! -d /sys/class/net/flannel.1 ]; do sleep 1; done'
ExecStart=/sbin/ethtool --offload flannel.1 rx off tx off
[Install]
WantedBy=multi-user.target
EOF
Reload the systemd configuration and start the service:
systemctl daemon-reload
systemctl enable disable-offload
systemctl start disable-offload