Single-Master Cluster Deployment
- Minimum node count: 1 or 2 nodes.
- If the Master node goes down, management tools such as kubectl can no longer operate the cluster, but running workload containers are unaffected.
- Suitable for minimal clusters, typically environments with two microservice nodes.
This document covers Kubernetes cluster deployment on CentOS 7.9 / Debian 12.
| Server IP | Role |
|---|---|
| 192.168.10.20 | Kubernetes 01 (Master, Node) |
| 192.168.10.21 | Kubernetes 02 (Node) |
Server requirements
- No network-policy restrictions between the cluster servers
- Each node's hostname must be unique and must not change after the cluster is deployed
- Primary NIC MAC addresses must not be duplicated (check with `ip link`)
- product_uuid must not be duplicated (check with `cat /sys/class/dmi/id/product_uuid`)
- Port 6443 must not be in use (verify with `nc -vz 127.0.0.1 6443`)
- Swap must be disabled (run `swapoff -a` to disable it temporarily, and comment out the swap mount entry in `/etc/fstab`)
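To make the swap change survive a reboot, the `/etc/fstab` entry has to be commented out. The sketch below demonstrates the edit on a sample file with placeholder UUIDs; on a real node, run the same `sed` against `/etc/fstab` after `swapoff -a`:

```shell
# Sample fstab standing in for /etc/fstab (placeholder UUIDs)
cat > /tmp/fstab.sample <<'EOF'
UUID=1111-2222 /     ext4 defaults 0 1
UUID=3333-4444 swap  swap defaults 0 0
EOF

# Comment out any active (non-comment) line whose fields include "swap"
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]])|#\1|' /tmp/fstab.sample
cat /tmp/fstab.sample
```

After the edit, only the swap line is commented out; the root filesystem entry is untouched.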
Install the CRI container runtime
Run the following on every Kubernetes cluster node.

- Download the Kubernetes cluster installation package

  - If the server has internet access:

    ```
    wget https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.35.3/1.35-k8s-amd64-pkg.tar.gz
    ```

  - If the server has no internet access, download the package from the link below on another machine, then upload it to the target server:

    ```
    https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.35.3/1.35-k8s-amd64-pkg.tar.gz
    ```

- Extract the package

  ```
  tar xzvf 1.35-k8s-amd64-pkg.tar.gz
  ```

- Check the files in the package

  ```
  ls -l 1.35-k8s-amd64-pkg
  ```

  Example output:

  ```
  -rw-r--r-- 1 root 197121   331360 Apr 14 16:38 calico.yaml
  -rw-r--r-- 1 root 197121 33899693 Apr 13 12:06 containerd-static-2.2.2-linux-amd64.tar.gz
  -rw-r--r-- 1 root 197121 19185064 Apr 13 13:18 crictl-v1.35.0-linux-amd64.tar.gz
  -rw-r--r-- 1 root 197121 28576212 Apr 13 13:22 istio-1.29.1-linux-amd64.tar.gz
  -rw-r--r-- 1 root 197121 72372408 Apr 13 13:20 kubeadm
  -rw-r--r-- 1 root 197121 58601656 Apr 13 13:18 kubectl
  -rw-r--r-- 1 root 197121 58110244 Apr 13 13:20 kubelet
  -rw-r--r-- 1 root 197121 11373900 Apr 13 13:17 nerdctl-2.2.2-linux-amd64.tar.gz
  -rw-r--r-- 1 root 197121 12741088 Apr 13 12:12 runc.amd64
  ```
- Install containerd (run inside the `1.35-k8s-amd64-pkg` package directory)

  ```
  cd 1.35-k8s-amd64-pkg
  tar -zxvf containerd-static-2.2.2-linux-amd64.tar.gz
  mv -f bin/* /usr/local/bin/
  ```

- Create the containerd configuration directory

  ```
  mkdir /etc/containerd
  ```

- Install runc (run inside the `1.35-k8s-amd64-pkg` package directory)

  ```
  mv runc.amd64 /usr/local/bin/runc
  chmod +x /usr/local/bin/runc
  runc -v
  ```

- Generate the containerd configuration file and adjust its parameters

  ```
  containerd config default > /etc/containerd/config.toml
  sed -i \
    -e 's|SystemdCgroup =.*|SystemdCgroup = true|g' \
    -e 's|bin_dirs =.*|bin_dirs = ["/usr/local/kubernetes/cni/bin"]|' \
    -e 's|sandbox =.*|sandbox = "127.0.0.1:5000/pause:3.10.1"|' \
    -e 's|^root =.*|root = "/data/containerd"|' \
    /etc/containerd/config.toml
  ```

- Verify the changes took effect

  ```
  grep "SystemdCgroup\|bin_dirs\|sandbox =\|^root =" /etc/containerd/config.toml
  ```

  Example output:

  ```
  root = "/data/containerd"
  sandbox = "127.0.0.1:5000/pause:3.10.1"
  bin_dirs = ["/usr/local/kubernetes/cni/bin"]
  SystemdCgroup = true
  ```

- Configure the containerd systemd service file

  ```
  cat > /etc/systemd/system/containerd.service <<EOF
  [Unit]
  Description=containerd
  After=network-online.target
  Wants=network-online.target

  [Service]
  Type=notify
  ExecStart=/usr/local/bin/containerd --config /etc/containerd/config.toml
  LimitNOFILE=1024000
  LimitNPROC=infinity
  LimitCORE=0
  TimeoutStartSec=0
  Delegate=yes
  KillMode=process
  Restart=on-failure
  StartLimitBurst=3
  StartLimitInterval=60s

  [Install]
  WantedBy=multi-user.target
  EOF
  ```

- Start containerd and enable it at boot

  ```
  systemctl daemon-reload && systemctl restart containerd && systemctl enable containerd
  ```
Install the commands required by the K8S cluster
Install crictl / kubeadm / kubelet / kubectl. Run the following on every Kubernetes cluster node.

- Create the installation directory for the commands

  ```
  mkdir -p /usr/local/kubernetes/bin
  ```

- Move the command files into the installation directory (run inside the `1.35-k8s-amd64-pkg` package directory)

  ```
  tar -zxvf crictl-v1.35.0-linux-amd64.tar.gz -C /usr/local/kubernetes/bin
  cp ./{kubeadm,kubelet,kubectl} /usr/local/kubernetes/bin/
  ```

- Make the command files executable

  ```
  chmod +x /usr/local/kubernetes/bin/*
  chown $(whoami):$(id -gn) /usr/local/kubernetes/bin/*
  ```

- Configure systemd to manage kubelet

  ```
  cat > /etc/systemd/system/kubelet.service <<\EOF
  [Unit]
  Description=kubelet: The Kubernetes Node Agent
  Documentation=https://kubernetes.io/docs/home/
  Wants=network-online.target
  After=network-online.target

  [Service]
  ExecStart=/usr/local/kubernetes/bin/kubelet
  Restart=always
  StartLimitInterval=0
  RestartSec=10

  [Install]
  WantedBy=multi-user.target
  EOF
  ```

- Configure kubelet's kubeadm drop-in parameters

  ```
  mkdir -p /etc/systemd/system/kubelet.service.d
  cat > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<\EOF
  # Note: This dropin only works with kubeadm and kubelet v1.11+
  [Service]
  Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
  Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
  EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
  # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
  EnvironmentFile=-/etc/default/kubelet
  ExecStart=
  ExecStart=/usr/local/kubernetes/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
  EOF
  ```

- Start kubelet and enable it at boot

  ```
  systemctl daemon-reload && systemctl restart kubelet && systemctl enable kubelet
  ```

  - kubelet stays in a waiting state until `kubeadm init`/`kubeadm join` completes; there is no need to check the service status after the restart, as the service comes up automatically once cluster initialization finishes.

- Add the K8S command path to the environment variables

  ```
  cat > /etc/profile.d/kubernetes.sh <<'EOF'
  export PATH=/usr/local/kubernetes/bin/:$PATH
  EOF
  source /etc/profile.d/kubernetes.sh
  ```

- Configure the crictl runtime endpoint

  ```
  crictl config runtime-endpoint unix:///run/containerd/containerd.sock
  ```

  Verify the configuration:

  ```
  crictl config --get runtime-endpoint
  ```
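`crictl config` persists its settings in `/etc/crictl.yaml`. After the command above, the file should contain at least the following line (a minimal sketch; additional defaults such as timeout or debug settings may also be present):

```yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
```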
Install the nerdctl tool (optional)
nerdctl is a command-line client for containerd with Docker-compatible syntax, which makes it convenient to manage containers, images, and other resources.

- Extract and install nerdctl (run inside the `1.35-k8s-amd64-pkg` package directory)

  ```
  tar -zxvf nerdctl-2.2.2-linux-amd64.tar.gz
  rm -f containerd-rootless*.sh
  mv nerdctl /usr/local/kubernetes/bin/
  ```

- Configure a command alias and apply it

  ```
  echo 'alias nerdctl="nerdctl -n k8s.io"' >> ~/.bashrc
  source ~/.bashrc
  ```

  - Run `nerdctl -v`; output of `nerdctl version 2.2.2` means the installation succeeded.
Install environment dependencies
Run the following on every Kubernetes cluster node.

- Install socat / conntrack

  - If the server has internet access:

    ```
    # CentOS / RedHat
    yum install -y socat conntrack-tools
    # Debian / Ubuntu
    apt install -y socat conntrack
    ```

  - If the server has no internet access:

    ```
    # socat offline package download link (for CentOS 7.9; re-download as needed for other OS versions)
    https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/socat-deps-centos7.tar.gz
    # Extract, then install
    tar -zxvf socat-deps-centos7.tar.gz
    rpm -Uvh --nodeps socat-deps-centos7/*.rpm

    # conntrack offline package download link (for CentOS 7.9; re-download as needed for other OS versions)
    https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.25.4/conntrack-tools-deps-centos7.tar.gz
    # Extract, then install
    tar -zxvf conntrack-tools-deps-centos7.tar.gz
    rpm -Uvh --nodeps conntrack-tools-deps-centos7/*.rpm
    ```

- Verify that all commands are ready

  ```
  crictl --version && containerd --version && runc -v | grep version && kubeadm version && kubelet --version && kubectl version --client=true && socat -V | grep 'socat version' && conntrack --version && echo ok || echo error
  ```

  - A final output of `ok` means everything is in place; if it prints `error`, install whichever command is missing based on the error message.
Adjust the kernel configuration
Run the following on every Kubernetes cluster node.

- Load the required kernel modules persistently

  ```
  cat > /etc/modules-load.d/kubernetes.conf <<EOF
  overlay
  br_netfilter
  ip_vs
  ip_vs_rr
  ip_vs_wrr
  ip_vs_sh
  EOF
  ```

- Load the kernel modules immediately

  ```
  modprobe overlay
  modprobe br_netfilter
  modprobe ip_vs
  modprobe ip_vs_rr
  modprobe ip_vs_wrr
  modprobe ip_vs_sh
  ```

- Append the kernel parameters and apply them

  ```
  cp -rfp /etc/sysctl.d/99-sysctl.conf /etc/sysctl.d/99-sysctl.conf.backup-$(date +%Y%m%d%H%M%S)
  cat >> /etc/sysctl.d/99-sysctl.conf <<EOF
  net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  net.ipv4.ip_forward = 1
  vm.max_map_count = 262144
  # MD Config
  net.nf_conntrack_max = 524288
  net.ipv4.tcp_max_tw_buckets = 5000
  net.ipv4.tcp_window_scaling = 1
  net.ipv4.tcp_rmem = 8192 87380 16777216
  net.ipv4.tcp_wmem = 8192 65536 16777216
  net.ipv4.tcp_max_syn_backlog = 32768
  net.core.netdev_max_backlog = 32768
  net.core.netdev_budget = 600
  net.core.somaxconn = 32768
  net.core.wmem_default = 8388608
  net.core.rmem_default = 8388608
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.ipv4.tcp_timestamps = 1
  net.ipv4.tcp_synack_retries = 2
  net.ipv4.tcp_syn_retries = 2
  net.ipv4.tcp_tw_recycle = 0
  net.ipv4.tcp_tw_reuse = 1
  net.ipv4.tcp_fin_timeout = 2
  net.ipv4.tcp_mem = 8388608 12582912 16777216
  net.ipv4.ip_local_port_range = 1024 65000
  net.ipv4.tcp_max_orphans = 16384
  net.ipv4.tcp_keepalive_intvl = 10
  net.ipv4.tcp_keepalive_probes = 3
  net.ipv4.tcp_keepalive_time = 600
  net.netfilter.nf_conntrack_tcp_be_liberal = 0
  net.netfilter.nf_conntrack_tcp_max_retrans = 3
  net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
  net.netfilter.nf_conntrack_tcp_timeout_established = 86400
  fs.inotify.max_user_watches = 10485760
  fs.inotify.max_user_instances = 10240
  EOF
  sysctl --system
  ```
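Because the block above is appended with `cat >>`, re-running the step duplicates keys (sysctl applies the last value, but the file becomes hard to audit). A small sketch for detecting duplicated keys, demonstrated on a sample file (hypothetical helper; on a real node point it at `/etc/sysctl.d/99-sysctl.conf`):

```shell
# Sample sysctl fragment with one key accidentally duplicated
cat > /tmp/sysctl.sample <<'EOF'
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
vm.max_map_count = 262144
EOF

# List every non-comment key that appears more than once
awk -F' *= *' '/^[^#]/ {count[$1]++} END {for (k in count) if (count[k] > 1) print k}' \
  /tmp/sysctl.sample | tee /tmp/sysctl-dups.txt
```

An empty result means the file is clean; otherwise remove the extra occurrences before running `sysctl --system` again.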
Prepare the K8S environment images
Run the following on every Kubernetes cluster node.

- Download the offline images

  - If the server has internet access:

    ```
    wget https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.35.3/kubeadm-1.35.3-images-amd64.tar.gz
    ```

  - If the server has no internet access, download the image package from the link below on another machine, then upload it to the target server:

    ```
    https://pdpublic.mingdao.com/private-deployment/offline/common/kubernetes-1.35.3/kubeadm-1.35.3-images-amd64.tar.gz
    ```

- Extract and import the images

  ```
  gunzip -d kubeadm-1.35.3-images-amd64.tar.gz
  ctr -n k8s.io image import kubeadm-1.35.3-images-amd64.tar
  ```

- After the import completes, run `crictl images` to verify. Example output:

  ```
  IMAGE                                    TAG       IMAGE ID        SIZE
  127.0.0.1:5000/cni                       v3.31.4   c433a27dd94ce   72.2MB
  127.0.0.1:5000/coredns                   v1.13.1   aa5e3ebc0dfed   23.6MB
  127.0.0.1:5000/etcd                      3.6.6-0   0a108f7189562   23.6MB
  127.0.0.1:5000/kube-apiserver            v1.35.3   0f2b96c93465f   27.6MB
  127.0.0.1:5000/kube-controller-manager   v1.35.3   0eb506280f9bc   23MB
  127.0.0.1:5000/kube-controllers          v3.31.4   ff033cc89dab5   54MB
  127.0.0.1:5000/kube-proxy                v1.35.3   53ed370019059   25.7MB
  127.0.0.1:5000/kube-scheduler            v1.35.3   87c9b0e4f80d3   17.1MB
  127.0.0.1:5000/node                      v3.31.4   e6536b93706ed   160MB
  127.0.0.1:5000/pause                     3.10.1    cd073f4c5f6a8   320kB
  127.0.0.1:5000/pilot                     1.29.1    cd8219a164d79   73.4MB
  127.0.0.1:5000/proxyv2                   1.29.1    599aea7eee05d   89.1MB
  ```
Master node configuration
Run the following only on the Kubernetes 01 node.

- Initialize the Master node, using either of the two methods below.

  - Command-line initialization:

    ```
    kubeadm init --cri-socket unix:///var/run/containerd/containerd.sock -v 5 --kubernetes-version=1.35.3 --image-repository=127.0.0.1:5000 --pod-network-cidr=10.244.0.0/16
    ```

  - kubeadm-config.yaml initialization:

    - Generate the kubeadm-config.yaml configuration file

      ```
      cd /usr/local/kubernetes/
      kubeadm config print init-defaults > /usr/local/kubernetes/kubeadm-config.yaml
      ```

    - Modify the configuration file

      ```
      # Change the image repository address
      sed -ri 's|imageRepository.*|imageRepository: 127.0.0.1:5000|' /usr/local/kubernetes/kubeadm-config.yaml
      # Configure the Pod subnet
      sed -ri '/serviceSubnet/a \ \ podSubnet: 10.244.0.0\/16' /usr/local/kubernetes/kubeadm-config.yaml
      # Change the node IP address (detected automatically)
      sed -ri 's|advertiseAddress.*|advertiseAddress: '$(hostname -I | awk '{print $1}')'|' /usr/local/kubernetes/kubeadm-config.yaml
      # Change the etcd data directory
      sed -ri 's|dataDir:.*|dataDir: /data/etcd|' /usr/local/kubernetes/kubeadm-config.yaml
      # Change the node name (detected automatically)
      sed -ri 's|name:.*|name: '$(hostname)'|' /usr/local/kubernetes/kubeadm-config.yaml
      # Change the Kubernetes version
      sed -ri 's|kubernetesVersion.*|kubernetesVersion: 1.35.3|' /usr/local/kubernetes/kubeadm-config.yaml
      # Verify the changes
      grep -E 'advertiseAddress|controlPlaneEndpoint|name|imageRepository|dataDir|podSubnet|kubernetesVersion' /usr/local/kubernetes/kubeadm-config.yaml
      ```

      Example output:

      ```
      advertiseAddress: 192.168.10.20
      name: service01
      dataDir: /data/etcd
      imageRepository: 127.0.0.1:5000
      kubernetesVersion: 1.35.3
      podSubnet: 10.244.0.0/16
      ```

      Full kubeadm-config.yaml example:

      ```
      apiVersion: kubeadm.k8s.io/v1beta3
      bootstrapTokens:
      - groups:
        - system:bootstrappers:kubeadm:default-node-token
        token: abcdef.0123456789abcdef
        ttl: 24h0m0s
        usages:
        - signing
        - authentication
      kind: InitConfiguration
      localAPIEndpoint:
        advertiseAddress: 192.168.10.20 # master ip
        bindPort: 6443
      nodeRegistration:
        criSocket: unix:///var/run/containerd/containerd.sock
        imagePullPolicy: IfNotPresent
        name: service01 # master hostname
        taints: null
      ---
      apiServer:
        timeoutForControlPlane: 4m0s
      apiVersion: kubeadm.k8s.io/v1beta3
      certificatesDir: /etc/kubernetes/pki
      clusterName: kubernetes
      controllerManager: {}
      dns: {}
      etcd:
        local:
          dataDir: /data/etcd
      imageRepository: 127.0.0.1:5000
      kind: ClusterConfiguration
      kubernetesVersion: 1.35.3
      networking:
        dnsDomain: cluster.local
        serviceSubnet: 10.96.0.0/12
        podSubnet: 10.244.0.0/16
      scheduler: {}
      ```

    - Initialize the Master node

      ```
      # List the required images
      kubeadm config images list --config /usr/local/kubernetes/kubeadm-config.yaml
      # Pull the images
      kubeadm config images pull --config /usr/local/kubernetes/kubeadm-config.yaml
      # Preflight check
      kubeadm init phase preflight --config=/usr/local/kubernetes/kubeadm-config.yaml
      # Run the initialization
      kubeadm init --config=/usr/local/kubernetes/kubeadm-config.yaml --upload-certs --v=6
      ```

  - On success, initialization prints a `kubeadm join` command; save that output, as it is needed when worker nodes join the cluster.
  - If initialization fails, run `kubeadm reset -f` to reset, then retry.

- Extend the usable NodePort port range

  ```
  sed -i '/- kube-apiserver/a\ \ \ \ - --service-node-port-range=1024-32767' /etc/kubernetes/manifests/kube-apiserver.yaml
  ```

- Configure kubectl access credentials on the master node

  ```
  echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile.d/kubernetes.sh
  source /etc/profile.d/kubernetes.sh
  ```

- Raise the Master node Pod limit

  ```
  echo "maxPods: 300" >> /var/lib/kubelet/config.yaml
  systemctl restart kubelet
  ```

- Allow the Master node to schedule Pods

  ```
  kubectl taint node $(kubectl get node | grep control-plane | awk '{print $1}') node-role.kubernetes.io/control-plane:NoSchedule-
  ```

  - Wait about 2 minutes after initialization before running this command.
  - Expected output:
    - First run: `<node name> untainted`, meaning the taint was removed and the node may now schedule Pods.
    - Subsequent runs: `error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found` means the taint is already gone; no further action is needed.
    - Note: if the output differs, the system components may not yet be fully ready; wait a moment and retry.

- Configure kubectl command completion (optional)

  If the server has internet access or an internal yum/apt repository, you can install bash-completion to enable tab completion for kubectl subcommands.

  ```
  # Install bash-completion
  # Debian family:
  apt -y install bash-completion
  # CentOS family:
  yum -y install bash-completion
  # Add the Bash completion configuration
  grep -q "kubectl completion" ~/.bashrc || echo "source <(kubectl completion bash)" >> ~/.bashrc && source ~/.bashrc
  ```
Install the Calico network plugin

- Move calico.yaml to the installation directory (run inside the `1.35-k8s-amd64-pkg` package directory)

  ```
  mv calico.yaml /usr/local/kubernetes/
  ```

- Modify the configuration file

  - Replace the image repository addresses in bulk to use local images

    ```
    sed -ri 's|image: quay.io/calico|image: 127.0.0.1:5000|g' /usr/local/kubernetes/calico.yaml
    grep image: /usr/local/kubernetes/calico.yaml
    ```

    Example output:

    ```
    image: 127.0.0.1:5000/cni:v3.31.4
    image: 127.0.0.1:5000/cni:v3.31.4
    image: 127.0.0.1:5000/node:v3.31.4
    image: 127.0.0.1:5000/node:v3.31.4
    image: 127.0.0.1:5000/kube-controllers:v3.31.4
    ```

  - Configure the Pod subnet

    ```
    sed -ri '/# - name: CALICO_IPV4POOL_CIDR/,/# value: ".*"/ {
      s/# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/
      s/# value: ".*"/  value: "10.244.0.0\/16"/
    }' /usr/local/kubernetes/calico.yaml
    grep -C 2 CALICO_IPV4POOL_CIDR /usr/local/kubernetes/calico.yaml
    ```

    Example output:

    ```
    # chosen from this range. Changing this value after installation will have
    # no effect. This should fall within `--cluster-cidr`.
    - name: CALICO_IPV4POOL_CIDR
      value: "10.244.0.0/16"
    # Disable file logging so `kubectl logs` works.
    ```

  - Configure the CNI binary path

    ```
    sed -i '/- name: cni-bin-dir/,/type:/s|path: .*|path: /usr/local/kubernetes/cni/bin|' /usr/local/kubernetes/calico.yaml
    grep -C 2 cni-bin-dir /usr/local/kubernetes/calico.yaml
    ```

    Example output:

    ```
    name: host-local-net-dir
    - mountPath: /host/opt/cni/bin
      name: cni-bin-dir
    securityContext:
      privileged: true
    --
    volumeMounts:
    - mountPath: /host/opt/cni/bin
      name: cni-bin-dir
    - mountPath: /host/etc/cni/net.d
      name: cni-net-dir
    --
    path: /proc
    # Used to install CNI.
    - name: cni-bin-dir
      hostPath:
        path: /usr/local/kubernetes/cni/bin
    ```

- Deploy Calico

  ```
  kubectl apply -f /usr/local/kubernetes/calico.yaml
  ```

- Check the Calico service status

  ```
  kubectl get pod -n kube-system -l k8s-app=calico-node
  kubectl get pod -n kube-system -l k8s-app=calico-kube-controllers
  ```

  Under normal conditions, every Pod's `READY` column is `1/1` and its `STATUS` is `Running`:

  ```
  NAME                                       READY   STATUS    RESTARTS   AGE
  calico-node-4xbtk                          1/1     Running   0          2m
  calico-node-9fzwp                          1/1     Running   0          2m
  calico-kube-controllers-7d9d9b7b9c-x2kqt   1/1     Running   0          2m
  ```

  - If `STATUS` stays at `Init` or `Pending` for a long time, run `kubectl describe pod <pod name> -n kube-system` to inspect the detailed events and diagnose the cause.
Worker node configuration
Run the following only on the Kubernetes 02 node.

- Join the Kubernetes cluster

  ```
  kubeadm join 192.168.10.20:6443 --token 3nwjzw.pdod3r27lnqqhi0x \
    --discovery-token-ca-cert-hash sha256:a84445303a0f8249e7eae3059cb99d46038dc275b2dc2043a022de187a1175a2
  ```

  - The command above is an example; use the actual `join` command printed by `kubeadm init` on the master node.
  - If it has been lost, run `kubeadm token create --print-join-command` on the master node to generate it again.

- Raise the worker node Pod limit

  ```
  echo "maxPods: 300" >> /var/lib/kubelet/config.yaml
  systemctl restart kubelet
  ```
Cluster status check

- Check node and Pod status

  ```
  kubectl get pod -n kube-system   # the READY column should be "1/1"
  kubectl get node                 # the STATUS column should be "Ready"
  ```

- Download and import the base image (run on every K8s cluster node)

  Image download link:

  ```
  https://pdpublic.mingdao.com/private-deployment/offline/common/centos7.9.2009.tar.gz
  ```

  Distribute the image file `centos7.9.2009.tar.gz` to every K8s cluster node, then import it:

  ```
  gunzip -d centos7.9.2009.tar.gz
  ctr -n k8s.io image import centos7.9.2009.tar
  ```

- Deploy a test application on the master node

  ```
  cat > /usr/local/kubernetes/test.yaml <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test
    namespace: default
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: test
    template:
      metadata:
        labels:
          app: test
        annotations:
          md-update: '20200517104741'
      spec:
        containers:
        - name: test
          image: centos:7.9.2009
          command:
          - sh
          - -c
          - |
            echo $(hostname) > hostname.txt
            python -m SimpleHTTPServer
          resources:
            limits:
              memory: 512Mi
              cpu: 1
            requests:
              memory: 64Mi
              cpu: 0.01
          volumeMounts:
          - name: tz-config
            mountPath: /etc/localtime
        volumes:
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Etc/GMT-8
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: test
    namespace: default
  spec:
    selector:
      app: test
    ports:
    - name: external-test
      port: 8000
      targetPort: 8000
      nodePort: 8000
    type: NodePort
  EOF
  ```

- Start the service

  ```
  kubectl apply -f /usr/local/kubernetes/test.yaml
  ```

- Check Pod status

  ```
  kubectl get pod -o wide
  ```

- Verify service access

  ```
  curl 127.0.0.1:8000/hostname.txt
  ```

  - Running this repeatedly should return different Pods' hostnames, which confirms that load balancing is working.
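The repeated-curl check can be scripted. The sketch below (a hypothetical helper, not part of the original guide) shows the counting logic on simulated responses; on a live cluster you would replace the simulated list with the output of `for i in $(seq 10); do curl -s 127.0.0.1:8000/hostname.txt; done`:

```shell
# Simulated responses standing in for repeated curl calls against the
# NodePort service (hypothetical Pod names; use real curl output on a cluster)
responses='test-6d4f5b-abc
test-6d4f5b-def
test-6d4f5b-abc
test-6d4f5b-ghi'

# Count how many distinct Pod hostnames answered; with replicas: 3,
# up to 3 distinct names are expected
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l | tr -d ' ')
echo "distinct backends: $distinct" | tee /tmp/lb-check.txt
```

Seeing more than one distinct backend confirms that the Service is spreading requests across Pods.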
- Once testing succeeds, the test service can be removed

  ```
  kubectl delete -f /usr/local/kubernetes/test.yaml
  ```