Deploy Dedicated Computing Services
Configure Dedicated Computing Server
Joining the Kubernetes Cluster
- Deploy the Kubernetes node environment on the dedicated computing server in advance.
- Obtain the command to add a new node to the cluster from the Kubernetes master node:
kubeadm token create --print-join-command
- Execute the join command on the dedicated computing server:
kubeadm join 192.168.1.100:6443 --token 3nwjzw.pdod3r27lnqqhi0x --discovery-token-ca-cert-hash sha256:a84445303xxxxxxxxxxxxxxxxxxx175a2
Download Images
- Download the computing instance image (this only needs to be done on the dedicated computing server).
- Server with Internet Access
- Server without Internet Access
crictl pull registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-computinginstance:6.4.0
# Image file download link, upload to the deployment server after downloading
https://pdpublic.mingdao.com/private-deployment/offline/mingdaoyun-computinginstance-linux-amd64-6.4.0.tar.gz
# Decompress the image
gunzip -d mingdaoyun-computinginstance-linux-amd64-6.4.0.tar.gz
# Import the image
ctr -n k8s.io image import mingdaoyun-computinginstance-linux-amd64-6.4.0.tar
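The decompress step can be sanity-checked on a throwaway file before touching the real archive; this sketch (all file names illustrative) shows that gunzip -d strips the .gz suffix and leaves the .tar that ctr expects:

```shell
# Scratch-file sketch of the decompress step; the real archive is
# mingdaoyun-computinginstance-linux-amd64-6.4.0.tar.gz
workdir=$(mktemp -d)
cd "$workdir"
printf 'demo\n' > payload
tar -cf demo.tar payload   # stand-in for the downloaded image archive
gzip demo.tar              # produces demo.tar.gz
gunzip -d demo.tar.gz      # restores demo.tar, the form `ctr -n k8s.io image import` expects
ls demo.tar
```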
Add Taint and Label
- Obtain the node name of the dedicated computing node:
kubectl get node
- Add a taint and a label to the dedicated computing node:
kubectl taint node $your_node_name md=workflowcompute:NoSchedule
kubectl label node $your_node_name md=workflowcompute
Note: replace $your_node_name with the actual node name.
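The taint added above is exactly what the workflowcompute Deployment later tolerates. Its three parts can be read off the key=value:effect spec with plain shell string splitting (no cluster needed):

```shell
# Split the taint spec used above into the fields that must match the
# tolerations (key/value/effect) and the nodeSelector (md: workflowcompute) later on
taint="md=workflowcompute:NoSchedule"
key=${taint%%=*}       # md
rest=${taint#*=}
value=${rest%%:*}      # workflowcompute
effect=${rest#*:}      # NoSchedule
echo "key=$key value=$value effect=$effect"
```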
Deploy Dedicated Computing Services
Dedicated computing is supported starting from version v5.1.0.
- If the manager version at first deployment is >=5.1.0, a default computinginstance.yaml file is generated; usually you only need to modify this existing configuration file.
- If the manager version at first deployment is <5.1.0, a new computinginstance.yaml file must be added and the relevant scripts modified.
- Manager Version >=5.1.0 on First Deployment
- Install Manager Version <5.1.0 on First Deployment
Manager Version >=5.1.0 on First Deployment
- Execute cat $KUBECONFIG to obtain the $KUBECONFIG content (corresponding to the /etc/kubernetes/admin.conf file in Kubernetes):
cat $KUBECONFIG
Output sample:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0...
    server: https://192.168.0.150:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0...
Note: in the next step, copy the cat $KUBECONFIG output into the computinginstance.yaml file, and be sure to add four spaces at the beginning of each line when copying.
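Adding the required four spaces by hand is error-prone; a sed one-liner can indent every line of the kubeconfig for you. A minimal sketch, run here on a two-line stand-in rather than a real kubeconfig:

```shell
# Indent every line by four spaces, as required when pasting the kubeconfig
# into computinginstance.yaml; in practice: sed 's/^/    /' "$KUBECONFIG"
indented=$(printf 'apiVersion: v1\nkind: Config\n' | sed 's/^/    /')
echo "$indented"
```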
- Modify the /data/mingdao/script/kubernetes/computinginstance.yaml file. For the first deployment, make sure to modify the following points in computinginstance.yaml:
- kafka.brokers: Change to the Kafka address connected by the microservices
- syncStatus.mongoUri: Change to the actual MongoDB connection information
- image: Change to the same computing instance version number as the microservice
- kubeconfig.yaml: Change to the actual content of $KUBECONFIG, remember to add four spaces before each line of the original $KUBECONFIG content
Sample Modification:
apiVersion: v1
kind: ConfigMap
metadata:
name: computinginstance
namespace: default
data:
config.yaml: |-
server:
listen:
host: "0.0.0.0"
port: "9157"
common:
kubernetes:
configFile: "/usr/local/computinginstance/kubeconfig.yaml"
namespace: "default"
configmapTemplate: "/usr/local/computinginstance/configmap-workflowcompute.yaml"
deploymentTemplate: "/usr/local/computinginstance/deployment-workflowcompute.yaml"
kafka:
brokers: "192.168.0.144:9092" # Modify to the kafka address connected by the microservice
workflowTopicPrefix: "WorkFlow-"
workSheetTopicPrefix: "WorkSheet-"
workflowConsumerIdPrefix: "md-workflow-consumer-"
replicationFactor: 1
deleteTopic: true
callback:
url: "http://computingschedule:9158"
createInterval: 120000 #ms
deleteInterval: 120000 #ms
syncStatus:
mongoUri: "mongodb://mingdao:123456@192.168.0.144:27017/mdIdentification" # Modify to the actual mongodb connection information
interval: 30000 # ms
model:
10:
replicas: 1 # Number of instances
thread: 2 # Number of threads per type, total = 5*thread*replicas=10
cpu: 2 # Maximum CPU cores per instance, total = cpu*replicas=2
memory: 4 # Maximum memory in GB per instance, total = memory*replicas=4
20:
replicas: 1
thread: 4
cpu: 4
memory: 8
50:
replicas: 1
thread: 10
cpu: 8
memory: 16
100:
replicas: 1
thread: 20
cpu: 16
memory: 32
configmap-workflowcompute.yaml: |-
apiVersion: v1
kind: ConfigMap
metadata:
name: workflowcompute
data:
application-www-computing.properties: |-
md.resource.consumer.config.maps={\
'resourceId': 'CONFIGMAP_INSTANCEID', \
'wfTopic': 'WorkFlow-CONFIGMAP_INSTANCEID', \
'wsTopic': 'WorkSheet-CONFIGMAP_INSTANCEID', \
'partition': 'CONFIGMAP_WORKFLOW_PARTITION' \
}
md.kafka.consumer.topic=WorkSheet-CONFIGMAP_INSTANCEID
md.kafka.consumer.group.id=md-workflow-consumer-CONFIGMAP_INSTANCEID
md.kafka.consumer.concurrency=CONFIGMAP_WORKFLOW_THREAD
md.kafka.batch.topic=WorkSheet-CONFIGMAP_INSTANCEID
md.kafka.batch.concurrency=CONFIGMAP_WORKFLOW_THREAD
spring.kafka.batch.topic=WorkFlow-CONFIGMAP_INSTANCEID
spring.kafka.batch.concurrency=CONFIGMAP_WORKFLOW_THREAD
spring.kafka.button.topic=WorkFlow-CONFIGMAP_INSTANCEID
spring.kafka.button.concurrency=CONFIGMAP_WORKFLOW_THREAD
spring.kafka.process.topic=WorkFlow-CONFIGMAP_INSTANCEID
spring.kafka.process.concurrency=CONFIGMAP_WORKFLOW_THREAD
spring.kafka.consumer.topic=WorkFlow-CONFIGMAP_INSTANCEID
spring.kafka.consumer.group.id=md-workflow-consumer-CONFIGMAP_INSTANCEID
spring.kafka.consumer.concurrency=CONFIGMAP_WORKFLOW_THREAD
spring.kafka.properties.partition=CONFIGMAP_WORKFLOW_PARTITION
grpc.client.MDWorksheetService.address=static://127.0.0.1:9422
spring.kafka.router.topic=WorkFlow-CONFIGMAP_INSTANCEID
deployment-workflowcompute.yaml: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: workflowcompute-DEPLOYMENT_INSTANCEID
labels:
app: workflowcompute-DEPLOYMENT_INSTANCEID
md-service: workflowcompute
spec:
replicas: DEPLOYMENT_REPLICAS
selector:
matchLabels:
app: workflowcompute-DEPLOYMENT_INSTANCEID
template:
metadata:
labels:
app: workflowcompute-DEPLOYMENT_INSTANCEID
md-service: workflowcompute
annotations:
md-update: '20231228184263'
spec:
imagePullSecrets:
- name: hub.mingdao.com
tolerations:
- key: "md"
operator: "Equal"
value: "workflowcompute"
effect: "NoSchedule"
nodeSelector:
md: workflowcompute
containers:
- name: workflow-consumer
image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-computinginstance:6.4.0 # computinginstance image version
env:
- name: ENV_SERVERID
value: "single:workflowconsumer"
command:
- sh
- -c
- |
cp /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties.template /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties
sed -i s/CONFIGMAP_INSTANCEID/DEPLOYMENT_INSTANCEID/g /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties
sed -i s/CONFIGMAP_WORKFLOW_THREAD/DEPLOYMENT_WORKFLOW_THREAD/g /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties
sed -i s/CONFIGMAP_WORKFLOW_PARTITION/DEPLOYMENT_WORKFLOW_PARTITION/g /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties
cat /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties
sleep 20
exec /Housekeeper/main -config /Housekeeper/config.yaml
resources:
limits:
memory: DEPLOYMENT_MEMORYGi
cpu: DEPLOYMENT_CPU
requests:
memory: 1Gi
cpu: 0.25
volumeMounts:
- name: tz-config
mountPath: /etc/localtime
- name: workflowcompute
mountPath: /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties.template
subPath: application-www-computing.properties
- name: worksheetservice
image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-computinginstance:6.4.0 # computinginstance image version
env:
- name: ENV_SERVERID
value: "single:worksheet"
command:
- sh
- -c
- |
cat /usr/local/MDPrivateDeployment/worksheet/Config/appsettingsMain.json
exec /Housekeeper/main -config /Housekeeper/config.yaml
resources:
limits:
memory: DEPLOYMENT_MEMORYGi
cpu: DEPLOYMENT_CPU
requests:
memory: 1Gi
cpu: 0.25
readinessProbe:
tcpSocket:
port: 9422
initialDelaySeconds: 10
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 9422
initialDelaySeconds: 60
periodSeconds: 10
volumeMounts:
- name: tz-config
mountPath: /etc/localtime
volumes:
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Etc/GMT-8
- name: workflowcompute
configMap:
name: workflowcompute
items:
- key: application-www-computing.properties
path: application-www-computing.properties
kubeconfig.yaml: |- # Change to the actual content of $KUBECONFIG below, make sure to add four spaces before each line of the original $KUBECONFIG content
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0...
server: https://192.168.0.150:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0...
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: computinginstance
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: computinginstance
template:
metadata:
labels:
app: computinginstance
dir: grpc
annotations:
md-update: "20231228184309"
spec:
imagePullSecrets:
- name: hub.mingdao.com
containers:
- name: computinginstance
image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-computinginstance:6.4.0 # computinginstance image version
resources:
limits:
cpu: "2"
memory: 2G
requests:
cpu: "0.01"
memory: 128M
readinessProbe:
tcpSocket:
port: 9157
initialDelaySeconds: 3
periodSeconds: 3
livenessProbe:
tcpSocket:
port: 9157
initialDelaySeconds: 20
periodSeconds: 10
volumeMounts:
- name: configfile
mountPath: /usr/local/computinginstance/config.yaml
subPath: config.yaml
- name: configfile
mountPath: /usr/local/computinginstance/configmap-workflowcompute.yaml
subPath: configmap-workflowcompute.yaml
- name: configfile
mountPath: /usr/local/computinginstance/deployment-workflowcompute.yaml
subPath: deployment-workflowcompute.yaml
- name: configfile
mountPath: /usr/local/computinginstance/kubeconfig.yaml
subPath: kubeconfig.yaml
- name: tz-config
mountPath: /etc/localtime
volumes:
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
- name: configfile
configMap:
name: computinginstance
items:
- key: config.yaml
path: config.yaml
- key: configmap-workflowcompute.yaml
path: configmap-workflowcompute.yaml
- key: deployment-workflowcompute.yaml
path: deployment-workflowcompute.yaml
- key: kubeconfig.yaml
path: kubeconfig.yaml
---
apiVersion: v1
kind: Service
metadata:
name: computinginstance
namespace: default
spec:
selector:
app: computinginstance
ports:
- name: http-computinginstance
port: 9157
targetPort: 9157
nodePort: 9157
type: NodePort
- Restart the service:
cd /data/mingdao/script/kubernetes/
bash restart.sh
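The totals quoted in the model section of config.yaml follow from simple arithmetic; this sketch recomputes them for the "10" spec (the factor 5 comes from the "total = 5*thread*replicas" comment in the config itself):

```shell
# Recompute the totals from the model "10" spec comments:
# threads = 5 * thread * replicas, cpu = cpu * replicas, memory = memory * replicas
replicas=1; thread=2; cpu=2; memory=4
total_threads=$((5 * thread * replicas))
total_cpu=$((cpu * replicas))
total_memory=$((memory * replicas))
echo "threads=$total_threads cpu=$total_cpu memoryGB=$total_memory"
```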
Install Manager Version <5.1.0 on First Deployment
- Modify /data/mingdao/script/kubernetes/start.sh:
baseDir=$(dirname $0)
kubectl apply -f $baseDir/config.yaml
sleep 30
kubectl apply -f $baseDir/service.yaml
kubectl apply -f $baseDir/datapipeline.yaml
kubectl apply -f $baseDir/wps.yaml
kubectl apply -f $baseDir/computinginstance.yaml # This line is newly added
sleep 180
kubectl apply -f $baseDir/www.yaml
- Modify /data/mingdao/script/kubernetes/stop.sh:
baseDir=$(dirname $0)
kubectl delete -f $baseDir/www.yaml
kubectl delete -f $baseDir/service.yaml
kubectl delete -f $baseDir/datapipeline.yaml
kubectl delete -f $baseDir/wps.yaml
kubectl delete -f $baseDir/computinginstance.yaml # This line is newly added
kubectl delete -f $baseDir/config.yaml
- Modify /data/mingdao/script/kubernetes/update.sh:
baseDir=$(dirname $0)
imagePrefix="registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-"
function replaceEnv() {
local key=$1
local value=$2
local file=$3
sed -i "/$key/ s#\".*\"#\"$value\"#" $file
}
function replaceAppVersion() {
local version=$1
replaceEnv ENV_APP_VERSION $version $baseDir/config.yaml
}
function update() {
local service=$1
local version=$2
sed -r -i "/$(echo -n "${imagePrefix}$service" | sed -n 's#/#\\/#g;p')/ s/$service:.*$/$service:$version/" $baseDir/config.yaml
sed -r -i "/$(echo -n "${imagePrefix}$service" | sed -n 's#/#\\/#g;p')/ s/$service:.*$/$service:$version/" $baseDir/service.yaml
sed -r -i "/$(echo -n "${imagePrefix}$service" | sed -n 's#/#\\/#g;p')/ s/$service:.*$/$service:$version/" $baseDir/datapipeline.yaml
sed -r -i "/$(echo -n "${imagePrefix}$service" | sed -n 's#/#\\/#g;p')/ s/$service:.*$/$service:$version/" $baseDir/wps.yaml
# The following line is newly added
sed -r -i "/$(echo -n "${imagePrefix}computinginstance" | sed -n 's#/#\\/#g;p')/ s/computinginstance:.*$/computinginstance:$version/" $baseDir/computinginstance.yaml # This line is newly added
sed -r -i "/$(echo -n "${imagePrefix}$service" | sed -n 's#/#\\/#g;p')/ s/$service:.*$/$service:$version/" $baseDir/www.yaml
echo "Note: The version number of the ${service} service has been changed to ${version}, it will take effect after restart"
}
case $1 in
update)
if [[ $# -ne 3 ]];then
echo "Update the specific service version number, format example: $baseDir/$scriptName update community 3.1.0"
exit 1
fi
update $2 $3
replaceAppVersion $3
echo "Restarting services..."
/bin/bash $baseDir/restart.sh
# The following four lines are newly added
echo "Updating workflowcompute instance...: $3" # This line is newly added
curl -XPOST '127.0.0.1:9157/instance/update' -d '{"ContainerName": "workflow-consumer", "Image": "'${imagePrefix}computinginstance:$3'"}' -H 'Content-Type: application/json' # This line is newly added
sleep 5 # This line is newly added
curl -XPOST '127.0.0.1:9157/instance/update' -d '{"ContainerName": "worksheetservice", "Image": "'${imagePrefix}computinginstance:$3'"}' -H 'Content-Type: application/json' # This line is newly added
;;
*)
echo "Usage:
$0 <$(sed -n '/case \$1/,/esac/p' $0 | grep -oP '\b\w+\)$' | tr -d ')' | paste -s -d '|')>"
;;
esac
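The update() function above relies on a sed address built from the slash-escaped image prefix; the rewrite can be exercised against a scratch file to see exactly what changes (the file contents and target version here are illustrative):

```shell
# Reproduce the update() sed logic from update.sh against a scratch file
imagePrefix="registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-"
service=computinginstance
version=6.4.1
tmp=$(mktemp)
echo "image: ${imagePrefix}${service}:6.4.0" > "$tmp"
# escape the slashes so the prefix can be used inside a /.../ sed address
pattern=$(echo -n "${imagePrefix}$service" | sed -n 's#/#\\/#g;p')
sed -r -i "/$pattern/ s/$service:.*$/$service:$version/" "$tmp"
result=$(cat "$tmp")
echo "$result"
```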
- Run cat $KUBECONFIG to obtain the contents of $KUBECONFIG (corresponding to the /etc/kubernetes/admin.conf file in Kubernetes):
cat $KUBECONFIG
Example output:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJME1ERXlPVEE1TXpRMU5Wb1hEVE0wTURFeU5qQTVNelExTlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTjR3CklsY3RBNzhlTlRPZWJOK1RoUXdDL29SamtKMlJMeUlVZ2dncU1MSWlxWWpEbytqczNKbkdpNVU4cUIrZ3RlV1AKRDhTUEVlVUd0ZmNOOHoxTzlqK0Q1aFB3dkRPRDFyV2k3OEpnZWI3SURtdkU4L1VMdWdIYldkajlTckcrWTVEYwpDakQrcFduWHAzQUkzdjhwbm45a1ZJZ2Z3RldFcFVGQWI4Y2h0WSt3MU11ZjVhNVlhSTJ3NWtwOXhVS1lmV0g0CkFWUnlCUUFFVU5TUjE1Y3EyZ3Z2WDVTT1ZXUU0xcFJYNG1jcHovZ0toZ1pCYTBoSHZJaGRIRFBxQ0o0bUNYak4KbzgrWDYzNWZ4S3JPYTV4NTFhemVJMTljeE5lOXh0V25JdDUzLzJtWVlkYjIzWVI5UDRCK3U5NWVheWZWU2JwZQo4amhvdW1qVXFoSmNHMFdiY1pNQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZHMW1zZ01jVDBoOVpZbVlJU2pZK2wyNFRERFJNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSHFXUDhnVDFJbXBpL1M4NVIzQwpzU0Y2V0NISFFscU5uOTh3UHdhR0dEcmFZQnUwbnhSeFk5Q243cHk3TytzZ2hLeDRpLzR5N3ZKYi84MXZWTEx2CklORGhxL2N2enp1WWtHQ0J1QStXcFVMZmhneHFsb0F4MjF3eWNScCtUTlZ5UnphNXpmUkt6RUZiZml0WU1LQk0KNkVhZmk3d3lma2twNDBDYzFoKzJMM3N4WG1sSDUyaXA4REx3WDJGVEVXRjE3VnE5WTBRdXZDcGpBSUlnY09aSgpGQjNCMHhiK0ZsUGsxZC9vakNHSzVOdDd6Z2VWQmN1VkhGSUhITSs5eWQ0dHJCVVViR05vVCszWVJXU0hxTGJECml3LzdQVDU2SFpheHA1VmUvN2p1cTZEbkRIYXQ5ZW9Idk5acEcvNUJMUW1SWUUyOFNaU3lvUnd5V3pUMkpTb24KOE8wPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://192.168.0.150:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJWnJObmsyZ1RLRmd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRBeE1qa3dPVE0wTlRWYUZ3MHlOVEF4TWpnd09UTTBOVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXBhMk1zT1JkNHJ5djFhdEsKbkcrZzZiUkhJZ28rbEVkL09qSThTVU9BVlppV0tNY2pFaU81TXpkSXhaZW8yNFIxYzBweTNyei9WQ1FkNkZTdQpmSkFydDJodkZEczFvWGN2UEZ6d2xKRWZpblRNZkdqVndhZTkyNGZrak15MDU4d0pFbjVEVDJYa294QmhyOGhFClhSZ2dBbFpDL0R3dVA3ZjhBcmtyTC8wOG9TbmFhWm5sVkpKTU9qWnN5VFBCMnFTTXZselh1Z25vSW14Qlhkc0oKZkVScm5qQ3R2RDdxOU5MRmx5QVV0OWRsMVNUMDRocDh0L1hBVVAyazZMcjQyUXZCdEVnN1RaR1pBTEp5NEdpUwozLyswTU9NcmVMYjVnRGRIOTExRTV2d3JteU4zdUNDSkxjbWVoUEh2UWJuYVVUNGtod2lkMHdBcDNJNnhCYkNpCjRNaFNGd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JSdFpySURIRTlJZldXSm1DRW8yUHBkdUV3dwowVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBUVVkeEVxUkFzZ1ZmWFAzS0tuaG4veTRmcUpVdXk4ZzRPU2N0CmtOWmRkdFhNZ0xYK3RMekVIZnlxTXZ4cFlzMGhLQWNHZWlaRHkwajkrSFZ4L00zZWVmNUdGOWVTRHhvYy9Gd3QKQ2NxUGV2R1lKOVpaTGhacERENEN0L1dURjJ3eXRjdGo4UkdBVGsxcXl5Vkh5Vi82NGNSc2phYTIyQTNGVUFDNAovS2x5MHlBc25Ib0IyWS9SelJPTTJWLzhIdFFqRUg3RExzZHFpQVdIU2xOUGphdDM1NWg0ek10cW9jTWY0bzljCm0yd1RuaVp3QktBQmZVc3BNUFRlc2FITGw4Y1AzWklXbjF5QnJmcVFjenVWRnVtRjhqLy9QMTJ0YkxrVXAwSzEKbTNRclNDVnZDdnhOL1VjNGlmUFB1YkdRZDdudUc2RUdkdHVSeFB2RzdDem1sTjQ1amc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcGEyTXNPUmQ0cnl2MWF0S25HK2c2YlJISWdvK2xFZC9Pakk4U1VPQVZaaVdLTWNqCkVpTzVNemRJeFplbzI0UjFjMHB5M3J6L1ZDUWQ2RlN1ZkpBcnQyaHZGRHMxb1hjdlBGendsSkVmaW5UTWZHalYKd2FlOTI0ZmtqTXkwNTh3SkVuNURUMlhrb3hCaHI4aEVYUmdnQWxaQy9Ed3VQN2Y4QXJrckwvMDhvU25hYVpubApWSkpNT2pac3lUUEIycVNNdmx6WHVnbm9JbXhCWGRzSmZFUnJuakN0dkQ3cTlOTEZseUFVdDlkbDFTVDA0aHA4CnQvWEFVUDJrNkxyNDJRdkJ0RWc3VFpHWkFMSnk0R2lTMy8rME1PTXJlTGI1Z0RkSDkxMUU1dndybXlOM3VDQ0oKTGNtZWhQSHZRYm5hVVQ0a2h3aWQwd0FwM0k2eEJiQ2k0TWhTRndJREFRQUJBb0lCQUN0dzloTHJ6akpGaDFWZgpOSkVRTkFFVFpCTm8zRC9FLzNjaTlPdkE1MFdLWE5VVVlmMi9vQy90cndjZ1hRWXlGUm5GeTVqYnRaYzZZUjBxCkZ4WlNOeVJBSGVBUUpsL1FBSEt6YStHSXE5eUNBNXdiWVBFR0txSUZYOGdMWk9QaWUvNTlYT2pVcnI2UzdRcEsKV0tLUVVOUk1DZ1JaUTBjeDFzSmdDeExxTTV1T0VkVDVLcGNiaE9BR042TGNVNkduRWEvcjlnUitUL2tIenA4QgpEdVZsNmREcHNGUjFyTmNsbGVXMkJLNmlMd2x6LzR0TmhROUxMY3lkSEswdHI0VWtKZVp2YTFEWFM1RWNUeTRQCkY5MzVJLzhqNTQvYlZ0ZjJQMFdpSUJ2ZFRaWnp5cjVPWG5nMDlPZUN3bTdwd2JjSVlhRDV4YXROdjRxYVR1ci8KY0I1R3RBRUNnWUVBeTZwekptQWZNZkxmOWUrSHZnTDRBK0M1cndFQVJhcllnUFlsTjJER0hrVFZNK2c1eW5YaQpUVHJEdlRZbUhqTmtiUUorOE14U09pRHZjK2Z1VHJWRzRUamE1Slk3YlJ1eFVVdHRRa2MzWk9UWFdzWXlzUWJICmRxbDBLU0RrcGdCdmZSMndxblp2V2JSR09rQ05sdmRpYUZKT0t4cUduWjc3NU9PTnVmNzRRUWNDZ1lFQTBFQXQKNXp2ZDhzYnZJOFBzQ1pmZlFYM1o1TWgzYytDVUhVTzQ4Mkc5QUx1L2tiRWhPL1JxVVJuck5DanJlWU9zWDliVQovSGwxelZkMStvRGlLcE1IZEJ2RDFubFM0MDY0TURTMW9BUEtxZjFFN0EySEN5cVZJZTZMbGRnKzh2VElkYzRHCmovV0RqR0JKMUs2RXI0NGx2SXNWWnpRaUpXSGxYZTV2dkJxdDhuRUNnWUF2OEVUK2FXMnVVaDdKUXNKT3hXQWYKZVl2N1YxNzdCd1hEQlMwcFpjdjhYL05YTG5nNzRaZU0yaUlzclV3M201MHQwNEtScDJaTGJHa3dmUTBvMVo2RApjT0NGSVorSFJSZHRyVFZnZm1iWmhzdngvK2o2cGovWS9IWHRJR0x4ZC9UR0hIRHpEc0dTK2MzMTlDL2Zzd2NrCnl3cS9OcFV0RUxqMTNXSUV2N0VyalFLQmdRQ014alhCa0ZpeTJ3T2hPN245cWlxRFRSM1VhQ2RIcjlLd0RhMmkKNkxrcEc4R2VMUXo0U0hydUpBVTVGMGhHdGxuNTEzSFcwZ3h1S09kWjFYSU5zYUppUExxZjZ4ZTdET1c1d0lmZQppWEdnZzlMcGR5T0l5dHBSTzc0a0p6QTJjSDVxRkVHZll5bnY1TTlEOUUxQmwyZXZFcDMvUytDaFFKSWFjeW9aCkVEZnlJUUtCZ0JXUGRELzhSTFhVWm9oV2F
nTy9uMHRBUnlLMnladTA4RWFjcENXV2ZFNUdUZzF1cnVTNTdhT0IKMU1KeTFQNElnODJhVml4T291bkR3d2xzSHBjNXlHdVNBTERvYklkRnJoMHJVYVZLTTJZMDN1VzFBRnhzcnFFQQpWc0t0aUQ3WjMrRzM1cmFwRTkrcDhoR0EzaVF6eHl3YWdtRUI0Sm9WU0MxbHZJekc1ME1XCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
Note: in the next step, the output from cat $KUBECONFIG will be copied into the computinginstance.yaml file; before copying, add four spaces at the beginning of each line.
- Add the file /data/mingdao/script/kubernetes/computinginstance.yaml.
When deploying for the first time, be sure to modify the following in computinginstance.yaml:
- kafka.brokers: Change to the Kafka address used by microservices
- syncStatus.mongoUri: Change to the actual MongoDB connection details
- image: Change to the computinginstance version number consistent with the microservice
- kubeconfig.yaml: Change to the actual contents of $KUBECONFIG, be sure to add four spaces before each line of the original $KUBECONFIG content
Example computinginstance.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
name: computinginstance
namespace: default
data:
config.yaml: |-
server:
listen:
host: "0.0.0.0"
port: "9157"
common:
kubernetes:
configFile: "/usr/local/computinginstance/kubeconfig.yaml"
namespace: "default"
configmapTemplate: "/usr/local/computinginstance/configmap-workflowcompute.yaml"
deploymentTemplate: "/usr/local/computinginstance/deployment-workflowcompute.yaml"
kafka:
brokers: "192.168.0.144:9092" # Change to the Kafka address connected by the microservice
workflowTopicPrefix: "WorkFlow-"
workSheetTopicPrefix: "WorkSheet-"
workflowConsumerIdPrefix: "md-workflow-consumer-"
replicationFactor: 1
deleteTopic: true
callback:
url: "http://computingschedule:9158"
createInterval: 120000 #ms
deleteInterval: 120000 #ms
syncStatus:
mongoUri: "mongodb://mingdao:123456@192.168.0.144:27017/mdIdentification" # Change to actual MongoDB connection details
interval: 30000 # ms
model:
10:
replicas: 1 # Number of instances
thread: 2 # Number of threads per type, total=5*thread*replicas=10
cpu: 2 # Max CPU cores per instance, total=cpu*replicas=2
memory: 4 # Max memory (GB) per instance, total=memory*replicas=4
20:
replicas: 1
thread: 4
cpu: 4
memory: 8
50:
replicas: 1
thread: 10
cpu: 8
memory: 16
100:
replicas: 1
thread: 20
cpu: 16
memory: 32
configmap-workflowcompute.yaml: |-
apiVersion: v1
kind: ConfigMap
metadata:
name: workflowcompute
data:
application-www-computing.properties: |-
md.resource.consumer.config.maps={\
'resourceId': 'CONFIGMAP_INSTANCEID', \
'wfTopic': 'WorkFlow-CONFIGMAP_INSTANCEID', \
'wsTopic': 'WorkSheet-CONFIGMAP_INSTANCEID', \
'partition': 'CONFIGMAP_WORKFLOW_PARTITION' \
}
md.kafka.consumer.topic=WorkSheet-CONFIGMAP_INSTANCEID
md.kafka.consumer.group.id=md-workflow-consumer-CONFIGMAP_INSTANCEID
md.kafka.consumer.concurrency=CONFIGMAP_WORKFLOW_THREAD
md.kafka.batch.topic=WorkSheet-CONFIGMAP_INSTANCEID
md.kafka.batch.concurrency=CONFIGMAP_WORKFLOW_THREAD
spring.kafka.batch.topic=WorkFlow-CONFIGMAP_INSTANCEID
spring.kafka.batch.concurrency=CONFIGMAP_WORKFLOW_THREAD
spring.kafka.button.topic=WorkFlow-CONFIGMAP_INSTANCEID
spring.kafka.button.concurrency=CONFIGMAP_WORKFLOW_THREAD
spring.kafka.process.topic=WorkFlow-CONFIGMAP_INSTANCEID
spring.kafka.process.concurrency=CONFIGMAP_WORKFLOW_THREAD
spring.kafka.consumer.topic=WorkFlow-CONFIGMAP_INSTANCEID
spring.kafka.consumer.group.id=md-workflow-consumer-CONFIGMAP_INSTANCEID
spring.kafka.consumer.concurrency=CONFIGMAP_WORKFLOW_THREAD
spring.kafka.properties.partition=CONFIGMAP_WORKFLOW_PARTITION
grpc.client.MDWorksheetService.address=static://127.0.0.1:9422
spring.kafka.router.topic=WorkFlow-CONFIGMAP_INSTANCEID
deployment-workflowcompute.yaml: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: workflowcompute-DEPLOYMENT_INSTANCEID
labels:
app: workflowcompute-DEPLOYMENT_INSTANCEID
md-service: workflowcompute
spec:
replicas: DEPLOYMENT_REPLICAS
selector:
matchLabels:
app: workflowcompute-DEPLOYMENT_INSTANCEID
template:
metadata:
labels:
app: workflowcompute-DEPLOYMENT_INSTANCEID
md-service: workflowcompute
annotations:
md-update: '20231228184263'
spec:
imagePullSecrets:
- name: hub.mingdao.com
tolerations:
- key: "md"
operator: "Equal"
value: "workflowcompute"
effect: "NoSchedule"
nodeSelector:
md: workflowcompute
containers:
- name: workflow-consumer
image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-computinginstance:6.4.0 # computinginstance image version
env:
- name: ENV_SERVERID
value: "single:workflowconsumer"
command:
- sh
- -c
- |
cp /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties.template /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties
sed -i s/CONFIGMAP_INSTANCEID/DEPLOYMENT_INSTANCEID/g /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties
sed -i s/CONFIGMAP_WORKFLOW_THREAD/DEPLOYMENT_WORKFLOW_THREAD/g /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties
sed -i s/CONFIGMAP_WORKFLOW_PARTITION/DEPLOYMENT_WORKFLOW_PARTITION/g /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties
cat /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties
sleep 20
exec /Housekeeper/main -config /Housekeeper/config.yaml
resources:
limits:
memory: DEPLOYMENT_MEMORYGi
cpu: DEPLOYMENT_CPU
requests:
memory: 1Gi
cpu: 0.25
volumeMounts:
- name: tz-config
mountPath: /etc/localtime
- name: workflowcompute
mountPath: /usr/local/MDPrivateDeployment/workflowconsumer/application-www-computing.properties.template
subPath: application-www-computing.properties
- name: worksheetservice
image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-computinginstance:6.4.0 # computinginstance image version
env:
- name: ENV_SERVERID
value: "single:worksheet"
command:
- sh
- -c
- |
cat /usr/local/MDPrivateDeployment/worksheet/Config/appsettingsMain.json
exec /Housekeeper/main -config /Housekeeper/config.yaml
resources:
limits:
memory: DEPLOYMENT_MEMORYGi
cpu: DEPLOYMENT_CPU
requests:
memory: 1Gi
cpu: 0.25
readinessProbe:
tcpSocket:
port: 9422
initialDelaySeconds: 10
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 9422
initialDelaySeconds: 60
periodSeconds: 10
volumeMounts:
- name: tz-config
mountPath: /etc/localtime
volumes:
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Etc/GMT-8
- name: workflowcompute
configMap:
name: workflowcompute
items:
- key: application-www-computing.properties
path: application-www-computing.properties
kubeconfig.yaml: |- # Change the following to the actual contents of $KUBECONFIG, be sure to add four spaces before each line of the original $KUBECONFIG content
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJME1ERXlPVEE1TXpRMU5Wb1hEVE0wTURFeU5qQTVNelExTlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTjR3CklsY3RBNzhlTlRPZWJOK1RoUXdDL29SamtKMlJMeUlVZ2dncU1MSWlxWWpEbytqczNKbkdpNVU4cUIrZ3RlV1AKRDhTUEVlVUd0ZmNOOHoxTzlqK0Q1aFB3dkRPRDFyV2k3OEpnZWI3SURtdkU4L1VMdWdIYldkajlTckcrWTVEYwpDakQrcFduWHAzQUkzdjhwbm45a1ZJZ2Z3RldFcFVGQWI4Y2h0WSt3MU11ZjVhNVlhSTJ3NWtwOXhVS1lmV0g0CkFWUnlCUUFFVU5TUjE1Y3EyZ3Z2WDVTT1ZXUU0xcFJYNG1jcHovZ0toZ1pCYTBoSHZJaGRIRFBxQ0o0bUNYak4KbzgrWDYzNWZ4S3JPYTV4NTFhemVJMTljeE5lOXh0V25JdDUzLzJtWVlkYjIzWVI5UDRCK3U5NWVheWZWU2JwZQo4amhvdW1qVXFoSmNHMFdiY1pNQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZHMW1zZ01jVDBoOVpZbVlJU2pZK2wyNFRERFJNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSHFXUDhnVDFJbXBpL1M4NVIzQwpzU0Y2V0NISFFscU5uOTh3UHdhR0dEcmFZQnUwbnhSeFk5Q243cHk3TytzZ2hLeDRpLzR5N3ZKYi84MXZWTEx2CklORGhxL2N2enp1WWtHQ0J1QStXcFVMZmhneHFsb0F4MjF3eWNScCtUTlZ5UnphNXpmUkt6RUZiZml0WU1LQk0KNkVhZmk3d3lma2twNDBDYzFoKzJMM3N4WG1sSDUyaXA4REx3WDJGVEVXRjE3VnE5WTBRdXZDcGpBSUlnY09aSgpGQjNCMHhiK0ZsUGsxZC9vakNHSzVOdDd6Z2VWQmN1VkhGSUhITSs5eWQ0dHJCVVViR05vVCszWVJXU0hxTGJECml3LzdQVDU2SFpheHA1VmUvN2p1cTZEbkRIYXQ5ZW9Idk5acEcvNUJMUW1SWUUyOFNaU3lvUnd5V3pUMkpTb24KOE8wPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://192.168.0.150:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJWnJObmsyZ1RLRmd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRBeE1qa3dPVE0wTlRWYUZ3MHlOVEF4TWpnd09UTTBOVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXBhMk1zT1JkNHJ5djFhdEsKbkcrZzZiUkhJZ28rbEVkL09qSThTVU9BVlppV0tNY2pFaU81TXpkSXhaZW8yNFIxYzBweTNyei9WQ1FkNkZTdQpmSkFydDJodkZEczFvWGN2UEZ6d2xKRWZpblRNZkdqVndhZTkyNGZrak15MDU4d0pFbjVEVDJYa294QmhyOGhFClhSZ2dBbFpDL0R3dVA3ZjhBcmtyTC8wOG9TbmFhWm5sVkpKTU9qWnN5VFBCMnFTTXZselh1Z25vSW14Qlhkc0oKZkVScm5qQ3R2RDdxOU5MRmx5QVV0OWRsMVNUMDRocDh0L1hBVVAyazZMcjQyUXZCdEVnN1RaR1pBTEp5NEdpUwozLyswTU9NcmVMYjVnRGRIOTExRTV2d3JteU4zdUNDSkxjbWVoUEh2UWJuYVVUNGtod2lkMHdBcDNJNnhCYkNpCjRNaFNGd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JSdFpySURIRTlJZldXSm1DRW8yUHBkdUV3dwowVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBUVVkeEVxUkFzZ1ZmWFAzS0tuaG4veTRmcUpVdXk4ZzRPU2N0CmtOWmRkdFhNZ0xYK3RMekVIZnlxTXZ4cFlzMGhLQWNHZWlaRHkwajkrSFZ4L00zZWVmNUdGOWVTRHhvYy9Gd3QKQ2NxUGV2R1lKOVpaTGhacERENEN0L1dURjJ3eXRjdGo4UkdBVGsxcXl5Vkh5Vi82NGNSc2phYTIyQTNGVUFDNAovS2x5MHlBc25Ib0IyWS9SelJPTTJWLzhIdFFqRUg3RExzZHFpQVdIU2xOUGphdDM1NWg0ek10cW9jTWY0bzljCm0yd1RuaVp3QktBQmZVc3BNUFRlc2FITGw4Y1AzWklXbjF5QnJmcVFjenVWRnVtRjhqLy9QMTJ0YkxrVXAwSzEKbTNRclNDVnZDdnhOL1VjNGlmUFB1YkdRZDdudUc2RUdkdHVSeFB2RzdDem1sTjQ1amc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcGEyTXNPUmQ0cnl2MWF0S25HK2c2YlJISWdvK2xFZC9Pakk4U1VPQVZaaVdLTWNqCkVpTzVNemRJeFplbzI0UjFjMHB5M3J6L1ZDUWQ2RlN1ZkpBcnQyaHZGRHMxb1hjdlBGendsSkVmaW5UTWZHalYKd2FlOTI0ZmtqTXkwNTh3SkVuNURUMlhrb3hCaHI4aEVYUmdnQWxaQy9Ed3VQN2Y4QXJrckwvMDhvU25hYVpubApWSkpNT2pac3lUUEIycVNNdmx6WHVnbm9JbXhCWGRzSmZFUnJuakN0dkQ3cTlOTEZseUFVdDlkbDFTVDA0aHA4CnQvWEFVUDJrNkxyNDJRdkJ0RWc3VFpHWkFMSnk0R2lTMy8rME1PTXJlTGI1Z0RkSDkxMUU1dndybXlOM3VDQ0oKTGNtZWhQSHZRYm5hVVQ0a2h3aWQwd0FwM0k2eEJiQ2k0TWhTRndJREFRQUJBb0lCQUN0dzloTHJ6akpGaDFWZgpOSkVRTkFFVFpCTm8zRC9FLzNjaTlPdkE1MFdLWE5VVVlmMi9vQy90cndjZ1hRWXlGUm5GeTVqYnRaYzZZUjBxCkZ4WlNOeVJBSGVBUUpsL1FBSEt6YStHSXE5eUNBNXdiWVBFR0txSUZYOGdMWk9QaWUvNTlYT2pVcnI2UzdRcEsKV0tLUVVOUk1DZ1JaUTBjeDFzSmdDeExxTTV1T0VkVDVLcGNiaE9BR042TGNVNkduRWEvcjlnUitUL2tIenA4QgpEdVZsNmREcHNGUjFyTmNsbGVXMkJLNmlMd2x6LzR0TmhROUxMY3lkSEswdHI0VWtKZVp2YTFEWFM1RWNUeTRQCkY5MzVJLzhqNTQvYlZ0ZjJQMFdpSUJ2ZFRaWnp5cjVPWG5nMDlPZUN3bTdwd2JjSVlhRDV4YXROdjRxYVR1ci8KY0I1R3RBRUNnWUVBeTZwekptQWZNZkxmOWUrSHZnTDRBK0M1cndFQVJhcllnUFlsTjJER0hrVFZNK2c1eW5YaQpUVHJEdlRZbUhqTmtiUUorOE14U09pRHZjK2Z1VHJWRzRUamE1Slk3YlJ1eFVVdHRRa2MzWk9UWFdzWXlzUWJICmRxbDBLU0RrcGdCdmZSMndxblp2V2JSR09rQ05sdmRpYUZKT0t4cUduWjc3NU9PTnVmNzRRUWNDZ1lFQTBFQXQKNXp2ZDhzYnZJOFBzQ1pmZlFYM1o1TWgzYytDVUhVTzQ4Mkc5QUx1L2tiRWhPL1JxVVJuck5DanJlWU9zWDliVQovSGwxelZkMStvRGlLcE1IZEJ2RDFubFM0MDY0TURTMW9BUEtxZjFFN0EySEN5cVZJZTZMbGRnKzh2VElkYzRHCmovV0RqR0JKMUs2RXI0NGx2SXNWWnpRaUpXSGxYZTV2dkJxdDhuRUNnWUF2OEVUK2FXMnVVaDdKUXNKT3hXQWYKZVl2N1YxNzdCd1hEQlMwcFpjdjhYL05YTG5nNzRaZU0yaUlzclV3M201MHQwNEtScDJaTGJHa3dmUTBvMVo2RApjT0NGSVorSFJSZHRyVFZnZm1iWmhzdngvK2o2cGovWS9IWHRJR0x4ZC9UR0hIRHpEc0dTK2MzMTlDL2Zzd2NrCnl3cS9OcFV0RUxqMTNXSUV2N0VyalFLQmdRQ014alhCa0ZpeTJ3T2hPN245cWlxRFRSM1VhQ2RIcjlLd0RhMmkKNkxrcEc4R2VMUXo0U0hydUpBVTVGMGhHdGxuNTEzSFcwZ3h1S09kWjFYSU5zYUppUExxZjZ4ZTdET1c1d0lmZQppWEdnZzlMcGR5T0l5dHBSTzc0a0p6QTJjSDVxRkVHZll5bnY1TTlEOUUxQmwyZXZFcDMvUytDaFFKSWFjeW9aCkVEZnlJUUtCZ0JXUGRELzhSTFhVWm9oV2F
nTy9uMHRBUnlLMnladTA4RWFjcENXV2ZFNUdUZzF1cnVTNTdhT0IKMU1KeTFQNElnODJhVml4T291bkR3d2xzSHBjNXlHdVNBTERvYklkRnJoMHJVYVZLTTJZMDN1VzFBRnhzcnFFQQpWc0t0aUQ3WjMrRzM1cmFwRTkrcDhoR0EzaVF6eHl3YWdtRUI0Sm9WU0MxbHZJekc1ME1XCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: computinginstance
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: computinginstance
template:
metadata:
labels:
app: computinginstance
dir: grpc
annotations:
md-update: "20231228184309"
spec:
imagePullSecrets:
- name: hub.mingdao.com
containers:
- name: computinginstance
image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-computinginstance:6.4.0 # computinginstance image version
resources:
limits:
cpu: "2"
memory: 2G
requests:
cpu: "0.01"
memory: 128M
readinessProbe:
tcpSocket:
port: 9157
initialDelaySeconds: 3
periodSeconds: 3
livenessProbe:
tcpSocket:
port: 9157
initialDelaySeconds: 20
periodSeconds: 10
volumeMounts:
- name: configfile
mountPath: /usr/local/computinginstance/config.yaml
subPath: config.yaml
- name: configfile
mountPath: /usr/local/computinginstance/configmap-workflowcompute.yaml
subPath: configmap-workflowcompute.yaml
- name: configfile
mountPath: /usr/local/computinginstance/deployment-workflowcompute.yaml
subPath: deployment-workflowcompute.yaml
- name: configfile
mountPath: /usr/local/computinginstance/kubeconfig.yaml
subPath: kubeconfig.yaml
- name: tz-config
mountPath: /etc/localtime
volumes:
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
- name: configfile
configMap:
name: computinginstance
items:
- key: config.yaml
path: config.yaml
- key: configmap-workflowcompute.yaml
path: configmap-workflowcompute.yaml
- key: deployment-workflowcompute.yaml
path: deployment-workflowcompute.yaml
- key: kubeconfig.yaml
path: kubeconfig.yaml
---
apiVersion: v1
kind: Service
metadata:
name: computinginstance
namespace: default
spec:
selector:
app: computinginstance
ports:
- name: http-computinginstance
port: 9157
targetPort: 9157
nodePort: 9157
type: NodePort
- Restart the service:
cd /data/mingdao/script/kubernetes/
bash restart.sh
Verification
In the default namespace, fetch all deployments whose names start with workflowcompute-, plus the computinginstance deployment:
ns="default"
deployments=$(kubectl -n $ns get deployments -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep '^workflowcompute-\|computinginstance')
for deploy in $deployments
do
echo "Namespace: $ns, Deployment: $deploy"
# Retrieve and print the deployment images
kubectl -n $ns get deployment $deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
echo
done
Output example:
Namespace: default, Deployment: computinginstance
registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-computinginstance:6.4.0
Namespace: default, Deployment: workflowcompute-7e4wf0fea4ho
registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-computinginstance:6.4.0
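The grep in the verification snippet keeps only the relevant deployments; its alternation can be checked offline against sample names (the list below is illustrative):

```shell
# The same filter used in the verification loop, applied to sample deployment names
names=$(printf 'computinginstance\nworkflowcompute-7e4wf0fea4ho\nwww\nworksheet\n' \
  | grep '^workflowcompute-\|computinginstance')
echo "$names"
```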
- Deployments starting with workflowcompute- exist only after a dedicated computing resource has been created in the UI.
Check whether taints have taken effect
kubectl get pod -o wide | grep workflowcompute-
- Normally, the containers should be running on the dedicated compute nodes
Check Kafka
/usr/local/kafka/bin/kafka-consumer-groups.sh --bootstrap-server ${ENV_KAFKA_ENDPOINTS:=127.0.0.1:9092} --describe --group md-workflow-consumer-7e4wf0fea4ho
- Here, the resource ID 7e4wf0fea4ho is used as an example.
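The consumer-group name passed to kafka-consumer-groups.sh is derived from the prefixes defined in config.yaml plus the resource ID; a quick sketch of the naming convention, using the example resource ID:

```shell
# Names are prefix + resource id, per workflowConsumerIdPrefix,
# workflowTopicPrefix and workSheetTopicPrefix in config.yaml
resourceId=7e4wf0fea4ho
group="md-workflow-consumer-${resourceId}"
wfTopic="WorkFlow-${resourceId}"
wsTopic="WorkSheet-${resourceId}"
echo "$group $wfTopic $wsTopic"
```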