HAP Microservices

Download Images

In a Kubernetes cluster environment, perform this step separately on every server that hosts a microservice node.

crictl pull registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-community:6.5.6
crictl pull registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-doc:2.0.0
crictl pull registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-command:node1018-python36
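To confirm that the images are present on a node after pulling, you can list them with crictl (a quick check; output columns vary by container runtime):

crictl images | grep mingdaoyun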

Deployment Manager

By default, perform the following steps only on the first Kubernetes Master server.

  1. Download the manager
wget https://pdpublic.mingdao.com/private-deployment/6.5.6/mingdaoyun_private_deployment_captain_linux_amd64.tar.gz
  2. Create a directory and extract the manager into the newly created directory
mkdir /usr/local/MDPrivateDeployment/
tar -zxvf mingdaoyun_private_deployment_captain_linux_amd64.tar.gz -C /usr/local/MDPrivateDeployment/
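Optionally, verify that the archive extracted into the target directory (the exact file list depends on the manager version):

ls -l /usr/local/MDPrivateDeployment/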
  3. Create the systemd unit file for the HAP Manager service
cat > /etc/systemd/system/hap-manager.service <<'EOF'
[Unit]
Description=HAP Manager
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
WorkingDirectory=/usr/local/MDPrivateDeployment
ExecStart=/usr/bin/bash ./service.sh start
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
  4. Start the manager service
systemctl daemon-reload
systemctl start hap-manager
systemctl enable hap-manager
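To confirm the manager came up correctly, check the unit status and, optionally, that its API port is listening (port 38880 is an assumption based on the ENV_CAPTAIN_ENDPOINT example below; adjust if your deployment uses a different port):

systemctl status hap-manager --no-pager
ss -tlnp | grep 38880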

Deploy Microservices

By default, perform the following steps only on the first Kubernetes Master server.

  1. Generate the initial configuration file
cd /usr/local/MDPrivateDeployment/
bash ./service.sh install https://hap.domain.com
echo -n 'StageStart' > installer.stage
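Once the install command finishes, the deployment scripts and the initial config.yaml should be available under /data/mingdao/script/kubernetes (see the note in the next step); a quick check:

ls -l /data/mingdao/script/kubernetes/config.yaml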
  2. Configure the ConfigMap information in the config.yaml file
  • config.yaml is located by default at /data/mingdao/script/kubernetes

Run vim config.yaml to edit the file, and modify the following variable values to match the actual deployment environment:

apiVersion: v1
kind: ConfigMap
metadata:
  name: env-list
  namespace: default
data:
  ENV_APP_VERSION: "6.5.6"
  ENV_MYSQL_HOST: "192.168.10.2"
  ENV_MYSQL_PORT: "3306"
  ENV_MYSQL_USERNAME: "root"
  ENV_MYSQL_PASSWORD: "123456"
  ENV_MONGODB_URI: "mongodb://hap:123456@192.168.10.4:27017,192.168.10.5:27017,192.168.10.6:27017"
  ENV_MONGODB_OPTIONS: "?maxIdleTimeMS=600000&maxLifeTimeMS=1800000"
  ENV_REDIS_HOST: "192.168.10.13"
  ENV_REDIS_PORT: "6379"
  ENV_REDIS_PASSWORD: "123456"
  ENV_KAFKA_ENDPOINTS: "192.168.10.7:9092,192.168.10.8:9092,192.168.10.9:9092"
  ENV_ELASTICSEARCH_ENDPOINTS: "http://192.168.10.10:9200,http://192.168.10.11:9200,http://192.168.10.12:9200"
  ENV_ELASTICSEARCH_PASSWORD: "elastic:123456"
  ENV_FILE_ENDPOINTS: "192.168.10.16:9001,192.168.10.17:9002,192.168.10.18:9003,192.168.10.19:9004"
  ENV_FILE_ACCESSKEY: "storage"
  ENV_FILE_SECRETKEY: "123456"
  ENV_MINGDAO_INTRANET_ENDPOINT: "www:8880"
  ENV_ADDRESS_MAIN: "https://hap.domain.com"
  ENV_ADDRESS_ALLOWLIST: ""
  ENV_CAPTAIN_ENDPOINT: "http://192.168.10.20:38880"
  ENV_HEALTHCHECK: "off"
  ENV_API_TOKEN: "4PrArcXYquO1sHlV9evsDqFKUUJ1kWVAg7v6oGcTKRNG9fUY"
  ENV_TIME_ZONE: "Asia/Shanghai"

Explanation of environment variables to modify for the initial deployment:

| Variable Name | Description |
| --- | --- |
| ENV_MYSQL_HOST | Enter the MySQL database address. |
| ENV_MYSQL_PORT | Enter the MySQL service port. |
| ENV_MYSQL_USERNAME | Enter the MySQL database login username. |
| ENV_MYSQL_PASSWORD | Enter the MySQL database login password. |
| ENV_MONGODB_URI | Enter the MongoDB connection address, which can be in standalone, replica set, or sharded cluster format. |
| ENV_REDIS_HOST | [Redis master-slave or standalone mode] Enter the Redis service host address. |
| ENV_REDIS_PORT | [Redis master-slave or standalone mode] Enter the Redis service port. |
| ENV_REDIS_PASSWORD | [Redis master-slave or standalone mode] Enter the Redis service access password. |
| ENV_REDIS_SENTINEL_ENDPOINTS | [Redis Sentinel mode] Enter the Sentinel node addresses, separated by commas (e.g., 192.168.10.21:26379,192.168.10.22:26379,192.168.10.23:26379). |
| ENV_REDIS_SENTINEL_MASTER | [Redis Sentinel mode] Enter the name of the master node. |
| ENV_REDIS_SENTINEL_PASSWORD | [Redis Sentinel mode] Enter the access password for connecting to the Sentinel cluster. |
| ENV_KAFKA_ENDPOINTS | Enter the Kafka cluster node addresses, separated by commas (e.g., 192.168.10.7:9092,192.168.10.8:9092). |
| ENV_ELASTICSEARCH_ENDPOINTS | Enter the Elasticsearch node addresses, prefixed with the http:// protocol; in cluster mode, separate multiple nodes with commas (e.g., http://192.168.10.10:9200,http://192.168.10.11:9200). |
| ENV_ELASTICSEARCH_PASSWORD | Enter the Elasticsearch access credentials, in the format username:password. |
| ENV_FILE_ENDPOINTS | Enter the HAP file service node addresses, separated by commas (e.g., 192.168.10.16:9001,192.168.10.17:9002). |
| ENV_FILE_ACCESSKEY | Enter the AccessKey for the HAP file service. |
| ENV_FILE_SECRETKEY | Enter the SecretKey for the HAP file service. |
| ENV_ADDRESS_MAIN | Enter the actual access address of the HAP system (e.g., https://hap.domain.com). |
| ENV_ADDRESS_ALLOWLIST | Optional. Enter additional allowlisted access addresses for the HAP system, separated by commas. |
| ENV_CAPTAIN_ENDPOINT | Enter the actual address of the deployment manager server (e.g., http://192.168.10.20:38880). |
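For example, if Redis runs in Sentinel mode, the data section carries the Sentinel variables from the table above; a minimal sketch with illustrative values (whether the standalone ENV_REDIS_* entries are kept alongside them depends on your deployment):

  # illustrative values only; replace with your actual Sentinel configuration
  ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.21:26379,192.168.10.22:26379,192.168.10.23:26379"
  ENV_REDIS_SENTINEL_MASTER: "mymaster"
  ENV_REDIS_SENTINEL_PASSWORD: "123456"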
  3. After making the changes, confirm that all configurations match the current deployment environment, then save, exit, and proceed to the next step.
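As an optional syntax check before starting the services, the edited manifest can be validated client-side without changing the cluster:

cd /data/mingdao/script/kubernetes/
kubectl apply --dry-run=client -f config.yaml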

  4. Start the HAP Microservices

cd /data/mingdao/script/kubernetes/
bash start.sh

In the /data/mingdao/script/kubernetes/ directory:

  • start.sh is used to start the HAP microservices.
  • stop.sh is used to stop the HAP microservices.
  5. Check the status of the HAP Microservices
kubectl get pod -o wide
  • Under normal circumstances, each Pod shows 2/2 in the READY column and Running in the STATUS column.

  • If a Pod fails to start or shows an abnormal status, use the following command to view the logs from its previous container instance (useful for diagnosing restarts or errors):

kubectl logs -p <pod-name>
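If the logs alone are not conclusive (for example, for image-pull or scheduling problems), describing the Pod also shows its recent events:

kubectl describe pod <pod-name>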
  6. Once Nginx has been configured to proxy the access address, the HAP system can be accessed via the configured address.
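A minimal Nginx proxy sketch for reference, assuming the HAP entry point is reachable from the Nginx host at a hypothetical upstream address (replace 192.168.10.1:8880 with the address actually exposed by your cluster, and add TLS termination as required by ENV_ADDRESS_MAIN):

server {
    listen 80;
    server_name hap.domain.com;

    location / {
        # hypothetical upstream; point this at the address that actually
        # exposes the HAP microservices in your environment
        proxy_pass http://192.168.10.1:8880;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}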