# Multi-node

## Initialize Docker Swarm Cluster

- Execute the swarm initialization command on the first node

  ```bash
  docker swarm init
  # After the initialization command runs, it prints a join command, which must be run on the other three nodes
  # If the server has multiple IP addresses, use the --advertise-addr parameter to specify the IP that other nodes use to connect to this manager node
  # docker swarm init --advertise-addr 192.168.1.11
  ```

- Run the join command on each of the other nodes to join the swarm cluster

  ```bash
  docker swarm join --token xxxxxxxx
  # If the token is forgotten, you can view it on the first (initialized) server with:
  # docker swarm join-token worker
  ```

- View the nodes and record each node ID

  ```bash
  docker node ls
  ```
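The node IDs recorded here are needed later to pin each file service instance to its node in file.yaml. As a convenience, a small helper script (hypothetical, not part of the official deployment) can print just the ID/hostname pairs, assuming the standard docker CLI `--format` Go-template support:

```shell
# Hypothetical helper: print "<node-id> <hostname>" per line for later use in file.yaml.
# Run on a manager node; assumes the docker CLI supports --format Go templates.
mkdir -p /usr/local/MDPrivateDeployment/clusterMode
cat > /usr/local/MDPrivateDeployment/clusterMode/node-ids.sh <<'EOF'
#!/bin/sh
docker node ls --format '{{.ID}} {{.Hostname}}'
EOF
chmod +x /usr/local/MDPrivateDeployment/clusterMode/node-ids.sh
```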
## Deploy File Service

The following steps need to be performed on each server.

- Download the image

  - If the server has internet access:

    ```bash
    docker pull registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
    ```

  - If the server does not have internet access:

    ```bash
    # Offline image package download link; upload it to the deployment server after downloading
    # https://pdpublic.mingdao.com/private-deployment/offline/mingdaoyun-file-linux-amd64-2.1.0.tar.gz
    # Load the offline image on the server with the docker load -i command
    docker load -i mingdaoyun-file-linux-amd64-2.1.0.tar.gz
    ```

- Create the data directory

  ```bash
  mkdir -p /data/file/volume/{cache,data,fetchtmp,multitmp,tmp}
  ```
- Create the directory to store the configuration file

  ```bash
  mkdir -p /usr/local/MDPrivateDeployment/clusterMode/config
  ```

- Create the s3-config.json file for storage integration with the following template

  ```bash
  cat > /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json << EOF
  {
    "mode": 1,
    "accessKeyID": "your_accessKey",
    "secretAccessKey": "your_secretKey",
    "bucketEndPoint": "http://192.168.0.11:9011",
    "bucketName": {
      "mdmedia": "mdmedia",
      "mdpic": "mdpic",
      "mdpub": "mdpub",
      "mdoc": "mdoc"
    },
    "region": "1",
    "addressingModel": 1
  }
  EOF
  ```

  - Integrating self-hosted MinIO object storage

    - Replace accessKeyID and secretAccessKey with the values of the MinIO service environment variables MINIO_ROOT_USER and MINIO_ROOT_PASSWORD.
    - Change bucketEndPoint to the access address of the MinIO service.
    - MinIO does not define a region by default; you can fill in any value here, for example "1".
    - MinIO is typically accessed through an IP address, so set "addressingModel": 1 to prevent the system from automatically prepending the bucket name to the endpoint, which would cause access errors.

  - Integrating cloud vendor object storage (such as Alibaba Cloud OSS, Tencent Cloud COS, AWS S3, etc.)

    - Configure accessKeyID and secretAccessKey according to the AccessKey provided by the cloud vendor.
    - Change bucketEndPoint and the bucketName mapping to the actual object storage information.
    - Fill in the region parameter according to the actual deployment region.
    - When using cloud vendor object storage, delete the "addressingModel" parameter to ensure the correct connection method.
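As a sketch, a cloud vendor variant of the template might look like the following; the endpoint, region, and keys below are placeholders, not values from this deployment. Since a malformed s3-config.json breaks storage access, it is also worth validating the JSON syntax before starting the service (assuming python3 is available on the server):

```shell
# Hypothetical cloud-vendor example: placeholder endpoint/region, "addressingModel" removed.
cat > /tmp/s3-config-example.json << 'EOF'
{
  "mode": 1,
  "accessKeyID": "your_accessKey",
  "secretAccessKey": "your_secretKey",
  "bucketEndPoint": "https://oss-cn-hangzhou.aliyuncs.com",
  "bucketName": {
    "mdmedia": "mdmedia",
    "mdpic": "mdpic",
    "mdpub": "mdpub",
    "mdoc": "mdoc"
  },
  "region": "cn-hangzhou"
}
EOF

# Sanity-check the JSON syntax before starting the service
python3 -m json.tool /tmp/s3-config-example.json > /dev/null && echo "valid JSON"
```

The same `python3 -m json.tool` check can be pointed at the real /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json after you edit it.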
The following steps are only performed on the first node.

- Create the directory to store the configuration file

  ```bash
  mkdir -p /usr/local/MDPrivateDeployment/clusterMode
  ```
- Create file.yaml

  ```bash
  cat > /usr/local/MDPrivateDeployment/clusterMode/file.yaml <<EOF
  version: '3'
  services:
    file1:
      hostname: file1
      image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
      volumes:
        - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
        - /data/file/volume:/data/storage
        - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
      ports:
        - "9001:9000"
      environment:
        ENV_ACCESS_KEY_FILE: storage
        ENV_SECRET_KEY_FILE: 12345678910
        ENV_MINGDAO_PROTO: "http"
        ENV_MINGDAO_HOST: "hap.domain.com"
        ENV_MINGDAO_PORT: "80"
        # Redis master-slave mode
        ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
        # Redis Sentinel mode
        #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
        #ENV_REDIS_SENTINEL_MASTER: "mymaster"
        #ENV_REDIS_SENTINEL_PASSWORD: "password"
        ENV_FILECACHE_EXPIRE: "false"
        ENV_FILE_ID: "file1"
        ENV_FILE_DOMAIN: "http://file1:9001,http://file2:9002,http://file3:9003,http://file4:9004"
      command: ["./main", "server", "/data/storage/data"]
      deploy:
        placement:
          constraints:
            - node.id == xxxxxxxxxxxxxxxx # Node ID of File Node01
    file2:
      hostname: file2
      image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
      volumes:
        - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
        - /data/file/volume:/data/storage
        - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
      ports:
        - "9002:9000"
      environment:
        ENV_ACCESS_KEY_FILE: storage
        ENV_SECRET_KEY_FILE: 12345678910
        ENV_MINGDAO_PROTO: "http"
        ENV_MINGDAO_HOST: "hap.domain.com"
        ENV_MINGDAO_PORT: "80"
        # Redis master-slave mode
        ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
        # Redis Sentinel mode
        #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
        #ENV_REDIS_SENTINEL_MASTER: "mymaster"
        #ENV_REDIS_SENTINEL_PASSWORD: "password"
        ENV_FILECACHE_EXPIRE: "false"
        ENV_FILE_ID: "file2"
        ENV_FILE_DOMAIN: "http://file1:9001,http://file2:9002,http://file3:9003,http://file4:9004"
      command: ["./main", "server", "/data/storage/data"]
      deploy:
        placement:
          constraints:
            - node.id == xxxxxxxxxxxxxxxx # Node ID of File Node02
    file3:
      hostname: file3
      image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
      volumes:
        - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
        - /data/file/volume:/data/storage
        - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
      ports:
        - "9003:9000"
      environment:
        ENV_ACCESS_KEY_FILE: storage
        ENV_SECRET_KEY_FILE: 12345678910
        ENV_MINGDAO_PROTO: "http"
        ENV_MINGDAO_HOST: "hap.domain.com"
        ENV_MINGDAO_PORT: "80"
        # Redis master-slave mode
        ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
        # Redis Sentinel mode
        #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
        #ENV_REDIS_SENTINEL_MASTER: "mymaster"
        #ENV_REDIS_SENTINEL_PASSWORD: "password"
        ENV_FILECACHE_EXPIRE: "false"
        ENV_FILE_ID: "file3"
        ENV_FILE_DOMAIN: "http://file1:9001,http://file2:9002,http://file3:9003,http://file4:9004"
      command: ["./main", "server", "/data/storage/data"]
      deploy:
        placement:
          constraints:
            - node.id == xxxxxxxxxxxxxxxx # Node ID of File Node03
    file4:
      hostname: file4
      image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
      volumes:
        - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
        - /data/file/volume:/data/storage
        - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
      ports:
        - "9004:9000"
      environment:
        ENV_ACCESS_KEY_FILE: storage
        ENV_SECRET_KEY_FILE: 12345678910
        ENV_MINGDAO_PROTO: "http"
        ENV_MINGDAO_HOST: "hap.domain.com"
        ENV_MINGDAO_PORT: "80"
        # Redis master-slave mode
        ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
        # Redis Sentinel mode
        #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
        #ENV_REDIS_SENTINEL_MASTER: "mymaster"
        #ENV_REDIS_SENTINEL_PASSWORD: "password"
        ENV_FILECACHE_EXPIRE: "false"
        ENV_FILE_ID: "file4"
        ENV_FILE_DOMAIN: "http://file1:9001,http://file2:9002,http://file3:9003,http://file4:9004"
      command: ["./main", "server", "/data/storage/data"]
      deploy:
        placement:
          constraints:
            - node.id == xxxxxxxxxxxxxxxx # Node ID of File Node04
  EOF
  ```

  - ENV_ACCESS_KEY_FILE and ENV_SECRET_KEY_FILE are the access credentials for the file service; configure them as high-strength passwords in the actual deployment.
  - ENV_MINGDAO_PROTO, ENV_MINGDAO_HOST, and ENV_MINGDAO_PORT configure the main access address of the HAP system and must match the protocol, host, and port of the ENV_ADDRESS_MAIN environment variable in the HAP microservices.
  - Replace the Redis connection information with the correct IP address and password for the actual deployment environment.
  - ENV_FILECACHE_EXPIRE sets the thumbnail cache expiration policy: "false" means no expiration (the default), "true" means the cache is cleaned up periodically.
  - In multi-node deployments, the ENV_FILE_ID of each instance must be unique, for example file1, file2, file3, etc.
  - Configure ENV_FILE_DOMAIN with the actual access addresses of all file nodes, including the protocol prefix (such as http://), separated by commas.
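Each `node.id == xxxxxxxxxxxxxxxx` placeholder must also be replaced with a real node ID recorded from `docker node ls`. This can be done by hand, or sketched with a small awk helper (hypothetical, not part of the official deployment; the IDs in `NODE_IDS` must be listed in file1..file4 order). The sketch below demonstrates the substitution on a tiny sample file; in a real run, point it at /usr/local/MDPrivateDeployment/clusterMode/file.yaml:

```shell
# Hypothetical helper: replace each successive xxxxxxxxxxxxxxxx placeholder with the
# next ID from NODE_IDS (order must match file1..file4). Demonstrated on a sample file.
YAML=/tmp/file-sample.yaml
printf '%s\n' \
  '          - node.id == xxxxxxxxxxxxxxxx # Node ID of File Node01' \
  '          - node.id == xxxxxxxxxxxxxxxx # Node ID of File Node02' > "$YAML"

NODE_IDS="abcd1234 efgh5678"   # the real IDs come from `docker node ls`
awk -v ids="$NODE_IDS" '
  BEGIN { n = split(ids, a, " ") }
  /xxxxxxxxxxxxxxxx/ { sub(/xxxxxxxxxxxxxxxx/, a[++i]) }
  { print }
' "$YAML" > "$YAML.new"
cat "$YAML.new"
```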
- Create start and stop scripts

  ```bash
  cat > /usr/local/MDPrivateDeployment/clusterMode/start.sh <<EOF
  docker stack deploy -c /usr/local/MDPrivateDeployment/clusterMode/file.yaml file --detach=false
  EOF

  cat > /usr/local/MDPrivateDeployment/clusterMode/stop.sh <<EOF
  docker stack rm file
  EOF

  chmod +x /usr/local/MDPrivateDeployment/clusterMode/start.sh
  chmod +x /usr/local/MDPrivateDeployment/clusterMode/stop.sh
  ```
- Start the service

  ```bash
  bash /usr/local/MDPrivateDeployment/clusterMode/start.sh
  ```
- Check the service status

  ```bash
  docker stack ps file
  docker ps -a | grep file
  ```
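Beyond the container status, each node's published port can be probed directly. The sketch below assumes the file image exposes the MinIO-compatible health endpoint `/minio/health/live` (a guess based on the MinIO-style server command; adjust the hosts and the path to whatever your image actually serves):

```shell
# Hedged probe: report UP/DOWN for each file node's published port.
# The /minio/health/live path is an assumption (MinIO-compatible image); adjust as needed.
check_node() {
  if curl -sf -m 3 -o /dev/null "$1"; then
    echo "UP   $1"
  else
    echo "DOWN $1"
  fi
}
for port in 9001 9002 9003 9004; do
  check_node "http://127.0.0.1:$port/minio/health/live"
done
```

In a real deployment, replace 127.0.0.1 with each node's address so the check covers all four servers, not just the local one.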
- After deployment, upload the pre-configured files to MinIO object storage.