Multi-node
Initialize Docker Swarm Cluster
- Execute the swarm initialization command on the first node
docker swarm init
# After executing the initialization command, a join command will be output, which needs to be run on the other three nodes later.
# When there are multiple IP addresses on the server, you can use the --advertise-addr parameter to specify the IP and port for other nodes to connect to the current manager node.
# docker swarm init --advertise-addr 192.168.1.11
- Run the join command on the other nodes respectively to join the swarm cluster
docker swarm join --token xxxxxxxx
# If you forget the token, you can view it on the first initialized server with the following command:
# docker swarm join-token worker
- View the nodes and record each node ID
docker node ls
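The IDs in the first column are what go into the node.id placement constraints in file.yaml later. Optionally, a Go-template format string (standard docker node ls formatting) narrows the output to just the fields you need to record:

# List only node ID, hostname, and manager status for easier copying
docker node ls --format '{{.ID}}  {{.Hostname}}  {{.ManagerStatus}}'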
Deploy File Service
The following steps need to be performed on each server:
- Download the image
  - Servers with Internet Access
docker pull registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
  - Servers without Internet Access
# Offline image package download link; upload it to the deployment server after downloading
# https://pdpublic.mingdao.com/private-deployment/offline/mingdaoyun-file-linux-amd64-2.1.0.tar.gz
# Load the offline image on the server using the docker load -i command
docker load -i mingdaoyun-file-linux-amd64-2.1.0.tar.gz
- Create data directories
mkdir -p /data/file/volume/{cache,data,fetchtmp,multitmp,tmp}
- Create a directory for the configuration files
mkdir -p /usr/local/MDPrivateDeployment/clusterMode/config
- Create the s3-config.json file for the integrated object storage; the template is as follows
cat > /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json << EOF
{
  "mode": 1,
  "accessKeyID": "your_accessKey",
  "secretAccessKey": "your_secretKey",
  "bucketEndPoint": "http://192.168.0.11:9011",
  "bucketName": {
    "mdmedia": "mdmedia",
    "mdpic": "mdpic",
    "mdpub": "mdpub",
    "mdoc": "mdoc"
  },
  "region": "1",
  "addressingModel": 1
}
EOF
  - For self-hosted MinIO object storage, replace accessKeyID and secretAccessKey with the MinIO service's MINIO_ROOT_USER and MINIO_ROOT_PASSWORD values
  - For self-hosted MinIO object storage, there is usually no region defined by default; you can fill in an arbitrary value such as 1
  - For self-hosted MinIO object storage, which is typically accessed via IP, include the "addressingModel": 1 parameter to avoid incorrect endpoint addresses caused by automatically prepending the bucket name to the IP
  - In a self-hosted MinIO object storage cluster environment, a different MinIO node address can be used as the bucketEndPoint value in each node's s3-config.json file (a quick validation sketch follows these notes)
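Before continuing, it can be worth confirming that the generated file is valid JSON and that the object-storage endpoint is reachable from this node. A minimal check, assuming python3 is installed and that http://192.168.0.11:9011 from the template is your actual MinIO address (/minio/health/live is MinIO's liveness probe):

# Validate the JSON syntax of the generated configuration file
python3 -m json.tool /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json
# Check that the MinIO endpoint answers; a healthy node returns HTTP 200
curl -fsS -o /dev/null -w '%{http_code}\n' http://192.168.0.11:9011/minio/health/live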
The following steps are only to be performed on the first node:
- Create a directory for the configuration files
mkdir -p /usr/local/MDPrivateDeployment/clusterMode
- Create file.yaml
cat > /usr/local/MDPrivateDeployment/clusterMode/file.yaml <<EOF
version: '3'
services:
  file1:
    hostname: file1
    image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
    volumes:
      - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
      - /data/file/volume:/data/storage
      - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
    ports:
      - "9001:9000"
    environment:
      ENV_ACCESS_KEY_FILE: storage
      ENV_SECRET_KEY_FILE: 12345678910
      ENV_MINGDAO_PROTO: "http"
      ENV_MINGDAO_HOST: "hap.domain.com"
      ENV_MINGDAO_PORT: "80"
      # Redis Master-Slave mode
      ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
      # Redis Sentinel mode
      #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
      #ENV_REDIS_SENTINEL_MASTER: "mymaster"
      #ENV_REDIS_SENTINEL_PASSWORD: "password"
      ENV_FILECACHE_EXPIRE: "false"
      ENV_FILE_ID: "file1"
      ENV_FILE_DOMAIN: "http://file1:9001,http://file2:9002,http://file3:9003,http://file4:9004"
    command: ["./main", "server", "/data/storage/data"]
    deploy:
      placement:
        constraints:
          - node.id == xxxxxxxxxxxxxxxx # Node ID for File Node01
  file2:
    hostname: file2
    image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
    volumes:
      - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
      - /data/file/volume:/data/storage
      - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
    ports:
      - "9002:9000"
    environment:
      ENV_ACCESS_KEY_FILE: storage
      ENV_SECRET_KEY_FILE: 12345678910
      ENV_MINGDAO_PROTO: "http"
      ENV_MINGDAO_HOST: "hap.domain.com"
      ENV_MINGDAO_PORT: "80"
      # Redis Master-Slave mode
      ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
      # Redis Sentinel mode
      #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
      #ENV_REDIS_SENTINEL_MASTER: "mymaster"
      #ENV_REDIS_SENTINEL_PASSWORD: "password"
      ENV_FILECACHE_EXPIRE: "false"
      ENV_FILE_ID: "file2"
      ENV_FILE_DOMAIN: "http://file1:9001,http://file2:9002,http://file3:9003,http://file4:9004"
    command: ["./main", "server", "/data/storage/data"]
    deploy:
      placement:
        constraints:
          - node.id == xxxxxxxxxxxxxxxx # Node ID for File Node02
  file3:
    hostname: file3
    image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
    volumes:
      - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
      - /data/file/volume:/data/storage
      - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
    ports:
      - "9003:9000"
    environment:
      ENV_ACCESS_KEY_FILE: storage
      ENV_SECRET_KEY_FILE: 12345678910
      ENV_MINGDAO_PROTO: "http"
      ENV_MINGDAO_HOST: "hap.domain.com"
      ENV_MINGDAO_PORT: "80"
      # Redis Master-Slave mode
      ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
      # Redis Sentinel mode
      #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
      #ENV_REDIS_SENTINEL_MASTER: "mymaster"
      #ENV_REDIS_SENTINEL_PASSWORD: "password"
      ENV_FILECACHE_EXPIRE: "false"
      ENV_FILE_ID: "file3"
      ENV_FILE_DOMAIN: "http://file1:9001,http://file2:9002,http://file3:9003,http://file4:9004"
    command: ["./main", "server", "/data/storage/data"]
    deploy:
      placement:
        constraints:
          - node.id == xxxxxxxxxxxxxxxx # Node ID for File Node03
  file4:
    hostname: file4
    image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
    volumes:
      - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
      - /data/file/volume:/data/storage
      - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
    ports:
      - "9004:9000"
    environment:
      ENV_ACCESS_KEY_FILE: storage
      ENV_SECRET_KEY_FILE: 12345678910
      ENV_MINGDAO_PROTO: "http"
      ENV_MINGDAO_HOST: "hap.domain.com"
      ENV_MINGDAO_PORT: "80"
      # Redis Master-Slave mode
      ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
      # Redis Sentinel mode
      #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
      #ENV_REDIS_SENTINEL_MASTER: "mymaster"
      #ENV_REDIS_SENTINEL_PASSWORD: "password"
      ENV_FILECACHE_EXPIRE: "false"
      ENV_FILE_ID: "file4"
      ENV_FILE_DOMAIN: "http://file1:9001,http://file2:9002,http://file3:9003,http://file4:9004"
    command: ["./main", "server", "/data/storage/data"]
    deploy:
      placement:
        constraints:
          - node.id == xxxxxxxxxxxxxxxx # Node ID for File Node04
EOF
  - Replace the system main address set in the variables (ENV_MINGDAO_PROTO, ENV_MINGDAO_HOST, ENV_MINGDAO_PORT) with the one actually used in your deployment
  - Replace the Redis connection information with the actual password and IP used in your deployment
  - In a multi-node deployment, ensure each instance has a unique ENV_FILE_ID, such as file1, file2, file3, etc.
  - Note that in file service v2, MINIO_ACCESS_KEY and MINIO_SECRET_KEY have been deprecated and replaced with ENV_ACCESS_KEY_FILE and ENV_SECRET_KEY_FILE for authentication between the microservices and the file service.
  - The ENV_FILE_DOMAIN value should match the content of ENV_FILE_ENDPOINTS in the microservices configuration file, but needs to include the http:// prefix
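The four node.id == xxxxxxxxxxxxxxxx placeholders must be replaced with the node IDs recorded earlier from docker node ls. One possible way to script the substitution, assuming the four swarm nodes use the hostnames file01 through file04 (adjust to your environment, or simply edit file.yaml by hand):

YAML=/usr/local/MDPrivateDeployment/clusterMode/file.yaml
for i in 1 2 3 4; do
  # Look up the swarm node ID by hostname (hostnames file01..file04 are assumptions)
  ID=$(docker node ls --format '{{.ID}} {{.Hostname}}' | awk -v h="file0$i" '$2==h{print $1}')
  # Each placeholder line carries a distinct comment (File Node01..04), so match on it
  sed -i "s|node.id == xxxxxxxxxxxxxxxx # Node ID for File Node0$i|node.id == $ID # Node ID for File Node0$i|" "$YAML"
done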
- Create start and stop scripts
cat > /usr/local/MDPrivateDeployment/clusterMode/start.sh <<EOF
docker stack deploy -c /usr/local/MDPrivateDeployment/clusterMode/file.yaml file
EOF
cat > /usr/local/MDPrivateDeployment/clusterMode/stop.sh <<EOF
docker stack rm file
EOF
chmod +x /usr/local/MDPrivateDeployment/clusterMode/start.sh
chmod +x /usr/local/MDPrivateDeployment/clusterMode/stop.sh
- Start the service
bash /usr/local/MDPrivateDeployment/clusterMode/start.sh
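Once the stack is up, the standard Swarm commands can be used to confirm that every instance was scheduled onto its intended node and is running:

# Each of the four services should report REPLICAS 1/1
docker stack services file
# Shows the node each task was placed on and its current state
docker stack ps file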
- Add to startup
echo "bash /usr/local/MDPrivateDeployment/clusterMode/start.sh" >> /etc/rc.local
chmod +x /etc/rc.local
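On systemd-based distributions, /etc/rc.local is only executed at boot if the rc-local compatibility unit is available and the file is executable; a quick check, assuming systemd is in use:

# Should show the unit as loaded; it runs /etc/rc.local when the file is executable
systemctl status rc-local --no-pager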
- After deployment, upload the preconfigured files to the MinIO object storage
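The upload itself is typically done with the MinIO client (mc); a sketch under assumptions, where the alias name, credentials, bucket, and the local preconfigured/ directory are placeholders for your actual values:

# Register the MinIO endpoint under a local alias (endpoint and keys are placeholders)
mc alias set hapminio http://192.168.0.11:9011 your_accessKey your_secretKey
# Copy the preconfigured files into the appropriate bucket (mdpub shown only as an example)
mc cp --recursive ./preconfigured/ hapminio/mdpub/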