Multi-node

Initialize Docker Swarm Cluster

  1. Execute the swarm initialization command on the first node

    docker swarm init

    # The init command prints a `docker swarm join` command, which must then be run on the other three nodes
    # If the server has multiple IP addresses, use the --advertise-addr parameter to specify the IP and port that other nodes use to connect to this manager node
    # docker swarm init --advertise-addr 192.168.1.11
  2. Run the join command on each of the other nodes to join the swarm cluster

    docker swarm join --token xxxxxxxx

    # If the token is forgotten, you can view it on the first initialized server with the following command:
    # docker swarm join-token worker
  3. View the nodes and record each node ID

    docker node ls
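The node IDs in the first column are needed later for the placement constraints in file.yaml. As a sketch, they can be pulled out with awk; the sample `docker node ls` output below is made up for illustration — on a real cluster you would pipe `docker node ls` directly instead of using the variable.

```shell
# Hypothetical `docker node ls` output; the IDs and hostnames are placeholders.
sample_output='ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS
abc123def456ghi789jkl012mno * node01     Ready     Active         Leader
bcd234efg567hij890klm123nop   node02     Ready     Active
cde345fgh678ijk901lmn234opq   node03     Ready     Active
def456ghi789jkl012mno345pqr   node04     Ready     Active'

# Skip the header row and print the first column; the "*" marking the
# current node is a separate field, so $1 is always the node ID.
echo "$sample_output" | awk 'NR > 1 { print $1 }'
```

On the real manager node the equivalent is `docker node ls | awk 'NR > 1 { print $1 }'`.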

Deploy File Service

The following steps need to be performed on each server

  1. Download the image

    docker pull registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
  2. Create the data directory

    mkdir -p /data/file/volume/{cache,data,fetchtmp,multitmp,tmp}
  3. Create the directory to store the configuration file

    mkdir -p /usr/local/MDPrivateDeployment/clusterMode/config
  4. Create the s3-config.json file for storage integration with the following template

    cat > /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json << EOF
    {
      "mode": 1,
      "accessKeyID": "your_accessKey",
      "secretAccessKey": "your_secretKey",
      "bucketEndPoint": "http://192.168.0.11:9011",
      "bucketName": {
        "mdmedia": "mdmedia",
        "mdpic": "mdpic",
        "mdpub": "mdpub",
        "mdoc": "mdoc"
      },
      "region": "1",
      "addressingModel": 1
    }
    EOF
    • Integrating Self-hosted MinIO Object Storage

      • Replace accessKeyID and secretAccessKey with the corresponding values of the MinIO service environment variables MINIO_ROOT_USER and MINIO_ROOT_PASSWORD.
      • Change bucketEndPoint to the access address of the MinIO service.
      • MinIO does not define a region by default; any value works here, for example "1".
      • MinIO is typically accessed by IP address, so set "addressingModel": 1 to prevent the system from automatically prepending the bucket name to the endpoint, which would break access.
    • Integrating Cloud Vendor Object Storage (such as Alibaba Cloud OSS, Tencent Cloud COS, AWS S3, etc.)

      • Configure accessKeyID and secretAccessKey according to the AccessKey provided by the cloud vendor.
      • Change the mapping relationship of bucketEndPoint and bucketName to the actual object storage information.
      • Fill in the region parameter according to the actual deployment area.
      • When using cloud vendor object storage, please delete the "addressingModel" parameter to ensure the connection method is correct.
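A malformed s3-config.json fails only at runtime, so it is worth validating the file before copying it to the other nodes. The sketch below assumes `python3` is available (`jq . <file>` works the same way) and writes the template to a temporary file for demonstration; in practice, point the check at /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json.

```shell
# Write the template to a throwaway file for illustration.
config=$(mktemp)
cat > "$config" << EOF
{
  "mode": 1,
  "accessKeyID": "your_accessKey",
  "secretAccessKey": "your_secretKey",
  "bucketEndPoint": "http://192.168.0.11:9011",
  "bucketName": {
    "mdmedia": "mdmedia",
    "mdpic": "mdpic",
    "mdpub": "mdpub",
    "mdoc": "mdoc"
  },
  "region": "1",
  "addressingModel": 1
}
EOF

# json.tool exits non-zero on a parse error, so this catches missing
# commas or stray quotes before the file reaches the service.
if python3 -m json.tool "$config" > /dev/null; then
  echo "s3-config.json: valid JSON"
else
  echo "s3-config.json: INVALID JSON" >&2
fi
```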

The following steps are only performed on the first node

  1. Create the directory to store the configuration file

    mkdir -p /usr/local/MDPrivateDeployment/clusterMode
  2. Create file.yaml

    cat > /usr/local/MDPrivateDeployment/clusterMode/file.yaml <<EOF
    version: '3'
    services:
      file1:
        hostname: file1
        image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
        volumes:
          - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
          - /data/file/volume:/data/storage
          - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
        ports:
          - "9001:9000"
        environment:
          ENV_ACCESS_KEY_FILE: storage
          ENV_SECRET_KEY_FILE: 12345678910
          ENV_MINGDAO_PROTO: "http"
          ENV_MINGDAO_HOST: "hap.domain.com"
          ENV_MINGDAO_PORT: "80"
          # Redis master-slave mode
          ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
          # Redis Sentinel mode
          #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
          #ENV_REDIS_SENTINEL_MASTER: "mymaster"
          #ENV_REDIS_SENTINEL_PASSWORD: "password"
          ENV_FILECACHE_EXPIRE: "false"
          ENV_FILE_ID: "file1"
          ENV_FILE_DOMAIN: "http://file1:9001,http://file2:9002,http://file3:9003,http://file4:9004"
        command: ["./main", "server", "/data/storage/data"]
        deploy:
          placement:
            constraints:
              - node.id == xxxxxxxxxxxxxxxx # Node ID of File Node01

      file2:
        hostname: file2
        image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
        volumes:
          - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
          - /data/file/volume:/data/storage
          - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
        ports:
          - "9002:9000"
        environment:
          ENV_ACCESS_KEY_FILE: storage
          ENV_SECRET_KEY_FILE: 12345678910
          ENV_MINGDAO_PROTO: "http"
          ENV_MINGDAO_HOST: "hap.domain.com"
          ENV_MINGDAO_PORT: "80"
          # Redis master-slave mode
          ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
          # Redis Sentinel mode
          #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
          #ENV_REDIS_SENTINEL_MASTER: "mymaster"
          #ENV_REDIS_SENTINEL_PASSWORD: "password"
          ENV_FILECACHE_EXPIRE: "false"
          ENV_FILE_ID: "file2"
          ENV_FILE_DOMAIN: "http://file1:9001,http://file2:9002,http://file3:9003,http://file4:9004"
        command: ["./main", "server", "/data/storage/data"]
        deploy:
          placement:
            constraints:
              - node.id == xxxxxxxxxxxxxxxx # Node ID of File Node02

      file3:
        hostname: file3
        image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
        volumes:
          - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
          - /data/file/volume:/data/storage
          - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
        ports:
          - "9003:9000"
        environment:
          ENV_ACCESS_KEY_FILE: storage
          ENV_SECRET_KEY_FILE: 12345678910
          ENV_MINGDAO_PROTO: "http"
          ENV_MINGDAO_HOST: "hap.domain.com"
          ENV_MINGDAO_PORT: "80"
          # Redis master-slave mode
          ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
          # Redis Sentinel mode
          #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
          #ENV_REDIS_SENTINEL_MASTER: "mymaster"
          #ENV_REDIS_SENTINEL_PASSWORD: "password"
          ENV_FILECACHE_EXPIRE: "false"
          ENV_FILE_ID: "file3"
          ENV_FILE_DOMAIN: "http://file1:9001,http://file2:9002,http://file3:9003,http://file4:9004"
        command: ["./main", "server", "/data/storage/data"]
        deploy:
          placement:
            constraints:
              - node.id == xxxxxxxxxxxxxxxx # Node ID of File Node03

      file4:
        hostname: file4
        image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
        volumes:
          - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
          - /data/file/volume:/data/storage
          - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
        ports:
          - "9004:9000"
        environment:
          ENV_ACCESS_KEY_FILE: storage
          ENV_SECRET_KEY_FILE: 12345678910
          ENV_MINGDAO_PROTO: "http"
          ENV_MINGDAO_HOST: "hap.domain.com"
          ENV_MINGDAO_PORT: "80"
          # Redis master-slave mode
          ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
          # Redis Sentinel mode
          #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
          #ENV_REDIS_SENTINEL_MASTER: "mymaster"
          #ENV_REDIS_SENTINEL_PASSWORD: "password"
          ENV_FILECACHE_EXPIRE: "false"
          ENV_FILE_ID: "file4"
          ENV_FILE_DOMAIN: "http://file1:9001,http://file2:9002,http://file3:9003,http://file4:9004"
        command: ["./main", "server", "/data/storage/data"]
        deploy:
          placement:
            constraints:
              - node.id == xxxxxxxxxxxxxxxx # Node ID of File Node04
    EOF
    • ENV_ACCESS_KEY_FILE and ENV_SECRET_KEY_FILE are the access credentials for the file service; set them to high-strength passwords in an actual deployment.
    • ENV_MINGDAO_PROTO, ENV_MINGDAO_HOST, and ENV_MINGDAO_PORT configure the main access address of the HAP system and must match the protocol, host, and port of the ENV_ADDRESS_MAIN environment variable in the HAP microservices.
    • Replace the Redis connection information with the correct IP address and password for your deployment environment.
    • ENV_FILECACHE_EXPIRE sets the thumbnail cache expiration policy: "false" means the cache never expires (default); "true" means the cache is cleaned up periodically.
    • In multi-node deployments, the ENV_FILE_ID of each instance must be unique, for example file1, file2, file3, and so on.
    • Set ENV_FILE_DOMAIN to the actual access addresses of all file nodes, each with a protocol prefix (such as http://), separated by commas.
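The four `node.id == xxxxxxxxxxxxxxxx` placeholders must be replaced with the IDs recorded from `docker node ls`. One way to do this without hand-editing is a sed substitution keyed on the trailing comment, so each service gets its own ID. The sketch below runs on a throwaway two-line sample (the node IDs are made up); point `sed -i` at /usr/local/MDPrivateDeployment/clusterMode/file.yaml in practice, ideally on a copy until you have verified the result.

```shell
# Throwaway sample mimicking the constraint lines in file.yaml.
sample=$(mktemp)
cat > "$sample" << 'EOF'
      - node.id == xxxxxxxxxxxxxxxx # Node ID of File Node01
      - node.id == xxxxxxxxxxxxxxxx # Node ID of File Node02
EOF

# Hypothetical node IDs; substitute the real ones from `docker node ls`.
NODE01_ID=abc123def456
NODE02_ID=bcd234efg567

# The trailing comment identifies which service each placeholder belongs to,
# so every line can be targeted individually.
sed -i \
  -e "s/xxxxxxxxxxxxxxxx \(# Node ID of File Node01\)/$NODE01_ID \1/" \
  -e "s/xxxxxxxxxxxxxxxx \(# Node ID of File Node02\)/$NODE02_ID \1/" \
  "$sample"

grep 'node.id' "$sample"
```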
  3. Create start and stop scripts

    cat > /usr/local/MDPrivateDeployment/clusterMode/start.sh <<EOF
    #!/bin/bash
    docker stack deploy -c /usr/local/MDPrivateDeployment/clusterMode/file.yaml file --detach=false
    EOF

    cat > /usr/local/MDPrivateDeployment/clusterMode/stop.sh <<EOF
    #!/bin/bash
    docker stack rm file
    EOF

    chmod +x /usr/local/MDPrivateDeployment/clusterMode/start.sh
    chmod +x /usr/local/MDPrivateDeployment/clusterMode/stop.sh
  4. Start the service

    bash /usr/local/MDPrivateDeployment/clusterMode/start.sh
  5. Check the service status

    docker stack ps file
    docker ps -a | grep file
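Swarm services can take a while to converge after `docker stack deploy`, so a one-shot `docker ps` check may race the startup. A small retry helper is useful when scripting this; the `wait_for` function below is a generic sketch (not part of the product), and the docker invocation in the comment is illustrative only. It is demonstrated here with `true` so the snippet runs anywhere.

```shell
# Poll a command until it succeeds or the retry budget is exhausted.
# Hypothetical real-world usage, checking that a "file" container is running:
#   wait_for 30 2 sh -c '[ -n "$(docker ps -q --filter name=file)" ]'
wait_for() {
  local retries=$1 interval=$2
  shift 2
  local i
  for i in $(seq 1 "$retries"); do
    if "$@" > /dev/null 2>&1; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    sleep "$interval"
  done
  echo "not ready after $retries attempts" >&2
  return 1
}

# Demonstration with a command that always succeeds:
wait_for 3 0 true
# prints: ready after 1 attempt(s)
```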
  6. After deployment, upload the pre-configured files to MinIO object storage