Single Node

Deploy File Service

  1. Download the Image

    docker pull registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
  2. Create Data Directories

    mkdir -p /data/file/volume/{cache,data,fetchtmp,multitmp,tmp}
  3. Create Configuration File Directory

    mkdir -p /usr/local/MDPrivateDeployment/clusterMode/config
  4. Create the s3-config.json File for the Object Storage Connection, Using the Template Below

    cat > /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json << 'EOF'
    {
      "mode": 1,
      "accessKeyID": "your_accessKey",
      "secretAccessKey": "your_secretKey",
      "bucketEndPoint": "http://192.168.0.11:9011",
      "bucketName": {
        "mdmedia": "mdmedia",
        "mdpic": "mdpic",
        "mdpub": "mdpub",
        "mdoc": "mdoc"
      },
      "region": "1",
      "addressingModel": 1
    }
    EOF
    • Connect to Self-Built MinIO Object Storage

      • Replace accessKeyID and secretAccessKey with the values of the MinIO service environment variables MINIO_ROOT_USER and MINIO_ROOT_PASSWORD.
      • Change bucketEndPoint to the access address of the MinIO service.
      • MinIO does not define a region by default, so this field can be set to any value, such as "1".
      • MinIO is usually accessed by IP address, so set "addressingModel": 1 to prevent the system from prepending the bucket name to the endpoint, which would make the address unreachable.
    • Connect to Cloud Provider Object Storage (e.g., Alibaba Cloud OSS, Tencent Cloud COS, AWS S3, etc.)

      • Set accessKeyID and secretAccessKey to the AccessKey pair issued by the cloud provider.
      • Change bucketEndPoint and the bucketName mappings to the actual object storage information.
      • Fill in the region parameter according to the region where the buckets are deployed.
      • When using cloud provider object storage, delete the "addressingModel" parameter so that the default addressing method is used (see the example after this list).
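
    For reference, a cloud-provider variant of s3-config.json might look like the following. This is only a sketch: the endpoint, bucket names, and region below are placeholders, not values tied to this deployment.

    {
      "mode": 1,
      "accessKeyID": "your_accessKey",
      "secretAccessKey": "your_secretKey",
      "bucketEndPoint": "https://oss-cn-hangzhou.aliyuncs.com",
      "bucketName": {
        "mdmedia": "example-mdmedia",
        "mdpic": "example-mdpic",
        "mdpub": "example-mdpub",
        "mdoc": "example-mdoc"
      },
      "region": "cn-hangzhou"
    }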
  5. Create file.yaml File

    cat > /usr/local/MDPrivateDeployment/clusterMode/file.yaml <<EOF
    version: '3'
    services:
      file:
        image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
        volumes:
          - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
          - /data/file/volume:/data/storage
          - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
        ports:
          - "9000:9000"
        environment:
          ENV_ACCESS_KEY_FILE: storage
          ENV_SECRET_KEY_FILE: "12345678910"
          ENV_MINGDAO_PROTO: "http"
          ENV_MINGDAO_HOST: "hap.domain.com"
          ENV_MINGDAO_PORT: "80"
          # Redis master-slave mode
          ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
          # Redis Sentinel mode
          #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
          #ENV_REDIS_SENTINEL_MASTER: "mymaster"
          #ENV_REDIS_SENTINEL_PASSWORD: "password"
          ENV_FILECACHE_EXPIRE: "false"
          ENV_FILE_ID: "file1"
          ENV_FILE_DOMAIN: "http://192.168.10.16:9000"
        command: ["./main", "server", "/data/storage/data"]
    EOF
    • ENV_ACCESS_KEY_FILE and ENV_SECRET_KEY_FILE are the access credentials for the file service; set them to strong values in an actual deployment.
    • ENV_MINGDAO_PROTO, ENV_MINGDAO_HOST, and ENV_MINGDAO_PORT configure the main access address of the HAP system and must match the protocol, host, and port of the ENV_ADDRESS_MAIN environment variable in the HAP microservice.
    • Replace the Redis connection information with the correct IP address and password for the actual deployment environment.
    • ENV_FILECACHE_EXPIRE sets the thumbnail cache expiration policy: "false" means no expiration (default), "true" enables periodic automatic cache cleanup.
    • In a multi-node deployment, each instance's ENV_FILE_ID must be unique, for example file1, file2, file3.
    • Set ENV_FILE_DOMAIN to the actual access addresses of all file nodes, each with a protocol prefix (e.g., http://); separate multiple addresses with commas (see the example below).
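
    For example, on the second node of a hypothetical three-node deployment (all IP addresses below are placeholders), only ENV_FILE_ID changes per node, while ENV_FILE_DOMAIN lists every node:

    environment:
      # Unique per node: file1, file2, file3, ...
      ENV_FILE_ID: "file2"
      # Identical on every node: all file node addresses, comma-separated
      ENV_FILE_DOMAIN: "http://192.168.10.16:9000,http://192.168.10.17:9000,http://192.168.10.18:9000"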
  6. Create Start/Stop Scripts

    cat > /usr/local/MDPrivateDeployment/clusterMode/start.sh <<EOF
    docker stack deploy -c /usr/local/MDPrivateDeployment/clusterMode/file.yaml file --detach=false
    EOF

    cat > /usr/local/MDPrivateDeployment/clusterMode/stop.sh <<EOF
    docker stack rm file
    EOF

    chmod +x /usr/local/MDPrivateDeployment/clusterMode/start.sh
    chmod +x /usr/local/MDPrivateDeployment/clusterMode/stop.sh
  7. Initialize Swarm (Skip This Step If the Node Has Already Been Initialized)

    docker swarm init
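
    To confirm whether the node has already joined a swarm, you can inspect the local swarm state:

    docker info --format '{{.Swarm.LocalNodeState}}'

    If this prints active, the node is already initialized and this step can be skipped.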
  8. Start the Service

    bash /usr/local/MDPrivateDeployment/clusterMode/start.sh
  9. Check Service Status

    docker stack ps file
    docker ps -a | grep file
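
    As an optional sanity check, you can probe the published port from the host. Because the service is started with a MinIO-style server command, the sketch below assumes it exposes the standard MinIO liveness endpoint; a 200 response indicates the service is up:

    curl -sS -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9000/minio/health/live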
  10. After Deployment, Upload the Preconfigured Files to MinIO Object Storage
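
    A minimal sketch of the upload using the MinIO Client (mc); the alias name, credentials, and the local ./preset directory (where the preconfigured files are assumed to have been extracted) are placeholders:

    # Register the MinIO endpoint from s3-config.json under a local alias
    mc alias set hapminio http://192.168.0.11:9011 your_accessKey your_secretKey
    # Create each bucket if missing, then mirror the preconfigured files into it
    for b in mdmedia mdpic mdpub mdoc; do
      mc mb --ignore-existing hapminio/$b
      mc mirror ./preset/$b hapminio/$b
    done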