
Enabling Object Storage for File

  Server IP        Host Role
  192.168.10.16    File Node01
  192.168.10.17    File Node02
  • Docker must be installed on each File node in advance
  • In object storage mode, the file storage service is deployed in the same way on every File node
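Before starting, it can help to confirm the Docker prerequisite on each File node. The following pre-flight sketch is our addition, not part of the official steps (a responding daemon can additionally be checked with `docker info`):

```shell
# Pre-flight check for a File node: is the docker CLI installed?
if command -v docker >/dev/null 2>&1; then
  docker_status="found"
else
  docker_status="missing"
fi
echo "docker: $docker_status"
```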

Deploy File Storage

  1. Download the image

    docker pull registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:1.7.0
  2. Create data directories

    mkdir -p /data/file/volume/{cache,data,fetchtmp,multitmp,tmp}
  3. Create a directory for configuration files

    mkdir -p /usr/local/MDPrivateDeployment/clusterMode/config
  4. Create the s3-config.json file for the storage integration, using the following template

    cat > /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json << EOF
    {
      "mode": 1,
      "accessKeyID": "${Key}",
      "secretAccessKey": "${Secret}",
      "bucketEndPoint": "oss-cn-beijing.aliyuncs.com",
      "bucketName": {
        "mdmedia": "oss-mdtest",
        "mdpic": "oss-mdtest",
        "mdpub": "oss-mdtest",
        "mdoc": "oss-mdtest"
      },
      "region": "oss-cn-beijing"
    }
    EOF
    • Replace ${Key} and ${Secret} with the AccessKey ID and AccessKey Secret of your object storage account (or export them as shell variables before running the command; the unquoted EOF heredoc expands them at creation time)
    • The Mingdao system currently uses 4 buckets: mdmedia, mdpic, mdpub, and mdoc. Map them to the buckets you actually use via the bucketName field of the configuration file
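Because the heredoc expands shell variables, a bad value can silently produce invalid JSON. As a sanity check, the generated file can be validated with python3; a minimal sketch of our own, writing a sample config (placeholder credentials, not real ones) to a temporary directory:

```shell
# Write a sample s3-config.json to a temp dir and confirm it parses as JSON.
cfg="$(mktemp -d)/s3-config.json"
cat > "$cfg" << 'EOF'
{
  "mode": 1,
  "accessKeyID": "sample-key",
  "secretAccessKey": "sample-secret",
  "bucketEndPoint": "oss-cn-beijing.aliyuncs.com",
  "bucketName": {
    "mdmedia": "oss-mdtest",
    "mdpic": "oss-mdtest",
    "mdpub": "oss-mdtest",
    "mdoc": "oss-mdtest"
  },
  "region": "oss-cn-beijing"
}
EOF
if python3 -m json.tool "$cfg" >/dev/null 2>&1; then
  json_status="valid"
else
  json_status="invalid"
fi
echo "s3-config.json: $json_status"
```

Running the same `python3 -m json.tool` command against the real config path catches quoting mistakes before the container ever reads the file.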
  5. Create the file.yaml file

    cat > /usr/local/MDPrivateDeployment/clusterMode/file.yaml <<EOF
    version: '3'
    services:
      file:
        image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:1.7.0
        volumes:
          - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
          - /data/file/volume:/data/storage
          - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
        ports:
          - "9000:9000"
        environment:
          MINIO_ACCESS_KEY: storage
          MINIO_SECRET_KEY: 123456
          ENV_MINGDAO_PROTO: "http"
          ENV_MINGDAO_HOST: "hap.domain.com"
          ENV_MINGDAO_PORT: "80"
          # Redis master-slave mode
          ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
          # Redis Sentinel mode
          #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
          #ENV_REDIS_SENTINEL_MASTER: "mymaster"
          #ENV_REDIS_SENTINEL_PASSWORD: "password"
          ENV_FILECACHE_EXPIRE: "false"
          ENV_FILE_ID: "file1"
        command: ["./main", "server", "/data/storage/data"]
    EOF
    • During deployment, replace ENV_MINGDAO_PROTO, ENV_MINGDAO_HOST, and ENV_MINGDAO_PORT with the system's actual main access address
    • During deployment, replace the Redis connection information with the actual password and IP
    • ENV_FILE_ID must have a different value on each instance, such as file1, file2, etc.
  6. Create a startup script

    mkdir -p /usr/local/MDPrivateDeployment/clusterMode
    cat > /usr/local/MDPrivateDeployment/start.sh <<EOF
    docker stack deploy -c /usr/local/MDPrivateDeployment/clusterMode/file.yaml file
    EOF
    chmod +x /usr/local/MDPrivateDeployment/start.sh
  7. Initialize swarm

    docker swarm init
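After `docker swarm init`, the node should report itself as an active swarm member before the stack is deployed. A quick check, assuming a working Docker daemon (`docker info --format` is standard Docker CLI):

```shell
# Query the local swarm state; "active" is expected after a successful init.
swarm_state="$(docker info --format '{{.Swarm.LocalNodeState}}' 2>/dev/null || echo unknown)"
echo "swarm state: $swarm_state"
```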
  8. Start the service

    bash /usr/local/MDPrivateDeployment/start.sh
  9. Add to startup on boot

    echo "bash /usr/local/MDPrivateDeployment/start.sh" >> /etc/rc.local
    chmod +x /etc/rc.local
  10. Initialize prefabricated files

    You can use the tools provided by your object storage vendor to upload the contents of the prefabricated file package (the data under the mdmedia, mdpic, mdpub, and mdoc folders) to the corresponding buckets in the cloud, following the bucketName mapping in s3-config.json.

    Example: Alibaba Cloud OSS initialization instructions for Mingdao private deployment
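Once the stack is up, the deployment can be verified from each node. A hedged sketch: the `file` stack name comes from start.sh above, but the MinIO-style health path is an assumption based on the image's MinIO lineage and may differ in your version.

```shell
# List the stack's tasks, then probe the published port 9000.
docker stack ps file 2>/dev/null || echo "stack 'file' not found (is swarm active?)"
if curl -fsS "http://127.0.0.1:9000/minio/health/live" >/dev/null 2>&1; then
  port_status="up"
else
  port_status="down"
fi
echo "file service on 9000: $port_status"
```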