# Enabling Object Storage for File
| Server IP | Host Role |
|---|---|
| 192.168.10.16 | File Node01 |
| 192.168.10.17 | File Node02 |
- Each File node must have Docker installed in advance.
- In object storage mode, the file storage is deployed in the same way on each File node.
## Deploy File Storage

- Download the image

  - Servers with Internet access

    ```bash
    docker pull registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:1.7.0
    ```

  - Servers without Internet access

    ```bash
    # Offline image package download link (upload to the deployment server after downloading):
    # https://pdpublic.mingdao.com/private-deployment/offline/mingdaoyun-file-linux-amd64-1.7.0.tar.gz

    # Load the offline image on the server using the docker load -i command
    docker load -i mingdaoyun-file-linux-amd64-1.7.0.tar.gz
    ```
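Before running `docker load` on a server without Internet access, it can be worth verifying that the offline package survived the transfer intact, since `docker load` fails on a truncated upload. A minimal sketch (the `check_pkg` helper is illustrative, not part of the product):

```shell
# Hypothetical helper: succeeds only if the path exists and is a
# readable gzip tarball.
check_pkg() {
  [ -f "$1" ] && tar -tzf "$1" > /dev/null 2>&1
}

pkg="mingdaoyun-file-linux-amd64-1.7.0.tar.gz"
if check_pkg "$pkg"; then
  echo "package OK: $pkg"
else
  echo "package missing or corrupt: $pkg" >&2
fi
```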
- Create data directories

  ```bash
  mkdir -p /data/file/volume/{cache,data,fetchtmp,multitmp,tmp}
  ```
- Create a directory for configuration files

  ```bash
  mkdir -p /usr/local/MDPrivateDeployment/clusterMode/config
  ```
- Create the `s3-config.json` file for storage integration; template as follows

  ```bash
  cat > /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json << EOF
  {
    "mode": 1,
    "accessKeyID": "${Key}",
    "secretAccessKey": "${Secret}",
    "bucketEndPoint": "oss-cn-beijing.aliyuncs.com",
    "bucketName": {
      "mdmedia": "oss-mdtest",
      "mdpic": "oss-mdtest",
      "mdpub": "oss-mdtest",
      "mdoc": "oss-mdtest"
    },
    "region": "oss-cn-beijing"
  }
  EOF
  ```

  - Currently, the Mingdao system uses 4 buckets: `mdmedia`, `mdpic`, `mdpub`, and `mdoc`. Map them to the buckets you actually use via the `bucketName` field of the configuration file.
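Note that the heredoc above uses an unquoted `EOF` delimiter, so the shell expands `${Key}` and `${Secret}` when the file is written; export them first (or paste literal values into the template). A sketch that demonstrates this, writing to a temporary path with placeholder credentials and validating the resulting JSON with `python3 -m json.tool`:

```shell
# Sketch: placeholder credentials, not real ones; the temporary path
# stands in for /usr/local/MDPrivateDeployment/clusterMode/config.
export Key="AKIDEXAMPLE" Secret="s3cr3tEXAMPLE"
cfg=$(mktemp -d)/s3-config.json

cat > "$cfg" << EOF
{
  "mode": 1,
  "accessKeyID": "${Key}",
  "secretAccessKey": "${Secret}",
  "bucketEndPoint": "oss-cn-beijing.aliyuncs.com",
  "bucketName": {
    "mdmedia": "oss-mdtest",
    "mdpic": "oss-mdtest",
    "mdpub": "oss-mdtest",
    "mdoc": "oss-mdtest"
  },
  "region": "oss-cn-beijing"
}
EOF

# json.tool exits non-zero on malformed JSON
python3 -m json.tool "$cfg" > /dev/null && echo "s3-config.json is valid JSON"
```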
- Create the `file.yaml` file

  ```bash
  cat > /usr/local/MDPrivateDeployment/clusterMode/file.yaml << EOF
  version: '3'
  services:
    file:
      image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:1.7.0
      volumes:
        - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
        - /data/file/volume:/data/storage
        - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
      ports:
        - "9000:9000"
      environment:
        MINIO_ACCESS_KEY: storage
        MINIO_SECRET_KEY: 123456
        ENV_MINGDAO_PROTO: "http"
        ENV_MINGDAO_HOST: "hap.domain.com"
        ENV_MINGDAO_PORT: "80"
        # Redis master-slave mode
        ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
        # Redis Sentinel mode
        #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
        #ENV_REDIS_SENTINEL_MASTER: "mymaster"
        #ENV_REDIS_SENTINEL_PASSWORD: "password"
        ENV_FILECACHE_EXPIRE: "false"
        ENV_FILE_ID: "file1"
      command: ["./main", "server", "/data/storage/data"]
  EOF
  ```

  - Replace the access address variables (`ENV_MINGDAO_PROTO`, `ENV_MINGDAO_HOST`, `ENV_MINGDAO_PORT`) with the system's actual main access address during deployment.
  - Replace the Redis connection information with the actual IP and password during deployment.
  - The `ENV_FILE_ID` variable must have a different value on each instance, such as `file1`, `file2`, etc.
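Because `ENV_FILE_ID` must differ per node while the rest of `file.yaml` is identical, one convenient pattern is to stamp a per-node ID into a shared template. A hedged sketch (the `NODE_INDEX` variable, the `__FILE_ID__` placeholder, and the temporary paths are all illustrative, not part of the product):

```shell
# Sketch: derive ENV_FILE_ID (file1, file2, ...) from an operator-chosen
# node index and substitute it into a copy of the template.
NODE_INDEX=2                      # e.g. 1 on File Node01, 2 on File Node02
tmpdir=$(mktemp -d)

# Only the relevant line is shown here; in practice the template would
# be the full file.yaml with __FILE_ID__ as the placeholder.
cat > "$tmpdir/file.yaml.tpl" << 'EOF'
      ENV_FILE_ID: "__FILE_ID__"
EOF

sed "s/__FILE_ID__/file${NODE_INDEX}/" "$tmpdir/file.yaml.tpl" > "$tmpdir/file.yaml"
grep ENV_FILE_ID "$tmpdir/file.yaml"
```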
- Create a startup script

  ```bash
  mkdir -p /usr/local/MDPrivateDeployment/clusterMode
  cat > /usr/local/MDPrivateDeployment/start.sh << EOF
  docker stack deploy -c /usr/local/MDPrivateDeployment/clusterMode/file.yaml file
  EOF
  chmod +x /usr/local/MDPrivateDeployment/start.sh
  ```
- Initialize Swarm

  ```bash
  docker swarm init
  ```
- Start the service

  ```bash
  bash /usr/local/MDPrivateDeployment/start.sh
  ```
- Add to startup on boot

  ```bash
  echo "bash /usr/local/MDPrivateDeployment/start.sh" >> /etc/rc.local
  chmod +x /etc/rc.local
  ```
## Initialize Prefabricated Files

You can use the tools provided by each object storage vendor to upload the contents of the prefabricated file package (the data under the `mdmedia`, `mdpic`, `mdpub`, and `mdoc` folders) to the corresponding cloud buckets according to the `bucketName` mapping relationship.

Example: Mingdao Private Deployment of Alibaba Cloud OSS Initialization Instructions
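As a convenience, the upload commands can be generated from the folder list before running them. The sketch below only prints commands for review; `ossutil cp -r` is Alibaba Cloud's recursive-copy CLI (other vendors have equivalents such as `aws s3 sync`), and the `oss-mdtest` bucket name and folder-name-as-prefix layout are taken from the sample mapping above, so adjust both to your actual `bucketName` configuration:

```shell
# Sketch: print (do not execute) one upload command per prefab folder,
# and save the list to a file for review. Bucket name and prefix layout
# are assumptions based on the sample s3-config.json.
for folder in mdmedia mdpic mdpub mdoc; do
  echo "ossutil cp -r ./${folder} oss://oss-mdtest/${folder}/"
done | tee /tmp/prefab-upload-commands.txt
```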