Single Node
Deploy File Service
- Download the Image
  - Server Supports Internet Access

    ```bash
    docker pull registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
    ```

  - Server Does Not Support Internet Access

    ```bash
    # Offline image package download link; upload it to the deployment server after downloading:
    # https://pdpublic.mingdao.com/private-deployment/offline/mingdaoyun-file-linux-amd64-2.1.0.tar.gz

    # Load the offline image on the server with docker load -i
    docker load -i mingdaoyun-file-linux-amd64-2.1.0.tar.gz
    ```
- Create Data Directory

  ```bash
  mkdir -p /data/file/volume/{cache,data,fetchtmp,multitmp,tmp}
  ```
- Create Configuration File Directory

  ```bash
  mkdir -p /usr/local/MDPrivateDeployment/clusterMode/config
  ```
- Create `s3-config.json` File for Object Storage Connection, Template as Follows

  ```bash
  cat > /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json << 'EOF'
  {
    "mode": 1,
    "accessKeyID": "your_accessKey",
    "secretAccessKey": "your_secretKey",
    "bucketEndPoint": "http://192.168.0.11:9011",
    "bucketName": {
      "mdmedia": "mdmedia",
      "mdpic": "mdpic",
      "mdpub": "mdpub",
      "mdoc": "mdoc"
    },
    "region": "1",
    "addressingModel": 1
  }
  EOF
  ```
  - Connect to Self-Built MinIO Object Storage
    - Replace `accessKeyID` and `secretAccessKey` with the values of the MinIO service environment variables `MINIO_ROOT_USER` and `MINIO_ROOT_PASSWORD`.
    - Change `bucketEndPoint` to the MinIO service access address.
    - MinIO does not define a region by default, so any value works here, such as `"1"`.
    - MinIO is usually accessed via IP address, so set `"addressingModel": 1` to prevent the system from prepending the bucket name to the endpoint, which would break access.
  - Connect to Cloud Provider Object Storage (e.g., Alibaba Cloud OSS, Tencent Cloud COS, AWS S3)
    - Configure `accessKeyID` and `secretAccessKey` with the AccessKey issued by the cloud provider.
    - Change `bucketEndPoint` and the `bucketName` mappings to the actual object storage information.
    - Fill in the `region` parameter according to the actual deployment region.
    - When using cloud provider object storage, delete the `addressingModel` parameter to ensure the correct connection method.
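With either variant, a typo in `s3-config.json` only surfaces when the file service starts. A small pre-flight check can catch that earlier. A minimal sketch, assuming `python3` is available on the host; `check_s3_config` is a hypothetical helper name, and the key list mirrors the template above:

```shell
# Hypothetical helper: validate an s3-config.json before starting the service.
# Prints "ok" when the file is valid JSON and contains the template's keys.
check_s3_config() {
  conf="$1"
  # Valid JSON at all?
  python3 -m json.tool "$conf" > /dev/null 2>&1 || { echo "invalid JSON: $conf"; return 1; }
  # Keys shared by both the MinIO and cloud-provider variants
  for key in accessKeyID secretAccessKey bucketEndPoint bucketName region; do
    grep -q "\"$key\"" "$conf" || { echo "missing key: $key"; return 1; }
  done
  echo "ok"
}
```

Usage: `check_s3_config /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json`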
- Create `file.yaml` File
  ```bash
  cat > /usr/local/MDPrivateDeployment/clusterMode/file.yaml <<EOF
  version: '3'

  services:
    file:
      image: registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-file:2.1.0
      volumes:
        - /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime
        - /data/file/volume:/data/storage
        - /usr/local/MDPrivateDeployment/clusterMode/config/s3-config.json:/usr/local/file/s3-config.json
      ports:
        - "9000:9000"
      environment:
        ENV_ACCESS_KEY_FILE: storage
        ENV_SECRET_KEY_FILE: 12345678910
        ENV_MINGDAO_PROTO: "http"
        ENV_MINGDAO_HOST: "hap.domain.com"
        ENV_MINGDAO_PORT: "80"
        # Redis master-slave mode
        ENV_FILE_CACHE: "redis://:123456@192.168.10.13:6379"
        # Redis Sentinel mode
        #ENV_REDIS_SENTINEL_ENDPOINTS: "192.168.10.13:26379,192.168.10.14:26379,192.168.10.15:26379"
        #ENV_REDIS_SENTINEL_MASTER: "mymaster"
        #ENV_REDIS_SENTINEL_PASSWORD: "password"
        ENV_FILECACHE_EXPIRE: "false"
        ENV_FILE_ID: "file1"
        ENV_FILE_DOMAIN: "http://192.168.10.16:9000"
      command: ["./main", "server", "/data/storage/data"]
  EOF
  ```

  - `ENV_ACCESS_KEY_FILE` and `ENV_SECRET_KEY_FILE` are the access credentials for the file service; set them to strong passwords in an actual deployment.
  - `ENV_MINGDAO_PROTO`, `ENV_MINGDAO_HOST`, and `ENV_MINGDAO_PORT` configure the main access address of the HAP system; they should match the protocol, host, and port of the `ENV_ADDRESS_MAIN` environment variable in the HAP microservice.
  - Replace the Redis connection information with the correct IP address and password for the actual deployment environment.
  - `ENV_FILECACHE_EXPIRE` sets the thumbnail cache expiration policy: `"false"` means no expiration (default), `"true"` enables regular automatic cache cleaning.
  - In a multi-node deployment, each instance's `ENV_FILE_ID` must be unique, e.g. `file1`, `file2`, `file3`.
  - `ENV_FILE_DOMAIN` should be set to the actual access addresses of all file nodes, each prefixed with the protocol (e.g. `http://`); separate multiple addresses with commas.
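In a multi-node rollout, the per-node `ENV_FILE_ID` values and the shared `ENV_FILE_DOMAIN` string can be derived from a single node list rather than edited by hand on each host. A minimal sketch; the IP addresses are placeholders, and port 9000 matches the compose file above:

```shell
# Placeholder list of file-node addresses; replace with the real deployment IPs
NODES="192.168.10.16 192.168.10.17 192.168.10.18"

domain=""
i=0
for node in $NODES; do
  i=$((i + 1))
  # Each instance must get a unique ENV_FILE_ID: file1, file2, file3, ...
  echo "node $node -> ENV_FILE_ID=file$i"
  # ENV_FILE_DOMAIN: protocol prefix on every address, comma-separated
  domain="${domain:+$domain,}http://$node:9000"
done
echo "ENV_FILE_DOMAIN=$domain"
```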
- Create Start/Stop Scripts

  ```bash
  cat > /usr/local/MDPrivateDeployment/clusterMode/start.sh <<EOF
  docker stack deploy -c /usr/local/MDPrivateDeployment/clusterMode/file.yaml file --detach=false
  EOF

  cat > /usr/local/MDPrivateDeployment/clusterMode/stop.sh <<EOF
  docker stack rm file
  EOF

  chmod +x /usr/local/MDPrivateDeployment/clusterMode/start.sh
  chmod +x /usr/local/MDPrivateDeployment/clusterMode/stop.sh
  ```
- Initialize Swarm (If the Node Has Already Been Initialized, Skip This Step)

  ```bash
  docker swarm init
  ```
- Start the Service

  ```bash
  bash /usr/local/MDPrivateDeployment/clusterMode/start.sh
  ```
- Check Service Status

  ```bash
  docker stack ps file
  docker ps -a | grep file
  ```
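`docker stack ps` reports task state, but it does not confirm the service is actually answering on its published port (9000 in the compose file above). A small TCP probe can close that gap. A sketch, assuming `python3` is available on the host; `wait_for_file_service` is a hypothetical helper, not part of the product:

```shell
# Probe host:port with a short TCP connect; python3 is used as a portable probe
port_open() {
  python3 -c '
import socket, sys
s = socket.socket()
s.settimeout(2)
try:
    s.connect((sys.argv[1], int(sys.argv[2])))
except OSError:
    sys.exit(1)
s.close()
' "$1" "$2"
}

# Hypothetical helper: wait until the file service answers, or give up.
wait_for_file_service() {
  host="$1"; port="${2:-9000}"; tries="${3:-30}"
  n=0
  while [ "$n" -lt "$tries" ]; do
    if port_open "$host" "$port"; then
      echo "up"
      return 0
    fi
    n=$((n + 1))
    sleep 1
  done
  echo "timeout"
  return 1
}
```

Usage: `wait_for_file_service 192.168.10.16 9000`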
- After Deployment, Upload the Preconfigured Files to MinIO Object Storage