Standalone to Cluster Migration
Stopping the Legacy Standalone Environment
- Check whether there is message accumulation in the Kafka queue of the legacy standalone environment.

  Enter the storage component container:

  ```bash
  docker exec -it $(docker ps | grep mingdaoyun-sc | awk '{print $1}') bash
  ```

  Check whether a minio process is running inside the container:

  ```bash
  ps aux | grep [m]inio
  ```

  - If there is output, the file storage service running in the container is version V2.
  - If there is no output, the file storage service running in the container is version V1.
  - The migration steps for the file storage service differ between V1 and V2.

  Check whether there is accumulation in the Kafka workflow queue:

  ```bash
  /usr/local/kafka/bin/kafka-consumer-groups.sh --bootstrap-server ${ENV_KAFKA_ENDPOINTS:=127.0.0.1:9092} --describe --group md-workflow-consumer | awk '{count+=$6}END{print count}'
  ```

  - An output of 0 means there is no accumulation and the microservices can be stopped immediately.
  - A number greater than 0 means workflow messages are still in the queue awaiting consumption.
  - If the microservices are stopped while unconsumed data remains in the queue, the new environment will show those workflows as still queued after data migration, and they will not be consumed further.
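The awk pipeline in the lag check simply sums column 6 (LAG) of the consumer-group report; the header row contributes 0 because "LAG" is not numeric. A minimal demonstration on made-up sample data (the topic, offsets, and consumer names are illustrative, not real cluster output):

```shell
# Illustrative data in the shape kafka-consumer-groups.sh --describe emits;
# column 6 is LAG.
sample='GROUP                TOPIC    PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID
md-workflow-consumer WorkFlow 0         120            125            5   consumer-1
md-workflow-consumer WorkFlow 1         200            200            0   consumer-2'

# Sum column 6 across all rows.
echo "$sample" | awk '{count+=$6} END{print count}'   # prints 5
```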
- In the directory of the installation manager, run the command to stop all microservices:

  ```bash
  bash service.sh stopall
  ```
Starting Temporary Containers
- Start a separate container, mounting the Mingdao data directory:

  ```bash
  docker run -itd --entrypoint bash --rm -v /data/mingdao/script/volume/data/:/data/ 788b6f437789
  ```

  - `788b6f437789` is the image ID of the storage component `mingdaoyun-sc`; you can retrieve it with `docker images`.
  - If the Mingdao data directory in the standalone environment has been changed, use the actual path.

- Enter the newly started temporary container:

  ```bash
  docker exec -it 363625b14db6 bash
  ```

  `363625b14db6` is the ID of the newly started container; you can retrieve it with `docker ps`.

- Inside the temporary container, start the mysql, mongodb, and file services:

  ```bash
  source /entrypoint.sh && mysqlStartup &
  source /entrypoint.sh && mongodbStartup &
  source /entrypoint.sh && filev1Run &
  ```

  If the file storage service running in the container is V2, also start the minio service:

  ```bash
  echo "127.0.0.1 sc" >> /etc/hosts
  source /entrypoint.sh && minioStartup &
  ```
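Before running migration commands against the services started above, it can help to wait until they actually accept connections. A minimal sketch, assuming a shell with bash's `/dev/tcp` support; the function name, retry count, and the example port are illustrative:

```shell
# Poll a TCP port until it accepts connections or the retries run out.
wait_for_port() {
  host=$1; port=$2; retries=${3:-30}
  i=0
  while [ "$i" -lt "$retries" ]; do
    if (echo > "/dev/tcp/$host/$port") 2>/dev/null; then
      echo "up"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "timeout"
  return 1
}

# e.g. wait_for_port 127.0.0.1 3306   # mysql inside the temporary container
```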
File Storage Migration
The built-in file storage version in the standalone environment may vary; select the corresponding migration guide according to actual circumstances.
File Storage V1

- In the temporary container, configure the file storage information for the old and new environments:

  ```bash
  mc alias set minio_old http://127.0.0.1:9000 mdstorage eBxExGQJNhGosgv5FQJiVNqH
  mc alias set minio_new http://10.206.0.6:9011 mingdao T7RxxxxxxxxxxdRky
  ```

  - The address and authentication info for `minio_old` do not need modification.
  - For `minio_new`, replace the IP, port, and authentication info with the access address and credentials of the MinIO service in the new cluster environment.

- Copy the file storage data from the standalone environment to the cluster environment's MinIO:

  ```bash
  mc mirror minio_old/mdmedia minio_new/mdmedia
  mc mirror minio_old/mdoc minio_new/mdoc
  mc mirror minio_old/mdpic minio_new/mdpic
  mc mirror minio_old/mdpub minio_new/mdpub
  ```
File Storage V2

- In the temporary container, configure the file storage information for the old and new environments:

  ```bash
  mc alias set minio_old http://127.0.0.1:9010 mdstorage eBxExGQJNhGosgv5FQJiVNqH
  mc alias set minio_new http://10.206.0.6:9011 mingdao T7RxxxxxxxxxxdRky
  ```

  - The address and authentication info for `minio_old` do not need modification.
  - For `minio_new`, replace the IP, port, and authentication info with the access address and credentials of the MinIO service in the new cluster environment.

- Copy the file storage data from the standalone environment to the cluster environment's MinIO:

  ```bash
  mc mirror minio_old/mdmedia minio_new/mdmedia
  mc mirror minio_old/mdoc minio_new/mdoc
  mc mirror minio_old/mdpic minio_new/mdpic
  mc mirror minio_old/mdpub minio_new/mdpub
  ```
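The four mirror commands are identical except for the bucket name, so they can be generated from a list. A small sketch that prints the commands for review instead of executing them (pipe the output to `sh` once it looks right):

```shell
# Print one mirror command per bucket for review before running.
for bucket in mdmedia mdoc mdpic mdpub; do
  echo "mc mirror minio_old/$bucket minio_new/$bucket"
done
```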
Database Migration
MySQL Data Export
- In the temporary container, create a directory for the MySQL export:

  ```bash
  mkdir -p /data/backup/mysql_dump
  ```

- Enter the backup directory:

  ```bash
  cd /data/backup/
  ```

- Export the MySQL data:

  ```bash
  for dbname in MDApplication MDCalendar MDLog MDProject MDStructure; do
    mysqldump --set-gtid-purged=off --default-character-set=utf8mb4 -h127.0.0.1 -P3306 -uroot -p123456 $dbname > mysql_dump/$dbname.sql
  done
  ```

  If HDP is enabled in the old environment, also add the `MDHDP` database to the export list.

- The exported data is persisted on the host at `/data/mingdao/script/volume/data/backup/mysql_dump`.
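Before transferring the dumps, it is worth confirming each export actually finished: mysqldump appends a `-- Dump completed` line to a successful dump. A sketch of that check, run here against a throwaway fixture rather than the real backup directory:

```shell
# Report OK if the file ends with mysqldump's completion marker.
check_dump() {
  tail -n 1 "$1" | grep -q '^-- Dump completed' && echo OK || echo INCOMPLETE
}

dir=$(mktemp -d)
printf -- '-- dump\nCREATE TABLE t (id int);\n-- Dump completed on 2024-01-01\n' > "$dir/MDLog.sql"
check_dump "$dir/MDLog.sql"   # prints OK
rm -r "$dir"
```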
MongoDB Data Export
- In the temporary container, create a directory for the MongoDB export:

  ```bash
  mkdir -p /data/backup/mongodb_dump
  ```

- Enter the backup directory:

  ```bash
  cd /data/backup/
  ```

- Create a list of the MongoDB databases to export:

  ```bash
  cat > mongodb.list <<EOF
  MDAlert
  MDChatTop
  MDGroup
  MDHistory
  MDLicense
  MDNotification
  MDSso
  MDUser
  commonbase
  mdIdentification
  mdactionlog
  mdapproles
  mdapprove
  mdapps
  mdattachment
  mdcalendar
  mdcategory
  mdcheck
  mddossier
  mdemail
  mdform
  mdgroups
  mdinbox
  mdkc
  mdmap
  mdmobileaddress
  mdpost
  mdreportdata
  mdroles
  mdsearch
  mdservicedata
  mdsms
  mdtag
  mdtransfer
  mdworkflow
  mdworksheet
  mdworkweixin
  mdwsrows
  pushlog
  taskcenter
  mdintegration
  mdworksheetlog
  mdworksheetsearch
  mddatapipeline
  mdwfplugin
  mdpayment
  mdwfai
  EOF
  ```

  - If aggregate tables are enabled in the old environment, also add the `mdaggregationwsrows` database to the export list.
  - If HDP is enabled in the old environment, also add the `mdhdp` database to the export list.

- Export the MongoDB data:

  ```bash
  for i in $(cat mongodb.list); do mongodump --uri mongodb://127.0.0.1:27017/$i --numParallelCollections=6 --gzip -o ./mongodb_dump/; done
  ```

  - `--numParallelCollections` sets how many collections `mongodump` processes in parallel. The default is 4; the example uses 6. Adjust according to server performance.
  - For large datasets the export can take a long time; you can run it in the background with `nohup`:

    ```bash
    nohup bash -c 'for i in $(cat mongodb.list); do mongodump --uri mongodb://127.0.0.1:27017/$i --numParallelCollections=6 --gzip -o ./mongodb_dump/; done' > mongodump.log 2>&1 &
    ```

- The exported data is persisted on the host at `/data/mingdao/script/volume/data/backup/mongodb_dump`.
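A similar sanity check for MongoDB: every database named in mongodb.list should have a matching directory under mongodb_dump. Sketched here on a throwaway fixture rather than the real backup directory:

```shell
# List databases from the export list that have no dump directory.
check_dumps() {
  while read -r db; do
    [ -d "$2/$db" ] || echo "missing: $db"
  done < "$1"
}

work=$(mktemp -d)
printf 'MDAlert\nmdworkflow\n' > "$work/mongodb.list"
mkdir -p "$work/mongodb_dump/MDAlert"                    # pretend only one export succeeded
check_dumps "$work/mongodb.list" "$work/mongodb_dump"    # prints: missing: mdworkflow
rm -r "$work"
```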
Data Transmission
MySQL Data Transmission
- Start a receiver on the MySQL master node in the new environment:

  ```bash
  mkdir /data/recover && cd /data/recover
  nc -l 9900 | tar -zxvf -
  ```

- On the legacy-environment host, enter the directory containing the exported data and start a sender:

  ```bash
  cd /data/mingdao/script/volume/data/backup
  tar -zcvf - mysql_dump | nc 192.168.1.1 9900
  ```
MongoDB Data Transmission
- Start a receiver on the MongoDB primary node in the new environment:

  ```bash
  mkdir /data/recover && cd /data/recover
  nc -l 9900 | tar -zxvf -
  ```

- On the legacy-environment host, enter the directory containing the exported data and start a sender:

  ```bash
  cd /data/mingdao/script/volume/data/backup
  tar -zcvf - mongodb_dump | nc 192.168.1.2 9900
  ```
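The sender/receiver pair above streams a gzipped tar over nc. The same tar-over-pipe pattern can be demonstrated locally, with two temporary directories standing in for the two hosts:

```shell
# Pack on the "sender" side, unpack on the "receiver" side, via a plain pipe.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/mysql_dump"
echo 'dump data' > "$src/mysql_dump/MDLog.sql"

(cd "$src" && tar -zcf - mysql_dump) | (cd "$dst" && tar -zxf -)

cat "$dst/mysql_dump/MDLog.sql"   # prints: dump data
rm -r "$src" "$dst"
```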
Database Restoration
- Restoring the MySQL and MongoDB data drops and clears all business databases in the new environment. If the new environment contains any data you still need, back it up and export it in advance!

- Stop the microservices in the new environment before restoring data.
MySQL Data Restoration
- The following MySQL commands connect to `127.0.0.1:3306` by default. If MySQL in the new environment runs in MGR mode, add `-P 6446` to every command.
- Replace `-p123456` with the actual password.
- If HDP is enabled in the new environment, handle the `MDHDP` database in the corresponding steps.
- If HDP was enabled in the old environment, convert the character set of `MDHDP.sql` before importing.
- Drop the MySQL databases used by the Mingdao HAP system in the new environment:

  ```bash
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDApplication;'
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDCalendar;'
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDLog;'
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDProject;'
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDStructure;'
  ```

  If HDP is enabled in the new environment, also drop the `MDHDP` database:

  ```bash
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDHDP;'
  ```
- Create the MySQL databases used by the Mingdao HAP system in the new environment:

  ```bash
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDApplication;'
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDCalendar;'
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDLog;'
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDProject;'
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDStructure;'
  ```

  If HDP is enabled in the new environment, also create the `MDHDP` database:

  ```bash
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDHDP;'
  ```
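The per-database drop and create statements differ only in the database name, so they can be generated in a loop. A sketch that prints the statements for review instead of executing them (pipe the output into the mysql client once reviewed):

```shell
# Print one combined drop/create statement per database for review.
for dbname in MDApplication MDCalendar MDLog MDProject MDStructure; do
  echo "drop database if exists $dbname; create database $dbname;"
done
```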
- Change `utf8` to `utf8mb4` in the SQL files:

  ```bash
  for dbname in MDApplication MDCalendar MDLog MDProject MDStructure; do
    sed -ri 's/CHARSET=utf8(;| )/CHARSET=utf8mb4\1/g' /data/recover/mysql_dump/$dbname.sql
  done
  sed -i 's/CHARACTER SET utf8 COLLATE utf8_bin //' /data/recover/mysql_dump/MDProject.sql
  ```

  If HDP was enabled in the old environment, apply the same character set replacement to `MDHDP.sql`:

  ```bash
  sed -ri 's/CHARSET=utf8(;| )/CHARSET=utf8mb4\1/g' /data/recover/mysql_dump/MDHDP.sql
  ```
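The `(;| )` group in the sed expression is what keeps an already-correct `utf8mb4` from being rewritten to `utf8mb4mb4`: the match requires `utf8` to be followed by `;` or a space. A quick demonstration:

```shell
# The first line is rewritten; the second is already utf8mb4 and left alone.
printf 'CHARSET=utf8;\nCHARSET=utf8mb4;\n' \
  | sed -r 's/CHARSET=utf8(;| )/CHARSET=utf8mb4\1/g'
# prints:
# CHARSET=utf8mb4;
# CHARSET=utf8mb4;
```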
- Import the backed-up MySQL data into the new environment:

  ```bash
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDApplication < /data/recover/mysql_dump/MDApplication.sql
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDCalendar < /data/recover/mysql_dump/MDCalendar.sql
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDLog < /data/recover/mysql_dump/MDLog.sql
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDProject < /data/recover/mysql_dump/MDProject.sql
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDStructure < /data/recover/mysql_dump/MDStructure.sql
  ```

  If HDP is enabled in the new environment, also import the `MDHDP.sql` data:

  ```bash
  /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDHDP < /data/recover/mysql_dump/MDHDP.sql
  ```
MongoDB Data Restoration
- Create a list of the MongoDB databases used by the Mingdao HAP system to drop in the new environment:

  ```bash
  cat > dropMongodb.list <<EOF
  use MDAlert
  db.dropDatabase()
  use MDChatTop
  db.dropDatabase()
  use MDGroup
  db.dropDatabase()
  use MDHistory
  db.dropDatabase()
  use MDLicense
  db.dropDatabase()
  use MDNotification
  db.dropDatabase()
  use MDSso
  db.dropDatabase()
  use MDUser
  db.dropDatabase()
  use commonbase
  db.dropDatabase()
  use mdIdentification
  db.dropDatabase()
  use mdactionlog
  db.dropDatabase()
  use mdapproles
  db.dropDatabase()
  use mdapprove
  db.dropDatabase()
  use mdapps
  db.dropDatabase()
  use mdattachment
  db.dropDatabase()
  use mdcalendar
  db.dropDatabase()
  use mdcategory
  db.dropDatabase()
  use mdcheck
  db.dropDatabase()
  use mddossier
  db.dropDatabase()
  use mdemail
  db.dropDatabase()
  use mdform
  db.dropDatabase()
  use mdgroups
  db.dropDatabase()
  use mdinbox
  db.dropDatabase()
  use mdkc
  db.dropDatabase()
  use mdmap
  db.dropDatabase()
  use mdmobileaddress
  db.dropDatabase()
  use mdpost
  db.dropDatabase()
  use mdreportdata
  db.dropDatabase()
  use mdroles
  db.dropDatabase()
  use mdsearch
  db.dropDatabase()
  use mdservicedata
  db.dropDatabase()
  use mdsms
  db.dropDatabase()
  use mdtag
  db.dropDatabase()
  use mdtransfer
  db.dropDatabase()
  use mdworkflow
  db.dropDatabase()
  use mdworksheet
  db.dropDatabase()
  use mdworkweixin
  db.dropDatabase()
  use mdwsrows
  db.dropDatabase()
  use pushlog
  db.dropDatabase()
  use taskcenter
  db.dropDatabase()
  use mdintegration
  db.dropDatabase()
  use mdworksheetlog
  db.dropDatabase()
  use mdworksheetsearch
  db.dropDatabase()
  use mddatapipeline
  db.dropDatabase()
  use mdwfplugin
  db.dropDatabase()
  use mdpayment
  db.dropDatabase()
  use mdwfai
  db.dropDatabase()
  EOF
  ```

  - If aggregate tables are enabled in the new environment, also add the `mdaggregationwsrows` database to the deletion list.
  - If HDP is enabled in the new environment, also add the `mdhdp` database to the deletion list.
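If the mongodb.list file from the export step is still at hand, the drop list does not have to be typed out: each database name expands to a `use`/`db.dropDatabase()` pair. A sketch on a two-entry sample list (the real run would read the actual mongodb.list):

```shell
# Expand each database name into the two mongo-shell lines of the drop list.
work=$(mktemp -d)
printf 'MDAlert\nmdworkflow\n' > "$work/mongodb.list"

while read -r db; do
  printf 'use %s\ndb.dropDatabase()\n' "$db"
done < "$work/mongodb.list" > "$work/dropMongodb.list"

cat "$work/dropMongodb.list"
rm -r "$work"
```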
- Drop the MongoDB databases used by the Mingdao HAP system in the new environment:

  ```bash
  /usr/local/mongodb/bin/mongo mongodb://root:123456@127.0.0.1:27017/admin < dropMongodb.list
  ```
- Download the MongoDB `database-tools` package in the new environment; it contains the `mongorestore` command used for the restoration.

  Download links:

  - RedHat / CentOS 8.0 x64: https://fastdl.mongodb.org/tools/db/mongodb-database-tools-rhel80-x86_64-100.9.3.tgz
  - Debian 12.0 x64: https://fastdl.mongodb.org/tools/db/mongodb-database-tools-debian12-x86_64-100.9.3.tgz
  - Others: obtain the tool package from the MongoDB official release archive at https://www.mongodb.com/try/download/database-tools/releases/archive. Keep the version consistent with the `mongodump` bundled with the standalone environment; 100.9.3 is recommended.

  - For domestic Chinese operating systems such as Kylin, the RedHat 8.0 build is usually the right choice.
  - After downloading, upload the package to the server where MongoDB runs, then extract it.
- Import the backed-up MongoDB data into the new environment (replace `/your_path/` with the actual path to `mongorestore`):

  ```bash
  for dbname in $(ls /data/recover/mongodb_dump/); do
    /your_path/mongorestore --host 127.0.0.1 -u root -p 123456 --authenticationDatabase admin --numParallelCollections=6 --numInsertionWorkersPerCollection=2 -d $dbname --gzip --dir /data/recover/mongodb_dump/$dbname/
  done
  ```

  - Replace the password with the actual one.
  - For large datasets the restoration takes time; you may adjust the following parameters:
    - `--numParallelCollections` sets how many collections `mongorestore` processes in parallel. The default is 4; the example uses 6. Adjust according to server performance.
    - `--numInsertionWorkersPerCollection` sets the number of insertion workers per collection. The default is 1; the example uses 2. Adjust according to hardware.
  - You can also run the command in the background with `nohup`:

    ```bash
    nohup bash -c 'for dbname in $(ls /data/recover/mongodb_dump/); do
      /your_path/mongorestore --host 127.0.0.1 -u root -p 123456 --authenticationDatabase admin --numParallelCollections=6 --numInsertionWorkersPerCollection=2 -d "$dbname" --gzip --dir "/data/recover/mongodb_dump/$dbname/"
    done' > mongorestore.log 2>&1 &
    ```
- Change the organization ID bound to the new environment:

  ```bash
  /usr/local/mongodb/bin/mongo -u root -p 123456 --authenticationDatabase admin
  ```

  Then, in the mongo shell:

  ```
  > use ClientLicense;
  > db.projects.updateMany({"projectID" : "New Environment Organization ID"},{$set:{"projectID" : "Legacy Environment Organization ID"}});
  ```
Elasticsearch Index Cleanup
- Before starting the microservices in the new environment, delete all existing Mingdao HAP business indexes in Elasticsearch.

- Log in to the Elasticsearch server and list the indexes in the new environment:

  ```bash
  curl -u elastic:123456 127.0.0.1:9200/_cat/indices
  ```

  Example output (the third column is the index name):

  ```
  green open chatmessage_190329 Ed7b0fAeT2C4MT7zdxykDQ 1 1 0 0 450b 225b
  green open actionlogb304361c-84ea-4f17-8ce2-bd11111115d3 SQx-1XftQ6e2Q95QSfjXZw 5 1 141 0 1.5mb 790.4kb
  green open usedata 59PEzs1uSsuHU-HWRy27jA 5 1 13 0 178.4kb 89.2kb
  green open actionlog9 UClpsSWkS7q1fIL6z6LxfQ 5 1 12 0 277.7kb 138.8kb
  green open kcnode_190329 2Zxqp0uyQKKRLq7xjtaC1w 1 1 0 0 450b 225b
  green open post_190723 0Cnp7rQjQRWb8gw5fFv9Dg 1 1 3 0 32.2kb 16.1kb
  green open task_190723 PT5sEOV_Sq6AI29vhUe1bQ 1 1 1 0 15.2kb 7.6kb
  ```

- Delete the existing Mingdao HAP business indexes.

  You can delete the related indexes one by one (replace the password and index names as needed):

  ```bash
  curl -XDELETE -u elastic:123456 127.0.0.1:9200/chatmessage_190329
  curl -XDELETE -u elastic:123456 127.0.0.1:9200/actionlogb304361c-84ea-4f17-8ce2-bd11111115d3
  curl -XDELETE -u elastic:123456 127.0.0.1:9200/usedata
  curl -XDELETE -u elastic:123456 127.0.0.1:9200/actionlog9
  curl -XDELETE -u elastic:123456 127.0.0.1:9200/kcnode_190329
  curl -XDELETE -u elastic:123456 127.0.0.1:9200/post_190723
  curl -XDELETE -u elastic:123456 127.0.0.1:9200/task_190723
  ```

  Or use the following script for one-step cleanup (replace the password as needed):

  ```bash
  elastic_pwd=123456
  for i in $(curl -u elastic:$elastic_pwd 127.0.0.1:9200/_cat/indices | awk '{print $3}'); do
    curl -XDELETE -u elastic:$elastic_pwd 127.0.0.1:9200/$i
  done
  ```

  Check the cleanup result:

  ```bash
  curl -u elastic:123456 127.0.0.1:9200/_cat/indices
  ```

  If no index is returned, the cleanup has succeeded.
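The one-step cleanup script relies on `awk '{print $3}'` to pull the index name out of each `_cat/indices` row. Demonstrated here on captured sample output rather than a live cluster:

```shell
# Column 3 of _cat/indices output is the index name.
sample_idx='green open chatmessage_190329 Ed7b0fAeT2C4MT7zdxykDQ 1 1 0 0 450b 225b
green open usedata 59PEzs1uSsuHU-HWRy27jA 5 1 13 0 178.4kb 89.2kb'

echo "$sample_idx" | awk '{print $3}'
# prints:
# chatmessage_190329
# usedata
```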
Redis Cache Cleanup
- Before starting the microservices in the new environment, clear the Redis cache data in the new environment.

- Log in to the Redis server in the new environment and execute the cache cleanup command (replace the password as needed):

  ```bash
  /usr/local/redis/bin/redis-cli -a 123456 "flushall"
  ```
Starting Microservices in the New Environment
- If the access address of the new environment has changed, be sure to update the access-address variables in the HAP microservices `config.yaml` and the File service `file.yaml` accordingly.

- Start the microservices.

- Use `kubectl get pod` to check that every pod reports a status of `2/2`.

- After the microservices are started, enter the `config` container:

  ```bash
  kubectl exec -it $(kubectl get pod | grep config | awk 'NR==1{print $1}') -- bash
  ```

- In the `config` container, refresh the mongodb indexes:

  ```bash
  source /entrypoint.sh && mongodbUpdateIndex
  ```

- In the `config` container, refresh the elasticsearch indexes:

  ```bash
  source /entrypoint.sh && resetCollaborationIndex
  ```
At this point, the data migration is complete 👏. Next, log in to the system using the new environment’s access address to verify.