Standalone to Cluster Migration
Stopping the Legacy Standalone Environment
- Check if there is message accumulation in the Kafka queue of the legacy standalone environment.
Enter the storage component container
docker exec -it $(docker ps | grep mingdaoyun-sc | awk '{print $1}') bash
Check whether there is a minio process running inside the current container
ps aux|grep [m]inio
- If there is output, it means the file storage service running in the current container is Version V2.
- If there is no output, it means the file storage service running in the current container is Version V1.
- The migration steps for V1 and V2 will differ when migrating the file storage service.
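- If you prefer a single check, the following sketch combines the two steps above; it assumes the storage container name contains mingdaoyun-sc, as in the command above.
# Report whether the storage container is running file storage V1 or V2
SC_ID=$(docker ps | grep mingdaoyun-sc | awk '{print $1}')
if docker exec "$SC_ID" ps aux | grep -q '[m]inio'; then echo "File storage: V2"; else echo "File storage: V1"; fi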
Check if there is accumulation in the Kafka workflow queue
/usr/local/kafka/bin/kafka-consumer-groups.sh --bootstrap-server ${ENV_KAFKA_ENDPOINTS:=127.0.0.1:9092} --describe --group md-workflow-consumer | awk '{count+=$6}END{print count}'
- An output of 0 means there is no accumulation, and the microservices can be stopped immediately.
- A number greater than 0 indicates there are workflow messages in the queue awaiting consumption.
- If the microservices are stopped while there is unconsumed data in the queue, then after the data migration the new environment will show those workflows as still queued, and they will not be consumed further.
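- If you want to wait for the queue to drain before stopping the services, a simple polling loop such as the sketch below can be used; it reuses the lag command above, and the 30-second interval is an arbitrary choice.
# Poll the md-workflow-consumer lag until it drops to 0
while true; do
  lag=$(/usr/local/kafka/bin/kafka-consumer-groups.sh --bootstrap-server ${ENV_KAFKA_ENDPOINTS:=127.0.0.1:9092} --describe --group md-workflow-consumer | awk '{count+=$6}END{print count}')
  echo "current lag: $lag"
  [ "$lag" = "0" ] && break
  sleep 30
done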
- In the directory of the installation manager, execute the command to stop all microservices
bash service.sh stopall
Starting Temporary Containers
- Start a container separately, mounting the Mingdao data directory
docker run -itd --entrypoint bash --rm -v /data/mingdao/script/volume/data/:/data/ 788b6f437789
- 788b6f437789 is the image ID of the storage component mingdaoyun-sc; you can retrieve it with docker images (see the lookup sketch below).
- If the Mingdao data directory in the standalone environment has been modified, use the actual path.
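- If you would rather not copy the image ID by hand, it can be looked up and substituted in one step; this is only a convenience sketch and assumes the image repository name contains mingdaoyun-sc.
# Look up the mingdaoyun-sc image ID and start the temporary container with it
SC_IMAGE=$(docker images | grep mingdaoyun-sc | awk '{print $3}' | head -n 1)
docker run -itd --entrypoint bash --rm -v /data/mingdao/script/volume/data/:/data/ "$SC_IMAGE"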
- Enter the newly started temporary container
docker exec -it 363625b14db6 bash
- 363625b14db6 is the ID of the newly started container; you can retrieve it with docker ps.
- Inside the temporary container, start the mysql, mongodb, and file services respectively
source /entrypoint.sh && mysqlStartup &
source /entrypoint.sh && mongodbStartup &
source /entrypoint.sh && filev1Run &
If the file storage service running in the container is V2, additionally start the minio service:
echo "127.0.0.1 sc" >> /etc/hosts
source /entrypoint.sh && minioStartup &
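- Before continuing, it can help to confirm the services actually came up inside the temporary container. The process names below are assumptions based on the startup functions above; for V1 the file-service process name may differ.
# Inside the temporary container: confirm the database and file-service processes are running
ps aux | grep -E '[m]ysqld|[m]ongod|[m]inio'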
File Storage Migration
The built-in file storage version in the standalone environment may vary; follow the subsection below (File Storage V1 or File Storage V2) that matches the version identified earlier.
File Storage V1
- In the temporary container, configure file storage aliases for the standalone (minio_old) and cluster (minio_new) environments
mc alias set minio_old http://127.0.0.1:9000 mdstorage eBxExGQJNhGosgv5FQJiVNqH
mc alias set minio_new http://10.206.0.6:9011 mingdao T7RxxxxxxxxxxdRky-
- The address and authentication info for minio_old do not need modification
- For minio_new, replace the IP, port, and authentication info with the access address and credentials of the MinIO service in the new cluster environment
- Copy the file storage data from the standalone environment to the cluster environment's MinIO
mc mirror minio_old/mdmedia minio_new/mdmedia
mc mirror minio_old/mdoc minio_new/mdoc
mc mirror minio_old/mdpic minio_new/mdpic
mc mirror minio_old/mdpub minio_new/mdpub
File Storage V2
- In the temporary container, configure file storage aliases for the standalone (minio_old) and cluster (minio_new) environments
mc alias set minio_old http://127.0.0.1:9010 mdstorage eBxExGQJNhGosgv5FQJiVNqH
mc alias set minio_new http://10.206.0.6:9011 mingdao T7RxxxxxxxxxxdRky-
- The address and authentication info for minio_old do not need modification
- For minio_new, replace the IP, port, and authentication info with the access address and credentials of the MinIO service in the new cluster environment
- Copy the file storage data from the standalone environment to the cluster environment's MinIO
mc mirror minio_old/mdmedia minio_new/mdmedia
mc mirror minio_old/mdoc minio_new/mdoc
mc mirror minio_old/mdpic minio_new/mdpic
mc mirror minio_old/mdpub minio_new/mdpub
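- Whichever version applies, once the mirror commands finish you can compare the two sides with mc diff; an empty output for a bucket means the source and target match.
# Compare each bucket between the old and new storage; no output means the copies match
for bucket in mdmedia mdoc mdpic mdpub; do
  echo "== $bucket =="
  mc diff minio_old/$bucket minio_new/$bucket
done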
Database Migration
MySQL Data Export
- In the temporary container, create a directory for the MySQL data export
mkdir -p /data/backup/mysql_dump
- Enter the backup directory
cd /data/backup/
- Export the MySQL data
for dbname in MDApplication MDCalendar MDLog MDProject MDStructure; do
mysqldump --set-gtid-purged=off --default-character-set=utf8mb4 -h127.0.0.1 -P3306 -uroot -p123456 $dbname > mysql_dump/$dbname.sql
done
- The exported data is persisted on the host at /data/mingdao/script/volume/data/backup/mysql_dump
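- mysqldump normally appends a "Dump completed" comment to each file when it finishes successfully, so a quick sanity check of the five exports could look like the sketch below (run from /data/backup/ in the temporary container).
# Verify that each export file ends with mysqldump's completion marker
for dbname in MDApplication MDCalendar MDLog MDProject MDStructure; do
  tail -n 1 mysql_dump/$dbname.sql | grep -q 'Dump completed' && echo "$dbname: OK" || echo "$dbname: INCOMPLETE"
done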
MongoDB Data Export
- In the temporary container, create a directory for the MongoDB data export
mkdir -p /data/backup/mongodb_dump
- Enter the backup directory
cd /data/backup/
- Create a list of the MongoDB databases to export
cat > mongodb.list <<EOF
MDAlert
MDChatTop
MDGroup
MDHistory
MDLicense
MDNotification
MDSso
MDUser
commonbase
mdIdentification
mdactionlog
mdapproles
mdapprove
mdapps
mdattachment
mdcalendar
mdcategory
mdcheck
mddossier
mdemail
mdform
mdgroups
mdinbox
mdkc
mdmap
mdmobileaddress
mdpost
mdreportdata
mdroles
mdsearch
mdservicedata
mdsms
mdtag
mdtransfer
mdworkflow
mdworksheet
mdworkweixin
mdwsrows
pushlog
taskcenter
mdintegration
mdworksheetlog
mdworksheetsearch
mddatapipeline
mdwfplugin
mdpayment
EOF
- If aggregate tables are enabled in the old environment, also add the mdaggregationwsrows database to the MongoDB export list
- Export the MongoDB data
for i in $(cat mongodb.list);do mongodump --uri mongodb://127.0.0.1:27017/$i --numParallelCollections=6 --gzip -o ./mongodb_dump/ ;done
- The --numParallelCollections parameter specifies the number of collections mongodump processes in parallel. The default is 4; the example uses 6. Adjust as needed depending on server performance.
- For large datasets, the export can be time-consuming. You can run it in the background using nohup:
nohup bash -c 'for i in $(cat mongodb.list); do mongodump --uri mongodb://127.0.0.1:27017/$i --numParallelCollections=6 --gzip -o ./mongodb_dump/; done' > mongodump.log 2>&1 &
- The exported data is persisted on the host at /data/mingdao/script/volume/data/backup/mongodb_dump
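- As a quick sanity check before transferring the data, you can confirm that one dump directory exists for every database in the export list (run from /data/backup/ in the temporary container).
# Compare the number of dumped databases against the export list
echo "expected: $(wc -l < mongodb.list)"
echo "dumped:   $(ls mongodb_dump/ | wc -l)"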
Data Transmission
MySQL Data Transmission
- Start a receiver on the MySQL master node in the new environment
mkdir /data/recover && cd /data/recover
nc -l 9900 | tar -zxvf -
- On the host in the legacy environment, enter the directory where the exported data is stored and start the sender (192.168.1.1 is the address of the MySQL master node in the new environment)
cd /data/mingdao/script/volume/data/backup
tar -zcvf - mysql_dump | nc 192.168.1.1 9900
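- If nc is not usable between the two hosts, the same directory can be copied over SSH instead; this sketch assumes rsync and SSH access from the legacy host to the MySQL master node (adjust the address and user as needed).
# Alternative to nc: copy the dump over SSH from the legacy host
rsync -avz /data/mingdao/script/volume/data/backup/mysql_dump root@192.168.1.1:/data/recover/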
MongoDB Data Transmission
- Start a receiver on the MongoDB primary node in the new environment
mkdir /data/recover && cd /data/recover
nc -l 9900 | tar -zxvf -
- On the host in the legacy environment, enter the directory where the exported data is stored and start the sender (192.168.1.2 is the address of the MongoDB primary node in the new environment)
cd /data/mingdao/script/volume/data/backup
tar -zcvf - mongodb_dump | nc 192.168.1.2 9900
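- As a quick integrity check after the transfer, compare the total size of the dump directory on both hosts; the two numbers should match.
# On the legacy host
du -sb /data/mingdao/script/volume/data/backup/mongodb_dump
# On the MongoDB primary node in the new environment
du -sb /data/recover/mongodb_dump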
Database Restoration
- Restoring MySQL and MongoDB data into the new environment first deletes and clears all business databases there. If the new environment contains any data you need, back it up and export it in advance!
- Before restoring data, stop the microservices in the new environment in advance.
MySQL Data Restoration
- Delete the MySQL databases used by the Mingdao HAP system in the new environment
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDApplication;'
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDCalendar;'
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDLog;'
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDProject;'
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDStructure;'
- Create the MySQL databases used by the Mingdao HAP system in the new environment
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDApplication;'
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDCalendar;'
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDLog;'
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDProject;'
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDStructure;'
- Modify utf8 to utf8mb4 in the SQL files
for dbname in MDApplication MDCalendar MDLog MDProject MDStructure; do
sed -ri 's/CHARSET=utf8(;| )/CHARSET=utf8mb4\1/g' /data/recover/mysql_dump/$dbname.sql
done
sed -i 's/CHARACTER SET utf8 COLLATE utf8_bin //' /data/recover/mysql_dump/MDProject.sql
- Import the backed-up MySQL data into the new environment
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDApplication < /data/recover/mysql_dump/MDApplication.sql
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDCalendar < /data/recover/mysql_dump/MDCalendar.sql
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDLog < /data/recover/mysql_dump/MDLog.sql
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDProject < /data/recover/mysql_dump/MDProject.sql
/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDStructure < /data/recover/mysql_dump/MDStructure.sql
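- To sanity-check the import, you can count the tables restored into each database and compare the numbers against the legacy environment; a sketch using information_schema is shown below.
# Count tables per restored database (compare with the same query run in the legacy environment)
for dbname in MDApplication MDCalendar MDLog MDProject MDStructure; do
  count=$(/usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 -N <<< "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='$dbname';")
  echo "$dbname: $count tables"
done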
MongoDB Data Restoration
- Create a list of the MongoDB databases used by the Mingdao HAP system to be deleted in the new environment
cat > dropMongodb.list <<EOF
use MDAlert
db.dropDatabase()
use MDChatTop
db.dropDatabase()
use MDGroup
db.dropDatabase()
use MDHistory
db.dropDatabase()
use MDLicense
db.dropDatabase()
use MDNotification
db.dropDatabase()
use MDSso
db.dropDatabase()
use MDUser
db.dropDatabase()
use commonbase
db.dropDatabase()
use mdIdentification
db.dropDatabase()
use mdactionlog
db.dropDatabase()
use mdapproles
db.dropDatabase()
use mdapprove
db.dropDatabase()
use mdapps
db.dropDatabase()
use mdattachment
db.dropDatabase()
use mdcalendar
db.dropDatabase()
use mdcategory
db.dropDatabase()
use mdcheck
db.dropDatabase()
use mddossier
db.dropDatabase()
use mdemail
db.dropDatabase()
use mdform
db.dropDatabase()
use mdgroups
db.dropDatabase()
use mdinbox
db.dropDatabase()
use mdkc
db.dropDatabase()
use mdmap
db.dropDatabase()
use mdmobileaddress
db.dropDatabase()
use mdpost
db.dropDatabase()
use mdreportdata
db.dropDatabase()
use mdroles
db.dropDatabase()
use mdsearch
db.dropDatabase()
use mdservicedata
db.dropDatabase()
use mdsms
db.dropDatabase()
use mdtag
db.dropDatabase()
use mdtransfer
db.dropDatabase()
use mdworkflow
db.dropDatabase()
use mdworksheet
db.dropDatabase()
use mdworkweixin
db.dropDatabase()
use mdwsrows
db.dropDatabase()
use pushlog
db.dropDatabase()
use taskcenter
db.dropDatabase()
use mdintegration
db.dropDatabase()
use mdworksheetlog
db.dropDatabase()
use mdworksheetsearch
db.dropDatabase()
use mddatapipeline
db.dropDatabase()
use mdwfplugin
db.dropDatabase()
use mdpayment
db.dropDatabase()
EOF
- If the new environment has aggregate tables enabled, also add the mdaggregationwsrows database to the MongoDB deletion list
- Delete the MongoDB databases used by the Mingdao HAP system in the new environment
/usr/local/mongodb/bin/mongo mongodb://root:123456@127.0.0.1:27017/admin < dropMongodb.list
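- You can confirm the cleanup by listing the remaining databases; after the drops, only system databases such as admin, config, and local should remain.
echo 'show dbs' | /usr/local/mongodb/bin/mongo mongodb://root:123456@127.0.0.1:27017/admin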
- Download the MongoDB database-tools package in the new environment; it contains the mongorestore command used for data restoration. Download links:
- RedHat / CentOS 8.0 x64: https://fastdl.mongodb.org/tools/db/mongodb-database-tools-rhel80-x86_64-100.9.3.tgz
- Debian 12.0 x64: https://fastdl.mongodb.org/tools/db/mongodb-database-tools-debian12-x86_64-100.9.3.tgz
- Others: obtain the tool package from the MongoDB database-tools release archive (download version 100.9.3, which is recommended so that it matches the mongodump version): https://www.mongodb.com/try/download/database-tools/releases/archive
- For domestic operating systems such as Kylin, the RedHat 8.0 build is usually used
- After downloading, upload the package to the server where MongoDB is located and extract it (an example is shown below)
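- As an example, downloading and extracting the RedHat 8.0 build might look like the sketch below (adjust the URL for your OS; /data is used here only as a working directory, and wget is assumed to be available). The extracted bin directory contains mongorestore.
cd /data
wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-rhel80-x86_64-100.9.3.tgz
tar -xzf mongodb-database-tools-rhel80-x86_64-100.9.3.tgz
ls /data/mongodb-database-tools-rhel80-x86_64-100.9.3/bin/mongorestore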
- Import the backed-up MongoDB data into the new environment (replace /your_path/ with the actual path to mongorestore)
for dbname in $(ls /data/recover/mongodb_dump/);do
/your_path/mongorestore --host 127.0.0.1 -u root -p 123456 --authenticationDatabase admin --numParallelCollections=6 --numInsertionWorkersPerCollection=2 -d $dbname --gzip --dir /data/recover/mongodb_dump/$dbname/
done
- For large datasets, restoration takes time; you may adjust the following parameters:
- --numParallelCollections specifies the number of collections mongorestore processes in parallel. The default is 4; the example uses 6. Adjust as needed depending on server performance.
- --numInsertionWorkersPerCollection specifies the number of insertion worker threads per collection. The default is 1; the example uses 2. Adjust as needed for the hardware.
- You can also run this command in the background using nohup:
nohup bash -c '
for dbname in $(ls /data/recover/mongodb_dump/); do
/your_path/mongorestore --host 127.0.0.1 -u root -p 123456 --authenticationDatabase admin --numParallelCollections=6 --numInsertionWorkersPerCollection=2 -d "$dbname" --gzip --dir "/data/recover/mongodb_dump/$dbname/"
done' > mongorestore.log 2>&1 &
- Change the organization ID bound to the new environment
/usr/local/mongodb/bin/mongo -u root -p 123456 --authenticationDatabase admin
> use ClientLicense;
> db.projects.updateMany({"projectID" : "New Environment Organization ID"},{$set:{"projectID" : "Legacy Environment Organization ID"}});
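- If you are unsure of the current organization ID in the new environment, you can list it from the same collection before running the update (the collection and field names are taken from the command above).
> use ClientLicense;
> db.projects.find({}, {projectID: 1});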
Elasticsearch Index Cleanup
- Before starting the microservices in the new environment, delete all current Mingdao HAP business indexes in Elasticsearch
- Log in to the Elasticsearch server and list the indexes in the new environment
$ curl -u elastic:123456 127.0.0.1:9200/_cat/indices
green open chatmessage_190329 Ed7b0fAeT2C4MT7zdxykDQ 1 1 0 0 450b 225b
green open actionlogb304361c-84ea-4f17-8ce2-bd11111115d3 SQx-1XftQ6e2Q95QSfjXZw 5 1 141 0 1.5mb 790.4kb
green open usedata 59PEzs1uSsuHU-HWRy27jA 5 1 13 0 178.4kb 89.2kb
green open actionlog9 UClpsSWkS7q1fIL6z6LxfQ 5 1 12 0 277.7kb 138.8kb
green open kcnode_190329 2Zxqp0uyQKKRLq7xjtaC1w 1 1 0 0 450b 225b
green open post_190723 0Cnp7rQjQRWb8gw5fFv9Dg 1 1 3 0 32.2kb 16.1kb
green open task_190723 PT5sEOV_Sq6AI29vhUe1bQ 1 1 1 0 15.2kb 7.6kb
- The third column in the output is the index name
- Delete the existing Mingdao HAP business indexes
$ curl -XDELETE -u elastic:123456 127.0.0.1:9200/chatmessage_190329
$ curl -XDELETE -u elastic:123456 127.0.0.1:9200/actionlogb304361c-84ea-4f17-8ce2-bd11111115d3
$ curl -XDELETE -u elastic:123456 127.0.0.1:9200/usedata
$ curl -XDELETE -u elastic:123456 127.0.0.1:9200/actionlog9
$ curl -XDELETE -u elastic:123456 127.0.0.1:9200/kcnode_190329
$ curl -XDELETE -u elastic:123456 127.0.0.1:9200/post_190723
$ curl -XDELETE -u elastic:123456 127.0.0.1:9200/task_190723
# Alternatively, refer to the one-step cleanup below
elastic_pwd=123456
for i in $(curl -u elastic:$elastic_pwd 127.0.0.1:9200/_cat/indices|awk '{print $3}'); do
curl -XDELETE -u elastic:$elastic_pwd 127.0.0.1:9200/$i
done
# Check
curl -u elastic:123456 127.0.0.1:9200/_cat/indices
Redis Cache Cleanup
- Before starting the microservices in the new environment, clear the Redis cache data in the new environment
- Log in to the Redis server in the new environment and execute the cache cleanup command
/usr/local/redis/bin/redis-cli -a 123456 "flushall"
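- As a quick check, DBSIZE should report 0 keys once the flush has completed.
/usr/local/redis/bin/redis-cli -a 123456 dbsize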
Starting Microservices in the New Environment
- If the access address in the new environment has changed, make sure to update the access address variables in the microservices config.yaml and the file storage file.yaml accordingly
- Start the microservices
- Use the kubectl get pod command to check whether the status of all pods is 2/2
- Pay particular attention to whether the actionlog service is in the 2/2 state
- After migration, the actionlog service initializes its data before starting. If the dataset is large, it may not start within the time specified by resources.livenessProbe.initialDelaySeconds and may keep restarting.
- If it keeps restarting, you can temporarily increase the value of resources.livenessProbe.initialDelaySeconds for the actionlog service to allow the initialization to finish. Once complete, the pod will enter the 2/2 status.
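- A couple of commands for watching the actionlog pod during this phase; the name filter assumes the service name appears in the pod name, mirroring the grep used for the config pod below.
# Watch the actionlog pod status and restart count
kubectl get pod | grep actionlog
# Inspect recent events if the pod is not reaching 2/2
kubectl describe pod $(kubectl get pod | grep actionlog | awk '{print $1}') | tail -n 20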
- After the microservices are started, enter the config container
kubectl exec -it $(kubectl get pod | grep config | awk '{print $1}') bash
- In the config container, refresh the MongoDB indexes
source /entrypoint.sh && mongodbUpdateIndex
- In the config container, refresh the Elasticsearch indexes
source /entrypoint.sh && resetCollaborationIndex
At this point, the data migration is complete 👏. Next, log in to the system using the new environment’s access address to verify.