Standalone to Cluster Migration

Stopping the Legacy Standalone Environment

  1. Check the file storage version of the legacy standalone environment and whether there is message accumulation in its Kafka queue.

    Enter the storage component container

    docker exec -it $(docker ps | grep mingdaoyun-sc | awk '{print $1}') bash

    Check whether there is a minio process running inside the current container

    ps aux|grep [m]inio
    • If there is output, it means the file storage service running in the current container is Version V2.
    • If there is no output, it means the file storage service running in the current container is Version V1.
    • The migration steps for V1 and V2 will differ when migrating the file storage service.

    Check if there is accumulation in the Kafka workflow queue

    /usr/local/kafka/bin/kafka-consumer-groups.sh --bootstrap-server ${ENV_KAFKA_ENDPOINTS:=127.0.0.1:9092} --describe --group md-workflow-consumer | awk '{count+=$6}END{print count}'
    • If the output is 0, there is no backlog and the microservices can be stopped immediately.
    • A value greater than 0 means there are workflow messages in the queue that have not yet been consumed.
    • If the microservices are stopped while unconsumed messages remain in the queue, then after the data migration the new environment will show those workflows as still queued, and they will never be consumed. If there is a backlog, wait for it to drain first, as shown below.
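    • A minimal polling sketch that waits for the backlog to drain, reusing the same consumer-group command as above (adjust the sleep interval as needed):

      while true; do
        lag=$(/usr/local/kafka/bin/kafka-consumer-groups.sh --bootstrap-server ${ENV_KAFKA_ENDPOINTS:=127.0.0.1:9092} --describe --group md-workflow-consumer | awk '{count+=$6}END{print count+0}')
        echo "current workflow queue lag: $lag"
        [ "$lag" -eq 0 ] && break   # backlog drained, safe to stop the microservices
        sleep 30
      done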
  2. In the directory of the installation manager, execute the command to stop all microservices

    bash service.sh stopall

Starting Temporary Containers

  1. Start a container separately, mounting the Mingdao data directory

    docker run -itd --entrypoint bash --rm -v /data/mingdao/script/volume/data/:/data/ 788b6f437789
    • 788b6f437789 is the image ID of the storage component mingdaoyun-sc; you can retrieve it with docker images.
    • If the data directory of Mingdao in the standalone environment has been modified, use the actual path.
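    • Alternatively, the image ID can be resolved inline, assuming only one mingdaoyun-sc image is present locally:

      docker run -itd --entrypoint bash --rm -v /data/mingdao/script/volume/data/:/data/ $(docker images | grep mingdaoyun-sc | awk '{print $3}' | head -n 1)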
  2. Enter the newly started temporary container

    docker exec -it 363625b14db6 bash
    • 363625b14db6 is the ID of the newly started container; you can retrieve it with docker ps.
  3. Inside the temporary container, start mysql, mongodb, and file services respectively

    source /entrypoint.sh  && mysqlStartup &
    source /entrypoint.sh && mongodbStartup &
    source /entrypoint.sh && filev1Run &

    If the file storage service running in the container is V2, also start the minio service

    echo "127.0.0.1 sc" >> /etc/hosts
    source /entrypoint.sh && minioStartup &
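    • Before proceeding, you can verify that the services are running inside the temporary container (the bracket trick avoids matching grep itself; a minio process only appears for V2):

      ps aux | grep -E '[m]ysqld|[m]ongod|[m]inio'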

File Storage Migration

The built-in file storage version in the standalone environment may vary; select the corresponding migration guide based on the version (V1 or V2) identified earlier.

  1. In the temporary container, configure the cluster environment file storage information

    mc alias set minio_old  http://127.0.0.1:9000 mdstorage eBxExGQJNhGosgv5FQJiVNqH
    mc alias set minio_new http://10.206.0.6:9011 mingdao T7RxxxxxxxxxxdRky
    • Address and authentication info for minio_old do not need modification

    • For minio_new, replace IP, port, and authentication info with the access address and credentials of the MinIO service in the new cluster environment
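    • Before mirroring, you can verify that both aliases are reachable by listing their buckets:

      mc ls minio_old
      mc ls minio_new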

  2. Copy file storage data from the standalone environment to the cluster environment’s MinIO

    mc mirror minio_old/mdmedia minio_new/mdmedia
    mc mirror minio_old/mdoc minio_new/mdoc
    mc mirror minio_old/mdpic minio_new/mdpic
    mc mirror minio_old/mdpub minio_new/mdpub
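    • After the mirror completes, you can compare the size and object count of each bucket on both sides; the totals should be essentially identical:

      for bucket in mdmedia mdoc mdpic mdpub; do
        echo "== $bucket =="
        mc du minio_old/$bucket
        mc du minio_new/$bucket
      done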

Database Migration

MySQL Data Export

  1. In the temporary container, create a directory for MySQL data export

    mkdir -p /data/backup/mysql_dump
  2. Enter the backup directory

    cd /data/backup/
  3. Export MySQL data

    for dbname in MDApplication MDCalendar MDLog MDProject MDStructure; do
    mysqldump --set-gtid-purged=off --default-character-set=utf8mb4 -h127.0.0.1 -P3306 -uroot -p123456 $dbname > mysql_dump/$dbname.sql
    done
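    • Optionally, confirm each dump completed: the files should be non-empty and end with the mysqldump completion marker.

      for dbname in MDApplication MDCalendar MDLog MDProject MDStructure; do
        ls -lh mysql_dump/$dbname.sql
        tail -n 1 mysql_dump/$dbname.sql   # expect a line like "-- Dump completed on ..."
      done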
  4. Exported data will be persisted on the host at /data/mingdao/script/volume/data/backup/mysql_dump

MongoDB Data Export

  1. In the temporary container, create a directory for MongoDB data export

    mkdir -p /data/backup/mongodb_dump
  2. Enter the backup directory

    cd /data/backup/
  3. Create a list of MongoDB databases to export

    cat > mongodb.list <<EOF
    MDAlert
    MDChatTop
    MDGroup
    MDHistory
    MDLicense
    MDNotification
    MDSso
    MDUser
    commonbase
    mdIdentification
    mdactionlog
    mdapproles
    mdapprove
    mdapps
    mdattachment
    mdcalendar
    mdcategory
    mdcheck
    mddossier
    mdemail
    mdform
    mdgroups
    mdinbox
    mdkc
    mdmap
    mdmobileaddress
    mdpost
    mdreportdata
    mdroles
    mdsearch
    mdservicedata
    mdsms
    mdtag
    mdtransfer
    mdworkflow
    mdworksheet
    mdworkweixin
    mdwsrows
    pushlog
    taskcenter
    mdintegration
    mdworksheetlog
    mdworksheetsearch
    mddatapipeline
    mdwfplugin
    mdpayment
    EOF
    • If aggregate tables are enabled in the old environment, also add the mdaggregationwsrows database to the MongoDB export list
  4. Export MongoDB data

    for i in $(cat mongodb.list);do mongodump --uri mongodb://127.0.0.1:27017/$i --numParallelCollections=6 --gzip -o ./mongodb_dump/ ;done
    • The --numParallelCollections parameter specifies the number of collections processed in parallel by mongodump. Default is 4. The example uses 6; adjust as needed depending on server performance.

    • For large datasets, export can be time-consuming. You can run it in the background using nohup.

      nohup bash -c 'for i in $(cat mongodb.list); do mongodump --uri mongodb://127.0.0.1:27017/$i --numParallelCollections=6 --gzip -o ./mongodb_dump/; done' > mongodump.log 2>&1 &
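    • After the export finishes, you can check that a dump directory was produced for every database in the list (databases that do not exist in the legacy environment will simply have no directory):

      for i in $(cat mongodb.list); do
        [ -d ./mongodb_dump/$i ] || echo "missing dump for $i"
      done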
  5. Exported data will be persisted on the host at /data/mingdao/script/volume/data/backup/mongodb_dump

Data Transmission

MySQL Data Transmission

  1. Start a receiver on the MySQL master node in the new environment

    mkdir /data/recover && cd /data/recover 

    nc -l 9900 | tar -zxvf -
  2. On the host in the legacy environment, enter the directory where exported data is stored and start a sender

    cd /data/mingdao/script/volume/data/backup

    tar -zcvf - mysql_dump | nc 192.168.1.1 9900
    • Replace 192.168.1.1 with the address of the MySQL master node in the new environment, where the receiver was started

MongoDB Data Transmission

  1. Start a receiver on the MongoDB primary node in the new environment

    mkdir /data/recover && cd /data/recover

    nc -l 9900 | tar -zxvf -
  2. On the host in the legacy environment, enter the directory where exported data is stored and start a sender

    cd /data/mingdao/script/volume/data/backup

    tar -zcvf - mongodb_dump | nc 192.168.1.2 9900
    • Replace 192.168.1.2 with the address of the MongoDB primary node in the new environment, where the receiver was started
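    • Optionally, compare the total size on both ends after the transfer; the numbers should be roughly equal (run on the legacy host and on the receiving node respectively):

      # on the legacy host
      du -sh /data/mingdao/script/volume/data/backup/mongodb_dump
      # on the MongoDB primary node in the new environment
      du -sh /data/recover/mongodb_dump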

Database Restoration

caution
  • Restoring MySQL and MongoDB data into the new environment will delete and clear all existing business databases there. If the new environment already contains any data you need to keep, back it up and export it in advance!

  • Stop the microservices in the new environment before restoring data.

MySQL Data Restoration

  1. Delete the MySQL databases used by the Mingdao HAP system in the new environment

    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDApplication;'
    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDCalendar;'
    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDLog;'
    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDProject;'
    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'drop database MDStructure;'
  2. Create the MySQL databases used by the Mingdao HAP system in the new environment

    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDApplication;'
    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDCalendar;'
    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDLog;'
    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDProject;'
    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 <<< 'create database MDStructure;'
  3. Modify utf8 to utf8mb4 in SQL files

    for dbname in MDApplication MDCalendar MDLog MDProject MDStructure; do
    sed -ri 's/CHARSET=utf8(;| )/CHARSET=utf8mb4\1/g' /data/recover/mysql_dump/$dbname.sql
    done
    sed -i 's/CHARACTER SET utf8 COLLATE utf8_bin //' /data/recover/mysql_dump/MDProject.sql
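    • You can confirm that no plain utf8 table definitions remain; the following should report 0 matches for every file:

      grep -c 'CHARSET=utf8[; ]' /data/recover/mysql_dump/*.sql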
  4. Import the backed-up MySQL data into the new environment

    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDApplication < /data/recover/mysql_dump/MDApplication.sql
    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDCalendar < /data/recover/mysql_dump/MDCalendar.sql
    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDLog < /data/recover/mysql_dump/MDLog.sql
    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDProject < /data/recover/mysql_dump/MDProject.sql
    /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 --default-character-set utf8mb4 -D MDStructure < /data/recover/mysql_dump/MDStructure.sql
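    • After the import, you can quickly confirm that each database now contains tables:

      for dbname in MDApplication MDCalendar MDLog MDProject MDStructure; do
        echo "== $dbname =="
        /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p123456 -D $dbname <<< 'show tables;' | wc -l   # count includes one header line
      done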

MongoDB Data Restoration

  1. Create a list of the Mingdao HAP MongoDB databases to be deleted in the new environment

    cat > dropMongodb.list <<EOF
    use MDAlert
    db.dropDatabase()
    use MDChatTop
    db.dropDatabase()
    use MDGroup
    db.dropDatabase()
    use MDHistory
    db.dropDatabase()
    use MDLicense
    db.dropDatabase()
    use MDNotification
    db.dropDatabase()
    use MDSso
    db.dropDatabase()
    use MDUser
    db.dropDatabase()
    use commonbase
    db.dropDatabase()
    use mdIdentification
    db.dropDatabase()
    use mdactionlog
    db.dropDatabase()
    use mdapproles
    db.dropDatabase()
    use mdapprove
    db.dropDatabase()
    use mdapps
    db.dropDatabase()
    use mdattachment
    db.dropDatabase()
    use mdcalendar
    db.dropDatabase()
    use mdcategory
    db.dropDatabase()
    use mdcheck
    db.dropDatabase()
    use mddossier
    db.dropDatabase()
    use mdemail
    db.dropDatabase()
    use mdform
    db.dropDatabase()
    use mdgroups
    db.dropDatabase()
    use mdinbox
    db.dropDatabase()
    use mdkc
    db.dropDatabase()
    use mdmap
    db.dropDatabase()
    use mdmobileaddress
    db.dropDatabase()
    use mdpost
    db.dropDatabase()
    use mdreportdata
    db.dropDatabase()
    use mdroles
    db.dropDatabase()
    use mdsearch
    db.dropDatabase()
    use mdservicedata
    db.dropDatabase()
    use mdsms
    db.dropDatabase()
    use mdtag
    db.dropDatabase()
    use mdtransfer
    db.dropDatabase()
    use mdworkflow
    db.dropDatabase()
    use mdworksheet
    db.dropDatabase()
    use mdworkweixin
    db.dropDatabase()
    use mdwsrows
    db.dropDatabase()
    use pushlog
    db.dropDatabase()
    use taskcenter
    db.dropDatabase()
    use mdintegration
    db.dropDatabase()
    use mdworksheetlog
    db.dropDatabase()
    use mdworksheetsearch
    db.dropDatabase()
    use mddatapipeline
    db.dropDatabase()
    use mdwfplugin
    db.dropDatabase()
    use mdpayment
    db.dropDatabase()
    EOF
    • If the new environment has aggregate tables enabled, also add the mdaggregationwsrows database to the MongoDB deletion list
  2. Delete MongoDB databases used by the Mingdao HAP system in the new environment

    /usr/local/mongodb/bin/mongo mongodb://root:123456@127.0.0.1:27017/admin < dropMongodb.list
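    • You can list the remaining databases to confirm that only system databases such as admin, config, and local are left:

      /usr/local/mongodb/bin/mongo mongodb://root:123456@127.0.0.1:27017/admin <<< 'show dbs'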
  3. Download the MongoDB database-tools package in the new environment, which contains the mongorestore command for data restoration

    Download link:

    https://fastdl.mongodb.org/tools/db/mongodb-database-tools-rhel80-x86_64-100.9.3.tgz
    • After downloading, upload the package to the server where MongoDB is located, then extract it, as shown below
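    • For example, assuming the package was uploaded to the current working directory:

      tar -zxvf mongodb-database-tools-rhel80-x86_64-100.9.3.tgz
      # mongorestore is then typically located under ./mongodb-database-tools-rhel80-x86_64-100.9.3/bin/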
  4. Import the backed-up MongoDB data into the new environment (replace /your_path with the actual path to the extracted mongorestore binary)

    for dbname in $(ls /data/recover/mongodb_dump/);do 
    /your_path/mongorestore --host 127.0.0.1 -u root -p 123456 --authenticationDatabase admin --numParallelCollections=6 --numInsertionWorkersPerCollection=2 -d $dbname --gzip --dir /data/recover/mongodb_dump/$dbname/
    done
    • For large datasets, restoration takes time; you may adjust the following parameters:
      • --numParallelCollections specifies the number of collections processed in parallel by mongorestore. Default is 4. The example uses 6; adjust as needed depending on server performance.
      • --numInsertionWorkersPerCollection specifies the number of worker threads per collection. Default is 1. This example uses 2; adjust as needed for hardware.
    • You can also run this command in the background using nohup:
      nohup bash -c '
      for dbname in $(ls /data/recover/mongodb_dump/); do
      /your_path/mongorestore --host 127.0.0.1 -u root -p 123456 --authenticationDatabase admin --numParallelCollections=6 --numInsertionWorkersPerCollection=2 -d "$dbname" --gzip --dir "/data/recover/mongodb_dump/$dbname/"
      done' > mongorestore.log 2>&1 &
  5. Change the organization ID bound to the new environment

    /usr/local/mongodb/bin/mongo -u root -p 123456 --authenticationDatabase admin

    > use ClientLicense;
    > db.projects.updateMany({"projectID" : "New Environment Organization ID"},{$set:{"projectID" : "Legacy Environment Organization ID"}});
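    • To confirm the change took effect, you can query for the legacy organization ID; the matching record should now be returned:

      > db.projects.find({"projectID" : "Legacy Environment Organization ID"});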

Elasticsearch Index Cleanup

  1. Before starting microservices in the new environment, delete all current Mingdao HAP business indexes in Elasticsearch

  2. Log in to the Elasticsearch server and list the indexes in the new environment

    $ curl -u elastic:123456 127.0.0.1:9200/_cat/indices
    green open chatmessage_190329 Ed7b0fAeT2C4MT7zdxykDQ 1 1 0 0 450b 225b
    green open actionlogb304361c-84ea-4f17-8ce2-bd11111115d3 SQx-1XftQ6e2Q95QSfjXZw 5 1 141 0 1.5mb 790.4kb
    green open usedata 59PEzs1uSsuHU-HWRy27jA 5 1 13 0 178.4kb 89.2kb
    green open actionlog9 UClpsSWkS7q1fIL6z6LxfQ 5 1 12 0 277.7kb 138.8kb
    green open kcnode_190329 2Zxqp0uyQKKRLq7xjtaC1w 1 1 0 0 450b 225b
    green open post_190723 0Cnp7rQjQRWb8gw5fFv9Dg 1 1 3 0 32.2kb 16.1kb
    green open task_190723 PT5sEOV_Sq6AI29vhUe1bQ 1 1 1 0 15.2kb 7.6kb
    • The third column in the output is the index name
  3. Delete existing Mingdao HAP business indexes

    $ curl -XDELETE -u elastic:123456 127.0.0.1:9200/chatmessage_190329
    $ curl -XDELETE -u elastic:123456 127.0.0.1:9200/actionlogb304361c-84ea-4f17-8ce2-bd11111115d3
    $ curl -XDELETE -u elastic:123456 127.0.0.1:9200/usedata
    $ curl -XDELETE -u elastic:123456 127.0.0.1:9200/actionlog9
    $ curl -XDELETE -u elastic:123456 127.0.0.1:9200/kcnode_190329
    $ curl -XDELETE -u elastic:123456 127.0.0.1:9200/post_190723
    $ curl -XDELETE -u elastic:123456 127.0.0.1:9200/task_190723
    # Alternatively, clean up all indexes in one step
    elastic_pwd=123456
    for i in $(curl -u elastic:$elastic_pwd 127.0.0.1:9200/_cat/indices|awk '{print $3}'); do
    curl -XDELETE -u elastic:$elastic_pwd 127.0.0.1:9200/$i
    done

    # Check
    curl -u elastic:123456 127.0.0.1:9200/_cat/indices

Redis Cache Cleanup

  1. Before starting microservices in the new environment, clear the Redis cache data there

  2. Log in to the Redis server in the new environment and execute the cache cleanup command

    /usr/local/redis/bin/redis-cli -a 123456 "flushall"
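    • You can verify that the cache was cleared; the key count of the current database should be 0:

      /usr/local/redis/bin/redis-cli -a 123456 dbsize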

Starting Microservices in the New Environment

  1. If the access address of the new environment differs from that of the legacy environment, make sure to update the access address variables in the microservices config.yaml and the file storage file.yaml accordingly

  2. Start the microservices

  3. Use the kubectl get pod command to check that all pods show 2/2 in the READY column

    • Pay particular attention to whether the actionlog service is in the 2/2 state

      • After migration, the actionlog service will initialize data before starting. If the dataset is large, it may not start within the time specified by resources.livenessProbe.initialDelaySeconds and may keep restarting.

      • If it keeps restarting, you can temporarily increase the value of resources.livenessProbe.initialDelaySeconds for the actionlog service to allow initial setup. Once complete, the pod will enter the 2/2 status.
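      • You can monitor the actionlog pod until it reaches the 2/2 state, for example:

        watch "kubectl get pod | grep actionlog"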

  4. After microservices are started, enter the config container

    kubectl exec -it $(kubectl get pod | grep config | awk '{print $1}') -- bash
  5. In the config container, refresh the mongodb indexes

    source /entrypoint.sh && mongodbUpdateIndex
  6. In the config container, refresh the elasticsearch indexes

    source /entrypoint.sh && resetCollaborationIndex

At this point, the data migration is complete 👏. Next, log in to the system using the new environment’s access address to verify.