• GBase 8a Node Replacement


    Node replacement covers replacing gcware, gcluster, and gnode nodes. The new node can be an existing free node (the node's IP changes after the replacement) or a brand-new machine that has never had GBase installed (the node's IP stays the same). This post covers replacing a gnode with a brand-new machine; the other cases may be filled in later, time permitting.

    Original cluster   192.168.61.1  [8a-1] (gcware + gcluster + gnode)

                       192.168.61.2  [8a-2] (gnode)  <- node to be replaced

    New node           192.168.61.3  [8a-3]

    Contents

    Original state

    Set the node to be replaced to unavailable

    Delete the feventlog of the node to be replaced

    Shut down or disconnect the node to be replaced

    Generate a new distribution

    Unplug the network cable of the old cluster node (192.168.61.2) or power it off (skip if already done), and change the new machine's (192.168.61.3) IP to 192.168.61.2

    Configure ssh for the new node

    Perform the node replacement

    Redistribute the data again, onto the new node

    Remove the old distribution

    Problem log


    Original state

    [gbase@8a-1 gcinstall]$ gcadmin
    CLUSTER STATE: ACTIVE
    ====================================
    | GBASE GCWARE CLUSTER INFORMATION |
    ====================================
    | NodeName | IpAddress | gcware |
    ------------------------------------
    | gcware1 | 192.168.61.1 | OPEN |
    ------------------------------------
    ======================================================
    | GBASE COORDINATOR CLUSTER INFORMATION |
    ======================================================
    | NodeName | IpAddress | gcluster | DataState |
    ------------------------------------------------------
    | coordinator1 | 192.168.61.1 | OPEN | 0 |
    ------------------------------------------------------
    =============================================================
    | GBASE CLUSTER FREE DATA NODE INFORMATION |
    =============================================================
    | NodeName | IpAddress | gnode | syncserver | DataState |
    -------------------------------------------------------------
    | FreeNode1 | 192.168.61.2 | OPEN | OPEN | 0 |
    -------------------------------------------------------------
    | FreeNode2 | 192.168.61.1 | OPEN | OPEN | 0 |
    -------------------------------------------------------------
    0 virtual cluster
    1 coordinator node
    2 free data node

    Set the node to be replaced to unavailable

    Once a node's state is set to unavailable, it no longer receives SQL dispatched by gcluster.

    [gbase@8a-1 gcinstall]$ gcadmin setnodestate 192.168.61.2 unavailable
    [gbase@8a-1 gcinstall]$ gcadmin
    CLUSTER STATE: ACTIVE
    VIRTUAL CLUSTER MODE: NORMAL
    ====================================
    | GBASE GCWARE CLUSTER INFORMATION |
    ====================================
    | NodeName | IpAddress | gcware |
    ------------------------------------
    | gcware1 | 192.168.61.1 | OPEN |
    ------------------------------------
    ======================================================
    | GBASE COORDINATOR CLUSTER INFORMATION |
    ======================================================
    | NodeName | IpAddress | gcluster | DataState |
    ------------------------------------------------------
    | coordinator1 | 192.168.61.1 | OPEN | 0 |
    ------------------------------------------------------
    ===============================================================================================================
    | GBASE DATA CLUSTER INFORMATION |
    ===============================================================================================================
    | NodeName | IpAddress | DistributionId | gnode | syncserver | DataState |
    ---------------------------------------------------------------------------------------------------------------
    | node1 | 192.168.61.2 | 1 | UNAVAILABLE | | |
    ---------------------------------------------------------------------------------------------------------------
    | node2 | 192.168.61.1 | 1 | OPEN | OPEN | 0 |
    ---------------------------------------------------------------------------------------------------------------

    Delete the feventlog of the node to be replaced

    A feventlog is generated when the cluster's nodes fall out of sync, and the feventlog is also the yardstick for judging whether the nodes are consistent. Setting a node to unavailable makes it inconsistent with the rest of the cluster, so its feventlog should be deleted by hand. The replacement script replace.py deletes the feventlog again later anyway, but deleting it manually first makes that second deletion nearly instant, so the replacement is not slowed down by a large feventlog and the cluster keeps working normally throughout. If you skip the manual deletion, the replacement still deletes the feventlog automatically, it just may take much longer; even during that window, incoming SQL is not rejected but placed in a waiting state and resumes once the replacement finishes. This behavior is exactly why node replacement does not interrupt the business.

    [gbase@8a-1 gcinstall]$ gcadmin rmfeventlog 192.168.61.2

    Shut down or disconnect the node to be replaced

    # on node 192.168.61.2
    [root@8a-2 ~]# shutdown -h now
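
    If you prefer to cut the network rather than power the machine off (either works, per the section title), a standard CentOS option is to take the NIC down; a sketch, assuming the interface is ens33 as elsewhere in this post:

    # on node 192.168.61.2: disconnect the interface instead of shutting down
    [root@8a-2 ~]# nmcli device disconnect ens33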

    Generate a new distribution

    Since 192.168.61.2 is about to be replaced, its data has to be moved off first. The way to do that is to generate a new distribution plan and then redistribute the data according to it. For example, the original cluster's plan spans the two nodes 192.168.61.1 and 192.168.61.2; because 192.168.61.2 is being replaced, the new plan must not include it.

    The original cluster's distribution (Distribution ID = 1):

    [gbase@8a-1 gcinstall]$ gcadmin showdistribution node
    Distribution ID: 1 | State: new | Total segment num: 2
    ============================================================================================
    | nodes | 192.168.61.2 | 192.168.61.1 |
    --------------------------------------------------------------------------------------------
    | primary | 1 | 2 |
    | segments | | |
    --------------------------------------------------------------------------------------------
    |duplicate | 2 | 1 |
    |segments 1| | |
    ============================================================================================

    Export the original distribution:

    [gbase@8a-1 gcinstall]$ gcadmin getdistribution 1 distribution_info_1.xml
    [gbase@8a-1 gcinstall]$ cat distribution_info_1.xml
    <?xml version='1.0' encoding="utf-8"?>
    <distributions>
        <distribution>
            <segments>
                <segment>
                    <primarynode ip="192.168.61.2"/>        <!-- the node being replaced holds this primary segment -->
                    <duplicatenodes>
                        <duplicatenode ip="192.168.61.1"/>
                    </duplicatenodes>
                </segment>
                <segment>
                    <primarynode ip="192.168.61.1"/>
                    <duplicatenodes>
                        <duplicatenode ip="192.168.61.2"/>  <!-- the node being replaced holds this duplicate segment -->
                    </duplicatenodes>
                </segment>
            </segments>
        </distribution>
    </distributions>

    Edit the exported file into the new distribution:

    [gbase@8a-1 gcinstall]$ vi distribution_info_1.xml
    <?xml version='1.0' encoding="utf-8"?>
    <distributions>
        <distribution>
            <segments>
                <segment>
                    <!-- If the node being replaced holds the primary segment, change the IP from the
                         replaced node to the IP of that segment's duplicate node. This segment's
                         duplicate is on 192.168.61.1, so the primary here becomes 192.168.61.1. -->
                    <primarynode ip="192.168.61.1"/>
                    <!-- Since the primary is now 192.168.61.1, the duplicate entry for this segment
                         is simply deleted. -->
                </segment>
                <segment>
                    <primarynode ip="192.168.61.1"/>
                    <!-- If the node being replaced holds only a duplicate segment, delete those
                         duplicatenode lines outright. -->
                </segment>
            </segments>
        </distribution>
    </distributions>

    gcChangeInfo_dis1.xml just references the edited distribution file; its usual form is:

    [gbase@8a-1 gcinstall]$ vi gcChangeInfo_dis1.xml
    <?xml version='1.0' encoding="utf-8"?>
    <servers>
        <distribution_info file_name="distribution_info_1.xml"/>
    </servers>

    Generate the new distribution:

    [gbase@8a-1 gcinstall]$ gcadmin distribution gcChangeInfo_dis1.xml

    View the new distribution:

    There is now a Distribution ID = 2, and this plan does not include the node being replaced:

    [gbase@8a-1 gcinstall]$ gcadmin showdistribution node
    Distribution ID: 2 | State: new | Total segment num: 2
    ====================================================
    | nodes | 192.168.61.1 |
    ----------------------------------------------------
    | primary | 1 |
    | segments | 2 |
    ====================================================
    Distribution ID: 1 | State: old | Total segment num: 2
    ============================================================================================
    | nodes | 192.168.61.2 | 192.168.61.1 |
    --------------------------------------------------------------------------------------------
    | primary | 1 | 2 |
    | segments | | |
    --------------------------------------------------------------------------------------------
    |duplicate | 2 | 1 |
    |segments 1| | |
    ============================================================================================

    Use Distribution ID 2 to redistribute the data without assigning any of it to the node being replaced.

    Before the redistribution, the data is laid out as follows (everything still uses data_distribution_id = 1):

    gbase> select index_name,tbname,data_distribution_id,vc_id from gbase.table_distribution;
    +-------------------------------+--------------------+----------------------+---------+
    | index_name | tbname | data_distribution_id | vc_id |
    +-------------------------------+--------------------+----------------------+---------+
    | gclusterdb.rebalancing_status | rebalancing_status | 1 | vc00001 |
    | gclusterdb.dual | dual | 1 | vc00001 |
    | testdb.t1 | t1 | 1 | vc00001 |
    +-------------------------------+--------------------+----------------------+---------+
    3 rows in set (Elapsed: 00:00:00.00)

    After the redistribution, the layout changes to data_distribution_id = 2:

    gbase> initnodedatamap;
    Query OK, 0 rows affected, 3 warnings (Elapsed: 00:00:00.11)
    gbase> rebalance instance to 2;   -- "to 2" selects the plan with Distribution ID 2
    Query OK, 1 row affected (Elapsed: 00:00:00.03)
    gbase> select index_name,status,percentage,priority,host,distribution_id from gclusterdb.rebalancing_status;
    +------------+-----------+------------+----------+--------------+-----------------+
    | index_name | status | percentage | priority | host | distribution_id |
    +------------+-----------+------------+----------+--------------+-----------------+
    | testdb.t1 | COMPLETED | 100 | 5 | 192.168.61.1 | 2 |
    +------------+-----------+------------+----------+--------------+-----------------+
    1 row in set (Elapsed: 00:00:00.01)
    gbase> select index_name,tbname,data_distribution_id,vc_id from gbase.table_distribution;
    +-------------------------------+--------------------+----------------------+---------+
    | index_name | tbname | data_distribution_id | vc_id |
    +-------------------------------+--------------------+----------------------+---------+
    | gclusterdb.rebalancing_status | rebalancing_status | 2 | vc00001 |
    | gclusterdb.dual | dual | 2 | vc00001 |
    | test.t | t | 2 | vc00001 |
    +-------------------------------+--------------------+----------------------+---------+
    3 rows in set (Elapsed: 00:00:00.00)
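
    Whenever an old plan is about to be dropped, no table may still reference it. A quick check against the same catalog table queried above (a sketch; the count should come back 0):

    gbase> select count(*) from gbase.table_distribution where data_distribution_id = 1;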

    Unplug the network cable of the old cluster node (192.168.61.2) or power it off (skip if already done above), and change the new machine's (192.168.61.3) IP to 192.168.61.2

    [root@8a-3 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
    [root@8a-3 ~]# service network restart
    Restarting network (via systemctl): [ OK ]
    [root@8a-3 ~]# ip a
    2: ens33: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:c6:d3:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.61.2/24 brd 192.168.61.255 scope global noprefixroute ens33
    valid_lft forever preferred_lft forever
    inet6 fe80::61d5:26ac:7834:3674/64 scope link noprefixroute
    [root@8a-3 ~]# cd /opt/gbase/
    [root@8a-3 gbase]# ll   # initial state of the new node: the directory is empty
    total 0
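
    For reference, the ifcfg-ens33 edit only needs to change the address line; a minimal sketch assuming a typical static CentOS 7 config (the other fields stay as they were):

    # /etc/sysconfig/network-scripts/ifcfg-ens33 (fragment)
    BOOTPROTO=static
    IPADDR=192.168.61.2      # was 192.168.61.3
    PREFIX=24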

    When testing in a VM this step matters a lot. If you forget it, the cluster is unchanged before and after the "replacement"; only by shutting the old node down first and bringing the new one up does the replacement actually take effect. Otherwise you finish the whole procedure, take a look, and find nothing at all on the new node 192.168.61.3.

    Configure ssh for the new node

    The ssh host keys cached on 192.168.61.1 still belong to the old node and have to be refreshed for the new one.

    # refresh for the gbase user
    [gbase@8a-1 ~]$ su - gbase
    [gbase@8a-1 ~]$ vi /home/gbase/.ssh/known_hosts
    [gbase@8a-1 ~]$ ssh 192.168.61.2

    192.168.61.1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNZZ/4d+7gPdp9IA6JqTZ85mFjbVuPMJkHExKkPmmdMGoRjjAtnBcxw+noK3ozrzmQ19t7ThFZDS4R73tq8r64M=
    192.168.61.2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNU6SIPlrj10pbGzZPzB2Tr0Ssvo4Sj79Z/DjhCqsdSc969xOoj8P9sS+smIC2YW0cPWIU/1YdnV/IPtOiKx7PI=

    # refresh for the root user
    [gbase@8a-1 ~]$ su - root
    [root@8a-1 ~]# vi /root/.ssh/known_hosts
    [root@8a-1 ~]# ssh 192.168.61.2

    192.168.61.1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNZZ/4d+7gPdp9IA6JqTZ85mFjbVuPMJkHExKkPmmdMGoRjjAtnBcxw+noK3ozrzmQ19t7ThFZDS4R73tq8r64M=
    192.168.61.2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNU6SIPlrj10pbGzZPzB2Tr0Ssvo4Sj79Z/DjhCqsdSc969xOoj8P9sS+smIC2YW0cPWIU/1YdnV/IPtOiKx7PI=
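
    Instead of editing known_hosts by hand, stock OpenSSH can drop the stale entry for you; a sketch (not from the original post):

    # remove the cached host key for 192.168.61.2, then reconnect once to cache the new one
    [gbase@8a-1 ~]$ ssh-keygen -R 192.168.61.2
    [gbase@8a-1 ~]$ ssh 192.168.61.2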

    Perform the node replacement

    [root@8a-1 ~]# su - gbase
    [gbase@8a-1 ~]$ cd /opt/gcinstall/
    # --host is the IP of the node being replaced
    [gbase@8a-1 gcinstall]$ ./replace.py --host=192.168.61.2 --type=data --dbaUser=gbase --dbaUserPwd=gbase --generalDBUser=gbase --generalDBPwd=gbase20110531 --overwrite

    Check the distribution again (after the replacement there is a Distribution ID = 3):

    [gbase@8a-1 gcinstall]$ gcadmin showdistribution
    Distribution ID: 3 | State: new | Total segment num: 2
    Primary Segment Node IP Segment ID Duplicate Segment node IP
    ========================================================================================================================
    | 192.168.61.2 | 1 | 192.168.61.1 |
    ------------------------------------------------------------------------------------------------------------------------
    | 192.168.61.1 | 2 | 192.168.61.2 |
    ========================================================================================================================
    Distribution ID: 2 | State: old | Total segment num: 2
    Primary Segment Node IP Segment ID Duplicate Segment node IP
    ========================================================================================================================
    | 192.168.61.1 | 1 | |
    ------------------------------------------------------------------------------------------------------------------------
    | 192.168.61.1 | 2 | |
    ========================================================================================================================
    [gbase@8a-1 gcinstall]$ gcadmin
    CLUSTER STATE: ACTIVE
    VIRTUAL CLUSTER MODE: NORMAL
    ====================================
    | GBASE GCWARE CLUSTER INFORMATION |
    ====================================
    | NodeName | IpAddress | gcware |
    ------------------------------------
    | gcware1 | 192.168.61.1 | OPEN |
    ------------------------------------
    ======================================================
    | GBASE COORDINATOR CLUSTER INFORMATION |
    ======================================================
    | NodeName | IpAddress | gcluster | DataState |
    ------------------------------------------------------
    | coordinator1 | 192.168.61.1 | OPEN | 0 |
    ------------------------------------------------------
    =========================================================================================================
    | GBASE DATA CLUSTER INFORMATION |
    =========================================================================================================
    | NodeName | IpAddress | DistributionId | gnode | syncserver | DataState |
    ---------------------------------------------------------------------------------------------------------
    | node1 | 192.168.61.2 | 3 | OPEN | OPEN | 0 |
    ---------------------------------------------------------------------------------------------------------
    | node2 | 192.168.61.1 | 2,3 | OPEN | OPEN | 0 |
    ---------------------------------------------------------------------------------------------------------

    Note: once replace.py finishes, GBase is already installed on the new node. You can verify this on the new node: before the installation /opt/gbase was empty, and afterwards the installation files have appeared there.

    # on the new 192.168.61.2 node
    # before replace.py
    [root@8a-3 ~]# cd /opt/gbase/
    [root@8a-3 gbase]# ll
    total 0
    # after replace.py
    [root@8a-3 gbase]# ll
    total 0
    drwxrwxr-x 2 gbase gbase 6 Dec 1 23:34 192.168.61.2

    Redistribute the data again, onto the new node

    [gbase@8a-1 gcinstall]$ gccli -uroot -p
    Enter password:
    GBase client 9.5.3.27.14_patch.1b41b5c1. Copyright (c) 2004-2022, GBase. All Rights Reserved.
    gbase> rebalance instance to 3;
    Query OK, 1 row affected (Elapsed: 00:00:00.02)
    gbase> select * from gclusterdb.rebalancing_status;
    +------------+---------+------------+----------+----------------------------+----------+---------+------------+----------+--------------+-----------------+
    | index_name | db_name | table_name | tmptable | start_time | end_time | status | percentage | priority | host | distribution_id |
    +------------+---------+------------+----------+----------------------------+----------+---------+------------+----------+--------------+-----------------+
    | test.t | test | t | | 2022-11-29 00:08:30.294000 | NULL | RUNNING | 0 | 5 | 192.168.61.1 | 3 |
    +------------+---------+------------+----------+----------------------------+----------+---------+------------+----------+--------------+-----------------+
    1 row in set (Elapsed: 00:00:00.00)
    gbase> select * from gclusterdb.rebalancing_status;
    +------------+---------+------------+----------+----------------------------+----------------------------+-----------+------------+----------+--------------+-----------------+
    | index_name | db_name | table_name | tmptable | start_time | end_time | status | percentage | priority | host | distribution_id |
    +------------+---------+------------+----------+----------------------------+----------------------------+-----------+------------+----------+--------------+-----------------+
    | test.t1 | test | t1 | | 2023-04-05 15:11:14.360000 | 2023-04-05 15:11:16.033000 | COMPLETED | 100 | 1 | 192.168.61.3 | 3 |
    +------------+---------+------------+----------+----------------------------+----------------------------+-----------+------------+----------+--------------+-----------------+

    Remove the old distribution

    [gbase@8a-1 gcinstall]$ gcadmin rmdistribution 2
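
    A quick way to confirm the removal took effect is to list the distributions again; only Distribution ID 3 should remain:

    [gbase@8a-1 gcinstall]$ gcadmin showdistribution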

    Problem log

    1. replace.py: the database on the new node would not start, complaining about ulimit: open files. Comparing limits with the other nodes showed that the open files value differed and had to be changed:

    [root@bogon ~]# ulimit -a          # inspect the current limits
    [root@bogon ~]# ulimit -n 655360   # raise the open-files limit for this shell
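
    ulimit -n only affects the current shell. To make the limit survive logins and reboots, the standard Linux mechanism (not shown in the original post) is /etc/security/limits.conf; a sketch:

    # /etc/security/limits.conf -- persist the open-files limit for the gbase user
    gbase soft nofile 655360
    gbase hard nofile 655360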

    2. If replace.py fails to start the services on the new node, you can log in to the new node and run gcluster_services all info / gcluster_services all start by hand.

    3. If a rebalance gets stuck, check whether the database service on the new node is healthy, and restart it if it has stopped.

    You can also tune the rebalance priority and degree of parallelism:

    gbase> update gclusterdb.rebalancing_status set priority = 1 where index_name='test.t1'; -- lower priority values run first
    gbase> show variables like 'gcluster_rebalancing_parallel_degree';
    +--------------------------------------+-------+
    | Variable_name | Value |
    +--------------------------------------+-------+
    | gcluster_rebalancing_parallel_degree | 4 |
    +--------------------------------------+-------+
    gbase> set global gcluster_rebalancing_parallel_degree=8;
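
    After adjusting priorities, the same status table can be sorted to see the order in which pending rebalances will run; a sketch:

    gbase> select index_name, priority, status, percentage
        -> from gclusterdb.rebalancing_status
        -> order by priority, index_name;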

    One rebalance stayed in the RUNNING state; the logs showed that a table space did not exist.

    (The fix that follows turned out to be wrong: don't copy it! I am keeping it anyway, since it is still one way of reasoning through the problem.)

    The old distribution could not be removed; it reported that it was still in use:

    [gbase@8a-1 gcinstall]$ gcadmin rmdistribution 2
    cluster distribution ID [2]
    it will be removed now
    please ensure this is ok, input [Y,y] or [N,n]: y
    select count(*) from gbase.nodedatamap where data_distribution_id=2 result is not 0
    refreshnodedatamap drop 2 failed, gcluster command error: Can not drop nodedatamap 2. Some table are using it.
    gcadmin remove distribution: check whether distribution [2] is using failed
    gcadmin remove distribution [2] failed
    gbase> select index_name,tbname,data_distribution_id,vc_id from gbase.table_distribution;
    +-------------------------------+--------------------+----------------------+---------+
    | index_name | tbname | data_distribution_id | vc_id |
    +-------------------------------+--------------------+----------------------+---------+
    | gclusterdb.rebalancing_status | rebalancing_status | 3 | vc00001 |
    | test.t | t | 2 | vc00001 |
    | gclusterdb.dual | dual | 3 | vc00001 |
    +-------------------------------+--------------------+----------------------+---------+
    3 rows in set (Elapsed: 00:00:00.00)
    gbase> rebalance table test.t to 3;
    ERROR 1707 (HY000): gcluster command error: target table has been rebalancing.

    Checking the redistribution status showed it stuck in RUNNING:

    gbase> select index_name,status,percentage,priority,host,distribution_id from gclusterdb.rebalancing_status;
    +------------+---------+------------+----------+--------------+-----------------+
    | index_name | status | percentage | priority | host | distribution_id |
    +------------+---------+------------+----------+--------------+-----------------+
    | test.t | RUNNING | 0 | 1 | 192.168.61.1 | 3 |
    +------------+---------+------------+----------+--------------+-----------------+

    To see whether something was locked, I checked the lock info; sure enough, it was:

    [gbase@8a-1 gcinstall]$ gcadmin showlock
    +=====================================================================================+
    | GCLUSTER LOCK |
    +=====================================================================================+
    +-----------------------------+------------+---------------+--------------+------+----+
    | Lock name | owner | content | create time |locked|type|
    +-----------------------------+------------+---------------+--------------+------+----+
    | gc-event-lock |192.168.61.1| global master |20221129002719| TRUE | E |
    +-----------------------------+------------+---------------+--------------+------+----+
    | vc00001.hashmap_lock |192.168.61.1|3388(LWP:41885)|20221129004555| TRUE | S |
    +-----------------------------+------------+---------------+--------------+------+----+
    | vc00001.test.db_lock |192.168.61.1|3388(LWP:41885)|20221129004555| TRUE | S |
    +-----------------------------+------------+---------------+--------------+------+----+
    | vc00001.test.t.meta_lock |192.168.61.1|3388(LWP:41885)|20221129004555| TRUE | S |
    +-----------------------------+------------+---------------+--------------+------+----+
    |vc00001.test.t.rebalance_lock|192.168.61.1|3388(LWP:41885)|20221129004555| TRUE | E |
    +-----------------------------+------------+---------------+--------------+------+----+
    |vc00001.test.table_space_lock|192.168.61.1|3388(LWP:41885)|20221129004555| TRUE | S |
    +-----------------------------+------------+---------------+--------------+------+----+
    Total : 6

    The session holding the locks is process ID 3388:

    gbase> show processlist;
    +------+-----------------+--------------------+------+---------+------+-----------------------------+-------------------------------------------------------------------+
    | Id | User | Host | db | Command | Time | State | Info |
    +------+-----------------+--------------------+------+---------+------+-----------------------------+-------------------------------------------------------------------+
    | 1 | event_scheduler | localhost | NULL | Daemon | 1780 | Waiting for next activation | NULL |
    | 3388 | gbase | 192.168.61.1:36000 | NULL | Query | 667 | Move table's slice | REBALANCE /*+ synchronous,vcid */ TABLE "vc00001"."test"."t" TO 3 |
    | 3391 | root | localhost | NULL | Query | 0 | NULL | show processlist |
    +------+-----------------+--------------------+------+---------+------+-----------------------------+-------------------------------------------------------------------+
    3 rows in set (Elapsed: 00:00:00.00)

    Check the lock state held by that process:

    [gbase@8a-1 gcinstall]$ gcadmin showlock | grep 3388
    | vc00001.hashmap_lock |192.168.61.1|3388(LWP:41885)|20221129004555| TRUE | S |
    | vc00001.test.db_lock |192.168.61.1|3388(LWP:41885)|20221129004555| TRUE | S |
    | vc00001.test.t.meta_lock |192.168.61.1|3388(LWP:41885)|20221129004555| TRUE | S |
    |vc00001.test.t.rebalance_lock|192.168.61.1|3388(LWP:41885)|20221129004555| TRUE | E |
    |vc00001.test.table_space_lock|192.168.61.1|3388(LWP:41885)|20221129004555| TRUE | S |

    From here it is just a matter of finding the session that holds the lock without releasing it, and killing it.
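
    In this case the holder was session 3388, as seen in show processlist above. gccli speaks the MySQL protocol, so (assuming you have confirmed the session really is stuck) a MySQL-style kill should clear it; a sketch:

    gbase> kill 3388;   -- terminate the stuck REBALANCE session found above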

    Sometimes the nodedatamap has to be dropped first:

    gbase> refreshnodedatamap drop 2;

    See also: "GBase 8a: refreshnodedatamap drop fails after expansion: Can not drop nodedatamap, Some table are using it." – 老紫竹的家

  • Original article: https://blog.csdn.net/qq_34479012/article/details/128087343