MySQL InnoDB Cluster Deployment


    Installation

    Download the Yum repository

    wget https://dev.mysql.com/get/mysql80-community-release-el7-6.noarch.rpm
    

    Install the release package

    yum install -y mysql80-community-release-el7-6.noarch.rpm
    

    Import the GPG key

    rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2022
    

    Install MySQL

    sudo yum install -y mysql-community-server
    

    Start MySQL

    systemctl start mysqld
    

    Retrieve the temporary root password

    grep 'temporary password' /var/log/mysqld.log
    

    Log in and change the password

    mysql -uroot -p
    
    ALTER USER 'root'@'localhost' IDENTIFIED BY '12345678';
    
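    Note that MySQL 8.0 enables the validate_password component by default, so a simple password such as 12345678 is normally rejected at MEDIUM policy. A minimal sketch for a disposable lab environment, first setting a compliant password and then relaxing the policy (the passwords here are illustrative; do not relax the policy in production):

    # Change the expired temporary password, then (lab only) lower the policy
    # so the simple passwords used throughout this walkthrough are accepted.
    mysql -uroot -p --connect-expired-password <<'SQL'
    ALTER USER 'root'@'localhost' IDENTIFIED BY 'Temp#Passw0rd1!';
    SET GLOBAL validate_password.policy = 'LOW';
    SET GLOBAL validate_password.length = 8;
    ALTER USER 'root'@'localhost' IDENTIFIED BY '12345678';
    SQL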

    Install MySQL Shell

    sudo yum install -y mysql-shell
    
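    To confirm the installation and open the JS session used in the following steps (the 192.168.2.201 address comes from this walkthrough's environment):

    mysqlsh --version
    mysqlsh root@192.168.2.201:3306
    # the prompt then becomes:  MySQL  192.168.2.201:3306 ssl  JS >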

    InnoDB Cluster Account Configuration

    Configure the server account

    mysql> create user root@'%' identified by '12345678';
    mysql> grant all privileges on *.* to 'root'@'%';
    mysql> flush privileges;
    

    Grant cluster administration privileges

    GRANT CLONE_ADMIN, CONNECTION_ADMIN, CREATE USER, EXECUTE, FILE, GROUP_REPLICATION_ADMIN, PERSIST_RO_VARIABLES_ADMIN, PROCESS, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, REPLICATION_APPLIER, REPLICATION_SLAVE_ADMIN, ROLE_ADMIN, SELECT, SHUTDOWN, SYSTEM_VARIABLES_ADMIN ON *.* TO 'root'@'%' WITH GRANT OPTION;
    GRANT DELETE, INSERT, UPDATE ON mysql.* TO 'root'@'%' WITH GRANT OPTION;
    GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata.* TO 'root'@'%' WITH GRANT OPTION;
    GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_bkp.* TO 'root'@'%' WITH GRANT OPTION;
    GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_previous.* TO 'root'@'%' WITH GRANT OPTION;
    
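    As an alternative to issuing this grant list by hand, AdminAPI can create an equivalent administration account while configuring an instance. A hedged sketch using the clusterAdmin option of dba.configureInstance() (the account name icadmin is an example; mysqlsh prompts for the target instance's root password):

    mysqlsh --js -e "dba.configureInstance('root@192.168.2.201:3306', {clusterAdmin: \"icadmin@'%'\"})"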

    Cluster administrator account

    After the cluster has been created, create a cluster administrator account with the setupAdminAccount() function:

     MySQL  192.168.2.201:3306 ssl  mysql  JS > cluster.setupAdminAccount('clusterAdmin');
    
    Missing the password for new account clusterAdmin@%. Please provide one.
    Password for new account: ********
    Confirm password: ********
    
    Creating user clusterAdmin@%.
    Account clusterAdmin@% was successfully created.
    
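    Subsequent management can then be done with the dedicated account instead of root, for example:

    mysqlsh clusterAdmin@192.168.2.201:3306
    # then, in the JS session:
    #   var cluster = dba.getCluster('testCluster')
    #   cluster.status()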

    Router account

     MySQL  192.168.2.201:3306 ssl  mysql  JS > cluster.setupRouterAccount('routerUser1')
    
    Missing the password for new account routerUser1@%. Please provide one.
    Password for new account: ********
    Confirm password: ********
    
    Creating user routerUser1@%.
    Account routerUser1@% was successfully created.
    
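    Both helper accounts can be verified on the primary with a quick query:

    mysql -uroot -p -h192.168.2.201 \
      -e "SELECT user, host FROM mysql.user WHERE user IN ('clusterAdmin', 'routerUser1');"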

    Create the InnoDB Cluster

    Check the instance

    dba.checkInstanceConfiguration('root@192.168.2.201:3306')
    
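    The same check should pass on every prospective member. A small loop over all three nodes, assuming node2 and node3 live at 192.168.2.202 and 192.168.2.203 (node3's address is inferred from the hostnames later in this article; mysqlsh prompts for each instance's root password in turn):

    for h in 192.168.2.201 192.168.2.202 192.168.2.203; do
        mysqlsh --js -e "dba.checkInstanceConfiguration('root@${h}:3306')"
    done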

    Configure the instance

     MySQL  localhost:33060+ ssl  JS > dba.configureInstance('root@192.168.2.201:3306');
    Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...
    
    This instance reports its own address as node1:3306
    Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
    
    applierWorkerThreads will be set to the default value of 4.
    
    The instance 'node1:3306' is valid to be used in an InnoDB cluster.
    The instance 'node1:3306' is already ready to be used in an InnoDB cluster.
    
    Successfully enabled parallel appliers.
    
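    node2 and node3 need the same treatment before they can join; the loop below simply mirrors the command above (node3's address again assumed):

    for h in 192.168.2.202 192.168.2.203; do
        mysqlsh --js -e "dba.configureInstance('root@${h}:3306')"
    done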

    Create the cluster

     MySQL  192.168.2.201:3306 ssl  mysql  JS > var cluster = dba.createCluster('testCluster')
    A new InnoDB Cluster will be created on instance 'node1:3306'.
    
    Validating instance configuration at 192.168.2.201:3306...
    
    This instance reports its own address as node1:3306
    
    Instance configuration is suitable.
    NOTE: Group Replication will communicate with other members using 'node1:3306'. Use the localAddress option to override.
    
    Creating InnoDB Cluster 'testCluster' on 'node1:3306'...
    
    Adding Seed Instance...
    Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
    At least 3 instances are needed for the cluster to be able to withstand up to
    one server failure.
    

    Check the cluster status

    
     MySQL  192.168.2.201:3306 ssl  mysql  JS > cluster.status()
    {
        "clusterName": "testCluster", 
        "defaultReplicaSet": {
            "name": "default", 
            "primary": "node1:3306", 
            "ssl": "REQUIRED", 
            "status": "OK_NO_TOLERANCE", 
            "statusText": "Cluster is NOT tolerant to any failures.", 
            "topology": {
                "node1:3306": {
                    "address": "node1:3306", 
                    "memberRole": "PRIMARY", 
                    "mode": "R/W", 
                    "readReplicas": {}, 
                    "replicationLag": "applier_queue_applied", 
                    "role": "HA", 
                    "status": "ONLINE", 
                    "version": "8.0.30"
                }
            }, 
            "topologyMode": "Single-Primary"
        }, 
        "groupInformationSourceMember": "node1:3306"
    }
    

    Add instances

     MySQL  192.168.2.201:3306 ssl  mysql  JS > cluster.addInstance('root@192.168.2.202:3306')
    
    NOTE: The target instance 'node2:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
    The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'node2:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
    
    The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.
    
    
    Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
    Validating instance configuration at 192.168.2.202:3306...
    
    This instance reports its own address as node2:3306
    
    Instance configuration is suitable.
    NOTE: Group Replication will communicate with other members using 'node2:3306'. Use the localAddress option to override.
    
    A new instance will be added to the InnoDB cluster. Depending on the amount of
    data on the cluster this might take from a few seconds to several hours.
    
    Adding instance to the cluster...
    
    Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
    Clone based state recovery is now in progress.
    
    NOTE: A server restart is expected to happen as part of the clone process. If the
    server does not support the RESTART command or does not come back after a
    while, you may need to manually start it back.
    
    * Waiting for clone to finish...
    NOTE: node2:3306 is being cloned from node1:3306
    ** Stage DROP DATA: Completed
    ** Clone Transfer  
        FILE COPY  ############################################################  100%  Completed
        PAGE COPY  ############################################################  100%  Completed
        REDO COPY  ############################################################  100%  Completed
    
    NOTE: node2:3306 is shutting down...
    
    * Waiting for server restart... ready 
    * node2:3306 has restarted, waiting for clone to finish...
    ** Stage RESTART: Completed
    * Clone process has finished: 72.61 MB transferred in 3 sec (24.20 MB/s)
    
    State recovery already finished for 'node2:3306'
    
    The instance 'node2:3306' was successfully added to the cluster.
    
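    node3 joins the same way. Its address is not shown in the original output, so 192.168.2.203 below is an assumption; passing recoveryMethod: 'clone' skips the interactive recovery prompt:

    mysqlsh root@192.168.2.201:3306 --js \
      -e "dba.getCluster().addInstance('root@192.168.2.203:3306', {recoveryMethod: 'clone'})"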

    Configure MySQL Router

    [root@node1 mysql8.0]# mysqlrouter --bootstrap root@192.168.2.201:3306 --account=mysqlrouter --user=root
    Please enter MySQL password for root: 
    # Bootstrapping system MySQL Router instance...
    
    Please enter MySQL password for mysqlrouter: 
    - Creating account(s) (only those that are needed, if any)
    - Verifying account (using it to run SQL queries that would be run by Router)
    - Storing account in keyring
    - Adjusting permissions of generated files
    - Creating configuration /etc/mysqlrouter/mysqlrouter.conf
    
    Existing configuration backed up to '/etc/mysqlrouter/mysqlrouter.conf.bak'
    
    # MySQL Router configured for the InnoDB Cluster 'testCluster'
    
    After this MySQL Router has been started with the generated configuration
    
        $ /etc/init.d/mysqlrouter restart
    or
        $ systemctl start mysqlrouter
    or
        $ mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf
    
    InnoDB Cluster 'testCluster' can be reached by connecting to:
    
    ## MySQL Classic protocol
    
    - Read/Write Connections: localhost:6446
    - Read/Only Connections:  localhost:6447
    
    ## MySQL X protocol
    
    - Read/Write Connections: localhost:6448
    - Read/Only Connections:  localhost:6449
    

    Start MySQL Router

    mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf
    
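    A quick way to confirm the read/write split: port 6446 should always land on the primary and 6447 on a secondary:

    mysql -uroot -p -h127.0.0.1 -P6446 -e "SELECT @@hostname, @@port;"   # primary (node1)
    mysql -uroot -p -h127.0.0.1 -P6447 -e "SELECT @@hostname, @@port;"   # one of the secondaries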

    Failover test

    Stop the node1 instance

    systemctl stop mysqld
    

    Check the node2 log

    2022-09-03T05:36:49.419627Z 0 [Warning] [MY-011499] [Repl] Plugin group_replication reported: 'Members removed from the group: node1:3306'
    2022-09-03T05:36:49.419648Z 0 [System] [MY-011500] [Repl] Plugin group_replication reported: 'Primary server with address node1:3306 left the group. Electing new Primary.'
    2022-09-03T05:36:49.420017Z 0 [System] [MY-011507] [Repl] Plugin group_replication reported: 'A new primary with address node2:3306 was elected. The new primary will execute all previous group transactions before allowing writes.'
    2022-09-03T05:36:49.420363Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to node2:3306, node3:3306 on view 16620968151373714:14.'
    2022-09-03T05:36:49.422096Z 19 [System] [MY-013731] [Repl] Plugin group_replication reported: 'The member action "mysql_disable_super_read_only_if_primary" for event "AFTER_PRIMARY_ELECTION" with priority "1" will be run.'
    2022-09-03T05:36:49.422301Z 19 [System] [MY-011566] [Repl] Plugin group_replication reported: 'Setting super_read_only=OFF.'
    2022-09-03T05:36:49.422341Z 19 [System] [MY-013731] [Repl] Plugin group_replication reported: 'The member action "mysql_start_failover_channels_if_primary" for event "AFTER_PRIMARY_ELECTION" with priority "10" will be run.'
    2022-09-03T05:36:49.423681Z 7439 [System] [MY-011510] [Repl] Plugin group_replication reported: 'This server is working as primary member.'
    

    Check the cluster status

     MySQL  node2:33060+ ssl  JS > cluster.status()
    {
        "clusterName": "testCluster", 
        "defaultReplicaSet": {
            "name": "default", 
            "primary": "node2:3306", 
            "ssl": "REQUIRED", 
            "status": "OK_NO_TOLERANCE_PARTIAL", 
            "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active.", 
            "topology": {
                "node1:3306": {
                    "address": "node1:3306", 
                    "memberRole": "SECONDARY", 
                    "mode": "n/a", 
                    "readReplicas": {}, 
                    "role": "HA", 
                    "shellConnectError": "MySQL Error 2003: Could not open connection to 'node1:3306': Can't connect to MySQL server on 'node1:3306' (111)", 
                    "status": "(MISSING)"
                }, 
                "node2:3306": {
                    "address": "node2:3306", 
                    "memberRole": "PRIMARY", 
                    "mode": "R/W", 
                    "readReplicas": {}, 
                    "replicationLag": "applier_queue_applied", 
                    "role": "HA", 
                    "status": "ONLINE", 
                    "version": "8.0.30"
                }, 
                "node3:3306": {
                    "address": "node3:3306", 
                    "memberRole": "SECONDARY", 
                    "mode": "R/O", 
                    "readReplicas": {}, 
                    "replicationLag": "applier_queue_applied", 
                    "role": "HA", 
                    "status": "ONLINE", 
                    "version": "8.0.30"
                }
            }, 
            "topologyMode": "Single-Primary"
        }, 
        "groupInformationSourceMember": "node2:3306"
    }
    

    At this point node1's state cannot be determined, and the overall cluster status is OK_NO_TOLERANCE_PARTIAL: the cluster can no longer tolerate any further failure.
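    Applications connected through MySQL Router need no reconfiguration; the read/write port simply follows the newly elected primary:

    mysql -uroot -p -h127.0.0.1 -P6446 -e "SELECT @@hostname;"   # now reports node2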

    Restart the node1 instance

    systemctl start mysqld
    

    Check the cluster status

     MySQL  node2:33060+ ssl  JS > cluster.status()
    {
        "clusterName": "testCluster", 
        "defaultReplicaSet": {
            "name": "default", 
            "primary": "node2:3306", 
            "ssl": "REQUIRED", 
            "status": "OK", 
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
            "topology": {
                "node1:3306": {
                    "address": "node1:3306", 
                    "memberRole": "SECONDARY", 
                    "mode": "R/O", 
                    "readReplicas": {}, 
                    "replicationLag": "applier_queue_applied", 
                    "role": "HA", 
                    "status": "ONLINE", 
                    "version": "8.0.30"
                }, 
                "node2:3306": {
                    "address": "node2:3306", 
                    "memberRole": "PRIMARY", 
                    "mode": "R/W", 
                    "readReplicas": {}, 
                    "replicationLag": "applier_queue_applied", 
                    "role": "HA", 
                    "status": "ONLINE", 
                    "version": "8.0.30"
                }, 
                "node3:3306": {
                    "address": "node3:3306", 
                    "memberRole": "SECONDARY", 
                    "mode": "R/O", 
                    "readReplicas": {}, 
                    "replicationLag": "applier_queue_applied", 
                    "role": "HA", 
                    "status": "ONLINE", 
                    "version": "8.0.30"
                }
            }, 
            "topologyMode": "Single-Primary"
        }, 
        "groupInformationSourceMember": "node2:3306"
    }
    

    The cluster status is back to OK, but node1 has rejoined the cluster as a read-only secondary.
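    If node1 should become the primary again, the switch can be made explicitly with setPrimaryInstance(), run from a JS session on any cluster member:

    mysqlsh clusterAdmin@node2:3306
    # then, in the JS session:
    #   var cluster = dba.getCluster()
    #   cluster.setPrimaryInstance('node1:3306')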

Source: https://blog.csdn.net/ldjjbzh626/article/details/126534568