MySQL: MHA High Availability Solution


1 The MHA High Availability Solution

1.1 MHA Overview

When MHA detects that the master node has failed, it promotes the slave holding the most recent data to be the new master. During this process, MHA fetches additional information from the other slaves to avoid consistency problems. MHA also provides online master switchover, i.e. switching the master/slave roles on demand.
      

1.2 The Two Roles in an MHA Deployment

MHA Manager (management node) and MHA Node (data node):

    MHA Manager:

Usually deployed on a dedicated machine, from which it manages and coordinates one or more master/slave clusters (groups); each master/slave cluster is called an application.

MHA Node:

Runs on every MySQL server (master and slaves). It speeds up failover with scripts that can parse and purge logs, and it acts as an agent that carries out the commands issued by the management node, which is why it must run on every MySQL node. In short, a node collects the binary log events generated on its server, compares them against what the slave chosen for promotion has already applied, and, if that slave is missing events, ships them over so they can be applied locally before the slave is promoted to master.

1.3 How MHA Works

MHA sits alongside an existing master/slave replication setup. When the master fails, MHA quickly saves the master's binary log (building on the existing replication) and applies the outstanding relay log events to the other slaves, so that a new master can be elected without data loss. The old master and the remaining slaves are then re-pointed to replicate from the newly elected master, restoring normal replication. (Note that once the old master recovers it does not become master again; it can only rejoin the cluster as a slave.)
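Conceptually, the election step compares how far each slave has applied the master's events. With GTIDs enabled (as in the configs below), the comparison MHA automates can be sketched with standard MySQL statements; the GTID values shown are purely illustrative, and the credentials are the `rep` account created later in this walkthrough:

```sql
-- On each candidate slave: how much of the master's history has been applied?
SELECT @@global.gtid_executed;
-- e.g. slave1: '3e11fa47-71ca-11e1-9e33-c80aa9429562:1-100'  (illustrative)
-- e.g. slave2: '3e11fa47-71ca-11e1-9e33-c80aa9429562:1-97'   (illustrative)

-- slave1 is the most advanced, so it is promoted; slave2 is then re-pointed:
CHANGE MASTER TO MASTER_HOST='192.168.177.131',
  MASTER_USER='rep', MASTER_PASSWORD='Mhn@1234',
  MASTER_AUTO_POSITION=1;
START SLAVE;
```

With `MASTER_AUTO_POSITION=1` the re-pointed slave negotiates its starting position from GTIDs, which is why MHA can skip binlog-position bookkeeping in GTID mode.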

2 Implementing MHA High Availability

2.1 Environment

manager    192.168.177.129
master     192.168.177.130
slave1     192.168.177.131
slave2     192.168.177.132

master replicates to slave1 and slave2; manager runs the MHA manager.

Configuration on master:

```
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
server_id=1
log-bin=mysql-bin
log-slave-updates=1
skip-slave-start=1
skip_name_resolve
gtid-mode = on
enforce-gtid-consistency = true
relay-log=relay-log

[mysql]
prompt = (\\u@\\h) [\d] >\\
no_auto_rehash
```

```
[root@master ~]# systemctl restart mysqld
```
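After the restart it is worth confirming that the key settings took effect (a quick sanity check, not part of the original walkthrough):

```sql
SHOW VARIABLES LIKE 'gtid_mode';                 -- expect ON
SHOW VARIABLES LIKE 'enforce_gtid_consistency';  -- expect ON
SHOW VARIABLES LIKE 'log_bin';                   -- expect ON
```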

Configuration on the slaves:

```
### slave1
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
server_id=2
log-bin=mysql-bin
log-slave-updates=1
skip_name_resolve
gtid-mode = on
enforce-gtid-consistency = true
relay-log=relay-log
read_only=ON
relay_log_purge = 0

[mysql]
prompt = (\\u@\\h) [\d] >\\
no_auto_rehash

### slave2
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
server_id=3
log-bin=mysql-bin
log-slave-updates=1
skip_name_resolve
gtid-mode = on
enforce-gtid-consistency = true
relay-log=relay-log
read_only=ON
relay_log_purge = 0

[mysql]
prompt = (\\u@\\h) [\d] >\\
no_auto_rehash
```

If replication does not resume after a reboot once master/slave replication is configured, run STOP SLAVE, then RESET SLAVE, then START SLAVE, and re-check the replication status.
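The recovery sequence just described looks like this when run in the mysql client on the affected slave (RESET SLAVE discards the old relay logs; since GTID auto-positioning is in use, the slave finds its place in the master's history again on restart):

```sql
STOP SLAVE;
RESET SLAVE;
START SLAVE;
SHOW SLAVE STATUS\G
-- Slave_IO_Running and Slave_SQL_Running should both show Yes
```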

2.2 Creating the Replication User

On master:

```
create user rep@'%' identified with mysql_native_password by 'Mhn@1234';
grant replication slave, replication client on *.* to rep@'%';
```
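To confirm the account came out as intended (an optional check, not in the original steps):

```sql
SHOW GRANTS FOR rep@'%';
-- should list REPLICATION SLAVE, REPLICATION CLIENT ON *.*
```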

2.3 Setting Up Passwordless SSH Between All Hosts

```
# use Xshell (or similar) to send these commands to several hosts at once
[root@master ~]# vim /etc/hosts
192.168.177.129 manager
192.168.177.130 master
192.168.177.131 slave1
192.168.177.132 slave2

# on master
[root@master ~]# ssh-keygen -f ~/.ssh/id_rsa -P '' -q
[root@master ~]# ssh-copy-id manager
# on slave1
[root@slave1 ~]# ssh-keygen -f ~/.ssh/id_rsa -P '' -q
[root@slave1 ~]# ssh-copy-id manager
# on slave2
[root@slave2 ~]# ssh-keygen -f ~/.ssh/id_rsa -P '' -q
[root@slave2 ~]# ssh-copy-id manager
# on manager
[root@manager ~]# ssh-keygen -f ~/.ssh/id_rsa -P '' -q
[root@manager ~]# ssh-copy-id master
[root@manager ~]# ssh-copy-id slave1
[root@manager ~]# ssh-copy-id slave2
[root@manager ~]# ssh-copy-id manager
[root@manager ~]# scp ~/.ssh/authorized_keys master:/root/.ssh/authorized_keys
authorized_keys                          100% 1573   3.7MB/s   00:00
[root@manager ~]# scp ~/.ssh/authorized_keys slave1:/root/.ssh/authorized_keys
authorized_keys                          100% 1573   1.3MB/s   00:00
[root@manager ~]# scp ~/.ssh/authorized_keys slave2:/root/.ssh/authorized_keys
authorized_keys                          100% 1573   2.4MB/s   00:00
```

All four hosts can now SSH to one another without a password.

2.4 Installing the MHA Packages

Now download the two MHA packages (manager and node).

The manager host needs both packages installed; master, slave1 and slave2 only need the node package.

```
[root@manager ~]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm master:/root/
mha4mysql-node-0.58-0.el7.centos.noarch.rpm   100%   35KB  11.8MB/s   00:00
[root@manager ~]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm slave1:/root/
mha4mysql-node-0.58-0.el7.centos.noarch.rpm   100%   35KB  17.3MB/s   00:00
[root@manager ~]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm slave2:/root/
mha4mysql-node-0.58-0.el7.centos.noarch.rpm   100%   35KB  18.9MB/s   00:00
```

On master, slave1 and slave2, install directly with `yum install mha*.rpm`.

The first two attempts fail with dependency errors (the error screenshots are omitted from the original post). Installing MySQL's icu-data-files package and libs-compat package resolves them, after which the node package installs successfully.

On manager, install with `yum install mha*.rpm -y`. This also errors out at first (screenshot omitted); the missing dependency is satisfied by running `yum install epel-release -y`.

With that, all the packages are installed.

2.5 Writing the MHA Configuration

Next, write the configuration file for the manager; the management user it references will then be created and granted privileges on master.

```
[root@manager ~]# mkdir /etc/mha
[root@manager ~]# mkdir -p /var/log/mha/app1
[root@manager ~]# vim /etc/mha/app1.cnf
[server default]                            // applies to every node below
user=mhaadmin                               // MHA management user
password=Mha@2001                           // MHA management password
manager_workdir=/var/log/mha/app1           // manager working directory
manager_log=/var/log/mha/app1/manager.log   // manager log file
ssh_user=root                               // user for SSH key-based login
repl_user=rep                               // replication user
repl_password=Mhn@2001                      // replication password
ping_interval=1                             // ping interval in seconds
[server1]                                   // node 1
hostname=192.168.177.130                    // node 1 address
ssh_port=22                                 // node 1 SSH port
candidate_master=1                          // 1 = may be promoted to master on failure
[server2]
hostname=192.168.177.131
ssh_port=22
candidate_master=1
[server3]
hostname=192.168.177.132
ssh_port=22
candidate_master=1
```

Now create the account referenced in the configuration file on the master host:

```
(root@localhost) [(none)] >create user mhaadmin@'%' identified with mysql_native_password by 'Mha@2001';
Query OK, 0 rows affected (0.01 sec)
(root@localhost) [(none)] >grant all on *.* to mhaadmin@'%';
Query OK, 0 rows affected (0.01 sec)
```

2.6 Checking SSH Connectivity and Replication

Next, verify that the remote SSH connections and the replication topology are correct.

```
[root@manager ~]# masterha_check_ssh --conf=/etc/mha/app1.cnf
root@192.168.177.131(192.168.177.131:22) to root@192.168.177.130(192.168.177.130:22)..
Warning: Permanently added '192.168.177.130' (ECDSA) to the list of known hosts.
Thu Mar 7 14:39:14 2024 - [debug] ok.
Thu Mar 7 14:39:14 2024 - [debug] Connecting via SSH from root@192.168.177.131(192.168.177.131:22) to root@192.168.177.132(192.168.177.132:22)..
Warning: Permanently added '192.168.177.132' (ECDSA) to the list of known hosts.
Thu Mar 7 14:39:14 2024 - [debug] ok.
Thu Mar 7 14:39:15 2024 - [debug]
Thu Mar 7 14:39:14 2024 - [debug] Connecting via SSH from root@192.168.177.132(192.168.177.132:22) to root@192.168.177.131(192.168.177.131:22)..
Warning: Permanently added '192.168.177.131' (ECDSA) to the list of known hosts.
Thu Mar 7 14:39:15 2024 - [debug] ok.
Thu Mar 7 14:39:15 2024 - [info] All SSH connection tests passed successfully.
[root@manager ~]# masterha_check_repl --conf=/etc/mha/app1.cnf
Thu Mar 7 14:43:23 2024 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Mar 7 14:43:23 2024 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Thu Mar 7 14:43:23 2024 - [info] Reading server configuration from /etc/mha/app1.cnf..
Thu Mar 7 14:43:23 2024 - [info] MHA::MasterMonitor version 0.58.
Thu Mar 7 14:43:24 2024 - [info] GTID failover mode = 1
Thu Mar 7 14:43:24 2024 - [info] Dead Servers:
Thu Mar 7 14:43:24 2024 - [info] Alive Servers:
Thu Mar 7 14:43:24 2024 - [info]   192.168.177.130(192.168.177.130:3306)
Thu Mar 7 14:43:24 2024 - [info]   192.168.177.131(192.168.177.131:3306)
Thu Mar 7 14:43:24 2024 - [info]   192.168.177.132(192.168.177.132:3306)
Thu Mar 7 14:43:24 2024 - [info] Alive Slaves:
Thu Mar 7 14:43:24 2024 - [info]   192.168.177.131(192.168.177.131:3306) Version=8.0.35 (oldest major version between slaves) log-bin:enabled
Thu Mar 7 14:43:24 2024 - [info]     GTID ON
Thu Mar 7 14:43:24 2024 - [info]     Replicating from 192.168.177.130(192.168.177.130:3306)
Thu Mar 7 14:43:24 2024 - [info]     Primary candidate for the new Master (candidate_master is set)
Thu Mar 7 14:43:24 2024 - [info]   192.168.177.132(192.168.177.132:3306) Version=8.0.35 (oldest major version between slaves) log-bin:enabled
Thu Mar 7 14:43:24 2024 - [info]     GTID ON
Thu Mar 7 14:43:24 2024 - [info]     Replicating from 192.168.177.130(192.168.177.130:3306)
Thu Mar 7 14:43:24 2024 - [info]     Primary candidate for the new Master (candidate_master is set)
Thu Mar 7 14:43:24 2024 - [info] Current Alive Master: 192.168.177.130(192.168.177.130:3306)
Thu Mar 7 14:43:24 2024 - [info] Checking slave configurations..
Thu Mar 7 14:43:24 2024 - [info] Checking replication filtering settings..
Thu Mar 7 14:43:24 2024 - [info]   binlog_do_db= , binlog_ignore_db=
Thu Mar 7 14:43:24 2024 - [info]   Replication filtering check ok.
Thu Mar 7 14:43:24 2024 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Thu Mar 7 14:43:24 2024 - [info] Checking SSH publickey authentication settings on the current master..
Thu Mar 7 14:43:24 2024 - [info] HealthCheck: SSH to 192.168.177.130 is reachable.
Thu Mar 7 14:43:24 2024 - [info]
192.168.177.130(192.168.177.130:3306) (current master)
 +--192.168.177.131(192.168.177.131:3306)
 +--192.168.177.132(192.168.177.132:3306)
Thu Mar 7 14:43:24 2024 - [info] Checking replication health on 192.168.177.131..
Thu Mar 7 14:43:24 2024 - [info]   ok.
Thu Mar 7 14:43:24 2024 - [info] Checking replication health on 192.168.177.132..
Thu Mar 7 14:43:24 2024 - [info]   ok.
Thu Mar 7 14:43:24 2024 - [warning] master_ip_failover_script is not defined.
Thu Mar 7 14:43:24 2024 - [warning] shutdown_script is not defined.
Thu Mar 7 14:43:24 2024 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
```

3 Starting MHA

```
# start from the command line
[root@node1 ~]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
```

Or start it with an init script:

```
#!/bin/bash
# chkconfig: 35 80 20
# description: MHA management script.

STARTEXEC="/usr/bin/masterha_manager --conf"
STOPEXEC="/usr/bin/masterha_stop --conf"
CONF="/etc/mha/app1.cnf"
process_count=$(ps -ef | grep -w masterha_manager | grep -v grep | wc -l)
PARAMS="--ignore_last_failover"

case "$1" in
start)
    if [ "$process_count" -gt 0 ]; then
        echo "masterha_manager exists, process is already running"
    else
        echo "Starting Masterha Manager"
        $STARTEXEC $CONF $PARAMS < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
    fi
    ;;
stop)
    if [ "$process_count" -eq 0 ]; then
        echo "Masterha Manager does not exist, process is not running"
    else
        echo "Stopping ..."
        $STOPEXEC $CONF
        while true; do
            process_count=$(ps -ef | grep -w masterha_manager | grep -v grep | wc -l)
            if [ "$process_count" -gt 0 ]; then
                sleep 1
            else
                break
            fi
        done
        echo "Master Manager stopped"
    fi
    ;;
*)
    echo "Please use start or stop as first argument"
    ;;
esac
```
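The start/stop guard in the script hinges on counting running masterha_manager processes while excluding the grep process itself from the count. A minimal, self-contained sketch of that check (the ps lines below are a fabricated sample, not real output):

```shell
# count lines naming masterha_manager, excluding the grep process itself
count_manager() {
  grep -w masterha_manager | grep -v grep | wc -l
}

# hypothetical "ps -ef" excerpt: one real manager process plus the grep
sample_ps='root 10430 1 0 14:54 ? 00:00:00 perl /usr/bin/masterha_manager --conf /etc/mha/app1.cnf
root 10530 9389 0 14:55 pts/1 00:00:00 grep --color=auto -w masterha_manager'

printf '%s\n' "$sample_ps" | count_manager   # → 1
```

The `grep -v grep` step matters: without it the pipeline would also count the grep command itself, which contains the string being searched for.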
```
[root@manager ~]# chmod +x /etc/init.d/masterha_managerd
[root@manager ~]# chkconfig --add masterha_managerd
[root@manager ~]# chkconfig masterha_managerd on
[root@manager ~]# systemctl start masterha_managerd
[root@manager ~]# systemctl status masterha_managerd
● masterha_managerd.service - SYSV: MHA management script.
   Loaded: loaded (/etc/rc.d/init.d/masterha_managerd; bad; vendor preset: disabled)
   Active: active (running) since Thu 2024-03-07 14:54:25 CST; 17s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 10423 ExecStart=/etc/rc.d/init.d/masterha_managerd start (code=exited, status=0/SUCCESS)
    Tasks: 1
   CGroup: /system.slice/masterha_managerd.service
           └─10430 perl /usr/bin/masterha_manager --conf /etc/mha/app1.cnf --ignore_last...
Mar 07 14:54:25 manager systemd[1]: Starting SYSV: MHA management script....
Mar 07 14:54:25 manager masterha_managerd[10423]: Starting Masterha Manager
Mar 07 14:54:25 manager systemd[1]: Started SYSV: MHA management script..
[root@manager ~]# ps -ef | grep -w masterha_managerd
root 10530 9389 0 14:55 pts/1 00:00:00 grep --color=auto -w masterha_managerd
[root@manager ~]# tail /var/log/mha/app1/manager.log
192.168.177.130(192.168.177.130:3306) (current master)
 +--192.168.177.131(192.168.177.131:3306)
 +--192.168.177.132(192.168.177.132:3306)
Thu Mar 7 14:54:27 2024 - [warning] master_ip_failover_script is not defined.
Thu Mar 7 14:54:27 2024 - [warning] shutdown_script is not defined.
Thu Mar 7 14:54:27 2024 - [info] Set master ping interval 1 seconds.
Thu Mar 7 14:54:27 2024 - [warning] secondary_check_script is not defined. It is highly recommended setting it to check master reachability from two or more routes.
Thu Mar 7 14:54:27 2024 - [info] Starting ping health check on 192.168.177.130(192.168.177.130:3306)..
Thu Mar 7 14:54:27 2024 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
# check the status of the master node
[root@manager ~]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:10430) is running(0:PING_OK), master:192.168.177.130
```
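For monitoring scripts it can be handy to pull the current master's address out of the masterha_check_status output. A small sed sketch over the status line shown above (the string is hard-coded here for illustration; in practice you would pipe the command's real output):

```shell
# sample output line from masterha_check_status (hard-coded for illustration)
status_line='app1 (pid:10430) is running(0:PING_OK), master:192.168.177.130'

# extract the IP address that follows "master:"
current_master=$(printf '%s\n' "$status_line" | sed -n 's/.*master:\([0-9.]*\).*/\1/p')
echo "$current_master"   # → 192.168.177.130
```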

3.1 Configuring a VIP

Split brain

Split brain occurs when a master/slave switchover does not complete cleanly (or fails for some other reason), so that clients and slaves come to believe there are two active masters, leaving the whole cluster in an inconsistent state.

The VIP can be managed in one of two ways: with keepalived floating the virtual IP, or with a script that moves the virtual IP itself (no keepalived- or heartbeat-style software required).

To reduce the risk of split brain, the script-based approach is recommended for production rather than keepalived.

```
[root@manager ~]# vim /usr/local/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.177.210/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig ens33:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
```

```
[root@manager ~]# chmod +x /usr/local/bin/master_ip_failover
[root@manager ~]# vim /etc/mha/app1.cnf
[server default]
# add:
master_ip_failover_script=/usr/local/bin/master_ip_failover
```

Bring up the VIP:

```
[root@manager ~]# ifconfig ens33:1 192.168.177.210/24
[root@manager ~]# ifconfig -a | grep -A 2 "ens33:1"
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.177.210  netmask 255.255.255.0  broadcast 192.168.177.255
        ether 00:0c:29:dc:28:da  txqueuelen 1000  (Ethernet)
```

Configuring email alerts

```
[root@manager ~]# cat /usr/local/bin/send_report
#!/usr/bin/perl
# Copyright (C) 2011 DeNA Co.,Ltd.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
## Note: This is a sample script and is not complete. Modify the script based on your environment.
use strict;
use warnings FATAL => 'all';
use Mail::Sender;
use Getopt::Long;

# new_master_host and new_slave_hosts are set only when recovering master succeeded
my ( $dead_master_host, $new_master_host, $new_slave_hosts, $subject, $body );
my $smtp='smtp.qq.com';       # your mail provider's SMTP host
my $mail_from='from@qq.com';  # sending address
my $mail_user='from@qq.com';  # sending address
my $mail_pass='password';     # the password issued when SMTP was enabled on the mailbox, not the login password
#my $mail_to=['to1@qq.com','to2@qq.com'];
my $mail_to='to@qq.com';      # recipient address

GetOptions(
    'orig_master_host=s' => \$dead_master_host,
    'new_master_host=s'  => \$new_master_host,
    'new_slave_hosts=s'  => \$new_slave_hosts,
    'subject=s'          => \$subject,
    'body=s'             => \$body,
);

# Do whatever you want here
mailToContacts($smtp,$mail_from,$mail_user,$mail_pass,$mail_to,$subject,$body);

sub mailToContacts {
    my ($smtp, $mail_from, $mail_user, $mail_pass, $mail_to, $subject, $msg ) = @_;
    open my $DEBUG, ">/var/log/mha/app1/mail.log"  # change this to a path that actually exists
        or die "Can't open the debug file:$!\n";
    my $sender = new Mail::Sender {
        ctype       => 'text/plain;charset=utf-8',
        encoding    => 'utf-8',
        smtp        => $smtp,
        from        => $mail_from,
        auth        => 'LOGIN',
        TLS_allowed => '0',
        authid      => $mail_user,
        authpwd     => $mail_pass,
        to          => $mail_to,
        subject     => $subject,
        debug       => $DEBUG
    };
    $sender->MailMsg(
        {
            msg   => $msg,
            debug => $DEBUG
        }
    ) or print $Mail::Sender::Error;
    return 1;
}
exit 0;
```

```
[root@node1 ~]# chmod +x /usr/local/bin/send_report
[root@node1 ~]# touch /var/log/mha/app1/mail.log
```

Update the manager configuration:

```
[root@node1 ~]# vim /etc/mha/app1.cnf
[server default]
report_script=/usr/local/bin/send_report
# restart MHA
[root@node1 ~]# systemctl restart masterha_managerd
```

Testing the email alert

```
# on manager, follow the log
tail -f /var/log/mha/app1/manager.log
# on master, stop the MySQL service
systemctl stop mysqld
```

The alert email arrives as expected.

Bringing the old master back

```
[root@master ~]# systemctl start mysqld
[root@master ~]# mysql -uroot -pMysql@123
(root@localhost) [(none)] >change master to
    -> master_host='192.168.177.131',
    -> master_user='rep',
    -> master_password='Mysql@123',
    -> master_auto_position=1;
Query OK, 0 rows affected, 7 warnings (0.01 sec)
(root@localhost) [(none)] >start slave;
Query OK, 0 rows affected, 1 warning (0.01 sec)
```
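Whether the old master has rejoined cleanly can be read off SHOW SLAVE STATUS; the fields worth checking are:

```sql
SHOW SLAVE STATUS\G
-- Slave_IO_Running:  Yes
-- Slave_SQL_Running: Yes
-- Last_IO_Error should be empty; an authentication error here points at a
-- wrong replication user or password in the CHANGE MASTER statement
```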

Checking the replication status shows that replication has not come up.

The cause is an incorrect replication configuration: the password used just now when re-pointing the old master at the new master does not match the replication password originally created on the old master. After correcting it, replication recovers.

On manager, check whether any of the three nodes was removed from the configuration; if not, simply restarting the service is enough.

The current master is now 192.168.177.131.

Original post: https://blog.csdn.net/2202_76007104/article/details/136515483