MySQL MHA High Availability Solution

I. Introduction to MHA

MHA (Master High Availability) is one of the more mature solutions for MySQL high availability. It was developed by Yoshinori Matsunobu of Japan's DeNA (now at Facebook) and is an excellent piece of software for failover and master promotion in MySQL high-availability environments. During a MySQL failover, MHA can complete the switch automatically within 0 to 30 seconds, and while doing so it preserves data consistency to the greatest extent possible, achieving high availability in the true sense.
MHA has two roles: MHA Node (data node) and MHA Manager (management node). MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or on one of the slave nodes. MHA Node runs on every MySQL server. MHA Manager periodically probes the master of the cluster; when the master fails, it automatically promotes the slave holding the most recent data to be the new master and repoints all the other slaves at it. The whole failover process is completely transparent to the application.
During an automatic failover, MHA tries to save the binary logs from the crashed master to minimize data loss, but this is not always possible. For example, if the master's hardware has failed or it is unreachable over SSH, MHA cannot save the binary logs; it performs the failover anyway and the most recent data is lost. Semi-synchronous replication, available since MySQL 5.5, greatly reduces this risk, and MHA can be combined with it: as long as at least one slave has received the latest binary log events, MHA can apply them to all the other slaves, keeping every node consistent.
Note: starting with MySQL 5.5, semi-synchronous replication is supported in the form of a plugin.
How should semi-synchronous replication be understood? Let us first look at asynchronous and fully synchronous replication.

1. Asynchronous replication

MySQL replication is asynchronous by default: the master returns the result to the client as soon as it has executed the client's transaction, without caring whether any slave has received and processed it. This creates a problem: if the master crashes, transactions already committed on the master may not have reached the slaves, and forcibly promoting a slave to master at that point can leave the new master with incomplete data.

2. Fully synchronous replication

The master returns to the client only after every slave has executed the transaction. Because it must wait for all slaves to finish, fully synchronous replication inevitably suffers a severe performance penalty.

3. Semi-synchronous replication

A middle ground between asynchronous and fully synchronous replication: after executing the client's transaction, the master does not return immediately but waits until at least one slave has received the events and written them to its relay log. Compared with asynchronous replication, semi-synchronous replication improves data safety, at the cost of some added latency: at minimum one TCP/IP round trip. Semi-synchronous replication is therefore best used on low-latency networks.

4. Asynchronous vs. semi-synchronous

By default MySQL replication is asynchronous: once the master writes an update to its binlog, there is no guarantee that the update has been replicated to any slave. Asynchronous operation is efficient, but when the master or a slave has problems there is a high risk of the data being out of sync, and data can even be lost.
MySQL 5.5 introduced semi-synchronous replication so that when the master fails, at least one slave holds a complete copy of the data. On timeout it can temporarily fall back to asynchronous replication to keep the service running, and once a slave catches up again it switches back to semi-synchronous mode.
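Whether the master is currently semi-synchronous or has fallen back to asynchronous mode can be read from its status counters. A small sketch, assuming you are connected to the master:

```sql
-- ON = currently semi-synchronous; OFF = has fallen back to asynchronous
SHOW STATUS LIKE 'Rpl_semi_sync_master_status';
-- How many times the master has switched to async because of ack timeouts
SHOW STATUS LIKE 'Rpl_semi_sync_master_no_times';
```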

5. How MHA works

Compared with other HA software, MHA focuses on keeping the master of a MySQL replication setup highly available. Its distinguishing feature is that it can reconcile the differential relay logs among the slaves so that all of them end up with consistent data, then pick one to become the new master and repoint the other slaves at it. The failover sequence is:
1) Save binary log events (binlog events) from the crashed master.
2) Identify the slave with the most recent updates.
3) Apply the differential relay logs to the other slaves.
4) Apply the binary log events saved from the master.
5) Promote one slave to be the new master.
6) Point the other slaves at the new master for replication.
MHA currently supports mainly one-master, multi-slave topologies. A replication cluster for MHA needs at least three database servers, one master and two slaves: one acts as the master, one as a candidate (backup) master, and one as a plain slave.

II. Setting Up the Lab Environment

Next we deploy MHA. The environment is as follows:

Role              IP address        Hostname      server id  Type                   OS
Manager           192.168.206.200   manager       -          management node        CentOS 7.8
Master            192.168.206.201   master        1          master MySQL (writes)  CentOS 7.8
Candidate master  192.168.206.202   slave-master  2          slave MySQL (reads)    CentOS 7.8
Slave             192.168.206.203   slave         3          slave MySQL (reads)    CentOS 7.8

The master serves writes; the candidate master (actually a slave, hostname slave-master) and the slave both serve reads. If the master goes down, the candidate master is promoted to be the new master and the slave is repointed at it; manager acts as the management server.

III. Base Environment Preparation

1. Basic configuration

After configuring the IP addresses and hostnames, check the SELinux and firewall settings, and disable both SELinux and the firewall service so that replication will not run into trouble later.
Note: the clocks must be synchronized across all machines.

2. Configure the EPEL repository on all 4 machines

# Install the epel-release package
yum install -y epel-release
# Download the EPEL repo file from the Aliyun mirror
wget -O /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Clear the system yum cache
yum clean all
# Rebuild the yum cache
yum makecache

3. Set up passwordless (non-interactive) SSH logins

(1) Manager host
[root@manager ~]# ssh-keygen
[root@manager ~]# for i in 1 2 3;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.206.20$i;done
(2) Candidate master host
[root@slave-master ~]# ssh-keygen
[root@slave-master ~]# for i in 1 2 3;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.206.20$i;done

(3) Master host
[root@master ~]# ssh-keygen
[root@master ~]# for i in 1 2 3;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.206.20$i;done

(4) Slave host
[root@slave ~]# ssh-keygen
[root@slave ~]# for i in 1 2 3;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.206.20$i;done

4. Test passwordless SSH login

[root@manager ~]# ssh root@192.168.206.201
Last login: Tue Jul 7 16:24:46 2020 from 192.168.206.1
[root@master ~]# exit
logout
Connection to 192.168.206.201 closed.
[root@manager ~]#

# Tests against the other servers are omitted

5. Configure the hosts file

[root@manager ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

# Append the IPs and corresponding hostnames of the 4 servers below
192.168.206.200    manager
192.168.206.201    master
192.168.206.202    slave-master
192.168.206.203    slave

# The hosts file must be identical on every server
[root@manager ~]# scp /etc/hosts root@192.168.206.201:/etc/hosts
hosts 100% 256 139.0KB/s 00:00
[root@manager ~]# scp /etc/hosts root@192.168.206.202:/etc/hosts
hosts 100% 256 208.7KB/s 00:00
[root@manager ~]# scp /etc/hosts root@192.168.206.203:/etc/hosts
hosts 100% 256 236.3KB/s 00:00

IV. Configuring MySQL Semi-synchronous Replication

To minimize the data loss caused by a master going down with failed hardware, it is recommended to configure MySQL semi-synchronous replication alongside MHA.
Note: the MySQL semi-sync plugin was originally contributed by Google. It lives under /usr/local/mysql/lib/plugin/: semisync_master.so for the master and semisync_slave.so for slaves.
Now let's configure it. If you are not sure where the plugin directory is, find it with:
mysql> show variables like '%plugin_dir%';
+---------------+------------------------------+
| Variable_name | Value |
+---------------+------------------------------+
| plugin_dir | /usr/local/mysql/lib/plugin/ |
+---------------+------------------------------+
1 row in set (0.01 sec)

1. Install the plugin on the master and slaves

(1) Install the plugin on each node:
master
Candidate master
slave
Installing a plugin requires the database server to support dynamic loading. Check it with:
mysql> show variables like '%have_dynamic%';
+----------------------+-------+
| Variable_name | Value |
+----------------------+-------+
| have_dynamic_loading | YES |
+----------------------+-------+
1 row in set (0.01 sec)

Install the semi-sync plugins (semisync_master.so and semisync_slave.so) on all MySQL database servers:
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
Query OK, 0 rows affected (0.24 sec)

mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
Query OK, 0 rows affected (0.03 sec)

Install the plugins on the other MySQL hosts the same way.
(2) Check that the plugins are installed correctly
mysql> show plugins;
mysql> select plugin_name from information_schema.plugins;

(3) View the semi-sync variables
mysql> show variables like '%rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | OFF |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | OFF |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)

The output above shows that the semi-sync plugins are installed but not yet enabled, which is why both are OFF.

2. Edit my.cnf

Note: if the master MySQL server already exists and the slaves are being added later, copy the databases to be replicated from the master to the slaves before configuring replication (e.g. take a backup on the master, then restore it on each slave).
(1) Master host
server-id=1
log-bin=mysql-bin
binlog_format=mixed
log-bin-index=mysql-bin.index
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=1000
rpl_semi_sync_slave_enabled=1
relay_log_purge=0
relay-log=relay-bin
relay-log-index=slave-relay-bin.index

Notes:
rpl_semi_sync_master_enabled=1 — 1 enables semi-sync, 0 disables it.
rpl_semi_sync_master_timeout — in milliseconds; if the master waits this long for an acknowledgement without receiving one (10000 = 10 seconds; the master above uses 1000, i.e. 1 second), it stops waiting and falls back to asynchronous replication.
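Both parameters are also dynamic, so they can be changed at runtime without a restart. A sketch (settings made this way are lost on restart, so keep my.cnf in sync):

```sql
-- Enable semi-sync on the master side and set a 1-second ack timeout
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- milliseconds
-- Verify the change took effect
SHOW VARIABLES LIKE 'rpl_semi_sync_master%';
```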
(2) Candidate master host
server-id=2
log-bin=mysql-bin
binlog_format=mixed
log-bin-index=mysql-bin.index
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=10000
rpl_semi_sync_slave_enabled=1
relay_log_purge=0
relay-log = relay-bin
relay-log-index = slave-relay-bin.index

Note: relay_log_purge=0 stops the SQL thread from automatically deleting a relay log after executing it. In an MHA setup, recovering a lagging slave may depend on another slave's relay logs, so automatic purging is disabled.
(3) Slave host
server-id=3
log-bin=mysql-bin
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only=1
rpl_semi_sync_slave_enabled=1

(4) View the semi-sync variables
Restart the mysql service on all servers:
systemctl restart mysqld

Log in to mysql and check the semi-sync variables:
mysql> show variables like '%rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | ON |
| rpl_semi_sync_master_timeout | 1000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | ON |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.01 sec)

(5) View the semi-sync status
mysql> show status like '%rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 0 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 0 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 0 |
| Rpl_semi_sync_master_tx_wait_time | 0 |
| Rpl_semi_sync_master_tx_waits | 0 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 0 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)

A few status variables worth watching:
Rpl_semi_sync_master_status: whether the master is currently in semi-synchronous or asynchronous mode.
Rpl_semi_sync_master_clients: how many slaves are configured for semi-synchronous replication.
Rpl_semi_sync_master_yes_tx: the number of commits successfully acknowledged by a slave.
Rpl_semi_sync_master_no_tx: the number of commits not acknowledged by any slave.
Rpl_semi_sync_master_tx_avg_wait_time: the average extra time a transaction waits because semi-sync is enabled.
Rpl_semi_sync_master_net_avg_wait_time: the average network wait time after a transaction enters the wait queue.
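When scripting health checks, this status output can be captured with mysql -N -e and parsed in shell. A minimal sketch, using a captured sample embedded as a heredoc so it runs without a live server:

```shell
# Sample of `mysql -N -e "show status like '%rpl_semi_sync%'"` output,
# embedded here for illustration instead of querying a real master.
status=$(cat <<'EOF'
Rpl_semi_sync_master_clients    2
Rpl_semi_sync_master_status     ON
Rpl_semi_sync_slave_status      OFF
EOF
)
# Extract the number of semi-sync slaves currently connected
clients=$(printf '%s\n' "$status" | awk '$1 == "Rpl_semi_sync_master_clients" {print $2}')
echo "semi-sync clients: $clients"
```

Against a real master, replace the heredoc with `mysql -uroot -p -N -e "show status like '%rpl_semi_sync%'"`.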

3. Configure replication

(1) Master host
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> grant replication slave on *.* to mharep@'192.168.206.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (1.00 sec)

mysql> grant all privileges on *.* to manager@'192.168.206.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.22 sec)

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 | 746 | | | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

The first grant creates the account used for replication; it only needs to be created on the master and the candidate master.
The second grant creates the MHA management account and must be executed on every MySQL server, because MHA, as configured, logs in to each database remotely and needs the corresponding privileges.
(2) Candidate master host
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> grant replication slave on *.* to mharep@'192.168.206.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (10.01 sec)

mysql> grant all privileges on *.* to manager@'192.168.206.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> change master to master_host='192.168.206.201',master_port=3306,master_user='mharep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=746;
Query OK, 0 rows affected, 2 warnings (0.04 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.206.201
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Check the slave status; the following two values must both be Yes, meaning the slave can connect to the master normally.
Slave_IO_Running:Yes
Slave_SQL_Running:Yes

(3) Slave host
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> grant all privileges on *.* to manager@'192.168.206.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> change master to master_host='192.168.206.201',master_port=3306,master_user='mharep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=746;
Query OK, 0 rows affected, 2 warnings (0.03 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.206.201
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Check the slave status; the following two values must both be Yes, meaning the slave can connect to the master normally.
Slave_IO_Running:Yes
Slave_SQL_Running:Yes

(4) View the semi-sync status on the master
mysql> show status like '%rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 2 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 0 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 0 |
| Rpl_semi_sync_master_tx_wait_time | 0 |
| Rpl_semi_sync_master_tx_waits | 0 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 0 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.01 sec)

Rpl_semi_sync_master_clients shows that 2 slaves are now configured for semi-synchronous replication.

V. Configuring MySQL MHA

MHA consists of a manager node and data nodes:
Data nodes: the hosts in the existing MySQL replication topology, at least 3 (1 master and 2 slaves), so that a master-slave structure survives a master failover; these only need the node package installed.
Manager node: runs the monitoring scripts and handles monitoring and auto-failover; it needs both the node and the manager packages installed.

1. Install MHA's dependencies on all hosts

yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles ncftp perl-Params-Validate perl-CPAN perl-Test-Mock-LWP.noarch perl-LWP-Authen-Negotiate.noarch perl-devel perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

2. Install mha4mysql-node

Install it on all 4 servers (the 3 database nodes and the manager node):
tar zxvf mha4mysql-node-0.58.tar.gz
cd mha4mysql-node-0.58/
perl Makefile.PL
make && make install

The 3 database nodes only need mha4mysql-node.

3. Install mha4mysql-manager

tar zxvf mha4mysql-manager-0.58.tar.gz
cd mha4mysql-manager-0.58/
perl Makefile.PL
make && make install

The management node needs both mha4mysql-node and mha4mysql-manager installed.
Then create the required directories and copy the sample files:
[root@manager mha4mysql-manager-0.58]# mkdir /etc/masterha
[root@manager mha4mysql-manager-0.58]# mkdir -p /masterha/app1
[root@manager mha4mysql-manager-0.58]# mkdir /scripts
[root@manager mha4mysql-manager-0.58]# cp samples/conf/* /etc/masterha/
[root@manager mha4mysql-manager-0.58]# cp samples/scripts/* /scripts/

4. Configure MHA

Like most Linux applications, MHA relies on a proper configuration file. Its format is similar to MySQL's my.cnf, using param=value pairs. The file lives on the management node and typically records each MySQL server's hostname, MySQL user and password, working directory, and so on.
(1) Edit the MHA configuration file
Edit /etc/masterha/app1.cnf with the following content:
[server default]
manager_workdir=/masterha/app1
manager_log=/masterha/app1/manager.log
user=manager
password=123456
ssh_user=root
repl_user=mharep
repl_password=123456
ping_interval=1

[server1]
hostname=192.168.206.201
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server2]
hostname=192.168.206.202
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server3]
hostname=192.168.206.203
port=3306
master_binlog_dir=/usr/local/mysql/data
no_master=1

Save and exit.
(2) Explanation of the configuration options
manager_workdir=/masterha/app1 // the manager's working directory

manager_log=/masterha/app1/manager.log // the manager's log file

user=manager // the monitoring user

password=123456 // password of the monitoring user

ssh_user=root // user for SSH connections

repl_user=mharep // replication user

repl_password=123456 // password of the replication user

ping_interval=1 // interval in seconds between pings of the master (default 3); after three unanswered attempts MHA automatically starts a failover

master_binlog_dir=/usr/local/mysql/data // where the master keeps its binlogs, so MHA can find them; here it is also the MySQL data directory

candidate_master=1 // marks the host as a candidate master; when this is set, the slave is preferred for promotion to master after a failover

(3) Verify SSH connectivity
[root@manager ~]# masterha_check_ssh --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
Tue Jul 7 18:53:32 2020 - [info] Reading default configuration from /etc/masterha/masterha_default.cnf..
Tue Jul 7 18:53:32 2020 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Jul 7 18:53:32 2020 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Jul 7 18:53:32 2020 - [info] Starting SSH connection tests..
Tue Jul 7 18:53:34 2020 - [debug]
Tue Jul 7 18:53:32 2020 - [debug] Connecting via SSH from root@192.168.206.201(192.168.206.201:22) to root@192.168.206.202(192.168.206.202:22)..
Tue Jul 7 18:53:33 2020 - [debug] ok.
Tue Jul 7 18:53:33 2020 - [debug] Connecting via SSH from root@192.168.206.201(192.168.206.201:22) to root@192.168.206.203(192.168.206.203:22)..
Tue Jul 7 18:53:34 2020 - [debug] ok.
Tue Jul 7 18:53:35 2020 - [debug]
Tue Jul 7 18:53:32 2020 - [debug] Connecting via SSH from root@192.168.206.202(192.168.206.202:22) to root@192.168.206.201(192.168.206.201:22)..
Tue Jul 7 18:53:33 2020 - [debug] ok.
Tue Jul 7 18:53:33 2020 - [debug] Connecting via SSH from root@192.168.206.202(192.168.206.202:22) to root@192.168.206.203(192.168.206.203:22)..
Tue Jul 7 18:53:34 2020 - [debug] ok.
Tue Jul 7 18:53:35 2020 - [debug]
Tue Jul 7 18:53:33 2020 - [debug] Connecting via SSH from root@192.168.206.203(192.168.206.203:22) to root@192.168.206.201(192.168.206.201:22)..
Tue Jul 7 18:53:34 2020 - [debug] ok.
Tue Jul 7 18:53:34 2020 - [debug] Connecting via SSH from root@192.168.206.203(192.168.206.203:22) to root@192.168.206.202(192.168.206.202:22)..
Tue Jul 7 18:53:35 2020 - [debug] ok.
Tue Jul 7 18:53:35 2020 - [info] All SSH connection tests passed successfully.

(4) Verify the replication cluster
All MySQL instances must be running.

[root@manager ~]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
Tue Jul 7 18:56:57 2020 - [info] Reading default configuration from /etc/masterha/masterha_default.cnf..
Tue Jul 7 18:56:58 2020 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Jul 7 18:56:58 2020 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Jul 7 18:56:58 2020 - [info] MHA::MasterMonitor version 0.58.
Tue Jul 7 18:56:59 2020 - [info] GTID failover mode = 0
Tue Jul 7 18:56:59 2020 - [info] Dead Servers:
Tue Jul 7 18:56:59 2020 - [info] Alive Servers:
Tue Jul 7 18:56:59 2020 - [info] 192.168.206.201(192.168.206.201:3306)
Tue Jul 7 18:56:59 2020 - [info] 192.168.206.202(192.168.206.202:3306)
Tue Jul 7 18:56:59 2020 - [info] 192.168.206.203(192.168.206.203:3306)
Tue Jul 7 18:56:59 2020 - [info] Alive Slaves:
Tue Jul 7 18:56:59 2020 - [info] 192.168.206.202(192.168.206.202:3306) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Tue Jul 7 18:56:59 2020 - [info] Replicating from 192.168.206.201(192.168.206.201:3306)
Tue Jul 7 18:56:59 2020 - [info] Primary candidate for the new Master (candidate_master is set)
Tue Jul 7 18:56:59 2020 - [info] 192.168.206.203(192.168.206.203:3306) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Tue Jul 7 18:56:59 2020 - [info] Replicating from 192.168.206.201(192.168.206.201:3306)
Tue Jul 7 18:56:59 2020 - [info] Not candidate for the new Master (no_master is set)
Tue Jul 7 18:56:59 2020 - [info] Current Alive Master: 192.168.206.201(192.168.206.201:3306)
Tue Jul 7 18:56:59 2020 - [info] Checking slave configurations..
Tue Jul 7 18:56:59 2020 - [info] read_only=1 is not set on slave 192.168.206.202(192.168.206.202:3306).
Tue Jul 7 18:56:59 2020 - [warning] relay_log_purge=0 is not set on slave 192.168.206.203(192.168.206.203:3306).
Tue Jul 7 18:56:59 2020 - [info] Checking replication filtering settings..
Tue Jul 7 18:56:59 2020 - [info] binlog_do_db= , binlog_ignore_db=
Tue Jul 7 18:56:59 2020 - [info] Replication filtering check ok.
Tue Jul 7 18:56:59 2020 - [info] GTID (with auto-pos) is not supported
Tue Jul 7 18:56:59 2020 - [info] Starting SSH connection tests..
Tue Jul 7 18:57:02 2020 - [info] All SSH connection tests passed successfully.
Tue Jul 7 18:57:02 2020 - [info] Checking MHA Node version..
Tue Jul 7 18:57:03 2020 - [info] Version check ok.
Tue Jul 7 18:57:03 2020 - [info] Checking SSH publickey authentication settings on the current master..
Tue Jul 7 18:57:04 2020 - [info] HealthCheck: SSH to 192.168.206.201 is reachable.
Tue Jul 7 18:57:04 2020 - [info] Master MHA Node version is 0.58.
Tue Jul 7 18:57:04 2020 - [info] Checking recovery script configurations on 192.168.206.201(192.168.206.201:3306)..
Tue Jul 7 18:57:04 2020 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/usr/local/mysql/data --output_file=/data/log/masterha/save_binary_logs_test --manager_version=0.58 --start_file=mysql-bin.000001
Tue Jul 7 18:57:04 2020 - [info] Connecting to root@192.168.206.201(192.168.206.201:22)..
Creating /data/log/masterha if not exists.. Creating directory /data/log/masterha.. done.
ok.
Checking output directory is accessible or not..
ok.
Binlog found at /usr/local/mysql/data, up to mysql-bin.000001
Tue Jul 7 18:57:05 2020 - [info] Binlog setting check done.
Tue Jul 7 18:57:05 2020 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Tue Jul 7 18:57:05 2020 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.206.202 --slave_ip=192.168.206.202 --slave_port=3306 --workdir=/data/log/masterha --target_version=5.7.22-log --manager_version=0.58 --relay_log_info=/usr/local/mysql/data/relay-log.info --relay_dir=/usr/local/mysql/data/ --slave_pass=xxx
Tue Jul 7 18:57:05 2020 - [info] Connecting to root@192.168.206.202(192.168.206.202:22)..
Creating directory /data/log/masterha.. done.
Checking slave recovery environment settings..
Opening /usr/local/mysql/data/relay-log.info ... ok.
Relay log found at /usr/local/mysql/data, up to relay-bin.000003
Temporary relay log file is /usr/local/mysql/data/relay-bin.000003
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Tue Jul 7 18:57:06 2020 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.206.203 --slave_ip=192.168.206.203 --slave_port=3306 --workdir=/data/log/masterha --target_version=5.7.22-log --manager_version=0.58 --relay_log_info=/usr/local/mysql/data/relay-log.info --relay_dir=/usr/local/mysql/data/ --slave_pass=xxx
Tue Jul 7 18:57:06 2020 - [info] Connecting to root@192.168.206.203(192.168.206.203:22)..
Creating directory /data/log/masterha.. done.
Checking slave recovery environment settings..
Opening /usr/local/mysql/data/relay-log.info ... ok.
Relay log found at /usr/local/mysql/data, up to relay-bin.000003
Temporary relay log file is /usr/local/mysql/data/relay-bin.000003
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Tue Jul 7 18:57:06 2020 - [info] Slaves settings check done.
Tue Jul 7 18:57:06 2020 - [info]
192.168.206.201(192.168.206.201:3306) (current master)
+--192.168.206.202(192.168.206.202:3306)
+--192.168.206.203(192.168.206.203:3306)

Tue Jul 7 18:57:06 2020 - [info] Checking replication health on 192.168.206.202..
Tue Jul 7 18:57:06 2020 - [info] ok.
Tue Jul 7 18:57:06 2020 - [info] Checking replication health on 192.168.206.203..
Tue Jul 7 18:57:06 2020 - [info] ok.
Tue Jul 7 18:57:06 2020 - [warning] master_ip_failover_script is not defined.
Tue Jul 7 18:57:06 2020 - [warning] shutdown_script is not defined.
Tue Jul 7 18:57:06 2020 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

If the check succeeds, it automatically discovers all servers and the master-slave topology.
Note: if the check fails with the error "Can't exec "mysqlbinlog" ...", run the following on all servers:
ln -s /usr/local/mysql/bin/* /usr/local/bin/

(5) Start the manager
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &
[1] 55707

Note: on Unix/Linux, to run a program in the background we usually append & to the command line, e.g. to run MySQL in the background: /usr/local/mysql/bin/mysqld_safe --user=mysql &. But many programs, unlike mysqld, do not keep running that way once the session ends, which is why we also need nohup.
(6) Check the status
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:55707) is running(0:PING_OK), master:192.168.206.201

(7) Verify automatic failover
With MHA already running, when the master dies the candidate master (a slave) automatically fails over to become the new master.
To verify this, stop the master (192.168.206.201). The configuration above designates the candidate master (192.168.206.202) as the preferred successor, so check on the slave (192.168.206.203) whether the master IP has changed to the candidate master's IP.
1) Stop the master
Stop mysql on the master (192.168.206.201).
2) Check the MHA log
The configuration file above set the log location to /masterha/app1/manager.log.
[root@manager ~]# cat /masterha/app1/manager.log
----- Failover Report -----

app1: MySQL Master failover 192.168.206.201(192.168.206.201:3306) to 192.168.206.202(192.168.206.202:3306) succeeded

Master 192.168.206.201(192.168.206.201:3306) is down!

Check MHA Manager logs at manager:/masterha/app1/manager.log for details.

Started automated(non-interactive) failover.
The latest slave 192.168.206.202(192.168.206.202:3306) has all relay logs for recovery.
Selected 192.168.206.202(192.168.206.202:3306) as a new master.
192.168.206.202(192.168.206.202:3306): OK: Applying all logs succeeded.
192.168.206.203(192.168.206.203:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.206.203(192.168.206.203:3306): OK: Applying all logs succeeded. Slave started, replicating from 192.168.206.202(192.168.206.202:3306)
192.168.206.202(192.168.206.202:3306): Resetting slave info succeeded.
Master failover to 192.168.206.202(192.168.206.202:3306) completed successfully.

The log shows that the master failover succeeded, and outlines the overall failover flow.
3) Check replication on the slave
Log in to mysql on the slave (192.168.206.203) and check the slave status:
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.206.202
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

The master IP is now 192.168.206.202, i.e. the slave replicates from 192.168.206.202 instead of 192.168.206.201. MHA has promoted the candidate master (192.168.206.202) to be the new master, and the IO and SQL threads are running correctly: the MHA setup works.

VI. Routine Operations on the MHA Manager

1. Check for the failover marker file and delete it if present

After a master failover, the MHA manager service stops automatically and creates the file app1.failover.complete in manager_workdir (/masterha/app1).
[root@manager ~]# ll /masterha/app1/
total 24
-rw-r--r--. 1 root root 0 Jul 7 19:07 app1.failover.complete
-rw-r--r--. 1 root root 22192 Jul 7 19:07 manager.log

MHA cannot be started again while this file exists; if you see the following message, delete the file first:
[error]
[/usr/share/perl5/vendor_perl/MHA/MasterFailover.pm, ln298]
Last failover was done at 2015/01/09 10:00:47.Current time is too early to do failover again. If you want to do failover, manually remove /masterha/app1/app1.failover.complete and run this script again.
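The cleanup itself is a single rm. A minimal sketch, using a temporary directory as a stand-in for the real /masterha/app1 so it is safe to run anywhere:

```shell
# Simulate the manager workdir with a leftover failover marker,
# then remove the marker the way you would before restarting MHA.
workdir=$(mktemp -d)                       # stand-in for /masterha/app1
touch "$workdir/app1.failover.complete"    # leftover marker from a failover
rm -f "$workdir/app1.failover.complete"    # MHA can be started again after this
ls -A "$workdir"                           # prints nothing: the marker is gone
```

On the real manager the command is simply `rm -f /masterha/app1/app1.failover.complete`.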

2. Re-run the MHA replication check

The old master now has to be reconfigured as a slave of the candidate (new) master.
(1) Check the binary log position on the candidate master (192.168.206.202)
mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000004
Position: 154

(2) Configure the old master as a slave of the candidate master
[root@master ~]# systemctl start mysqld
[root@master ~]# mysql -uroot -p

mysql> change master to master_host='192.168.206.202',master_port=3306,master_user='mharep',master_password='123456',master_log_file='mysql-bin.000004',master_log_pos=154;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

[root@manager ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf

# Part of the output omitted

Tue Jul 7 19:31:44 2020 - [info] Slaves settings check done.
Tue Jul 7 19:31:44 2020 - [info]
192.168.206.202(192.168.206.202:3306) (current master)
+--192.168.206.201(192.168.206.201:3306)
+--192.168.206.203(192.168.206.203:3306)

Tue Jul 7 19:31:44 2020 - [info] Checking replication health on 192.168.206.201..
Tue Jul 7 19:31:44 2020 - [info] ok.
Tue Jul 7 19:31:44 2020 - [info] Checking replication health on 192.168.206.203..
Tue Jul 7 19:31:44 2020 - [info] ok.
Tue Jul 7 19:31:44 2020 - [warning] master_ip_failover_script is not defined.
Tue Jul 7 19:31:44 2020 - [warning] shutdown_script is not defined.
Tue Jul 7 19:31:44 2020 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

3. Stop MHA

masterha_stop --conf=/etc/masterha/app1.cnf

4. Start MHA

nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &

If a slave node is down, MHA will not start by default. Add --ignore_fail_on_start to start MHA even when a node is down:
nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_fail_on_start &>/tmp/mha_manager.log &

5. Check the status

masterha_check_status --conf=/etc/masterha/app1.cnf

6. Check the log

tail -f /masterha/app1/manager.log