MySQL MHA High Availability Solution




I. Introduction to MHA

MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed by youshimaton of the Japanese company DeNA (now at Facebook) and is an excellent piece of software for failover and master promotion in MySQL high-availability environments. During a MySQL failover, MHA can complete the switchover automatically within 0 to 30 seconds, and while doing so it preserves data consistency as far as possible, achieving high availability in the true sense.
MHA has two roles: MHA Node (data node) and MHA Manager (management node). MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or it can run on one of the slave nodes. MHA Node runs on every MySQL server. MHA Manager periodically probes the master node in the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and then repoints all other slaves to it. The whole failover process is completely transparent to the application.
During automatic failover, MHA tries to save the binary logs from the crashed master to minimize data loss, but this is not always possible. For example, if the master's hardware has failed or it cannot be reached over SSH, MHA cannot save the binary logs; it performs the failover anyway and the most recent data is lost. Using the semi-synchronous replication available since MySQL 5.5 greatly reduces this risk. MHA can be combined with semi-synchronous replication: if at least one slave has received the latest binary log events, MHA can apply them to all the other slaves, keeping the data consistent across every node.
Note: starting with MySQL 5.5, semi-synchronous replication is supported in the form of a plugin.
How should semi-synchronous replication be understood? Let's first look at asynchronous and fully synchronous replication.

1. Asynchronous replication

MySQL replication is asynchronous by default: after executing a transaction committed by the client, the master returns the result to the client immediately, without caring whether any slave has received and applied it. The problem is that if the master crashes, transactions already committed on the master may not have reached any slave; if a slave is then forcibly promoted to master, the new master's data may be incomplete.

2. Fully synchronous replication

The master returns to the client only after every slave has executed the transaction. Because it must wait for all slaves, fully synchronous replication inevitably suffers a severe performance penalty.

3. Semi-synchronous replication

This sits between asynchronous and fully synchronous replication: the master does not return to the client immediately after executing the transaction, but waits until at least one slave has received the event and written it to its relay log. Compared with asynchronous replication, semi-synchronous replication improves data safety, but it also adds some latency, at minimum one TCP/IP round trip, so it is best used on low-latency networks.

4. Asynchronous vs. semi-synchronous replication

By default MySQL replication is asynchronous: once the master's updates are written to the binlog, there is no guarantee that they have all been replicated to the slaves. Asynchronous replication is efficient, but when the master or a slave fails there is a high risk of the data being out of sync, and data may even be lost.
MySQL 5.5 introduced semi-synchronous replication to ensure that when the master fails, at least one slave holds a complete copy of the data. On timeout it can temporarily fall back to asynchronous replication so the service keeps running, and once a slave catches up it switches back to semi-synchronous mode.

5. How MHA works

Compared with other HA software, MHA's goal is to keep the master of a MySQL replication setup highly available. Its biggest strength is that it can reconcile the differences in relay logs between slaves so that all slaves end up consistent, then pick one of them as the new master and point the remaining slaves at it. The failover steps are roughly:
1) Save the binary log events (binlog events) from the crashed master.
2) Identify the slave with the most recent data.
3) Apply the differential relay logs to the other slaves.
4) Apply the binlog events saved from the master.
5) Promote one slave to be the new master.
6) Point the other slaves at the new master and resume replication.
MHA currently supports mainly one-master, multi-slave architectures. Building an MHA cluster requires at least three database servers: one master, one candidate master, and one slave.

II. Setting Up the Lab Environment

Next we deploy MHA. The environment is as follows:

Role               IP Address         Hostname       server-id   Type                      OS
Manager            192.168.206.200    manager        -           management node           CentOS 7.8
Master             192.168.206.201    master         1           master MySQL (writes)     CentOS 7.8
Candidate Master   192.168.206.202    slave-master   2           slave MySQL (reads)       CentOS 7.8
Slave              192.168.206.203    slave          3           slave MySQL (reads)       CentOS 7.8

The master serves writes; the candidate master (actually a slave, hostname slave-master) and the slave both serve reads. If the master goes down, the candidate master is promoted to the new master and the slave is repointed to it; the manager host acts as the management server.

III. Preparing the Base Environment

1. Basic configuration

After configuring IP addresses and hostnames, check SELinux and the firewall and disable both, so that replication does not run into problems later.
Note: the clocks on all hosts must be synchronized.

2. Configure the EPEL repository on all four machines

# Install the EPEL release package
yum install -y epel-release
# Download the EPEL repo file from the Aliyun mirror
wget -O /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Clear the yum cache
yum clean all
# Rebuild the yum cache
yum makecache

3. Set up passwordless SSH login

(1) Manager host
[root@manager ~]# ssh-keygen
[root@manager ~]# for i in 1 2 3;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.206.20$i;done
(2) Candidate master host
[root@slave-master ~]# ssh-keygen
[root@slave-master ~]# for i in 1 2 3;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.206.20$i;done

(3) Master host
[root@master ~]# ssh-keygen
[root@master ~]# for i in 1 2 3;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.206.20$i;done

(4) Slave host
[root@slave ~]# ssh-keygen
[root@slave ~]# for i in 1 2 3;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.206.20$i;done

4. Test passwordless SSH login

[root@manager ~]# ssh root@192.168.206.201
Last login: Tue Jul 7 16:24:46 2020 from 192.168.206.1
[root@master ~]# exit
登出
Connection to 192.168.206.201 closed.
[root@manager ~]#

# Tests from the other servers are omitted
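
To check all hosts in one pass, a small loop like the one below can be run from each machine (a minimal sketch; the IP list is the one from the table above, and BatchMode makes ssh fail instead of prompting for a password):

# Verify passwordless SSH to the three data nodes without any prompts
for ip in 192.168.206.201 192.168.206.202 192.168.206.203; do
    ssh -o BatchMode=yes -o ConnectTimeout=5 root@$ip hostname \
        && echo "$ip ok" || echo "$ip FAILED"
done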

5. Configure the hosts file

[root@manager ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

# Append the IPs and hostnames of the four servers
192.168.206.200    manager
192.168.206.201    master
192.168.206.202    slave-master
192.168.206.203    slave

# The hosts file must be identical on every server
[root@manager ~]# scp /etc/hosts root@192.168.206.201:/etc/hosts
hosts 100% 256 139.0KB/s 00:00
[root@manager ~]# scp /etc/hosts root@192.168.206.202:/etc/hosts
hosts 100% 256 208.7KB/s 00:00
[root@manager ~]# scp /etc/hosts root@192.168.206.203:/etc/hosts
hosts 100% 256 236.3KB/s 00:00

IV. Configuring MySQL Semi-synchronous Replication

To minimize data loss when the master goes down because of hardware failure, it is recommended to configure MySQL semi-synchronous replication together with MHA.
Note: the MySQL semi-synchronous plugins were contributed by Google and live under /usr/local/mysql/lib/plugin/: semisync_master.so for the master side and semisync_slave.so for the slave side.
Let's configure it. If you are not sure where the plugin directory is, find it with:
mysql> show variables like '%plugin_dir%';
+---------------+------------------------------+
| Variable_name | Value |
+---------------+------------------------------+
| plugin_dir | /usr/local/mysql/lib/plugin/ |
+---------------+------------------------------+
1 row in set (0.01 sec)

1. Install the plugins on the master and slaves

(1) Install the plugins on each node: the master, the candidate master, and the slave. Installing a plugin requires the server to support dynamic loading; check it with:
mysql> show variables like '%have_dynamic%';
+----------------------+-------+
| Variable_name | Value |
+----------------------+-------+
| have_dynamic_loading | YES |
+----------------------+-------+
1 row in set (0.01 sec)

Install the semi-synchronous plugins (semisync_master.so and semisync_slave.so) on all MySQL servers:
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
Query OK, 0 rows affected (0.24 sec)

mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
Query OK, 0 rows affected (0.03 sec)

Install them the same way on the other MySQL hosts.
(2) Check that the plugins are installed correctly
mysql> show plugins;
mysql> select plugin_name from information_schema.plugins;
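
A more targeted check (a minimal sketch based on the plugin names installed above) is to confirm that both semi-sync plugins report an ACTIVE status:

mysql> SELECT plugin_name, plugin_status FROM information_schema.plugins WHERE plugin_name LIKE 'rpl_semi_sync%';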

(3) Check the semi-synchronous replication variables
mysql> show variables like '%rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | OFF |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | OFF |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)

The output shows that the semi-synchronous plugins are installed but not yet enabled, which is why the values are OFF.

2. Edit my.cnf

Note: if the master MySQL server already exists and the slaves are being added later, copy the databases to be replicated from the master to the slaves before configuring replication (for example, back them up on the master and restore the backups on the slaves).
(1) Master host
server-id=1
log-bin=mysql-bin
binlog_format=mixed
log-bin-index=mysql-bin.index
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=1000
rpl_semi_sync_slave_enabled=1
relay_log_purge=0
relay-log=relay-bin
relay-log-index=slave-relay-bin.index

Notes:
rpl_semi_sync_master_enabled=1: 1 enables semi-sync on the master side, 0 disables it.
rpl_semi_sync_master_timeout: in milliseconds; if the master waits this long for an acknowledgement without receiving one, it stops waiting and falls back to asynchronous replication (e.g. 10000 = 10 seconds).
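
The my.cnf settings above take effect after a restart. To turn semi-sync on for a running instance first (a sketch using the standard semi-sync system variables; keep the my.cnf entries so the settings survive a restart), the same parameters can be set dynamically:

# On the master
mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
mysql> SET GLOBAL rpl_semi_sync_master_timeout = 10000;
# On a slave, restart the IO thread so the change is picked up
mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
mysql> STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;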
(2) Candidate master host
server-id=2
log-bin=mysql-bin
binlog_format=mixed
log-bin-index=mysql-bin.index
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=10000
rpl_semi_sync_slave_enabled=1
relay_log_purge=0
relay-log = relay-bin
relay-log-index = slave-relay-bin.index

Note: relay_log_purge=0 stops the SQL thread from automatically deleting a relay log once it has been applied. In an MHA setup, recovering a lagging slave may depend on the relay logs of another slave, so automatic purging is disabled.
(3) Slave host
server-id=3
log-bin=mysql-bin
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only=1
rpl_semi_sync_slave_enabled=1

(4) Check the semi-synchronous variables again
Restart the MySQL service on all servers:
systemctl restart mysqld

Log in to MySQL and check the semi-sync variables:
mysql> show variables like '%rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | ON |
| rpl_semi_sync_master_timeout | 1000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | ON |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.01 sec)

(5) Check the semi-synchronous status
mysql> show status like '%rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 0 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 0 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 0 |
| Rpl_semi_sync_master_tx_wait_time | 0 |
| Rpl_semi_sync_master_tx_waits | 0 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 0 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)

Several of these status variables are worth watching:
Rpl_semi_sync_master_status: whether the master is currently in asynchronous or semi-synchronous replication mode.
Rpl_semi_sync_master_clients: how many slaves are connected in semi-synchronous mode.
Rpl_semi_sync_master_yes_tx: number of commits successfully acknowledged by a slave.
Rpl_semi_sync_master_no_tx: number of commits not acknowledged by any slave.
Rpl_semi_sync_master_tx_avg_wait_time: average extra time a transaction waited because of semi-sync.
Rpl_semi_sync_master_net_avg_wait_time: average network wait time after a transaction enters the wait queue.
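
For day-to-day monitoring it helps to notice when the master silently falls back to asynchronous mode. A minimal sketch based on the status variables above (the password is a placeholder; wire it into whatever alerting you use):

#!/bin/bash
# Warn if semi-sync is no longer active on the master or no semi-sync slaves are connected
STATUS=$(mysql -uroot -p123456 -N -e "SHOW STATUS LIKE 'Rpl_semi_sync_master_status'" | awk '{print $2}')
CLIENTS=$(mysql -uroot -p123456 -N -e "SHOW STATUS LIKE 'Rpl_semi_sync_master_clients'" | awk '{print $2}')
if [ "$STATUS" != "ON" ] || [ "${CLIENTS:-0}" -lt 1 ]; then
    echo "WARNING: semi-sync degraded (status=$STATUS, clients=$CLIENTS)"
fi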

3. Configure replication

(1) Master host
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> grant replication slave on *.* to mharep@'192.168.206.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (1.00 sec)

mysql> grant all privileges on *.* to manager@'192.168.206.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.22 sec)

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 | 746 | | | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

The first GRANT creates the account used for replication; it only needs to exist on the master and the candidate master.
The second GRANT creates the MHA management account, which must be created on every MySQL server. The MHA configuration file requires remote logins to the databases, so this account needs the corresponding privileges.
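
The GRANT ... IDENTIFIED BY shorthand above is valid on the MySQL 5.7 used in this walkthrough; it was removed in MySQL 8.0. If you ever repeat this on 8.0, a rough equivalent (same account names and password as above) would be:

mysql> CREATE USER mharep@'192.168.206.%' IDENTIFIED BY '123456';
mysql> GRANT REPLICATION SLAVE ON *.* TO mharep@'192.168.206.%';
mysql> CREATE USER manager@'192.168.206.%' IDENTIFIED BY '123456';
mysql> GRANT ALL PRIVILEGES ON *.* TO manager@'192.168.206.%';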
(2) Candidate master host
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> grant replication slave on *.* to mharep@'192.168.206.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (10.01 sec)

mysql> grant all privileges on *.* to manager@'192.168.206.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> change master to master_host='192.168.206.201',master_port=3306,master_user='mharep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=746;
Query OK, 0 rows affected, 2 warnings (0.04 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.206.201
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Check the slave status; the following two values must both be Yes, meaning the slave can connect to and replicate from the master:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

(3) Slave host
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> grant all privileges on *.* to manager@'192.168.206.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> change master to master_host='192.168.206.201',master_port=3306,master_user='mharep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=746;
Query OK, 0 rows affected, 2 warnings (0.03 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.206.201
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Check the slave status; the following two values must both be Yes, meaning the slave can connect to and replicate from the master:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

(4) Check the semi-synchronous status on the master
mysql> show status like '%rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 2 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 0 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 0 |
| Rpl_semi_sync_master_tx_wait_time | 0 |
| Rpl_semi_sync_master_tx_waits | 0 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 0 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.01 sec)

Rpl_semi_sync_master_clients is now 2, showing that two slaves are connected in semi-synchronous mode.
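
Before moving on to MHA itself, a quick smoke test is worthwhile (a minimal sketch; the mha_test database is a throwaway name used only for this check): write on the master and confirm the row shows up on both slaves.

# On the master
mysql> CREATE DATABASE IF NOT EXISTS mha_test;
mysql> CREATE TABLE mha_test.t1 (id INT PRIMARY KEY);
mysql> INSERT INTO mha_test.t1 VALUES (1);
# On each slave the row should appear almost immediately
mysql> SELECT * FROM mha_test.t1;
# Clean up on the master when done
mysql> DROP DATABASE mha_test;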

V. Configuring MySQL MHA

MHA consists of a manager node and data nodes:
Data nodes: the hosts in the existing MySQL replication topology, at least three (one master, two slaves), so that a master-slave structure still exists after a master failover; they only need the node package.
Manager node: runs the monitoring scripts and is responsible for monitoring and auto-failover; it needs both the node package and the manager package.

1. Install the packages MHA depends on, on all hosts

yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles ncftp perl-Params-Validate perl-CPAN perl-Test-Mock-LWP.noarch perl-LWP-Authen-Negotiate.noarch perl-devel perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

2. Install mha4mysql-node

mha4mysql-node needs to be installed on all four servers (the manager node and the three database nodes):
tar zxvf mha4mysql-node-0.58.tar.gz
cd mha4mysql-node-0.58/
perl Makefile.PL
make && make install

The three database nodes only need mha4mysql-node.

3. Install mha4mysql-manager

tar zxvf mha4mysql-manager-0.58.tar.gz
cd mha4mysql-manager-0.58/
perl Makefile.PL
make && make install

The management node needs both mha4mysql-node and mha4mysql-manager installed.
Then create the working directories and copy the sample configuration files and scripts:
[root@manager mha4mysql-manager-0.58]# mkdir /etc/masterha
[root@manager mha4mysql-manager-0.58]# mkdir -p /masterha/app1
[root@manager mha4mysql-manager-0.58]# mkdir /scripts
[root@manager mha4mysql-manager-0.58]# cp samples/conf/* /etc/masterha/
[root@manager mha4mysql-manager-0.58]# cp samples/scripts/* /scripts/
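
To confirm that both packages installed cleanly, check that the main tools are on the PATH (a quick sanity check; every command listed is provided by the two packages built above):

# Manager-side tools (mha4mysql-manager)
which masterha_manager masterha_check_ssh masterha_check_repl masterha_check_status
# Node-side tools (mha4mysql-node, required on every host)
which save_binary_logs apply_diff_relay_logs purge_relay_logs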

4. Configure MHA

Like most Linux applications, MHA relies on a correct configuration file. The format is similar to MySQL's my.cnf, using param=value pairs. The configuration file lives on the management node and typically records each MySQL server's hostname, MySQL user and password, working directory, and so on.
(1) Edit the MHA configuration file
Edit /etc/masterha/app1.cnf with the following content:
[server default]
manager_workdir=/masterha/app1
manager_log=/masterha/app1/manager.log
user=manager
password=123456
ssh_user=root
repl_user=mharep
repl_password=123456
ping_interval=1

[server1]
hostname=192.168.206.201
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server2]
hostname=192.168.206.202
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server3]
hostname=192.168.206.203
port=3306
master_binlog_dir=/usr/local/mysql/data
no_master=1

Save and exit.
(2) Explanation of the configuration options
manager_workdir=/masterha/app1 // the manager's working directory

manager_log=/masterha/app1/manager.log // the manager's log file

user=manager // monitoring user

password=123456 // password of the monitoring user

ssh_user=root // user for SSH connections

repl_user=mharep // replication user

repl_password=123456 // password of the replication user

ping_interval=1 // interval, in seconds, between pings of the master (default 3); after three attempts with no response MHA starts an automatic failover

master_binlog_dir=/usr/local/mysql/data // where the master keeps its binlogs, so MHA can locate them; here it is the MySQL data directory

candidate_master=1 // mark the host as a candidate master; when a switchover happens this slave will be promoted to master
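
The checks below also pass --global_conf=/etc/masterha/masterha_default.cnf, one of the sample files copied earlier; it holds defaults shared by every application, which app1.cnf can override. A minimal sketch of what it could contain (these values simply mirror app1.cnf and are only an example):

[server default]
user=manager
password=123456
ssh_user=root
repl_user=mharep
repl_password=123456
master_binlog_dir=/usr/local/mysql/data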

(3) SSH connectivity check
[root@manager ~]# masterha_check_ssh --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
Tue Jul 7 18:53:32 2020 - [info] Reading default configuration from /etc/masterha/masterha_default.cnf..
Tue Jul 7 18:53:32 2020 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Jul 7 18:53:32 2020 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Jul 7 18:53:32 2020 - [info] Starting SSH connection tests..
Tue Jul 7 18:53:34 2020 - [debug]
Tue Jul 7 18:53:32 2020 - [debug] Connecting via SSH from root@192.168.206.201(192.168.206.201:22) to root@192.168.206.202(192.168.206.202:22)..
Tue Jul 7 18:53:33 2020 - [debug] ok.
Tue Jul 7 18:53:33 2020 - [debug] Connecting via SSH from root@192.168.206.201(192.168.206.201:22) to root@192.168.206.203(192.168.206.203:22)..
Tue Jul 7 18:53:34 2020 - [debug] ok.
Tue Jul 7 18:53:35 2020 - [debug]
Tue Jul 7 18:53:32 2020 - [debug] Connecting via SSH from root@192.168.206.202(192.168.206.202:22) to root@192.168.206.201(192.168.206.201:22)..
Tue Jul 7 18:53:33 2020 - [debug] ok.
Tue Jul 7 18:53:33 2020 - [debug] Connecting via SSH from root@192.168.206.202(192.168.206.202:22) to root@192.168.206.203(192.168.206.203:22)..
Tue Jul 7 18:53:34 2020 - [debug] ok.
Tue Jul 7 18:53:35 2020 - [debug]
Tue Jul 7 18:53:33 2020 - [debug] Connecting via SSH from root@192.168.206.203(192.168.206.203:22) to root@192.168.206.201(192.168.206.201:22)..
Tue Jul 7 18:53:34 2020 - [debug] ok.
Tue Jul 7 18:53:34 2020 - [debug] Connecting via SSH from root@192.168.206.203(192.168.206.203:22) to root@192.168.206.202(192.168.206.202:22)..
Tue Jul 7 18:53:35 2020 - [debug] ok.
Tue Jul 7 18:53:35 2020 - [info] All SSH connection tests passed successfully.

(4) Verify replication across the cluster
All MySQL instances must be running.

[root@manager ~]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
Tue Jul 7 18:56:57 2020 - [info] Reading default configuration from /etc/masterha/masterha_default.cnf..
Tue Jul 7 18:56:58 2020 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Jul 7 18:56:58 2020 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Jul 7 18:56:58 2020 - [info] MHA::MasterMonitor version 0.58.
Tue Jul 7 18:56:59 2020 - [info] GTID failover mode = 0
Tue Jul 7 18:56:59 2020 - [info] Dead Servers:
Tue Jul 7 18:56:59 2020 - [info] Alive Servers:
Tue Jul 7 18:56:59 2020 - [info] 192.168.206.201(192.168.206.201:3306)
Tue Jul 7 18:56:59 2020 - [info] 192.168.206.202(192.168.206.202:3306)
Tue Jul 7 18:56:59 2020 - [info] 192.168.206.203(192.168.206.203:3306)
Tue Jul 7 18:56:59 2020 - [info] Alive Slaves:
Tue Jul 7 18:56:59 2020 - [info] 192.168.206.202(192.168.206.202:3306) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Tue Jul 7 18:56:59 2020 - [info] Replicating from 192.168.206.201(192.168.206.201:3306)
Tue Jul 7 18:56:59 2020 - [info] Primary candidate for the new Master (candidate_master is set)
Tue Jul 7 18:56:59 2020 - [info] 192.168.206.203(192.168.206.203:3306) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Tue Jul 7 18:56:59 2020 - [info] Replicating from 192.168.206.201(192.168.206.201:3306)
Tue Jul 7 18:56:59 2020 - [info] Not candidate for the new Master (no_master is set)
Tue Jul 7 18:56:59 2020 - [info] Current Alive Master: 192.168.206.201(192.168.206.201:3306)
Tue Jul 7 18:56:59 2020 - [info] Checking slave configurations..
Tue Jul 7 18:56:59 2020 - [info] read_only=1 is not set on slave 192.168.206.202(192.168.206.202:3306).
Tue Jul 7 18:56:59 2020 - [warning] relay_log_purge=0 is not set on slave 192.168.206.203(192.168.206.203:3306).
Tue Jul 7 18:56:59 2020 - [info] Checking replication filtering settings..
Tue Jul 7 18:56:59 2020 - [info] binlog_do_db= , binlog_ignore_db=
Tue Jul 7 18:56:59 2020 - [info] Replication filtering check ok.
Tue Jul 7 18:56:59 2020 - [info] GTID (with auto-pos) is not supported
Tue Jul 7 18:56:59 2020 - [info] Starting SSH connection tests..
Tue Jul 7 18:57:02 2020 - [info] All SSH connection tests passed successfully.
Tue Jul 7 18:57:02 2020 - [info] Checking MHA Node version..
Tue Jul 7 18:57:03 2020 - [info] Version check ok.
Tue Jul 7 18:57:03 2020 - [info] Checking SSH publickey authentication settings on the current master..
Tue Jul 7 18:57:04 2020 - [info] HealthCheck: SSH to 192.168.206.201 is reachable.
Tue Jul 7 18:57:04 2020 - [info] Master MHA Node version is 0.58.
Tue Jul 7 18:57:04 2020 - [info] Checking recovery script configurations on 192.168.206.201(192.168.206.201:3306)..
Tue Jul 7 18:57:04 2020 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/usr/local/mysql/data --output_file=/data/log/masterha/save_binary_logs_test --manager_version=0.58 --start_file=mysql-bin.000001
Tue Jul 7 18:57:04 2020 - [info] Connecting to root@192.168.206.201(192.168.206.201:22)..
Creating /data/log/masterha if not exists.. Creating directory /data/log/masterha.. done.
ok.
Checking output directory is accessible or not..
ok.
Binlog found at /usr/local/mysql/data, up to mysql-bin.000001
Tue Jul 7 18:57:05 2020 - [info] Binlog setting check done.
Tue Jul 7 18:57:05 2020 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Tue Jul 7 18:57:05 2020 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.206.202 --slave_ip=192.168.206.202 --slave_port=3306 --workdir=/data/log/masterha --target_version=5.7.22-log --manager_version=0.58 --relay_log_info=/usr/local/mysql/data/relay-log.info --relay_dir=/usr/local/mysql/data/ --slave_pass=xxx
Tue Jul 7 18:57:05 2020 - [info] Connecting to root@192.168.206.202(192.168.206.202:22)..
Creating directory /data/log/masterha.. done.
Checking slave recovery environment settings..
Opening /usr/local/mysql/data/relay-log.info ... ok.
Relay log found at /usr/local/mysql/data, up to relay-bin.000003
Temporary relay log file is /usr/local/mysql/data/relay-bin.000003
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Tue Jul 7 18:57:06 2020 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.206.203 --slave_ip=192.168.206.203 --slave_port=3306 --workdir=/data/log/masterha --target_version=5.7.22-log --manager_version=0.58 --relay_log_info=/usr/local/mysql/data/relay-log.info --relay_dir=/usr/local/mysql/data/ --slave_pass=xxx
Tue Jul 7 18:57:06 2020 - [info] Connecting to root@192.168.206.203(192.168.206.203:22)..
Creating directory /data/log/masterha.. done.
Checking slave recovery environment settings..
Opening /usr/local/mysql/data/relay-log.info ... ok.
Relay log found at /usr/local/mysql/data, up to relay-bin.000003
Temporary relay log file is /usr/local/mysql/data/relay-bin.000003
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Tue Jul 7 18:57:06 2020 - [info] Slaves settings check done.
Tue Jul 7 18:57:06 2020 - [info]
192.168.206.201(192.168.206.201:3306) (current master)
+--192.168.206.202(192.168.206.202:3306)
+--192.168.206.203(192.168.206.203:3306)

Tue Jul 7 18:57:06 2020 - [info] Checking replication health on 192.168.206.202..
Tue Jul 7 18:57:06 2020 - [info] ok.
Tue Jul 7 18:57:06 2020 - [info] Checking replication health on 192.168.206.203..
Tue Jul 7 18:57:06 2020 - [info] ok.
Tue Jul 7 18:57:06 2020 - [warning] master_ip_failover_script is not defined.
Tue Jul 7 18:57:06 2020 - [warning] shutdown_script is not defined.
Tue Jul 7 18:57:06 2020 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

If the check succeeds, it automatically recognizes every server and the replication topology.
Note: if the check fails with an error like Can't exec "mysqlbinlog" ..., run the following on all servers:
ln -s /usr/local/mysql/bin/* /usr/local/bin/

(5) Start the manager
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /tmp/mha_manager.log &
[1] 55707

Note: on Unix/Linux a program is usually put into the background by appending & to the command, for example running MySQL in the background with /usr/local/mysql/bin/mysqld_safe --user=mysql &. However, many programs, unlike mysqld_safe, do not detach from the terminal, so nohup is used here to keep the manager running after the session closes.
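
nohup works, but the manager will not come back after a reboot. If you prefer systemd, a unit roughly like the following could be placed at /etc/systemd/system/mha-manager.service (a sketch for this environment only: the unit name is made up, and the masterha_manager path should be whatever `which masterha_manager` reports):

[Unit]
Description=MHA Manager for app1
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/masterha_manager --conf=/etc/masterha/app1.cnf --ignore_fail_on_start
# masterha_manager exits by design after a completed failover, so do not auto-restart it
Restart=no

[Install]
WantedBy=multi-user.target

Reload systemd and start it with systemctl daemon-reload && systemctl start mha-manager.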
(6) Check the status
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:55707) is running(0:PING_OK), master:192.168.206.201

(7) Failover test (automatic failover)
With MHA running, once the master dies the candidate master (a slave) is automatically promoted to master.
To verify this, stop the master (192.168.206.201). Because the configuration marks 192.168.206.202 as the candidate master, check on the slave (192.168.206.203) whether its master IP changes to the candidate master's IP.
1) Stop the master
Stop MySQL on the master (192.168.206.201).
2) Check the MHA log
The configuration above set the log path to /masterha/app1/manager.log.
[root@manager ~]# cat /masterha/app1/manager.log
----- Failover Report -----

app1: MySQL Master failover 192.168.206.201(192.168.206.201:3306) to 192.168.206.202(192.168.206.202:3306) succeeded

Master 192.168.206.201(192.168.206.201:3306) is down!

Check MHA Manager logs at manager:/masterha/app1/manager.log for details.

Started automated(non-interactive) failover.
The latest slave 192.168.206.202(192.168.206.202:3306) has all relay logs for recovery.
Selected 192.168.206.202(192.168.206.202:3306) as a new master.
192.168.206.202(192.168.206.202:3306): OK: Applying all logs succeeded.
192.168.206.203(192.168.206.203:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.206.203(192.168.206.203:3306): OK: Applying all logs succeeded. Slave started, replicating from 192.168.206.202(192.168.206.202:3306)
192.168.206.202(192.168.206.202:3306): Resetting slave info succeeded.
Master failover to 192.168.206.202(192.168.206.202:3306) completed successfully.

The log shows that the master failover succeeded, and it also gives a rough outline of the failover steps.
3) Check replication on the slave
Log in to MySQL on the slave (192.168.206.203) and check its replication status:
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.206.202
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

The master IP is now 192.168.206.202, i.e. the slave is replicating from 192.168.206.202.
It was originally replicating from 192.168.206.201, so MHA has promoted the candidate master (192.168.206.202) to be the new master; the IO and SQL threads are running correctly, and the MHA setup works.

VI. Routine Operations on the MHA Manager

1. Check for the following file and delete it if present

After a master switchover, the MHA manager service stops automatically and writes the file app1.failover.complete into manager_workdir (/masterha/app1).
[root@manager ~]# ll /masterha/app1/
总用量 24
-rw-r--r--. 1 root root 0 7月 7 19:07 app1.failover.complete
-rw-r--r--. 1 root root 22192 7月 7 19:07 manager.log

MHA will not start while this file exists; if you see the error below, delete the file first.
[error]
[/usr/share/perl5/vendor_perl/MHA/MasterFailover.pm, ln298]
Last failover was done at 2015/01/09 10:00:47. Current time is too early to do failover again. If you want to do failover, manually remove /masterha/app1/app1.failover.complete and run this script again.
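
In practice the cleanup before restarting the manager is just a couple of commands (a minimal sketch; alternatively masterha_manager can be started with --ignore_last_failover to skip the check on this marker file):

# Remove the marker left by the previous failover, then start and verify the manager
rm -f /masterha/app1/app1.failover.complete
nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /tmp/mha_manager.log &
masterha_check_status --conf=/etc/masterha/app1.cnf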

2. Re-run the MHA replication check

The old master now has to be made a slave of the candidate master (the new master).
(1) Check the binary log position on the candidate master (192.168.206.202)
mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000004
Position: 154

(2) Configure the old master as a slave of the candidate master
[root@master ~]# systemctl start mysqld
[root@master ~]# mysql -uroot -p

mysql> change master to master_host='192.168.206.202',master_port=3306,master_user='mharep',master_password='123456',master_log_file='mysql-bin.000004',master_log_pos=154;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

[root@manager ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf

# 省略部分输出信息

Tue Jul 7 19:31:44 2020 - [info] Slaves settings check done.
Tue Jul 7 19:31:44 2020 - [info]
192.168.206.202(192.168.206.202:3306) (current master)
+--192.168.206.201(192.168.206.201:3306)
+--192.168.206.203(192.168.206.203:3306)

Tue Jul 7 19:31:44 2020 - [info] Checking replication health on 192.168.206.201..
Tue Jul 7 19:31:44 2020 - [info] ok.
Tue Jul 7 19:31:44 2020 - [info] Checking replication health on 192.168.206.203..
Tue Jul 7 19:31:44 2020 - [info] ok.
Tue Jul 7 19:31:44 2020 - [warning] master_ip_failover_script is not defined.
Tue Jul 7 19:31:44 2020 - [warning] shutdown_script is not defined.
Tue Jul 7 19:31:44 2020 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

3. Stop MHA

masterha_stop --conf=/etc/masterha/app1.cnf

4. Start MHA

nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &

By default the manager refuses to start when a slave node is down; adding --ignore_fail_on_start lets MHA start even with a failed node:
nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_fail_on_start &> /tmp/mha_manager.log &

5. Check the status

masterha_check_status --conf=/etc/masterha/app1.cnf

6. Check the log

tail -f /masterha/app1/manager.log

7. Follow-up work after a master switchover

(1) Rebuilding
Rebuilding means dealing with the old master after a failover: the candidate master has become the new master, and the crashed master has to be brought back into the topology.
(2) Two ways to rebuild
1) Repair the old master and configure it as a new slave of the new master, then repeat the steps above (remove the failover.complete file, restart the manager, and so on).
2) If the old master's data files are intact, recover the last CHANGE MASTER statement from the MHA log and use it:
[root@manager ~]# grep "CHANGE MASTER TO MASTER" /masterha/app1/manager.log | tail -1
Tue Jul 7 19:07:08 2020 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.206.202', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=154, MASTER_USER='mharep', MASTER_PASSWORD='xxx';

[root@manager ~]# mysql -uroot -p

mysql> CHANGE MASTER TO MASTER_HOST='192.168.206.202', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=154, MASTER_USER='mharep', MASTER_PASSWORD='123456';
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.206.202
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

3) Start the manager
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /tmp/mha_manager.log &
[1] 55707

[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:55707) is running(0:PING_OK), master:192.168.206.202

Note: if everything is normal the output shows "PING_OK"; otherwise it shows "NOT_RUNNING", which means MHA monitoring is not running.
Purge relay logs regularly: because relay_log_purge=0 is set on the slaves, each slave node has to purge its relay logs on a schedule, and it is best to stagger the purge times between slaves, for example:
crontab -e
0 5 * * * /usr/local/bin/purge_relay_logs --user=root --password=pwd123 --port=3306 --disable_relay_log_purge >> /var/log/purge_relay.log 2>&1

VII. Configuring the VIP

The VIP can be managed in two ways: either keepalived manages the floating virtual IP, or a script brings the virtual IP up and down (no keepalived or heartbeat-style software required).

1. Managing the VIP with keepalived

(1) Install keepalived
Download and install the keepalived package on the master (192.168.206.201) and the candidate master (192.168.206.202).
Before compiling keepalived, the kernel-devel package and supporting libraries such as openssl-devel and popt-devel must be installed.
1) Download keepalived
wget https://www.keepalived.org/software/keepalived-2.0.20.tar.gz
2) Install the build dependencies
yum -y install kernel-devel openssl-devel popt-devel gcc
3) Unpack the source
tar zxf keepalived-2.0.20.tar.gz
4) Build and install
cd keepalived-2.0.20/
./configure --prefix=/ && make && make install
5) Enable and start the keepalived service
systemctl enable keepalived
systemctl start keepalived
6) Create firewall rules
If the firewall is enabled, either disable it or allow VRRP traffic:
firewall-cmd --direct --permanent --add-rule ipv4 filter OUTPUT 0 --in-interface ens33 --destination 224.0.0.18 --protocol vrrp -j ACCEPT

firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --in-interface ens33 --destination 224.0.0.18 --protocol vrrp -j ACCEPT

firewall-cmd --reload
(2) Edit the keepalived configuration file
vim /etc/keepalived/keepalived.conf
1) Configuration on the master
[root@master ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id mysql-1
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.206.100
    }
}
2)在备选master上配置
! Configuration File for keepalived

global_defs {
    router_id mysql-2    # differs from the master
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 50          # differs from the master
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.206.100
    }
}
3) Start keepalived on the master and check the logs
Note: the keepalived service must be started here.
[root@master keepalived-2.0.20]# systemctl start keepalived.service
[root@master keepalived-2.0.20]# ps -ef | grep keep
gdm 1977 1813 0 03:52 ? 00:00:00 /usr/libexec/gsd-housekeeping
gdm 59162 58971 0 05:51 ? 00:00:00 /usr/libexec/gsd-housekeeping
root 113268 1 0 14:54 ? 00:00:00 //sbin/keepalived -D
root 113270 113268 1 14:54 ? 00:00:00 //sbin/keepalived -D
root 113292 92137 0 14:55 pts/0 00:00:00 grep --color=auto keep

[root@master keepalived-2.0.20]# ip a | grep 100
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.206.100/32 scope global ens33
# 省略部分输出信息

[root@master keepalived-2.0.20]# tail -f /var/log/messages
# 省略部分输出信息
Jul 8 14:57:31 master avahi-daemon[740]: Registering new address record for 192.168.206.100 on ens33.IPv4.
Jul 8 14:57:31 master avahi-daemon[740]: Registering new address record for 192.168.206.201 on ens33.IPv4.
# 省略部分输出信息

The virtual IP 192.168.206.100 is now bound to interface ens33.
4) Start keepalived on the candidate master and check the logs
Start the keepalived service on the candidate master and watch the logs:
[root@slave-master keepalived-2.0.20]# systemctl start keepalived.service

[root@slave-master keepalived-2.0.20]# tail -f /var/log/messages
# 省略部分输出信息
Jul 8 15:01:16 slave-master Keepalived_vrrp[111159]: Assigned address 192.168.206.202 for interface ens33
# 省略部分输出信息

The messages above show that keepalived is configured successfully.
Note: both servers are configured with state BACKUP. keepalived can be set up in two ways, master->backup and backup->backup, and they behave very differently:
1. In master->backup mode, when the primary goes down the VIP floats to the backup; but once the primary is repaired and keepalived starts again, it takes the VIP back, even if nopreempt is set.
2. In backup->backup mode, when the primary goes down the VIP floats to the backup; when the old primary and its keepalived come back, they do not take the VIP back, even if their priority is higher.

To reduce the number of VIP moves, the repaired old master is usually treated as the new standby.
(3) Integrating keepalived with MHA
When the MySQL process on the master dies, MHA should stop keepalived so that the VIP moves.
To bring keepalived under MHA's control we only need to modify the failover script master_ip_failover so that it handles keepalived when the master goes down.
Edit /scripts/master_ip_failover so that it looks like this:
#!/usr/bin/env perl

use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
$command, $ssh_user, $orig_master_host,
$orig_master_ip, $orig_master_port, $new_master_host,
$new_master_ip, $new_master_port, $new_master_user,
$new_master_password
);

my $vip="192.168.206.100";
my $ssh_start_vip="systemctl start keepalived.service";
my $ssh_stop_vip="systemctl stop keepalived.service";

GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
'new_master_user=s' => \$new_master_user,
'new_master_password=s' => \$new_master_password,
);

exit &main();

sub main {

print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

if ( $command eq "stop" || $command eq "stopssh" ) {

my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {

my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
#`ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
exit 0;
}
else {
&usage();
exit 1;
}
}

sub start_vip(){
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

sub stop_vip(){
return 0 unless ($ssh_user);
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

Now that the script is in place, reference it from /etc/masterha/app1.cnf. First stop MHA:
#masterha_stop --conf=/etc/masterha/app1.cnf

Then enable the following parameter in /etc/masterha/app1.cnf (add it under [server default]):
master_ip_failover_script=/scripts/master_ip_failover
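
One detail that is easy to forget: the manager executes the script directly, so it must be executable (adjust the path if you keep the script somewhere else):

chmod +x /scripts/master_ip_failover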

(4) Start MHA:
#nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &

(5) Check the status
[root@centos1 ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:12047) is running(0:PING_OK), master:192.168.206.202

Check the cluster replication status again and make sure it reports no errors:
[root@centos1 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Fri Sep 30 23:05:10 2016 - [info] Slaves settings check done.
Fri Sep 30 23:05:10 2016 - [info]
192.168.1.102(192.168.1.102:3306) (current master)
+--192.168.206.202(192.168.206.202:3306)
+--192.168.206.203(192.168.206.203:3306)

Fri Sep 30 23:05:10 2016 - [info] Checking replication health on 192.168.206.202..
Fri Sep 30 23:05:10 2016 - [info] ok.
Fri Sep 30 23:05:10 2016 - [info] Checking replication health on 192.168.206.203..
Fri Sep 30 23:05:10 2016 - [info] ok.
Fri Sep 30 23:05:10 2016 - [info] Checking master_ip_failover_script status:
Fri Sep 30 23:05:10 2016 - [info] /scripts/master_ip_failover --command=status -
-ssh_user=root --orig_master_host=192.168.206.201 --orig_master_ip=192.168.206.201 --
orig_master_port=3306

Checking the Status of the script.. OK
Fri Sep 30 23:05:10 2016 - [info] OK.
Fri Sep 30 23:05:10 2016 - [warning] shutdown_script is not defined.
Fri Sep 30 23:05:10 2016 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

(6) Test
Stop the MySQL service on the master:
# systemctl stop mysqld

Check the replication status on the slave (192.168.206.202):
mysql> show slave status\G
*************************** 1. row ***************************
            Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.206.200
                Master_User: mharep
                Master_Port: 3306
             Connect_Retry: 60
        Master_Log_File: mysql-bin.000004
            Read_Master_Log_Pos: 154
     Relay_Log_File: relay-bin.000002
         Relay_Log_Pos: 320
    Relay_Master_Log_File: mysql-bin.000004
         Slave_IO_Running: Yes
         Slave_SQL_Running: Yes

The output shows that the slave now points at the new master (192.168.206.200). Check the VIP:
[root@master1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
group default qlen 1000
link/ether 00:0c:29:ce:ea:e9 brd ff:ff:ff:ff:ff:ff
inet 192.168.206.200/24 brd 192.168.206.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.206.100/24 brd 192.168.206.255 scope global secondary ens33:0
valid_lft forever preferred_lft forever
inet6 fe80::8afe:8575:3294:65a2/64 scope link tentative noprefixroute
dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::1d3d:dd62:8adb:c689/64 scope link tentative noprefixroute
dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::77a5:d0c0:a955:c7f3/64 scope link noprefixroute
valid_lft forever preferred_lft forever

The output shows that the old master has released the VIP and the new master has taken it over. For the follow-up work after the switchover (rebuilding the old master as a new slave), refer to the earlier steps. To avoid split-brain, in production it is recommended to manage the virtual IP with a script rather than with keepalived. With that, the basic MHA cluster is fully configured.
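
The script-based approach recommended above is not shown in this walkthrough; the idea, sketched minimally here, is to have master_ip_failover add and remove the VIP itself instead of starting and stopping keepalived (the VIP, netmask and interface are the ones used earlier; the helper script name is made up):

#!/bin/bash
# /scripts/vip.sh - hypothetical helper invoked over SSH by master_ip_failover
#   vip.sh start -> bind the VIP on this host
#   vip.sh stop  -> release the VIP on this host
VIP=192.168.206.100
DEV=ens33
case "$1" in
  start) /sbin/ip addr add $VIP/24 dev $DEV label $DEV:1
         /sbin/arping -q -c 3 -A -I $DEV $VIP ;;   # refresh neighbours' ARP caches
  stop)  /sbin/ip addr del $VIP/24 dev $DEV ;;
  *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
esac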
(7) Summary
MHA consists of two packages, the Manager tools and the Node tools.
The Manager package mainly provides:
masterha_check_ssh       check MHA's SSH configuration
masterha_check_repl      check MySQL replication
masterha_manager         start MHA
masterha_check_status    check the current MHA status
masterha_master_monitor  check whether the master is down
masterha_master_switch   control failover (automatic or manual)
masterha_conf_host       add or remove configured server entries
The Node package (these tools are normally triggered by the MHA Manager scripts and need no manual operation) mainly provides:
save_binary_logs         save and copy the master's binary logs
apply_diff_relay_logs    identify differential relay log events and apply them to the other slaves
filter_mysqlbinlog       strip unnecessary ROLLBACK events (no longer used by MHA)
purge_relay_logs         purge relay logs (without blocking the SQL thread)
(8) Essential MySQL skills
1. MySQL architecture: having an overall picture of the architecture makes it easier to keep deepening your understanding of MySQL.
2. Backing up MySQL in different ways: backups are part of a DBA's or ops engineer's daily work; think about which method, schedule and strategy you would use.
3. MySQL master-slave replication and read/write splitting: a must-have DBA skill.
4. MySQL/MariaDB replication over SSL: hardening the security of replication.
5. MySQL high availability: how to guarantee the availability of the data.
6. Database sharding concepts and strategies: as data volumes grow, sharding (vertical and horizontal splitting) becomes necessary for performance and maintainability.
7. MySQL/MariaDB performance tuning: mastering the mindset and techniques; continuously optimizing a database is a long-term effort.



yg9538 · 2022-07-22 22:48