Keepalived monitors the state of every server node in the cluster. When a node fails or stops working, Keepalived removes it from the cluster; once the node recovers, Keepalived adds it back. All of this happens automatically, with no manual intervention required.
Use keepalived to float a VIP address between the master and backup servers.

- # Install keepalived on both web servers
- [root@pubserver cluster]# vim 07-install-keepalived.yml
- ---
- - name: install keepalived
-   hosts: webservers
-   tasks:
-     - name: install keepalived        # install the keepalived package
-       yum:
-         name: keepalived
-         state: present
- [root@pubserver cluster]# ansible-playbook 07-install-keepalived.yml
-
- # Edit the configuration file
- [root@web1 ~]# vim /etc/keepalived/keepalived.conf
- 12 router_id web1                 # unique identifier for this node in the cluster
- 13 vrrp_iptables                  # automatically add iptables accept rules
- ... ...
- 20 vrrp_instance VI_1 {
- 21     state MASTER               # role: MASTER on the master, BACKUP on the backup
- 22     interface eth0             # network interface
- 23     virtual_router_id 51       # virtual router ID
- 24     priority 100               # priority
- 25     advert_int 1               # interval between VRRP advertisements (heartbeats)
- 26     authentication {
- 27         auth_type PASS         # authentication type: shared password
- 28         auth_pass 1111         # all cluster members must use the same password
- 29     }
- 30     virtual_ipaddress {
- 31         192.168.88.80/24       # VIP address
- 32     }
- 33 }
- # Delete all remaining lines below this point
-
- [root@web1 ~]# systemctl start keepalived
- # Wait a few seconds for the service to fully start, then the VIP becomes visible
- [root@web1 ~]# ip a s eth0 | grep '88'
- inet 192.168.88.100/24 brd 192.168.88.255 scope global noprefixroute eth0
- inet 192.168.88.80/24 scope global secondary eth0
-
-
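Once the MASTER is up it multicasts a VRRP advertisement every advert_int second. If you want to watch this on the wire, a quick optional capture (not part of the original steps) is:
- [root@web1 ~]# tcpdump -i eth0 -nn ip proto 112    # VRRP is IP protocol 112; advertisements go to 224.0.0.18; Ctrl+C to stop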
- # Configure web2
- [root@web1 ~]# scp /etc/keepalived/keepalived.conf 192.168.88.200:/etc/keepalived/
- [root@web2 ~]# vim /etc/keepalived/keepalived.conf
- 12 router_id web2                 # change the id
- 13 vrrp_iptables
- ... ...
- 20 vrrp_instance VI_1 {
- 21     state BACKUP               # change the state
- 22     interface eth0
- 23     virtual_router_id 51
- 24     priority 80                # change the priority
- 25     advert_int 1
- 26     authentication {
- 27         auth_type PASS
- 28         auth_pass 1111
- 29     }
- 30     virtual_ipaddress {
- 31         192.168.88.80/24
- 32     }
- 33 }
-
- # Start the service
- [root@web2 ~]# systemctl start keepalived
- # Check the addresses; the VIP does not appear on eth0 while this node is BACKUP
- [root@web2 ~]# ip a s | grep '88'
- inet 192.168.88.200/24 brd 192.168.88.255 scope global noprefixroute eth0
-
-
- # Test: accessing 192.168.88.80 now returns web1's content
- [root@client1 ~]# curl http://192.168.88.80
- Welcome from web1
-
- # Simulate a failure on web1
- [root@web1 ~]# systemctl stop keepalived.service
-
- # Test: accessing 192.168.88.80 now returns web2's content
- [root@client1 ~]# curl http://192.168.88.80
- Welcome from web2
-
- # Check on web2: the VIP 192.168.88.80 is now present
- [root@web2 ~]# ip a s | grep '88'
- inet 192.168.88.200/24 brd 192.168.88.255 scope global noprefixroute eth0
- inet 192.168.88.80/24 scope global secondary eth0
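To hand the VIP back to web1 before continuing, start keepalived on web1 again; with keepalived's default preemption, its higher priority (100 vs 80) makes it MASTER again (a quick check, not part of the original steps):
- [root@web1 ~]# systemctl start keepalived.service
- [root@web1 ~]# ip a s eth0 | grep '88.80'
- inet 192.168.88.80/24 scope global secondary eth0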
Stopping nginx alone does not move the VIP, because keepalived itself is still running. Use a track_script that monitors port 80 on the MASTER so that a failed web service also triggers the master/backup switchover.
- # 1. Create the monitoring script on the MASTER
- [root@web1 ~]# vim /etc/keepalived/check_http.sh
- #!/bin/bash
-
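- # exit 0 if something is listening on TCP port 80, otherwise exit 1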
- ss -tlnp | grep :80 &> /dev/null && exit 0 || exit 1
-
- [root@web1 ~]# chmod +x /etc/keepalived/check_http.sh
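The script can be run by hand first to confirm its exit codes (optional; assumes nginx is currently listening on port 80):
- [root@web1 ~]# /etc/keepalived/check_http.sh; echo $?
- 0            # 0 while nginx is up; it returns 1 once nginx is stopped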
-
- # 2. Edit the MASTER configuration file to use the script
- [root@web1 ~]# vim /etc/keepalived/keepalived.conf
- 1 ! Configuration File for keepalived
- 2
- 3 global_defs {
- ... (omitted) ...
- 18 }
- 19
- 20 vrrp_script chk_http_port {            # define the monitoring script
- 21     script "/etc/keepalived/check_http.sh"
- 22     interval 2                         # run the script every 2 seconds
- 23 }
- 24
- 25 vrrp_instance VI_1 {
- 26     state MASTER
- 27     interface eth0
- 28     virtual_router_id 51
- 29     priority 100
- 30     advert_int 1
- 31     authentication {
- 32         auth_type PASS
- 33         auth_pass 1111
- 34     }
- 35     virtual_ipaddress {
- 36         192.168.88.80/24
- 37     }
- 38     track_script {                     # reference the script
- 39         chk_http_port
- 40     }
- 41 }
-
- # 3. Restart the service
- [root@web1 ~]# systemctl restart keepalived.service
-
- # 4. Test: after stopping nginx on web1, the VIP moves to web2
- [root@web1 ~]# systemctl stop nginx.service
- [root@web1 ~]# ip a s | grep 88
- inet 192.168.88.100/24 brd 192.168.88.255 scope global noprefixroute eth0
- [root@web2 ~]# ip a s | grep 88
- inet 192.168.88.200/24 brd 192.168.88.255 scope global noprefixroute eth0
- inet 192.168.88.80/24 scope global secondary eth0
- # 5. Once nginx on the MASTER is repaired, the VIP moves back to web1
- [root@web1 ~]# systemctl start nginx.service
- [root@web1 ~]# ip a s | grep 88
- inet 192.168.88.100/24 brd 192.168.88.255 scope global noprefixroute eth0
- inet 192.168.88.80/24 scope global secondary eth0
- [root@web2 ~]# ip a s | grep 88
- inet 192.168.88.200/24 brd 192.168.88.255 scope global noprefixroute eth0
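A related vrrp_script option not used in this lab: instead of letting a failed check put the instance into the FAULT state, weight can lower this node's priority so the BACKUP wins the election. A sketch with illustrative values:
- vrrp_script chk_http_port {
-     script "/etc/keepalived/check_http.sh"
-     interval 2
-     weight -30        # on script failure, subtract 30 from the priority (100 -> 70, below the BACKUP's 80)
- }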

Environment (LVS-DR mode):

| Host | IP address |
| --- | --- |
| client1 | eth0 -> 192.168.88.10 |
| lvs1 | eth0 -> 192.168.88.5 |
| lvs2 | eth0 -> 192.168.88.6 |
| web1 | eth0 -> 192.168.88.100 |
| web2 | eth0 -> 192.168.88.200 |
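This exercise assumes web1 and web2 still carry the VIP 192.168.88.15 on their loopback interface and the ARP kernel parameters from the earlier LVS-DR lab. A quick way to confirm (the exact parameter scope depends on how they were originally set):
- [root@web1 ~]# ip a s lo | grep 88.15        # the VIP should be bound to lo
- [root@web1 ~]# sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce    # expect 1 and 2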
- # Stop and uninstall keepalived on both web servers
- [root@pubserver cluster]# vim 08-rm-keepalived.yml
- ---
- - name: remove keepalived
-   hosts: webservers
-   tasks:
-     - name: stop keepalived           # stop the service
-       service:
-         name: keepalived
-         state: stopped
-
-     - name: uninstall keepalived      # remove the package
-       yum:
-         name: keepalived
-         state: absent
- [root@pubserver cluster]# ansible-playbook 08-rm-keepalived.yml
-
- # Create a new virtual machine, lvs2
- [root@myhost ~]# vm clone lvs2
-
- # Set the IP address of lvs2
- [root@myhost ~]# vm setip lvs2 192.168.88.6
-
- # Connect to it
- [root@myhost ~]# ssh 192.168.88.6
Remove the VIP address from eth0 on lvs1, since the VIP will now be managed by keepalived.
- [root@pubserver cluster]# vim 09-del-lvs1-vip.yml
- ---
- - name: del lvs1 vip
-   hosts: lvs1
-   tasks:
-     - name: rm vip
-       lineinfile:                 # remove a matching line from the given file
-         path: /etc/sysconfig/network-scripts/ifcfg-eth0
-         regexp: 'IPADDR2='        # regular-expression match
-         state: absent
-       notify: restart system
-
-   handlers:
-     - name: restart system
-       shell: reboot
- [root@pubserver cluster]# ansible-playbook 09-del-lvs1-vip.yml
-
- # Verify the result
- [root@lvs1 ~]# ip a s eth0 | grep 88
- inet 192.168.88.5/24 brd 192.168.88.255 scope global noprefixroute eth0
- [root@lvs1 ~]# ipvsadm -Ln                        # list the current rules
- [root@lvs1 ~]# ipvsadm -D -t 192.168.88.15:80     # delete the old virtual server rule, if present
- # Add an entry for lvs2 to the inventory file
- [root@pubserver cluster]# vim inventory
- ... (omitted) ...
- [lb]
- lvs1 ansible_host=192.168.88.5
- lvs2 ansible_host=192.168.88.6
- ... (omitted) ...
-
- # Install the packages (first push the yum repo config to the lb group)
- [root@pubserver cluster]# cp 01-upload-repo.yml 10-upload-repo.yml
- [root@pubserver cluster]# vim 10-upload-repo.yml
- ---
- - name: config repos.d
-   hosts: lb
-   tasks:
-     - name: delete repos.d
-       file:
-         path: /etc/yum.repos.d
-         state: absent
-
-     - name: create repos.d
-       file:
-         path: /etc/yum.repos.d
-         state: directory
-         mode: '0755'
-
-     - name: upload local88
-       copy:
-         src: files/local88.repo
-         dest: /etc/yum.repos.d/
- [root@pubserver cluster]# ansible-playbook 10-upload-repo.yml
-
- [root@pubserver cluster]# vim 11-install-lvs2.yml
- ---
- - name: install lvs keepalived
-   hosts: lb
-   tasks:
-     - name: install pkgs            # install the packages
-       yum:
-         name: ipvsadm,keepalived
-         state: present
- [root@pubserver cluster]# ansible-playbook 11-install-lvs2.yml
-
- [root@lvs1 ~]# vim /etc/keepalived/keepalived.conf
- 12 router_id lvs1                  # a unique id for this node
- 13 vrrp_iptables                   # automatically add iptables accept rules
- ... ...
- 20 vrrp_instance VI_1 {
- 21     state MASTER
- 22     interface eth0
- 23     virtual_router_id 51
- 24     priority 100
- 25     advert_int 1
- 26     authentication {
- 27         auth_type PASS
- 28         auth_pass 1111
- 29     }
- 30     virtual_ipaddress {
- 31         192.168.88.15           # VIP address, the same VIP already configured on the web servers
- 32     }
- 33 }
- # Below, keepalived manages the LVS rules
- 35 virtual_server 192.168.88.15 80 {        # declare the virtual server
- 36     delay_loop 6                         # wait 6 seconds before starting health checks
- 37     lb_algo wrr                          # scheduling algorithm: weighted round robin
- 38     lb_kind DR                           # forwarding mode: DR
- 39     persistence_timeout 50               # send the same client to the same real server for 50 seconds
- 40     protocol TCP                         # protocol: TCP
- 41
- 42     real_server 192.168.88.100 80 {      # declare a real server
- 43         weight 1                         # weight
- 44         TCP_CHECK {                      # health-check the real server over TCP
- 45             connect_timeout 3            # connection timeout: 3 seconds
- 46             nb_get_retry 3               # after 3 failed checks, consider the real server down
- 47             delay_before_retry 3         # wait 3 seconds between checks
- 48         }
- 49     }
- 50     real_server 192.168.88.200 80 {
- 51         weight 2
- 52         TCP_CHECK {
- 53             connect_timeout 3
- 54             nb_get_retry 3
- 55             delay_before_retry 3
- 56         }
- 57     }
- 58 }
- # Delete everything below this point
-
- # Start the keepalived service
- [root@lvs1 ~]# systemctl start keepalived
-
- # Verify
- [root@lvs1 ~]# ip a s eth0 | grep 88
- inet 192.168.88.5/24 brd 192.168.88.255 scope global noprefixroute eth0
- inet 192.168.88.15/32 scope global eth0
- [root@lvs1 ~]# ipvsadm -Ln        # the rules now appear
- IP Virtual Server version 1.2.1 (size=4096)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 192.168.88.15:80 wrr persistent 50
- -> 192.168.88.100:80 Route 1 0 0
- -> 192.168.88.200:80 Route 2 0 0
-
- # Connection test from the client
- [root@client1 ~]# for i in {1..6}; do curl http://192.168.88.15/; done
- Welcome from web2
- Welcome from web2
- Welcome from web2
- Welcome from web2
- Welcome from web2
- Welcome from web2
- # Because of persistence_timeout, requests from the same client are sent to the same real server for 50 seconds. To see round-robin scheduling from a single client, comment out that line and restart keepalived.
- [root@lvs1 ~]# vim +39 /etc/keepalived/keepalived.conf
- ... (omitted) ...
- #    persistence_timeout 50
- ... (omitted) ...
- [root@lvs1 ~]# systemctl restart keepalived.service
- # Verify from the client
- [root@client1 ~]# for i in {1..6}; do curl http://192.168.88.15/; done
- Welcome from web2
- Welcome from web1
- Welcome from web2
- Welcome from web2
- Welcome from web1
- Welcome from web2
-
- # Configure lvs2
- [root@lvs1 ~]# scp /etc/keepalived/keepalived.conf 192.168.88.6:/etc/keepalived/
- [root@lvs2 ~]# vim /etc/keepalived/keepalived.conf
- 12 router_id lvs2
- 21 state BACKUP
- 24 priority 80
- [root@lvs2 ~]# systemctl start keepalived
- [root@lvs2 ~]# ipvsadm -Ln        # the rules appear here as well
- IP Virtual Server version 1.2.1 (size=4096)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 192.168.88.15:80 wrr
- -> 192.168.88.100:80 Route 1 0 0
- -> 192.168.88.200:80 Route 2 0 0
- # 1. Verify real-server health checks
- [root@web1 ~]# systemctl stop nginx
- [root@lvs1 ~]# ipvsadm -Ln        # web1 disappears from the rules
- [root@lvs2 ~]# ipvsadm -Ln
-
- [root@web1 ~]# systemctl start nginx
- [root@lvs1 ~]# ipvsadm -Ln        # web1 reappears in the rules
- [root@lvs2 ~]# ipvsadm -Ln
-
- # 2. Verify LVS high availability
- [root@lvs1 ~]# shutdown -h now               # power off lvs1
- [root@lvs2 ~]# ip a s | grep 88              # the VIP is now visible on lvs2
- inet 192.168.88.6/24 brd 192.168.88.255 scope global noprefixroute eth0
- inet 192.168.88.15/32 scope global eth0
- # The client can still reach the VIP
- [root@client1 ~]# for i in {1..6}; do curl http://192.168.88.15/; done
- Welcome from web1
- Welcome from web2
- Welcome from web2
- Welcome from web1
- Welcome from web2
- Welcome from web2
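If lvs1 is later powered back on and keepalived is started on it again, its higher priority means it reclaims the VIP under keepalived's default preemption (an optional check, not part of the original steps):
- [root@lvs1 ~]# systemctl start keepalived
- [root@lvs1 ~]# ip a s eth0 | grep 88.15
- inet 192.168.88.15/32 scope global eth0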
HAProxy is a free, fast, and reliable solution that provides high availability, load balancing, and proxying for TCP- and HTTP-based applications. It is particularly suited to very high-load web sites that also need session persistence or layer-7 processing.
mode http
- Client requests are inspected in depth before being forwarded to the servers; only suitable for web (HTTP) services
mode tcp
- Layer-4 load balancing that does not inspect layer-7 information; suitable for any TCP service
mode health
- Performs health checks only; deprecated and no longer recommended
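For reference, a minimal sketch of how these modes appear in /etc/haproxy/haproxy.cfg (the listen name, backend address, and timeouts below are made-up examples, not part of this lab):
- defaults
-     mode http                      # layer-7 processing for HTTP traffic
-     timeout connect 5s
-     timeout client 30s
-     timeout server 30s
-
- listen mysql-proxy                 # hypothetical pure-TCP service
-     bind 0.0.0.0:3306
-     mode tcp                       # layer-4 forwarding, no HTTP inspection
-     server db1 192.168.88.250:3306 check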

| Host | IP address |
| --- | --- |
| client1 | eth0 -> 192.168.88.10 |
| HAProxy | eth0 -> 192.168.88.5 |
| web1 | eth0 -> 192.168.88.100 |
| web2 | eth0 -> 192.168.88.200 |
- # Shut down 192.168.88.6
- [root@lvs2 ~]# shutdown -h now
-
- # Configure 192.168.88.5 as the haproxy server
- [root@pubserver cluster]# vim 12-config-haproxy.yml
- ---
- - name: config haproxy
-   hosts: lvs1
-   tasks:
-     - name: rm lvs keepalived        # remove the old packages
-       yum:
-         name: ipvsadm,keepalived
-         state: absent
-
-     - name: rename hostname          # change the hostname
-       shell: hostnamectl set-hostname haproxy1
-
-     - name: install haproxy          # install the package
-       yum:
-         name: haproxy
-         state: present
- [root@pubserver cluster]# ansible-playbook 12-config-haproxy.yml
-
- # With haproxy, the web servers need neither the VIP nor the kernel-parameter changes; leaving the old settings in place does no harm either.
- # Edit the configuration file
- [root@haproxy1 ~]# vim /etc/haproxy/haproxy.cfg
- # In the config file, global holds global settings and defaults holds default settings; any later section that sets the same option overrides the defaults.
- # frontend describes how haproxy interacts with clients; backend describes how haproxy interacts with the backend application servers. The two can be merged into a single section named listen, which is what we use here.
- # Delete everything after line 64 and add the following
- 64 #---------------------------------------------------------------------
- 65 listen myweb                      # define a virtual server
- 66     bind 0.0.0.0:80               # listen on port 80 on all addresses
- 67     balance roundrobin            # round-robin scheduling algorithm
- # Health-check the web servers every 2 seconds: 2 consecutive successes mark a server up, 5 consecutive failures mark it down
- 68     server web1 192.168.88.100:80 check inter 2000 rise 2 fall 5
- 69     server web2 192.168.88.200:80 check inter 2000 rise 2 fall 5
- 70
- 71 listen stats                      # define the stats virtual server
- 72     bind 0.0.0.0:1080             # listen on port 1080 on all addresses
- 73     stats refresh 30s             # auto-refresh the stats page every 30 seconds
- 74     stats uri /stats              # URI of the stats page
- 75     stats auth admin:admin        # stats page username and password are both admin
-
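Before starting the service you can ask haproxy to validate the edited file (an optional check, not in the original steps):
- [root@haproxy1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg    # reports whether the configuration is valid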
- # Start the service
- [root@haproxy1 ~]# systemctl start haproxy.service
- # Open the stats page in firefox: http://192.168.88.5:1080/stats
-
- # Access test from the client
- [root@client1 ~]# for i in {1..6}; do curl http://192.168.88.5/; done
- Welcome from web2
- Welcome from web1
- Welcome from web2
- Welcome from web1
- Welcome from web2
- Welcome from web1
-
- # Run a load test from client1 with ab
- [root@client1 ~]# yum install -y httpd-tools
- [root@client1 ~]# ab -n1000 -c200 http://192.168.88.5/
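The statistics page can also be pulled from the command line using the admin:admin credentials configured above (a quick check, not part of the original steps):
- [root@client1 ~]# curl -u admin:admin http://192.168.88.5:1080/stats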
The stats page is available at http://192.168.88.5:1080/stats (screenshot omitted).

LVS suits scenarios that demand very high concurrency and stability; Nginx suits application-layer load balancing such as static file serving and reverse proxying; HAProxy offers richer features and more flexibility and fits a wide range of load-balancing scenarios.
LVS advantages:
LVS disadvantages:
Nginx advantages:
Nginx disadvantages:
HAProxy advantages:
HAProxy disadvantages: