A single-master cluster was deployed earlier, but for the sake of cluster stability it needs to be extended to multiple masters. The original single-master deployment guide: kubernetes二进制安装教程单master_zsk_john的博客-CSDN博客.
The plan is to build on top of that deployment, and there are quite a few details involved. The single-master and multi-master cluster layouts are as follows:
Single-master layout:

| No. | IP | Role | Hostname | Installed components |
| --- | --- | --- | --- | --- |
| 1 | 192.168.217.16 | master | master, k8s-master | kube-apiserver, kubelet, kube-controller-manager, kube-proxy, etcd, Docker runtime |
| 2 | 192.168.217.17 | slave1 | slave1, k8s-slave1 | kubelet, kube-proxy, etcd, Docker runtime |
| 3 | 192.168.217.18 | slave2 | slave2, k8s-slave2 | kubelet, kube-proxy, etcd, Docker runtime |
Multi-master layout:

| No. | IP | Role | Hostname | Installed components |
| --- | --- | --- | --- | --- |
| 1 | 192.168.217.16 | master1 | master, k8s-master (master node) | kube-apiserver, kube-controller-manager, kube-proxy, kubelet, etcd, Docker runtime |
| 2 | 192.168.217.11 | master2 | master2, k8s-master2 (master node) | kube-apiserver, kube-controller-manager, kube-proxy, kubelet, Docker runtime |
| 3 | 192.168.217.17 | slave1, node1 | slave1, k8s-slave1 (worker node) | kubelet, kube-proxy, etcd, Docker runtime |
| 4 | 192.168.217.18 | slave2, node2 | slave2, k8s-slave2 (worker node) | kubelet, kube-proxy, etcd, Docker runtime |
| 5 | 192.168.217.17, 192.168.217.88 (VIP) | Load Balancer (master) | slave1, k8s-slave1 | nginx, keepalived |
| 6 | 192.168.217.18 | Load Balancer (backup) | slave2, k8s-slave2 | nginx, keepalived |
Planning rationale:
Add one new server and install the three components a master node requires: kube-apiserver, kube-controller-manager and kube-proxy. etcd already has three members, which satisfies the odd-member rule for an etcd cluster, so etcd is not installed on the new server. nginx and keepalived provide the load balancing. The load balancer cannot run on a master node, because nginx would have to listen on port 6443 and that conflicts with kube-apiserver, so it is installed on the two worker nodes instead. The Docker runtime is required on every node, and kubelet is the node agent, so both master and worker nodes run it.
In real production the load balancer should of course be deployed on dedicated servers. Since servers are limited and this is an experiment, it lives on the two worker nodes here.
1.
Install the NTP time service on the new server (11), set up passwordless SSH with the other servers, and set its hostname. The hosts file on all four servers should read:
- [root@centos1 nginx-offline]# cat /etc/hosts
- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
- ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
- 192.168.217.16 master k8s-master
- 192.168.217.17 slave1 k8s-node1
- 192.168.217.18 slave2 k8s-node2
- 192.168.217.11 master2 k8s-master2
Sync the hosts file to all nodes with scp, for example with the loop below.
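A minimal sketch, assuming passwordless SSH to root on every node is already in place (the node list is taken from the table above):

- for h in 192.168.217.11 192.168.217.17 192.168.217.18; do scp /etc/hosts root@$h:/etc/hosts; done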
Install the Docker runtime on the new server. This can be kept simple: if Docker was originally installed from binaries, run the following on the master (server 16):
- scp /usr/bin/{docker,dockerd,docker-init,docker-proxy,ctr,runc,containerd,containerd-shim} 192.168.217.11:/usr/bin/
- scp /etc/docker/daemon.json master2:/etc/docker
- scp /usr/lib/systemd/system/docker.service 192.168.217.11:/usr/lib/systemd/system/
On server 11, start the Docker service and check that it is healthy:
systemctl enable docker && systemctl start docker && systemctl status docker
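Optionally, query the daemon directly as an extra sanity check; --format with a Go template is a standard docker info option:

- docker info --format '{{.ServerVersion}}'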
2.
Create the etcd certificate directory on master2:
mkdir -p /opt/etcd/ssl
On the master node (server 16), copy the existing files from the original master straight to the new server; they only need a few edits afterwards. The commands are:
- scp -r /opt/kubernetes root@192.168.217.11:/opt
- scp -r /opt/cni/ root@192.168.217.11:/opt
- scp -r /opt/etcd/ssl root@192.168.217.11:/opt/etcd
- scp /usr/lib/systemd/system/kube* root@192.168.217.11:/usr/lib/systemd/system
- scp /usr/bin/kubectl root@192.168.217.11:/usr/bin
3.
On master2 (server 11), delete the kubelet certificate and kubeconfig file:
They are deleted because the kubelet service regenerates these files at startup; with the stale copies from server 16 in place it will fail to start.
- rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
- rm -f /opt/kubernetes/ssl/kubelet*
4.
Still on master2 (server 11), edit the configuration files; there are three of them, so do not miss any (a quick verification is shown after the list):
- Change the apiserver, kubelet and kube-proxy configuration files to use the local IP and hostname:
- vim /opt/kubernetes/cfg/kube-apiserver.conf
- ...
- --bind-address=192.168.217.11 \
- --advertise-address=192.168.217.11 \
- ...
- vim /opt/kubernetes/cfg/kubelet.conf
- --hostname-override=k8s-master2
- vi /opt/kubernetes/cfg/kube-proxy-config.yml
- hostnameOverride: k8s-master2
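A quick way to confirm all three edits took effect (a hedged check, not part of the original procedure; the grep pattern simply matches the keys changed above):

- grep -E 'bind-address|advertise-address|hostname-override|hostnameOverride' /opt/kubernetes/cfg/kube-apiserver.conf /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml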
5.
Start the services on server 11:
- systemctl daemon-reload
- systemctl start kube-apiserver
- systemctl start kube-controller-manager
- systemctl start kube-scheduler
- systemctl start kubelet
- systemctl start kube-proxy
- systemctl enable kube-apiserver
- systemctl enable kube-controller-manager
- systemctl enable kube-scheduler
- systemctl enable kubelet
- systemctl enable kube-proxy
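The ten start/enable commands above can equally be collapsed into one loop; systemctl enable --now enables and starts a unit in one go:

- for s in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do systemctl enable --now $s; done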
6.
Check that everything is healthy:
- kubectl get cs
- NAME STATUS MESSAGE ERROR
- scheduler Healthy ok
- controller-manager Healthy ok
- etcd-1 Healthy {"health":"true"}
- etcd-2 Healthy {"health":"true"}
- etcd-0 Healthy {"health":"true"}
The new node should now be visible:
- [root@centos1 nginx-offline]# k get no
- NAME          STATUS   ROLES    AGE    VERSION
- k8s-master    Ready    <none>   9d     v1.18.3
- k8s-master2   Ready    <none>   172m   v1.18.3
- k8s-node1    Ready    <none>   8d     v1.18.3
- k8s-node2    Ready    <none>   8d     v1.18.3
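If the new node were missing instead, its kubelet bootstrap CSR might still be pending approval; it can be listed and approved from the original master (the name below is a placeholder for whatever kubectl get csr reports):

- kubectl get csr
- kubectl certificate approve <csr-name>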
7.
On both servers 17 and 18, run:
- yum install nginx keepalived -y
- systemctl enable nginx keepalived && systemctl start nginx keepalived
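The configuration in the next step relies on the nginx stream module for L4 proxying of the apiservers, so it is worth confirming your nginx build has it. A hedged check; the fallback package name nginx-mod-stream is an assumption that applies to the CentOS/EPEL build:

- nginx -V 2>&1 | grep -o with-stream || yum install -y nginx-mod-stream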
8. Load balancer configuration files
Server 17, the load balancer primary:
nginx configuration file (identical on 17 and 18):
- [root@master ~]# cat /etc/nginx/nginx.conf
- user nginx;
- worker_processes auto;
- error_log /var/log/nginx/error.log;
- pid /run/nginx.pid;
-
- include /usr/share/nginx/modules/*.conf;
-
- events {
- worker_connections 1024;
- }
-
- stream {
-
- log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
-
- access_log /var/log/nginx/k8s-access.log main;
-
- upstream k8s-apiserver {
- server 192.168.217.16:6443;
- server 192.168.217.11:6443;
- }
-
- server {
- listen 6443;
- proxy_pass k8s-apiserver;
- }
- }
-
- http {
- log_format main '$remote_addr - $remote_user [$time_local] "$request" '
- '$status $body_bytes_sent "$http_referer" '
- '"$http_user_agent" "$http_x_forwarded_for"';
-
- access_log /var/log/nginx/access.log main;
-
- sendfile on;
- tcp_nopush on;
- tcp_nodelay on;
- keepalive_timeout 65;
- types_hash_max_size 2048;
-
- include /etc/nginx/mime.types;
- default_type application/octet-stream;
-
- server {
- listen 80 default_server;
- server_name _;
-
- location / {
- }
- }
- }
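After placing the file, validate the syntax before restarting; nginx -t parses the configuration and reports errors without disturbing the running process:

- nginx -t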
keepalived configuration file:
- [root@master ~]# cat /etc/keepalived/keepalived.conf
- global_defs {
- notification_email {
- acassen@firewall.loc
- failover@firewall.loc
- sysadmin@firewall.loc
- }
- notification_email_from Alexandre.Cassen@firewall.loc
- smtp_server 127.0.0.1
- smtp_connect_timeout 30
- router_id NGINX_MASTER
- }
-
- vrrp_script check_nginx {
- script "/etc/keepalived/check_nginx.sh"
- }
-
- vrrp_instance VI_1 {
- state MASTER
- interface ens33
- virtual_router_id 51 # VRRP router ID; must be unique per VRRP instance
- priority 100 # priority; the backup must use a lower value (80 in this guide)
- advert_int 1 # interval between VRRP advertisements, default 1s
- authentication {
- auth_type PASS
- auth_pass 1111
- }
- virtual_ipaddress {
- 192.168.217.88/24
- }
- track_script {
- check_nginx
- }
- }
keepalived configuration on server 18 (two files; one is the health-check script, which must be present on both nodes):
- [root@slave2 nginx-offline]# cat /etc/keepalived/check_nginx.sh
- #!/bin/bash
- # Count nginx processes, excluding the grep itself and this script's own PID;
- # exit non-zero when nginx is down so keepalived releases the VIP.
- count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
- if [ "$count" -eq 0 ];then
- exit 1
- else
- exit 0
- fi
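One detail the text does not spell out: the script must be executable on both 17 and 18, otherwise keepalived cannot run the vrrp_script check:

- chmod +x /etc/keepalived/check_nginx.sh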
- [root@slave2 nginx-offline]# cat /etc/keepalived/keepalived.conf
- global_defs {
- notification_email {
- acassen@firewall.loc
- failover@firewall.loc
- sysadmin@firewall.loc
- }
- notification_email_from Alexandre.Cassen@firewall.loc
- smtp_server 127.0.0.1
- smtp_connect_timeout 30
- router_id NGINX_MASTER
- }
-
- vrrp_script check_nginx {
- script "/etc/keepalived/check_nginx.sh"
- }
-
- vrrp_instance VI_1 {
- state BACKUP
- interface ens33
- virtual_router_id 51 # VRRP router ID; must match the master's
- priority 80 # priority; must be lower than the master's 100
- advert_int 1 # interval between VRRP advertisements, default 1s
- authentication {
- auth_type PASS
- auth_pass 1111
- }
- virtual_ipaddress {
- 192.168.217.88/24
- }
- track_script {
- check_nginx
- }
- }
9.
Restart the load balancing services on both nodes:
systemctl restart nginx keepalived
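At this point the VIP should be bound to ens33 on the primary (server 17); confirm with:

- ip addr show ens33 | grep 192.168.217.88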
10.
The certificate used by the kube-apiserver service does not include the VIP address, so TLS connections made through the VIP will be rejected; the certificate must be regenerated with the VIP added.
On the master node (server 16), add "192.168.217.88" to the hosts list of this file (adding the new master's own IP, 192.168.217.11, as well would allow reaching that apiserver directly, though it is not required for the VIP path):
- [root@master ~]# cat k8s/server-csr.json
- {
- "CN": "kubernetes",
- "hosts": [
- "10.0.0.1",
- "127.0.0.1",
- "192.168.217.16",
- "192.168.217.17",
- "192.168.217.18",
- "192.168.217.88",
- "kubernetes",
- "kubernetes.default",
- "kubernetes.default.svc",
- "kubernetes.default.svc.cluster",
- "kubernetes.default.svc.cluster.local"
- ],
- "key": {
- "algo": "rsa",
- "size": 2048
- },
- "names": [
- {
- "C": "CN",
- "L": "BeiJing",
- "ST": "BeiJing",
- "O": "k8s",
- "OU": "System"
- }
- ]
- }
Regenerate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
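To confirm the VIP made it into the new certificate, the SAN list can be inspected with openssl (a hedged check, not part of the original procedure):

- openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'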
Copy the certificate files (to the local ssl directory and to the new master):
- cp server*pem /opt/kubernetes/ssl/
- scp /opt/kubernetes/ssl/server*pem master2:/opt/kubernetes/ssl/
Restart the services so the new certificate takes effect:
systemctl restart kube-apiserver kubelet
11.
Point the components at the VIP, 192.168.217.88: on every node, replace 192.168.217.16:6443 with 192.168.217.88:6443 in all the configuration files, then restart the affected services on that node.
- sed -i 's#192.168.217.16:6443#192.168.217.88:6443#' /opt/kubernetes/cfg/*
- systemctl restart kubelet kube-proxy
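A quick way to verify the sed took effect on a node (a hedged check; grep -l lists only the files containing the VIP address, so any file still pointing at the old master simply would not appear):

- grep -l '192.168.217.88:6443' /opt/kubernetes/cfg/*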
12.
Testing
On server 17, the load balancer primary, the ens33 interface should carry two IPs:
- [root@slave1 ~]# ip a
- 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- valid_lft forever preferred_lft forever
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
- 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
- link/ether 00:0c:29:e9:9e:89 brd ff:ff:ff:ff:ff:ff
- inet 192.168.217.17/24 brd 192.168.217.255 scope global ens33
- valid_lft forever preferred_lft forever
- inet 192.168.217.88/24 scope global secondary ens33
- valid_lft forever preferred_lft forever
- inet6 fe80::20c:29ff:fee9:9e89/64 scope link
- valid_lft forever preferred_lft forever
Now stop nginx on 17; on server 18, the ip a command should show the two IPs on ens33, proving the VIP floated over successfully.
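The drill as commands, with the prompts indicating where each runs (once nginx is started again on 17, the higher-priority master should preempt and take the VIP back):

- [root@slave1 ~]# systemctl stop nginx
- [root@slave2 ~]# ip addr show ens33 | grep 192.168.217.88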
The Kubernetes version is reachable through the VIP:
- [root@centos1 nginx-offline]# curl -k https://192.168.217.88:6443/version
- {
- "major": "1",
- "minor": "18",
- "gitVersion": "v1.18.3",
- "gitCommit": "2e7996e3e2712684bc73f0dec0200d64eec7fe40",
- "gitTreeState": "clean",
- "buildDate": "2020-05-20T12:43:34Z",
- "goVersion": "go1.13.9",
- "compiler": "gc",
- "platform": "linux/amd64"