• Cloud Native | Extending a binary-installed Kubernetes 1.18 cluster from a single master to multiple masters


    Preface

    A single-master cluster was deployed earlier, but for the sake of cluster stability it needs to be extended to multiple masters. The original single-master deployment guide: kubernetes二进制安装教程单master_zsk_john的博客-CSDN博客

    The plan is to extend that deployment in place, and there are quite a few details involved. The single-master and multi-master cluster layouts are planned as follows:

    Single-master cluster plan

    | No. | IP | Role | Hostname | Installed components |
    | --- | --- | --- | --- | --- |
    | 1 | 192.168.217.16 | master | master, k8s-master | kube-apiserver, kubelet, kube-controller-manager, kube-proxy, etcd, docker |
    | 2 | 192.168.217.17 | slave1 | slave1, k8s-slave1 | kubelet, kube-proxy, etcd, docker |
    | 3 | 192.168.217.18 | slave2 | slave2, k8s-slave2 | kubelet, kube-proxy, etcd, docker |

    Multi-master cluster plan

    | No. | IP | Role | Hostname | Installed components |
    | --- | --- | --- | --- | --- |
    | 1 | 192.168.217.16 | master1 (master node) | master, k8s-master | kube-apiserver, kube-controller-manager, kube-proxy, kubelet, etcd, docker |
    | 2 | 192.168.217.11 | master2 (master node) | master2, k8s-master2 | kube-apiserver, kube-controller-manager, kube-proxy, kubelet, docker |
    | 3 | 192.168.217.17 | node1 (worker node) | slave1, k8s-slave1 | kubelet, kube-proxy, etcd, docker |
    | 4 | 192.168.217.18 | node2 (worker node) | slave2, k8s-slave2 | kubelet, kube-proxy, etcd, docker |
    | 5 | 192.168.217.17, 192.168.217.88 (VIP) | Load Balancer (master) | slave1, k8s-slave1 | nginx, keepalived |
    | 6 | 192.168.217.18 | Load Balancer (backup) | slave2, k8s-slave2 | nginx, keepalived |

    Planning notes:

    One new server is added and gets the three components a master node must run: kube-apiserver, kube-controller-manager and kube-proxy. etcd already has three members, which satisfies the odd-member rule for an etcd cluster, so etcd is not installed on the new server. The load balancer uses nginx and keepalived; it cannot run on the master nodes, because nginx would conflict with kube-apiserver over port 6443, so it is installed on the two worker nodes instead. A docker environment is required on every node regardless of role, and kubelet is the node agent, so masters and workers alike run it.

    In real production the load balancer should of course be deployed on dedicated servers. Since machines are scarce here and this is an experiment, it is installed on the two worker nodes.

    Steps for deploying the additional master node

    Step 1.

    On the new server (11), install the ntp time service, configure passwordless SSH with the other servers, and set the hostname. The hosts file on all four servers looks like this:

    [root@centos1 nginx-offline]# cat /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.217.16 master k8s-master
    192.168.217.17 slave1 k8s-node1
    192.168.217.18 slave2 k8s-node2
    192.168.217.11 master2 k8s-master2

    Sync the hosts file to all nodes with scp.
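    This step can be sketched as a small dry-run script; the `HOSTS_OUT` variable and the `echo scp` dry-run are illustration-only assumptions (write to /etc/hosts and drop the `echo` to push for real):

```shell
#!/bin/sh
# Sketch: write the cluster hosts entries, then show the scp commands that
# would sync the file to the other nodes (remove "echo" to run them for real).
hosts_out="${HOSTS_OUT:-/tmp/hosts.cluster}"   # use /etc/hosts on a real node
cat > "$hosts_out" <<'EOF'
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.217.16 master k8s-master
192.168.217.17 slave1 k8s-node1
192.168.217.18 slave2 k8s-node2
192.168.217.11 master2 k8s-master2
EOF
for node in master k8s-node1 k8s-node2; do     # push from master2 to the rest
    echo scp "$hosts_out" root@"$node":/etc/hosts
done
```

    Passwordless SSH must already be in place for the real scp commands to work non-interactively.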

    Install the docker environment on the new server; this can be done the simple way. If docker was installed from binaries earlier, run the following on the master, i.e. server 16:

    scp /usr/bin/{docker,dockerd,docker-init,docker-proxy,ctr,runc,containerd,containerd-shim} 192.168.217.11:/usr/bin/
    scp /etc/docker/daemon.json master2:/etc/docker
    scp /usr/lib/systemd/system/docker.service 192.168.217.11:/usr/lib/systemd/system/

    On server 11, enable and start the docker service and check that its status is healthy:

    systemctl enable docker && systemctl start docker && systemctl status docker

    Step 2.

    On master2 (server 11), create the etcd certificate directory:

    mkdir -p /opt/etcd/ssl
    

    On the master node (server 16), copy the existing master files directly to the new server; only minor changes are needed afterwards:

    scp -r /opt/kubernetes root@192.168.217.11:/opt
    scp -r /opt/cni/ root@192.168.217.11:/opt
    scp -r /opt/etcd/ssl root@192.168.217.11:/opt/etcd
    scp /usr/lib/systemd/system/kube* root@192.168.217.11:/usr/lib/systemd/system
    scp /usr/bin/kubectl root@192.168.217.11:/usr/bin

    Step 3.

    On master2 (server 11), delete the kubelet certificate and kubeconfig files:

    They must be deleted because the kubelet regenerates these files when it starts; with stale copies in place it will fail to start.

    rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
    rm -f /opt/kubernetes/ssl/kubelet*

    Step 4.

    Still on master2 (server 11), edit the configuration files (there are three of them, don't miss any):

    # Change the apiserver, kubelet and kube-proxy configs to the local IP:
    vim /opt/kubernetes/cfg/kube-apiserver.conf
    ...
    --bind-address=192.168.217.11 \
    --advertise-address=192.168.217.11 \
    ...
    vim /opt/kubernetes/cfg/kubelet.conf
    --hostname-override=k8s-master2
    vi /opt/kubernetes/cfg/kube-proxy-config.yml
    hostnameOverride: k8s-master2
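    The same three edits can be scripted with sed instead of editing by hand. This is a sketch assuming the file layout above; the `localize_master_cfg` helper name is made up for illustration:

```shell
#!/bin/sh
# Sketch: apply the three per-node changes with sed. Assumes the config layout
# from this tutorial; adjust the paths if yours differs.
localize_master_cfg() {
    cfg="$1"; ip="$2"; name="$3"
    sed -i "s/--bind-address=[0-9.]*/--bind-address=$ip/" "$cfg/kube-apiserver.conf"
    sed -i "s/--advertise-address=[0-9.]*/--advertise-address=$ip/" "$cfg/kube-apiserver.conf"
    sed -i "s/--hostname-override=[^ ]*/--hostname-override=$name/" "$cfg/kubelet.conf"
    sed -i "s/hostnameOverride:.*/hostnameOverride: $name/" "$cfg/kube-proxy-config.yml"
}

# On master2 (skipped when the directory is not present):
if [ -d /opt/kubernetes/cfg ]; then
    localize_master_cfg /opt/kubernetes/cfg 192.168.217.11 k8s-master2
fi
```

    The bracket patterns deliberately stop at the first space so that trailing `\` line continuations in the unit files are preserved.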

    Step 5.

    Start the relevant services on server 11:

    systemctl daemon-reload
    systemctl start kube-apiserver
    systemctl start kube-controller-manager
    systemctl start kube-scheduler
    systemctl start kubelet
    systemctl start kube-proxy
    systemctl enable kube-apiserver
    systemctl enable kube-controller-manager
    systemctl enable kube-scheduler
    systemctl enable kubelet
    systemctl enable kube-proxy

    Step 6.

    Check that everything is healthy:

    kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-1               Healthy   {"health":"true"}
    etcd-2               Healthy   {"health":"true"}
    etcd-0               Healthy   {"health":"true"}

    The new node should now be visible (k below is an alias for kubectl):

    [root@centos1 nginx-offline]# k get no
    NAME          STATUS   ROLES    AGE    VERSION
    k8s-master    Ready    <none>   9d     v1.18.3
    k8s-master2   Ready    <none>   172m   v1.18.3
    k8s-node1    Ready    <none>   8d     v1.18.3
    k8s-node2    Ready    <none>   8d     v1.18.3

    Step 7.

    On both server 17 and server 18, run:

    yum install nginx keepalived -y
    systemctl enable nginx keepalived && systemctl start nginx keepalived

    Step 8. Load balancer configuration files

    On 17, the primary load balancer:

    nginx configuration file (identical on 17 and 18):

    [root@master ~]# cat /etc/nginx/nginx.conf
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;

    include /usr/share/nginx/modules/*.conf;

    events {
        worker_connections 1024;
    }

    stream {
        log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
        access_log /var/log/nginx/k8s-access.log main;

        upstream k8s-apiserver {
            server 192.168.217.16:6443;
            server 192.168.217.11:6443;
        }

        server {
            listen 6443;
            proxy_pass k8s-apiserver;
        }
    }

    http {
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log main;

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        server {
            listen 80 default_server;
            server_name _;

            location / {
            }
        }
    }

    keepalived configuration file:

    [root@master ~]# cat /etc/keepalived/keepalived.conf
    global_defs {
        notification_email {
            acassen@firewall.loc
            failover@firewall.loc
            sysadmin@firewall.loc
        }
        notification_email_from Alexandre.Cassen@firewall.loc
        smtp_server 127.0.0.1
        smtp_connect_timeout 30
        router_id NGINX_MASTER
    }

    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
    }

    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 51    # VRRP router ID, unique per instance
        priority 100            # priority; the backup server uses a lower value
        advert_int 1            # VRRP advertisement interval, default 1 second
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.217.88/24
        }
        track_script {
            check_nginx
        }
    }

    The keepalived configuration on server 18 (two files; one is the health-check script, which both nodes need and which must be executable):

    [root@slave2 nginx-offline]# cat /etc/keepalived/check_nginx.sh
    #!/bin/bash
    count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
    if [ "$count" -eq 0 ];then
        exit 1
    else
        exit 0
    fi
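    The counting pipeline at the heart of the script can be exercised with any process name before wiring it into keepalived. The `count_procs` wrapper below is illustration-only; on a real load-balancer node you would pass "nginx":

```shell
#!/bin/sh
# Sketch: the same process-count pipeline as check_nginx.sh, wrapped in a
# function so it can be tried without nginx installed.
count_procs() {
    ps -ef | grep "$1" | egrep -cv "grep|$$"
}

n=$(count_procs nginx) || true   # 0 unless nginx is actually running here
echo "nginx processes: $n"
```

    The `egrep -v "grep|$$"` filter excludes both the pipeline's own grep and the checking shell itself, so only real matching processes are counted.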

    [root@slave2 nginx-offline]# cat /etc/keepalived/keepalived.conf
    global_defs {
        notification_email {
            acassen@firewall.loc
            failover@firewall.loc
            sysadmin@firewall.loc
        }
        notification_email_from Alexandre.Cassen@firewall.loc
        smtp_server 127.0.0.1
        smtp_connect_timeout 30
        router_id NGINX_MASTER
    }

    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
    }

    vrrp_instance VI_1 {
        state BACKUP
        interface ens33
        virtual_router_id 51    # VRRP router ID, unique per instance
        priority 80             # priority; lower than the master's 100
        advert_int 1            # VRRP advertisement interval, default 1 second
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.217.88/24
        }
        track_script {
            check_nginx
        }
    }

    Step 9.

    Restart the load balancer services:

    systemctl restart nginx keepalived

    Step 10.

    The serving certificate used by kube-apiserver does not include the VIP, so requests sent to the apiserver through 192.168.217.88 will fail TLS verification; the certificate has to be regenerated with the VIP included.

    On the master node (server 16), add "192.168.217.88" to this file:

    [root@master ~]# cat k8s/server-csr.json
    {
        "CN": "kubernetes",
        "hosts": [
            "10.0.0.1",
            "127.0.0.1",
            "192.168.217.16",
            "192.168.217.17",
            "192.168.217.18",
            "192.168.217.88",
            "kubernetes",
            "kubernetes.default",
            "kubernetes.default.svc",
            "kubernetes.default.svc.cluster",
            "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }

    Regenerate the certificate:

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

    Copy the certificate files (into the local ssl directory and to the new master):

    cp server*pem /opt/kubernetes/ssl/
    scp /opt/kubernetes/ssl/server*pem master2:/opt/kubernetes/ssl/
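    To confirm the VIP really made it into the new certificate, its subjectAltName list can be inspected with openssl. A hedged sketch; the `check_san` helper is made up for illustration, and the guard simply skips the check where the file is absent:

```shell
#!/bin/sh
# Sketch: print a certificate's subjectAltName block; the VIP must appear there
# or TLS connections through 192.168.217.88 will keep failing.
check_san() {
    openssl x509 -in "$1" -noout -text | grep -A1 'Subject Alternative Name'
}

# On the masters (skipped when the file is not present):
if [ -f /opt/kubernetes/ssl/server.pem ]; then
    check_san /opt/kubernetes/ssl/server.pem | grep 192.168.217.88
fi
```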

    Restart the services so the new certificate takes effect:

    systemctl restart kube-apiserver kubelet

    Step 11.

    In all the configuration files, replace the old apiserver endpoint with the VIP 192.168.217.88, then restart the affected services:

    sed -i 's#192.168.217.16:6443#192.168.217.88:6443#' /opt/kubernetes/cfg/*
    systemctl restart kubelet kube-proxy
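    After the substitution, a quick audit can confirm that nothing under the config directory still points at the old single-master endpoint. A sketch; the `audit_endpoint` helper is made up for illustration:

```shell
#!/bin/sh
# Sketch: succeed only when no file under the given directory still references
# the old apiserver endpoint; print any offending files otherwise.
audit_endpoint() {
    ! grep -rl '192.168.217.16:6443' "$1"
}

# On a real node (skipped when the directory is not present):
if [ -d /opt/kubernetes/cfg ]; then
    audit_endpoint /opt/kubernetes/cfg
fi
```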

    Step 12.

    Testing

    On server 17, the primary load balancer, the ens33 interface now carries two IPs:

    [root@slave1 ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
        link/ether 00:0c:29:e9:9e:89 brd ff:ff:ff:ff:ff:ff
        inet 192.168.217.17/24 brd 192.168.217.255 scope global ens33
           valid_lft forever preferred_lft forever
        inet 192.168.217.88/24 scope global secondary ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fee9:9e89/64 scope link
           valid_lft forever preferred_lft forever

    Now stop nginx on 17. Running ip a on server 18 should then show the two IPs on ens33, which proves the VIP failed over successfully.

    The Kubernetes version is reachable through the VIP:

    [root@centos1 nginx-offline]# curl -k https://192.168.217.88:6443/version
    {
      "major": "1",
      "minor": "18",
      "gitVersion": "v1.18.3",
      "gitCommit": "2e7996e3e2712684bc73f0dec0200d64eec7fe40",
      "gitTreeState": "clean",
      "buildDate": "2020-05-20T12:43:34Z",
      "goVersion": "go1.13.9",
      "compiler": "gc",
      "platform": "linux/amd64"
    }

  • Original article: https://blog.csdn.net/alwaysbefine/article/details/126702199