Table of Contents
3. Install kubeadm, kubelet, and kubectl on all nodes
4. Deploy haproxy and keepalived on all master hosts for high availability
This article continues from the previous one.
Network plugins include flannel, calico, cilium, and others.
This article focuses on flannel and calico.
The flannel approach
Each node must encapsulate packets destined for containers, then send the encapsulated packets through a tunnel to the node running the target Pod. The target node strips the encapsulation and delivers the inner packet to the target Pod. This extra work noticeably hurts data-plane performance.
The calico approach
Calico uses neither tunnels nor NAT for forwarding. It treats each host as a router on the Internet, uses BGP to synchronize routes, and uses iptables for security policy, forwarding traffic across hosts directly.
Both approaches share the same goal: Docker containers created on different nodes in the cluster each get a virtual IP address that is unique across the whole cluster.
Packet flow in flannel UDP mode:
1) The original packet leaves the Pod on the source host and is forwarded through the cni0 bridge to the flannel0 virtual interface
2) The flanneld process listens on flannel0 and wraps the original packet in a UDP message
3) flanneld then looks up the routing table kept in etcd to find the nodeIP of the node hosting the target Pod, wraps the UDP message in an outer IP header addressed to that nodeIP, and sends it out through the physical NIC
4) The datagram arrives on UDP port 8285 of the target node, where flanneld strips the encapsulation; the inner packet then travels from the flannel0 interface through the cni0 bridge to the target Pod
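On a node running flannel in UDP mode you can see both halves of this path: flannel0 is a TUN device handled by flanneld in user space, and flanneld listens on UDP port 8285 (assuming default settings):
- ip -d link show flannel0
- ss -ulpn | grep 8285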
Packet flow in flannel VXLAN mode:
1) The original frame leaves the Pod on the source host and is forwarded through the cni0 bridge to the flannel.1 virtual interface
2) flannel.1 receives the frame and adds a VXLAN header, and the kernel encapsulates the frame in a UDP message
3) Using the routing table flanneld maintains in etcd, the packet is sent out through the physical NIC to the node hosting the target Pod
4) The datagram arrives on UDP port 8472 at the target node's flannel.1 interface, which decapsulates it and forwards the frame through the cni0 bridge to the target Pod
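In VXLAN mode the interface is flannel.1 instead; ip -d link shows its VXLAN ID and UDP port, and the bridge FDB holds one entry per remote VTEP (again assuming defaults):
- ip -d link show flannel.1
- bridge fdb show dev flannel.1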
Calico CNI plugin: integrates with Kubernetes and is invoked by kubelet to wire up Pod networking.
Packet flow in calico IPIP mode:
1) The original packet goes from the Pod on the source host to the tunl0 interface, where the kernel's IPIP driver encapsulates it inside an IP packet of the node network
2) The route on tunl0 then sends it out through the physical NIC to the target node
3) When the IP packet reaches the target node, the kernel's IPIP driver strips the outer header to recover the original IP packet
4) Finally, the node's local routing rules deliver it through a veth pair device to the target Pod
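With calico in IPIP mode, each remote node's Pod block is reached through tunl0; a sketch of what to expect, assuming the default 192.168.0.0/16 pod network:
- ip route | grep tunl0
- #expect entries like: 192.168.x.0/26 via <nodeIP> dev tunl0 proto bird onlink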
Calico BGP mode maintains Pod-to-Pod communication purely through routing rules: Felix manages the routes and network interfaces, while BIRD distributes routing information to the other nodes.
Packet flow in BGP mode:
1) The original IP packet sent by the Pod on the source host reaches the node's network namespace through a veth pair device
2) The destination IP of the packet is matched against the node's routing rules to find the target node's IP, and the packet is sent out through the physical NIC
3) When the IP packet reaches the target node, local routing rules deliver it through a veth pair device to the target Pod
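In BGP mode the same routes point straight at the physical interface rather than tunl0, and calicoctl (if installed) lists the BGP sessions:
- ip route | grep bird
- calicoctl node status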
Comparing the two:
1) flannel's network modes are UDP, VXLAN, and HOST-GW
2) calico's network modes are IPIP, BGP, and a mixed mode
3) flannel's default network is 10.244.0.0/16, while calico's default is 192.168.0.0/16
4) flannel suits small and simple architectures; calico suits large and complex ones
Deployment environment:

| Host | IP address | Components |
| --- | --- | --- |
| master01 | 192.168.3.100 | docker, kubeadm, kubelet, kubectl, haproxy, keepalived |
| master02 | 192.168.3.101 | docker, kubeadm, kubelet, kubectl, haproxy, keepalived |
| master03 | 192.168.3.102 | docker, kubeadm, kubelet, kubectl, haproxy, keepalived |
| node01 | 192.168.3.103 | docker, kubeadm, kubelet, kubectl |
| node02 | 192.168.3.104 | docker, kubeadm, kubelet, kubectl |
- // On all nodes: disable the firewall, SELinux, and swap
- systemctl stop firewalld
- systemctl disable firewalld
- setenforce 0
- sed -i 's/enforcing/disabled/' /etc/selinux/config
- iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
- swapoff -a
- sed -ri 's/.*swap.*/#&/' /etc/fstab
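A quick sanity check that swap and SELinux are really off:
- free -h #the Swap line should show all zeros
- getenforce #Permissive now; Disabled after the next reboot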
- // Set the hostname on each node
- hostnamectl set-hostname master01
- hostnamectl set-hostname master02
- hostnamectl set-hostname master03
- hostnamectl set-hostname node01
- hostnamectl set-hostname node02
- // Edit the hosts file on all nodes
- vim /etc/hosts
- 192.168.3.100 master01
- 192.168.3.101 master02
- 192.168.3.102 master03
- 192.168.3.103 node01
- 192.168.3.104 node02
- // Synchronize time on all nodes
- yum -y install ntpdate
- ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
- echo 'Asia/Shanghai' >/etc/timezone
- ntpdate time2.aliyun.com
- // Add a cron job for periodic time synchronization
- systemctl enable --now crond
- crontab -e
- */30 * * * * /usr/sbin/ntpdate time2.aliyun.com
- // Raise Linux resource limits on all nodes
- vim /etc/security/limits.conf
- * soft nofile 65536
- * hard nofile 131072
- * soft nproc 65535
- * hard nproc 655350
- * soft memlock unlimited
- * hard memlock unlimited
- // Tune kernel parameters
- cat > /etc/sysctl.d/k8s.conf <<EOF
- net.ipv4.ip_forward = 1
- net.bridge.bridge-nf-call-iptables = 1
- net.bridge.bridge-nf-call-ip6tables = 1
- fs.may_detach_mounts = 1
- vm.overcommit_memory=1
- vm.panic_on_oom=0
- fs.inotify.max_user_watches=89100
- fs.file-max=52706963
- fs.nr_open=52706963
- net.netfilter.nf_conntrack_max=2310720
-
- net.ipv4.tcp_keepalive_time = 600
- net.ipv4.tcp_keepalive_probes = 3
- net.ipv4.tcp_keepalive_intvl = 15
- net.ipv4.tcp_max_tw_buckets = 36000
- net.ipv4.tcp_tw_reuse = 1
- net.ipv4.tcp_max_orphans = 327680
- net.ipv4.tcp_orphan_retries = 3
- net.ipv4.tcp_syncookies = 1
- net.ipv4.tcp_max_syn_backlog = 16384
- net.ipv4.tcp_timestamps = 0
- net.core.somaxconn = 16384
- EOF
-
- # Apply the parameters
- sysctl --system
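If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter module is not loaded yet; load it, persist it, then re-apply (a common gotcha on fresh installs):
- modprobe br_netfilter
- echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
- sysctl --system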
- // Load the ip_vs modules
- for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
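Verify the modules loaded and persist them across reboots via systemd-modules-load (a minimal sketch; the file name ipvs.conf is arbitrary, and on CentOS 7's stock 3.10 kernel the conntrack module is named nf_conntrack_ipv4 instead of nf_conntrack):
- lsmod | grep ip_vs
- cat > /etc/modules-load.d/ipvs.conf <<EOF
- ip_vs
- ip_vs_rr
- ip_vs_wrr
- ip_vs_sh
- nf_conntrack
- EOF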
- // Install dependency packages
- yum install -y yum-utils device-mapper-persistent-data lvm2
- // Add the docker repo
- yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
- // Install docker
- yum install -y docker-ce docker-ce-cli containerd.io
- // Configure docker: registry mirror for faster pulls, systemd cgroup driver, and log rotation
- cat > /etc/docker/daemon.json <<EOF
- {
- "registry-mirrors": ["https://6na95ym4.mirror.aliyuncs.com"],
- "exec-opts": ["native.cgroupdriver=systemd"],
- "log-driver": "json-file",
- "log-opts": {
- "max-size": "500m", "max-file": "3"
- }
- }
- EOF
- // Reload systemd unit files
- systemctl daemon-reload
- // Enable docker at boot and start it now
- systemctl enable --now docker.service
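Confirm that docker is up and using the systemd cgroup driver, which must match the kubelet setting used below:
- docker info | grep -i 'cgroup driver'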
- // Define the kubernetes repo
- cat > /etc/yum.repos.d/kubernetes.repo << EOF
- [kubernetes]
- name=Kubernetes
- baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
- enabled=1
- gpgcheck=0
- repo_gpgcheck=0
- gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
- EOF
- // Install the packages
- yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15
- // Configure kubelet to use Alibaba Cloud's pause image
- cat > /etc/sysconfig/kubelet <<EOF
- KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
- EOF
-
- // Enable kubelet at boot
- systemctl enable --now kubelet
- // At this point you will find that kubelet fails to start
- systemctl status kubelet
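This failure is expected: kubelet exits and restarts in a loop because its config file (/var/lib/kubelet/config.yaml) does not exist until kubeadm init or kubeadm join generates it. The log makes this clear:
- journalctl -u kubelet --no-pager | tail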
- // Install haproxy and keepalived on all master nodes
- yum -y install haproxy keepalived
-
- cat > /etc/haproxy/haproxy.cfg << EOF
- global
- log 127.0.0.1 local0 info
- log 127.0.0.1 local1 warning
- chroot /var/lib/haproxy
- pidfile /var/run/haproxy.pid
- maxconn 4000
- user haproxy
- group haproxy
- daemon
- stats socket /var/lib/haproxy/stats
-
- defaults
- mode tcp
- log global
- option tcplog
- option dontlognull
- option redispatch
- retries 3
- timeout queue 1m
- timeout connect 10s
- timeout client 1m
- timeout server 1m
- timeout check 10s
- maxconn 3000
-
- frontend monitor-in
- bind *:33305
- mode http
- option httplog
- monitor-uri /monitor
-
- frontend k8s-master
- bind *:6444
- mode tcp
- option tcplog
- default_backend k8s-master
-
- backend k8s-master
- mode tcp
- option tcplog
- option tcp-check
- balance roundrobin
- server k8s-master1 192.168.3.100:6443 check inter 10000 fall 2 rise 2 weight 1
- server k8s-master2 192.168.3.101:6443 check inter 10000 fall 2 rise 2 weight 1
- server k8s-master3 192.168.3.102:6443 check inter 10000 fall 2 rise 2 weight 1
- EOF
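haproxy can validate the file before you start the service:
- haproxy -c -f /etc/haproxy/haproxy.cfg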
- // Configure keepalived on all master nodes (installed above)
-
- cd /etc/keepalived/
-
- vim keepalived.conf
- ! Configuration File for keepalived
- global_defs {
- router_id LVS_HA1 #router identifier; set a different value on each node
- }
-
- vrrp_script chk_haproxy {
- script "/etc/keepalived/check_haproxy.sh"
- interval 2
- weight 2
- }
-
- vrrp_instance VI_1 {
- state MASTER #MASTER on master01; BACKUP on master02 and master03
- interface ens33
- virtual_router_id 51
- priority 100 #initial priority of this node; set a lower value on the backups
- advert_int 1
- virtual_ipaddress {
- 192.168.3.254 #the VIP
- }
- track_script {
- chk_haproxy
- }
- }
- // Write the haproxy health-check script
- vim check_haproxy.sh
- #!/bin/bash
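- # killall -0 sends no signal; it only checks that a haproxy process exists (killall comes from the psmisc package)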
- if ! killall -0 haproxy; then
- systemctl stop keepalived
- fi
-
- // Make the script executable
- chmod +x check_haproxy.sh
-
- // Enable haproxy and keepalived at boot and start them now
- systemctl enable --now haproxy
- systemctl enable --now keepalived
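Check that the VIP came up on the MASTER node and that the haproxy monitor endpoint answers (the interface name and addresses are the ones configured above):
- ip addr show ens33 #192.168.3.254 should be listed on master01
- curl http://192.168.3.254:33305/monitor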
- // Generate the cluster initialization config file on master01
- kubeadm config print init-defaults > /opt/kubeadm-config.yaml
-
- cd /opt/
- vim kubeadm-config.yaml
- ......
- 11 localAPIEndpoint:
- 12 advertiseAddress: 192.168.3.100 #the IP address of this master node
- 13 bindPort: 6443
-
- 21 apiServer:
- 22 certSANs: #add a certSANs list under apiServer with every master IP and the cluster VIP
- 23 - 192.168.3.254
- 24 - 192.168.3.100
- 25 - 192.168.3.101
- 26 - 192.168.3.102
-
- 30 clusterName: kubernetes
- 31 controlPlaneEndpoint: "192.168.3.254:6444" #the cluster VIP plus the haproxy port
- 32 controllerManager: {}
-
- 38 imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers #registry to pull control-plane images from
- 39 kind: ClusterConfiguration
- 40 kubernetesVersion: v1.20.15 #the kubernetes version
- 41 networking:
- 42 dnsDomain: cluster.local
- 43 podSubnet: "10.244.0.0/16" #pod network; 10.244.0.0/16 matches flannel's default
- 44 serviceSubnet: 10.96.0.0/16 #service network
- 45 scheduler: {}
- #append the following at the end of the file
- ---
- apiVersion: kubeproxy.config.k8s.io/v1alpha1
- kind: KubeProxyConfiguration
- mode: ipvs #switch kube-proxy from the default scheduling mode to ipvs
- #migrate the init config to the current API version (writes new.yaml)
- kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
- #copy the yaml to the other hosts so they can pull images from the same config
- for i in master02 master03 node01 node02; do scp /opt/new.yaml $i:/opt/; done
- // Pull the images (run on every node)
- kubeadm config images pull --config /opt/new.yaml
- // Initialize the cluster on master01
- kubeadm init --config new.yaml --upload-certs | tee kubeadm-init.log
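The tail of kubeadm-init.log contains two ready-made join commands: one with --control-plane --certificate-key for the remaining masters, and one without it for the worker nodes:
- tail -n 20 kubeadm-init.log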
- // Post-init setup on master01
- # Configure kubectl
- mkdir -p $HOME/.kube
- cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- chown $(id -u):$(id -g) $HOME/.kube/config
-
- // Then restart the kubelet service
- systemctl restart kubelet
- // Join all remaining nodes to the cluster
- # master nodes join with:
- kubeadm join 192.168.3.254:6444 --token abcdef.0123456789abcdef \
- --discovery-token-ca-cert-hash sha256:e1434974e3b947739e650c13b94f9a2e864f6c444b9a6e891efb4d8e1c4a05b7 \
- --control-plane --certificate-key fff2215a35a1b54f9b39882a36644b19300b7053429c43a1a713e4ed791076c4
-
- // Each master that joins will also print these follow-up steps:
- mkdir -p $HOME/.kube
- sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- sudo chown $(id -u):$(id -g) $HOME/.kube/config
-
- // node (worker) nodes join with:
- kubeadm join 192.168.3.254:6444 --token abcdef.0123456789abcdef \
- --discovery-token-ca-cert-hash sha256:e1434974e3b947739e650c13b94f9a2e864f6c444b9a6e891efb4d8e1c4a05b7
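Note that the bootstrap token and the certificate key expire (24 hours and 2 hours by default). To join a node later, regenerate them on master01:
- kubeadm token create --print-join-command
- kubeadm init phase upload-certs --upload-certs #prints a fresh certificate key for master joins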
- // Check cluster status on master01
- kubectl get nodes
Upload the flannel image archive flannel.tar and the CNI plugins archive cni-plugins-linux-amd64-v0.8.6.tgz to /opt on every node, and upload the kube-flannel.yml manifest to the master nodes
- cd /opt
- docker load < flannel.tar
-
- mv /opt/cni /opt/cni_bak
- mkdir -p /opt/cni/bin
- tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
-
- kubectl apply -f kube-flannel.yml
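Watch the flannel DaemonSet come up; with the kube-flannel.yml of this era the Pods run in the kube-system namespace (newer manifests use a dedicated kube-flannel namespace):
- kubectl get pods -A -o wide | grep flannel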
Check the cluster again
- // Check cluster status on master01
- kubectl get nodes
Finally, run a verification
kubectl get pod -n kube-system