Lab environment
| Hostname | IP | Role |
| --- | --- | --- |
| docker | 192.168.67.10 | harbor |
| k8s1 | 192.168.67.11 | control-plane |
| k8s2 | 192.168.67.12 | control-plane |
| k8s3 | 192.168.67.13 | control-plane |
| k8s4 | 192.168.67.14 | haproxy, pacemaker |
| k8s5 | 192.168.67.15 | haproxy, pacemaker |
| k8s6 | 192.168.67.16 | worker node |
Configure the software repository
- vim yyl.repo
- # HighAvailability add-on repository
- [HighAvailability]
- name=rhel7.6 HighAvailability
- baseurl=file:///media/addons/HighAvailability
- gpgcheck=0

Install the packages
yum install -y haproxy net-tools

Edit the configuration file (/etc/haproxy/haproxy.cfg)
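The notes do not show the HAProxy configuration itself; a minimal sketch for load-balancing the three kube-apiservers, assuming the node IPs from the table above and TCP passthrough on the port kubeadm uses (6443):

```
# Sketch only -- frontend listens on 6443 (bound on *, so it works wherever the
# Pacemaker VIP lands) and balances across the three control-plane nodes.
frontend apiserver
    bind *:6443
    mode tcp
    default_backend apiserver

backend apiserver
    mode tcp
    balance roundrobin
    server k8s1 192.168.67.11:6443 check
    server k8s2 192.168.67.12:6443 check
    server k8s3 192.168.67.13:6443 check
```

`mode tcp` matters here: the apiserver terminates its own TLS, so HAProxy must pass the connection through rather than speak HTTP.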



Test:

After the test succeeds, stop the service and do not enable it at boot; Pacemaker will manage haproxy later.

scp yyl.repo k8s5:/etc/yum.repos.d/
- yum install -y pacemaker pcs psmisc policycoreutils-python
- required on both nodes

- systemctl enable --now pcsd.service
- ssh k8s5 systemctl enable --now pcsd.service
- echo westos | passwd --stdin hacluster
- ssh k8s5 'echo westos | passwd --stdin hacluster'
- pcs cluster auth k8s4 k8s5

Create the cluster
pcs cluster setup --name mycluster k8s5 k8s4

Start the cluster
pcs cluster start --all

Enable the cluster at boot
pcs cluster enable --all

Disable STONITH
pcs property set stonith-enabled=false

Add the cluster resources
- pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.67.200 op monitor interval=30s
- pcs resource create haproxy systemd:haproxy op monitor interval=60s
- pcs resource group add hagroup vip haproxy
Test: put one node on standby and verify the VIP and haproxy fail over to the other
pcs node standby
Recover:
pcs node unstandby

Test:

k8s1, k8s2, and k8s3 must be reset before configuration
- kubeadm reset
- kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock
- cd /etc/cni/net.d (must be emptied before re-initializing)
- rm -fr *
- reboot (clears the leftover iptables and ipvs rules automatically)



Load the kernel modules
- modprobe overlay
- modprobe br_netfilter
kubeadm config print init-defaults > kubeadm-init.yaml
Edit the configuration
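The notes do not say which fields to change; a hedged sketch of the usual edits to the generated kubeadm-init.yaml, assuming the Pacemaker VIP 192.168.67.200 and the Kubernetes 1.24.17 packages installed later in these notes:

```yaml
# Fields typically changed in kubeadm-init.yaml (values assumed from this lab):
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.67.11            # this node's own IP, not the VIP
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: k8s1
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.24.17
controlPlaneEndpoint: "192.168.67.200:6443"  # the HAProxy VIP, shared by all control planes
networking:
  podSubnet: 10.244.0.0/16                   # assumed pod CIDR; must match calico.yaml
```

Setting `controlPlaneEndpoint` to the VIP is what makes the `--upload-certs` join of the other control-plane nodes below work through the load balancer.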
Initialize the cluster
kubeadm init --config kubeadm-init.yaml --upload-certs

Deploy the network component (Calico)
kubectl apply -f calico.yaml


Join the remaining control-plane nodes:
kubeadm join 192.168.67.200:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:51184d632ecb2f9e6c7f82b064e07c01974924d359eb98035aae7ce98e56d60d --control-plane --certificate-key cb28e3d92a419945a34a6a2d1db49c80fbf5d8275c28e40f8c7e0450a9ad8fb5



A newly added node needs to be initialized first
Disable swap
- swapoff -a
- vim /etc/fstab (comment out the swap entry so it stays disabled after reboot)
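What the fstab edit looks like, as a sketch (the device path is assumed and will differ per machine):

```
# /etc/fstab -- swap entry commented out so swap stays off after reboot
# /dev/mapper/rhel-swap   swap   swap   defaults   0 0
```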

Install containerd, kubelet, kubeadm, and kubectl; copy the repo files from another node
- scp k8s.repo docker.repo k8s6:/etc/yum.repos.d/
- yum install -y containerd.io kubeadm-1.24.17-0 kubelet-1.24.17-0 kubectl-1.24.17-0

Enable the services at boot
- systemctl enable --now containerd
- systemctl enable --now kubelet
Copy containerd's configuration files (run from /etc/containerd on an already-configured node)
scp -r * k8s6:/etc/containerd/
Restart the service:
- systemctl restart containerd
- crictl config runtime-endpoint unix:///run/containerd/containerd.sock
- crictl pull myapp:v1
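The `crictl config` command above persists the endpoint; the file it writes (/etc/crictl.yaml) should end up containing roughly:

```
runtime-endpoint: unix:///run/containerd/containerd.sock
```

Without this, crictl falls back to probing deprecated default sockets and prints warnings on every call.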

Configure the kernel modules and sysctl parameters:
- cd /etc/sysctl.d/
- scp docker.conf k8s6:/etc/sysctl.d/
-
- modprobe overlay
- modprobe br_netfilter
- sysctl --system
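The contents of docker.conf are not shown in the notes; a sketch of the sysctl settings Kubernetes nodes typically need (filename taken from the scp command above, values assumed):

```
# /etc/sysctl.d/docker.conf -- typical Kubernetes node settings (assumed)
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
```

The first two only take effect once br_netfilter is loaded, which is why the modprobe step comes before `sysctl --system`.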

Add the worker node
kubeadm join 192.168.67.200:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:8845bd441093179e02b51a239075a64b5386085bb702c11397c21abebb132d25
Test:
