• Deploying a k8s cluster on containerd, with fixes for some errors during initialization


    Contents

    I. Base environment setup (on every node)

    1. hosts resolution

    2. Firewall and SELinux

    3. Install basic packages and configure time sync

    4. Disable the swap partition

    5. Adjust kernel parameters

    6. Configure ipvs

    7. Install k8s

    (1) Configure the repo for the packages

    (2) Configure the cgroup driver for kubelet

    II. Install containerd (on every node)

    1. Install basic packages

    2. Add the repository

    3. Edit the docker-ce.repo file

    4. Install containerd and generate its default config

    5. Change the cgroup driver in containerd

    6. Switch the image registry to Aliyun

    7. Configure crictl and pull an image to verify

    III. Initialize the master node (master only)

    1. Generate and edit the config file

    2. Check that the image addresses in /etc/containerd/config.toml point to Aliyun

    3. List and pull the required images

    4. Initialize

    (1) Initialize with the generated kubeadm.yml

    (2) Note one error

    (3) Steps to run after initialization

    IV. Join the nodes to the master

    1. Join with the command printed after master initialization

    2. Join node2-191 and node4-193

    3. Check on the master

    4. Watch for errors

    (1) Files under /etc/kubernetes already exist, usually because the node joined before; delete the directory contents or run kubeadm reset

    (2) For port-in-use errors, try killing the occupying process

    V. Install the network plugin (on the master; optional on nodes)

    1. Fetch and edit the manifest

    2. Apply the manifest and verify

    VI. Configure kubectl command completion


     

    192.168.2.190  master
    192.168.2.191  node2-191.com
    192.168.2.193  node4-193.com

    I. Base environment setup (on every node)

    1. hosts resolution

    [root@master ~]# tail -3 /etc/hosts
    192.168.2.190 master
    192.168.2.191 node2-191.com
    192.168.2.193 node4-193.com

    2. Firewall and SELinux

    [root@master ~]# systemctl status firewalld.service;getenforce
    ● firewalld.service - firewalld - dynamic firewall daemon
      Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
      Active: inactive (dead)
        Docs: man:firewalld(1)
    Disabled
    # stop temporarily
    systemctl stop firewalld
    setenforce 0
    # disable permanently
    systemctl disable firewalld
    sed -i '/^SELINUX=/ c SELINUX=disabled' /etc/selinux/config

    3. Install basic packages and configure time sync

    [root@master ~]# yum install -y wget tree bash-completion lrzsz psmisc net-tools vim chrony
    [root@master ~]# vim /etc/chrony.conf
    :3,6 s/^/#     # vim command: comment out the stock server lines
    server ntp1.aliyun.com iburst
    [root@node1-190 ~]# systemctl restart chronyd
    [root@node1-190 ~]# chronyc sources
    210 Number of sources = 1
    MS Name/IP address         Stratum Poll Reach LastRx Last sample
    ===============================================================================
    ^* 120.25.115.20                 2   8   341   431   -357us[ -771us] +/-   20ms

    4. Disable the swap partition

    [root@master ~]# swapoff -a && sed -i 's/.*swap.*/#&/' /etc/fstab && free -m
                total       used       free     shared buff/cache   available
    Mem:         10376         943       8875         11         557       9178
    Swap:             0           0           0

    5. Adjust kernel parameters

    [root@node1-190 ~]# cat >> /etc/sysctl.d/k8s.conf << EOF
    vm.swappiness=0
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    [root@node1-190 ~]# modprobe br_netfilter && modprobe overlay && sysctl -p /etc/sysctl.d/k8s.conf
    vm.swappiness = 0
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
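    One gap worth closing here (an addition, not in the original steps): modprobe does not survive a reboot, so the bridge sysctls above can fail to apply at boot. systemd reads /etc/modules-load.d/ at startup, so a small file there makes the modules persistent:

    ```shell
    # Load overlay and br_netfilter at every boot (read by systemd-modules-load);
    # without this, /etc/sysctl.d/k8s.conf may fail to apply after a reboot.
    mkdir -p /etc/modules-load.d
    cat > /etc/modules-load.d/k8s.conf << EOF
    overlay
    br_netfilter
    EOF
    ```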

    6. Configure ipvs

    [root@node1-190 ~]# yum install ipset ipvsadm -y
    [root@node1-190 ~]# cat <<EOF > /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack
    EOF
    # make the script executable, run it, and verify the modules loaded
    # (output below is from a CentOS 7 3.10 kernel; on 4.19+ kernels nf_conntrack_ipv4 is merged into nf_conntrack)
    [root@node1-190 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules && /bin/bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    nf_conntrack_ipv4     15053 2
    nf_defrag_ipv4         12729 1 nf_conntrack_ipv4
    ip_vs_sh               12688 0
    ip_vs_wrr             12697 0
    ip_vs_rr               12600 0
    ip_vs                 145458 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack         139264 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
    libcrc32c             12644 4 xfs,ip_vs,nf_nat,nf_conntrack

    7. Install k8s

    (1) Configure the repo for the packages

    [root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
    http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    [root@master ~]# yum install -y kubeadm kubelet kubectl
    [root@master ~]# kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2", GitCommit:"89a4ea3e1e4ddd7f7572286090359983e0387b2f", GitTreeState:"clean", BuildDate:"2023-09-13T09:34:32Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}

    (2) Configure the cgroup driver for kubelet

    [root@master ~]# cat <<EOF > /etc/sysconfig/kubelet
    KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
    KUBE_PROXY_MODE="ipvs"
    EOF
    [root@master ~]# systemctl start kubelet
    [root@master ~]# systemctl enable kubelet
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

    kubelet will keep restarting until kubeadm init (or join) runs; that is expected at this stage.

    II. Install containerd (on every node)

    1. Install basic packages

    [root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

    2. Add the repository

    [root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

    3. Edit the docker-ce.repo file

    [root@master ~]# sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo

    4. Install containerd and generate its default config

    [root@master ~]# yum install -y containerd
    [root@master ~]# containerd config default | tee /etc/containerd/config.toml

    5. Change the cgroup driver in containerd

    [root@master ~]# sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml

    6. Switch the image registry to Aliyun

    [root@master ~]# sed -i "s#registry.k8s.io#registry.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
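    After these two sed edits, the relevant parts of /etc/containerd/config.toml should look roughly like the fragment below (section paths from containerd 1.6; the exact pause tag depends on your containerd version, so treat this as a sketch, not the full file):

    ```toml
    [plugins."io.containerd.grpc.v1.cri"]
      # sandbox (pause) image now pulled from the Aliyun mirror; tag varies with containerd version
      sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      # must match kubelet's cgroup driver (systemd)
      SystemdCgroup = true
    ```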

    7. Configure crictl and pull an image to verify

    [root@master ~]# crictl --version
    crictl version v1.26.0
    [root@master ~]# cat <<EOF | tee /etc/crictl.yaml
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    timeout: 10
    debug: false
    EOF
    [root@master ~]# systemctl daemon-reload && systemctl start containerd && systemctl enable containerd
    Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
    [root@master ~]# crictl pull nginx
    Image is up to date for sha256:61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
    [root@master ~]# crictl images
    IMAGE                     TAG                 IMAGE ID            SIZE
    docker.io/library/nginx   latest              61395b4c586da       70.5MB

    III. Initialize the master node (master only)

    1. Generate and edit the config file

    [root@master ~]# kubeadm config print init-defaults > kubeadm.yml
    [root@master ~]# ll
    total 8
    -rw-r--r-- 1 root root   0 Jul 23 09:59 abc
    -rw-------. 1 root root 1386 Jul 23 09:02 anaconda-ks.cfg
    -rw-r--r-- 1 root root 807 Sep 27 16:18 kubeadm.yml
    [root@master ~]# vim kubeadm.yml

    Set advertiseAddress to your master's IP.

    Keep the default containerd criSocket.

    Set name to your master's hostname.

    Set imageRepository to the Aliyun mirror: registry.aliyuncs.com/google_containers.

    Set kubernetesVersion to the version you actually installed.

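    The edits above amount to something like the following kubeadm.yml fragments (values for this lab's master; the generated file contains two YAML documents, and only the changed fields are shown here):

    ```yaml
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.2.190            # this lab's master IP
    nodeRegistration:
      criSocket: unix:///var/run/containerd/containerd.sock   # containerd default
      name: master                               # master's hostname
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    imageRepository: registry.aliyuncs.com/google_containers  # Aliyun mirror
    kubernetesVersion: 1.28.2                    # match your installed version
    ```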

    2. Check that the image addresses in /etc/containerd/config.toml point to Aliyun

    [root@master ~]# vim /etc/containerd/config.toml
    [root@master ~]# systemctl restart containerd


    3. List and pull the required images

    [root@master ~]# kubeadm config images list --config kubeadm.yml
    [root@master ~]# kubeadm config images pull --config kubeadm.yml
    [config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
    [config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
    [config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
    [config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
    [config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
    [config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
    [config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1

    4. Initialize

    (1) Initialize with the generated kubeadm.yml

    [root@master ~]# kubeadm init --config=kubeadm.yml --upload-certs --v=6
    ......
    Your Kubernetes control-plane has initialized successfully!
    To start using your cluster, you need to run the following as a regular user:
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    Alternatively, if you are the root user, you can run:
      export KUBECONFIG=/etc/kubernetes/admin.conf
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    Then you can join any number of worker nodes by running the following on each as root:
    kubeadm join 192.168.2.190:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:0dbb20609e31e4fe7d8ec76f07e6efd1f56965c5f8aa5d5ae5f1d6e9e958ffbe

    (2) Note one error:

    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

    Fix:

    # edit this file, add the line, load the module, and reload the settings;
    # you can also write these settings into /etc/sysctl.conf during the base setup at the start
    [root@master ~]# vim /etc/sysctl.conf
    net.bridge.bridge-nf-call-iptables = 1
    [root@master net]# modprobe br_netfilter   # load the module
    [root@master net]# sysctl -p
    net.bridge.bridge-nf-call-iptables = 1

    (3) Steps to run after initialization

    # if you use a regular user on the master
    [root@master ~]# mkdir -p $HOME/.kube
    [root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
    # if you use root on the master
    [root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf

    IV. Join the nodes to the master

    1. Join with the command printed after master initialization

    # you can regenerate this command later with: kubeadm token create --print-join-command
    kubeadm join 192.168.2.190:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:0dbb20609e31e4fe7d8ec76f07e6efd1f56965c5f8aa5d5ae5f1d6e9e958ffbe
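    If only the hash is lost, it can also be recomputed from the cluster CA certificate with the openssl pipeline from the kubeadm docs. The sketch below generates a throwaway CA so the pipeline can be demonstrated anywhere; on a real master the input is /etc/kubernetes/pki/ca.crt:

    ```shell
    # Stand-in for /etc/kubernetes/pki/ca.crt: a throwaway self-signed CA cert
    openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
      -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

    # Derive --discovery-token-ca-cert-hash: public key -> DER -> sha256 hex digest
    hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //')
    echo "sha256:${hash}"
    ```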

    2. Join node2-191 and node4-193

    [root@node2-191 ~]# kubeadm join 192.168.2.190:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:3e56e3aa62b5835b6ed0d16832a4a13d1154ec09fe9c4f82bff9eaaaee2755c2
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

    [root@node4-193 ~]# kubeadm join 192.168.2.190:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:3e56e3aa62b5835b6ed0d16832a4a13d1154ec09fe9c4f82bff9eaaaee2755c2
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

    3. Check on the master

    [root@master ~]# kubectl get nodes
    NAME            STATUS   ROLES           AGE     VERSION
    master          Ready    control-plane   7m32s   v1.28.2
    node2-191.com   Ready    <none>          54s     v1.28.2
    node4-193.com   Ready    <none>          11s     v1.28.2

    4. Watch for errors

    (screenshot omitted: the kubeadm join preflight checks fail)

    Fix:

    (1) Files under /etc/kubernetes already exist, usually because the node joined before; either delete the directory contents or reset with kubeadm reset

    [root@node4-193 ~]# rm -rf /etc/kubernetes/*
    # or
    [root@node4-193 ~]# kubeadm reset

    (2) For port-in-use errors, find the occupying process (for example with ss -lntp) and try killing it

    V. Install the network plugin (on the master; optional on nodes)

    1. Fetch and edit the manifest


    [root@master ~]# wget --no-check-certificate https://projectcalico.docs.tigera.io/archive/v3.25/manifests/calico.yaml
    [root@master ~]# vim calico.yaml

    (1) Find the CLUSTER_TYPE line and add the two lines below it; replace ens33 with your own NIC name

    - name: IP_AUTODETECTION_METHOD
      value: "interface=ens33"


    (2) Uncomment this block and set the pod CIDR to match your cluster

    - name: CALICO_IPV4POOL_CIDR
      value: "10.244.0.0/16"


    2. Apply the manifest and verify

    [root@master ~]# kubectl apply -f calico.yaml
    [root@master ~]# kubectl get pods -A
    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
    kube-system   calico-kube-controllers-658d97c59c-k27lr   1/1     Running   0          18s
    kube-system   calico-node-bzq6k                          1/1     Running   0          18s
    kube-system   calico-node-dcb9c                          1/1     Running   0          18s
    kube-system   calico-node-v97ll                          1/1     Running   0          18s
    kube-system   coredns-66f779496c-nfxfr                   1/1     Running   0          4m9s
    kube-system   coredns-66f779496c-q8s6j                   1/1     Running   0          4m9s
    kube-system   etcd-k8s-master                            1/1     Running   12         4m16s
    kube-system   kube-apiserver-k8s-master                  1/1     Running   12         4m16s
    kube-system   kube-controller-manager-k8s-master         1/1     Running   13         4m16s
    kube-system   kube-proxy-7gsls                           1/1     Running   0          4m10s
    kube-system   kube-proxy-szdqz                           1/1     Running   0          2m54s
    kube-system   kube-proxy-wgrpb                           1/1     Running   0          2m58s
    kube-system   kube-scheduler-k8s-master                  1/1     Running   13         4m16s

    VI. Configure kubectl command completion

    [root@k8s-master ~]# yum install -y bash-completion
    [root@k8s-master ~]# source /usr/share/bash-completion/bash_completion
    [root@k8s-master ~]# source <(kubectl completion bash)
    [root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
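    Optionally (this is from the kubectl completion docs, not in the original post), alias k to kubectl and reuse the same completion function:

    ```shell
    # Shorter alias that keeps tab completion working via kubectl's __start_kubectl
    echo 'alias k=kubectl' >> ~/.bashrc
    echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
    ```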

     

     

  • Original article: https://blog.csdn.net/weixin_64334766/article/details/133362918