Setting up a Kubernetes cluster quickly with kubeadm


    Series

    1. Setting up a Kubernetes cluster quickly with kubeadm



    Preface

    Until now I had always set up Kubernetes through Rancher, KubeSphere, or the Docker integration on macOS. I want to understand how things work under the hood, so today I am building an environment with kubeadm instead.

    On choosing an installation tool: the main candidates were RKE, kubeadm, and minikube.
    The reason for picking kubeadm is simple: it is the approach recommended by Google and the one with the best community support.


    1. Preparation

    Three machines, all running Ubuntu 20.04:

    IP               hostname
    47.106.96.249    cka001
    120.78.184.91    cka002
    47.106.15.38     cka003

    Log in to all three machines and run the following commands:

    # Add the cluster hosts entries
    echo '172.18.0.111    cka001
    172.18.0.109    cka002
    172.18.0.110    cka003
    ' >>  /etc/hosts
    # Disable the firewall
    ufw disable
    # Turn off swap
    swapoff -a
    # Set the time zone and locale
    ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    echo 'LANG="en_US.UTF-8"' | sudo tee -a /etc/profile; source /etc/profile
    
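    A note on re-running the block above: the `echo >> /etc/hosts` appends duplicate entries each time, and `swapoff -a` does not survive a reboot. A re-runnable sketch of the hosts step (HOSTS_FILE points at a scratch file here so it is safe to try; on the real nodes set HOSTS_FILE=/etc/hosts):

```shell
# Hypothetical idempotent variant of the hosts setup above.
# HOSTS_FILE is a scratch file for the sketch; use /etc/hosts for real.
HOSTS_FILE=${HOSTS_FILE:-./hosts.demo}
touch "$HOSTS_FILE"

add_host() {  # add_host <ip> <hostname>: append only if the name is absent
  grep -q " $2\$" "$HOSTS_FILE" || printf '%s    %s\n' "$1" "$2" >> "$HOSTS_FILE"
}

add_host 172.18.0.111 cka001
add_host 172.18.0.109 cka002
add_host 172.18.0.110 cka003
add_host 172.18.0.111 cka001   # re-running is a no-op

# To make the swap change survive reboots, also comment out swap lines:
#   sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab
```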

    Install containerd

    First, back up the existing apt sources file:

    sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
    

    Add a China-local (Aliyun) apt mirror:

    cat > /etc/apt/sources.list << EOF
    deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
    deb-src http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
    deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
    deb-src http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
    deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
    deb-src http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
    deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
    deb-src http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
    deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
    deb-src http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
    EOF
    

    Install and configure containerd:

    sudo apt-get update && sudo apt-get install -y containerd
    sudo mkdir -p /etc/containerd
    containerd config default | sudo tee /etc/containerd/config.toml
    vim /etc/containerd/config.toml
    # Change sandbox_image to the following:
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
    
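    Editing config.toml in vim works, but the change can also be scripted with sed. A sketch on a local stand-in file (on the nodes the path is /etc/containerd/config.toml, edited with sudo after generating the default config):

```shell
# Scripted alternative to the vim edit above.
# CFG is a scratch copy here; on the nodes use CFG=/etc/containerd/config.toml.
CFG=${CFG:-./config.toml.demo}
cat > "$CFG" <<'EOF'
    sandbox_image = "k8s.gcr.io/pause:3.6"
EOF

# Point sandbox_image at the Aliyun mirror of the pause image:
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"#' "$CFG"
grep sandbox_image "$CFG"
```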

    Restart the containerd service:

    sudo systemctl restart containerd
    sudo systemctl status containerd
    

    Install nerdctl

    nerdctl is a drop-in replacement for the Docker CLI that talks to containerd.

    wget https://github.com/containerd/nerdctl/releases/download/v0.21.0/nerdctl-0.21.0-linux-amd64.tar.gz
    tar -zxvf nerdctl-0.21.0-linux-amd64.tar.gz
    cp nerdctl /usr/bin/
    

    Verify it works:

    nerdctl --help
    # Equivalent of docker ps
    nerdctl -n k8s.io ps
    # Equivalent of docker images
    nerdctl -n k8s.io images
    

    2. Deploying Kubernetes

    Install kubeadm

    Run the following on all three machines.
    Update packages:

    apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl
    

    Add the GPG key:

    curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
    

    Add the Kubernetes apt source:

    cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
    deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
    EOF
    

    Update the package lists and install the dependencies:

    apt-get update
    apt-get install ebtables
    apt-get install libxtables12=1.6.1-2ubuntu2
    apt-get upgrade iptables
    

    Check which versions are available:

    apt policy kubeadm
    # Install the 1.23 release
    sudo apt-get -y install kubelet=1.23.8-00 kubeadm=1.23.8-00 kubectl=1.23.8-00 --allow-downgrades
    

    Install the control plane on cka001

    View kubeadm's default cluster-initialization parameters:

    kubeadm config print init-defaults
    

    Initialize the cluster using the Aliyun registry:

    kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.23.8
    
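    The same flags can be captured in a kubeadm config file, which is easier to keep in version control. A sketch equivalent to the command above (the file name kubeadm-config.yaml is my choice; the v1beta3 schema matches kubeadm 1.23):

```shell
# Write the init settings as a kubeadm config file (a sketch).
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.8
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
EOF
# Then, on cka001: kubeadm init --config kubeadm-config.yaml
```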

    After a short wait the deployment succeeds and prints a node join command like the one below. Copy and save it; you will need it when adding the worker nodes.

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 172.18.0.111:6443 --token pu78c7.j3glzcgxj5p1yqof \
    	--discovery-token-ca-cert-hash sha256:2c4fde4f3e343b4e80aba677823e165993b836a4161d0bb9c745f3342e4471c2
    

    Configure the kubeconfig file:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    Set up kubectl auto-completion; on Ubuntu 20.04:

    apt install -y bash-completion
    source /usr/share/bash-completion/bash_completion
    source <(kubectl completion bash)
    echo "source <(kubectl completion bash)" >> ~/.bashrc
    
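    The kubectl documentation also suggests a short `k` alias that reuses the same completion function. A sketch (BASHRC points at a scratch file here; on the machines use ~/.bashrc):

```shell
# Optional `k` alias with kubectl's bash completion (per the kubectl docs).
# BASHRC is a scratch file for the sketch; use ~/.bashrc for real.
BASHRC=${BASHRC:-./bashrc.demo}
echo 'alias k=kubectl' >> "$BASHRC"
echo 'complete -o default -F __start_kubectl k' >> "$BASHRC"
grep 'alias k=' "$BASHRC"
```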

    Print the join command again:

    $ kubeadm token create --print-join-command
    kubeadm join 172.18.0.111:6443 --token f7eq30.peizydpg1cb5vof7 --discovery-token-ca-cert-hash sha256:2c4fde4f3e343b4e80aba677823e165993b836a4161d0bb9c745f3342e4471c2
    

    Worker nodes can use this command later to join the cluster.
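    If both the saved output and `kubeadm token create` are unavailable, the `--discovery-token-ca-cert-hash` value can be recomputed from the cluster CA certificate. The sketch below generates a throwaway self-signed certificate so the pipeline can be tried anywhere; on cka001, point CA_CERT at /etc/kubernetes/pki/ca.crt and skip the generation step:

```shell
# Recompute the sha256 discovery hash from a CA certificate.
# CA_CERT is a throwaway demo cert here; on the control plane use
# CA_CERT=/etc/kubernetes/pki/ca.crt instead.
CA_CERT=${CA_CERT:-./demo-ca.crt}
openssl req -x509 -newkey rsa:2048 -nodes -keyout ./demo-ca.key \
  -out "$CA_CERT" -subj "/CN=demo" -days 1 2>/dev/null

HASH=$(openssl x509 -pubkey -in "$CA_CERT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```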

    Install the worker nodes on cka002/cka003

    On each worker node, run the kubeadm join command generated by kubeadm init. (Careful: never run the join command twice on the same machine; double-check which host you are on.) For example:

    kubeadm join 172.18.0.111:6443 --token f7eq30.peizydpg1cb5vof7 --discovery-token-ca-cert-hash sha256:2c4fde4f3e343b4e80aba677823e165993b836a4161d0bb9c745f3342e4471c2
    

    Once done, check from the control-plane node that the nodes have joined:

    root@cka001:~# kubectl get node
    NAME     STATUS     ROLES                  AGE   VERSION
    cka001   Ready      control-plane,master   65m   v1.23.8
    cka002   NotReady   <none>                 57m   v1.23.8
    cka003   NotReady   <none>                 57m   v1.23.8
    

    The two workers show NotReady because no network plugin has been installed yet.

    Install a network plugin

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    

    Wait a few minutes, then check whether the nodes have become ready:

    root@cka001:~# kubectl get node
    NAME     STATUS   ROLES                  AGE   VERSION
    cka001   Ready    control-plane,master   65m   v1.23.8
    cka002   Ready    <none>                 57m   v1.23.8
    cka003   Ready    <none>                 57m   v1.23.8
    

    Basic usage

    Check the cluster status:

    root@cka001:~# kubectl cluster-info
    Kubernetes control plane is running at https://172.18.0.111:6443
    CoreDNS is running at https://172.18.0.111:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    
    root@cka001:~# kubectl get nodes -owide
    NAME     STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
    cka001   Ready    control-plane,master   69m   v1.23.8   172.18.0.111   <none>        Ubuntu 20.04.4 LTS   5.4.0-113-generic   containerd://1.5.5
    cka002   Ready    <none>                 61m   v1.23.8   172.18.0.109   <none>        Ubuntu 20.04.4 LTS   5.4.0-113-generic   containerd://1.5.5
    cka003   Ready    <none>                 60m   v1.23.8   172.18.0.110   <none>        Ubuntu 20.04.4 LTS   5.4.0-113-generic   containerd://1.5.5
    
    root@cka001:~# kubectl get pod -A
    NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
    kube-system   coredns-6d8c4cb4d-t5gxk          1/1     Running   0          34m
    kube-system   coredns-6d8c4cb4d-w2bzb          1/1     Running   0          34m
    kube-system   etcd-cka001                      1/1     Running   0          69m
    kube-system   kube-apiserver-cka001            1/1     Running   0          69m
    kube-system   kube-controller-manager-cka001   1/1     Running   0          69m
    kube-system   kube-flannel-ds-4ls2q            1/1     Running   0          55m
    kube-system   kube-flannel-ds-rc8ph            1/1     Running   0          55m
    kube-system   kube-flannel-ds-w2k49            1/1     Running   0          55m
    kube-system   kube-proxy-2z72k                 1/1     Running   0          61m
    kube-system   kube-proxy-lk8xn                 1/1     Running   0          69m
    kube-system   kube-proxy-sqdxg                 1/1     Running   0          61m
    kube-system   kube-scheduler-cka001            1/1     Running   0          69m
    

    Reset the cluster

    This tears the whole cluster down. Only do it when the cluster is broken beyond repair; otherwise do not run these commands!
    On every node, run:

    kubeadm reset  # This removes the cluster node. Very dangerous; proceed with caution.
    

    Clean up the iptables rules:

    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X  # Very dangerous; proceed with caution
    

    If IPVS is in use, clear the IPVS rules:

    ipvsadm --clear  # Very dangerous; proceed with caution
    

    As an experiment, stop the kubelet on cka002 (systemctl stop kubelet) and watch how the nodes and pods change:

    root@cka001:~# kubectl get nodes
    NAME     STATUS     ROLES                  AGE   VERSION
    cka001   Ready      control-plane,master   75m   v1.23.8
    cka002   NotReady   <none>                 67m   v1.23.8
    cka003   Ready      <none>                 67m   v1.23.8
    root@cka001:~# kubectl get pod -A
    NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
    kube-system   coredns-6d8c4cb4d-t5gxk          1/1     Running   0          40m
    kube-system   coredns-6d8c4cb4d-w2bzb          1/1     Running   0          40m
    kube-system   etcd-cka001                      1/1     Running   0          75m
    kube-system   kube-apiserver-cka001            1/1     Running   0          75m
    kube-system   kube-controller-manager-cka001   1/1     Running   0          75m
    kube-system   kube-flannel-ds-4ls2q            1/1     Running   0          61m
    kube-system   kube-flannel-ds-rc8ph            1/1     Running   0          61m
    kube-system   kube-flannel-ds-w2k49            1/1     Running   0          61m
    kube-system   kube-proxy-2z72k                 1/1     Running   0          67m
    kube-system   kube-proxy-lk8xn                 1/1     Running   0          75m
    kube-system   kube-proxy-sqdxg                 1/1     Running   0          67m
    kube-system   kube-scheduler-cka001            1/1     Running   0          75m
    

    The node goes NotReady, while the pods are unaffected. (If the node stays NotReady past the default 5-minute eviction timeout, the control plane will start evicting its pods.)
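    For scripted health checks, the NotReady nodes can be filtered out of `kubectl get nodes` output with awk. The sketch below runs the filter over a copy of the sample output above, so it can be tried without a cluster; on cka001 pipe the live command in instead:

```shell
# Filter node names whose STATUS column is NotReady.
# NODES_OUTPUT stands in for live `kubectl get nodes` output.
NODES_OUTPUT='NAME     STATUS     ROLES                  AGE   VERSION
cka001   Ready      control-plane,master   75m   v1.23.8
cka002   NotReady   <none>                 67m   v1.23.8
cka003   Ready      <none>                 67m   v1.23.8'

# On the control plane: kubectl get nodes | awk '$2 == "NotReady" { print $1 }'
NOT_READY=$(printf '%s\n' "$NODES_OUTPUT" | awk '$2 == "NotReady" { print $1 }')
echo "$NOT_READY"
```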

  • Source: https://blog.csdn.net/e421083458/article/details/125466583