• Deploying a k8s Cluster


    Deploying a k8s cluster with kubeadm


    OS environment: 2 CPUs, 2 GB RAM, 50 GB disk, CentOS 8

    Environment initialization

    Check the OS version (all three nodes)

    Installing a Kubernetes cluster this way requires CentOS 7.5 or later.

    [root@master ~]# cat /etc/redhat-release 
    CentOS Stream release 8
    

    Hostname resolution (all three nodes)

    To make it easy for cluster nodes to reach each other directly, configure hostname resolution here; in production, an internal DNS server is recommended.

    [root@master ~]# vim /etc/hosts
    //add the following content
    192.168.91.143  master.example.com      master
    192.168.91.144  node1.example.com       node1
    192.168.91.145  node2.example.com       node2
    
    [root@master ~]# scp /etc/hosts root@192.168.91.144:/etc/hosts
    The authenticity of host '192.168.91.144 (192.168.91.144)' can't be established.
    ECDSA key fingerprint is SHA256:E1hm3B9fcDiSbBxk3nMnGCmUUHu1sbHVKx0SMtWVFs0.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added '192.168.91.144' (ECDSA) to the list of known hosts.
    root@192.168.91.144's password: 
    hosts                                               100%  277   117.6KB/s   00:00    
    
    [root@master ~]# scp /etc/hosts root@192.168.91.145:/etc/hosts
    The authenticity of host '192.168.91.145 (192.168.91.145)' can't be established.
    ECDSA key fingerprint is SHA256:ChaT9TKGSd6PancdebQHlfuyNIMDoown+ER6UZ8symQ.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added '192.168.91.145' (ECDSA) to the list of known hosts.
    root@192.168.91.145's password: 
    hosts                                               100%  277   252.9KB/s   00:00    
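    The /etc/hosts edits above can also be scripted rather than typed into vim. A minimal sketch, writing the three entries in one loop (the temp path /tmp/hosts.sample stands in for /etc/hosts):

```shell
# Sketch: generate the three host entries in one loop.
# /tmp/hosts.sample stands in for /etc/hosts here.
HOSTS_FILE=/tmp/hosts.sample
: > "$HOSTS_FILE"
while read -r ip fqdn short; do
  printf '%s\t%s\t%s\n' "$ip" "$fqdn" "$short" >> "$HOSTS_FILE"
done << 'EOF'
192.168.91.143 master.example.com master
192.168.91.144 node1.example.com node1
192.168.91.145 node2.example.com node2
EOF
cat "$HOSTS_FILE"
```

    The same file can then be pushed to the nodes with the scp commands shown above.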
    

    Configure SSH keys (master node)

    [root@master ~]# ssh-keygen 
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa): 
    Enter passphrase (empty for no passphrase): 
    Enter same passphrase again: 
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:ue0fIPnmAXsIPEcQkQ6KDiU+8EoUHXP+zv4OQZgHQ9w root@master.example.com
    The key's randomart image is:
    +---[RSA 3072]----+
    | .oo+==+         |
    |o...=.*E         |
    |+= . * o.        |
    |o+o  .=. o       |
    |+..   +oS .      |
    |..    o+.O .     |
    |       ++ * .    |
    |      . .= . .   |
    |       .ooo..    |
    +----[SHA256]-----+
    
    [root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    The authenticity of host 'node1 (192.168.91.144)' can't be established.
    ECDSA key fingerprint is SHA256:E1hm3B9fcDiSbBxk3nMnGCmUUHu1sbHVKx0SMtWVFs0.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@node1's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'root@node1'"
    and check to make sure that only the key(s) you wanted were added.
    
    [root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    The authenticity of host 'node2 (192.168.91.145)' can't be established.
    ECDSA key fingerprint is SHA256:ChaT9TKGSd6PancdebQHlfuyNIMDoown+ER6UZ8symQ.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@node2's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'root@node2'"
    and check to make sure that only the key(s) you wanted were added.
    

    Time synchronization

    Kubernetes requires the clocks on all cluster nodes to be exactly in sync. Here the chronyd service syncs time over the network; in production, an internal time server is recommended.

    dnf -y install chrony
    systemctl enable --now chronyd
    
    //master node
    [root@master ~]# vim /etc/chrony.conf 
    local stratum 10		//uncomment this line
    
    [root@master ~]# systemctl restart chronyd
    
    [root@master ~]# hwclock -w
    
    //node1 and node2 nodes
    [root@node1 ~]# vim /etc/chrony.conf 
    #pool 2.centos.pool.ntp.org iburst		//comment this out
    server  master.example.com iburst		//add this line
    
    [root@node1 ~]# systemctl restart chronyd
    [root@node1 ~]# hwclock -w
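    The chrony.conf edits can also be done non-interactively with sed. A sketch against a sample copy (the pool line below mirrors the stock CentOS 8 default, which is an assumption about your starting config):

```shell
# Sample stand-in for /etc/chrony.conf (illustrative content)
cat > /tmp/chrony.conf.sample << 'EOF'
pool 2.centos.pool.ntp.org iburst
#local stratum 10
EOF
# node side: comment out the pool line and point at the master instead
sed -i 's/^pool /#pool /' /tmp/chrony.conf.sample
echo 'server master.example.com iburst' >> /tmp/chrony.conf.sample
# master side would instead uncomment the local stratum directive:
sed -i 's/^#local stratum 10/local stratum 10/' /tmp/chrony.conf.sample
cat /tmp/chrony.conf.sample
```

    After restarting chronyd, `chronyc sources` on a node should list the master as its time source.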
    

    Disable firewalld, SELinux, and postfix (all three nodes)

    [root@master ~]# systemctl stop firewalld
    
    [root@master ~]# systemctl disable firewalld
    Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
    Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    
    [root@master ~]# vim /etc/sysconfig/selinux 
    SELINUX=disabled
    
    [root@master ~]# systemctl stop postfix
    Failed to stop postfix.service: Unit postfix.service not loaded.
    
    [root@master ~]# reboot
    

    Disable the swap partition (all three nodes)

    Comment out the swap partition line:

    [root@master ~]# vim /etc/fstab 
    #/dev/mapper/cs-swap     none                    swap    defaults        0 0
    
    [root@master ~]# swapoff -a
    
    [root@master ~]# free -m
                  total        used        free      shared  buff/cache   available
    Mem:           1789         196        1240           8         352        1436
    Swap:             0           0           0
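    Instead of editing /etc/fstab by hand, the swap line can be commented out with sed. Sketched here on a sample copy (the device name /dev/mapper/cs-swap matches the line shown above):

```shell
# Sample stand-in for /etc/fstab
cat > /tmp/fstab.sample << 'EOF'
/dev/mapper/cs-root     /                       xfs     defaults        0 0
/dev/mapper/cs-swap     none                    swap    defaults        0 0
EOF
# comment out any uncommented entry whose filesystem type is "swap"
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)/#\1/' /tmp/fstab.sample
grep swap /tmp/fstab.sample
```

    `swapoff -a` then disables swap immediately, and the commented fstab entry keeps it off across reboots.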
    

    Enable IP forwarding and tune kernel parameters (all three nodes)

    [root@master ~]# vim /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    
    [root@master ~]# modprobe br_netfilter 
    
    [root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf 
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
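    The same file can be written non-interactively with a here-document, which is handy when scripting all three nodes (a temp path is used in this sketch; on the nodes the target is /etc/sysctl.d/k8s.conf):

```shell
# Write the three kernel parameters in one shot (temp path for illustration)
cat > /tmp/k8s.conf << 'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# then, as above: modprobe br_netfilter && sysctl -p /etc/sysctl.d/k8s.conf
grep -c '= 1' /tmp/k8s.conf
```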
    

    Configure IPVS (all three nodes)

    [root@master ~]# vim /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    
    [root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
    
    [root@master ~]# bash /etc/sysconfig/modules/ipvs.modules
    
    [root@master ~]# reboot
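    Equivalently, the module-load script can be generated from a list, which makes it easy to sanity-check before running (temp path used in this sketch; the real target is /etc/sysconfig/modules/ipvs.modules):

```shell
# Generate the ipvs.modules script from a module list, then verify it
MODFILE=/tmp/ipvs.modules
{
  echo '#!/bin/bash'
  for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do
    echo "modprobe -- $mod"
  done
} > "$MODFILE"
chmod +x "$MODFILE"
grep -c '^modprobe' "$MODFILE"
```

    After running the script, `lsmod | grep ip_vs` should list the four modules.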
    

    Install Docker

    Switch the package mirrors (all three nodes)

    [root@master ~]# rm -f /etc/yum.repos.d/*
    
    [root@master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
    
    [root@master ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
    
    [root@master ~]# yum install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
    
    [root@master ~]# sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
    
    [root@master ~]# sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
    
    [root@master ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    

    Install docker-ce (all three nodes)

    [root@master ~]# dnf -y install docker-ce --allowerasing
    
    [root@master ~]# systemctl enable --now docker
    Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
    

    Add a configuration file to set up a Docker registry mirror (all three nodes)

    [root@master ~]# vim /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://34p1xetd.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    
    [root@master ~]# systemctl daemon-reload 
    
    [root@master ~]# systemctl restart docker
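    A malformed daemon.json will prevent the Docker daemon from starting, so it is worth validating the JSON before restarting. A sketch, assuming python3 is available (the file below is a copy of the config above, written to a temp path):

```shell
# Write the daemon.json content to a temp path and validate it parses as JSON
cat > /tmp/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://34p1xetd.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo JSON-OK
```

    Setting the cgroup driver to systemd here matters: the kubelet defaults to the systemd cgroup driver, and a mismatch with the container runtime causes node instability.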
    

    Install Kubernetes components

    Configure a domestic (China) mirror repo (all three nodes)

    [root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
    > [kubernetes]
    > name=Kubernetes
    > baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    > enabled=1
    > gpgcheck=0
    > repo_gpgcheck=0
    > gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    > EOF
    
    [root@master ~]# yum list | grep kube 
    

    Install the kubeadm, kubelet, and kubectl tools (all three nodes)

    [root@master ~]# dnf -y install kubeadm kubelet kubectl
    
    [root@master ~]# systemctl enable --now kubelet
    Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
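    Note that an unpinned `dnf install` pulls whatever version the repo currently carries, so the three nodes can drift apart if installed at different times. To keep them identical you can pin the version (1.25.4 here, matching the `kubeadm init` later; availability of that exact package in the mirror is an assumption). This sketch only composes the command:

```shell
# Compose a version-pinned install command; run the printed line on each node
VER=1.25.4
CMD="dnf -y install kubeadm-$VER kubelet-$VER kubectl-$VER"
echo "$CMD"
```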
    

    Configure containerd (all three nodes)

    To make sure that cluster initialization and node joins succeed later, edit containerd's configuration file /etc/containerd/config.toml. This must be done on all nodes.

    [root@master ~]# containerd config default > /etc/containerd/config.toml
    
    [root@master ~]# vim /etc/containerd/config.toml
        sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"        //modify this line
    
    [root@master ~]# systemctl restart containerd
    
    [root@master ~]# systemctl enable containerd
    Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
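    The sandbox_image change can be applied with a one-line sed instead of vim. Sketched here on a sample line (the stock default image name varies by containerd version, so the left-hand pattern simply matches whatever value is there):

```shell
# Sample stand-in for the relevant line of /etc/containerd/config.toml
cat > /tmp/config.toml.sample << 'EOF'
    sandbox_image = "registry.k8s.io/pause:3.6"
EOF
# rewrite the pause image to the Aliyun mirror, whatever the current value
sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"|' /tmp/config.toml.sample
cat /tmp/config.toml.sample
```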
    

    Deploy the k8s master (master node)

    [root@master ~]# kubeadm init \
    > --apiserver-advertise-address=192.168.91.143 \
    > --image-repository registry.aliyuncs.com/google_containers \
    > --kubernetes-version v1.25.4 \
    > --service-cidr=10.96.0.0/12 \
    > --pod-network-cidr=10.244.0.0/16
    [init] Using Kubernetes version: v1.25.4
    ......
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.91.143:6443 --token jfjp64.o0f91ypbh39ruasf \
    	--discovery-token-ca-cert-hash sha256:6986491ae4f6c31df04fcab050b85b7271a756204a41f677c2a84b0f051cc7bb
    
    
    
    //recommended: save the kubeadm init output to a file
    [root@master ~]# vim k8s
    
    [root@master ~]# vim /etc/profile.d/k8s.sh
    export KUBECONFIG=/etc/kubernetes/admin.conf
    
    [root@master ~]# source /etc/profile.d/k8s.sh
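    One practical note: the bootstrap token in the join command above expires after 24 hours by default. If a node joins later than that, the master can mint a fresh join command. This block only composes the helper command; run the printed line on the master when needed:

```shell
# 'kubeadm token create --print-join-command' prints a complete, fresh
# 'kubeadm join ...' line; composed here without executing it
JOIN_HELPER='kubeadm token create --print-join-command'
echo "$JOIN_HELPER"
```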
    

    Install the pod network add-on (CNI/flannel) (master node)

    [root@master ~]# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    
    [root@master ~]# kubectl apply -f kube-flannel.yml 
    namespace/kube-flannel created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds created
    
    [root@master ~]# kubectl get pod -n kube-system
    NAME                                         READY   STATUS    RESTARTS   AGE
    coredns-c676cc86f-mfl8l                      1/1     Running   0          8m42s
    coredns-c676cc86f-wwzft                      1/1     Running   0          8m42s
    etcd-master.example.com                      1/1     Running   0          8m54s
    kube-apiserver-master.example.com            1/1     Running   0          8m54s
    kube-controller-manager-master.example.com   1/1     Running   0          8m54s
    kube-proxy-99vv7                             1/1     Running   0          8m43s
    kube-scheduler-master.example.com            1/1     Running   0          8m54s
    
    [root@master ~]# kubectl get nodes
    NAME                 STATUS   ROLES           AGE     VERSION
    master.example.com   Ready    control-plane   9m20s   v1.25.4
    

    Join the worker nodes to the k8s cluster (node nodes)

    [root@node1 ~]# kubeadm join 192.168.91.143:6443 --token jfjp64.o0f91ypbh39ruasf \
    > --discovery-token-ca-cert-hash sha256:6986491ae4f6c31df04fcab050b85b7271a756204a41f677c2a84b0f051cc7bb
     
    
    
    [root@node2 ~]# kubeadm join 192.168.91.143:6443 --token jfjp64.o0f91ypbh39ruasf \
    > --discovery-token-ca-cert-hash sha256:6986491ae4f6c31df04fcab050b85b7271a756204a41f677c2a84b0f051cc7bb
    
    
    
    [root@master ~]# kubectl get nodes
    NAME                 STATUS   ROLES           AGE     VERSION
    master.example.com   Ready    control-plane   13m     v1.25.4
    node1.example.com    Ready    <none>          2m10s   v1.25.4
    node2.example.com    Ready    <none>          118s    v1.25.4
    

    Use the k8s cluster to create a pod running an nginx container, then test it

    kubectl get pods: check pod status
    kubectl get pods -o wide: see which node each container is running on

    [root@master ~]# kubectl create deployment nginx --image nginx
    deployment.apps/nginx created
    
    [root@master ~]# kubectl get pods
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-76d6c9b8c-gf4gh   1/1     Running   0          59s
    
    [root@master ~]# kubectl get pods -o wide
    NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE                NOMINATED NODE   READINESS GATES
    nginx-76d6c9b8c-gf4gh   1/1     Running   0          2m27s   10.244.2.2   node1.example.com   <none>           <none>
    
    [root@master ~]# kubectl get services
    NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   20m
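    The deployment above is only reachable inside the cluster via its pod IP. To actually test it from outside, one common approach is to expose it as a NodePort service; this is a sketch, and the NodePort number shown is illustrative (the cluster assigns one in the 30000-32767 range):

```shell
# expose the nginx deployment on a NodePort
kubectl expose deployment nginx --port 80 --type NodePort
kubectl get services nginx        # note the mapped port, e.g. 80:30493/TCP
# then curl any node IP on the assigned NodePort (30493 is hypothetical):
curl http://192.168.91.144:30493
```

    A successful response is the nginx welcome page, confirming that pod networking, kube-proxy, and the service layer are all working end to end.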
    
  • Original post: https://blog.csdn.net/m0_64735681/article/details/127913045