    Installing a Kubernetes Cluster with kubeadm

    Installing Ubuntu 22.04 in a Virtual Machine

    Creating the Base Virtual Machine

    Installing the operating system: omitted here.

    This guide uses Ubuntu 22.04. The virtual machine usually needs two network adapters: one with internet access (either NAT or bridged), and one host-only network for SSH clients such as Xshell. With a static IP on the host-only network you can connect directly, and the address survives reboots and changes of network environment.

    System configuration:

    # Install net-tools so that ifconfig is available
    sudo apt install net-tools
    

    Cloning Three Nodes

    1. Select the base virtual machine and clone it.

    2. While cloning, set the virtual machine's name and storage path, and regenerate its MAC address.

    3. Clone three virtual machines to build the Kubernetes environment.

    4. Restart networking on each node:

    # ubuntu22.04
    sudo systemctl restart systemd-networkd
    

    Hardware Environment Overview

    This setup uses 1 master node and 2 worker nodes. The master node should have at least 2 GB of RAM and 2 CPUs.

    In the author's environment, the master and worker nodes each have 2 GB of RAM and 4 CPUs, all running Ubuntu 22.04.

    Host Environment Adjustments

    Disabling the Firewall

    The iptables firewall filters and forwards all network traffic. On internal-network machines it is often disabled outright so it cannot hurt network performance. For Kubernetes, however, you cannot always just turn it off: k8s uses the firewall for IP forwarding and rewriting. It depends on the network mode in use; if the chosen mode does not need the firewall, it can be disabled.

    # On RHEL-family systems with firewalld:
    sudo systemctl stop firewalld
    sudo systemctl disable firewalld
    # On Ubuntu the default firewall front end is ufw:
    sudo ufw disable
    

    Disabling SELinux

    SELinux is a security-hardening component, but it is error-prone and its failures are hard to diagnose, so it is commonly disabled right after installing the system. (Note that Ubuntu uses AppArmor rather than SELinux by default, so this step may not apply.)

    # Check SELinux status
    sudo apt install selinux-utils
    getenforce
    # Disable it (until next reboot)
    sudo setenforce 0
    

    Disabling Swap

    When memory runs low, Linux automatically uses swap, moving some memory pages to disk. This degrades performance, so it is recommended to turn swap off. (kubeadm also requires swap to be disabled by default.)

    # Check the swap area
    free
    # Disable swap (until next reboot)
    sudo swapoff -a
    # To make this permanent, open /etc/fstab and comment out the swap entry:
    #/swap.img      none    swap    sw      0       0
    

    Setting the Hostname

    1. Add hostname-to-IP mappings for all nodes in /etc/hosts.
    2. Change the system hostname:
    # Set the hostname
    sudo hostnamectl set-hostname k8s-master1
    # Verify
    hostname
    
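As an illustration of step 1, entries like the following can be appended on every node. The node names and host-only IPs below are examples, not prescribed values; the snippet writes to a scratch file so it can be tried safely (on a real node, point TARGET at /etc/hosts and run the write with sudo):

```shell
# Illustrative host-only addresses for the three nodes; adjust to your own.
TARGET=/tmp/hosts.example   # use /etc/hosts (with sudo) on a real node
cat >> "$TARGET" <<'EOF'
192.168.56.101 k8s-master1
192.168.56.102 k8s-node1
192.168.56.103 k8s-node2
EOF
# The nodes can now be referenced by name.
grep k8s-master1 "$TARGET"
```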

    Forwarding IPv4 and Letting iptables See Bridged Traffic

    Verify that the br_netfilter module is loaded by running lsmod | grep br_netfilter.

    To load the module explicitly, run sudo modprobe br_netfilter.

    For a Linux node's iptables to correctly see bridged traffic, confirm that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration. For example:

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF

    sudo modprobe overlay
    sudo modprobe br_netfilter

    # sysctl params required by setup; they persist across reboots
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF

    # Apply sysctl params without reboot
    sudo sysctl --system
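To confirm the settings took effect, you can check that the br_netfilter and overlay modules are loaded and that the three sysctl values read 1 (these verification commands mirror the checks in the official Kubernetes container-runtime documentation):

```shell
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```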

    Installing a CRI Runtime

    cri-dockerd

    Prerequisite: Docker is installed.

    Download cri-dockerd

    Download a release published on the project's releases page:

    https://github.com/Mirantis/cri-dockerd/releases/tag/v0.2.5

    Or clone the source yourself and build it:

    git clone https://github.com/Mirantis/cri-dockerd.git
    cd cri-dockerd
    mkdir bin
    go build -o bin/cri-dockerd
    mkdir -p /usr/local/bin
    install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
    cp -a packaging/systemd/* /etc/systemd/system
    sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
    systemctl daemon-reload
    systemctl enable cri-docker.service
    systemctl enable --now cri-docker.socket
    

    cri-dockerd Service Configuration

    1. Create /etc/systemd/system/cri-docker.socket:
    [Unit]
    Description=CRI Docker Socket for the API
    PartOf=cri-docker.service
    [Socket]
    # %t expands to the runtime directory; the socket file is created there
    ListenStream=%t/cri-dockerd.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker
    [Install]
    WantedBy=sockets.target
    
    
    2. Create /etc/systemd/system/cri-docker.service:
    [Unit]
    Description=CRI Interface for Docker Application Container Engine
    Documentation=https://docs.mirantis.com
    After=network-online.target firewalld.service docker.service
    Wants=network-online.target
    Requires=cri-docker.socket
    [Service]
    Type=notify
    # The command line used to start the service:
    # - note the path of the cri-dockerd binary
    # - note the network plugin and pause image settings
    ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni \
        --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
    ExecReload=/bin/kill -s HUP $MAINPID
    TimeoutSec=0
    RestartSec=2
    Restart=always
    # Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
    # Both the old, and new location are accepted by systemd 229 and up, so using the old location
    # to make them work for either version of systemd.
    StartLimitBurst=3
    # Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
    # Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
    # this option work for either version of systemd.
    StartLimitInterval=60s
    # Having non-zero Limit*s causes performance problems due to accounting overhead
    # in the kernel. We recommend using cgroups to do container-local accounting.
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    # Comment TasksMax if your systemd version does not support it.
    # Only systemd 226 and above support this option.
    TasksMax=infinity
    Delegate=yes
    KillMode=process
    [Install]
    WantedBy=multi-user.target
    
    3. Start and verify the service:
    # Reload the systemd configuration
    systemctl daemon-reload
    # Enable at boot and start immediately
    systemctl enable --now cri-docker
    # Check the service status
    systemctl status cri-docker
    

    containerd

    Binary Installation

    Install containerd

    Download containerd-<VERSION>-<OS>-<ARCH>.tar.gz from https://github.com/containerd/containerd/releases:

    $ cd /usr/local
    $ tar Cxzvf /usr/local containerd-1.6.2-linux-amd64.tar.gz
    bin/
    bin/containerd-shim-runc-v2
    bin/containerd-shim
    bin/ctr
    bin/containerd-shim-runc-v1
    bin/containerd
    bin/containerd-stress
    

    Configure /usr/lib/systemd/system/containerd.service:

    # Copyright The containerd Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    [Unit]
    Description=containerd container runtime
    Documentation=https://containerd.io
    After=network.target local-fs.target
    
    [Service]
    #uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
    #Environment="ENABLE_CRI_SANDBOXES=sandboxed"
    ExecStartPre=-/sbin/modprobe overlay
    ExecStart=/usr/local/bin/containerd
    
    Type=notify
    Delegate=yes
    KillMode=process
    Restart=always
    RestartSec=5
    # Having non-zero Limit*s causes performance problems due to accounting overhead
    # in the kernel. We recommend using cgroups to do container-local accounting.
    LimitNPROC=infinity
    LimitCORE=infinity
    LimitNOFILE=infinity
    # Comment TasksMax if your systemd version does not supports it.
    # Only systemd 226 and above support this version.
    TasksMax=infinity
    OOMScoreAdjust=-999
    
    [Install]
    WantedBy=multi-user.target
    

    Start the service:

    systemctl daemon-reload
    systemctl enable --now containerd
    

    Install runc

    Download the runc.<ARCH> binary (e.g. runc.amd64) from https://github.com/opencontainers/runc/releases and install it:

    # assuming runc.amd64 was downloaded to the current directory
    sudo install -m 755 runc.amd64 /usr/local/sbin/runc
    

    Install the CNI Plugins

    Download the cni-plugins-<OS>-<ARCH>-<VERSION>.tgz archive from https://github.com/containernetworking/plugins/releases and extract it under /opt/cni/bin:

    $ mkdir -p /opt/cni/bin
    $ tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz
    ./
    ./macvlan
    ./static
    ./vlan
    ./portmap
    ./host-local
    ./vrf
    ./bridge
    ./tuning
    ./firewall
    ./host-device
    ./sbr
    ./loopback
    ./dhcp
    ./ptp
    ./ipvlan
    ./bandwidth
    

    Modify the containerd Configuration

    sudo mkdir /etc/containerd
    # Generate the default configuration file
    containerd config default > /etc/containerd/config.toml

    # In /etc/containerd/config.toml, override the sandbox (pause) image:
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
    # and set the cgroup driver to systemd:
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      ...
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

    # Restart containerd
    sudo systemctl restart containerd
    
    
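The two edits above can also be applied non-interactively with sed. The sketch below runs against a minimal sample file so it can be tried safely; on a real node, point CFG at /etc/containerd/config.toml (the key names match the file generated by `containerd config default` for containerd 1.6, but verify them against your own file):

```shell
CFG=/tmp/config.toml   # use /etc/containerd/config.toml on a real node
# Minimal sample standing in for the generated config
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "k8s.gcr.io/pause:3.6"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
EOF
# Override the sandbox image
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' "$CFG"
# Switch the cgroup driver to systemd
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CFG"
grep -E 'sandbox_image|SystemdCgroup' "$CFG"
```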

    By default, crictl connects to unix:///var/run/dockershim.sock, so its configuration file needs to be changed:

    cat <<EOF | sudo tee /etc/crictl.yaml
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    timeout: 10
    debug: false
    EOF
    

    Pull an Image to Verify

    crictl pull nginx:1.20.2
    crictl images
    

    kubeadm

    1. Install prerequisite packages:
    sudo apt-get update
    sudo apt-get install -y apt-transport-https ca-certificates curl
    
    2. Download the GPG key (Alibaba Cloud's mirror is used here):
    sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg \
    https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg
    
    3. Configure the Kubernetes package source:
    echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg]  \
    https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee \
    /etc/apt/sources.list.d/kubernetes.list
    
    4. Update the apt package index and list the available versions of the relevant packages:
    sudo apt-get update
    apt-cache madison kubelet kubeadm kubectl
    
    5. Install a specific version:
    sudo apt-get install -y kubelet=<version> kubeadm=<version> kubectl=<version>
    # For example:
    sudo apt-get install -y kubelet=1.24.1-00 kubeadm=1.24.1-00 kubectl=1.24.1-00
    
    6. Verify:
    # kubeadm
    kubeadm version
    # kubectl
    kubectl version
    # kubelet
    systemctl status kubelet
    
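Because an unplanned upgrade of these packages can break a running cluster, the official kubeadm installation docs also recommend pinning them after installing:

```shell
sudo apt-mark hold kubelet kubeadm kubectl
```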

    Note: right after installation, kubelet keeps restarting, roughly every 10 seconds, and it stays in this state until initialization completes, so do not worry that kubelet is not running cleanly after installation.

    Initialization Configuration (master node only)

    Generate a default configuration file:

    kubeadm config print init-defaults > init.default.yaml
    

    Modify the configuration file:

    # The node's IP address
    localAPIEndpoint.advertiseAddress: 192.168.56.101
    # The CRI socket; must be changed when using cri-dockerd
    nodeRegistration.criSocket: unix:///var/run/cri-dockerd.sock
    # The node name
    nodeRegistration.name: master1
    # Point the image repository at a domestic mirror
    imageRepository: registry.aliyuncs.com/google_containers
    # The Kubernetes version
    kubernetesVersion: 1.24.1
    # Add podSubnet: the flannel network plugin installed later requires the pod
    # address range to be specified at cluster initialization.
    # 10.244.0.0/16 is flannel's default podSubnet; the cluster configuration and
    # the network plugin's configuration must match.
    podSubnet: 10.244.0.0/16
    
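Put together, the relevant parts of the edited init.default.yaml look roughly like this (the dotted paths above map onto the YAML structure printed by `kubeadm config print init-defaults` for v1.24; podSubnet is added under the networking section):

```yaml
localAPIEndpoint:
  advertiseAddress: 192.168.56.101
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: master1
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.24.1
networking:
  podSubnet: 10.244.0.0/16
```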

    Pull the required images:

    sudo kubeadm config images pull --config=init.default.yaml
    

    Initialize the Cluster

    # Initialize from the configuration file
    sudo kubeadm init --config=init.default.yaml
    # Or initialize with command-line flags
    kubeadm init --image-repository registry.aliyuncs.com/google_containers \
        --kubernetes-version=v1.24.1 --pod-network-cidr=10.244.0.0/16 \
        --apiserver-advertise-address=192.168.239.142 \
        --cri-socket unix:///var/run/cri-dockerd.sock
    


    If the current user is a regular (non-root) user, run:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    If the current user is root, configure the environment variable instead:

    # Append this environment variable to the end of /etc/profile
    export KUBECONFIG=/etc/kubernetes/admin.conf
    # Apply it to the current shell
    source /etc/profile
    

    Check the node status:

    # kubectl get node
    NAME      STATUS     ROLES           AGE   VERSION
    master1   NotReady   control-plane   89s   v1.24.1
    

    The node status shows NotReady.

    Check the kubelet status:

    # systemctl status kubelet
      Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
        Drop-In: /etc/systemd/system/kubelet.service.d
                 └─10-kubeadm.conf
         Active: active (running) since Sun 2022-09-04 17:18:25 CST; 5min ago
           Docs: https://kubernetes.io/docs/home/
       Main PID: 2146 (kubelet)
          Tasks: 16 (limit: 2236)
         Memory: 34.8M
            CPU: 14.635s
         CGroup: /system.slice/kubelet.service
                 └─2146 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:>
    
    Sep 04 17:23:05 master1 kubelet[2146]: E0904 17:23:05.440384    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
    Sep 04 17:23:10 master1 kubelet[2146]: E0904 17:23:10.441932    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
    Sep 04 17:23:15 master1 kubelet[2146]: E0904 17:23:15.443737    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
    Sep 04 17:23:20 master1 kubelet[2146]: E0904 17:23:20.445438    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
    Sep 04 17:23:25 master1 kubelet[2146]: E0904 17:23:25.447628    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
    Sep 04 17:23:30 master1 kubelet[2146]: E0904 17:23:30.451519    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
    Sep 04 17:23:35 master1 kubelet[2146]: E0904 17:23:35.454570    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
    Sep 04 17:23:40 master1 kubelet[2146]: E0904 17:23:40.459534    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
    Sep 04 17:23:45 master1 kubelet[2146]: E0904 17:23:45.465543    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
    Sep 04 17:23:50 master1 kubelet[2146]: E0904 17:23:50.468645    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
    
    

    As shown, the network plugin is not ready.

    Listing all pods shows that the coredns pods are not ready:

    kubectl get pod --all-namespaces
    NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
    kube-system   coredns-74586cf9b6-22qc5          0/1     Pending   0          6m3s
    kube-system   coredns-74586cf9b6-qx9ql          0/1     Pending   0          6m3s
    kube-system   etcd-master1                      1/1     Running   0          6m7s
    kube-system   kube-apiserver-master1            1/1     Running   0          6m7s
    kube-system   kube-controller-manager-master1   1/1     Running   0          6m7s
    kube-system   kube-proxy-dgmcn                  1/1     Running   0          6m3s
    kube-system   kube-scheduler-master1            1/1     Running   0          6m7s
    
    

    Installing the Network Plugin

    Kubernetes defines the CNI standard and there are many network plugins to choose from. Here I pick the most widely used one, Flannel; the relevant documentation can be found in its GitHub repository (https://github.com/flannel-io/flannel/). Installation is simple: just deploy the project's "kube-flannel.yml" into Kubernetes.

    Download

    You can download it with curl:

    curl https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml > flannel.yml
    

    If you set podSubnet earlier, you need to edit the "net-conf.json" field in the file and change Network to the address range passed to kubeadm via --pod-network-cidr, for example:

      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    
    Then apply it:

    kubectl apply -f flannel.yml
    

    Checking the node status again shows that the master node is now Ready:

    # kubectl get node
    NAME      STATUS   ROLES           AGE   VERSION
    master1   Ready    control-plane   21m   v1.24.1
    

    All pods are healthy as well; note that the kube-flannel pods may take a while to initialize.

    # kubectl get pod --all-namespaces
    NAMESPACE      NAME                              READY   STATUS    RESTARTS   AGE
    kube-flannel   kube-flannel-ds-8x697             1/1     Running   0          2m12s
    kube-system    coredns-74586cf9b6-4vqhq          1/1     Running   0          9m26s
    kube-system    coredns-74586cf9b6-6s6mk          1/1     Running   0          9m26s
    kube-system    etcd-master1                      1/1     Running   1          9m40s
    kube-system    kube-apiserver-master1            1/1     Running   1          9m40s
    kube-system    kube-controller-manager-master1   1/1     Running   0          9m40s
    kube-system    kube-flannel-ds-thzgs             1/1     Running   0          5m59s
    kube-system    kube-proxy-8f28v                  1/1     Running   0          9m26s
    kube-system    kube-scheduler-master1            1/1     Running   1          9m40s
    
    

    Enabling kube-proxy's IPVS Mode

    # Edit the kube-proxy ConfigMap and set the mode
    kubectl edit cm kube-proxy -n kube-system
    # change to: mode: "ipvs"
    # Delete the existing kube-proxy pods so they restart with the new mode
    kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
    
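An equivalent, simpler way to bounce the pods is to restart the DaemonSet (kubectl rollout restart has existed since kubectl 1.15); afterwards you can confirm IPVS is active:

```shell
kubectl -n kube-system rollout restart daemonset kube-proxy
# Requires the ipvsadm package; IPVS virtual servers should be listed
sudo ipvsadm -Ln
```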

    Joining Worker Nodes

    On the master node, generate the join command:

    sudo kubeadm token create --print-join-command
    

    Note: if the container runtime is not containerd, you must append the socket URL to the command. For cri-dockerd, for example, add --cri-socket unix:///var/run/cri-dockerd.sock.
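For example, the full command on a cri-dockerd node would look like this (the token and hash are placeholders for the values printed by the command above):

```shell
sudo kubeadm join 192.168.56.101:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --cri-socket unix:///var/run/cri-dockerd.sock
```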

    Resetting a Node

    If a node is no longer needed, or must be removed after an error, reset it as follows:

    • Run kubeadm reset
    sudo kubeadm reset
    
    • Remove leftover files
    sudo rm -rf /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes \
        /var/lib/cni /etc/cni/net.d $HOME/.kube/config
    
    • Clear IPVS rules
    sudo ipvsadm --clear
    
    • Remove the CNI network interface
    sudo ifconfig cni0 down
    sudo ip link delete cni0
    

    Deploying a Web Application

    Create the Resources

    1. Create myhello-rc.yaml:
    apiVersion: v1
    kind: ReplicationController # replication controller (RC)
    metadata:
      name: myhello-rc # RC name, unique within the namespace
      labels:
        name: myhello-rc
    spec:
      replicas: 5 # desired number of pod replicas
      selector:
        name: myhello-pod
      template:
        metadata:
          labels:
            name: myhello-pod
        spec:
          containers: # pod container definitions
          - name: myhello # container name
            image: xlhmzch/hello:1.0.0 # Docker image backing the container
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 80
            env: # environment variables injected into the container
            - name: env1
              value: "k8s-env1"
            - name: env2
              value: "k8s-env2"
    

    2. Create the resource:

    sudo kubectl create -f myhello-rc.yaml
    
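After creating it, a quick check (not part of the original steps) confirms the RC and its five pods:

```shell
kubectl get rc myhello-rc
kubectl get pods -l name=myhello-pod -o wide
```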

    Create the Service

    1. Create myhello-svc.yaml:
    apiVersion: v1
    kind: Service
    metadata:
      name: myhello-svc
      labels:
        name: myhello-svc
    spec:
      type: NodePort # expose the service on a port on every node
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
        name: http
        nodePort: 30000
      selector:
        name: myhello-pod
    
    2. Create the resource:
    sudo kubectl create -f myhello-svc.yaml
    

    Verify

    curl http://192.168.1.9:30000/ping
    

    Troubleshooting

    CNI fails to load when a node joins

    1. Check the kubelet status:
    systemctl status kubelet
    
    ● kubelet.service - kubelet: The Kubernetes Node Agent
         Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
        Drop-In: /etc/systemd/system/kubelet.service.d
                 └─10-kubeadm.conf
         Active: active (running) since Sun 2022-09-04 21:05:48 CST; 3min 39s ago
           Docs: https://kubernetes.io/docs/home/
       Main PID: 3310 (kubelet)
          Tasks: 16 (limit: 2236)
         Memory: 33.1M
            CPU: 5.174s
         CGroup: /system.slice/kubelet.service
                 └─3310 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:>
    
    Sep 04 21:08:09 node2 kubelet[3310]: E0904 21:08:09.146947    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
    Sep 04 21:08:14 node2 kubelet[3310]: E0904 21:08:14.148254    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
    Sep 04 21:08:19 node2 kubelet[3310]: E0904 21:08:19.151768    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
    Sep 04 21:08:24 node2 kubelet[3310]: E0904 21:08:24.221484    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
    Sep 04 21:08:29 node2 kubelet[3310]: E0904 21:08:29.223015    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
    Sep 04 21:08:34 node2 kubelet[3310]: E0904 21:08:34.242096    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
    Sep 04 21:08:39 node2 kubelet[3310]: E0904 21:08:39.242534    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
    Sep 04 21:08:44 node2 kubelet[3310]: E0904 21:08:44.243459    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini
    

    The logs show that loading the CNI network plugin failed.

    Fixes:

    1. The CNI configuration files are missing; copy them from the master node:
    sudo mkdir -p /run/flannel/
    sudo scp root@master1:/run/flannel/subnet.env /run/flannel/subnet.env
    sudo mkdir -p /etc/cni/net.d
    sudo scp root@master1:/etc/cni/net.d/10-flannel.conflist  /etc/cni/net.d/
    
    2. containerd cannot pull images because crictl is pointed at the wrong socket:
    # crictl pull quay.io/coreos/flannel:v0.9.1-amd64
    WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
    ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory" 
    

    Fix:

    cat <<EOF | sudo tee /etc/crictl.yaml
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    timeout: 10
    debug: false
    EOF
    
    

    Lessons Learned

    1. At first I installed cri-dockerd without having Docker installed; later I switched to containerd.
    2. containerd does not require changing the socket, which removes one pitfall.
    3. I forgot to verify that the other nodes could pull images, which led to the socket failure described in problem 2 above.



  • Original article: https://blog.csdn.net/hzb869168467/article/details/126695463