• Installing K8s 1.24 and KubeSphere 3.3 with KubeKey's all-in-one mode


    KubeSphere overview

    Official site: https://kubesphere.com.cn/

    The documentation is available in Chinese and introduces the project in detail, so this post skips the background and focuses on the hands-on installation.

    KubeSphere makes it easy to deploy the usual DevOps components:

    (screenshot)

    The common components are all there, which is extremely friendly to developers who are not full-time operators, and building a Jenkins-based DevOps pipeline with it is very convenient. It is also a lot more polished than the built-in Kubernetes Dashboard. The pipeline will be tried later; first, install the base environment.

    KubeKey overview

    Official docs: https://kubesphere.com.cn/docs/v3.3/installing-on-linux/introduction/kubekey/

    GitHub: https://github.com/kubesphere/kubekey

    GitHub docs (Chinese): https://github.com/kubesphere/kubekey/blob/master/README_zh-CN.md

    KubeKey is written in Go, so unlike Ansible it does not depend on a runtime environment, and it can install Kubernetes and KubeSphere in a single run. Beyond installation, KubeKey can also upgrade a cluster, scale it in or out, and install add-ons from a YAML manifest (illustrative sub-commands below), which makes it quite friendly to developers who are not professional operators.

    Official docs for a multi-node installation of Kubernetes and KubeSphere: https://kubesphere.com.cn/docs/v3.3/installing-on-linux/introduction/multioverview/
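    As a rough sketch of those operations, these are the main kk sub-commands; the flags follow the KubeKey README, and config-sample.yaml is just a placeholder file name:

    # generate an editable cluster definition
    ./kk create config -f config-sample.yaml
    # create a cluster (Kubernetes + KubeSphere) from that definition
    ./kk create cluster -f config-sample.yaml
    # scale out after adding hosts to the definition
    ./kk add nodes -f config-sample.yaml
    # upgrade an existing cluster to a newer Kubernetes version
    ./kk upgrade --with-kubernetes v1.24.1 -f config-sample.yaml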

    Choosing all-in-one

    Why

    Kubernetes is designed to run forever: under normal conditions Pods are meant to keep running indefinitely. My host machine, however, cannot stay powered on around the clock, and after the painful lesson of a suspended big-data cluster where NTP time drift took the HBase component down, I decided that a personal Kubernetes playground should be all-in-one: simply suspend the VM when it is not in use instead of restarting the VM and every component each time. Kuboard is also convenient, but I have always preferred the heavier stack; besides, Kuboard's cluster setup currently only supports Kubernetes up to 1.23, which is getting old, and after a restart you still have to kill broken Pods yourself:

    kubectl delete pods <pod-name> --grace-period=0 --force

    So the final choice is the all-in-one installation of KubeSphere.

    KubeKey version

    See the releases page on GitHub: https://github.com/kubesphere/kubekey/releases

    The obvious choice is the latest release at the time of writing, 2.2.2.

    VM sizing

    The official minimum for a default minimal installation is 2 cores, 4 GB RAM and a 40 GB disk. A dual-socket E5-2696 v3 host with 256 GB RAM and three disks can afford to be generous, so I allocated 12 cores, 40 GB RAM and a 300 GB disk in one go, which leaves room for adding more components later and exceeds even the recommended 8 cores, 16 GB RAM and 100 GB disk.

    Linux version

    • Linux distributions

      • Ubuntu 16.04, 18.04, 20.04
      • Debian Buster, Stretch
      • CentOS/RHEL 7
      • SUSE Linux Enterprise Server 15

      Recommended Linux kernel: 4.15 or later.
      The kernel version can be checked with uname -srm.

    CentOS is end-of-life and Ubuntu ships a newer kernel, which supports the new features in Kubernetes 1.24 better, so Ubuntu is the choice here. CentOS 7.9 also works after a kernel upgrade; it comes down to personal preference. Ubuntu 22.04 is not yet recommended for developers who are not professional operators: installing a brand-new release such as Kubernetes 1.24 on it runs into problems with the Calico CNI plugin (if you use Flannel instead, ignore this caveat).

    Kubernetes version

    • v1.17:   v1.17.9
    • v1.18:   v1.18.8
    • v1.19:   v1.19.9
    • v1.20:   v1.20.10
    • v1.21:   v1.21.13
    • v1.22:   v1.22.12
    • v1.23:   v1.23.9 (default)
    • v1.24:   v1.24.3

    These are the mainstream versions KubeKey currently supports; the GitHub documentation is a bit newer than the official site.

    The choice is of course 1.24: it removed dockershim, which makes it a milestone release.

    Container runtime

    The common runtimes are docker, crio, containerd and isula. Today containerd is the obvious choice; picking docker at this point would be behind the times.

    Dependencies

    Dependency    Kubernetes ≥ 1.18           Kubernetes < 1.18
    socat         Required                    Optional, but recommended
    conntrack     Required                    Optional, but recommended
    ebtables      Optional, but recommended   Optional, but recommended
    ipset         Optional, but recommended   Optional, but recommended
    ipvsadm       Optional, but recommended   Optional, but recommended

    As the official table shows, socat and conntrack must be installed.
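    To cover both the required and the recommended dependencies in one go (the Ubuntu package names match the tool names), something like the following is enough:

    sudo apt-get install -y socat conntrack ebtables ipset ipvsadm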

    Preparing the VM

    For convenience I chose the Ubuntu 20.04 LTS Desktop image; having a GUI is handy for a machine used only for experiments.

    Configure the network interface

    This can be set directly in the GUI:

    (screenshot)

    Static IP: 192.168.88.20

    Subnet mask: 255.255.255.0

    Gateway: 192.168.88.2

    Also disable IPv6. (For anyone who prefers the command line, an equivalent netplan configuration is sketched below.)
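    Purely as an illustration of the same IPv4 settings from the terminal, this is roughly what a netplan file could look like on Ubuntu 20.04; the file name, the interface name ens33 and the DNS address are assumptions that must be adapted to the actual machine (IPv6 was simply switched off in the GUI):

    # /etc/netplan/01-static.yaml (hypothetical file and interface names)
    network:
      version: 2
      renderer: NetworkManager        # the Desktop image uses NetworkManager
      ethernets:
        ens33:
          dhcp4: no
          addresses: [192.168.88.20/24]
          gateway4: 192.168.88.2
          nameservers:
            addresses: [192.168.88.2]

    Apply it with sudo netplan apply.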

    Switch to the Aliyun mirrors

    As is well known, the Aliyun mirrors are much faster (from China) than the default Ubuntu archives:

    (screenshot)
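    The GUI's "Software & Updates" dialog does the same job; a rough command-line equivalent for Ubuntu 20.04 is sketched below. The hostnames actually present in sources.list vary by install, so check the file before and after:

    # back up the current list, then point it at mirrors.aliyun.com
    sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
    sudo sed -i \
      -e 's|http://.*archive.ubuntu.com|https://mirrors.aliyun.com|g' \
      -e 's|http://security.ubuntu.com|https://mirrors.aliyun.com|g' \
      /etc/apt/sources.list
    sudo apt update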

    Install the necessary packages

    The Ubuntu Desktop image is missing many common tool sets by default; SSH and other everyday commands only work after they are installed manually.

    sudo apt install net-tools
    

    (screenshot)

    net-tools is now usable, and the static IP has been configured successfully.

    The SSH server is mandatory, otherwise remote clients get "Network error: Connection refused"; only after installing it can MobaXterm connect to the VM and transfer files.

    sudo apt-get install openssh-server
    sudo apt-get install openssh-client
    sudo apt-get install socat
    sudo apt-get install conntrack
    sudo apt-get install curl
    
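    If the connection is still refused after installing openssh-server, make sure the service is enabled and running (the unit is called ssh on Ubuntu):

    sudo systemctl enable --now ssh
    systemctl status ssh --no-pager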

    Install KubeKey

    Run:

    sudo mkdir -p /home/zhiyong/kubesphereinstall
    cd /home/zhiyong/kubesphereinstall
    export KKZONE=cn	# use the CN download region
    curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -	# download KubeKey
    chmod +x kk		# optional; the downloaded kk is normally already executable
    

    Wait for the download to finish:

    zhiyong@zhiyong-ksp1:~/kubesphereinstall$ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
    
    Downloading kubekey v2.2.2 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v2.2.2/kubekey-v2.2.2-linux-amd64.tar.gz ...
    
    
    Kubekey v2.2.2 Download Complete!
    
    zhiyong@zhiyong-ksp1:~/kubesphereinstall$ ll
    总用量 70336
    drwxrwxr-x  2 zhiyong zhiyong     4096 8月  8 01:03 ./
    drwxr-xr-x 16 zhiyong zhiyong     4096 8月  8 00:52 ../
    -rwxr-xr-x  1 zhiyong zhiyong 54910976 7月 26 14:17 kk*
    -rw-rw-r--  1 zhiyong zhiyong 17102249 8月  8 01:03 kubekey-v2.2.2-linux-amd64.tar.gz
    

    Running KubeKey

    Install Kubernetes and KubeSphere at the chosen versions:

    export KKZONE=cn
    ./kk create cluster --with-kubernetes v1.24.3 --with-kubesphere v3.3.0 --container-manager containerd
    

    The command has to be run as root:

    zhiyong@zhiyong-ksp1:~/kubesphereinstall$ ./kk create cluster --with-kubernetes v1.24.3 --with-kubesphere v3.3.0 --container-manager containerd
    error: Current user is zhiyong. Please use root!
    zhiyong@zhiyong-ksp1:~/kubesphereinstall$ su - root
    密码:
    su: 认证失败
    zhiyong@zhiyong-ksp1:~/kubesphereinstall$ su root
    密码:
    su: 认证失败
    zhiyong@zhiyong-ksp1:~/kubesphereinstall$ sudo su root
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# ./kk create cluster --with-kubernetes v1.24.3 --with-kubesphere v3.3.0 --container-manager containerd
    
    
     _   __      _          _   __
    | | / /     | |        | | / /
    | |/ / _   _| |__   ___| |/ /  ___ _   _
    |    \| | | | '_ \ / _ \    \ / _ \ | | |
    | |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
    \_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                        __/ |
                                       |___/
    
    01:11:22 CST [GreetingsModule] Greetings
    01:11:23 CST message: [zhiyong-ksp1]
    Greetings, KubeKey!
    01:11:23 CST success: [zhiyong-ksp1]
    01:11:23 CST [NodePreCheckModule] A pre-check on nodes
    01:11:23 CST success: [zhiyong-ksp1]
    01:11:23 CST [ConfirmModule] Display confirmation form
    +--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
    | name         | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
    +--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
    | zhiyong-ksp1 | y    | y    | y       | y        | y     |       |         | y         |        |        | y          |            |             |                  | CST 01:11:23 |
    +--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
    
    This is a simple check of your environment.
    Before installation, ensure that your machines meet all requirements specified at
    https://github.com/kubesphere/kubekey#requirements-and-recommendations
    
    Continue this installation? [yes/no]: yes
    01:12:55 CST success: [LocalHost]
    01:12:55 CST [NodeBinariesModule] Download installation binaries
    01:12:55 CST message: [localhost]
    downloading amd64 kubeadm v1.24.3 ...
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 42.3M  100 42.3M    0     0  1115k      0  0:00:38  0:00:38 --:--:-- 1034k
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 42.3M  100 42.3M    0     0   813k      0  0:00:53  0:00:53 --:--:--  674k
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 42.3M  100 42.3M    0     0   607k      0  0:01:11  0:01:11 --:--:--  616k
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 42.3M  100 42.3M    0     0   696k      0  0:01:02  0:01:02 --:--:--  778k
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 42.3M  100 42.3M    0     0  1686k      0  0:00:25  0:00:25 --:--:-- 4429k
    01:17:09 CST message: [LocalHost]
    Failed to download kubeadm binary: curl -L -o /home/zhiyong/kubesphereinstall/kubekey/kube/v1.24.3/amd64/kubeadm https://storage.googleapis.com/kubernetes-release/release/v1.24.3/bin/linux/amd64/kubeadm error: No SHA256 found for kubeadm. v1.24.3 is not supported.
    01:17:09 CST failed: [LocalHost]
    error: Pipeline[CreateClusterPipeline] execute failed: Module[NodeBinariesModule] exec failed:
    failed: [LocalHost] [DownloadBinaries] exec failed after 1 retires: Failed to download kubeadm binary: curl -L -o /home/zhiyong/kubesphereinstall/kubekey/kube/v1.24.3/amd64/kubeadm https://storage.googleapis.com/kubernetes-release/release/v1.24.3/bin/linux/amd64/kubeadm error: No SHA256 found for kubeadm. v1.24.3 is not supported.
    

    The message "No SHA256 found for kubeadm. v1.24.3 is not supported" suggests that this kk build has no checksum registered for the v1.24.3 binaries (possibly the object storage is missing the checksum file), so Kubernetes 1.24.3 cannot be installed from binaries for now.
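    The KubeKey README documents a flag that lists exactly which Kubernetes versions the current kk binary supports, which is a quick way to confirm this kind of problem (output omitted here):

    ./kk version --show-supported-k8s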

    After several more attempts I found that Kubernetes 1.24.1 can currently be installed:

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# ./kk create cluster --with-kubernetes v1.24.1 --with-kubesphere v3.3.0 --container-manager containerd
    
    
     _   __      _          _   __
    | | / /     | |        | | / /
    | |/ / _   _| |__   ___| |/ /  ___ _   _
    |    \| | | | '_ \ / _ \    \ / _ \ | | |
    | |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
    \_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                        __/ |
                                       |___/
    
    07:55:51 CST [GreetingsModule] Greetings
    07:55:51 CST message: [zhiyong-ksp1]
    Greetings, KubeKey!
    07:55:51 CST success: [zhiyong-ksp1]
    07:55:51 CST [NodePreCheckModule] A pre-check on nodes
    07:55:51 CST success: [zhiyong-ksp1]
    07:55:51 CST [ConfirmModule] Display confirmation form
    +--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
    | name         | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
    +--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
    | zhiyong-ksp1 | y    | y    | y       | y        | y     |       |         | y         |        |        | y          |            |             |                  | CST 07:55:51 |
    +--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
    
    This is a simple check of your environment.
    Before installation, ensure that your machines meet all requirements specified at
    https://github.com/kubesphere/kubekey#requirements-and-recommendations
    
    Continue this installation? [yes/no]: yes
    07:55:53 CST success: [LocalHost]
    07:55:53 CST [NodeBinariesModule] Download installation binaries
    07:55:53 CST message: [localhost]
    downloading amd64 kubeadm v1.24.1 ...
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 42.3M  100 42.3M    0     0  1019k      0  0:00:42  0:00:42 --:--:-- 1061k
    07:56:36 CST message: [localhost]
    downloading amd64 kubelet v1.24.1 ...
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  110M  100  110M    0     0  1022k      0  0:01:51  0:01:51 --:--:-- 1141k
    07:58:29 CST message: [localhost]
    downloading amd64 kubectl v1.24.1 ...
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 43.5M  100 43.5M    0     0   994k      0  0:00:44  0:00:44 --:--:--  997k
    07:59:14 CST message: [localhost]
    downloading amd64 helm v3.6.3 ...
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 43.0M  100 43.0M    0     0  1014k      0  0:00:43  0:00:43 --:--:-- 1026k
    07:59:58 CST message: [localhost]
    downloading amd64 kubecni v0.9.1 ...
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 37.9M  100 37.9M    0     0  1011k      0  0:00:38  0:00:38 --:--:-- 1027k
    08:00:37 CST message: [localhost]
    downloading amd64 crictl v1.24.0 ...
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 13.8M  100 13.8M    0     0  1012k      0  0:00:14  0:00:14 --:--:-- 1051k
    08:00:51 CST message: [localhost]
    downloading amd64 etcd v3.4.13 ...
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 16.5M  100 16.5M    0     0  1022k      0  0:00:16  0:00:16 --:--:-- 1070k
    08:01:08 CST message: [localhost]
    downloading amd64 containerd 1.6.4 ...
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 42.3M  100 42.3M    0     0  1011k      0  0:00:42  0:00:42 --:--:-- 1087k
    08:01:51 CST message: [localhost]
    downloading amd64 runc v1.1.1 ...
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 9194k  100 9194k    0     0   999k      0  0:00:09  0:00:09 --:--:-- 1089k
    08:02:01 CST success: [LocalHost]
    08:02:01 CST [ConfigureOSModule] Prepare to init OS
    08:02:01 CST success: [zhiyong-ksp1]
    08:02:01 CST [ConfigureOSModule] Generate init os script
    08:02:01 CST success: [zhiyong-ksp1]
    08:02:01 CST [ConfigureOSModule] Exec init os script
    08:02:03 CST stdout: [zhiyong-ksp1]
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-arptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_local_reserved_ports = 30000-32767
    vm.max_map_count = 262144
    vm.swappiness = 1
    fs.inotify.max_user_instances = 524288
    kernel.pid_max = 65535
    08:02:03 CST success: [zhiyong-ksp1]
    08:02:03 CST [ConfigureOSModule] configure the ntp server for each node
    08:02:03 CST skipped: [zhiyong-ksp1]
    08:02:03 CST [KubernetesStatusModule] Get kubernetes cluster status
    08:02:03 CST success: [zhiyong-ksp1]
    08:02:03 CST [InstallContainerModule] Sync containerd binaries
    08:02:06 CST success: [zhiyong-ksp1]
    08:02:06 CST [InstallContainerModule] Sync crictl binaries
    08:02:07 CST success: [zhiyong-ksp1]
    08:02:07 CST [InstallContainerModule] Generate containerd service
    08:02:07 CST success: [zhiyong-ksp1]
    08:02:07 CST [InstallContainerModule] Generate containerd config
    08:02:07 CST success: [zhiyong-ksp1]
    08:02:07 CST [InstallContainerModule] Generate crictl config
    08:02:07 CST success: [zhiyong-ksp1]
    08:02:07 CST [InstallContainerModule] Enable containerd
    08:02:08 CST success: [zhiyong-ksp1]
    08:02:08 CST [PullModule] Start to pull images on all nodes
    08:02:08 CST message: [zhiyong-ksp1]
    downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.7
    08:02:11 CST message: [zhiyong-ksp1]
    downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.24.1
    08:02:22 CST message: [zhiyong-ksp1]
    downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.24.1
    08:02:31 CST message: [zhiyong-ksp1]
    downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.24.1
    08:02:38 CST message: [zhiyong-ksp1]
    downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.24.1
    08:02:49 CST message: [zhiyong-ksp1]
    downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
    08:02:54 CST message: [zhiyong-ksp1]
    downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
    08:03:06 CST message: [zhiyong-ksp1]
    downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
    08:03:21 CST message: [zhiyong-ksp1]
    downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
    08:03:49 CST message: [zhiyong-ksp1]
    downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
    08:04:10 CST message: [zhiyong-ksp1]
    downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
    08:04:13 CST success: [zhiyong-ksp1]
    08:04:13 CST [ETCDPreCheckModule] Get etcd status
    08:04:13 CST success: [zhiyong-ksp1]
    08:04:13 CST [CertsModule] Fetch etcd certs
    08:04:13 CST success: [zhiyong-ksp1]
    08:04:13 CST [CertsModule] Generate etcd Certs
    [certs] Generating "ca" certificate and key
    [certs] admin-zhiyong-ksp1 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost zhiyong-ksp1] and IPs [127.0.0.1 ::1 192.168.88.20]
    [certs] member-zhiyong-ksp1 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost zhiyong-ksp1] and IPs [127.0.0.1 ::1 192.168.88.20]
    [certs] node-zhiyong-ksp1 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost zhiyong-ksp1] and IPs [127.0.0.1 ::1 192.168.88.20]
    08:04:15 CST success: [LocalHost]
    08:04:15 CST [CertsModule] Synchronize certs file
    08:04:16 CST success: [zhiyong-ksp1]
    08:04:16 CST [CertsModule] Synchronize certs file to master
    08:04:16 CST skipped: [zhiyong-ksp1]
    08:04:16 CST [InstallETCDBinaryModule] Install etcd using binary
    08:04:17 CST success: [zhiyong-ksp1]
    08:04:17 CST [InstallETCDBinaryModule] Generate etcd service
    08:04:17 CST success: [zhiyong-ksp1]
    08:04:17 CST [InstallETCDBinaryModule] Generate access address
    08:04:17 CST success: [zhiyong-ksp1]
    08:04:17 CST [ETCDConfigureModule] Health check on exist etcd
    08:04:17 CST skipped: [zhiyong-ksp1]
    08:04:17 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
    08:04:17 CST success: [zhiyong-ksp1]
    08:04:17 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
    08:04:17 CST success: [zhiyong-ksp1]
    08:04:17 CST [ETCDConfigureModule] Restart etcd
    08:04:22 CST stdout: [zhiyong-ksp1]
    Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
    08:04:22 CST success: [zhiyong-ksp1]
    08:04:22 CST [ETCDConfigureModule] Health check on all etcd
    08:04:22 CST success: [zhiyong-ksp1]
    08:04:22 CST [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
    08:04:22 CST success: [zhiyong-ksp1]
    08:04:22 CST [ETCDConfigureModule] Health check on all etcd
    08:04:22 CST success: [zhiyong-ksp1]
    08:04:22 CST [ETCDBackupModule] Backup etcd data regularly
    08:04:22 CST success: [zhiyong-ksp1]
    08:04:22 CST [ETCDBackupModule] Generate backup ETCD service
    08:04:23 CST success: [zhiyong-ksp1]
    08:04:23 CST [ETCDBackupModule] Generate backup ETCD timer
    08:04:23 CST success: [zhiyong-ksp1]
    08:04:23 CST [ETCDBackupModule] Enable backup etcd service
    08:04:23 CST success: [zhiyong-ksp1]
    08:04:23 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
    08:04:33 CST success: [zhiyong-ksp1]
    08:04:33 CST [InstallKubeBinariesModule] Synchronize kubelet
    08:04:33 CST success: [zhiyong-ksp1]
    08:04:33 CST [InstallKubeBinariesModule] Generate kubelet service
    08:04:34 CST success: [zhiyong-ksp1]
    08:04:34 CST [InstallKubeBinariesModule] Enable kubelet service
    08:04:34 CST success: [zhiyong-ksp1]
    08:04:34 CST [InstallKubeBinariesModule] Generate kubelet env
    08:04:34 CST success: [zhiyong-ksp1]
    08:04:34 CST [InitKubernetesModule] Generate kubeadm config
    08:04:35 CST success: [zhiyong-ksp1]
    08:04:35 CST [InitKubernetesModule] Init cluster using kubeadm
    08:04:51 CST stdout: [zhiyong-ksp1]
    W0808 08:04:35.263265    5320 common.go:83] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
    W0808 08:04:35.264800    5320 common.go:83] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
    W0808 08:04:35.267646    5320 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
    [init] Using Kubernetes version: v1.24.1
    [preflight] Running pre-flight checks
            [WARNING FileExisting-ethtool]: ethtool not found in system path
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost zhiyong-ksp1 zhiyong-ksp1.cluster.local] and IPs [10.233.0.1 192.168.88.20 127.0.0.1]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] External etcd mode: Skipping etcd/ca certificate authority generation
    [certs] External etcd mode: Skipping etcd/server certificate generation
    [certs] External etcd mode: Skipping etcd/peer certificate generation
    [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
    [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 11.004889 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node zhiyong-ksp1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    [mark-control-plane] Marking the node zhiyong-ksp1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
    [bootstrap-token] Using token: 1g0q46.cdguvbypxok882ne
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join lb.kubesphere.local:6443 --token 1g0q46.cdguvbypxok882ne \
            --discovery-token-ca-cert-hash sha256:b44eb7d34699d4efc2b51013d5e217d97e977ebbed7c77ac27934a0883501c02 \
            --control-plane
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join lb.kubesphere.local:6443 --token 1g0q46.cdguvbypxok882ne \
            --discovery-token-ca-cert-hash sha256:b44eb7d34699d4efc2b51013d5e217d97e977ebbed7c77ac27934a0883501c02
    08:04:51 CST success: [zhiyong-ksp1]
    08:04:51 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config
    08:04:51 CST success: [zhiyong-ksp1]
    08:04:51 CST [InitKubernetesModule] Remove master taint
    08:04:51 CST stdout: [zhiyong-ksp1]
    node/zhiyong-ksp1 untainted
    08:04:51 CST stdout: [zhiyong-ksp1]
    node/zhiyong-ksp1 untainted
    08:04:51 CST success: [zhiyong-ksp1]
    08:04:51 CST [InitKubernetesModule] Add worker label
    08:04:51 CST stdout: [zhiyong-ksp1]
    node/zhiyong-ksp1 labeled
    08:04:51 CST success: [zhiyong-ksp1]
    08:04:51 CST [ClusterDNSModule] Generate coredns service
    08:04:52 CST success: [zhiyong-ksp1]
    08:04:52 CST [ClusterDNSModule] Override coredns service
    08:04:52 CST stdout: [zhiyong-ksp1]
    service "kube-dns" deleted
    08:04:53 CST stdout: [zhiyong-ksp1]
    service/coredns created
    Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
    clusterrole.rbac.authorization.k8s.io/system:coredns configured
    08:04:53 CST success: [zhiyong-ksp1]
    08:04:53 CST [ClusterDNSModule] Generate nodelocaldns
    08:04:54 CST success: [zhiyong-ksp1]
    08:04:54 CST [ClusterDNSModule] Deploy nodelocaldns
    08:04:54 CST stdout: [zhiyong-ksp1]
    serviceaccount/nodelocaldns created
    daemonset.apps/nodelocaldns created
    08:04:54 CST success: [zhiyong-ksp1]
    08:04:54 CST [ClusterDNSModule] Generate nodelocaldns configmap
    08:04:54 CST success: [zhiyong-ksp1]
    08:04:54 CST [ClusterDNSModule] Apply nodelocaldns configmap
    08:04:55 CST stdout: [zhiyong-ksp1]
    configmap/nodelocaldns created
    08:04:55 CST success: [zhiyong-ksp1]
    08:04:55 CST [KubernetesStatusModule] Get kubernetes cluster status
    08:04:55 CST stdout: [zhiyong-ksp1]
    v1.24.1
    08:04:55 CST stdout: [zhiyong-ksp1]
    zhiyong-ksp1   v1.24.1   [map[address:192.168.88.20 type:InternalIP] map[address:zhiyong-ksp1 type:Hostname]]
    08:04:56 CST stdout: [zhiyong-ksp1]
    [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
    [upload-certs] Using certificate key:
    a8f69e35232215a9e8129098a4a7eede8cacaf554073374a05d970e52b156403
    08:04:56 CST stdout: [zhiyong-ksp1]
    secret/kubeadm-certs patched
    08:04:57 CST stdout: [zhiyong-ksp1]
    secret/kubeadm-certs patched
    08:04:57 CST stdout: [zhiyong-ksp1]
    secret/kubeadm-certs patched
    08:04:57 CST stdout: [zhiyong-ksp1]
    zosd83.v0iafx5s1n5d3ax7
    08:04:57 CST success: [zhiyong-ksp1]
    08:04:57 CST [JoinNodesModule] Generate kubeadm config
    08:04:57 CST skipped: [zhiyong-ksp1]
    08:04:57 CST [JoinNodesModule] Join control-plane node
    08:04:57 CST skipped: [zhiyong-ksp1]
    08:04:57 CST [JoinNodesModule] Join worker node
    08:04:57 CST skipped: [zhiyong-ksp1]
    08:04:57 CST [JoinNodesModule] Copy admin.conf to ~/.kube/config
    08:04:57 CST skipped: [zhiyong-ksp1]
    08:04:57 CST [JoinNodesModule] Remove master taint
    08:04:57 CST skipped: [zhiyong-ksp1]
    08:04:57 CST [JoinNodesModule] Add worker label to master
    08:04:57 CST skipped: [zhiyong-ksp1]
    08:04:57 CST [JoinNodesModule] Synchronize kube config to worker
    08:04:57 CST skipped: [zhiyong-ksp1]
    08:04:57 CST [JoinNodesModule] Add worker label to worker
    08:04:57 CST skipped: [zhiyong-ksp1]
    08:04:57 CST [DeployNetworkPluginModule] Generate calico
    08:04:57 CST success: [zhiyong-ksp1]
    08:04:57 CST [DeployNetworkPluginModule] Deploy calico
    08:04:58 CST stdout: [zhiyong-ksp1]
    configmap/calico-config created
    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrole.rbac.authorization.k8s.io/calico-node created
    clusterrolebinding.rbac.authorization.k8s.io/calico-node created
    daemonset.apps/calico-node created
    serviceaccount/calico-node created
    deployment.apps/calico-kube-controllers created
    serviceaccount/calico-kube-controllers created
    poddisruptionbudget.policy/calico-kube-controllers created
    08:04:58 CST success: [zhiyong-ksp1]
    08:04:58 CST [ConfigureKubernetesModule] Configure kubernetes
    08:04:58 CST success: [zhiyong-ksp1]
    08:04:58 CST [ChownModule] Chown user $HOME/.kube dir
    08:04:58 CST success: [zhiyong-ksp1]
    08:04:58 CST [AutoRenewCertsModule] Generate k8s certs renew script
    08:04:58 CST success: [zhiyong-ksp1]
    08:04:58 CST [AutoRenewCertsModule] Generate k8s certs renew service
    08:04:58 CST success: [zhiyong-ksp1]
    08:04:58 CST [AutoRenewCertsModule] Generate k8s certs renew timer
    08:04:58 CST success: [zhiyong-ksp1]
    08:04:58 CST [AutoRenewCertsModule] Enable k8s certs renew service
    08:04:58 CST success: [zhiyong-ksp1]
    08:04:58 CST [SaveKubeConfigModule] Save kube config as a configmap
    08:04:58 CST success: [LocalHost]
    08:04:58 CST [AddonsModule] Install addons
    08:04:58 CST success: [LocalHost]
    08:04:58 CST [DeployStorageClassModule] Generate OpenEBS manifest
    08:04:59 CST success: [zhiyong-ksp1]
    08:04:59 CST [DeployStorageClassModule] Deploy OpenEBS as cluster default StorageClass
    08:05:00 CST success: [zhiyong-ksp1]
    08:05:00 CST [DeployKubeSphereModule] Generate KubeSphere ks-installer crd manifests
    08:05:00 CST success: [zhiyong-ksp1]
    08:05:00 CST [DeployKubeSphereModule] Apply ks-installer
    08:05:01 CST stdout: [zhiyong-ksp1]
    namespace/kubesphere-system created
    serviceaccount/ks-installer created
    customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
    clusterrole.rbac.authorization.k8s.io/ks-installer created
    clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
    deployment.apps/ks-installer created
    08:05:01 CST success: [zhiyong-ksp1]
    08:05:01 CST [DeployKubeSphereModule] Add config to ks-installer manifests
    08:05:01 CST success: [zhiyong-ksp1]
    08:05:01 CST [DeployKubeSphereModule] Create the kubesphere namespace
    08:05:01 CST success: [zhiyong-ksp1]
    08:05:01 CST [DeployKubeSphereModule] Setup ks-installer config
    08:05:01 CST stdout: [zhiyong-ksp1]
    secret/kube-etcd-client-certs created
    08:05:01 CST success: [zhiyong-ksp1]
    08:05:01 CST [DeployKubeSphereModule] Apply ks-installer
    08:05:04 CST stdout: [zhiyong-ksp1]
    namespace/kubesphere-system unchanged
    serviceaccount/ks-installer unchanged
    customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
    clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
    clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
    deployment.apps/ks-installer unchanged
    clusterconfiguration.installer.kubesphere.io/ks-installer created
    

    At this point the dual E5s offer plenty of cores, but the installer is nowhere near saturating them (clock speed, not core count, is the limit):

    (screenshot)

    Wait a while longer:

    08:05:04 CST success: [zhiyong-ksp1]
    #####################################################
    ###              Welcome to KubeSphere!           ###
    #####################################################
    
    Console: http://192.168.88.20:30880
    Account: admin
    Password: P@88w0rd
    
    NOTES:
      1. After you log into the console, please check the
         monitoring status of service components in
         "Cluster Management". If any service is not
         ready, please wait patiently until all components
         are up and running.
      2. Please change the default password after login.
    
    #####################################################
    https://kubesphere.io             2022-08-08 08:13:00
    #####################################################
    08:13:02 CST success: [zhiyong-ksp1]
    08:13:02 CST Pipeline[CreateClusterPipeline] execute successfully
    Installation is complete.
    
    Please check the result using the command:
    
            kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
    
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall#
    

    The installation has now succeeded.
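    Before opening the console, a quick sanity check that the node reports Ready (output omitted):

    kubectl get nodes -o wide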

    Verifying KubeSphere

    Using the web UI

    Log in as prompted:

    http://192.168.88.20:30880
    admin
    P@88w0rd
    new password after the first login: Aa123456
    

    At this point (the screenshot below is from my first attempt, which installed Kubernetes 1.24.0):

    (screenshot)

    KubeSphere version 3.3.0 is displayed.

    (screenshot)

    But... the dashboard is completely blank, so clearly something is wrong.

    Verifying Kubernetes

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# kubectl get pod --all-namespaces
    NAMESPACE                      NAME                                               READY   STATUS         RESTARTS   AGE
    kube-system                    calico-kube-controllers-f9f9bbcc9-2v7lm            1/1     Running        0          11m
    kube-system                    calico-node-4mgc7                                  1/1     Running        0          11m
    kube-system                    coredns-f657fccfd-2gw7h                            1/1     Running        0          11m
    kube-system                    coredns-f657fccfd-pflwf                            1/1     Running        0          11m
    kube-system                    kube-apiserver-zhiyong-ksp1                        1/1     Running        0          11m
    kube-system                    kube-controller-manager-zhiyong-ksp1               1/1     Running        0          11m
    kube-system                    kube-proxy-cn68l                                   1/1     Running        0          11m
    kube-system                    kube-scheduler-zhiyong-ksp1                        1/1     Running        0          11m
    kube-system                    nodelocaldns-96gtw                                 1/1     Running        0          11m
    kube-system                    openebs-localpv-provisioner-68db4d895d-p9527       0/1     ErrImagePull   0          11m
    kube-system                    snapshot-controller-0                              1/1     Running        0          10m
    kubesphere-controls-system     default-http-backend-587748d6b4-ccg59              1/1     Running        0          8m39s
    kubesphere-controls-system     kubectl-admin-5d588c455b-82cnk                     1/1     Running        0          3m51s
    kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running        0          6m58s
    kubesphere-monitoring-system   kube-state-metrics-6d6786b44-bbb4f                 3/3     Running        0          7m2s
    kubesphere-monitoring-system   node-exporter-8sz74                                2/2     Running        0          7m3s
    kubesphere-monitoring-system   notification-manager-deployment-6f8c66ff88-pt4l8   2/2     Running        0          6m13s
    kubesphere-monitoring-system   notification-manager-operator-6455b45546-nkmx8     2/2     Running        0          6m42s
    kubesphere-monitoring-system   prometheus-k8s-0                                   0/2     Pending        0          6m57s
    kubesphere-monitoring-system   prometheus-operator-66d997dccf-c968c               2/2     Running        0          7m5s
    kubesphere-system              ks-apiserver-6b9bcb86f4-hsdzs                      1/1     Running        0          8m39s
    kubesphere-system              ks-console-599c49d8f6-ngb6b                        1/1     Running        0          8m39s
    kubesphere-system              ks-controller-manager-66747fcddc-r7cpt             1/1     Running        0          8m39s
    kubesphere-system              ks-installer-5fd8bd46b8-dzhbb                      1/1     Running        0          11m
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall#
    
    

    Calico and the other core components' Pods start normally, but openebs-localpv-provisioner-68db4d895d-p9527 fails while pulling its image, and prometheus-k8s-0 never comes up (it stays Pending).
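    To see why prometheus-k8s-0 stays Pending, describing the Pod shows the scheduler's reason; my assumption is that its PersistentVolumeClaim cannot be provisioned while the OpenEBS provisioner is down, so fixing OpenEBS first should unblock it:

    kubectl describe pod prometheus-k8s-0 -n kubesphere-monitoring-system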

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# kubectl version
    WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
    Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:26:19Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"linux/amd64"}
    Kustomize Version: v4.5.4
    Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:18:48Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"linux/amd64"}
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall#
    
    

    The Kubernetes version is 1.24.1.

    Fixing the failed Pods

    Check which node the failures are on

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# kubectl get pods --all-namespaces -o wide
    NAMESPACE                      NAME                                               READY   STATUS             RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
    kube-system                    calico-kube-controllers-f9f9bbcc9-2v7lm            1/1     Running            0          27m   10.233.107.2    zhiyong-ksp1   <none>           <none>
    kube-system                    calico-node-4mgc7                                  1/1     Running            0          27m   192.168.88.20   zhiyong-ksp1   <none>           <none>
    kube-system                    coredns-f657fccfd-2gw7h                            1/1     Running            0          27m   10.233.107.4    zhiyong-ksp1   <none>           <none>
    kube-system                    coredns-f657fccfd-pflwf                            1/1     Running            0          27m   10.233.107.1    zhiyong-ksp1   <none>           <none>
    kube-system                    kube-apiserver-zhiyong-ksp1                        1/1     Running            0          28m   192.168.88.20   zhiyong-ksp1   <none>           <none>
    kube-system                    kube-controller-manager-zhiyong-ksp1               1/1     Running            0          28m   192.168.88.20   zhiyong-ksp1   <none>           <none>
    kube-system                    kube-proxy-cn68l                                   1/1     Running            0          27m   192.168.88.20   zhiyong-ksp1   <none>           <none>
    kube-system                    kube-scheduler-zhiyong-ksp1                        1/1     Running            0          28m   192.168.88.20   zhiyong-ksp1   <none>           <none>
    kube-system                    nodelocaldns-96gtw                                 1/1     Running            0          27m   192.168.88.20   zhiyong-ksp1   <none>           <none>
    kube-system                    openebs-localpv-provisioner-68db4d895d-p9527       0/1     ImagePullBackOff   0          27m   10.233.107.3    zhiyong-ksp1   <none>           <none>
    kube-system                    snapshot-controller-0                              1/1     Running            0          26m   10.233.107.6    zhiyong-ksp1   <none>           <none>
    kubesphere-controls-system     default-http-backend-587748d6b4-ccg59              1/1     Running            0          24m   10.233.107.7    zhiyong-ksp1   <none>           <none>
    kubesphere-controls-system     kubectl-admin-5d588c455b-82cnk                     1/1     Running            0          20m   10.233.107.16   zhiyong-ksp1   <none>           <none>
    kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running            0          23m   10.233.107.11   zhiyong-ksp1   <none>           <none>
    kubesphere-monitoring-system   kube-state-metrics-6d6786b44-bbb4f                 3/3     Running            0          23m   10.233.107.10   zhiyong-ksp1   <none>           <none>
    kubesphere-monitoring-system   node-exporter-8sz74                                2/2     Running            0          23m   192.168.88.20   zhiyong-ksp1   <none>           <none>
    kubesphere-monitoring-system   notification-manager-deployment-6f8c66ff88-pt4l8   2/2     Running            0          22m   10.233.107.13   zhiyong-ksp1   <none>           <none>
    kubesphere-monitoring-system   notification-manager-operator-6455b45546-nkmx8     2/2     Running            0          22m   10.233.107.12   zhiyong-ksp1   <none>           <none>
    kubesphere-monitoring-system   prometheus-k8s-0                                   0/2     Pending            0          23m   <none>          <none>         <none>           <none>
    kubesphere-monitoring-system   prometheus-operator-66d997dccf-c968c               2/2     Running            0          23m   10.233.107.9    zhiyong-ksp1   <none>           <none>
    kubesphere-system              ks-apiserver-6b9bcb86f4-hsdzs                      1/1     Running            0          24m   10.233.107.14   zhiyong-ksp1   <none>           <none>
    kubesphere-system              ks-console-599c49d8f6-ngb6b                        1/1     Running            0          24m   10.233.107.8    zhiyong-ksp1   <none>           <none>
    kubesphere-system              ks-controller-manager-66747fcddc-r7cpt             1/1     Running            0          24m   10.233.107.15   zhiyong-ksp1   <none>           <none>
    kubesphere-system              ks-installer-5fd8bd46b8-dzhbb                      1/1     Running            0          27m   10.233.107.5    zhiyong-ksp1   <none>           <none>
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall#
    
    

    Since this is an all-in-one setup, everything necessarily runs on the single node zhiyong-ksp1.

    Fixing OpenEBS

    Check the error events

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# kubectl describe pod openebs-localpv-provisioner-68db4d895d-p9527 -n kube-system
    Name:         openebs-localpv-provisioner-68db4d895d-p9527
    Namespace:    kube-system
    Priority:     0
    Node:         zhiyong-ksp1/192.168.88.20
    Start Time:   Mon, 08 Aug 2022 08:05:12 +0800
    Labels:       name=openebs-localpv-provisioner
                  openebs.io/component-name=openebs-localpv-provisioner
                  openebs.io/version=3.3.0
                  pod-template-hash=68db4d895d
    Annotations:  cni.projectcalico.org/containerID: 9a6d89d617fd7f68fdb00cad356d7104c475822c0d859a4b97e93ec2659a1c21
                  cni.projectcalico.org/podIP: 10.233.107.3/32
                  cni.projectcalico.org/podIPs: 10.233.107.3/32
    Status:       Pending
    IP:           10.233.107.3
    IPs:
      IP:           10.233.107.3
    Controlled By:  ReplicaSet/openebs-localpv-provisioner-68db4d895d
    Containers:
      openebs-provisioner-hostpath:
        Container ID:
        Image:          registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
        Image ID:
        Port:           <none>
        Host Port:      <none>
        State:          Waiting
          Reason:       ImagePullBackOff
        Ready:          False
        Restart Count:  0
        Liveness:       exec [sh -c test $(pgrep -c "^provisioner-loc.*") = 1] delay=30s timeout=1s period=60s #success=1 #failure=3
        Environment:
          NODE_NAME:                     (v1:spec.nodeName)
          OPENEBS_NAMESPACE:            kube-system (v1:metadata.namespace)
          OPENEBS_SERVICE_ACCOUNT:       (v1:spec.serviceAccountName)
          OPENEBS_IO_ENABLE_ANALYTICS:  true
          OPENEBS_IO_INSTALLER_TYPE:    openebs-operator-lite
          OPENEBS_IO_HELPER_IMAGE:      registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p88sf (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             False
      ContainersReady   False
      PodScheduled      True
    Volumes:
      kube-api-access-p88sf:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   BestEffort
    Node-Selectors:              <none>
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason            Age                   From               Message
      ----     ------            ----                  ----               -------
      Warning  FailedScheduling  19m                   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
      Normal   Scheduled         19m                   default-scheduler  Successfully assigned kube-system/openebs-localpv-provisioner-68db4d895d-p9527 to zhiyong-ksp1
      Normal   Pulling           17m (x4 over 19m)     kubelet            Pulling image "registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0"
      Warning  Failed            17m (x4 over 19m)     kubelet            Failed to pull image "registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0": rpc error: code = NotFound desc = failed to pull and unpack image "registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0": failed to resolve reference "registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0": registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0: not found
      Warning  Failed            17m (x4 over 19m)     kubelet            Error: ErrImagePull
      Warning  Failed            17m (x6 over 19m)     kubelet            Error: ImagePullBackOff
      Normal   BackOff           4m14s (x63 over 19m)  kubelet            Back-off pulling image "registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0"
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall#
    
    

    The provisioner-localpv:3.3.0 image cannot be found in the Aliyun registry, so pulling and unpacking it fails and the container never starts (the Pod itself was scheduled, as the Events show).

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# docker
    
    Command 'docker' not found, but can be installed with:
    
    snap install docker     # version 20.10.14, or
    apt  install docker.io  # version 20.10.12-0ubuntu2~20.04.1
    
    See 'snap info docker' for additional versions.
    
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall#
    
    

    Since K8S 1.24 has removed DockerShim, Pods clearly run on the containerd CRI runtime even though Docker is not installed on this host. KK itself is written in Go, so it is a compiled binary that cannot simply be opened as text to see which images it pulls; the missing image has to be pulled manually.
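    As a quick sanity check (a sketch, not taken from the original run), the runtime in use can be confirmed from the node object and from crictl itself:

    # The CONTAINER-RUNTIME column should show containerd://<version>
    kubectl get nodes -o wide
    # crictl talks to the CRI socket directly and prints the runtime name and version
    crictl version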

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# kubectl get deploy/openebs-localpv-provisioner -n kube-system -o yaml | grep imagePull
          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"name":"openebs-localpv-provisioner","openebs.io/component-name":"openebs-localpv-provisioner","openebs.io/version":"3.3.0"},"name":"openebs-localpv-provisioner","namespace":"kube-system"},"spec":{"replicas":1,"selector":{"matchLabels":{"name":"openebs-localpv-provisioner","openebs.io/component-name":"openebs-localpv-provisioner"}},"strategy":{"type":"Recreate"},"template":{"metadata":{"labels":{"name":"openebs-localpv-provisioner","openebs.io/component-name":"openebs-localpv-provisioner","openebs.io/version":"3.3.0"}},"spec":{"containers":[{"env":[{"name":"NODE_NAME","valueFrom":{"fieldRef":{"fieldPath":"spec.nodeName"}}},{"name":"OPENEBS_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"OPENEBS_SERVICE_ACCOUNT","valueFrom":{"fieldRef":{"fieldPath":"spec.serviceAccountName"}}},{"name":"OPENEBS_IO_ENABLE_ANALYTICS","value":"true"},{"name":"OPENEBS_IO_INSTALLER_TYPE","value":"openebs-operator-lite"},{"name":"OPENEBS_IO_HELPER_IMAGE","value":"registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0"}],"image":"registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0","imagePullPolicy":"Always","livenessProbe":{"exec":{"command":["sh","-c","test $(pgrep -c \"^provisioner-loc.*\") = 1"]},"initialDelaySeconds":30,"periodSeconds":60},"name":"openebs-provisioner-hostpath"}],"serviceAccountName":"openebs-maya-operator"}}}}
            imagePullPolicy: Always
    
    

    As shown above, the image pull policy for this container is Always.
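    Instead of grepping the whole manifest, the image and its pull policy can also be read directly with jsonpath (a small sketch, assuming container index 0 as in the Deployment above):

    kubectl -n kube-system get deploy openebs-localpv-provisioner \
      -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}{.spec.template.spec.containers[0].imagePullPolicy}{"\n"}'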

    The same manifest, reformatted as JSON for readability:

    {
    	"apiVersion":"apps/v1",
    	"kind":"Deployment",
    	"metadata":{"annotations":{},"labels":{"name":"openebs-localpv-provisioner","openebs.io/component-name":"openebs-localpv-provisioner","openebs.io/version":"3.3.0"},"name":"openebs-localpv-provisioner","namespace":"kube-system"},
    	"spec":{
    		"replicas":1,
    		"selector":{"matchLabels":{"name":"openebs-localpv-provisioner","openebs.io/component-name":"openebs-localpv-provisioner"}},
    		"strategy":{"type":"Recreate"},
    		"template":{
    			"metadata":{"labels":{"name":"openebs-localpv-provisioner","openebs.io/component-name":"openebs-localpv-provisioner","openebs.io/version":"3.3.0"}},
    			"spec":{
    				"containers":[{
    					"env":[{"name":"NODE_NAME","valueFrom":{"fieldRef":{"fieldPath":"spec.nodeName"}}},{"name":"OPENEBS_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"OPENEBS_SERVICE_ACCOUNT","valueFrom":{"fieldRef":{"fieldPath":"spec.serviceAccountName"}}},{"name":"OPENEBS_IO_ENABLE_ANALYTICS","value":"true"},{"name":"OPENEBS_IO_INSTALLER_TYPE","value":"openebs-operator-lite"},{"name":"OPENEBS_IO_HELPER_IMAGE","value":"registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0"}],
    					"image":"registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0",
    					"imagePullPolicy":"Always",
    					"livenessProbe":{"exec":{"command":["sh","-c","test $(pgrep -c \"^provisioner-loc.*\") = 1"]},"initialDelaySeconds":30,"periodSeconds":60},
    					"name":"openebs-provisioner-hostpath"
    					}],
    				"serviceAccountName":"openebs-maya-operator"
    			}
    		}
    	}
    }
    

    It is easy to see that the image source is the Aliyun Beijing registry (registry.cn-beijing.aliyuncs.com).

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# journalctl -xefu kubelet
    8月 08 18:52:30 zhiyong-ksp1 kubelet[5836]: E0808 18:52:30.510082    5836 kuberuntime_manager.go:905] container &Container{Name:openebs-provisioner-hostpath,Image:registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPENEBS_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPENEBS_SERVICE_ACCOUNT,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.serviceAccountName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPENEBS_IO_ENABLE_ANALYTICS,Value:true,ValueFrom:nil,},EnvVar{Name:OPENEBS_IO_INSTALLER_TYPE,Value:openebs-operator-lite,ValueFrom:nil,},EnvVar{Name:OPENEBS_IO_HELPER_IMAGE,Value:registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p88sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[sh -c test $(pgrep -c "^provisioner-loc.*") = 1],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod openebs-localpv-provisioner-68db4d895d-p9527_kube-system(5fb97aa4-17f3-418d-8f00-f354a26d42c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image "registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0": failed to resolve reference "registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0": registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0: not found
    8月 08 18:52:30 zhiyong-ksp1 kubelet[5836]: E0808 18:52:30.510193    5836 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openebs-provisioner-hostpath\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0\\\": failed to resolve reference \\\"registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0\\\": registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0: not found\"" pod="kube-system/openebs-localpv-provisioner-68db4d895d-p9527" podUID=5fb97aa4-17f3-418d-8f00-f354a26d42c4
    8月 08 18:52:43 zhiyong-ksp1 kubelet[5836]: E0808 18:52:43.760395    5836 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openebs-provisioner-hostpath\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0\\\"\"" pod="kube-system/openebs-localpv-provisioner-68db4d895d-p9527" podUID=5fb97aa4-17f3-418d-8f00-f354a26d42c4
    8月 08 18:52:55 zhiyong-ksp1 kubelet[5836]: E0808 18:52:55.762115    5836 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openebs-provisioner-hostpath\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0\\\"\"" pod="kube-system/openebs-localpv-provisioner-68db4d895d-p9527" podUID=5fb97aa4-17f3-418d-8f00-f354a26d42c4
    8月 08 18:53:07 zhiyong-ksp1 kubelet[5836]: E0808 18:53:07.759765    5836 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openebs-provisioner-hostpath\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0\\\"\"" pod="kube-system/openebs-localpv-provisioner-68db4d895d-p9527" podUID=5fb97aa4-17f3-418d-8f00-f354a26d42c4
    8月 08 18:53:19 zhiyong-ksp1 kubelet[5836]: E0808 18:53:19.761798    5836 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openebs-provisioner-hostpath\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0\\\"\"" pod="kube-system/openebs-localpv-provisioner-68db4d895d-p9527" podUID=5fb97aa4-17f3-418d-8f00-f354a26d42c4
    8月 08 18:53:31 zhiyong-ksp1 kubelet[5836]: E0808 18:53:31.761308    5836 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openebs-provisioner-hostpath\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0\\\"\"" pod="kube-system/openebs-localpv-provisioner-68db4d895d-p9527" podUID=5fb97aa4-17f3-418d-8f00-f354a26d42c4
    8月 08 18:53:44 zhiyong-ksp1 kubelet[5836]: E0808 18:53:44.759751    5836 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openebs-provisioner-hostpath\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0\\\"\"" pod="kube-system/openebs-localpv-provisioner-68db4d895d-p9527" podUID=5fb97aa4-17f3-418d-8f00-f354a26d42c4
    8月 08 18:53:55 zhiyong-ksp1 kubelet[5836]: E0808 18:53:55.760970    5836 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openebs-provisioner-hostpath\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0\\\"\"" pod="kube-system/openebs-localpv-provisioner-68db4d895d-p9527" podUID=5fb97aa4-17f3-418d-8f00-f354a26d42c4
    ^C
    
    

    The kubelet log shows:

    rpc error: code = NotFound desc = failed to pull and unpack image "registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0": failed to resolve reference "registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0": registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0: not found
    

    Clearly the kubelet keeps retrying the pull, but the required provisioner-localpv:3.3.0 tag does not exist in the Aliyun registry, so the pull keeps failing. Since there is no way to ask Aliyun or the KubeSphere maintainers to publish the missing image, a workaround is needed.
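    The failure can be reproduced outside the kubelet with a one-off pull; the command below is a sketch of that check and is expected to fail with the same "not found" error:

    crictl pull registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0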

    Pulling the image with Docker

    Try pulling the image from another Ubuntu 20.04 machine that has Docker CE installed:

    zhiyong@zhiyong-vm-dev:~$ sudo su root
    [sudo] zhiyong 的密码:
    root@zhiyong-vm-dev:/home/zhiyong# docker images
    REPOSITORY                             TAG       IMAGE ID       CREATED        SIZE
    dockerfilehttpserver                   v2        16e4ba2b8b99   3 months ago   6.08MB
    zhiyongdocker/dockerfilehttpserver     v1        16e4ba2b8b99   3 months ago   6.08MB
    <none>                                 <none>    0206c712d151   3 months ago   962MB
    gohttpserver                           v1        ffdef51c9a42   3 months ago   83.9MB
    zhiyongdocker/dockerstudy/httpserver   v1        ffdef51c9a42   3 months ago   83.9MB
    zhiyongdocker/dockerstudy              v1        ffdef51c9a42   3 months ago   83.9MB
    ubuntuwithvm                           latest    428e030662bc   3 months ago   170MB
    ubuntu                                 latest    d2e4e1f51132   3 months ago   77.8MB
    golang                                 1.17      b6bd03a3a78e   3 months ago   941MB
    root@zhiyong-vm-dev:/home/zhiyong# docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
    Error response from daemon: manifest for registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0 not found: manifest unknown: manifest unknown
    root@zhiyong-vm-dev:/home/zhiyong# docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv
    Using default tag: latest
    Error response from daemon: manifest for registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:latest not found: manifest unknown: manifest unknown
    root@zhiyong-vm-dev:/home/zhiyong#
    
    

    Neither the 3.3.0 tag nor latest exists in that repository, so the problem is almost certainly the missing image itself rather than the local setup.

    After K8S 1.24 dropped DockerShim, with containerd as the runtime there is no docker CLI on the node; use crictl on the command line instead:

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# crictl ps
    CONTAINER           IMAGE               CREATED             STATE               NAME                            ATTEMPT             POD ID              POD
    4d93fcf7938b1       30c7baa8e18c0       9 hours ago         Running             kubectl                         0                   a67b9c8b9c32b       kubectl-admin-5d588c455b-82cnk
    37b64e1ef1a93       41e1783887b2e       9 hours ago         Running             ks-controller-manager           0                   5dffb8c8434ad       ks-controller-manager-66747fcddc-r7cpt
    9764c2e6fe873       18b286f71e299       9 hours ago         Running             ks-apiserver                    0                   8984266bbc961       ks-apiserver-6b9bcb86f4-hsdzs
    5ab66d71a2559       4b47c43ec6ab6       9 hours ago         Running             tenant                          0                   e264f4a29075a       notification-manager-deployment-6f8c66ff88-pt4l8
    120199a0b5d08       b8b2f6b3790fe       9 hours ago         Running             notification-manager            0                   e264f4a29075a       notification-manager-deployment-6f8c66ff88-pt4l8
    1a67c02e2694d       08ca8def2520f       9 hours ago         Running             notification-manager-operator   0                   07e33e29e6719       notification-manager-operator-6455b45546-nkmx8
    21c18185d9c74       7c63de88523a9       9 hours ago         Running             config-reloader                 0                   2ce0e7cbac8c6       alertmanager-main-0
    3b24b2e8474d2       ad393d6a4d1b1       9 hours ago         Running             kube-rbac-proxy                 0                   07e33e29e6719       notification-manager-operator-6455b45546-nkmx8
    a591534097ca5       29589495df8d9       9 hours ago         Running             kube-rbac-proxy-self            0                   32070a3c31dbf       kube-state-metrics-6d6786b44-bbb4f
    cb220ca214499       29589495df8d9       9 hours ago         Running             kube-rbac-proxy-main            0                   32070a3c31dbf       kube-state-metrics-6d6786b44-bbb4f
    60d8711f43342       29589495df8d9       9 hours ago         Running             kube-rbac-proxy                 0                   0f31de65528d9       node-exporter-8sz74
    8155b2e24bad4       ba2b418f427c0       9 hours ago         Running             alertmanager                    0                   2ce0e7cbac8c6       alertmanager-main-0
    54215afd8e5f0       29589495df8d9       9 hours ago         Running             kube-rbac-proxy                 0                   be7a351ba2809       prometheus-operator-66d997dccf-c968c
    af70a513bd639       df2bb3f0d0cdc       9 hours ago         Running             kube-state-metrics              0                   32070a3c31dbf       kube-state-metrics-6d6786b44-bbb4f
    919989a66c912       1dbe0e9319764       9 hours ago         Running             node-exporter                   0                   0f31de65528d9       node-exporter-8sz74
    140daabff8690       b30c215b787f5       9 hours ago         Running             prometheus-operator             0                   be7a351ba2809       prometheus-operator-66d997dccf-c968c
    b9e9d61fafe4a       846921f0fe0e5       9 hours ago         Running             default-http-backend            0                   3c505ebc4907f       default-http-backend-587748d6b4-ccg59
    dae3fca244be3       ece5e4e72a503       9 hours ago         Running             ks-console                      0                   0a9ed0f01eb48       ks-console-599c49d8f6-ngb6b
    b7e0535450cd2       f1d8a00ae690f       9 hours ago         Running             snapshot-controller             0                   bbc98c5c00b1a       snapshot-controller-0
    a11458b17ec68       871885288439b       9 hours ago         Running             installer                       0                   35d12673dc26a       ks-installer-5fd8bd46b8-dzhbb
    94d6e30312500       a4ca41631cc7a       9 hours ago         Running             coredns                         0                   45d8f2920b5c4       coredns-f657fccfd-2gw7h
    812701fcc9096       ec95788d0f725       9 hours ago         Running             calico-kube-controllers         0                   ffee300da6df0       calico-kube-controllers-f9f9bbcc9-2v7lm
    3744e0c7dcf29       a4ca41631cc7a       9 hours ago         Running             coredns                         0                   ef89905735ecb       coredns-f657fccfd-pflwf
    fdf5c906781a4       a3447b26d32c7       9 hours ago         Running             calico-node                     0                   8b4fa7e469e90       calico-node-4mgc7
    a7e8689d566a3       beb86f5d8e6cd       9 hours ago         Running             kube-proxy                      0                   a289903551065       kube-proxy-cn68l
    9cce5f0851bc6       5340ba194ec91       9 hours ago         Running             node-cache                      0                   2d1012b05a992       nodelocaldns-96gtw
    7c501c1148040       b4ea7e648530d       9 hours ago         Running             kube-controller-manager         0                   fb8089b9b6a17       kube-controller-manager-zhiyong-ksp1
    e332cfd7d6d85       e9f4b425f9192       9 hours ago         Running             kube-apiserver                  0                   225cbc996e2a7       kube-apiserver-zhiyong-ksp1
    cb48f9235d089       18688a72645c5       9 hours ago         Running             kube-scheduler                  0                   9218eee89f849       kube-scheduler-zhiyong-ksp1
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# crictl images
    IMAGE                                                                         TAG                 IMAGE ID            SIZE
    registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager                    v0.23.0             ba2b418f427c0       26.5MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/cni                             v3.23.2             a87d3f6f1b8fd       111MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/coredns                         1.8.6               a4ca41631cc7a       13.6MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64            1.4                 846921f0fe0e5       1.82MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache              1.15.12             5340ba194ec91       42.1MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver                    v3.3.0              18b286f71e299       68.2MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console                      v3.3.0              ece5e4e72a503       38.7MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager           v3.3.0              41e1783887b2e       62.6MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer                    v3.3.0              871885288439b       151MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver                  v1.24.1             e9f4b425f9192       33.8MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager         v1.24.1             b4ea7e648530d       31MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers                v3.23.2             ec95788d0f725       56.4MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy                      v1.24.1             beb86f5d8e6cd       39.5MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy                 v0.11.0             29589495df8d9       19.2MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy                 v0.8.0              ad393d6a4d1b1       20MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler                  v1.24.1             18688a72645c5       15.5MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics              v2.3.0              df2bb3f0d0cdc       11.3MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl                         v1.22.0             30c7baa8e18c0       26.6MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter                   v1.3.1              1dbe0e9319764       10.3MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/node                            v3.23.2             a3447b26d32c7       77.8MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator   v1.4.0              08ca8def2520f       19.3MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager            v1.4.0              b8b2f6b3790fe       21.7MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar     v3.2.0              4b47c43ec6ab6       14.7MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/pause                           3.7                 221177c6082a8       311kB
    registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol              v3.23.2             b21e2d7408a79       8.67MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader      v0.55.1             7c63de88523a9       4.84MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator             v0.55.1             b30c215b787f5       14.3MB
    registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller             v4.0.0              f1d8a00ae690f       19MB
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall#
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# crictl images | grep provisioner-localpv
    
    

    As expected, the required registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0 image has not been pulled.

    However, Docker can pull the upstream image from Docker Hub successfully:

    root@zhiyong-vm-dev:/home/zhiyong# docker pull openebs/provisioner-localpv:3.3.0
    3.3.0: Pulling from openebs/provisioner-localpv
    1b7ca6aea1dd: Pull complete
    dc334afc6648: Pull complete
    0781fd48f154: Pull complete
    Digest: sha256:9944beedeb5ad33b1013d62da026c3ac31f29238b378335e40ded2dcfe2c56f4
    Status: Downloaded newer image for openebs/provisioner-localpv:3.3.0
    docker.io/openebs/provisioner-localpv:3.3.0
    root@zhiyong-vm-dev:/home/zhiyong# docker images | grep local
    openebs/provisioner-localpv            3.3.0     739e82fed8b2   3 weeks ago    70.3MB
    root@zhiyong-vm-dev:/home/zhiyong#
    

    Containerd is in fact the runtime underneath Docker, so the same Docker Hub image will work here as well; the Aliyun image reference just needs to be replaced with the Docker Hub one to get the pod started.

    Pulling the image with crictl

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# crictl images | grep provisioner-localpv
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# crictl pull openebs/provisioner-localpv:3.3.0
    Image is up to date for sha256:739e82fed8b2ca886eb4e5cce90ad4b2518c084e845b56f79d03f45d81b8f6c3
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# crictl images | grep provisioner-localpv
    docker.io/openebs/provisioner-localpv                                         3.3.0               739e82fed8b2c       28.8MB
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall#
    
    

    The Docker Hub image has now been pulled successfully. However, because the pull policy is Always, the pod still fails to start:

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# kubectl get pods -owide --all-namespaces | grep open
    kube-system                    openebs-localpv-provisioner-68db4d895d-p9527       0/1     ImagePullBackOff   0          9h    10.233.107.3    zhiyong-ksp1   <none>           <none>
    

    So the pod's YAML still has to be edited to change both the image pull policy and the image source.
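    A complementary workaround, shown here only as a sketch (it assumes containerd's default k8s.io namespace, which is the one the kubelet reads images from), is to re-tag the Docker Hub image under the exact name the Deployment expects, so the image field does not have to change; the Always pull policy would still need to be relaxed, because Always contacts the registry on every container start:

    # Pull into containerd's k8s.io namespace and re-tag under the Aliyun name
    ctr -n k8s.io images pull docker.io/openebs/provisioner-localpv:3.3.0
    ctr -n k8s.io images tag \
        docker.io/openebs/provisioner-localpv:3.3.0 \
        registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0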

    Editing the Pod's YAML

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# kubectl edit pod/openebs-localpv-provisioner-68db4d895d-p9527 -o yaml -nkube-system
    

    kubectl edit opens the manifest in the default editor, so the usage is the same as ordinary vim:

    spec:
      containers:
      - env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: OPENEBS_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.serviceAccountName
        - name: OPENEBS_IO_ENABLE_ANALYTICS
          value: "true"
        - name: OPENEBS_IO_INSTALLER_TYPE
          value: openebs-operator-lite
        - name: OPENEBS_IO_HELPER_IMAGE
          value: registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
        # replace the image:
        #image: registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
        image: docker.io/openebs/provisioner-localpv:3.3.0
        imagePullPolicy: Always
    
    

    After saving:

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# kubectl get pods -owide --all-namespaces | grep open
    kube-system                    openebs-localpv-provisioner-68db4d895d-p9527       1/1     Running        0          9h    10.233.107.3    zhiyong-ksp1   <none>           <none>
    

    The pod has been brought back to life.
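    Note that editing the Pod only patches this one replica; if the Deployment ever recreates it, the new Pod will come back with the original Aliyun image and the Always policy. A more durable fix, sketched below using the container name openebs-provisioner-hostpath from the Deployment shown earlier, is to patch the Deployment itself:

    # Point the Deployment at the Docker Hub image
    kubectl -n kube-system set image deploy/openebs-localpv-provisioner \
      openebs-provisioner-hostpath=docker.io/openebs/provisioner-localpv:3.3.0
    # Switch the pull policy so a locally present image is reused instead of re-pulled
    kubectl -n kube-system patch deploy openebs-localpv-provisioner --type=json \
      -p='[{"op":"replace","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"IfNotPresent"}]'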

    Meanwhile:

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# kubectl get pods -owide --all-namespaces
    NAMESPACE                      NAME                                               READY   STATUS         RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
    kube-system                    init-pvc-23fc88d9-da65-47fc-80e6-b15976a8bcf4      0/1     ErrImagePull   0          12s   10.233.107.19   zhiyong-ksp1   <none>           <none>
    

    Clearly the init-pvc-23fc88d9-da65-47fc-80e6-b15976a8bcf4 pod is now broken as well.

    Fixing init-pvc

    journalctl -xefu kubelet
    

    The log shows:

    8月 08 19:58:16 zhiyong-ksp1 kubelet[5836]: E0808 19:58:16.180345    5836 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-init\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0\\\"\"" pod="kube-system/init-pvc-23fc88d9-da65-47fc-80e6-b15976a8bcf4" podUID=79024991-f2be-47c2-9054-b37ad59dad1b
    

    Clearly the previous fix was incomplete: the linux-utils helper image also has to be replaced.

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# crictl pull openebs/linux-utils:3.3.0
    Image is up to date for sha256:e88cfb3a763b92942e18934ca6d90d4fe80da1e6d7818801bd92124304220a1b
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# crictl images | grep utils
    docker.io/openebs/linux-utils                                                 3.3.0               e88cfb3a763b9       26.9MB
    root@zhiyong-ksp1:/home/zhiyong# kubectl edit pod/init-pvc-23fc88d9-da65-47fc-80e6-b15976a8bcf4 -o yaml -nkube-system
    

    Edit it with vim, the same way as before:

    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - zhiyong-ksp1
      containers:
      - command:
        - mkdir
        - -m
        - "0777"
        - -p
        - /data/pvc-23fc88d9-da65-47fc-80e6-b15976a8bcf4
        # replace the image:
        #image: registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
        image: docker.io/openebs/linux-utils:3.3.0
        imagePullPolicy: IfNotPresent
    

    Since the image has already been pulled and the configured policy is IfNotPresent (do not pull when the image is already present locally), just save and wait a moment.
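    These init-pvc pods are created by the localpv provisioner itself, using the helper image from the OPENEBS_IO_HELPER_IMAGE environment variable seen in the Deployment earlier. A more durable fix (a sketch, assuming that variable drives future init-pvc and cleanup pods) is to change it on the Deployment instead of patching each short-lived pod:

    kubectl -n kube-system set env deploy/openebs-localpv-provisioner \
      OPENEBS_IO_HELPER_IMAGE=docker.io/openebs/linux-utils:3.3.0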

    Fixing Prometheus

    First, look at the error by describing the pod:

    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall# kubectl describe pod prometheus-k8s-0 -n kubesphere-monitoring-system
    Name:           prometheus-k8s-0
    Namespace:      kubesphere-monitoring-system
    Priority:       0
    Node:           <none>
    Labels:         app.kubernetes.io/component=prometheus
                    app.kubernetes.io/instance=k8s
                    app.kubernetes.io/managed-by=prometheus-operator
                    app.kubernetes.io/name=prometheus
                    app.kubernetes.io/part-of=kube-prometheus
                    app.kubernetes.io/version=2.34.0
                    controller-revision-hash=prometheus-k8s-557cc865c4
                    operator.prometheus.io/name=k8s
                    operator.prometheus.io/shard=0
                    prometheus=k8s
                    statefulset.kubernetes.io/pod-name=prometheus-k8s-0
    Annotations:    kubectl.kubernetes.io/default-container: prometheus
    Status:         Pending
    IP:
    IPs:            <none>
    Controlled By:  StatefulSet/prometheus-k8s
    Init Containers:
      init-config-reloader:
        Image:      registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
        Port:       8080/TCP
        Host Port:  0/TCP
        Command:
          /bin/prometheus-config-reloader
        Args:
          --watch-interval=0
          --listen-address=:8080
          --config-file=/etc/prometheus/config/prometheus.yaml.gz
          --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
          --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
        Limits:
          cpu:     100m
          memory:  50Mi
        Requests:
          cpu:     100m
          memory:  50Mi
        Environment:
          POD_NAME:  prometheus-k8s-0 (v1:metadata.name)
          SHARD:     0
        Mounts:
          /etc/prometheus/config from config (rw)
          /etc/prometheus/config_out from config-out (rw)
          /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vcb4c (ro)
    Containers:
      prometheus:
        Image:      registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
        Port:       9090/TCP
        Host Port:  0/TCP
        Args:
          --web.console.templates=/etc/prometheus/consoles
          --web.console.libraries=/etc/prometheus/console_libraries
          --storage.tsdb.retention.time=7d
          --config.file=/etc/prometheus/config_out/prometheus.env.yaml
          --storage.tsdb.path=/prometheus
          --web.enable-lifecycle
          --query.max-concurrency=1000
          --web.route-prefix=/
          --web.config.file=/etc/prometheus/web_config/web-config.yaml
        Limits:
          cpu:     4
          memory:  16Gi
        Requests:
          cpu:        200m
          memory:     400Mi
        Liveness:     http-get http://:web/-/healthy delay=0s timeout=3s period=5s #success=1 #failure=6
        Readiness:    http-get http://:web/-/ready delay=0s timeout=3s period=5s #success=1 #failure=3
        Startup:      http-get http://:web/-/ready delay=0s timeout=3s period=15s #success=1 #failure=60
        Environment:  <none>
        Mounts:
          /etc/prometheus/certs from tls-assets (ro)
          /etc/prometheus/config_out from config-out (ro)
          /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
          /etc/prometheus/web_config/web-config.yaml from web-config (ro,path="web-config.yaml")
          /prometheus from prometheus-k8s-db (rw,path="prometheus-db")
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vcb4c (ro)
      config-reloader:
        Image:      registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
        Port:       8080/TCP
        Host Port:  0/TCP
        Command:
          /bin/prometheus-config-reloader
        Args:
          --listen-address=:8080
          --reload-url=http://localhost:9090/-/reload
          --config-file=/etc/prometheus/config/prometheus.yaml.gz
          --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
          --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
        Limits:
          cpu:     100m
          memory:  50Mi
        Requests:
          cpu:     100m
          memory:  50Mi
        Environment:
          POD_NAME:  prometheus-k8s-0 (v1:metadata.name)
          SHARD:     0
        Mounts:
          /etc/prometheus/config from config (rw)
          /etc/prometheus/config_out from config-out (rw)
          /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vcb4c (ro)
    Conditions:
      Type           Status
      PodScheduled   False
    Volumes:
      prometheus-k8s-db:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  prometheus-k8s-db-prometheus-k8s-0
        ReadOnly:   false
      config:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  prometheus-k8s
        Optional:    false
      tls-assets:
        Type:                Projected (a volume that contains injected data from multiple sources)
        SecretName:          prometheus-k8s-tls-assets-0
        SecretOptionalName:  <nil>
      config-out:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:
        SizeLimit:  <unset>
      prometheus-k8s-rulefiles-0:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      prometheus-k8s-rulefiles-0
        Optional:  false
      web-config:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  prometheus-k8s-web-config
        Optional:    false
      kube-api-access-vcb4c:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   Burstable
    Node-Selectors:              kubernetes.io/os=linux
    Tolerations:                 dedicated=monitoring:NoSchedule
                                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason            Age    From               Message
      ----     ------            ----   ----               -------
      Warning  FailedScheduling  8m17s  default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition
    root@zhiyong-ksp1:/home/zhiyong/kubesphereinstall#
    
    

    The earlier FailedScheduling was caused by the unbound PersistentVolumeClaim; now that the prerequisite pod (the openebs local-pv provisioner) is running, the volume can be provisioned and bound, and the Prometheus pods that KubeKey set up come up on their own:

    root@zhiyong-ksp1:/home/zhiyong# kubectl get pods -owide --all-namespaces | grep prometheus
    kubesphere-monitoring-system   prometheus-k8s-0                                   2/2     Running   0             10h   10.233.107.36   zhiyong-ksp1   <none>           <none>
    kubesphere-monitoring-system   prometheus-operator-66d997dccf-c968c               2/2     Running   2 (82m ago)   10h   10.233.107.23   zhiyong-ksp1   <none>           <none>
    

    The web UI shows the same:

    (screenshot: node monitoring in the KubeSphere web UI)

    At this point Prometheus is monitoring the node's load normally. With that, K8S 1.24.1 and KubeSphere 3.3.0 are fully installed and ready to use.
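    As a final sanity check (a sketch, not from the original run), any pod that is still unhealthy can be listed in one line:

    # Anything not Running or Completed still needs attention
    kubectl get pods -A | grep -vE 'Running|Completed'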

  • Original article: https://blog.csdn.net/qq_41990268/article/details/126236516