• Kubernetes (k8s) 1.24.0: installing and deploying a cluster based on containerd


    1. Deployment methods

    There are several ways to deploy Kubernetes:

    • minikube: a tool for quickly setting up a single-node Kubernetes environment
    • kubeadm: a tool for quickly bootstrapping a Kubernetes cluster
    • Binary packages: download each component's binary package from the official site and install them one by one

    Here we use kubeadm.

    2. Cluster plan

    A Kubernetes cluster can be deployed as one master with multiple workers or as multiple masters with multiple workers; here we use one master with multiple workers.

    Server name    Server IP         Role     CPU (minimum)   Memory (minimum)
    k8s-master     192.168.23.160    master   2 cores         2 GB
    k8s-node1      192.168.23.161    node     2 cores         2 GB
    k8s-node2      192.168.23.162    node     2 cores         2 GB

    3. Installing containerd

    For the mapping between containerd and Kubernetes versions, see:

    1. https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md
    2. https://github.com/kubernetes/kubernetes/blob/master/build/dependencies.yaml

    For installing containerd itself, refer to the post "Installing the container runtime containerd and the command-line tools nerdctl and crictl on CentOS 7".

    4. Installing the k8s cluster

    4.1 Base environment

    1. Disable SELinux

    Temporary method (until reboot):

    [root@k8s-master ~]# setenforce 0
    [root@k8s-master ~]# getenforce
    Permissive
    [root@k8s-master ~]#
    

    Permanent method (requires a server reboot):

    [root@k8s-master ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    

    2. Disable swap

    The swap partition provides virtual memory: once physical memory is exhausted, disk space is used as if it were RAM. Because this degrades performance, it should be disabled here. If swap cannot be disabled, the cluster configuration has to be adjusted instead; a sketch of that adjustment follows.
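    A minimal sketch of what that adjustment can look like, assuming you accept the performance trade-off of keeping swap: kubelet's failSwapOn setting is turned off, and kubeadm is told to skip its Swap preflight check. This is an alternative to disabling swap, not part of the deployment below; kubelet-swap.yaml is a hypothetical fragment that would be merged into a kubeadm configuration file.

    # Hypothetical KubeletConfiguration fragment that tolerates swap
    cat <<EOF > kubelet-swap.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failSwapOn: false
    EOF
    # kubeadm init/join would additionally need: --ignore-preflight-errors=Swap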

    Temporary method (until reboot):

    [root@k8s-master ~]# swapoff -a  
    [root@k8s-master ~]# free -m
                  total        used        free      shared  buff/cache   available
    Mem:           1819         286         632           9         900        1364
    Swap:             0           0           0
    [root@k8s-master ~]#
    

    Permanent method (requires a server reboot):

    [root@k8s-master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
    

    3. Bridge settings

    So that iptables on each server can see bridged traffic, enable the bridge filter and IP forwarding settings.

    Create the file /etc/modules-load.d/k8s.conf:

    [root@k8s-master ~]# cat <<EOF | tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF
    [root@k8s-master ~]#
    

    Create the file /etc/sysctl.d/k8s.conf:

    [root@k8s-master ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    [root@k8s-master ~]#
    

    Apply the configuration:

    [root@k8s-master ~]# sysctl --system
    

    Load the br_netfilter bridge-filter module and the overlay network module:

    [root@k8s-master ~]# modprobe br_netfilter
    [root@k8s-master ~]# modprobe overlay
    

    Verify that the bridge-filter module is loaded:

    [root@k8s-master ~]# lsmod | grep -e br_netfilter -e overlay
    br_netfilter           22256  0 
    bridge                151336  1 br_netfilter
    overlay                91659  0 
    [root@k8s-master ~]#
    
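    Optionally, now that br_netfilter is loaded, the sysctl values themselves can be double-checked (a quick verification, not strictly required):

    [root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward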

    4. Configure IPVS

    kube-proxy supports two proxy modes for Services: iptables and IPVS. IPVS generally performs better, but its kernel modules have to be loaded manually before it can be used.

    Install ipset and ipvsadm:

    [root@k8s-master ~]# yum install ipset ipvsadm
    

    Create the script file /etc/sysconfig/modules/ipvs.modules with the following content:

    [root@k8s-master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
     
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF
    [root@k8s-master ~]#
    

    Make the script executable, then run it:

    [root@k8s-master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
    [root@k8s-master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
    [root@k8s-master ~]#
    

    Verify that the modules are loaded:

    [root@k8s-master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    ip_vs_sh               12688  0 
    ip_vs_wrr              12697  0 
    ip_vs_rr               12600  0 
    ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack_ipv4      15053  2 
    nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
    nf_conntrack          139264  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
    libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
    [root@k8s-master ~]#
    

    4.2 Install kubelet, kubeadm, and kubectl

    Add the yum repository:

    [root@k8s-master ~]# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    [root@k8s-master ~]#
    

    Install the packages, then enable and start kubelet:

    [root@k8s-master ~]# yum install -y --setopt=obsoletes=0 kubelet-1.24.0 kubeadm-1.24.0 kubectl-1.24.0
    [root@k8s-master ~]# systemctl enable kubelet --now 
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    [root@k8s-master ~]#
    

    Notes:

    • obsoletes=1 means that updating an rpm package also removes the old package; 0 means the old package is kept
    • Once kubelet is started, its detailed logs can be viewed with journalctl -f -u kubelet
    • kubelet uses systemd as its cgroup driver by default, so containerd should use the systemd cgroup driver as well (see the sketch after this list)
    • At this point kubelet restarts every few seconds: it is stuck in a crash loop waiting for instructions from kubeadm
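    A minimal sketch of aligning containerd with kubelet's systemd cgroup driver, assuming containerd's config file /etc/containerd/config.toml already contains a "SystemdCgroup = false" line in the runc options block (the containerd installation referenced in section 3 may already have set this):

    [root@k8s-master ~]# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    [root@k8s-master ~]# systemctl restart containerd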

    4.3 Download the images needed on each machine

    List the image versions the cluster needs:

    [root@k8s-master ~]# kubeadm config images list
    k8s.gcr.io/kube-apiserver:v1.24.0
    k8s.gcr.io/kube-controller-manager:v1.24.0
    k8s.gcr.io/kube-scheduler:v1.24.0
    k8s.gcr.io/kube-proxy:v1.24.0
    k8s.gcr.io/pause:3.7
    k8s.gcr.io/etcd:3.5.3-0
    k8s.gcr.io/coredns/coredns:v1.8.6
    [root@k8s-master ~]#
    

    Create the image download script images.sh, then run it. The worker nodes only need kube-proxy and pause:

    [root@k8s-master ~]# tee ./images.sh <<'EOF'
    #!/bin/bash
    
    images=(
    kube-apiserver:v1.24.0
    kube-controller-manager:v1.24.0
    kube-scheduler:v1.24.0
    kube-proxy:v1.24.0
    pause:3.7
    etcd:3.5.3-0
    coredns:v1.8.6
    )
    for imageName in ${images[@]} ; do
    crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    done
    EOF
    [root@k8s-master ~]#
    [root@k8s-master ~]# chmod +x ./images.sh && ./images.sh
    

    4.4 Initialize the control plane (run on the master node only)

    [root@k8s-master ~]# kubeadm init \
    --apiserver-advertise-address=192.168.23.160 \
    --control-plane-endpoint=k8s-master \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --kubernetes-version v1.24.0 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
    [init] Using Kubernetes version: v1.24.0
    [preflight] Running pre-flight checks
    ......output omitted......
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join k8s-master:6443 --token yzicfs.d50rrfxpd3a0wokb \
    	--discovery-token-ca-cert-hash sha256:8548affb49155f0ba53a0ac4eb9500a060ee3b4076ad8b66bd973bab92d78103 \
    	--control-plane 
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join k8s-master:6443 --token yzicfs.d50rrfxpd3a0wokb \
    	--discovery-token-ca-cert-hash sha256:8548affb49155f0ba53a0ac4eb9500a060ee3b4076ad8b66bd973bab92d78103 
    [root@k8s-master ~]#
    

    Notes:

    • Detailed logs can be printed by adding a verbosity flag such as --v=6 or --v=10
    • None of the network ranges passed as parameters may overlap. For example, 192.168.2.x and 192.168.3.x would both fall inside a 192.168.0.0/16 range and therefore conflict
      --pod-network-cidr: the IP range of the pod network; the value above can be used as-is
      --service-cidr: the IP range for Service virtual IPs; the default is 10.96.0.0/12, and the value above can be used as-is
      --apiserver-advertise-address: the IP address the API server advertises and listens on

    An alternative way to run kubeadm init:

    # Print the default configuration
    [root@k8s-master ~]# kubeadm config print init-defaults --component-configs KubeletConfiguration
    # Edit the defaults into kubeadm-config.yaml (serviceSubnet and podSubnet sit at the same level under networking), then pull the images
    [root@k8s-master ~]# kubeadm config images pull --config kubeadm-config.yaml
    # Run the initialization
    [root@k8s-master ~]# kubeadm init --config kubeadm-config.yaml
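    For reference, a minimal kubeadm-config.yaml mirroring the flags used earlier might look like the sketch below (field names follow the kubeadm.k8s.io/v1beta3 API; the values are the same ones passed on the command line above):

    [root@k8s-master ~]# tee ./kubeadm-config.yaml <<'EOF'
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.23.160
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.24.0
    controlPlaneEndpoint: k8s-master
    imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
    networking:
      serviceSubnet: 10.96.0.0/16
      podSubnet: 10.244.0.0/16
    EOF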
    

    If kubeadm init fails, roll back with the following commands:

    [root@k8s-master ~]# kubeadm reset -f
    [root@k8s-master ~]# 
    [root@k8s-master ~]# rm -rf /etc/kubernetes
    [root@k8s-master ~]# rm -rf /var/lib/etcd/
    [root@k8s-master ~]# rm -rf $HOME/.kube
    

    4.5 Set up .kube/config (run on the master only)

    [root@k8s-master ~]# mkdir -p $HOME/.kube
    [root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    kubectl reads this configuration file.

    4.6 Install the Calico network plugin (run on the master only)

    See the Calico official site.

    For choosing a Calico version, see:

    1. https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md

    The plugin is deployed as a DaemonSet controller, so it runs on every node.

    1. Download the calico.yaml file
    [root@k8s-master ~]# curl https://docs.projectcalico.org/archive/v3.19/manifests/calico.yaml -O
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  185k  100  185k    0     0  81914      0  0:00:02  0:00:02 --:--:-- 81966
    [root@k8s-master ~]#
    
    2. Change the following section
                # - name: CALICO_IPV4POOL_CIDR
                #   value: "192.168.0.0/16"
    

    to the following, where the IP range is the pod-network-cidr passed to kubeadm init:

                - name: CALICO_IPV4POOL_CIDR
                  value: "10.244.0.0/16"
    
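    If you prefer not to edit the file by hand, a sed sketch like the following applies the same change (it assumes the two commented lines appear in calico.yaml exactly as shown above):

    [root@k8s-master ~]# sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|; s|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml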
    3. Check which images are needed
    [root@k8s-master ~]# cat calico.yaml | grep image
              image: docker.io/calico/cni:v3.19.4
              image: docker.io/calico/cni:v3.19.4
              image: docker.io/calico/pod2daemon-flexvol:v3.19.4
              image: docker.io/calico/node:v3.19.4
              image: docker.io/calico/kube-controllers:v3.19.4
    [root@k8s-master ~]#
    
    4. Create an image download script, then run it
    [root@k8s-master ~]# tee ./calicoImages.sh <<'EOF'
    #!/bin/bash
    
    images=(
    docker.io/calico/cni:v3.19.4
    docker.io/calico/pod2daemon-flexvol:v3.19.4
    docker.io/calico/node:v3.19.4
    docker.io/calico/kube-controllers:v3.19.4
    )
    for imageName in ${images[@]} ; do
    crictl pull $imageName
    done
    EOF
    [root@k8s-master ~]#
    [root@k8s-master ~]# chmod +x ./calicoImages.sh && ./calicoImages.sh
    

    This downloads four images: calico/node, calico/pod2daemon-flexvol, calico/cni, and calico/kube-controllers.

    5. Deploy Calico
    [root@k8s-master ~]# kubectl apply -f calico.yaml
    
    6. Check the state on the master
    [root@k8s-master ~]# 
    [root@k8s-master ~]# kubectl get pods -A
    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
    kube-system   calico-kube-controllers-57d95cb479-5zppz   0/1     Pending   0          14s
    kube-system   calico-node-v6zcv                          1/1     Running   0          14s
    kube-system   coredns-7f74c56694-snzmv                   1/1     Running   0          71s
    kube-system   coredns-7f74c56694-whh84                   1/1     Running   0          71s
    kube-system   etcd-k8s-master                            1/1     Running   0          84s
    kube-system   kube-apiserver-k8s-master                  1/1     Running   0          84s
    kube-system   kube-controller-manager-k8s-master         1/1     Running   0          83s
    kube-system   kube-proxy-f9w7h                           1/1     Running   0          71s
    kube-system   kube-scheduler-k8s-master                  1/1     Running   0          84s
    [root@k8s-master ~]# 
    [root@k8s-master ~]# kubectl get nodes
    NAME         STATUS   ROLES           AGE    VERSION
    k8s-master   Ready    control-plane   105s   v1.24.0
    [root@k8s-master ~]#
    

    4.7 Join the worker nodes (run on the node machines only)

    The join command comes from the output of the successful kubeadm init above:

    [root@k8s-node1 ~]# kubeadm join k8s-master:6443 --token yzicfs.d50rrfxpd3a0wokb \
    	--discovery-token-ca-cert-hash sha256:8548affb49155f0ba53a0ac4eb9500a060ee3b4076ad8b66bd973bab92d78103
    [preflight] Running pre-flight checks
    ......output omitted......
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    
    [root@k8s-node1 ~]#
    

    The token is valid for 24 hours; a new join command can be generated on the master with:

    [root@k8s-master ~]# kubeadm token create --print-join-command
    

    Now check the state on the master again:

    [root@k8s-master ~]# kubectl get pods -A
    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
    kube-system   calico-kube-controllers-57d95cb479-5zppz   1/1     Running   0          2m35s
    kube-system   calico-node-2m8xb                          1/1     Running   0          37s
    kube-system   calico-node-jnll4                          1/1     Running   0          35s
    kube-system   calico-node-v6zcv                          1/1     Running   0          2m35s
    kube-system   coredns-7f74c56694-snzmv                   1/1     Running   0          3m32s
    kube-system   coredns-7f74c56694-whh84                   1/1     Running   0          3m32s
    kube-system   etcd-k8s-master                            1/1     Running   0          3m45s
    kube-system   kube-apiserver-k8s-master                  1/1     Running   0          3m45s
    kube-system   kube-controller-manager-k8s-master         1/1     Running   0          3m44s
    kube-system   kube-proxy-9gc7d                           1/1     Running   0          35s
    kube-system   kube-proxy-f9w7h                           1/1     Running   0          3m32s
    kube-system   kube-proxy-s8rwk                           1/1     Running   0          37s
    kube-system   kube-scheduler-k8s-master                  1/1     Running   0          3m45s
    [root@k8s-master ~]#
    [root@k8s-master ~]# kubectl get nodes
    NAME         STATUS   ROLES           AGE     VERSION
    k8s-master   Ready    control-plane   4m11s   v1.24.0
    k8s-node1    Ready    <none>          61s     v1.24.0
    k8s-node2    Ready    <none>          59s     v1.24.0
    [root@k8s-master ~]# 
    

    4.7.1 Allowing kubectl to run on the worker nodes

    On the master, copy $HOME/.kube to the $HOME directory of each node:

    [root@k8s-master ~]# 
    [root@k8s-master ~]# scp -r $HOME/.kube k8s-node1:$HOME
    [root@k8s-master ~]# 
    

    5. Deploy the dashboard (run on the master only)

    The official Kubernetes web UI: https://github.com/kubernetes/dashboard

    5.1 Deployment

    For the mapping between dashboard and Kubernetes versions, see: https://github.com/kubernetes/dashboard/blob/v2.5.1/go.mod

    [root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
    namespace/kubernetes-dashboard created
    serviceaccount/kubernetes-dashboard created
    service/kubernetes-dashboard created
    secret/kubernetes-dashboard-certs created
    secret/kubernetes-dashboard-csrf created
    secret/kubernetes-dashboard-key-holder created
    configmap/kubernetes-dashboard-settings created
    role.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    deployment.apps/kubernetes-dashboard created
    service/dashboard-metrics-scraper created
    deployment.apps/dashboard-metrics-scraper created
    [root@k8s-master ~]#
    

    This pulls two images: kubernetesui/dashboard:v2.5.1 and kubernetesui/metrics-scraper:v1.0.7.

    On the master, watch the status with watch -n 3 kubectl get pods -A.

    5.2 Set the access port

    Change type: ClusterIP to type: NodePort:

    [root@k8s-master ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
    service/kubernetes-dashboard edited
    [root@k8s-master ~]#
    
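    If you prefer a non-interactive change instead of kubectl edit, the same result can be achieved with kubectl patch (a sketch):

    [root@k8s-master ~]# kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'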

    Check the assigned port:

    [root@k8s-master ~]# kubectl get svc -A | grep kubernetes-dashboard
    kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.96.44.79    <none>        8000/TCP                 3m39s
    kubernetes-dashboard   kubernetes-dashboard        NodePort    10.96.27.108   <none>        443:30256/TCP            3m39s
    [root@k8s-master ~]#
    

    Open the dashboard page at https://k8s-node1:30256, as shown below.
    [dashboard login page screenshot]
    A login token is required here; obtain it with the following steps.

    5.3 Create an access account

    Create the resource file, then apply it:

    [root@k8s-master ~]# tee ./dash.yaml <<'EOF'
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard
    EOF
    [root@k8s-master ~]# 
    [root@k8s-master ~]# kubectl apply -f dash.yaml
    serviceaccount/admin-user created
    clusterrolebinding.rbac.authorization.k8s.io/admin-user created
    [root@k8s-master ~]#
    

    5.4 Obtain the access token

    [root@k8s-master ~]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
    Error executing template: template: output:1:16: executing "output" at <base64decode>: invalid value; expected string. Printing more information for debugging the template:
    	template was:
    		{{.data.token | base64decode}}
    	raw data was:
    		......large dump of the namespace's Secret objects omitted......
    	object given to template engine was:
    		......omitted......
    
    error: error executing template "{{.data.token | base64decode}}": template: output:1:16: executing "output" at <base64decode>: invalid value; expected string
    [root@k8s-master ~]# 
    

    The command fails here. This looks like a compatibility issue between these dashboard instructions and this Kubernetes version: starting with Kubernetes 1.24, a ServiceAccount no longer gets a long-lived token Secret created for it automatically, so the lookup above finds nothing to decode. dashboard v2.5.1 works fine this way on Kubernetes 1.23.6.

    Normally the command returns a long token string of the form eyJhbGci…gMZ0RqeQ.

    Copy the token into the login page to sign in. On 1.24, a short-lived token can be requested instead, as sketched below.
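    A minimal sketch, assuming the admin-user ServiceAccount created above: on Kubernetes 1.24, kubectl can mint a temporary token directly through the TokenRequest API.

    [root@k8s-master ~]# kubectl -n kubernetes-dashboard create token admin-user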

    6. Install nginx as a test

    Deploy it:

    [root@k8s-master ~]# kubectl create deployment nginx --image=nginx
    deployment.apps/nginx created
    [root@k8s-master ~]#
    

    On the master, watch the status with watch -n 3 kubectl get pods -A.

    Expose the port:

    [root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
    service/nginx exposed
    [root@k8s-master ~]# 
    

    Check the port:

    [root@k8s-master ~]# kubectl get pods,svc
    NAME                        READY   STATUS    RESTARTS   AGE
    pod/nginx-8f458dc5b-d4wnm   1/1     Running   0          87s
    
    NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        18m
    service/nginx        NodePort    10.96.155.85   <none>        80:32319/TCP   8s
    [root@k8s-master ~]# 
    

    Open the nginx page at http://k8s-node1:32319.

    [nginx welcome page screenshot]
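    The same check can be made from the command line (a quick sketch; the port number is whatever kubectl get svc reported for the nginx Service):

    [root@k8s-master ~]# curl -I http://k8s-node1:32319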

    7. Other optional components

    7.1 Installing metrics-server

    For an introduction to kubernetes-sigs/metrics-server and its installation, refer to the metrics-server section of the companion blog post.

    7.2 Enabling IPVS

    For enabling IPVS, refer to the IPVS section of the companion blog post; a rough sketch of what it involves follows.
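    Since the IPVS kernel modules were already loaded in section 4.1, switching kube-proxy to IPVS mode typically amounts to editing its ConfigMap and restarting its DaemonSet (a sketch, not the referenced post's exact procedure):

    [root@k8s-master ~]# kubectl -n kube-system edit configmap kube-proxy     # set mode: "ipvs"
    [root@k8s-master ~]# kubectl -n kube-system rollout restart daemonset kube-proxy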

    7.3 Installing ingress-nginx

    For installing the ingress-nginx Controller, refer to the ingress-nginx Controller section of the companion blog post.

    7.4 Setting up an NFS server

    For setting up an NFS server, refer to the NFS server section of the companion blog post.

  • Original post: https://blog.csdn.net/yy8623977/article/details/124707433