Application deployment has gone through three main eras:
Traditional deployment: in the early days of the internet, applications were deployed directly on physical servers.
Pros: simple; no other technology involved.
Cons: there is no way to define resource boundaries for an application, compute resources are hard to allocate sensibly, and programs easily interfere with one another.
Virtualized deployment: multiple virtual machines run on a single physical server, each VM an isolated environment.
Pros: programs cannot affect each other's environments, which provides a degree of security.
Cons: every VM carries its own operating system, wasting part of the resources.
Containerized deployment: similar to virtualization, but containers share the host operating system.
Pros:
Each container gets its own filesystem, CPU share, memory, process space, and so on.
Everything the application needs to run is packaged with the container and decoupled from the underlying infrastructure.
Containerized applications can be deployed across cloud providers and across Linux distributions.
![Deployment eras: traditional, virtualized, containerized](https://1000bd.com/contentImg/2023/11/04/044949203.png)
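To make "resource boundaries" concrete, Docker can cap each container's CPU and memory at launch. A minimal sketch, assuming Docker is installed; the image name and limit values here are arbitrary examples:
# cap the container at half a CPU core and 256 MiB of memory
$ docker run -d --name web --cpus 0.5 --memory 256m nginx
# inspect live usage against those limits
$ docker stats --no-stream web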
Containerized deployment brings a lot of convenience, but it also raises new problems, for example: when one container fails, how is a replacement started immediately? When traffic grows, how is the number of containers scaled out?
These container-management problems are collectively known as container orchestration, and a number of orchestration tools emerged to solve them:

![Container orchestration tools](https://1000bd.com/contentImg/2023/11/04/044949215.png)
Kubernetes is a leading distributed-architecture solution built on container technology. It is the open-source descendant of Borg, the system Google kept secret for over a decade; the first version was released in September 2014 and the first stable release in July 2015.
In essence, Kubernetes is a cluster of servers: it runs specific programs on every node in the cluster to manage the containers on that node. Its goal is to automate resource management, and its main features include self-healing, elastic scaling, service discovery, load balancing, version rollback, and storage orchestration.
A Kubernetes cluster consists of control-plane nodes (master) and worker nodes (node), each running a different set of components.
master: the cluster's control plane, responsible for cluster decisions (management)
ApiServer: the single entry point for resource operations; receives user commands and provides authentication, authorization, API registration, and discovery
Scheduler: responsible for cluster resource scheduling; places Pods on the appropriate node according to the configured scheduling policy
ControllerManager: responsible for maintaining cluster state, e.g. rolling out deployments, failure detection, auto-scaling, rolling updates
Etcd: responsible for storing the information of every resource object in the cluster
node: the cluster's data plane, responsible for providing the runtime environment for containers (doing the work)
Kubelet: responsible for the container lifecycle, i.e. creating, updating, and destroying containers by driving docker
KubeProxy: responsible for service discovery and load balancing inside the cluster
Docker: responsible for all container operations on the node
![Kubernetes cluster architecture](https://1000bd.com/contentImg/2023/11/04/044949343.png)
The following walk-through of deploying an nginx service shows how the Kubernetes components interact:
1. First, note that once the Kubernetes environment starts, both master and node register their own information in the etcd database.
2. A request to install an nginx service is first sent to the apiServer component on the master node.
3. apiServer calls the scheduler component to decide which node the service should be installed on. At this point the scheduler reads the node information from etcd, picks a node according to its algorithm, and reports the result back to apiServer.
4. apiServer calls controller-manager to have the chosen node install the nginx service.
5. When kubelet receives the instruction, it tells docker, and docker starts an nginx pod. A pod is the smallest unit of operation in Kubernetes; containers must run inside pods.
6. At this point the nginx service is running. To access nginx, kube-proxy creates a proxy to the pod, and external users can then reach the nginx service running in the cluster.
Master: the cluster control node; every cluster needs at least one master to manage it
Node: a workload node; the master assigns containers to these nodes, and docker on each node runs them
Pod: the smallest unit Kubernetes controls; containers always run inside pods, and one pod can hold one or more containers
Controller: the mechanism for managing pods, e.g. starting pods, stopping pods, scaling the number of pods
Service: the unified entry point for a group of pods; behind it, a Service maintains multiple pods of the same kind
Label: a tag used to classify pods; pods of the same kind carry the same labels
NameSpace: a namespace, used to isolate the environments pods run in
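To make these concepts concrete, here is a minimal sketch; all names (demo, web, the label app=web) are illustrative, not part of the deployment below. It creates an isolated namespace, starts a labelled pod in it, and exposes it through a service that selects on that label:
# a namespace isolates the pod's environment
$ kubectl create namespace demo
# a pod carrying a classifying label
$ kubectl run web --image=nginx --labels="app=web" -n demo
# a service becomes the unified entry point for pods with that label
$ kubectl expose pod web --port=80 -n demo
$ kubectl get pods,svc -n demo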
kubeadm is the official community tool for quickly deploying a Kubernetes cluster.
It can stand up a cluster with just two commands:
# Create a master node
$ kubeadm init
# Join a node into the cluster
$ kubeadm join
Before starting, the machines for the Kubernetes cluster must meet the following requirements:
- At least 3 machines running CentOS 7 or later
![Cluster machine requirements](https://1000bd.com/contentImg/2023/11/04/044949415.png)
Environment
| 角色 | IP |
|---|---|
| master | 192.168.229.148 |
| node1 | 192.168.229.150 |
| node2 | 192.168.229.151 |
Configure the yum repository:
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@master yum.repos.d]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
[root@master yum.repos.d]# dnf clean all
0 files removed
[root@master yum.repos.d]# dnf makecache
Disable the firewall:
[root@master ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Disable SELinux:
[root@master ~]# sed -i '/^SELINUX=/s/enforcing/disabled/g' /etc/selinux/config
Disable swap:
[root@master ~]# tail -1 /etc/fstab
#/dev/mapper/cs-swap none swap defaults 0 0
Comment out the swap entry in /etc/fstab.
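The fstab edit only prevents swap from mounting at the next boot. To turn swap off immediately without waiting for the reboot, a standard companion step (not shown in the original transcript) is:
# disable all active swap devices right away
$ swapoff -a
# confirm: the Swap line should read all zeros
$ free -h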
Add hosts entries on the master:
# cat >> /etc/hosts << EOF
192.168.229.148 master.example.com
192.168.229.150 node1.example.com
192.168.229.151 node2.example.com
EOF
Pass bridged IPv4 traffic to the iptables chains:
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system    // apply the settings
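Note that these two net.bridge sysctls only exist while the br_netfilter kernel module is loaded, so it is worth loading it explicitly and persisting it across reboots. A short sketch; the file name k8s.conf is arbitrary:
# load the module now, and on every boot
$ modprobe br_netfilter
$ echo br_netfilter > /etc/modules-load.d/k8s.conf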
Time synchronization:
[root@master ~]# dnf -y install chrony
[root@master ~]# vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool time1.aliyun.com iburst    # use the Aliyun NTP server instead of the default pool
[root@master ~]# systemctl enable --now chronyd
[root@master ~]# systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-09-06 12:47:26 CST; 14s ago
Passwordless SSH authentication:
[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:XuefY6Y+melKKIpHpxGDPEEgbxQvMiZBpA0sZQUhQEk root@master
The key's randomart image is:
+---[RSA 3072]----+
|#EX=. |
|+@.. |
|*o=o. |
|o++.o |
| . o S . . |
| o .. o o |
| . +. o . .+ |
| .o. . . =.+. |
| ... .++=o. |
+----[SHA256]-----+
[root@master ~]# ssh-copy-id root@192.168.229.148
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.229.148 (192.168.229.148)' can't be established.
ECDSA key fingerprint is SHA256:n2ckGGr820b4Fez6NUHXuOApoQ3oCuf3POTLfTxOsS4.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.229.148's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@192.168.229.148'"
and check to make sure that only the key(s) you wanted were added.
[root@master ~]# ssh-copy-id root@192.168.229.150
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.229.150 (192.168.229.150)' can't be established.
ECDSA key fingerprint is SHA256:BSCsrBDXmOy0vQCzkxthvFwA+8EIkoMVyeVV45QrFdM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.229.150's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@192.168.229.150'"
and check to make sure that only the key(s) you wanted were added.
[root@master ~]# ssh-copy-id root@192.168.229.151
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.229.151 (192.168.229.151)' can't be established.
ECDSA key fingerprint is SHA256:8hpIIROKg7YiNUKNVhMqXp6yhUetFbsglx+JETkaZXo.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.229.151's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@192.168.229.151'"
and check to make sure that only the key(s) you wanted were added.
[root@master ~]# reboot    // reboot so the SELinux and swap changes take effect
[root@master ~]# getenforce
Disabled
[root@master ~]# free
total used free shared buff/cache available
Mem: 1828244 208588 1390292 8864 229364 1455504
Swap: 0 0 0
[root@master ~]#
Configure the yum repository:
[root@node1 ~]# cd /etc/yum.repos.d/
[root@node1 yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@node1 yum.repos.d]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
[root@node1 yum.repos.d]# dnf clean all
0 files removed
[root@node1 yum.repos.d]# dnf makecache
Disable the firewall:
[root@node1 ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Disable SELinux:
[root@node1 ~]# sed -i '/^SELINUX=/s/enforcing/disabled/g' /etc/selinux/config
Disable swap:
[root@node1 ~]# tail -1 /etc/fstab
#/dev/mapper/cs-swap none swap defaults 0 0
Comment out the swap entry in /etc/fstab.
Add hosts entries on node1:
# cat >> /etc/hosts << EOF
192.168.229.148 master.example.com
192.168.229.150 node1.example.com
192.168.229.151 node2.example.com
EOF
Pass bridged IPv4 traffic to the iptables chains:
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system    // apply the settings
Time synchronization:
[root@node1 ~]# dnf -y install chrony
[root@node1 ~]# vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool time1.aliyun.com iburst    # use the Aliyun NTP server
[root@node1 ~]# systemctl enable --now chronyd
[root@node1 ~]# systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-09-06 12:47:26 CST; 14s ago
Passwordless SSH authentication:
[root@node1 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:skkSY0/i4CQptbeX91zxayC7n5bZuir7yJRcdoHaa6Y root@node1
The key's randomart image is:
+---[RSA 3072]----+
| . |
| o . . |
|+ + * . . o |
|.+ = B . o + |
| . + * S = + . |
| + * * * . . |
| o + B +o |
| o.= .+o. |
| E+++=o |
+----[SHA256]-----+
[root@node1 ~]# ssh-copy-id 192.168.229.148
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.229.148 (192.168.229.148)' can't be established.
ECDSA key fingerprint is SHA256:n2ckGGr820b4Fez6NUHXuOApoQ3oCuf3POTLfTxOsS4.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.229.148's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.229.148'"
and check to make sure that only the key(s) you wanted were added.
[root@node1 ~]# ssh-copy-id 192.168.229.150
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.229.150 (192.168.229.150)' can't be established.
ECDSA key fingerprint is SHA256:BSCsrBDXmOy0vQCzkxthvFwA+8EIkoMVyeVV45QrFdM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.229.150's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.229.150'"
and check to make sure that only the key(s) you wanted were added.
[root@node1 ~]# ssh-copy-id 192.168.229.151
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.229.151 (192.168.229.151)' can't be established.
ECDSA key fingerprint is SHA256:8hpIIROKg7YiNUKNVhMqXp6yhUetFbsglx+JETkaZXo.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.229.151's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.229.151'"
and check to make sure that only the key(s) you wanted were added.
[root@node1 ~]# reboot    // reboot so the SELinux and swap changes take effect
Configure the yum repository:
[root@node2 ~]# cd /etc/yum.repos.d/
[root@node2 yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@node2 yum.repos.d]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
[root@node2 yum.repos.d]# dnf clean all
0 files removed
[root@node2 yum.repos.d]# dnf makecache
Disable the firewall:
[root@node2 ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Disable SELinux:
[root@node2 ~]# sed -i '/^SELINUX=/s/enforcing/disabled/g' /etc/selinux/config
Disable swap:
[root@node2 ~]# tail -1 /etc/fstab
#/dev/mapper/cs-swap none swap defaults 0 0
Comment out the swap entry in /etc/fstab.
Add hosts entries on node2:
# cat >> /etc/hosts << EOF
192.168.229.148 master.example.com
192.168.229.150 node1.example.com
192.168.229.151 node2.example.com
EOF
Pass bridged IPv4 traffic to the iptables chains:
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system    // apply the settings
Time synchronization:
[root@node2 ~]# dnf -y install chrony
[root@node2 ~]# vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool time1.aliyun.com iburst    # use the Aliyun NTP server
[root@node2 ~]# systemctl enable --now chronyd
[root@node2 ~]# systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-09-06 12:47:26 CST; 14s ago
Passwordless SSH authentication:
[root@node2 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:nvT3UDOfcn7TRXfYBmgAB1TEgqv45EDjLSHSyp8F09s root@node2
The key's randomart image is:
+---[RSA 3072]----+
| o+*=. . |
| . ... o . |
| . . . . . + |
|o =o .. . *|
|o= =o.o S +oo|
|..= +o E o . +o|
| .*o o . o. o+|
| oo . o+.o|
| ..o|
+----[SHA256]-----+
[root@node2 ~]#
[root@node2 ~]# ssh-copy-id 192.168.229.148
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.229.148 (192.168.229.148)' can't be established.
ECDSA key fingerprint is SHA256:n2ckGGr820b4Fez6NUHXuOApoQ3oCuf3POTLfTxOsS4.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.229.148's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.229.148'"
and check to make sure that only the key(s) you wanted were added.
[root@node2 ~]# ssh-copy-id 192.168.229.150
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.229.150 (192.168.229.150)' can't be established.
ECDSA key fingerprint is SHA256:BSCsrBDXmOy0vQCzkxthvFwA+8EIkoMVyeVV45QrFdM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.229.150's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.229.150'"
and check to make sure that only the key(s) you wanted were added.
[root@node2 ~]# ssh-copy-id 192.168.229.151
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.229.151 (192.168.229.151)' can't be established.
ECDSA key fingerprint is SHA256:8hpIIROKg7YiNUKNVhMqXp6yhUetFbsglx+JETkaZXo.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.229.151's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.229.151'"
and check to make sure that only the key(s) you wanted were added.
[root@node2 ~]# reboot    // reboot so the SELinux and swap changes take effect
Since Kubernetes 1.24 removed the dockershim, Docker itself is no longer a supported CRI; installing Docker CE also pulls in containerd, which will serve as the container runtime here. Install Docker first.
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@master yum.repos.d]# ls
CentOS-Base.repo docker-ce.repo
[root@master ~]# dnf list all|grep docker
containerd.io.x86_64 1.6.8-3.1.el8 docker-ce-stable
docker-ce.x86_64 3:20.10.17-3.el8 docker-ce-stable // all three hosts should run the same version
docker-ce-cli.x86_64 1:20.10.17-3.el8 docker-ce-stable
Install Docker:
[root@master ~]# dnf -y install docker-ce
Start Docker and enable it at boot:
[root@master ~]# which docker
/usr/bin/docker
[root@master ~]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@master ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2022-09-06 13:16:00 CST; 5s ago
Check the version:
[root@master ~]# docker -v
Docker version 20.10.17, build 100c701
Configure the registry mirror and cgroup driver: write the following to /etc/docker/daemon.json, then restart Docker for it to take effect:
[root@master ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://1izcbhll.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
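After writing daemon.json, restart Docker and confirm the settings were picked up. A quick check (note the mirror URL above is tied to a personal Aliyun account; yours will differ):
# restart Docker and verify the cgroup driver
$ systemctl restart docker
$ docker info | grep -i 'cgroup driver'    # should print: Cgroup Driver: systemd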
[root@node1 ~]# cd /etc/yum.repos.d/
[root@node1 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@node1 yum.repos.d]# ls
CentOS-Base.repo docker-ce.repo
[root@node1 ~]# dnf list all|grep docker
containerd.io.x86_64 1.6.8-3.1.el8 docker-ce-stable
docker-ce.x86_64 3:20.10.17-3.el8 docker-ce-stable // all three hosts should run the same version
docker-ce-cli.x86_64 1:20.10.17-3.el8 docker-ce-stable
Install Docker:
[root@node1 ~]# dnf -y install docker-ce
Start Docker and enable it at boot:
[root@node1 ~]# which docker
/usr/bin/docker
[root@node1 ~]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@node1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2022-09-06 13:16:00 CST; 5s ago
Check the version:
[root@node1 ~]# docker -v
Docker version 20.10.17, build 100c701
Configure the registry mirror (same /etc/docker/daemon.json as on the master; restart Docker afterwards):
[root@node1 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://1izcbhll.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
[root@node2 ~]# cd /etc/yum.repos.d/
[root@node2 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@node2 yum.repos.d]# ls
CentOS-Base.repo docker-ce.repo
[root@node2 ~]# dnf list all|grep docker
containerd.io.x86_64 1.6.8-3.1.el8 docker-ce-stable
docker-ce.x86_64 3:20.10.17-3.el8 docker-ce-stable // all three hosts should run the same version
docker-ce-cli.x86_64 1:20.10.17-3.el8 docker-ce-stable
Install Docker:
[root@node2 ~]# dnf -y install docker-ce
Start Docker and enable it at boot:
[root@node2 ~]# which docker
/usr/bin/docker
[root@node2 ~]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@node2 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2022-09-06 13:16:00 CST; 5s ago
Check the version:
[root@node2 ~]# docker -v
Docker version 20.10.17, build 100c701
Configure the registry mirror (same /etc/docker/daemon.json as on the master; restart Docker afterwards):
[root@node2 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://1izcbhll.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
// Run on the master
# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@master ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
// Run on node1
# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@node1 ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
// Run on node2
# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@node2 ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Versions update frequently, so check what the repo provides (1.25.0 here) and deploy that specific version:
[root@master ~]# dnf list all|grep kubelet
kubelet.x86_64 1.25.0-0 kubernetes
[root@master ~]# dnf list all|grep kubeadm
kubeadm.x86_64 1.25.0-0 kubernetes
[root@master ~]# dnf list all|grep kubectl
kubectl.x86_64 1.25.0-0 kubernetes
[root@master ~]# dnf install -y kubelet kubeadm kubectl
[root@master ~]# systemctl enable kubelet
[root@node1 ~]# dnf list all|grep kubelet
kubelet.x86_64 1.25.0-0 kubernetes
[root@node1 ~]# dnf list all|grep kubeadm
kubeadm.x86_64 1.25.0-0 kubernetes
[root@node1 ~]# dnf list all|grep kubectl
kubectl.x86_64 1.25.0-0 kubernetes
[root@node1 ~]# dnf install -y kubelet kubeadm kubectl
[root@node1 ~]# systemctl enable kubelet
[root@node2 ~]# dnf list all|grep kubelet
kubelet.x86_64 1.25.0-0 kubernetes
[root@node2 ~]# dnf list all|grep kubeadm
kubeadm.x86_64 1.25.0-0 kubernetes
[root@node2 ~]# dnf list all|grep kubectl
kubectl.x86_64 1.25.0-0 kubernetes
[root@node2 ~]# dnf install -y kubelet kubeadm kubectl
[root@node2 ~]# systemctl enable kubelet
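The dnf install above takes whatever the repo currently ships. To pin exactly the version shown in the listings (1.25.0-0), dnf accepts a name-version spec; a sketch of the same install, pinned:
# install a specific version on every host
$ dnf install -y kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0
$ systemctl enable kubelet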
Run the following on 192.168.229.148 (the master).
[root@master ~]# cd /etc/containerd/
[root@master containerd]# ls
config.toml
[root@master containerd]# mv config.toml /opt // back up the original config
[root@master containerd]# containerd config default > config.toml
[root@master ~]# sed -i 's|k8s.gcr.io|registry.cn-hangzhou.aliyuncs.com/google_containers|g' /etc/containerd/config.toml
[root@master ~]# systemctl restart containerd // restart for the change to take effect
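Note the sed above uses | as the delimiter because the replacement contains slashes. To confirm the rewrite took effect, check the pause image reference, which is the line kubeadm cares about:
# the sandbox_image should now point at the Aliyun mirror
$ grep sandbox_image /etc/containerd/config.toml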
// Initialize the control plane
[root@master ~]# kubeadm init \
--apiserver-advertise-address=192.168.229.148 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.25.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
..... output omitted
Your Kubernetes control-plane has initialized successfully! // seeing this line means the init succeeded
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.229.148:6443 --token jvutp0.bwfvfluujjpjewvd \
--discovery-token-ca-cert-hash sha256:074be407d9cbaa4480c4757aa773759018b6de4a09733ac1a9b1f7214d83b435
// Once initialization completes, save this join command to a file; you will need it later
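If the join command is lost or the token expires (bootstrap tokens default to a 24-hour TTL), a new one can be printed at any time on the master:
# regenerate a valid join command
$ kubeadm token create --print-join-command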
![kubeadm init output](https://1000bd.com/contentImg/2023/11/04/044949317.png)
The default registry k8s.gcr.io is unreachable from mainland China, which is why the init command above points --image-repository at the Aliyun mirror.
Set up the kubectl client:
// commands for a regular user
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# kubectl get nodes
// command for root
export KUBECONFIG=/etc/kubernetes/admin.conf
// I am running as root, so I use this approach and make it permanent:
[root@master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' > /etc/profile.d/k8s.sh
[root@master ~]# source /etc/profile.d/k8s.sh
[root@master ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf // seeing this confirms the variable is set
[root@master ~]# kubectl get nodes // list the hosts currently in the cluster
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 22h v1.25.0
// Only the master is listed, and it is NotReady: the pod network is not up yet. Installing flannel will fix this.
Copy the admin kubeconfig to node1 and node2 so they can run kubectl get nodes as well:
[root@master ~]# scp /etc/kubernetes/admin.conf root@node1.example.com:/etc/kubernetes/
admin.conf 100% 5639 3.0MB/s 00:00
[root@master ~]# scp /etc/kubernetes/admin.conf root@node2.example.com:/etc/kubernetes/
admin.conf 100% 5639 2.4MB/s 00:00
[root@master ~]# scp /etc/profile.d/k8s.sh root@node1.example.com:/etc/profile.d/
k8s.sh 100% 45 43.3KB/s 00:00
[root@master ~]# scp /etc/profile.d/k8s.sh root@node2.example.com:/etc/profile.d/
k8s.sh 100% 45 42.9KB/s 00:00
[root@node1 ~]# bash
[root@node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 23h v1.25.0
node1    Ready    <none>          20m   v1.25.0
node2    Ready    <none>          20m   v1.25.0
[root@node2 ~]# bash
[root@node2 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 23h v1.25.0
node1    Ready    <none>          20m   v1.25.0
node2    Ready    <none>          20m   v1.25.0
Flannel implements the underlying network for Kubernetes. Its main jobs are:
It helps Kubernetes assign each Docker container on every Node an IP address that does not conflict with any other.
It builds an overlay network between those IP addresses, through which packets are delivered unchanged to the target container.
![Flannel overlay network](https://1000bd.com/contentImg/2023/11/04/044949353.png)
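The manifest below is copied from the upstream flannel project. One way to fetch it, assuming GitHub is reachable and the path still matches the upstream layout at the time of writing:
# download the kube-flannel manifest from the flannel repository
$ wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml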
# Copy the manifest from the web into a local file
[root@master ~]# cat kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
[root@master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
// All resources report created: the apply succeeded
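Before checking node status, you can watch the flannel pods come up (one per node; the pod names will differ in your cluster):
# flannel runs as a DaemonSet in the kube-flannel namespace
$ kubectl get pods -n kube-flannel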
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 22h v1.25.0
// The status is Ready, so nodes can now be joined to the cluster
Run the following on 192.168.229.150 and 192.168.229.151 (the nodes).
To add new nodes to the cluster, run the kubeadm join command that kubeadm init printed:
// Join the cluster
# Before joining, copy the containerd config.toml to node1 and node2; otherwise kubeadm join will report errors
[root@master ~]# scp /etc/containerd/config.toml root@node1.example.com:/etc/containerd/
[root@node1 ~]# systemctl restart containerd
[root@master ~]# scp /etc/containerd/config.toml root@node2.example.com:/etc/containerd/
[root@node2 ~]# systemctl restart containerd
[root@node1 ~]# kubeadm join 192.168.229.148:6443 --token jvutp0.bwfvfluujjpjewvd \
--discovery-token-ca-cert-hash sha256:074be407d9cbaa4480c4757aa773759018b6de4a09733ac1a9b1f7214d83b435
..... output omitted
// Seeing the following means the join succeeded
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@node2 ~]# kubeadm join 192.168.229.148:6443 --token jvutp0.bwfvfluujjpjewvd \
--discovery-token-ca-cert-hash sha256:074be407d9cbaa4480c4757aa773759018b6de4a09733ac1a9b1f7214d83b435
..... output omitted
// Seeing the following means the join succeeded
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
// Check that node1 and node2 have joined the cluster
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 23h v1.25.0
node1    Ready    <none>          3m3s    v1.25.0
node2    Ready    <none>          2m58s   v1.25.0
// And the network is up: both nodes are Ready
Create a pod in the Kubernetes cluster and verify that it runs correctly:
[root@master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-76d6c9b8c-pvqg4 0/1 ContainerCreating 0 40s
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        23h
service/nginx        NodePort    10.108.180.41   <none>        80:30623/TCP   19s
// Cluster IP: 10.108.180.41
// The pod must go from ImagePullBackOff/ContainerCreating to Running before the service can be reached
[root@master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-76d6c9b8c-pvqg4 0/1 ImagePullBackOff 0 10m
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        23h
service/nginx        NodePort    10.108.180.41   <none>        80:30623/TCP   10m
Once the status shows Running, the service can be accessed:
[root@master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-76d6c9b8c-pvqg4 1/1 Running 0 18m
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        23h
service/nginx        NodePort    10.108.180.41   <none>        80:30623/TCP   18m
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-76d6c9b8c-pvqg4 1/1 Running 0 21m
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        23h
nginx        NodePort    10.108.180.41   <none>        80:30623/TCP   21m
// The cluster IP is only reachable from inside the cluster
[root@master ~]# curl 10.108.180.41
...
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
// The welcome page came back: the service is reachable from inside the cluster
To access it from outside, use any node IP plus the NodePort:
![nginx welcome page served via the NodePort](https://1000bd.com/contentImg/2023/11/04/044949497.png)
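The same check works from the command line against any node in the cluster (node IP and port taken from the transcripts above):
# a NodePort service listens on every node's IP
$ curl http://192.168.229.150:30623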
## Reboot test: verify the service is still running afterwards
Reboot all three virtual machines:
[root@master ~]# reboot
[root@node1 ~]# reboot
[root@node2 ~]# reboot
// After the reboot, all three hosts show Ready, so everything is fine
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 24h v1.25.0
node1    Ready    <none>          51m   v1.25.0
node2    Ready    <none>          51m   v1.25.0
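Since the point of this test is that the service survives the reboot, also confirm the nginx pod and service came back (names and port taken from the earlier transcripts):
# the pod should return to Running and the NodePort should still answer
$ kubectl get pod,svc
$ curl http://192.168.229.150:30623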
Troubleshooting
// Error when joining the cluster
[root@k8s-node1 ~]# kubeadm join 192.168.229.148:6443 --token w6pnhp.q12awlrw6cx0nat1 --discovery-token-ca-cert-hash sha256:febaca84568c971d0fdad0a3644e31abbb0027d6dea49b9388ea0ccc86a05c37
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "k8s-node1" could not be reached
[WARNING Hostname]: hostname "k8s-node1": lookup k8s-node1 on 114.114.114.114:53: no such host
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "w6pnhp" // the error
To see the stack trace of this error execute with --v=5 or higher
// Solution
[root@k8s-master ~]# kubeadm token generate // run this on the master
w6pnhp.q12awlrw6cx0nat1
// use the newly generated token
[root@k8s-master ~]# kubeadm token create w6pnhp.q12awlrw6cx0nat1 --print-join-command --ttl=0
kubeadm join 192.168.229.148:6443 --token w6pnhp.q12awlrw6cx0nat1 --discovery-token-ca-cert-hash sha256:febaca84568c971d0fdad0a3644e31abbb0027d6dea49b9388ea0ccc86a05c37
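To see which bootstrap tokens the cluster currently accepts and when they expire:
# list all bootstrap tokens and their TTLs
$ kubeadm token list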
Test on node1 that the join now succeeds:
[root@k8s-node1 ~]# kubeadm join 192.168.229.148:6443 --token w6pnhp.q12awlrw6cx0nat1 --discovery-token-ca-cert-hash sha256:febaca84568c971d0fdad0a3644e31abbb0027d6dea49b9388ea0ccc86a05c37
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "k8s-node1" could not be reached
[WARNING Hostname]: hostname "k8s-node1": lookup k8s-node1 on 114.114.114.114:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster: // seeing the following means it succeeded
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.