Kubernetes deprecated its dockershim-based Docker support in version 1.20 and removed it entirely in 1.24; since then only container runtimes that implement the Container Runtime Interface (CRI), such as containerd, are supported. containerd is in fact the runtime that Docker itself is built on; Docker simply adds a more human-friendly interface on top of it. Images built by Docker are not Docker-specific: they are OCI (Open Container Initiative) images. So although Kubernetes no longer supports Docker as its container runtime, images built from a Dockerfile can still be used by any CRI-compliant runtime.
Developers who are familiar with Docker can therefore keep building images with Docker, push them to a private or public registry, and have Kubernetes pull those images to create containers and manage pods.
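Once the cluster below is up, you can confirm which runtime each node actually uses; the CONTAINER-RUNTIME column of the wide node listing shows it (on k3s this is typically containerd):
$ kubectl get node -o wide #the CONTAINER-RUNTIME column shows e.g. containerd://<version>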
Prepare three nodes running Ubuntu. In a test environment you can create three virtual machines with multipass.
$ multipass launch -n master01
$ multipass launch -n worker01
$ multipass launch -n worker02
$ multipass list
Name                    State             IPv4             Image
master01                Running           192.168.64.15    Ubuntu 22.04 LTS
worker01                Running           192.168.64.16    Ubuntu 22.04 LTS
worker02                Running           192.168.64.17    Ubuntu 22.04 LTS
Rancher is a company that builds container management software, and it hosts a mirror in China of everything needed to install k3s. Using the install script it provides, k3s can be installed and started as a service with a single command.
Install script for the master node:
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
Install script for the worker nodes:
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
K3S_URL points at the master node (https://<master-ip>:6443), and the value for K3S_TOKEN is stored on the master node at /var/lib/rancher/k3s/server/node-token. (A scripted variant of the worker install is sketched after the manual steps below.)
Switch to the master01 node and install the k3s server:
$ multipass shell master01
ubuntu@master01:~$ curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
ubuntu@master01:~$ ip a | grep 192 #check the master node's ip address
inet 192.168.64.15/24 metric 100 brd 192.168.64.255 scope global dynamic enp0s1
ubuntu@master01:~$ sudo cat /var/lib/rancher/k3s/server/node-token #read the K3S_TOKEN value
K106f19b5abfec691fb3d92bf92dd894bc22b835fe0a787a2edda37f709dff5fbc0::server:aef034344579142feb3822207e654689
ubuntu@master01:~$ exit
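Optionally, you can verify from the host that the k3s server service came up before joining the workers (the systemd unit is named k3s on the server and k3s-agent on the workers):
$ multipass exec master01 -- sudo systemctl is-active k3s #should print: active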
Switch to the worker01 node and install the k3s agent:
$ multipass shell worker01
ubuntu@worker01:~$ curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://192.168.64.15:6443 K3S_TOKEN=K106f19b5abfec691fb3d92bf92dd894bc22b835fe0a787a2edda37f709dff5fbc0::server:aef034344579142feb3822207e654689 sh -
ubuntu@worker01:~$ exit
Switch to the worker02 node and install the k3s agent:
$ multipass shell worker02
ubuntu@worker02:~$ curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://192.168.64.15:6443 K3S_TOKEN=K106f19b5abfec691fb3d92bf92dd894bc22b835fe0a787a2edda37f709dff5fbc0::server:aef034344579142feb3822207e654689 sh -
ubuntu@worker02:~$ exit
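If more workers need to be joined later, copying the token and master ip by hand can be scripted from the host instead. A minimal sketch, assuming multipass as above and a hypothetical extra node named worker03:
$ TOKEN=$(multipass exec master01 -- sudo cat /var/lib/rancher/k3s/server/node-token)
$ MASTER_IP=$(multipass info master01 | grep IPv4 | awk '{print $2}')
$ multipass exec worker03 -- bash -c "curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://$MASTER_IP:6443 K3S_TOKEN=$TOKEN sh -"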
On the host, install kubectl to manage the cluster; on macOS it can be installed with a single brew command: brew install kubectl
kubectl then needs to be told which Kubernetes cluster to talk to, which requires some configuration.
After the master node is installed, the cluster configuration is stored on it in /etc/rancher/k3s/k3s.yaml. To copy this file to the host, first relax its permissions:
$ multipass shell master01
ubuntu@master01:~$ ll /etc/rancher/k3s/k3s.yaml
-rw------- 1 root root 2969 Mar 14 10:01 /etc/rancher/k3s/k3s.yaml
ubuntu@master01:~$ sudo chmod a+r /etc/rancher/k3s/k3s.yaml #grant read permission to all users
ubuntu@master01:~$ ll /etc/rancher/k3s/k3s.yaml
-rw-r--r-- 1 root root 2969 Mar 14 10:01 /etc/rancher/k3s/k3s.yaml
ubuntu@master01:~$ exit
$ multipass transfer master01:/etc/rancher/k3s/k3s.yaml ~/.kube/ #copy the config file from master01 to the host (create ~/.kube first if it does not exist)
Edit the file and change the server address from 127.0.0.1 to master01's ip:
$ vim ~/.kube/k3s.yaml
server: https://192.168.64.15:6443
By default kubectl reads its configuration from ~/.kube/config, so rename the imported k3s.yaml to config.
$ mv ~/.kube/k3s.yaml ~/.kube/config
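If ~/.kube/config already holds the configuration of another cluster, an alternative (not required for this setup) is to keep the file under its own name and point kubectl at it explicitly:
$ export KUBECONFIG=~/.kube/k3s.yaml
$ kubectl config view --minify #confirm the server field points at master01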
With that, the cluster can be managed with kubectl from the host:
$ kubectl get node
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   86m   v1.28.7+k3s1
worker01   Ready    <none>                 59m   v1.28.7+k3s1
worker02   Ready    <none>                 52m   v1.28.7+k3s1
A few kubectl commands are used for node maintenance:
Options for drain (a concrete example follows the command list):
--force: required when some pods are not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet, so that they are evicted anyway (for example, kube-proxy);
--ignore-daemonsets: skip pods managed by a DaemonSet;
--delete-local-data: if a pod uses local (emptyDir) storage, delete the pod anyway and discard that data (recent kubectl versions rename this flag to --delete-emptydir-data).
$ kubectl get nodes
$ kubectl cordon <node-name>
$ kubectl drain <node-name> --force --ignore-daemonsets --delete-local-data
$ kubectl uncordon <node-name>
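For example, with this cluster, the following takes worker01 out of service for maintenance and brings it back afterwards:
$ kubectl cordon worker01 #stop scheduling new pods onto worker01
$ kubectl drain worker01 --ignore-daemonsets --delete-emptydir-data #evict the pods currently running on it
$ kubectl uncordon worker01 #after maintenance, allow scheduling again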
Create an nfs virtual machine and install an NFS server on it to provide shared storage for the cluster.
$ multipass launch -n nfs -d 25G
$ multipass shell nfs
ubuntu@nfs:~$ sudo apt install nfs-kernel-server #install the NFS server
ubuntu@nfs:~$ mkdir nfs #create a directory to share
ubuntu@nfs:~$ sudo chmod 777 nfs #open up the directory permissions
ubuntu@nfs:~$ sudo vim /etc/exports #edit the NFS exports file: allow the cluster-internal 10.x range and the host's 192.x range; the insecure option permits connections from high (non-privileged) ports, without which mounting from macOS fails
/home/ubuntu/nfs 192.0.0.0/1(rw,sync,no_subtree_check,insecure)
/home/ubuntu/nfs 10.0.0.0/1(rw,sync,no_subtree_check,insecure)
ubuntu@nfs:~$ sudo exportfs -arv #re-export the shared directories
ubuntu@nfs:~$ showmount -e localhost #list the exported directories (from other nodes, specify the server's ip instead of localhost)
Export list for localhost:
/home/ubuntu/nfs 192.0.0.0/1
Install the NFS client on every cluster node:
$ sudo apt install nfs-common
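Since PersistentVolumes are mounted by the nodes rather than by the host, it is worth checking from a worker node that the export is visible (assuming the nfs VM received the address 192.168.64.20, as in the listing further below):
$ multipass exec worker01 -- showmount -e 192.168.64.20 #the exported /home/ubuntu/nfs directory should be listed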
Install the NFS client on the host as well and verify that the share can be mounted (the nfs VM's address is 192.168.64.20 here, matching the PV definition below):
$ showmount -e 192.168.64.20 #list the exported directories
Exports list on 192.168.64.20:
/home/ubuntu/nfs 192.0.0.0/1
$ mkdir abcd #create a local mount point
$ sudo mount 192.168.64.20:/home/ubuntu/nfs /Users/deepbodhi/abcd
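To confirm the mount works end to end, you can write a file through it on the host and read it back on the nfs VM (a quick check; test.txt is just a throwaway name):
$ echo hello > /Users/deepbodhi/abcd/test.txt #write through the NFS mount on the host
$ multipass exec nfs -- cat /home/ubuntu/nfs/test.txt #the same file should be readable on the nfs VM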
Assuming the node running the NFS server has the ip 192.168.64.20, create a pv.yaml file:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv
  namespace: default
  labels:
    pv: registry-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: registry-pv
  nfs:
    path: /home/ubuntu/nfs
    server: 192.168.64.20
    readOnly: false
Create the PV and check its status:
$ kubectl create -f pv.yaml
$ kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
registry-pv   1Gi        RWX            Retain           Bound    default/registry-pvc   registry-pv             18m
Create a pvc.yaml file:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
  storageClassName: registry-pv # match the pv via storageClassName
Create the PVC and check its status:
$ kubectl create -f pvc.yaml
$ kubectl get pvc
NAME           STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
registry-pvc   Bound    registry-pv   1Gi        RWX            registry-pv    17m
If the PVC is stuck in the Pending state, check the PVC configuration; kubectl describe pvc shows the relevant events.
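A troubleshooting sketch for a Pending claim, using the names from this example:
$ kubectl describe pvc registry-pvc #the Events section explains why binding failed
$ kubectl get pv registry-pv -o yaml #compare storageClassName, accessModes and capacity with the claim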
Deploy an nginx container and mount the nginx web root onto the volume provided by the PVC. Create an nginx.yaml file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pvc
  namespace: default
  labels:
    app: nginx-pvc
spec:
  selector:
    matchLabels:
      app: nginx-pvc
  template:
    metadata:
      labels:
        app: nginx-pvc
    spec:
      containers:
        - name: nginx-test-pvc
          image: nginx:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: web-port
              containerPort: 80
              protocol: TCP
          volumeMounts:
            - name: nginx-persistent-storage
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nginx-persistent-storage
          persistentVolumeClaim:
            claimName: registry-pvc
---
# expose the port through a Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
      protocol: TCP
  selector:
    app: nginx-pvc
  type: NodePort
Create the resources and check their status:
$ kubectl create -f nginx.yaml
$ kubectl get svc nginx
NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.43.60.227   <none>        80:30080/TCP   19m
$ kubectl get deployment
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-pvc   1/1     1            1           20m
$ kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
nginx-pvc-687c675c7b-46j5k   1/1     Running   0          21m
Note that the first time the nginx container is created, its image has to be pulled over the network; you can follow the progress with kubectl describe.
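For example, to watch the image pull and volume mounting for this deployment (the label comes from the nginx.yaml above):
$ kubectl describe pod -l app=nginx-pvc #the Events section shows image pulling, volume attach and start-up
$ kubectl get pod -w #watch the pod status until it becomes Running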
In the example above, the nginx web root is bound to the PV, which in turn points at the NFS directory set up earlier. So you can create an index.html file in the corresponding directory on the NFS server and check whether the nginx container in the cluster really serves content from the NFS share:
$ multipass shell nfs
ubuntu@nfs:~$ vim ~/nfs/index.html
<html>
<head>
<title>test</title>
</head>
<body>
<h1>Test</h1>
Hello World
</body>
</html>
ubuntu@nfs:~$ exit
$ multipass ls #check the cluster nodes' ip addresses
Name                    State             IPv4             Image
master01                Running           192.168.64.23    Ubuntu 22.04 LTS
                                          10.42.0.0
                                          10.42.0.1
nfs                     Running           192.168.64.20    Ubuntu 22.04 LTS
worker01                Running           192.168.64.24    Ubuntu 22.04 LTS
                                          10.42.1.0
                                          10.42.1.1
worker02                Running           192.168.64.25    Ubuntu 22.04 LTS
                                          10.42.2.0
                                          10.42.2.1
$ curl 192.168.64.23:30080 #access port 30080 on any cluster node
<html>
<head>
<title>test</title>
</head>
<body>
<h1>Test</h1>
Hello World
</body>
</html>
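As a final cross-check (using the Deployment created above), you can also list the web root from inside the running container; the index.html created on the NFS server should show up there:
$ kubectl exec deploy/nginx-pvc -- ls /usr/share/nginx/html #index.html should be listed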