Fortunately, Kubernetes provides exactly this kind of resource object:
# kind: ReplicationController
# spec.replicas: the desired number of Pod replicas; defaults to 1
# spec.template: the Pod definition we wrote earlier, minus apiVersion and kind
# spec.template.metadata.labels: the Pod's labels must match spec.selector, so that the RC can manage these Pods
# This YAML defines an RC resource named rc-demo that keeps 3 Pods running at all times, each using the nginx image
[root@master ~]# cat nginx_rc.yaml
apiVersion: "v1"
kind: ReplicationController
metadata:
  name: rc-demo
  labels:
    name: rc
spec:
  replicas: 3
  selector:
    name: rc
  template:
    metadata:
      labels:
        name: rc
    spec:
      containers:
      - name: nginx-demo
        image: nginx:1.20
        ports:
        - containerPort: 80
# Deploy the resource
[root@master ~]# kubectl apply -f nginx_rc.yaml
replicationcontroller/rc-demo created
# You can also delete one of the Pods with kubectl delete and watch a new Pod automatically appear in its place
# Check the RC
[root@master ~]# kubectl get rc
NAME DESIRED CURRENT READY AGE
rc-demo 3 3 3 6m28s
# Check the Pods
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
rc-demo-czr2l 1/1 Running 0 6m33s
rc-demo-gqwlg 1/1 Running 0 6m34s
rc-demo-lldzt 1/1 Running 0 6m33s
# Since kubectl 1.11, rolling-update has been deprecated (and later removed), so commands in the form below are no longer supported; use a Deployment for rolling updates instead
kubectl rolling-update rc-demo --image=nginx:1.21
Finally, a summary of the features and purpose of RC/RS:
[root@master ~]# cat nginx_rs.yaml
apiVersion: "apps/v1"
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
# Deploy the resource
[root@master ~]# kubectl apply -f nginx_rs.yaml
replicaset.apps/nginx created
# Check the RS
[root@master ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx 3 3 3 81s
# Check the Pods
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-9b8x2 1/1 Running 0 96s
nginx-dlrsz 1/1 Running 0 96s
nginx-wh7gt 1/1 Running 0 96s
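One difference worth noting: unlike an RC, whose selector only supports equality matches, a ReplicaSet selector also supports set-based matchExpressions. A hypothetical selector fragment (the label values here are made up for illustration):

```yaml
# Hypothetical ReplicaSet selector: match Pods whose app label is nginx or web
selector:
  matchExpressions:
  - key: app
    operator: In
    values: ["nginx", "web"]
```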
All RC features: a Deployment provides every RC capability described above.
What is a stateless service:
With a Deployment object, you can easily do the following:
maxSurge and maxUnavailable control the rolling-update strategy; each can be an absolute number or a percentage.
Note: the two must not both be 0.
Recommended configuration:
[root@master ~]# cat nginx_deployment.yaml
apiVersion: "apps/v1"
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  # The strategy used to replace existing Pods with new ones
  strategy:
    rollingUpdate:
      # maxSurge: 1 means that during a rolling update, at most one Pod more than
      # the desired replica count may run at once. With .spec.replicas set to 6,
      # in the worst case 7 Pods run simultaneously (6 old + 1 new).
      maxSurge: 1
      # maxUnavailable: 0 means no old Pod may become unavailable during the update:
      # an old Pod is only deleted after a new Pod is ready, ensuring at least
      # .spec.replicas Pods remain available throughout the update.
      maxUnavailable: 0
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        imagePullPolicy: IfNotPresent
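The bounds these two fields impose can be sanity-checked with plain shell arithmetic. A sketch for the absolute values used above (percentage values would additionally need rounding: maxSurge rounds up, maxUnavailable rounds down):

```shell
# Pod-count bounds during a rolling update, for the values in nginx_deployment.yaml
replicas=6
maxSurge=1
maxUnavailable=0
max_pods=$((replicas + maxSurge))          # most Pods that may exist at once
min_ready=$((replicas - maxUnavailable))   # fewest Pods that must stay available
echo "during the update: between $min_ready and $max_pods Pods"
# prints: during the update: between 6 and 7 Pods
```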
# Deploy the resource
[root@master ~]# kubectl apply -f nginx_deployment.yaml
deployment.apps/nginx created
# Check the Deployment
[root@master ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 6/6 6 6 56s
# Check the Pods
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-795ff9d4cc-5zd79 1/1 Running 0 89s
nginx-795ff9d4cc-6wgjs 1/1 Running 0 89s
nginx-795ff9d4cc-9pfxs 1/1 Running 0 89s
nginx-795ff9d4cc-gbzvr 1/1 Running 0 86s
nginx-795ff9d4cc-hcztd 1/1 Running 0 89s
nginx-795ff9d4cc-jxz28 1/1 Running 0 88s
# Edit the replicas field in the YAML file, then re-apply the file
# A larger replicas value than before scales up; a smaller one scales down
[root@master ~]# cat nginx_deployment.yaml | grep 8
replicas: 8
# Re-apply
[root@master ~]# kubectl apply -f nginx_deployment.yaml
deployment.apps/nginx configured
# You can also modify the live object directly with kubectl edit
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-795ff9d4cc-5zd79 1/1 Running 0 8m
nginx-795ff9d4cc-6wgjs 1/1 Running 0 8m
nginx-795ff9d4cc-9pfxs 1/1 Running 0 8m
nginx-795ff9d4cc-d4q2f 1/1 Running 0 38s
nginx-795ff9d4cc-gbzvr 1/1 Running 0 7m57s
nginx-795ff9d4cc-gxnhw 1/1 Running 0 38s
nginx-795ff9d4cc-hcztd 1/1 Running 0 8m
nginx-795ff9d4cc-jxz28 1/1 Running 0 7m59s
# Find the replicas field, change it, and save; the change takes effect immediately (use with care)
[root@master ~]# kubectl edit deployment nginx
deployment.apps/nginx edited
# Check the Deployment; I just changed the replica count to 9
[root@master ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 9/9 9 9 9m46s
# You can also scale from the command line, specifying the count with the --replicas flag
[root@master ~]# kubectl scale deployment nginx --replicas=2
deployment.apps/nginx scaled
[root@master ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 2/2 2 2 11m
# Change the image in the YAML file, then re-apply the file
[root@master ~]# vi nginx_deployment.yaml
image: nginx:1.21
# Re-apply the resource
[root@master ~]# kubectl apply -f nginx_deployment.yaml
deployment.apps/nginx configured
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-5ff79c7ff8-2fn4r 0/1 ContainerCreating 0 27s
nginx-795ff9d4cc-55h7p 1/1 Running 0 27s
nginx-795ff9d4cc-8fpqg 1/1 Running 0 27s
nginx-795ff9d4cc-9pfxs 1/1 Running 0 14m
nginx-795ff9d4cc-bmpnn 1/1 Running 0 27s
nginx-795ff9d4cc-hcztd 1/1 Running 0 14m
nginx-795ff9d4cc-rtffd 1/1 Running 0 27s
nginx-795ff9d4cc-wxvl2 1/1 Running 0 27s
nginx-795ff9d4cc-xcfbn 1/1 Running 0 27s
# View the rollout history with the following command
[root@master ~]# kubectl rollout history deployment nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
0 <none>
1 <none>
2 <none>
# Use --revision to see the details of a specific revision
[root@master ~]# kubectl rollout history deployment nginx --revision=1
deployment.apps/nginx with revision #1
Pod Template:
Labels: app=nginx
pod-template-hash=795ff9d4cc
Containers:
nginx:
Image: nginx:1.20
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
[root@master ~]# kubectl rollout history deployment nginx --revision=2
deployment.apps/nginx with revision #2
Pod Template:
Labels: app=nginx
pod-template-hash=5ff79c7ff8
Containers:
nginx:
Image: nginx:1.21
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
# You can also perform a rolling update with the kubectl set image command
[root@master ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 8/8 8 8 16m
[root@master ~]# kubectl set image deployment nginx nginx=nginx:1.22
deployment.apps/nginx image updated
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-7c8489bfcf-4jbc2 1/1 Running 0 13s
nginx-7c8489bfcf-8f575 1/1 Running 0 18s
nginx-7c8489bfcf-8pcp9 1/1 Running 0 14s
nginx-7c8489bfcf-cs2jh 1/1 Running 0 16s
nginx-7c8489bfcf-g8zmh 1/1 Running 0 12s
nginx-7c8489bfcf-t8c4l 1/1 Running 0 11s
nginx-7c8489bfcf-t8q47 1/1 Running 0 15s
# Check the rollout status
[root@master ~]# kubectl rollout status deployment nginx
deployment "nginx" successfully rolled out
# Verify that the nginx image version was updated
[root@master ~]# kubectl exec -it nginx-7c8489bfcf-wdtgh -- nginx -v
nginx version: nginx/1.22.1
# View the rollout history
[root@master ~]# kubectl rollout history deployment nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
0 <none>
1 <none>
2 <none>
3 <none>
# Roll back to the previous revision (nginx:1.21)
[root@master ~]# kubectl rollout undo deployment nginx
deployment.apps/nginx rolled back
# Pick any Pod of the Deployment to verify
[root@master ~]# kubectl exec -it nginx-5ff79c7ff8-pvg74 -- nginx -v
nginx version: nginx/1.21.5
# Roll back to a specific revision
[root@master ~]# kubectl rollout undo deployment nginx --to-revision=1
deployment.apps/nginx rolled back
# Pick any Pod of the Deployment to verify
[root@master ~]# kubectl exec -it nginx-795ff9d4cc-plm4l -- nginx -v
nginx version: nginx/1.20.2
# Check the rollback status
[root@master ~]# kubectl rollout status deployment nginx
deployment "nginx" successfully rolled out
# Pause Deployment updates
[root@master ~]# kubectl rollout pause deployment nginx
deployment.apps/nginx paused
# Resume Deployment updates
[root@master ~]# kubectl rollout resume deployment nginx
deployment.apps/nginx resumed
In which situations would we need this kind of workload? Such scenarios are actually quite common, for example:
Cluster storage daemons: e.g. glusterd or ceph, deployed on every node to provide persistent storage
Node monitoring daemons: e.g. when monitoring the cluster with Prometheus, run a node-exporter on every node to collect node metrics
Log collection daemons: e.g. fluentd or logstash, run on every node to collect container logs
[root@master ~]# cat nginx-ds.yaml
apiVersion: "apps/v1"
kind: DaemonSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        imagePullPolicy: IfNotPresent
# Deploy the resource
[root@master ~]# kubectl apply -f nginx-ds.yaml
daemonset.apps/nginx created
# Check the DaemonSet
[root@master ~]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
nginx 2 2 2 2 2 <none> 35s
# You can see that each worker node runs one nginx Pod
[root@master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-vplg4 1/1 Running 0 72s 10.244.1.30 node2 <none> <none>
nginx-wd7pq 1/1 Running 0 72s 10.244.2.27 node1 <none> <none>
# Look closely and you will notice the master node runs no Pod; this is because of its taint
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 42h v1.23.0
node1 Ready <none> 42h v1.23.0
node2 Ready <none> 42h v1.23.0
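If you did want the DaemonSet to run on master as well, one common approach is to tolerate the control-plane taint in the Pod template. A hypothetical fragment (the taint key below matches the default applied by kubeadm on v1.23 clusters; verify yours with kubectl describe node master):

```yaml
# Hypothetical addition to the Pod template spec in nginx-ds.yaml:
# tolerate the master taint so a Pod is also scheduled on the control-plane node
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
```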
Characteristics of a stateful service:
Stable, unique network identifiers
Stable, persistent storage
Ordered, graceful deployment and scaling
Ordered, graceful deletion and termination
Ordered, automated rolling updates
A headless Service is allocated no clusterIP. Resolving a headless Service's DNS name returns the DNS names and IPs of all its Pods (only Pods managed by a StatefulSet get DNS records), whereas resolving a normal Service's DNS name returns only the Service's ClusterIP.
Why use a headless service (a Service without a service IP)?
With a Deployment, Pod names are unordered random strings. When Pods are managed by a StatefulSet, their names must be ordered, no Pod can be arbitrarily replaced, and a rebuilt Pod keeps the same name. Because Pod IPs change, Pods must be identified by name: the Pod name is the Pod's unique identifier and must be persistent, stable, and valid. This is where the headless service comes in: it gives each Pod a unique DNS name.
A headless service assigns the Service a domain name in the following format:
<service name>.<namespace name>.svc.cluster.local
The global FQDN format for resources in k8s:
Service_NAME.NameSpace_NAME.Domain.LTD.
Domain.LTD. = svc.cluster.local.  # the default k8s cluster domain
A StatefulSet assigns each associated Pod a DNS name:
<Pod name>.<service name>.<namespace name>.svc.cluster.local
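The naming rules above can be illustrated by assembling the FQDNs for this demo's objects. This is plain string construction in the shell; it does not query the cluster (the namespace default is assumed):

```shell
# Build the Service and Pod FQDNs for the nginx StatefulSet used below
pod=nginx-0
svc=nginx
ns=default
service_fqdn="${svc}.${ns}.svc.cluster.local"
pod_fqdn="${pod}.${svc}.${ns}.svc.cluster.local"
echo "$service_fqdn"   # nginx.default.svc.cluster.local
echo "$pod_fqdn"       # nginx-0.nginx.default.svc.cluster.local
```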
[root@master ~]# cat nginx_sts.yaml
apiVersion: "v1"
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: "apps/v1"
kind: StatefulSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
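Note that the StatefulSet above declares no storage. The "stable, persistent storage" feature mentioned earlier comes from volumeClaimTemplates, which give each Pod its own PVC that is reattached to the same Pod after a restart. A hypothetical fragment (the storageClassName is an assumption about your cluster):

```yaml
# Hypothetical addition under the StatefulSet spec: one 1Gi PVC per Pod,
# named data-nginx-0, data-nginx-1, ...
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: standard   # assumed storage class
    resources:
      requests:
        storage: 1Gi
```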
# Deploy the resources
[root@master ~]# kubectl apply -f nginx_sts.yaml
service/nginx created
statefulset.apps/nginx unchanged
# Check the StatefulSet
[root@master ~]# kubectl get sts
NAME READY AGE
nginx 2/2 45s
# Check the Pods; note the ordered names
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-0 1/1 Running 0 63s
nginx-1 1/1 Running 0 46s
# Adjust the replica count with the --replicas flag
[root@master ~]# kubectl scale sts nginx --replicas=4
statefulset.apps/nginx scaled
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-0 1/1 Running 0 2m6s
nginx-1 1/1 Running 0 109s
nginx-2 1/1 Running 0 10s
nginx-3 1/1 Running 0 9s
# Deleting the StatefulSet with a default (cascading) delete removes its Pods as well
[root@master ~]# kubectl delete sts nginx
statefulset.apps "nginx" deleted
# Re-apply the resources for the next verification
[root@master ~]# kubectl apply -f nginx_sts.yaml
service/nginx unchanged
statefulset.apps/nginx created
# Delete the StatefulSet without cascading: the orphaned Pods keep running
[root@master ~]# kubectl delete sts nginx --cascade=false
warning: --cascade=false is deprecated (boolean value) and can be replaced with --cascade=orphan.
statefulset.apps "nginx" deleted
[root@master ~]# kubectl get sts
No resources found in default namespace.
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-0 1/1 Running 0 38s
nginx-1 1/1 Running 0 36s
# Note: a Job's RestartPolicy only supports Never and OnFailure, not Always. As we know, a Job exists to run a batch task that finishes; if Always were allowed, the Job would restart forever
[root@master ~]# cat job-demo.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo
spec:
  template:
    metadata:
      name: job-demo
    spec:
      restartPolicy: Never
      containers:
      - name: counter
        image: busybox
        command:
        - "/bin/sh"
        - "-c"
        - "for i in 9 8 7 6 5 4 3 2 1; do echo $i; sleep 1; done"
# Deploy the resource
[root@master ~]# kubectl apply -f job-demo.yaml
job.batch/job-demo created
# Check the Job
[root@master ~]# kubectl get job
NAME COMPLETIONS DURATION AGE
job-demo 1/1 27s 46s
# Once the Job has finished, its Pod's status is Completed
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
job-demo-qhr9p 0/1 Completed 0 104s
# View the output with kubectl logs
[root@master ~]# kubectl logs job-demo-qhr9p
9
8
7
6
5
4
3
2
1
minute hour day-of-month month day-of-week command
Field 1: minute, 0-59
Field 2: hour, 0-23
Field 3: day of month, 1-31
Field 4: month, 1-12
Field 5: day of week, 0-7 (0 and 7 both mean Sunday)
Field 6: the command to run
[root@master ~]# cat cronjob-demo.yaml
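The file's contents are not shown above. Based on the name and the `* * * * *` schedule visible in the output below, a minimal sketch might look like this (the busybox container and its command are assumptions, not the original file):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob-demo
spec:
  schedule: "* * * * *"   # run every minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: cronjob-demo
            image: busybox
            command: ["/bin/sh", "-c", "date; echo hello from cronjob-demo"]
```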
# Deploy the resource
[root@master ~]# kubectl apply -f cronjob-demo.yaml
cronjob.batch/cronjob-demo created
# Check the CronJob
[root@master ~]# kubectl get cronjob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob-demo * * * * * False 1 9s 38s
# Check the Pods
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
cronjob-demo-28656169-wdcvh 1/1 Running 0 22s