pod
rs (ReplicaSet)
deployment
job
cronjob
svc
ingress
cm (ConfigMap)
secret
sa (ServiceAccount)
volume
PV (PersistentVolume)
StatefulSet
A pod composed of multiple containers
apiVersion: v1
kind: Pod
metadata:
name: xcrjpod
namespace: default
labels:
app: xcrjpod
spec:
containers:
- name: xcrjapp-1
image: wangyanglinux/nginx:v1.0
- name: xcrjbusybox-1
image: busybox
command:
- "/bin/bash"
- "-c"
- "sleep 3600"
kubectl apply -f 1-pod.yaml
kubectl logs xcrjpod -c xcrjapp-1
kubectl logs xcrjpod -c xcrjbusybox-1
apiVersion: v1
kind: Pod
metadata:
name: pod-initc
labels:
app: myapp
spec:
initContainers:
- name: init-meservice
image: busybox
command: ['sh', '-c', 'until nslookup meservice; do echo waiting for meservice; sleep 2; done;']
- name: init-dbservice
image: busybox
command: ['sh', '-c', 'until nslookup dbservice; do echo waiting for dbservice; sleep 2; done;']
containers:
- name: myapp-c
image: busybox
command: ['sh','-c','echo The app is running! && sleep 3600']
kubectl apply -f 2-pod-initc.yaml
kubectl logs pod-initc -c init-meservice
kubectl logs pod-initc -c init-dbservice
kubectl delete pod pod-initc
A Service selects pods to form a service
apiVersion: v1
kind: Service
metadata:
name: meservice
namespace: default
spec:
type: NodePort
selector:
app: myapp
release: stable
ports:
- name: http
port: 80
targetPort: 80
kubectl apply -f 3-svc-meservice.yaml
kubectl get svc
ipvsadm -Ln
iptables -t nat -nvL KUBE-NODEPORTS
kubectl logs pod-initc -c init-meservice
kubectl delete svc meservice
apiVersion: v1
kind: Service
metadata:
name: dbservice
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9377
kubectl apply -f 4-svc-dbservice.yaml
kubectl get svc
ipvsadm -Ln
iptables -t nat -nvL KUBE-NODEPORTS
kubectl logs pod-initc -c init-dbservice
kubectl logs pod-initc -c myapp-c
Main container status
readinessProbe checks whether the main container is ready to serve traffic
livenessProbe checks whether the main container is still alive
Main container lifecycle: start » liveness (readiness » stop)
readiness: whether the main container in the pod is ready
Readiness probe types: httpGet, exec, tcpSocket
readinessProbe: httpGet readiness probe
initialDelaySeconds: wait n seconds after the container starts before the first probe
periodSeconds: interval between probes
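As a rough model (ignoring probe timeouts and scheduling jitter), the start time of the n-th probe can be sketched as:

```python
def probe_time(n, initial_delay_seconds, period_seconds):
    """Approximate start time (in seconds) of the n-th probe, 1-indexed."""
    return initial_delay_seconds + (n - 1) * period_seconds

# With the values used below (initialDelaySeconds: 1, periodSeconds: 3),
# probes run at roughly t = 1, 4, 7, ... seconds after container start.
print(probe_time(1, 1, 3), probe_time(3, 1, 3))  # 1 7
```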
apiVersion: v1
kind: Pod
metadata:
name: mainc-readinessprobe-httpget-pod
namespace: default
spec:
containers:
- name: mainc-readinessprobe-httpget-container
image: wangyanglinux/nginx:v1.0
imagePullPolicy: IfNotPresent
readinessProbe:
httpGet:
port: 80
path: /index.html
initialDelaySeconds: 1
periodSeconds: 3
kubectl apply -f 5-pod-mainC-readinessProbe-httpGet.yaml
kubectl describe pod mainc-readinessprobe-httpget-pod
kubectl logs mainc-readinessprobe-httpget-pod -c mainc-readinessprobe-httpget-container
kubectl describe pod mainc-readinessprobe-httpget-pod
kubectl delete pod mainc-readinessprobe-httpget-pod
Main container lifecycle: start » liveness (readiness » stop)
liveness: whether the main container in the pod is alive
Liveness probe types: httpGet, exec, tcpSocket
apiVersion: v1
kind: Pod
metadata:
name: mainc-liveness-httpget-pod
namespace: default
spec:
containers:
- name: mainc-liveness-httpget-container
image: wangyanglinux/myapp:v1
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
livenessProbe:
httpGet:
port: http
path: /index.html
initialDelaySeconds: 3
periodSeconds: 10
kubectl apply -f 6-pod-mainC-livenessProbe-httpGet.yaml
kubectl describe pod mainc-liveness-httpget-pod
kubectl logs mainc-liveness-httpget-pod -c mainc-liveness-httpget-container
apiVersion: v1
kind: Pod
metadata:
name: mainc-liveness-tcpsocket-pod
namespace: default
spec:
containers:
- name: mainc-liveness-tcpsocket-container
image: wangyanglinux/myapp:v1
livenessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 5
periodSeconds: 3
timeoutSeconds: 1
kubectl apply -f 7-pod-mainC-livenessProbe-tcpSocket.yaml
# myapp listens on port 80, not 8080, so this tcpSocket probe keeps failing and the container is restarted
kubectl describe pod mainc-liveness-tcpsocket-pod
kubectl logs mainc-liveness-tcpsocket-pod -c mainc-liveness-tcpsocket-container
apiVersion: v1
kind: Pod
metadata:
name: mainc-liveness-exec-pod
namespace: default
spec:
containers:
- name: mainc-liveness-exec-container
image: busybox
imagePullPolicy: IfNotPresent
command: ["/bin/sh","-c","touch /tmp/live ; sleep 60; rm -rf /tmp/live; sleep 3600"]
livenessProbe:
exec:
command: ["test","-e","/tmp/live"]
initialDelaySeconds: 1
periodSeconds: 3
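A small simulation of this probe's timeline (a sketch, assuming the default failureThreshold of 3 and ignoring jitter): /tmp/live exists for the first 60 s, after which the exec probe starts failing, and the container is restarted after three consecutive failures.

```python
def first_restart_time(file_removed_at, initial_delay, period, failure_threshold=3):
    """Return the (model) time of the probe that triggers a restart."""
    t, failures = initial_delay, 0
    while True:
        if t >= file_removed_at:      # `test -e /tmp/live` fails once the file is gone
            failures += 1
            if failures == failure_threshold:
                return t
        else:
            failures = 0              # the consecutive-failure counter resets on success
        t += period

# initialDelaySeconds=1, periodSeconds=3, file removed at t=60
print(first_restart_time(60, 1, 3))  # 67
```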
kubectl apply -f 8-pod-mainC-livenessProbe-exec.yaml
# Enter the container and check whether /tmp/live exists
kubectl exec mainc-liveness-exec-pod -c mainc-liveness-exec-container -it -- /bin/sh
kubectl delete pod mainc-liveness-exec-pod
Actions that run after the main container starts or before it stops
lifecycle postStart: after the main container starts
lifecycle preStop: before the main container stops
apiVersion: v1
kind: Pod
metadata:
name: mainc-start-end-pod
namespace: default
spec:
containers:
- name: mainc-start-end-container
image: wangyanglinux/myapp:v1
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /root/xcrj-second"]
preStop:
exec:
command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /root/xcrj-second"]
kubectl apply -f 9-pod-mainC-start-end.yaml
# Enter the container and check the contents of /root/xcrj-second
kubectl exec mainc-start-end-pod -c mainc-start-end-container -it -- /bin/sh
kubectl delete pod mainc-start-end-pod
Replica control
Controls pod replicas
replicas: number of replicas
selector: selects which pods to control
template: pod template
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: frontend-rs
spec:
replicas: 3
selector:
matchLabels:
tier: frontend
template:
metadata:
labels:
tier: frontend
spec:
containers:
- name: php-redis
image: wangyanglinux/myapp:v1
env:
- name: GET_HOSTS_FROM
value: dns
ports:
- containerPort: 80
kubectl apply -f 10-ReplicaSet.yaml
kubectl get rs
# the rs owns pods
kubectl get pod
kubectl delete rs frontend-rs
Release
Rolling pod updates
Rolling update: rolls over by creating new pods (removes 25% of the old rs's pods and adds 25% of the new rs's pods per step)
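The 25% figures are the Deployment defaults for maxSurge and maxUnavailable; by the documented rounding rules, maxSurge rounds up and maxUnavailable rounds down. A quick check for replicas: 3:

```python
import math

def rolling_update_limits(replicas, max_surge_pct=25, max_unavailable_pct=25):
    """Default Deployment rolling-update limits: surge rounds up, unavailable rounds down."""
    max_surge = math.ceil(replicas * max_surge_pct / 100)
    max_unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return max_surge, max_unavailable

print(rolling_update_limits(3))   # (1, 0): at most 4 pods total, at least 3 available
print(rolling_update_limits(10))  # (3, 2)
```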
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: wangyanglinux/myapp:v1
ports:
- containerPort: 80
kubectl apply -f 11-Deployment.yaml
kubectl get deployment
# the deployment owns rs
kubectl get rs
# the rs owns pods
kubectl get pod
kubectl delete deployment nginx-deployment
# Scale up from 3 to 10
kubectl scale deployment nginx-deployment --replicas 10
# Autoscale
#kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
# Scale down from 10 to 3
kubectl scale deployment nginx-deployment --replicas 3
# Update the image (syntax: container=image)
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
# Edit and roll: saving changes triggers a rolling update
kubectl edit deployment/nginx-deployment
# List historical rs
kubectl get rs
# Check the rollout status: did the rollout succeed?
kubectl rollout status deployment/nginx-deployment
# Rollout history; shows revisions
kubectl rollout history deployment/nginx-deployment
# Roll back to the previous revision
kubectl rollout undo deployment/nginx-deployment
# Roll back to a specific revision
kubectl rollout undo deployment/nginx-deployment --to-revision=2
# Pause the rollout
kubectl rollout pause deployment/nginx-deployment
Support
Runs at least one pod replica (one per node)
When nodes are added to or removed from the k8s cluster, the DS adds or removes pods for them
Use cases: ELK log collection, Prometheus Node Exporter
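The scheduling rule can be sketched as one pod per eligible node (a sketch using hypothetical dict-based node labels; the real scheduler also honors taints and tolerations):

```python
def desired_daemonset_pods(nodes, node_selector=None):
    """One DaemonSet pod per node whose labels match the selector."""
    if not node_selector:
        return len(nodes)
    return sum(
        all(node.get(k) == v for k, v in node_selector.items())
        for node in nodes
    )

nodes = [{"kubernetes.io/os": "linux"}, {"kubernetes.io/os": "linux"}]
print(desired_daemonset_pods(nodes))                                  # 2
nodes.append({"kubernetes.io/os": "linux"})  # a new node joins the cluster
print(desired_daemonset_pods(nodes, {"kubernetes.io/os": "linux"}))   # 3
```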
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: deamonset-ex
labels:
app: daemonset
spec:
selector:
matchLabels:
name: deamonset-example
template:
metadata:
labels:
name: deamonset-example
spec:
containers:
- name: daemonset-example
image: wangyanglinux/myapp:v1
kubectl apply -f 12-DaemonSet.yaml
kubectl get ds
# the ds owns pods
kubectl get pod
kubectl delete ds deamonset-ex
Run batch jobs with pods
restartPolicy: Never or OnFailure
apiVersion: batch/v1
kind: Job
metadata:
name: job-pi
spec:
template:
metadata:
labels:
name: pod-pi-lable
spec:
containers:
- name: pi-c
image: perl:v1
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
kubectl apply -f 13-Job.yaml
kubectl get job
# the job owns pods
kubectl get pod
kubectl delete job job-pi
Cron-expression jobs
Run a job once at a point in time
Run jobs on a recurring schedule
schedule: cron-expression schedule
jobTemplate: the cronjob owns jobs
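The five cron fields in schedule can be unpacked as follows (a sketch; the "*/1 * * * *" used below means every minute):

```python
def parse_cron(expr):
    """Split a five-field cron expression into named fields."""
    minute, hour, day_of_month, month, day_of_week = expr.split()
    return {"minute": minute, "hour": hour, "day_of_month": day_of_month,
            "month": month, "day_of_week": day_of_week}

fields = parse_cron("*/1 * * * *")
print(fields["minute"])  # */1 -> fires every minute
```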
apiVersion: batch/v1
kind: CronJob
metadata:
name: cronjob-pi
spec:
schedule: "*/1 * * * *"
jobTemplate:
metadata:
spec:
template:
metadata:
labels:
name: pod-pi-label
spec:
containers:
- name: pi-c
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
kubectl apply -f 14-CronJob.yaml
kubectl get cj
# the cronjob owns jobs
kubectl get job
# the job owns pods
kubectl get pod
kubectl delete cj cronjob-pi
A Deployment owns ReplicaSets
A ReplicaSet owns Pods
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deploy
namespace: default
spec:
replicas: 3
selector:
matchLabels:
app: myapp
release: stabel
template:
metadata:
labels:
app: myapp
release: stabel
env: test
spec:
containers:
- name: myapp
image: wangyanglinux/myapp:v1
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
kubectl apply -f 15-svc-deployment.yaml
kubectl get deployment
kubectl get rs
kubectl get pod
ClusterIP: a virtual IP internal to the k8s cluster
type: ClusterIP, the default; allocates a virtual IP reachable only inside the cluster
selector: selects the pods that back the svc
apiVersion: v1
kind: Service
metadata:
name: myapp
namespace: default
spec:
type: ClusterIP
selector:
app: myapp
release: stabel
ports:
- name: http
port: 80
targetPort: 80
kubectl apply -f 16-svc-ClusterIP.yaml
# Get the IP of the ClusterIP-type svc
kubectl get svc
# Enter one of the containers
kubectl exec myapp-deploy-b777db8b4-rw28t -c myapp -it -- /bin/sh
# 10.107.72.70 is the IP returned by the command above; reachable from inside the cluster
wget "10.107.72.70:80"
The svc needs no ClusterIP and no kube-proxy management
clusterIP: "None" makes the service Headless: no cluster virtual IP is allocated
apiVersion: v1
kind: Service
metadata:
name: svc-headless
namespace: default
spec:
selector:
app: myapp
clusterIP: "None"
ports:
- port: 80
targetPort: 80
kubectl apply -f 17-svc-Headless-ClusterIP-None.yaml
# CLUSTER-IP shows None
kubectl get svc
# Get the IP of the kube-dns svc
kubectl get svc -n kube-system
# Look up the headless svc DNS record
# dig -t A <svcName>.<namespace>.<cluster domain>. @<kube-dns svc IP>
dig -t A svc-headless.default.svc.cluster.local. @10.96.0.10
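The FQDN queried above follows the cluster DNS naming scheme; a small helper to build it (assuming the default cluster.local domain):

```python
def service_fqdn(svc, namespace="default", cluster_domain="cluster.local"):
    """Build the DNS name a Service gets inside the cluster."""
    return f"{svc}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("svc-headless"))  # svc-headless.default.svc.cluster.local
```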
On top of ClusterIP, also exposes a NodePort on each node
kind: Service
apiVersion: v1
metadata:
name: svc-nodeport
namespace: default
spec:
type: NodePort
selector:
app: myapp
ports:
- name: http
targetPort: 80 # 容器expose的端口
port: 30000 # 内部访问端口
nodePort: 30001 # 外部访问端口
kubectl apply -f 18-svc-NodePort.yaml
# Get the ClusterIP and port
kubectl get svc
# Enter one of the containers
kubectl exec myapp-deploy-b777db8b4-rw28t -c myapp -it -- /bin/sh
# Test access from inside the cluster
wget "ClusterIP:30000"
# Test external access via a host IP
wget "HostIP:30001"
# kube-proxy manages the svc's pods via iptables NAT rules
iptables -t nat -nvL KUBE-NODEPORTS
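To summarize the three port fields, the access paths tested above can be sketched as (the IPs are placeholders):

```python
def access_urls(cluster_ip, node_ip, svc_port, node_port):
    """Where a NodePort Service is reachable from inside and outside the cluster."""
    return {
        "in_cluster": f"http://{cluster_ip}:{svc_port}",  # ClusterIP + port
        "external": f"http://{node_ip}:{node_port}",      # any node IP + nodePort
    }

urls = access_urls("10.107.72.70", "192.168.96.1", 30000, 30001)
print(urls["in_cluster"])  # http://10.107.72.70:30000
print(urls["external"])    # http://192.168.96.1:30001
```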
The svc's DNS name redirects to an external domain
Brings an external service into the cluster
apiVersion: v1
kind: Service
metadata:
name: svc-externalname
namespace: default
spec:
type: ExternalName
externalName: www.baidu.com
kubectl apply -f 19-svc-ExternalName.yaml
kubectl get svc
# Test
# Enter one of the containers
kubectl exec myapp-deploy-b777db8b4-rw28t -c myapp -it -- /bin/sh
# [svcName.namespace.domain]
# svc-externalname.default.svc.cluster.local
# ping the external service
ping www.baidu.com
# ping the svc name
ping svc-externalname
# ping the svc FQDN
ping svc-externalname.default.svc.cluster.local
Installation
# Download first
# URL: see above
# Then apply
kubectl apply -f ingress-nginx.yaml
# Verify
kubectl get svc -n ingress-nginx
kind: Deployment
apiVersion: apps/v1
metadata:
name: deployment-nginx
namespace: default
spec:
replicas: 2
selector:
matchLabels:
name: nginx
template:
metadata:
labels:
name: nginx
spec:
containers:
- name: c-nginx
image: wangyanglinux/myapp:v1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
kubectl apply -f 20-ingress-Deployment.yaml
kubectl get deployment
kind: Service
apiVersion: v1
metadata:
name: svc-nginx
namespace: default
spec:
type: ClusterIP
selector:
name: nginx
ports:
- protocol: TCP
port: 80 #k8s集群内部端口
targetPort: 80 #expose端口
kubectl apply -f 21-ingress-svc.yaml
kubectl get svc
Under the hood this is just nginx
pathType: Exact is the exact-match path type
service: name: svc-nginx — note this must match the name in 21-ingress-svc.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-test-ingress
spec:
rules:
- host: www1.xcrj.com
http:
paths:
- path: /
pathType: Exact
backend:
service:
name: svc-nginx
port:
number: 80
kubectl apply -f 22-ingress-nginx-1.yaml
kubectl get ingress
# Edit the hosts file
# C:\Windows\System32\drivers\etc\hosts
# <IP of any k8s worker node> www1.xcrj.com
192.168.96.1 www1.xcrj.com
# Open www1.xcrj.com in a browser
A trusted HTTPS certificate from a cloud provider can be configured instead
# Create a self-signed SSL certificate; generates tls.key and tls.crt in the current directory
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=svcnginx/O=svcnginx"
# Create a TLS secret; tls-secret is its name
# Syntax: kubectl create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
tls: - hosts: - www3.xcrj.com must match rules - host: www3.xcrj.com
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-https
spec:
tls:
- hosts:
- www3.xcrj.com
secretName: tls-secret
rules:
- host: www3.xcrj.com
http:
paths:
- path: /
pathType: Exact
backend:
service:
name: svc-nginx
port:
number: 80
kubectl apply -f 23-ingress-https.yaml
# Edit the hosts file
# C:\Windows\System32\drivers\etc\hosts
# <IP of any k8s worker node> www3.xcrj.com
192.168.96.1 www3.xcrj.com
# Open https://www3.xcrj.com in a browser
Install httpd and use htpasswd to generate an auth file storing the created user and the hashed password
# auth is the file name, xcrjuser the username; the command prompts for a password
htpasswd -c auth xcrjuser
kubectl create secret generic basic-auth --from-file=auth
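What the browser then sends is a standard Basic-Auth header: base64 of user:password. A quick illustration (xcrjuser/secret is a made-up pair):

```python
import base64

def basic_auth_header(user, password):
    """Build the Authorization header value sent for HTTP basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

header = basic_auth_header("xcrjuser", "secret")
print(header)
# Decoding the token recovers user:password
print(base64.b64decode(header.split()[1]).decode())  # xcrjuser:secret
```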
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-basic-auth
annotations:
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - xcrjuser'
spec:
rules:
- host: auth.xcrj.com
http:
paths:
- path: /
pathType: Exact
backend:
service:
name: svc-nginx
port:
number: 80
kubectl apply -f 24-ingress-basic-auth.yaml
# Edit the hosts file
# C:\Windows\System32\drivers\etc\hosts
# <IP of any k8s worker node> auth.xcrj.com
192.168.96.1 auth.xcrj.com
# Open http://auth.xcrj.com in a browser
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-nginx-rewrite
annotations:
nginx.ingress.kubernetes.io/rewrite-target: http://auth.xcrj.com
spec:
rules:
- host: rewrite.xcrj.com
http:
paths:
- path: /
pathType: Exact
backend:
service:
name: svc-nginx
port:
number: 80
kubectl apply -f 25-ingress-rewrite.yaml
# Edit the hosts file
# C:\Windows\System32\drivers\etc\hosts
# <IP of any k8s worker node> rewrite.xcrj.com
192.168.96.1 rewrite.xcrj.com
# Open http://rewrite.xcrj.com in a browser
Create the following files
game.properties
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
ui.properties
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
Run
# Create from a directory: each file name becomes a key, its content the value
kubectl create configmap game-config --from-file=./configmap
kubectl get cm game-config -o yaml
kubectl delete cm game-config
# Create from a file
kubectl create configmap game-config-2 --from-file=./configmap/game.properties
kubectl get cm game-config-2 -o yaml
kubectl delete cm game-config-2
# Create from multiple files
kubectl create configmap game-config-3 --from-file=./configmap/game.properties --from-file=./configmap/ui.properties
kubectl get cm game-config-3 -o yaml
kubectl delete cm game-config-3
# --from-literal=key=value
kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
kubectl get cm special-config -o yaml
kubectl delete cm special-config
Containers in a pod need environment variables
kind: ConfigMap
26-ConfigMap-environment.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
---
apiVersion: v1
kind: ConfigMap
metadata:
name: env-config
namespace: default
data:
log_level: INFO
---
apiVersion: v1
kind: Pod
metadata:
name: cm-env-pod
spec:
containers:
- name: test-container
image: wangyanglinux/myapp:v1
command: [ "/bin/sh", "-c", "env" ]
envFrom:
- configMapRef:
name: env-config
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: special.how
- name: SPECIAL_TYPE_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: special.type
restartPolicy: Never
kubectl apply -f 26-ConfigMap-environment.yaml
kubectl get cm
kubectl get cm special-config -o yaml
kubectl get cm env-config -o yaml
kubectl get pod
kubectl logs -f cm-env-pod
kubectl describe pod cm-env-pod
Use $(ENV_NAME) inside command:
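The $(VAR) expansion k8s performs on command/args can be mimicked like this (a sketch; unknown variables are left untouched, as k8s does):

```python
import re

def expand(command, env):
    """Replace $(NAME) references with values from env, k8s-style."""
    return re.sub(r"\$\(([^)]+)\)",
                  lambda m: env.get(m.group(1), m.group(0)),
                  command)

env = {"SPECIAL_LEVEL_KEY": "very", "SPECIAL_TYPE_KEY": "charm"}
print(expand("echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)", env))  # echo very charm
```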
27-ConfigMap-command.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
---
apiVersion: v1
kind: Pod
metadata:
name: cm-cmd-pod
spec:
containers:
- name: test-container
image: wangyanglinux/myapp:v1
command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: special.how
- name: SPECIAL_TYPE_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: special.type
restartPolicy: Never
kubectl apply -f 27-ConfigMap-command.yaml
# Check the printed output: "very charm"
kubectl logs -f cm-cmd-pod
volumes: defines volumes; note volumes is a sibling of containers
volumeMounts: mounts a volume into a container
configMap » volumes » containers
the container mounts a volume » the pod defines the volume » the volume sources the ConfigMap
28-ConfigMap-volume.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
---
apiVersion: v1
kind: Pod
metadata:
name: cm-volume-pod
spec:
volumes:
- name: config-volume
configMap:
name: special-config
containers:
- name: test-container
image: wangyanglinux/myapp:v1
command: [ "/bin/sh", "-c", "sleep 600s" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
restartPolicy: Never
kubectl apply -f 28-ConfigMap-volume.yaml
kubectl describe pod cm-volume-pod
kubectl delete pod cm-volume-pod
The configmap_manager in kubelet manages the ConfigMaps used by pods
After a ConfigMap is updated, volumes mounted via volumeMounts refresh in roughly 10 s
After a ConfigMap is updated, env values do not refresh
After a ConfigMap is updated, the deployment does not roll automatically
29-ConfigMap-reload.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: log-config
namespace: default
data:
log_level: INFO
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
spec:
replicas: 1
selector:
matchLabels:
run: my-nginx
template:
metadata:
labels:
run: my-nginx
spec:
volumes:
- name: config-volume
configMap:
name: log-config
containers:
- name: my-nginx
image: wangyanglinux/myapp:v1
ports:
- containerPort: 80
volumeMounts:
- name: config-volume
mountPath: /etc/config
kubectl apply -f 29-ConfigMap-hotupdate.yaml
kubectl get pod
# All containers in the pod share the pause container's net and volumes
# Check the log level under the pod's mount path; prints INFO
kubectl exec my-nginx-864575dd4b-fp9fd -- cat /etc/config/log_level
# Update the configmap: change INFO to DEBUG
# kubectl edit cm log-config did not take effect
kubectl apply -f 29-ConfigMap-hotupdate.yaml
# Wait about 10 s, then run; prints DEBUG
kubectl exec my-nginx-864575dd4b-fp9fd -- cat /etc/config/log_level
# Deployment rolling update
# Option 1: edit 29-ConfigMap-hotupdate.yaml
# add metadata: annotations: version/config: "20221101"
kubectl apply -f 29-ConfigMap-hotupdate.yaml
# Check the rollout status: did the rollout succeed?
kubectl rollout status deployment/my-nginx
# Rollout history; shows revisions
kubectl rollout history deployment/my-nginx
# Or edit in place: after changing the configmap, bump version/config to trigger a rollout
kubectl edit deployment/my-nginx
type: Opaque
data is a map of key/value pairs; each value must be base64-encoded
echo -n "username" | base64
dXNlcm5hbWU=
echo -n "pwdpwdpwdpwd" | base64
cHdkcHdkcHdkcHdk
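The shell commands above can be reproduced in code; decoding shows that base64 is an encoding, not encryption:

```python
import base64

# Same values as the echo | base64 commands above
assert base64.b64encode(b"username").decode() == "dXNlcm5hbWU="
assert base64.b64encode(b"pwdpwdpwdpwd").decode() == "cHdkcHdkcHdkcHdk"

# Anyone with the Secret can decode it back
print(base64.b64decode("dXNlcm5hbWU=").decode())  # username
```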
30-Secret-Opaque.yaml
apiVersion: v1
kind: Secret
metadata:
name: secret-opaque
type: Opaque
data:
meuname: dXNlcm5hbWU=
mepwd: cHdkcHdkcHdkcHdk
kubectl apply -f 30-Secret-Opaque.yaml
kubectl get secret secret-opaque -o yaml
Use an Opaque Secret in env
referenced via secretKeyRef under env
31-Secret-Opaque-env.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: secret-deployment
spec:
replicas: 2
selector:
matchLabels:
app: pod-deployment
template:
metadata:
labels:
app: pod-deployment
spec:
containers:
- name: cc-1
image: wangyanglinux/myapp:v1
ports:
- containerPort: 80
env:
- name: TEST_USER
valueFrom:
secretKeyRef:
name: secret-opaque
key: meuname
- name: TEST_PASSWORD
valueFrom:
secretKeyRef:
name: secret-opaque
key: mepwd
kubectl apply -f 31-Secret-Opaque-env.yaml
Use an Opaque Secret in a volume
referenced via secret under volumes
32-Seceret-Opaque-volume.yaml
apiVersion: v1
kind: Pod
metadata:
name: secret-opaque-v
labels:
name: secret-test
spec:
volumes:
- name: secret-v
secret:
secretName: secret-opaque
containers:
- name: db
image: wangyanglinux/myapp:v1
volumeMounts:
- name: secret-v
mountPath: /etc/secret
readOnly: true
kubectl apply -f 32-Seceret-Opaque-volume.yaml
Private image registry: the secret needed to pull images
type: docker-registry
the docker-registry Secret is referenced under imagePullSecrets
# Create a docker-registry secret
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USERNAME --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
# Inspect it
kubectl get secret myregistrykey -o yaml
33-Secret-docker-registry-imagePullSecrets.yaml
apiVersion: v1
kind: Pod
metadata:
name: imagepullsecret-pod
spec:
containers:
- name: k8sapp
image: 192.168.2.60:9999/k8s/myapp:v1
imagePullSecrets:
- name: myregistrykey
kubectl apply -f .\33-Secret-docker-registry-imagePullSecrets.yaml
kubectl logs imagepullsecret-pod
Service account
Private image registry: pulling images via a service account
Steps:
Create a docker-registry secret
# Create a docker-registry secret
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USERNAME --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
# Inspect it
kubectl get secret myregistrykey -o yaml
Create a ServiceAccount that uses the docker-registry secret
apiVersion: v1
kind: ServiceAccount
metadata:
name: mysa
imagePullSecrets:
- name: myregistrykey
Use the created ServiceAccount in a pod
apiVersion: v1
kind: Pod
metadata:
name: sapod
namespace: default
labels:
app: xcrjpod
spec:
serviceAccount: mysa
containers:
- name: xcrjapp-3
image: wangyanglinux/nginx:v1.0
kubectl apply -f 34-ServiceAccount.yaml
kubectl get sa mysa -o yaml
kubectl apply -f 35-ServiceAccount-pod.yaml
emptyDir: an empty directory temporarily mounted from the k8s node (host)
emptyDir shares its lifecycle with the pod
For temporary data that need not persist, and for sharing data between containers in a pod
k8s automatically allocates the host directory on the node
mountPath under containers » volumeMounts is the container directory
emptyDir goes under volumes
volumes is a sibling of containers
36-Volume-emptyDir.yaml
apiVersion: v1
kind: Pod
metadata:
name: volume-pod-emptydir
spec:
volumes:
- name: cache-volume
emptyDir: {}
containers:
- name: myapp-c
image: wangyanglinux/myapp:v1
volumeMounts:
- name: cache-volume
mountPath: /cache
- name: busybox-c
image: busybox
imagePullPolicy: IfNotPresent
command: ["/bin/sh","-c","sleep 6000s"]
volumeMounts:
- name: cache-volume
mountPath: /cache2
kubectl apply -f 36-Volume-emptyDir.yaml
# Enter the myapp-c container and create xcrj.txt under /cache
kubectl exec volume-pod-emptydir -c myapp-c -it -- /bin/sh
# Enter the busybox-c container; xcrj.txt shows up under /cache2
kubectl exec volume-pod-emptydir -c busybox-c -it -- /bin/sh
hostPath: persists data to a directory on the k8s node (host)
the node (host) directory is written under hostPath
37-Volume-hostPath.yaml
apiVersion: v1
kind: Pod
metadata:
name: volume-hostpath-pod
spec:
volumes:
- name: hp-volume
hostPath:
path: data
# type: Directory # mount failed: hostPath.path must be an absolute path that already exists on the node
containers:
- name: volume-hostpath-container
image: wangyanglinux/myapp:v1
volumeMounts:
- mountPath: /dir-data
name: hp-volume
kubectl apply -f 37-Volume-hostPath.yaml
# Create a data directory on the host and an xcrj.txt file inside it
# Enter the container and check whether /dir-data contains xcrj.txt
kubectl exec volume-hostpath-pod -c volume-hostpath-container -it -- /bin/sh
Storage backing a PV
Install the NFS server
apt install nfs-kernel-server nfs-common rpcbind
mkdir /root/k8s/nfsdatame
chmod 666 /root/k8s/nfsdatame
# /etc/exports should contain the export line below
cat /etc/exports
/root/k8s/nfsdatame *(rw,no_root_squash,no_all_squash,sync)
systemctl start rpcbind
systemctl start nfs-server
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfspvme
spec:
storageClassName: nfs
nfs:
server: 172.26.112.1
path: /root/k8s/nfsdatame
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
kubectl apply -f 38-PV-1.yaml
# Get basic PV info
kubectl get pv
PV access modes
PV reclaim policies: only NFS and HostPath support Recycle
PV phases:
A PV (PersistentVolume) has a lifecycle independent of any pod
Type, path, capacity, access control, reclaim policy
server: the address of the NFS server
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfspv
spec:
storageClassName: nfs
nfs:
server: 172.26.112.1
path: /root/k8s/nfsdata
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfspv1
spec:
storageClassName: nfs
nfs:
server: 172.26.112.1
path: /root/k8s/nfsdata1
capacity:
storage: 3Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfspv2
spec:
storageClassName: nfs
nfs:
server: 172.26.112.1
path: /root/k8s/nfsdata2
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfspv3
spec:
storageClassName: low
nfs:
server: 172.26.112.1
path: /root/k8s/nfsdata3
capacity:
storage: 2Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
kubectl apply -f 39-PV-2.yaml
# View basic PV info
kubectl get pv
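A PVC binds to a PV whose storageClassName and access mode match and whose capacity is at least the request; among candidates the smallest sufficient PV is preferred. A sketch over the four PVs above (simplified; the real controller also considers selectors and volume modes):

```python
def bind_pvc(pvs, storage_class, access_mode, request_gi):
    """Pick the smallest matching PV for a claim, or None."""
    candidates = [
        pv for pv in pvs
        if pv["class"] == storage_class
        and access_mode in pv["modes"]
        and pv["gi"] >= request_gi
    ]
    return min(candidates, key=lambda pv: pv["gi"])["name"] if candidates else None

pvs = [
    {"name": "nfspv",  "class": "nfs", "modes": ["ReadWriteOnce"], "gi": 2},
    {"name": "nfspv1", "class": "nfs", "modes": ["ReadWriteOnce"], "gi": 3},
    {"name": "nfspv2", "class": "nfs", "modes": ["ReadWriteOnce"], "gi": 10},
    {"name": "nfspv3", "class": "low", "modes": ["ReadWriteMany"], "gi": 2},
]
print(bind_pvc(pvs, "nfs", "ReadWriteOnce", 1))  # nfspv (2Gi, smallest fit)
print(bind_pvc(pvs, "low", "ReadWriteOnce", 1))  # None (nfspv3 is ReadWriteMany only)
```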
StatefulSet: controller for stateful services
Steps:
apiVersion: v1
kind: Service
metadata:
name: svcnginx
labels:
app: nginx
spec:
selector:
app: nginx
clusterIP: None
ports:
- name: web
port: 80
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfspvme
spec:
storageClassName: nfs
nfs:
server: 172.28.112.1
path: /D:/workspace/kubernates/nfsdatame
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ssweb
spec:
serviceName: svcnginx
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: connginx
image: wangyanglinux/myapp:v1
ports:
- name: web
containerPort: 80
volumeMounts:
- name: pvcme
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: pvcme
spec:
storageClassName: nfs
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
kubectl apply -f 40-StatefulSet.yaml
kubectl get statefulset
# Check the pod names: [StatefulSet.name]-[ordinal starting at 0]
kubectl get pod
kubectl describe statefulset ssweb
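StatefulSet pods get stable ordinal names and, through the headless service, stable per-pod DNS names. A sketch for the ssweb/svcnginx example above (assuming the default cluster.local domain):

```python
def statefulset_pods(name, replicas, headless_svc, namespace="default"):
    """Stable pod names and per-pod DNS names of a StatefulSet."""
    return [
        (f"{name}-{i}", f"{name}-{i}.{headless_svc}.{namespace}.svc.cluster.local")
        for i in range(replicas)
    ]

for pod, dns in statefulset_pods("ssweb", 3, "svcnginx"):
    print(pod, dns)
# ssweb-0 ssweb-0.svcnginx.default.svc.cluster.local ... up to ssweb-2
```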
Controllers:
Stateful services:
PVC:
Note