• k8s / Resource Manifests


    Summary

    pod
    rs (ReplicaSet)
    deployment
    job
    cronjob
    svc
    ingress
    cm (ConfigMap)
    secret
    sa (ServiceAccount)
    volume
    PV (PersistentVolume)
    StatefulSet

    1-pod.yaml

    A pod composed of multiple containers

    • containers: a pod can contain multiple containers
    • metadata: object metadata
    • kind: Pod, capitalized
    apiVersion: v1
    kind: Pod
    metadata: 
     name: xcrjpod
     namespace: default
     labels: 
      app: xcrjpod
    spec: 
     containers: 
     - name: xcrjapp-1
       image: wangyanglinux/nginx:v1.0
     - name: xcrjbusybox-1
       image: busybox
       command: 
       - "/bin/bash"
       - "-c"
       - "sleep 3600"
    
    kubectl apply -f 1-pod.yaml
    kubectl logs xcrjpod -c xcrjapp-1
    kubectl logs xcrjpod -c xcrjbusybox-1
    

    2-pod-initc.yaml

    • initContainers: wait for prerequisite services to come up and do setup work
    • initContainers: run sequentially; each must finish before the next starts
    apiVersion: v1
    kind: Pod
    metadata: 
     name: pod-initc
     labels: 
      app: myapp
    spec:
     initContainers: 
     - name: init-meservice
       image: busybox
       command: ['sh', '-c', 'until nslookup meservice; do echo waiting for meservice; sleep 2; done;']
     - name: init-dbservice
       image: busybox
       command: ['sh', '-c', 'until nslookup dbservice; do echo waiting for dbservice; sleep 2; done;']
     containers: 
     - name: myapp-c
       image: busybox
       command: ['sh','-c','echo The app is running! && sleep 3600']
    
    kubectl apply -f 2-pod-initc.yaml
    kubectl logs pod-initc -c init-meservice
    kubectl logs pod-initc -c init-dbservice
    
    kubectl delete pod pod-initc
    

    3-svc-meservice.yaml

    Selects pods to form a service

    • kind: Service, capitalized
    • metadata.name: the svc name
    • type: the svc type
    • selector: selects which pods back the svc
    • ports: port mappings
    apiVersion: v1
    kind: Service
    metadata: 
     name: meservice
     namespace: default
    spec:
     type: NodePort
     selector: 
      app: myapp
      release: stable
     ports: 
     - name: http
       port: 80
       targetPort: 80
    
    kubectl apply -f 3-svc-meservice.yaml
    kubectl get svc
    ipvsadm -Ln
    iptables -t nat -nvL KUBE-NODEPORTS
    kubectl logs pod-initc -c init-meservice
    kubectl delete svc meservice
    

    4-svc-dbservice.yaml

    • ports[].protocol: multiple protocols are supported
    apiVersion: v1
    kind: Service
    metadata: 
     name: dbservice
    spec:
     ports: 
     - protocol: TCP
       port: 80
       targetPort: 9377
    
    kubectl apply -f 4-svc-dbservice.yaml
    kubectl get svc
    ipvsadm -Ln
    iptables -t nat -nvL KUBE-NODEPORTS
    kubectl logs pod-initc -c init-dbservice
    kubectl logs pod-initc -c myapp-c
    

    5-pod-mainC-readinessProbe-httpGet.yaml

    Main container status
    readinessProbe checks whether the main container is ready, i.e. able to serve traffic
    livenessProbe checks whether the main container is still alive
    Main container lifecycle: start » liveness (readiness) » stop
    readiness: whether the pod's main container is ready
    Readiness probe types: httpGet, exec, tcpSocket
    readinessProbe: httpGet readiness probe
    initialDelaySeconds: wait n seconds after container start before the first probe
    periodSeconds: probe interval
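
    Besides initialDelaySeconds and periodSeconds, probes accept a few more tuning fields; a minimal sketch (the values here are illustrative, not from the example below):

```yaml
readinessProbe:
  httpGet:
    port: 80
    path: /index.html
  initialDelaySeconds: 1   # wait 1s after container start before the first probe
  periodSeconds: 3         # probe every 3s
  timeoutSeconds: 1        # each probe attempt times out after 1s
  successThreshold: 1      # consecutive successes required to count as ready
  failureThreshold: 3      # consecutive failures before marking not ready
```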

    apiVersion: v1
    kind: Pod
    metadata:
     name: mainc-readinessprobe-httpget-pod
     namespace: default
    spec:
     containers:
     - name: mainc-readinessprobe-httpget-container
       image: wangyanglinux/nginx:v1.0
       imagePullPolicy: IfNotPresent
       readinessProbe: 
        httpGet:
         port: 80
         path: /index.html
        initialDelaySeconds: 1
        periodSeconds: 3   
    
    kubectl apply -f 5-pod-mainC-readinessProbe-httpGet.yaml
    kubectl describe pod mainc-readinessprobe-httpget-pod
    kubectl logs mainc-readinessprobe-httpget-pod -c mainc-readinessprobe-httpget-container
    kubectl describe pod mainc-readinessprobe-httpget-pod
    kubectl delete pod mainc-readinessprobe-httpget-pod
    

    6-pod-mainC-livenessProbe-httpGet.yaml

    Main container lifecycle: start » liveness (readiness) » stop
    liveness: whether the pod's main container is alive
    Liveness probe types: httpGet, exec, tcpSocket

    apiVersion: v1
    kind: Pod
    metadata:
     name: mainc-liveness-httpget-pod
     namespace: default
    spec:
     containers:
     - name: mainc-liveness-httpget-container
       image: wangyanglinux/myapp:v1
       imagePullPolicy: IfNotPresent
       ports: 
       - name: http
         containerPort: 80
       livenessProbe: 
        httpGet:
          port: http
          path: /index.html
        initialDelaySeconds: 3
        periodSeconds: 10
    
    kubectl apply -f 6-pod-mainC-livenessProbe-httpGet.yaml
    kubectl describe pod mainc-liveness-httpget-pod
    kubectl logs mainc-liveness-httpget-pod -c mainc-liveness-httpget-container
    

    7-pod-mainC-livenessProbe-tcpSocket.yaml

    apiVersion: v1
    kind: Pod
    metadata:
     name: mainc-liveness-tcpsocket-pod
     namespace: default
    spec:
     containers:
     - name: mainc-liveness-tcpsocket-container
       image: wangyanglinux/myapp:v1
       livenessProbe: 
        tcpSocket: 
         port: 8080
        initialDelaySeconds: 5
        periodSeconds: 3
        timeoutSeconds: 1
    
    kubectl apply -f 7-pod-mainC-livenessProbe-tcpSocket.yaml
    kubectl describe pod mainc-liveness-tcpsocket-pod
    kubectl logs mainc-liveness-tcpsocket-pod -c mainc-liveness-tcpsocket-container
    

    8-pod-mainC-livenessProbe-exec.yaml

    apiVersion: v1
    kind: Pod
    metadata:
     name: mainc-liveness-exec-pod
     namespace: default
    spec:
     containers:
     - name: mainc-liveness-exec-container
       image: busybox
       imagePullPolicy: IfNotPresent
       command: ["/bin/sh","-c","touch /tmp/live ; sleep 60; rm -rf /tmp/live; sleep 3600"]
       livenessProbe: 
        exec:
         command: ["test","-e","/tmp/live"]
        initialDelaySeconds: 1
        periodSeconds: 3
    
    kubectl apply -f 8-pod-mainC-livenessProbe-exec.yaml
    # Enter the container and check whether /tmp/live exists
    kubectl exec mainc-liveness-exec-pod -c mainc-liveness-exec-container -it -- /bin/sh
    kubectl delete pod mainc-liveness-exec-pod
    

    9-pod-mainC-start-end.yaml

    Actions run right after the main container starts or right before it stops
    lifecycle postStart: after the main container starts
    lifecycle preStop: before the main container stops

    apiVersion: v1
    kind: Pod
    metadata:
     name: mainc-start-end-pod
     namespace: default
    spec:
     containers:
     - name: mainc-start-end-container
       image: wangyanglinux/myapp:v1
       lifecycle: 
        postStart: 
         exec: 
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /root/xcrj-second"]
        preStop: 
         exec: 
           command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /root/xcrj-second"]
    
    kubectl apply -f 9-pod-mainC-start-end.yaml
    # Enter the container and inspect the contents of /root/xcrj-second
    kubectl exec mainc-start-end-pod -c mainc-start-end-container -it -- /bin/sh
    kubectl delete pod mainc-start-end-pod
    

    10-ReplicaSet.yaml

    Replica control
    Controls pod replicas
    replicas: number of replicas
    selector: selects which pods to control
    template: the pod template

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
     name: frontend-rs
    spec: 
     replicas: 3
     selector: 
      matchLabels: 
       tier: frontend
     template: 
      metadata: 
       labels: 
        tier: frontend
      spec: 
       containers: 
       - name: php-redis
         image: wangyanglinux/myapp:v1
         env: 
         - name: GET_HOSTS_FROM
           value: dns
         ports: 
         - containerPort: 80
    
    kubectl apply -f 10-ReplicaSet.yaml
    kubectl get rs
    # the rs owns pods
    kubectl get pod
    kubectl delete rs frontend-rs
    

    11-Deployment.yaml

    Releases
    Rolling updates of pods
    Rolling update: rolls forward by creating new pods (removes 25% of pods from the old rs while adding 25% to the new rs)
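
    The 25% figures come from the default rolling-update strategy; it can also be set explicitly under spec.strategy (the values below are the Deployment defaults):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # at most 25% of desired pods may be unavailable during the update
      maxSurge: 25%        # at most 25% extra pods may be created above the desired count
```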

    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: nginx-deployment
    spec:
     replicas: 3
     selector:
      matchLabels:
       app: nginx
     template:
      metadata:
       labels:
        app: nginx
      spec:
       containers:
        - name: nginx
          image: wangyanglinux/myapp:v1
          ports:
           - containerPort: 80
    
    kubectl apply -f 11-Deployment.yaml
    kubectl get deployment
    # the deployment owns ReplicaSets
    kubectl get rs
    # each rs owns pods
    kubectl get pod
    kubectl delete deployment nginx-deployment
    
    # Scale up from 3 to 10
    kubectl scale deployment nginx-deployment --replicas 10
    # Autoscale
    #kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
    # Scale down from 10 to 3
    kubectl scale deployment nginx-deployment --replicas 3
    # Roll forward: update the image; syntax is container=image
    kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
    # Edit to roll: editing the spec triggers a rolling update
    kubectl edit deployment/nginx-deployment
    # List historical rs
    kubectl get rs

    # Check rollout status to verify the rollout succeeded
    kubectl rollout status deployment/nginx-deployment
    # Rollout history, showing each revision
    kubectl rollout history deployment/nginx-deployment
    # Roll back to the previous revision
    kubectl rollout undo deployment/nginx-deployment
    # Roll back to a specific revision
    kubectl rollout undo deployment/nginx-deployment --to-revision=2
    # Pause the rollout
    kubectl rollout pause deployment/nginx-deployment
    

    12-DaemonSet.yaml

    Daemon support
    Ensures each node runs one pod replica
    When a node is added to or removed from the cluster, the DaemonSet adds or removes a pod for it
    Typical uses: ELK log collection, Prometheus Node Exporter
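
    A sketch: to let the DaemonSet pod also run on control-plane nodes (which are usually tainted), add a toleration to the pod template. The taint key below assumes the common control-plane taint; older clusters use node-role.kubernetes.io/master instead:

```yaml
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane  # assumed taint key; check with kubectl describe node
        operator: Exists
        effect: NoSchedule
```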

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
     name: deamonset-ex
     labels:
      app: daemonset
    spec:
     selector:
      matchLabels:
       name: deamonset-example
     template:
      metadata:
       labels:
        name: deamonset-example
      spec:
       containers:
        - name: daemonset-example
          image: wangyanglinux/myapp:v1
    
    kubectl apply -f 12-DaemonSet.yaml
    kubectl get ds
    # the ds owns pods
    kubectl get pod
    kubectl delete ds deamonset-ex
    

    13-Job.yaml

    Runs a batch task to completion using a pod
    restartPolicy: must be Never or OnFailure
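
    A Job can also run its pod several times, in parallel if desired; a minimal sketch of the common tuning fields (the values here are illustrative):

```yaml
spec:
  completions: 5     # the Job is done after 5 successful pod runs
  parallelism: 2     # run at most 2 pods at a time
  backoffLimit: 4    # give up after 4 failed retries
```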

    apiVersion: batch/v1
    kind: Job
    metadata:
     name: job-pi
    spec:
     template:
      metadata:
       labels: 
        name: pod-pi-lable
      spec:
       containers:
        - name: pi-c
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
       restartPolicy: Never
    
    kubectl apply -f 13-Job.yaml
    kubectl get job
    # the job owns pods
    kubectl get pod
    kubectl delete job job-pi
    

    14-CronJob.yaml

    A job on a cron schedule
    Schedules a job to run once at a point in time
    Or schedules a job to run periodically
    schedule: a cron expression
    jobTemplate: the cronjob owns jobs created from this template
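
    As a reference for the schedule field, the five cron fields are minute, hour, day-of-month, month, and day-of-week:

```yaml
# field order: minute (0-59)  hour (0-23)  day-of-month (1-31)  month (1-12)  day-of-week (0-6, Sunday=0)
schedule: "*/1 * * * *"   # every minute
# schedule: "0 3 * * 1"   # at 03:00 every Monday
```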

    apiVersion: batch/v1
    kind: CronJob
    metadata:
     name: cronjob-pi
    spec:
     schedule: "*/1 * * * *"
     jobTemplate:
      metadata:
      spec:
       template:
        metadata:
         labels: 
          name: pod-pi-label
        spec:
         containers:
          - name: pi-c
            image: busybox
            args:
             - /bin/sh
             - -c
             - date; echo Hello from the Kubernetes cluster
         restartPolicy: OnFailure
    
    kubectl apply -f 14-CronJob.yaml
    kubectl get cj
    # the cronjob owns jobs
    kubectl get job
    # each job owns pods
    kubectl get pod
    kubectl delete cj cronjob-pi
    

    15-svc-deployment.yaml

    A Deployment owns ReplicaSets
    A ReplicaSet owns Pods

    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: myapp-deploy
     namespace: default
    spec:
     replicas: 3
     selector:
      matchLabels:
       app: myapp
       release: stabel
     template:
      metadata:
       labels:
        app: myapp
        release: stabel
        env: test
      spec:
       containers:
       - name: myapp
         image: wangyanglinux/myapp:v1
         imagePullPolicy: IfNotPresent
         ports:
         - name: http
           containerPort: 80
    
    kubectl apply -f 15-svc-deployment.yaml
    kubectl get deployment
    kubectl get rs
    kubectl get pod
    

    16-svc-ClusterIP.yaml

    ClusterIP: a virtual IP internal to the k8s cluster
    type: ClusterIP is the default; it auto-assigns a virtual IP reachable only from inside the cluster
    selector: selects pods to back the svc

    apiVersion: v1
    kind: Service
    metadata:
     name: myapp
     namespace: default
    spec:
     type: ClusterIP
     selector:
      app: myapp
      release: stabel
     ports:
     - name: http
       port: 80
       targetPort: 80
    
    kubectl apply -f 16-svc-ClusterIP.yaml
    # get the IP of the ClusterIP-type svc
    kubectl get svc
    # enter one of the containers
    kubectl exec myapp-deploy-b777db8b4-rw28t -c myapp -it -- /bin/sh
    # 10.107.72.70 is the IP obtained above; reachable only inside the cluster
    wget "10.107.72.70:80"
    

    17-svc-Headless-ClusterIP-None.yaml

    The svc needs no ClusterIP and no kube-proxy management
    clusterIP: "None" makes the svc headless: no cluster virtual IP is allocated

    apiVersion: v1
    kind: Service
    metadata:
     name: svc-headless
     namespace: default
    spec:
     selector:
      app: myapp
     clusterIP: "None"
     ports:
     - port: 80
       targetPort: 80
    
    kubectl apply -f 17-svc-Headless-ClusterIP-None.yaml
    # CLUSTER-IP shows None
    kubectl get svc
    # get the IP of the kube-dns svc
    kubectl get svc -n kube-system
    # check DNS resolution of the headless svc
    # dig -t A svcName.namespace.svc.<cluster domain>. @<kube-dns svc IP>
    dig -t A svc-headless.default.svc.cluster.local. @10.96.0.10
    

    18-svc-NodePort.yaml

    On top of ClusterIP, additionally opens a NodePort on every node

    kind: Service
    apiVersion: v1
    metadata: 
     name: svc-nodeport
     namespace: default
    spec: 
     type: NodePort
     selector: 
      app: myapp
     ports: 
      - name: http
        targetPort: 80       # the port the container exposes
        port: 30000          # in-cluster access port
        nodePort: 30001      # external access port
    
    kubectl apply -f 18-svc-NodePort.yaml
    # get the ClusterIP and port
    kubectl get svc
    # enter one of the containers
    kubectl exec myapp-deploy-b777db8b4-rw28t -c myapp -it -- /bin/sh
    # test in-cluster access
    wget "ClusterIP:30000"
    # test external access, using a host IP
    wget "HostIP:30001"
    
    # kube-proxy manages pod-backed svcs via iptables NAT rules
    iptables -t nat -nvL KUBE-NODEPORTS
    

    19-svc-ExternalName.yaml

    DNS-redirects the svc's domain name to an external domain
    Brings an external service into the cluster

    apiVersion: v1
    kind: Service
    metadata:
     name: svc-externalname
     namespace: default
    spec:
     type: ExternalName
     externalName: www.baidu.com
    
    kubectl apply -f 19-svc-ExternalName.yaml
    kubectl get svc
    # test
    # enter one of the containers
    kubectl exec myapp-deploy-b777db8b4-rw28t -c myapp -it -- /bin/sh
    # [svcName.namespace.domain]
    # svc-externalname.default.svc.cluster.local
    # ping the external service
    ping www.baidu.com
    # ping the svc name
    ping svc-externalname
    # ping the svc domain name
    ping svc-externalname.default.svc.cluster.local
    

    ingress-nginx.yaml

    Installation

    # download first
    # URL: see above
    # then apply
    kubectl apply -f ingress-nginx.yaml
    # verify
    kubectl get svc -n ingress-nginx
    

    20-ingress-Deployment.yaml

    kind: Deployment
    apiVersion: apps/v1
    metadata: 
     name: deployment-nginx
     namespace: default
    spec: 
     replicas: 2
     selector: 
      matchLabels: 
       name: nginx
     template: 
      metadata: 
       labels: 
        name: nginx
      spec: 
       containers: 
        - name: c-nginx
          image: wangyanglinux/myapp:v1
          imagePullPolicy: IfNotPresent
          ports: 
           - containerPort: 80
    
    kubectl apply -f 20-ingress-Deployment.yaml
    kubectl get deployment
    

    21-ingress-svc.yaml

    kind: Service
    apiVersion: v1
    metadata: 
     name: svc-nginx
     namespace: default
    spec: 
     type: ClusterIP
     selector: 
      name: nginx
     ports: 
      - protocol: TCP
        port: 80 # in-cluster port
        targetPort: 80 # port the container exposes
    
    kubectl apply -f 21-ingress-svc.yaml
    kubectl get svc
    

    22-ingress-nginx-http.yaml

    Under the hood this is just nginx
    pathType: Exact means exact path matching
    service.name: svc-nginx must match the name in 21-ingress-svc.yaml

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
     name: nginx-test-ingress
    spec:
     rules:
     - host: www1.xcrj.com
       http:
        paths:
        - path: /
          pathType: Exact
          backend:
           service: 
            name: svc-nginx
            port: 
             number: 80
    
    kubectl apply -f 22-ingress-nginx-http.yaml
    kubectl get ingress
    # edit the hosts file
    # C:\Windows\System32\drivers\etc\hosts
    # <IP of any k8s worker node> www1.xcrj.com
    192.168.96.1 www1.xcrj.com
    # open www1.xcrj.com in a browser
    

    23-ingress-https.yaml

    A trusted HTTPS certificate from a cloud provider can also be configured

    # create a self-signed TLS certificate; generates tls.key and tls.crt in the current directory
    openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=svcnginx/O=svcnginx"
    # create a TLS secret; tls-secret is its name
    # syntax: kubectl create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key
    kubectl create secret tls tls-secret --key tls.key --cert tls.crt
    

    tls.hosts (- www3.xcrj.com) and the rules host (- host: www3.xcrj.com) must match

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
     name: ingress-https
    spec:
     tls:
      - hosts:
        - www3.xcrj.com
        secretName: tls-secret
     rules:
      - host: www3.xcrj.com
        http:
         paths:
          - path: /
            pathType: Exact
            backend:
             service: 
              name: svc-nginx
              port: 
               number: 80
    
    kubectl apply -f 23-ingress-https.yaml
    # edit the hosts file
    # C:\Windows\System32\drivers\etc\hosts
    # <IP of any k8s worker node> www3.xcrj.com
    192.168.96.1 www3.xcrj.com
    # open https://www3.xcrj.com in a browser
    

    24-ingress-basic-auth.yaml

    Install httpd, then use htpasswd to generate an auth file that stores the created user and the hashed password

    # auth is the output file name; xcrjuser is the username. You will be prompted for a password
    htpasswd -c auth xcrjuser
    kubectl create secret generic basic-auth --from-file=auth
    
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
     name: ingress-basic-auth
     annotations:
      nginx.ingress.kubernetes.io/auth-type: basic
      nginx.ingress.kubernetes.io/auth-secret: basic-auth
      nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - xcrjuser'
    spec:
     rules:
      - host: auth.xcrj.com
        http:
         paths:
          - path: /
            pathType: Exact
            backend:
             service: 
              name: svc-nginx
              port: 
               number: 80
    
    kubectl apply -f 24-ingress-basic-auth.yaml
    # edit the hosts file
    # C:\Windows\System32\drivers\etc\hosts
    # <IP of any k8s worker node> auth.xcrj.com
    192.168.96.1 auth.xcrj.com
    # open http://auth.xcrj.com in a browser
    

    25-ingress-rewrite.yaml

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
     name: ingress-nginx-rewrite
     annotations:
      nginx.ingress.kubernetes.io/rewrite-target: http://auth.xcrj.com
    spec:
     rules:
      - host: rewrite.xcrj.com
        http:
         paths:
          - path: /
            pathType: Exact
            backend:
             service: 
              name: svc-nginx
              port: 
               number: 80
    
    kubectl apply -f 25-ingress-rewrite.yaml
    # edit the hosts file
    # C:\Windows\System32\drivers\etc\hosts
    # <IP of any k8s worker node> rewrite.xcrj.com
    192.168.96.1 rewrite.xcrj.com
    # open http://rewrite.xcrj.com in a browser
    

    ConfigMap from-file

    Create the following files
    game.properties

    enemies=aliens
    lives=3
    enemies.cheat=true
    enemies.cheat.level=noGoodRotten
    secret.code.passphrase=UUDDLRLRBABAS
    secret.code.allowed=true
    secret.code.lives=30
    

    ui.properties

    color.good=purple
    color.bad=yellow
    allow.textmode=true
    how.nice.to.look=fairlyNice
    

    Run

    # create from a directory: each file name becomes a key, the file contents its value
    kubectl create configmap game-config --from-file=./configmap
    kubectl get cm game-config -o yaml
    kubectl delete cm game-config
    # create from a single file
    kubectl create configmap game-config-2 --from-file=./configmap/game.properties
    kubectl get cm game-config-2 -o yaml
    kubectl delete cm game-config-2
    # create from multiple files
    kubectl create configmap game-config-3 --from-file=./configmap/game.properties --from-file=./configmap/ui.properties
    kubectl get cm game-config-3 -o yaml
    kubectl delete cm game-config-3
    

    ConfigMap from-literal

    # --from-literal=key=value
    kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
    kubectl get cm special-config -o yaml
    kubectl delete cm special-config
    

    ConfigMap as environment variables

    Containers in a pod need environment variables
    kind: ConfigMap

    • data: a YAML map of key/value pairs
      kind: Pod
    • envFrom: consume an entire configmap, via configMapRef
    • env: consume a single key of a configmap, via configMapKeyRef (name = env var name, valueFrom = its value)

    26-ConfigMap-environment.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
     name: special-config
     namespace: default
    data:
     special.how: very
     special.type: charm
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
     name: env-config
     namespace: default
    data:
     log_level: INFO
    ---
    apiVersion: v1
    kind: Pod
    metadata:
     name: cm-env-pod
    spec:
     containers:
     - name: test-container
       image: wangyanglinux/myapp:v1
       command: [ "/bin/sh", "-c", "env" ]
       envFrom: 
       - configMapRef: 
          name: env-config
       env:
       - name: SPECIAL_LEVEL_KEY
         valueFrom:
          configMapKeyRef:
           name: special-config
           key: special.how
       - name: SPECIAL_TYPE_KEY
         valueFrom:
          configMapKeyRef:
           name: special-config
           key: special.type
     restartPolicy: Never
    
    kubectl apply -f 26-ConfigMap-environment.yaml
    kubectl get cm
    kubectl get cm special-config -o yaml
    kubectl get cm env-config -o yaml
    kubectl get pod
    kubectl logs -f cm-env-pod
    kubectl describe pod cm-env-pod
    

    ConfigMap for command-line arguments

    Use $(ENV_NAME) inside command:
    27-ConfigMap-command.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
     name: special-config
     namespace: default
    data:
     special.how: very
     special.type: charm
    ---
    apiVersion: v1
    kind: Pod
    metadata:
     name: cm-cmd-pod
    spec:
     containers:
     - name: test-container
       image: wangyanglinux/myapp:v1
       command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
       env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
           configMapKeyRef:
            name: special-config
            key: special.how
        - name: SPECIAL_TYPE_KEY
          valueFrom:
           configMapKeyRef:
            name: special-config
            key: special.type
     restartPolicy: Never
    
    kubectl apply -f 27-ConfigMap-command.yaml
    # check the printed output: "very charm"
    kubectl logs -f cm-cmd-pod
    

    ConfigMap mounted as a volume

    volumes: defines volumes; note that volumes is a sibling of containers
    volumeMounts: mounts a volume into the container
    configMap » volumes » containers
    the container mounts the volume » the volume is defined » the volume sources the configMap
    28-ConfigMap-volume.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
     name: special-config
     namespace: default
    data:
     special.how: very
     special.type: charm
    ---
    apiVersion: v1
    kind: Pod
    metadata:
     name: cm-volume-pod
    spec:
     volumes:
     - name: config-volume
       configMap:
        name: special-config
     containers:
     - name: test-container
       image: wangyanglinux/myapp:v1
       command: [ "/bin/sh", "-c", "sleep 600s" ]
       volumeMounts:
       - name: config-volume
         mountPath: /etc/config
     restartPolicy: Never
    
    kubectl apply -f 28-ConfigMap-volume.yaml
    kubectl describe pod cm-volume-pod
    kubectl delete pod cm-volume-pod
    

    ConfigMap hot reload

    The configmap_manager in kubelet manages the configmaps used by pods
    After a ConfigMap is updated, volumes mounted via volumeMounts refresh in about 10s
    After a ConfigMap is updated, env vars are not updated
    After a ConfigMap is updated, deployments do not roll automatically

    29-ConfigMap-reload.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
     name: log-config
     namespace: default
    data:
     log_level: INFO
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: my-nginx
    spec:
     replicas: 1
     selector: 
      matchLabels: 
       run: my-nginx
     template:
      metadata:
       labels:
        run: my-nginx
      spec:
       volumes:
       - name: config-volume
         configMap:
          name: log-config
       containers:
       - name: my-nginx
         image: wangyanglinux/myapp:v1
         ports:
         - containerPort: 80
         volumeMounts:
         - name: config-volume
           mountPath: /etc/config
    
    
    kubectl apply -f 29-ConfigMap-reload.yaml
    kubectl get pod
    # all containers in a pod share the pause container's network and volumes
    # check the log level under the pod's mount path: prints INFO
    kubectl exec my-nginx-864575dd4b-fp9fd -- cat /etc/config/log_level
    # update the configmap: change INFO to DEBUG
    # (kubectl edit cm log-config did not take effect here)
    kubectl apply -f 29-ConfigMap-reload.yaml
    # wait about 10s, then run again: prints DEBUG
    kubectl exec my-nginx-864575dd4b-fp9fd -- cat /etc/config/log_level
    
    # rolling-update the deployment
    # option 1: edit 29-ConfigMap-reload.yaml
    # add metadata: annotations: version/config: "20221101" to the pod template
    kubectl apply -f 29-ConfigMap-reload.yaml
    # check rollout status to verify the rollout succeeded
    kubectl rollout status deployment/my-nginx
    # rollout history, showing each revision
    kubectl rollout history deployment/my-nginx
    # option 2: edit directly; after each configmap change, bump version/config: "<value>" to trigger a rollout
    kubectl edit deployment/my-nginx
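
    A sketch of option 1: the annotation lives on the Deployment's pod template, so bumping its value changes the template and triggers a rollout (the annotation key and value are arbitrary):

```yaml
spec:
  template:
    metadata:
      annotations:
        version/config: "20221101"  # bump this value after each ConfigMap change
      labels:
        run: my-nginx
```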
    

    Secret Opaque

    type: Opaque
    data is a map of key/value pairs; values must be base64-encoded

    echo -n "username" | base64
    dXNlcm5hbWU=
    echo -n "pwdpwdpwdpwd" | base64
    cHdkcHdkcHdkcHdk
    

    30-Secret-Opaque.yaml

    apiVersion: v1
    kind: Secret
    metadata:
     name: secret-opaque
    type: Opaque
    data:
     meuname: dXNlcm5hbWU=
     mepwd: cHdkcHdkcHdkcHdk
    
    kubectl apply -f 30-Secret-Opaque.yaml
    kubectl get secret secret-opaque -o yaml
    

Using a Secret in env

Consume an Opaque Secret through environment variables
Referenced via secretKeyRef under env

    31-Secret-Opaque-env.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: secret-deployment
    spec:
     replicas: 2
     selector: 
      matchLabels: 
       app: pod-deployment
     template:
      metadata:
       labels:
        app: pod-deployment
      spec:
       containers:
       - name: cc-1
         image: wangyanglinux/myapp:v1
         ports:
         - containerPort: 80
         env:
         - name: TEST_USER
           valueFrom:
            secretKeyRef:
             name: secret-opaque
             key: meuname
         - name: TEST_PASSWORD
           valueFrom:
            secretKeyRef:
             name: secret-opaque
             key: mepwd
    
    kubectl apply -f 31-Secret-Opaque-env.yaml
    

Using a Secret in a volume

Consume an Opaque Secret through a volume
Referenced via secret under volumes

    32-Seceret-Opaque-volume.yaml

    apiVersion: v1
    kind: Pod
    metadata:
     name: secret-opaque-v
     labels:
      name: secret-test
    spec:
     volumes:
     - name: secret-v
       secret:
        secretName: secret-opaque
     containers:
     - name: db
       image: wangyanglinux/myapp:v1
       volumeMounts:
       - name: secret-v
         mountPath: /etc/secret
         readOnly: true
    
    kubectl apply -f 32-Seceret-Opaque-volume.yaml
    

Secret docker-registry

The credentials needed to pull images from a private image registry
type: docker-registry

Reference the docker-registry Secret under imagePullSecrets

# create a docker-registry type Secret
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USERNAME --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
# inspect it
kubectl get secret myregistrykey -o yaml
    

    33-Secret-docker-registry-imagePullSecrets.yaml

    apiVersion: v1
    kind: Pod
    metadata:
     name: imagepullsecret-pod
    spec:
     containers:
     - name: k8sapp
       image: 192.168.2.60:9999/k8s/myapp:v1
     imagePullSecrets:
     - name: myregistrykey
    
kubectl apply -f 33-Secret-docker-registry-imagePullSecrets.yaml
kubectl logs imagepullsecret-pod
    

ServiceAccount

Service account
The service account needed to pull images from a private registry
Steps:

• create a docker-registry type Secret
• create a ServiceAccount that references that Secret via imagePullSecrets
• use the created ServiceAccount in a pod

Create a docker-registry type Secret

# create a docker-registry type Secret
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USERNAME --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
# inspect it
kubectl get secret myregistrykey -o yaml
    

Create a ServiceAccount that uses the docker-registry Secret

    apiVersion: v1
    kind: ServiceAccount
    metadata:
       name: mysa
    imagePullSecrets:
    - name: myregistrykey 
    

Use the created ServiceAccount in a pod

    apiVersion: v1
    kind: Pod
    metadata: 
     name: sapod
     namespace: default
     labels: 
      app: xcrjpod
    spec: 
 serviceAccountName: mysa # serviceAccount is deprecated in favor of serviceAccountName
     containers: 
     - name: xcrjapp-3
       image: wangyanglinux/nginx:v1.0
    
    kubectl apply -f 34-ServiceAccount.yaml
    kubectl get sa mysa -o yaml
    kubectl apply -f 35-ServiceAccount-pod.yaml
    

volume emptyDir

emptyDir: an empty directory temporarily allocated on the k8s node (host)
an emptyDir's lifecycle is the same as its pod's
good for temporary data that need not persist, and for sharing data between containers in a pod
k8s automatically allocates the host directory on the node
mountPath under containers.volumeMounts is the path inside the container
emptyDir goes under volumes
volumes is a sibling of containers

    36-Volume-emptyDir.yaml

    apiVersion: v1
    kind: Pod
    metadata:
     name: volume-pod-emptydir
    spec:
     volumes:
     - name: cache-volume
       emptyDir: {}
     containers:
      - name: myapp-c
        image: wangyanglinux/myapp:v1
        volumeMounts:
         - name: cache-volume
           mountPath: /cache
      - name: busybox-c
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh","-c","sleep 6000s"]
        volumeMounts:
         - name: cache-volume
           mountPath: /cache2
    
    
kubectl apply -f 36-Volume-emptyDir.yaml
# enter the myapp-c container and create xcrj.txt under /cache
kubectl exec volume-pod-emptydir -c myapp-c -it -- /bin/sh
# enter the busybox-c container: xcrj.txt exists under /cache2
kubectl exec volume-pod-emptydir -c busybox-c -it -- /bin/sh
    

volume hostPath

hostPath persists data to a directory on the k8s node (host)
the host directory is written under hostPath
37-Volume-hostPath.yaml

    apiVersion: v1
    kind: Pod
    metadata:
     name: volume-hostpath-pod
    spec:
     volumes: 
      - name: hp-volume
        hostPath:
     path: /data # hostPath requires an absolute host path
     # type: Directory # mount failed with this set
     containers:
      - name: volume-hostpath-container
        image: wangyanglinux/myapp:v1
        volumeMounts:
         - mountPath: /dir-data
           name: hp-volume
    
kubectl apply -f 37-Volume-hostPath.yaml
# on the host, create the data directory and create xcrj.txt inside it
# enter the container and check that /dir-data contains xcrj.txt
kubectl exec volume-hostpath-pod -c volume-hostpath-container -it -- /bin/sh
    

38-PV-1.yaml

Storage backs a PV

Install the NFS server

# Debian/Ubuntu package names; on RHEL/CentOS use nfs-utils instead
apt install nfs-kernel-server nfs-common rpcbind
mkdir /root/k8s/nfsdatame
chmod 666 /root/k8s/nfsdatame
# export the directory over NFS
echo "/root/k8s/nfsdatame *(rw,no_root_squash,no_all_squash,sync)" >> /etc/exports
systemctl start rpcbind
systemctl start nfs-server
    
    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: nfspvme
    spec:
     storageClassName: nfs
     nfs:
      server: 172.26.112.1
      path: /root/k8s/nfsdatame
     capacity:
      storage: 2Gi
     accessModes:
     - ReadWriteOnce
     persistentVolumeReclaimPolicy: Recycle
    
kubectl apply -f 38-PV-1.yaml
# view basic PV info
kubectl get pv
    

PV access modes

• RWO (ReadWriteOnce): the volume can be mounted read-write by a single node
• ROX (ReadOnlyMany): the volume can be mounted read-only by many nodes
• RWX (ReadWriteMany): the volume can be mounted read-write by many nodes

PV reclaim policies (Recycle is supported only by NFS and HostPath):

• Retain: keep the volume; reclaim manually
• Recycle: basic scrub (rm -rf) and make the volume available again
• Delete: delete the associated storage asset along with the PV

PV phases:

• Available: free, not yet bound to a PVC
• Bound: bound to a PVC
• Released: the PVC was deleted, but the resource has not yet been reclaimed by k8s
• Failed: automatic reclamation failed

39-PV-2.yaml

A PV (PersistentVolume) has a lifecycle independent of any pod
defines type, path, capacity, access modes, and reclaim policy
server: the address of the NFS server

    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: nfspv
    spec:
     storageClassName: nfs
     nfs:
      server: 172.26.112.1
      path: /root/k8s/nfsdata
     capacity:
      storage: 2Gi
     accessModes:
     - ReadWriteOnce
     persistentVolumeReclaimPolicy: Retain
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: nfspv1
    spec:
     storageClassName: nfs
     nfs:
      server: 172.26.112.1
      path: /root/k8s/nfsdata1
     capacity:
      storage: 3Gi
     accessModes:
     - ReadWriteOnce
     persistentVolumeReclaimPolicy: Retain
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: nfspv2
    spec:
     storageClassName: nfs
     nfs:
      server: 172.26.112.1
      path: /root/k8s/nfsdata2
     capacity:
      storage: 10Gi
     accessModes:
     - ReadWriteOnce
     persistentVolumeReclaimPolicy: Retain
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: nfspv3
    spec:
     storageClassName: low
     nfs:
      server: 172.26.112.1
      path: /root/k8s/nfsdata3
     capacity:
      storage: 2Gi
     accessModes:
     - ReadWriteMany
     persistentVolumeReclaimPolicy: Retain
    
kubectl apply -f 39-PV-2.yaml
# view basic PV info
kubectl get pv
    

40-StatefulSet.yaml

StatefulSet: controller for stateful services

• storage > PV > (PVC > pod) > StatefulSet

Steps:

• create a headless Service
• create PVs backed by storage
• the StatefulSet references the headless Service
• the StatefulSet creates the pods; the headless Service gives each pod replica a stable DNS name
• the StatefulSet creates the PVCs; each PVC claims a PV, one PVC per pod replica
    apiVersion: v1
    kind: Service
    metadata:
     name: svcnginx
     labels:
      app: nginx
    spec:
     selector:
      app: nginx
     clusterIP: None
     ports:
     - name: web
       port: 80
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: nfspvme
    spec:
     storageClassName: nfs
     nfs:
      server: 172.28.112.1
      path: /D:/workspace/kubernates/nfsdatame
     capacity:
      storage: 5Gi
     accessModes:
     - ReadWriteOnce
     persistentVolumeReclaimPolicy: Recycle
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
     name: ssweb
    spec:
     serviceName: svcnginx
     replicas: 3
     selector:
      matchLabels:
       app: nginx
     template:
      metadata:
       labels:
        app: nginx
      spec:
       containers:
        - name: connginx
          image: wangyanglinux/myapp:v1
          ports:
           - name: web
             containerPort: 80
          volumeMounts:
           - name: pvcme
             mountPath: /usr/share/nginx/html
     volumeClaimTemplates:
      - metadata:
         name: pvcme
        spec:
         storageClassName: nfs
         accessModes: [ "ReadWriteOnce" ]
         resources:
          requests:
           storage: 1Gi
    
kubectl apply -f 40-StatefulSet.yaml
kubectl get statefulset
# pod names: [StatefulSet.name]-[ordinal starting at 0]
kubectl get pod
kubectl describe statefulset ssweb
    

Controller:

• controls the number of pod replicas
• podName: [StatefulSet.name]-[ordinal starting at 0]
• here the pod names are ssweb-0, ssweb-1, ssweb-2

Stateful service:

• it creates one DNS name per pod replica ([podName].[headlessSvcName]). Pods communicate via DNS name rather than pod IP: when the node hosting a pod fails and the pod is rescheduled onto another node, the pod IP changes but its DNS name stays the same
• it uses the headless Service to control the pods' DNS names
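For this example the per-replica names follow that pattern and can be enumerated with a small shell loop (names only; actually resolving them requires running inside the cluster, where the full form is [podName].[svcName].[namespace].svc.cluster.local):

```shell
# stable per-replica DNS names for the ssweb StatefulSet behind the svcnginx headless Service
for i in 0 1 2; do
  echo "ssweb-$i.svcnginx"
done
```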

PVC:

• based on volumeClaimTemplates, one PVC is created per pod replica ([volumeClaimTemplates.name]-[podName])
• here the PVC names are pvcme-ssweb-0, pvcme-ssweb-1, pvcme-ssweb-2
• manually deleting a PVC triggers the PV's reclaim policy
• deleting a pod does not delete its PVC
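The PVC naming scheme can likewise be sketched in shell:

```shell
# PVC names: [volumeClaimTemplates.name]-[StatefulSet.name]-[ordinal]
for i in 0 1 2; do
  echo "pvcme-ssweb-$i"
done
```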

Note

• after a StatefulSet is deleted, its pods are deleted automatically, but the PVCs created for them are not deleted automatically, because deleting a PVC would trigger the PV's reclaim policy
  • Original article: https://blog.csdn.net/baidu_35805755/article/details/127452427