• Pod principles


    Storage
    • ConfigMap: stores configuration data
    • Secret: stores sensitive data such as keys and credentials
    • volume: shared storage volumes, e.g. NFS
    • PersistentVolume: persistent volumes
    configMap
    • Provides a mechanism for injecting configuration data into containers
    • kubectl create configmap game-config --from-file=/tmp/configmap/kubectl
    • kubectl get cm game-config -o yaml
      ls /tmp/configmap/kubectl/
    game.properties file:
    enemies=aliens
    secret.code.allowed=true
    
    ui.properties file:
    color.bad=yellow
    
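    To check that the data actually reaches a container, here is a minimal sketch (the pod name and mount path are illustrative, not from the original notes) that mounts the game-config ConfigMap created above as a volume; each key becomes a file:
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-demo        # illustrative name
    spec:
      containers:
      - name: app
        image: busybox
        command: ["/bin/sh", "-c", "cat /etc/config/game.properties && sleep 3600"]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config  # game.properties and ui.properties appear here
      volumes:
      - name: config-volume
        configMap:
          name: game-config       # the ConfigMap created above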
    Service mesh (distributed traffic management): Istio
    ingress
    • Goes beyond traditional layer-4 proxying and implements layer-7 proxying
    • An Ingress first binds a domain name; requests for it reach nginx, which reverse-proxies them to the backend Service
    • Q: how does nginx connect to the backend Service? A common deployment scheme for the nginx controller itself is NodePort
    • The proxy_pass entries in the nginx config file are not written by hand; they are added automatically through the Ingress resource
    • Ingress processing flow:
      • a goroutine in the pod watches resource changes via the informer pattern
      • changes are written to updateChannel, and NginxController reads the update events
      • a sync task is appended to the syncQueue
      • queued tasks are pulled periodically to decide whether a reload is needed
      • if needed, the config file is rewritten and nginx -s reload is run
      • if not needed, or after the reload, data is POSTed to the Lua server
      • the Lua server runs as an nginx module
    • wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
    • kubectl get svc -n ingress-nginx
    • kubectl get pod -n ingress-nginx
    • kubectl exec -it nginx-ingress-xxx-xxx-xx -- /bin/bash
    • cat /etc/nginx/nginx.conf
      ingress-nginx.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: nginx-test
    spec:
      rules:
        - host: foo.bar.com
          http:
            paths:
            - path: /
              backend:
                serviceName: nginx-svc # name of the Service the Ingress routes to
                servicePort: 80
                
    
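    Once the rule is applied, a quick way to test it (assuming the controller is exposed via NodePort as noted above; the placeholders are illustrative) is to send a request with the Host header set:
    • curl -H "Host: foo.bar.com" http://<node-ip>:<nodeport>/ # <nodeport> comes from kubectl get svc -n ingress-nginx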
    Service proxy modes


    • userspace: all traffic passes through kube-proxy, which puts it under heavy load; it must also maintain the service/port mappings on every update
    • iptables: scheduling is done directly by firewall (iptables) rules
    • ipvs: traffic does not pass through kube-proxy; IPVS manages the endpoint bindings (see the kube-proxy config sketch after this list). Scheduling algorithms:
      * rr: round robin
      * lc: least connections
      * dh: destination hashing
      * sed: shortest expected delay
      * nq: never queue
      round-robin DNS is not used because of DNS caching
      * ipvsadm -L lists the IPVS virtual servers
    • ClusterIP uses iptables or LVS
    • Service creation flow
      • 1. kubectl sends the create-service request to the apiserver, which stores the object in etcd
      • 2. the kube-proxy process on each node watches for Service and Pod changes and writes them into local iptables rules
      • 3. iptables/ipvs use NAT to translate virtual-IP traffic and forward it to the endpoints
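    A minimal sketch of selecting IPVS mode and one of the schedulers above via kube-proxy's KubeProxyConfiguration (how this file is applied depends on how kube-proxy was deployed, e.g. the kube-proxy ConfigMap in a kubeadm cluster):
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"        # instead of the default iptables mode
    ipvs:
      scheduler: "rr"   # rr / lc / dh / sed / nq, as listed above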
        myapp-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-deployment
      namespace: default
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
          release: stabel
      template:
        metadata:
          labels: # three label values
            app: myapp
            release: stabel
            env: test
        spec:
          containers:
          - name: myapp
            image: nginx:latest
            imagePullPolicy: IfNotPresent
            ports:
            - name: http
              containerPort: 80
    

    kubectl apply -f myapp-deployment.yaml
    myapp-service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
      namespace: default
    spec:
      type: ClusterIP # Service type
      selector:
        app: myapp # selects the labels defined in the Deployment template
        release: stabel  # selects the labels defined in the Deployment template
      ports:
      - name: http
        port: 80
        targetPort: 80
    

    kubectl apply -f myapp-service.yaml

    • kubectl get svc
    • curl <cluster-ip>:80 fails to connect when the label fields are inconsistent, because no pods are matched
    • kubectl delete -f myapp-service.yaml deletes the Service
    • Headless Service: spec.clusterIP: "None" # no load balancing; this is the basis of StatefulSet support (see the sketch after these commands)
    • after a Service is created, its record is written into CoreDNS
    • kubectl get pod -n kube-system -o wide
    • get the CoreDNS pod IP from the output
    • dig -t A myapp-deployment.default.svc.cluster.local. @10.244.0.21 # headless-service lookup against CoreDNS
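    A minimal headless-Service sketch (the name is illustrative; the selector reuses the Deployment's labels from above):
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-headless   # illustrative name
    spec:
      clusterIP: None        # headless: DNS returns the pod IPs directly, no load balancing
      selector:
        app: myapp
        release: stabel
      ports:
      - port: 80
        targetPort: 80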
    NodePort
    • spec.type: NodePort (see the sketch after these commands)
    • ipvsadm -Ln | grep <ip> # filter the IPVS table for the service or node IP
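    A minimal NodePort sketch (the name and the nodePort value are illustrative; omit nodePort to let Kubernetes pick one from the default 30000-32767 range):
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-nodeport   # illustrative name
    spec:
      type: NodePort
      selector:
        app: myapp
        release: stabel
      ports:
      - port: 80
        targetPort: 80
        nodePort: 30080      # illustrative; exposed on every node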
      There is also a domain-alias operation:
      ExternalName
    • spec.type: ExternalName
    • spec.externalName: xx.xx.com
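    Combining the two fields, a minimal sketch (the Service name is illustrative) that aliases the external domain inside the cluster:
    apiVersion: v1
    kind: Service
    metadata:
      name: my-external-service   # illustrative name
    spec:
      type: ExternalName
      externalName: xx.xx.com     # cluster DNS returns a CNAME to this host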
    kubeadm join (joining a node to the cluster)
    • kubeadm join xxx:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxxxxx
      [preflight] Running pre-flight checks
      [preflight] Reading configuration from the cluster...
      [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Starting the kubelet
      [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
      [kubelet-check] Initial timeout of 40s passed.
      [kubelet-check] It seems like the kubelet isn't running or healthy.
      [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
      (the two kubelet-check lines above repeat several more times)
      error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
      To see the stack trace of this error execute with --v=5 or higher
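    When the join fails like this, the usual first checks on the joining node (commands added here for completeness, not from the original notes):
    • systemctl status kubelet # is the kubelet actually running?
    • journalctl -u kubelet -f # follow the kubelet logs for the underlying error
    • kubeadm reset # clean up state before retrying the join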
    Service (svc)
    • Provides only layer-4 load balancing, forwarding based on IP address and port; no layer-7 capability
    • Ingress can be added on top to get layer-7 features
    • Load-balances across pods selected by labels, e.g. app=nginx
    • Service types
      • ClusterIP: the default type; reachable only inside the cluster
      • NodePort: builds on ClusterIP and additionally exposes the port on every node; an external nginx can then balance across the node:port pairs
      • LoadBalancer: a cloud provider's load balancer
      • ExternalName: brings an external service into the cluster for direct use
    CronJob
    • Creates Jobs on a schedule
    • spec.jobTemplate.spec.template: the pod template for the Jobs it creates
    • restartPolicy: Never (never restart) or OnFailure (restart the container on failure)
    • spec.completions: number of pods that must finish successfully for the Job; defaults to 1
    • spec.parallelism: number of pods run in parallel; defaults to 1
    • spec.activeDeadlineSeconds: maximum time a Job may spend retrying failed pods before it is terminated
    • spec.startingDeadlineSeconds: optional; how late a Job that missed its scheduled time may still be started
    • spec.concurrencyPolicy: concurrency policy; whether a new run may start while the previous one is still running
      • Allow: allow concurrent Jobs
      • Forbid: forbid concurrency
      • Replace: cancel the current run and start a new one
    • spec.suspend: suspend subsequent runs
    • spec.successfulJobsHistoryLimit / spec.failedJobsHistoryLimit: how many successful or failed Jobs to keep
    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: hello
    spec:
      schedule: "*/1 * * * *"
      jobTemplate:
        spec:
          template:
            spec:
              containers:
              - name: hello
                image: busybox
                args:
                - /bin/sh
                - -c
                - date; echo Hello from the kubernetes cluster
              restartPolicy: OnFailure
    

    Common YAML error: error validating data: ValidationError(CronJob.spec.jobTemplate.spec.template.spec): unknown field (typically a misspelled or mis-indented field)

    • kubectl get cronjob -w
    • kubectl get job
    • kubectl logs podname # view the pod's logs
    Job
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: pi
    spec:
      template:
        metadata:
          name: pi # Job name; also used as the pod name prefix
        spec:
          containers:
          - name: pi # container name
            image: perl
            command: ["perl","-Mbignum=bpi","-wle","print bpi(2000)"] # compute pi to 2000 decimal places
          restartPolicy: Never # never restart
    
    • kubectl create -f job.yaml
    • kubectl get job -w
    • kubectl get pod -w # watch pod status in real time
    The COMPLETIONS column shows the completion count:
    NAME   COMPLETIONS   DURATION   AGE
    pi     1/1           27s        2m37s
    
    • kubectl describe pod pi-6bsxh
    DaemonSet
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemonset-example
      labels:
        app: daemonset
    spec:
      selector:
        matchLabels:
          name: daemonset-example
      template:
        metadata:
          labels:
            name: daemonset-example
        spec:
          containers:
          - name: daemonset-example
            image: nginx:latest
    
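    To verify the DaemonSet scheduled one pod per node (commands added for illustration):
    • kubectl get daemonset daemonset-example
    • kubectl get pod -o wide # expect one daemonset-example pod per schedulable node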
    Deployment rollback

    A new revision is created when the labels or the container image in .spec.template are modified; scaling by changing the replica count does not create a new revision.

    • kubectl rollout status deployment/nginx-deployment
    • kubectl rollout history deployment/nginx-deployment
    • kubectl rollout undo deployment/nginx-deployment
    • kubectl rollout undo deployment/nginx-deployment --to-revision=1
    • kubectl rollout undo deployment/nginx-deployment --to-revision=2
    • kubectl rollout pause deployment/nginx-deployment # pause the rollout
    • to check success: kubectl rollout status deploy/nginx && echo $? # prints 0 when the rollout succeeded
    • the number of old revisions kept is capped by .spec.revisionHistoryLimit; setting it to 0 disallows rollback
    Rolling updates
    • kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
    • nginx here is the container name inside the nginx-deployment Deployment
    • Rollover (multiple rollouts in flight): if a rollback is issued while the new replicas have not yet reached the desired count, the replicas still being created are killed immediately and redeployed
    Scaling
    • kubectl scale deployment nginx-deployment --replicas 10
    • if the cluster supports HPA, it can scale automatically (a declarative sketch follows below)
    • kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
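    The declarative equivalent of the autoscale command above, as a minimal sketch (CPU metrics require the metrics server to be installed):
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: nginx-deployment
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: nginx-deployment
      minReplicas: 10
      maxReplicas: 15
      targetCPUUtilizationPercentage: 80 # scale out when average CPU exceeds 80%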
    nginx
    • kubectl apply -f nginx-deployment.yaml --record
      --record stores the command that was run in the resource's annotations (visible in rollout history)
    • kubectl get deployment
    • kubectl get pod -o wide # show pod IPs and the nodes they run on
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.7.9
            ports:
            - containerPort: 80
    
    Label management

    kubectl get pod --show-labels
    kubectl label pod podname tier=frontend1 --overwrite=true
    Manually changing a managed pod's labels makes the RS detect that its replicas no longer match (it creates a replacement), so extra cleanup is needed; see the walkthrough below
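    A short walkthrough of that effect (the pod name is illustrative):
    • kubectl get pod --show-labels # note an RS-managed pod, e.g. frontend-abc12
    • kubectl label pod frontend-abc12 tier=frontend1 --overwrite=true # it no longer matches the selector
    • kubectl get pod --show-labels # the RS has created a new pod to restore the count
    • kubectl delete pod frontend-abc12 # the orphaned pod must be removed by hand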

    Main container start/stop hooks (postStart / preStop)
    spec:
      containers:
      - name: lifecycle-demo # illustrative name
        image: nginx
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh","-c","echo hi > /usr/share/message"]
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
    
    • 1
    • 2
    • 3
    • 4
    • 5
    • 6
    • 7
    • 8
    • 9
    • 10
    • 11

    kubectl get pod
    The STATUS column shows the state
    Pod phase values

    • Pending: e.g. the image is still being pulled
    • Running: at least one of the pod's containers is running, or is in the process of starting or restarting
    • Succeeded: all containers terminated successfully and will not restart; typical for Jobs/CronJobs
    • Failed: at least one container exited with a non-zero status
    • Unknown: the pod's state cannot be obtained, usually because communication with its host failed
    Pod categories
    • standalone (self-managed) pods
    • controller-managed pods
      Controller types
    • ReplicationController: maintains the replica count
    • ReplicaSet: like RC, with richer label selector support
    • Deployment: declarative definition (apply); defines an RS; rolling upgrades and scaling by controlling the RS's replica count and version, rs-old -> rs-new
    • DaemonSet
    • Job: runs a script; a non-zero exit triggers retries (retry count configurable); CronJob creates Jobs at specific times, the * * * * * fields being minute hour day month weekday
    • StatefulSet: persistent storage via PVC (e.g. MySQL); ordered deployment and scale-out based on init containers, without changing the pod image; ordered scale-in and deletion: created 0 .. n-1, deleted n-1 .. 0
    Creating an RS

    kubectl explain rs
    ReplicaSet

    apiVersion: extensions/v1beta1
    kind: ReplicaSet
    metadata:
      name: frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          tier: frontend
      template:
        metadata:
          labels:
            tier: frontend
        spec:
          containers:
            - name: php-redis
              image: gcr.io/google_samples/gb-frontend:v3
              env:
              - name: GET_HOSTS_FROM
                value: dns
              ports:
              - containerPort: 80
    
  • Original article: https://blog.csdn.net/haogeoyes/article/details/127981113