【Cloud Native | Kubernetes Series】---- Taints and Tolerations


    Taints and Tolerations

    1. A taint is set on a Node to repel Pods from being scheduled onto it. It is the opposite of affinity: a tainted Node repels the creation of new Pods.
    2. A toleration is set on a Pod so that it tolerates a Node's taints, i.e. new Pods can still be created on that Node even though it carries taints.

    1 Taint configuration, method 1

    Adding a taint

    root@k8s-master-01:~# kubectl cordon 192.168.31.114
    node/192.168.31.114 cordoned
    root@k8s-master-01:~# kubectl get node
    NAME             STATUS                     ROLES    AGE    VERSION
    192.168.31.101   Ready,SchedulingDisabled   master   121d   v1.22.5
    192.168.31.102   Ready,SchedulingDisabled   master   121d   v1.22.5
    192.168.31.103   Ready,SchedulingDisabled   master   121d   v1.22.5
    192.168.31.111   Ready                      node     121d   v1.22.5
    192.168.31.112   Ready                      node     121d   v1.22.5
    192.168.31.113   Ready                      node     24d    v1.22.5
    192.168.31.114   Ready,SchedulingDisabled   node     22h    v1.22.5
    root@k8s-master-01:~# kubectl describe node 192.168.31.114
    Taints:             node.kubernetes.io/unschedulable:NoSchedule
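
    Under the hood, kubectl cordon simply marks the Node as unschedulable; the control plane then exposes this as the built-in node.kubernetes.io/unschedulable:NoSchedule taint shown above. A minimal sketch of the relevant Node fields, as they would appear in kubectl get node 192.168.31.114 -o yaml (all other fields omitted):

    spec:
      unschedulable: true                      # set by kubectl cordon, cleared by kubectl uncordon
      taints:
      - key: node.kubernetes.io/unschedulable  # added automatically while the node is unschedulable
        effect: NoSchedule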
    

    Removing the taint

    root@k8s-master-01:~# kubectl uncordon 192.168.31.114
    node/192.168.31.114 uncordoned
    root@k8s-master-01:~# kubectl get node
    NAME             STATUS                     ROLES    AGE    VERSION
    192.168.31.101   Ready,SchedulingDisabled   master   121d   v1.22.5
    192.168.31.102   Ready,SchedulingDisabled   master   121d   v1.22.5
    192.168.31.103   Ready,SchedulingDisabled   master   121d   v1.22.5
    192.168.31.111   Ready                      node     121d   v1.22.5
    192.168.31.112   Ready                      node     121d   v1.22.5
    192.168.31.113   Ready                      node     24d    v1.22.5
    192.168.31.114   Ready                      node     22h    v1.22.5
    
    root@k8s-master-01:~# kubectl describe node 192.168.31.114
    Taints:             <none>
    

    2 Taint configuration, method 2

    Adding a taint

    • NoSchedule: hard constraint; new Pods will not be scheduled onto a Node that carries this taint.
    • PreferNoSchedule: soft constraint; Kubernetes will try to avoid scheduling Pods onto a Node that carries this taint.
    • NoExecute: Kubernetes will not schedule new Pods onto a Node that carries this taint, and Pods already running on the Node are evicted (see the tolerationSeconds sketch below).
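
    Because NoExecute also evicts Pods that are already running, a toleration for a NoExecute taint may additionally set tolerationSeconds to bound how long the Pod is allowed to keep running on the tainted Node. A minimal sketch, reusing the key1/value1 taint from the examples below:

          tolerations:
          - key: "key1"
            operator: "Equal"
            value: "value1"
            effect: "NoExecute"
            tolerationSeconds: 3600   # evicted after 1 hour if the taint remains; omit to tolerate indefinitely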
    root@k8s-master-01:~# kubectl taint node 192.168.31.114 key1=value1:NoSchedule
    node/192.168.31.114 tainted
    root@k8s-master-01:~# kubectl get node
    NAME             STATUS                     ROLES    AGE    VERSION
    192.168.31.101   Ready,SchedulingDisabled   master   121d   v1.22.5
    192.168.31.102   Ready,SchedulingDisabled   master   121d   v1.22.5
    192.168.31.103   Ready,SchedulingDisabled   master   121d   v1.22.5
    192.168.31.111   Ready                      node     121d   v1.22.5
    192.168.31.112   Ready                      node     121d   v1.22.5
    192.168.31.113   Ready                      node     24d    v1.22.5
    192.168.31.114   Ready                      node     22h    v1.22.5
    
    root@k8s-master-01:~# kubectl describe node 192.168.31.114
    Taints:             key1=value1:NoSchedule
      Normal   NodeSchedulable          12m (x2 over 14m)  kubelet     Node 192.168.31.114 status is now: NodeSchedulable
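
    The taint is stored on the Node object as a structured entry whose fields mirror exactly what a toleration has to match later. A sketch of how the taint above would appear in kubectl get node 192.168.31.114 -o yaml:

    spec:
      taints:
      - key: key1
        value: value1
        effect: NoSchedule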
    

    Removing the taint

    root@k8s-master-01:~# kubectl taint node 192.168.31.114 key1:NoSchedule-
    node/192.168.31.114 untainted
    
    root@k8s-master-01:~# kubectl describe node 192.168.31.114
    Taints:             <none>
      Normal   NodeSchedulable          13m (x2 over 16m)  kubelet     Node 192.168.31.114 status is now: NodeSchedulable
    

    3 Tolerations

    By defining tolerations on a Pod, it can be scheduled onto a Node that carries taints.
    A toleration matches taints based on its operator:
    If the operator is Exists, the toleration needs no value; it matches any taint with the given key (and effect).
    If the operator is Equal, a value must be specified and it must equal the taint's value.
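
    For example, either of the following tolerations would match the key1=value1:NoSchedule taint used in this article; a minimal sketch:

          tolerations:
          # operator Exists: no value is given; any taint with key "key1" and effect NoSchedule matches
          - key: "key1"
            operator: "Exists"
            effect: "NoSchedule"
          # operator Equal: the value must equal the taint's value
          - key: "key1"
            operator: "Equal"
            value: "value1"
            effect: "NoSchedule"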

    3.1 Tainting the nodes

    Cordon 192.168.31.114 and add a NoSchedule taint to 192.168.31.111:

    root@k8s-master-01:~# kubectl cordon 192.168.31.114
    node/192.168.31.114 cordoned
    root@k8s-master-01:~# kubectl taint nodes 192.168.31.111 key1=value1:NoSchedule
    node/192.168.31.111 tainted
    root@k8s-master-01:~# kubectl get node
    NAME             STATUS                     ROLES    AGE    VERSION
    192.168.31.101   Ready,SchedulingDisabled   master   121d   v1.22.5
    192.168.31.102   Ready,SchedulingDisabled   master   121d   v1.22.5
    192.168.31.103   Ready,SchedulingDisabled   master   121d   v1.22.5
    192.168.31.111   Ready                      node     121d   v1.22.5
    192.168.31.112   Ready                      node     121d   v1.22.5
    192.168.31.113   Ready                      node     24d    v1.22.5
    192.168.31.114   Ready,SchedulingDisabled   node     24h    v1.22.5
    root@k8s-master-01:~# kubectl describe node 192.168.31.111 |grep Taint
    Taints:             key1=value1:NoSchedule
    

    When the Pods are deployed now, the scheduler avoids both 192.168.31.111 and 192.168.31.114:

    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods -n wework -o wide|grep app1
    wework-tomcat-app1-deployment-7f64c5bbcb-48wbp   1/1     Running   0            16s     172.100.140.114   192.168.31.112   <none>           <none>
    wework-tomcat-app1-deployment-7f64c5bbcb-g5t8w   1/1     Running   0            16s     172.100.76.184    192.168.31.113   <none>           <none>
    wework-tomcat-app1-deployment-7f64c5bbcb-lfsqb   1/1     Running   0            16s     172.100.76.181    192.168.31.113   <none>           <none>
    wework-tomcat-app1-deployment-7f64c5bbcb-vh66k   1/1     Running   0            16s     172.100.140.102   192.168.31.112   <none>           <none>
    wework-tomcat-app1-deployment-7f64c5bbcb-zp7p6   1/1     Running   0            16s     172.100.76.183    192.168.31.113   <none>           <none>
    

    Now add a toleration for the taint to the Pod spec:

          tolerations: 
          - key: "key1"
            operator: "Equal"
            value: "value1"
            effect: "NoSchedule"
    

    Pods are now also created on 192.168.31.111, the Node that carries this taint:

    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl apply -f case5.1-taint-tolerations.yaml 
    deployment.apps/wework-tomcat-app1-deployment configured
    service/wework-tomcat-app1-service unchanged
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods -n wework -o wide|grep app1
    wework-tomcat-app1-deployment-598d979d5f-2mcvh   1/1     Running   0            36s     172.100.109.80    192.168.31.111   <none>           <none>
    wework-tomcat-app1-deployment-598d979d5f-nz225   1/1     Running   0            34s     172.100.140.119   192.168.31.112   <none>           <none>
    wework-tomcat-app1-deployment-598d979d5f-skf42   1/1     Running   0            36s     172.100.76.182    192.168.31.113   <none>           <none>
    wework-tomcat-app1-deployment-598d979d5f-whjvl   1/1     Running   0            36s     172.100.109.76    192.168.31.111   <none>           <none>
    wework-tomcat-app1-deployment-598d979d5f-x85vs   1/1     Running   0            33s     172.100.76.185    192.168.31.113   <none>           <none>
    
  • Original article: https://blog.csdn.net/qq_29974229/article/details/126546787