    【Cloud Native | Learning Kubernetes from Scratch】Part 9: k8s Node Selectors and Node Affinity


    This article is included in the column "Learning k8s from Scratch".


    Node selectors

    When we create a pod, the scheduler decides where it runs, and by default it lands on an arbitrary worker node. If we want the pod scheduled onto a specific node, or onto nodes that share certain characteristics, we can use the nodeName or nodeSelector field in the pod spec to choose the target node.
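    Before pinning a pod to a node, it helps to see which worker nodes exist and which labels they already carry. A minimal check, assuming the k8smaster/k8snode/k8snode2 node names used throughout this series:

    #List the nodes and their current labels
    [root@k8smaster node]# kubectl get nodes -o wide
    [root@k8smaster node]# kubectl get nodes --show-labels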

    1. nodeName

    nodeName pins the pod to a specific, named node.

    #Pull the tomcat and busybox images with docker on node1 and node2 beforehand
    
    [root@k8smaster node]# vim pod-node.yml 
    apiVersion: v1 
    kind: Pod 
    metadata: 
      name: demo-pod
      namespace: default 
      labels: 
        app: myapp 
        env: dev 
    spec: 
      nodeName: k8snode
      containers: 
      - name: tomcat-pod-java 
        ports: 
        - containerPort: 8080 
        image: tomcat
        imagePullPolicy: IfNotPresent 
      - name: busybox 
        image: busybox:latest 
        command: 
        - "/bin/sh" 
        - "-c" 
        - "sleep 3600" 
     
    [root@k8smaster node]# kubectl apply -f pod-node.yml 
    pod/demo-pod created
    
    #Check which node the pod was scheduled to 
    [root@k8smaster node]# kubectl get pods -o wide 
    NAME                          READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE  
    demo-pod                      2/2     Running   0          35s    10.244.2.18   k8snode    <none>   
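    As a quick alternative check (not part of the original walkthrough), the bound node can also be read straight from the pod spec with kubectl's jsonpath output:

    #Print only the node the pod was bound to
    [root@k8smaster node]# kubectl get pod demo-pod -o jsonpath='{.spec.nodeName}'
    k8snode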
    

    2. nodeSelector

    nodeSelector schedules the pod onto nodes that carry the specified labels.

    #Label the node: give k8snode2 a disk=ceph label
    [root@k8smaster node]# kubectl describe nodes k8snode2    #inspect the node's attributes
    [root@k8smaster node]# kubectl label nodes k8snode2 disk=ceph
    node/k8snode2 labeled
    #Describe the node again and the new label shows up under Labels
    
    #In the pod definition, request scheduling onto a node carrying the disk=ceph label 
    [root@k8smaster node]# vim pod-1.yaml 
    apiVersion: v1 
    kind: Pod 
    metadata: 
      name: demo-pod-1 
      namespace: default 
      labels: 
        app: myapp 
        env: dev 
    spec: 
      nodeSelector: 
        disk: ceph
      containers: 
      - name: tomcat-pod-java 
        ports: 
        - containerPort: 8080 
        image: tomcat
        imagePullPolicy: IfNotPresent 
     
    [root@k8smaster node]# kubectl apply -f pod-1.yaml 
    pod/demo-pod-1 created
    
    #Check which node the pod was scheduled to 
    [root@k8smaster node]# kubectl get pods -o wide 
    NAME                          READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE
    demo-pod-1                    1/1     Running   0          8s      10.244.1.19   k8snode2   <none>         
    #If both nodeName and nodeSelector are set, nodeName wins: the pod is bound directly to that node and the scheduler (and its label check) is bypassed.
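    If you ever need to inspect or undo the disk=ceph label, the standard kubectl label syntax covers it; a small aside, not part of the original steps:

    #Show the node's labels, then delete the disk label (the trailing "-" removes a label)
    [root@k8smaster node]# kubectl get nodes k8snode2 --show-labels
    [root@k8smaster node]# kubectl label nodes k8snode2 disk-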
    

    Taints and tolerations

    Tolerations

    A toleration means a pod is allowed, but not required, to be scheduled onto a node that carries the matching taint.
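    As a minimal sketch of how this looks (the node name k8snode2 and the disktype=ssd key/value are placeholders chosen here for illustration), a node is tainted and a pod declares a matching toleration like this:

    #Taint the node: pods without a matching toleration will not be scheduled onto it
    [root@k8smaster node]# kubectl taint nodes k8snode2 disktype=ssd:NoSchedule

    #In the pod spec, tolerate that taint so the pod may (but is not forced to) land on k8snode2
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-toleration
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:
      - key: "disktype"
        operator: "Equal"
        value: "ssd"
        effect: "NoSchedule"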

    Node affinity

    Node affinity scheduling is configured with nodeAffinity. Use kubectl explain to browse the fields under affinity:

    [root@k8smaster node]# kubectl explain pods.spec.affinity 
    KIND:     Pod
    VERSION:  v1
    
    RESOURCE: affinity <Object>
    
    DESCRIPTION:
         If specified, the pod's scheduling constraints
    
         Affinity is a group of affinity scheduling rules.
    
    FIELDS:
       nodeAffinity	
         Describes node affinity scheduling rules for the pod.
    
       podAffinity	
       
       podAntiAffinity	
    #Besides nodeAffinity there are podAffinity and podAntiAffinity; let's look at nodeAffinity in more detail.
     
    [root@k8smaster node]# kubectl explain pods.spec.affinity.nodeAffinity 
    KIND:     Pod
    VERSION:  v1
    
    RESOURCE: nodeAffinity 
    
    DESCRIPTION:
         Describes node affinity scheduling rules for the pod.
    
         Node affinity is a group of node affinity scheduling rules.
    
    FIELDS:
       preferredDuringSchedulingIgnoredDuringExecution	<[]Object>
    
       requiredDuringSchedulingIgnoredDuringExecution	
         
    #preferredDuringSchedulingIgnoredDuringExecution: the scheduler tries to find a node that satisfies this affinity, but it is not mandatory. Soft affinity: the pod may still be scheduled even if nothing matches.
    #requiredDuringSchedulingIgnoredDuringExecution: a node must satisfy this affinity. Hard affinity: if nothing matches, the pod will not be scheduled.
    
    [root@k8smaster node]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution
    KIND:     Pod
    VERSION:  v1
    
    RESOURCE: requiredDuringSchedulingIgnoredDuringExecution 
    
    DESCRIPTION:
         If the affinity requirements specified by this field are not met at
         scheduling time, the pod will not be scheduled onto the node. If the
         affinity requirements specified by this field cease to be met at some point
         during pod execution (e.g. due to an update), the system may or may not try
         to eventually evict the pod from its node.
    
         A node selector represents the union of the results of one or more label
         queries over a set of nodes; that is, it represents the OR of the selectors
         represented by the node selector terms.
    
    FIELDS:
       nodeSelectorTerms	<[]Object> -required-		#required field, a list of objects
     
    [root@k8smaster node]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms
    KIND:     Pod
    VERSION:  v1
    
    RESOURCE: nodeSelectorTerms <[]Object>
    
    DESCRIPTION:
         Required. A list of node selector terms. The terms are ORed.
    
         A null or empty node selector term matches no objects. The requirements of
         them are ANDed. The TopologySelectorTerm type implements a subset of the
         NodeSelectorTerm.
    
    FIELDS:
       matchExpressions	<[]Object>	#match on node labels (expressions)
         A list of node selector requirements by node's labels.
    
       matchFields	<[]Object>	#match on node fields 
         A list of node selector requirements by node's fields.
     
    [root@k8smaster node]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions
    KIND:     Pod
    VERSION:  v1
    
    RESOURCE: matchExpressions <[]Object>
    
    DESCRIPTION:
         A list of node selector requirements by node's labels.
    
         A node selector requirement is a selector that contains values, a key, and
         an operator that relates the key and values.
    
    FIELDS:
       key	<string> -required-		#the label key to check

       operator	<string> -required-	#how the key relates to the values (equal / not equal: In, NotIn, Exists, ...)

       values	<[]string>			#the label values to match against 
    #For node affinity, values are label values; the operator matches the key's label value against them (equality or inequality, set membership, and so on).

    1: Using requiredDuringSchedulingIgnoredDuringExecution (hard affinity)

    #Pull the nginx image on both node1 and node2 first
     
    [root@k8smaster node]# vim pod-nodeaffinity-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-node-affinity-demo
      namespace: default
      labels:
        app: myapp
        tier: frontend
    spec:
      containers:
      - name: myapp
        image: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: zone
                operator: In
                values:
                - foo
                - bar
    
    

    In this manifest, affinity holds nodeAffinity, and requiredDuringSchedulingIgnoredDuringExecution makes the rule a hard constraint. nodeSelectorTerms is a list of objects (hence the leading "-"), and so is matchExpressions; the key is zone, the operator In expresses set membership, and the values are foo and bar.
    The YAML therefore says: if any node in the cluster carries the label zone with value foo or bar, the pod may be scheduled onto that node. At the moment no node matches, because we have not labeled any node yet!

    [root@k8smaster node]# kubectl apply -f pod-nodeaffinity-demo.yaml 
    pod/pod-node-affinity-demo created
    [root@k8smaster node]# kubectl get pods -o wide | grep pod-node 
    pod-node-affinity-demo        0/1     Pending   0          11s    <none>        <none>     <none>          
    
    # The status is Pending, which means scheduling did not complete: no node has a zone label with value foo or bar, and because this is hard affinity the condition must be satisfied before the pod can be scheduled.
    [root@k8smaster node]# kubectl label nodes k8snode zone=foo
    node/k8snode labeled
    #After labeling the node with zone=foo, check again 
    [root@k8smaster node]# kubectl get pods -o wide    #now shows Running
    pod-node-affinity-demo        1/1     Running   0          4m4s   10.244.2.19   k8snode    <none>

    2: Using preferredDuringSchedulingIgnoredDuringExecution (soft affinity)
    [root@k8smaster node]# vim pod-nodeaffinity-demo-2.yaml 
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-node-affinity-demo-2
      namespace: default
      labels:
        app: myapp
        tier: frontend
    spec:
      containers:
      - name: myapp
        image: nginx
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 60
            preference:
              matchExpressions:
              - key: zone1
                operator: In
                values:
                - foo1
                - bar1
              
    #This is still node affinity, but soft this time: even if no worker node has the label, the pod is still scheduled. The weight (1-100) ranks nodes that do match; a higher weight makes such nodes more strongly preferred.
    [root@k8smaster node]# kubectl apply -f pod-nodeaffinity-demo-2.yaml 
    pod/pod-node-affinity-demo-2 created
    
    [root@k8smaster node]# kubectl get pods -o wide |grep demo-2
    pod-node-affinity-demo-2      1/1     Running   0          29s     10.244.1.20   k8snode2   <none>          
    
    
    #This shows that with soft affinity the pod can run even though no node defines the zone1 label 
     
    Node affinity is about the relationship between a pod and nodes: the conditions that are matched when the pod is scheduled onto a node.
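    To tie the two forms together, here is a sketch (not from the original article) of one pod that combines a hard requirement with a soft preference, and also shows the Exists operator, which matches on the presence of a key regardless of its value:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-node-affinity-combined
      namespace: default
    spec:
      containers:
      - name: myapp
        image: nginx
      affinity:
        nodeAffinity:
          #Hard rule: only nodes that have a zone label at all are candidates
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: zone
                operator: Exists
          #Soft rule: among those candidates, prefer nodes labeled disk=ceph
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 80
            preference:
              matchExpressions:
              - key: disk
                operator: In
                values:
                - ceph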
    

    Final words

    Writing is not easy. If you find this content helpful, please like, favorite, and follow to support me! If there are mistakes, please point them out in the comments and I will correct them promptly!
    Series currently being updated: Learning k8s from Scratch
    Thanks for reading. This article includes my personal understanding; if anything is wrong, please contact me and point it out~

  • Original article: https://blog.csdn.net/qq_45400861/article/details/126021889