• [Cloud Native | Kubernetes in Practice] 05. Advanced Pod Usage: Scheduling with Taints, Tolerations, and Affinity (Part 1)


    Table of Contents

    1. Labels

    1.1 What are labels?

    1.2 Labeling a Pod

    1.3 Viewing resource labels

    2. Node selectors

    2.1 nodeName: run a Pod on a specific node

    2.2 nodeSelector: schedule a Pod onto nodes with specific labels

    2.3 Using nodeName and nodeSelector together

    3. Node affinity

    3.1 Hard affinity: requiredDuringSchedulingIgnoredDuringExecution

    3.2 Soft affinity: preferredDuringSchedulingIgnoredDuringExecution


    1. Labels

    1.1 What are labels?

    A label is simply a key/value pair attached to an object such as a Pod. Labels are meant to express identifying attributes of an object, so you can tell at a glance what a Pod is for, and they can be used to group objects (for example by version or service type). Labels can be defined when an object is created and can be added or changed at any time afterwards. An object can carry multiple labels, but each key must be unique on that object. Once resources are labeled they are easy to manage in groups; for example, labeled Pods can be listed or deleted by label.

    In Kubernetes, most resource types can be labeled.
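
    Labels can also be declared directly in the manifest when the object is created. Below is a minimal sketch of what a labeled Pod manifest might look like; it mirrors the tomcat-test Pod and the app=tomcat label seen in the next section, but the actual pod-first.yaml used in this article may differ:

    apiVersion: v1
    kind: Pod
    metadata:
      name: tomcat-test
      labels:                     # plain key/value pairs attached to the object
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:latest      # image assumed for illustration
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080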

    1.2 Labeling a Pod

    # 1. Create the Pod
    [root@k8s-master01 pod-yaml]# kubectl apply -f pod-first.yaml
    # 2. List all Pods in the default namespace
    [root@k8s-master01 pod-yaml]# kubectl get pods
    NAME          READY   STATUS    RESTARTS   AGE
    tomcat-test   1/1     Running   0          6s
    # 3. Show the labels of all Pods in the default namespace
    [root@k8s-master01 pod-yaml]# kubectl get pods --show-labels
    NAME          READY   STATUS    RESTARTS   AGE   LABELS
    tomcat-test   1/1     Running   0          38s   app=tomcat
    # 4. Add the label release=v1 to the Pod tomcat-test
    [root@k8s-master01 pod-yaml]# kubectl label pods tomcat-test release=v1
    # 5. Show the labels of this specific Pod
    [root@k8s-master01 pod-yaml]# kubectl get pods tomcat-test --show-labels
    NAME          READY   STATUS    RESTARTS   AGE     LABELS
    tomcat-test   1/1     Running   0          4m47s   app=tomcat,release=v1

    1.3 Viewing resource labels

    # Show the labels of all Pods in a given namespace
    [root@k8s-master01 pod-yaml]# kubectl get pods -n kube-system --show-labels
    # Show all labels of a specific Pod in the default namespace
    [root@k8s-master01 pod-yaml]# kubectl get pods tomcat-test --show-labels
    # List Pods in the default namespace that have a label with key "release" (labels not shown)
    [root@k8s-master01 pod-yaml]# kubectl get pods -l release
    NAME          READY   STATUS    RESTARTS   AGE
    tomcat-test   1/1     Running   0          10m
    # List Pods whose label key is "release" and value is "v1" (labels not shown)
    [root@k8s-master01 pod-yaml]# kubectl get pods -l release=v1
    NAME          READY   STATUS    RESTARTS   AGE
    tomcat-test   1/1     Running   0          11m
    # Add a RELEASE column showing each Pod's release label value (-L adds a column, it does not filter)
    [root@k8s-master01 pod-yaml]# kubectl get pods -L release
    NAME          READY   STATUS    RESTARTS   AGE   RELEASE
    tomcat-test   1/1     Running   0          12m   v1
    # Print the values of several label keys (release, app) as extra columns
    [root@k8s-master01 pod-yaml]# kubectl get pods -L release,app
    NAME          READY   STATUS    RESTARTS   AGE   RELEASE   APP
    tomcat-test   1/1     Running   0          13m   v1        tomcat
    # Show the labels of all Pods in all namespaces
    [root@k8s-master01 pod-yaml]# kubectl get pods --all-namespaces --show-labels
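
    Besides the equality-based form shown above, kubectl label selectors also accept set-based expressions. A couple of hedged examples (the value v2 and the label app=nginx are made up for illustration):

    # Pods whose release label is either v1 or v2
    [root@k8s-master01 pod-yaml]# kubectl get pods -l 'release in (v1,v2)'
    # Pods that have a release label and whose app label is not nginx
    [root@k8s-master01 pod-yaml]# kubectl get pods -l 'release,app!=nginx'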

    2. Node selectors

    When a Pod is created, the scheduler places it on a worker node, which by default could be any of them. What if we want the Pod to land on a specific node, or on nodes that share a certain characteristic? We can use the nodeName or nodeSelector field in the Pod spec to control which node the Pod is scheduled to.

    2.1 nodeName: run a Pod on a specific node

    # 1. Pull the tomcat and busybox images on node1 and node2
    [root@k8s-node1 ~]# docker pull tomcat
    [root@k8s-node1 ~]# docker pull busybox
    [root@k8s-node2 ~]# docker pull tomcat
    [root@k8s-node2 ~]# docker pull busybox
    # 2. Write the manifest
    [root@k8s-master01 pod-yaml]# vi pod-node.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
      namespace: default
      labels:
        app: myapp
        env: dev
    spec:
      nodeName: k8s-node2        # pin the Pod to the k8s-node2 node
      containers:
      - name: tomcat-pod
        ports:
        - containerPort: 8080
        image: tomcat:latest
        imagePullPolicy: IfNotPresent
      - name: busybox
        image: busybox:latest
        command:                 # run the shell with -c so the container keeps executing "sleep 3600"
        - "/bin/sh"
        - "-c"
        - "sleep 3600"
    # 3. Create the Pod
    [root@k8s-master01 pod-yaml]# kubectl apply -f pod-node.yaml
    pod/demo-pod created
    [root@k8s-master01 pod-yaml]# kubectl get pods
    NAME          READY   STATUS              RESTARTS   AGE
    demo-pod      0/2     ContainerCreating   0          7s
    tomcat-test   1/1     Running             0          32m
    # 4. Check which node the Pods were scheduled to
    [root@k8s-master01 pod-yaml]# kubectl get pods -o wide
    NAME          READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
    demo-pod      2/2     Running   0          53s   10.244.169.140   k8s-node2   <none>           <none>
    tomcat-test   1/1     Running   0          33m   10.244.169.139   k8s-node2   <none>           <none>
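
    As a quick cross-check, the node a Pod landed on can also be read straight out of its spec; a small sketch using a standard jsonpath query (the output here simply echoes the placement shown above):

    [root@k8s-master01 pod-yaml]# kubectl get pod demo-pod -o jsonpath='{.spec.nodeName}'
    k8s-node2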

    2.2 nodeSelector: schedule a Pod onto nodes with specific labels

    # 1. Show the labels of all nodes in the cluster
    [root@k8s-master01 pod-yaml]# kubectl get nodes --show-labels
    NAME           STATUS   ROLES           AGE   VERSION   LABELS
    k8s-master01   Ready    control-plane   37d   v1.25.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
    k8s-node1      Ready    work            37d   v1.25.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,node-role.kubernetes.io/work=work
    k8s-node2      Ready    work            37d   v1.25.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,node-role.kubernetes.io/work=work
    # 2. Write the manifest
    [root@k8s-master01 pod-yaml]# vi pod-1.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod-1
      namespace: default
      labels:
        app: myapp
        env: dev
    spec:
      nodeSelector:
        disk: ceph               # only schedule onto nodes labeled disk=ceph
      containers:
      - name: tomcat-pod-1
        ports:
        - containerPort: 8080
        image: tomcat:latest
        imagePullPolicy: IfNotPresent
    # 3. Create the Pod
    [root@k8s-master01 pod-yaml]# kubectl apply -f pod-1.yaml
    # 4. Check the Pod status: demo-pod-1 stays Pending
    [root@k8s-master01 pod-yaml]# kubectl get pods
    NAME          READY   STATUS    RESTARTS   AGE
    demo-pod      2/2     Running   0          15m
    demo-pod-1    0/1     Pending   0          6s
    tomcat-test   1/1     Running   0          48m
    # 5. Inspect the Pod for details
    [root@k8s-master01 pod-yaml]# kubectl describe pods demo-pod-1

    As kubectl describe shows, demo-pod-1 fails to schedule and none of the three nodes is usable, because no node carries the label disk=ceph.

    Solution:

    # 1. Label node1 with disk=ceph
    [root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node1 disk=ceph
    # 2. Check the Pod status again
    [root@k8s-master01 pod-yaml]# kubectl get pods
    NAME          READY   STATUS    RESTARTS   AGE
    demo-pod      2/2     Running   0          25m
    demo-pod-1    1/1     Running   0          10m
    tomcat-test   1/1     Running   0          58m
    # 3. Check where the Pods were scheduled
    [root@k8s-master01 pod-yaml]# kubectl get pods -o wide
    NAME          READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
    demo-pod      2/2     Running   0          27m   10.244.169.140   k8s-node2   <none>           <none>
    demo-pod-1    1/1     Running   0          11m   10.244.36.75     k8s-node1   <none>           <none>
    tomcat-test   1/1     Running   0          60m   10.244.169.139   k8s-node2   <none>           <none>
    # 4. Check the node labels again
    [root@k8s-master01 pod-yaml]# kubectl get nodes --show-labels
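
    Instead of scanning the full --show-labels output, label selectors also work on nodes, which makes it easy to confirm which nodes carry disk=ceph (an illustrative sketch, not part of the original walkthrough):

    # Only the nodes labeled disk=ceph
    [root@k8s-master01 pod-yaml]# kubectl get nodes -l disk=ceph
    # Show each node's disk label value as an extra column
    [root@k8s-master01 pod-yaml]# kubectl get nodes -L disk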

    After finishing the experiment above, delete all Pods in the default namespace with kubectl delete pods <pod-name>:

    [root@k8s-master01 ~]# kubectl delete pods demo-pod demo-pod-1 tomcat-test
    pod "demo-pod" deleted
    pod "demo-pod-1" deleted
    pod "tomcat-test" deleted
    [root@k8s-master01 ~]# kubectl get pods
    No resources found in default namespace.
    # Remove the disk=ceph label from node1: replacing "=value" with "-" deletes the label
    [root@k8s-master01 ~]# kubectl label nodes k8s-node1 disk-
    # Verify that the label is gone
    [root@k8s-master01 ~]# kubectl get nodes k8s-node1 --show-labels

    2.3 Using nodeName and nodeSelector together

    [root@k8s-master01 pod-yaml]# vi pod-1.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod-1
      namespace: default
      labels:
        app: myapp
        env: dev
    spec:
      nodeName: k8s-node2
      nodeSelector:
        disk: ceph
      containers:
      - name: tomcat-pod-1
        ports:
        - containerPort: 8080
        image: tomcat:latest
        imagePullPolicy: IfNotPresent
    # Create the Pod and check its status
    [root@k8s-master01 pod-yaml]# kubectl apply -f pod-1.yaml
    [root@k8s-master01 pod-yaml]# kubectl get pods
    NAME         READY   STATUS         RESTARTS   AGE
    demo-pod-1   0/1     NodeAffinity   0          7s
    [root@k8s-master01 pod-yaml]# kubectl describe pods demo-pod-1

    The Pod is created but cannot be scheduled; the failure reason can be seen in the kubectl describe output.

    Conclusion: when a Pod defines both nodeName and nodeSelector in the same manifest, the target node must satisfy both conditions; if either one is not met, scheduling fails.

    # Label node2 with disk=ceph
    [root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node2 disk=ceph
    [root@k8s-master01 pod-yaml]# kubectl delete pods demo-pod-1
    [root@k8s-master01 pod-yaml]# kubectl apply -f pod-1.yaml
    # Scheduling succeeds
    [root@k8s-master01 pod-yaml]# kubectl get pods -o wide
    NAME         READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
    demo-pod-1   1/1     Running   0          5s    10.244.169.143   k8s-node2   <none>           <none>

    Conclusion: with both nodeName and nodeSelector defined, the Pod is scheduled successfully once the node named by nodeName (k8s-node2) also carries the label required by nodeSelector (disk=ceph), i.e. once both conditions are met by the same node.

    3. Node affinity

    Node affinity scheduling: nodeAffinity

    Official documentation: Assigning Pods to Nodes | Kubernetes

    # Use kubectl explain to drill down through the fields level by level
    [root@k8s-master01 pod-yaml]# kubectl explain pods.spec.affinity
    KIND:     Pod
    VERSION:  v1
    RESOURCE: affinity <Object>
    DESCRIPTION:
         If specified, the pod's scheduling constraints
         Affinity is a group of affinity scheduling rules.
    FIELDS:
       nodeAffinity <Object>
         Describes node affinity scheduling rules for the pod.
       podAffinity <Object>
         ......
       podAntiAffinity <Object>
         ......
    [root@k8s-master01 pod-yaml]# kubectl explain pods.spec.affinity.nodeAffinity
    KIND:     Pod
    VERSION:  v1
    RESOURCE: nodeAffinity <Object>
    DESCRIPTION:
         Describes node affinity scheduling rules for the pod.
         Node affinity is a group of node affinity scheduling rules.
    FIELDS:
       preferredDuringSchedulingIgnoredDuringExecution <[]Object>
         The scheduler will prefer to schedule pods to nodes that satisfy the
         affinity expressions specified by this field, but it may choose a node that
         violates one or more of the expressions. The node that is most preferred is
         the one with the greatest sum of weights, i.e. for each node that meets all
         of the scheduling requirements (resource request, requiredDuringScheduling
         affinity expressions, etc.), compute a sum by iterating through the
         elements of this field and adding "weight" to the sum if the node matches
         the corresponding matchExpressions; the node(s) with the highest sum are
         the most preferred.
       requiredDuringSchedulingIgnoredDuringExecution <Object>
         If the affinity requirements specified by this field are not met at
         scheduling time, the pod will not be scheduled onto the node. If the
         affinity requirements specified by this field cease to be met at some point
         during pod execution (e.g. due to an update), the system may or may not try
         to eventually evict the pod from its node.
    # preferred...: the scheduler tries to place the Pod on a node that matches this rule, but it is not mandatory (soft affinity)
    # required...: a node must satisfy this rule, otherwise the Pod will not be scheduled (hard affinity)
    [root@k8s-master01 pod-yaml]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution
    KIND:     Pod
    VERSION:  v1
    RESOURCE: requiredDuringSchedulingIgnoredDuringExecution <Object>
    DESCRIPTION:
         If the affinity requirements specified by this field are not met at
         scheduling time, the pod will not be scheduled onto the node. If the
         affinity requirements specified by this field cease to be met at some point
         during pod execution (e.g. due to an update), the system may or may not try
         to eventually evict the pod from its node.
         A node selector represents the union of the results of one or more label
         queries over a set of nodes; that is, it represents the OR of the selectors
         represented by the node selector terms.
    FIELDS:
       nodeSelectorTerms <[]Object> -required-
         Required. A list of node selector terms. The terms are ORed.
    [root@k8s-master01 pod-yaml]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms
    KIND:     Pod
    VERSION:  v1
    RESOURCE: nodeSelectorTerms <[]Object>
    DESCRIPTION:
         Required. A list of node selector terms. The terms are ORed.
         A null or empty node selector term matches no objects. The requirements of
         them are ANDed. The TopologySelectorTerm type implements a subset of the
         NodeSelectorTerm.
    FIELDS:
       # match by node labels (expressions)
       matchExpressions <[]Object>
         A list of node selector requirements by node's labels.
       # match by node fields
       matchFields <[]Object>
         A list of node selector requirements by node's fields.
    [root@k8s-master01 pod-yaml]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields
    KIND:     Pod
    VERSION:  v1
    RESOURCE: matchFields <[]Object>
    DESCRIPTION:
         A list of node selector requirements by node's fields.
         A node selector requirement is a selector that contains values, a key, and
         an operator that relates the key and values.
    FIELDS:
       # the label key the selector applies to
       key <string> -required-
         The label key that the selector applies to.
       # how the key relates to the values (equality, set membership, existence, comparison)
       operator <string> -required-
         Represents a key's relationship to a set of values. Valid operators are In,
         NotIn, Exists, DoesNotExist. Gt, and Lt.
         Possible enum values:
         - `"DoesNotExist"`
         - `"Exists"`
         - `"Gt"`
         - `"In"`
         - `"Lt"`
         - `"NotIn"`
       # the values to compare against
       values <[]string>
         An array of string values. If the operator is In or NotIn, the values array
         must be non-empty. If the operator is Exists or DoesNotExist, the values
         array must be empty. If the operator is Gt or Lt, the values array must
         have a single element, which will be interpreted as an integer. This array
         is replaced during a strategic merge patch.
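
    The examples in the next two subsections only use the In operator, but matchExpressions accepts the full operator list shown above. Here is a hedged sketch of a hard-affinity rule combining Exists and Gt (the label keys disk and cpu-count are hypothetical; requirements inside a single matchExpressions list are ANDed):

    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disk              # hypothetical label key; Exists matches any value
                operator: Exists
              - key: cpu-count         # hypothetical numeric label; Gt/Lt take exactly one value, read as an integer
                operator: Gt
                values:
                - "8"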

    3.1 Hard affinity: requiredDuringSchedulingIgnoredDuringExecution

    # If any node in the cluster has the label zone with value foo or bar, the Pod can be scheduled onto that node
    [root@k8s-master01 pod-yaml]# vim pod-nodeaffinity-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-node-affinity-demo
      namespace: default
      labels:
        app: myapp
        tier: frontend
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: zone
                operator: In
                values:
                - foo
                - bar
      containers:
      - name: myapp
        image: nginx:latest
        imagePullPolicy: IfNotPresent
    [root@k8s-master01 pod-yaml]# kubectl apply -f pod-nodeaffinity-demo.yaml
    [root@k8s-master01 pod-yaml]# kubectl get pods
    NAME                     READY   STATUS    RESTARTS   AGE
    demo-pod-1               1/1     Running   0          38m
    pod-node-affinity-demo   0/1     Pending   0          8s
    [root@k8s-master01 pod-yaml]# kubectl describe pods pod-node-affinity-demo

    The Pod's status is Pending, which means it has not been scheduled: no node has a zone label with value foo or bar, and since this is hard affinity the condition must be met before the Pod can be scheduled.

    # Label node1 with zone=foo
    [root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node1 zone=foo
    node/k8s-node1 labeled
    [root@k8s-master01 pod-yaml]# kubectl get pods
    NAME                     READY   STATUS    RESTARTS   AGE
    demo-pod-1               1/1     Running   0          43m
    pod-node-affinity-demo   1/1     Running   0          4m47s
    # Check again: the Pod has been scheduled
    [root@k8s-master01 pod-yaml]# kubectl get pods -o wide
    NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
    demo-pod-1               1/1     Running   0          43m     10.244.169.143   k8s-node2   <none>           <none>
    pod-node-affinity-demo   1/1     Running   0          4m52s   10.244.36.79     k8s-node1   <none>           <none>

    3.2 Soft affinity: preferredDuringSchedulingIgnoredDuringExecution

    [root@k8s-master01 pod-yaml]# vim pod-nodeaffinity-demo-2.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-node-affinity-demo-2
      namespace: default
      labels:
        app: myapp02
        tier: frontend
    spec:
      containers:
      - name: myapp02
        image: nginx:latest
        imagePullPolicy: IfNotPresent
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: zone1
                operator: In
                values:
                - foo1
                - bar1
            weight: 10
          - preference:
              matchExpressions:
              - key: zone2
                operator: In
                values:
                - foo2
                - bar2
            weight: 20
    [root@k8s-master01 pod-yaml]# kubectl apply -f pod-nodeaffinity-demo-2.yaml
    [root@k8s-master01 pod-yaml]# kubectl get pods
    NAME                       READY   STATUS    RESTARTS        AGE
    demo-pod-1                 1/1     Running   1 (9m39s ago)   24h
    pod-node-affinity-demo     1/1     Running   1 (9m42s ago)   23h
    pod-node-affinity-demo-2   1/1     Running   0               4s
    [root@k8s-master01 pod-yaml]# kubectl get pods -o wide
    NAME                       READY   STATUS    RESTARTS        AGE   IP               NODE        NOMINATED NODE   READINESS GATES
    demo-pod-1                 1/1     Running   1 (9m52s ago)   24h   10.244.169.144   k8s-node2   <none>           <none>
    pod-node-affinity-demo     1/1     Running   1 (9m55s ago)   23h   10.244.36.81     k8s-node1   <none>           <none>
    pod-node-affinity-demo-2   1/1     Running   0               17s   10.244.169.145   k8s-node2   <none>           <none>

    This shows that with soft affinity the Pod still runs even though no node carries the zone1 or zone2 labels: the Pod is scheduled anyway, to whichever node the scheduler happens to pick.

    Next, let's test the weight field. weight is a relative value: the higher the weight of a matching preference, the more likely the Pod is to be scheduled onto a node that matches it.

    # Label node1 and node2:
    [root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node1 zone1=foo1
    [root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node2 zone2=foo2
    # Delete the old Pod:
    [root@k8s-master01 pod-yaml]# kubectl delete pods pod-node-affinity-demo-2
    # Change the weights: raise the zone1 (node1) preference to 40
    [root@k8s-master01 pod-yaml]# vim pod-nodeaffinity-demo-2.yaml
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: zone1
                operator: In
                values:
                - foo1
                - bar1
            weight: 40
          - preference:
              matchExpressions:
              - key: zone2
                operator: In
                values:
                - foo2
                - bar2
            weight: 20
    # This time the Pod lands on node1; earlier it had been placed on node2 at random
    [root@k8s-master01 pod-yaml]# kubectl apply -f pod-nodeaffinity-demo-2.yaml
    pod/pod-node-affinity-demo-2 created
    [root@k8s-master01 pod-yaml]# kubectl get pods -o wide
    NAME                       READY   STATUS    RESTARTS      AGE   IP               NODE        NOMINATED NODE   READINESS GATES
    demo-pod-1                 1/1     Running   1 (20m ago)   24h   10.244.169.144   k8s-node2   <none>           <none>
    pod-node-affinity-demo     1/1     Running   1 (20m ago)   23h   10.244.36.81     k8s-node1   <none>           <none>
    pod-node-affinity-demo-2   1/1     Running   0             4s    10.244.36.83     k8s-node1   <none>           <none>

    Conclusion: when several nodes match a Pod's node affinity preferences, the scheduler sums the weights of the preferences each node matches and favors the node with the higher total. Here k8s-node1 matches the zone1 rule with weight 40, while k8s-node2 only matches the zone2 rule with weight 20, so the Pod is scheduled to k8s-node1. With the original weights (10 vs. 20), the zone2=foo2 rule would have scored higher and node2 would have been preferred instead.

    # Clean up the labels and Pods from this experiment, to prepare for the next one
    [root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node1 zone1-
    node/k8s-node1 unlabeled
    [root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node1 zone-
    node/k8s-node1 unlabeled
    [root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node2 zone2-
    node/k8s-node2 unlabeled
    [root@k8s-master01 pod-yaml]# kubectl delete pods demo-pod-1 pod-node-affinity-demo pod-node-affinity-demo-2

    Previous article: [Cloud Native | Kubernetes in Practice] 04. Namespaces and Resource Quotas

    Next article: Advanced Pod Usage: Scheduling with Taints, Tolerations, and Affinity (Part 2)

  • Original article: https://blog.csdn.net/weixin_46560589/article/details/128086399