• [Cloud Native | Kubernetes Series] ---- Affinity and Anti-Affinity


    Affinity and Anti-Affinity

    1. nodeSelector (affinity)

    1.1 Adding a label to a node

    root@k8s-master-01:~# kubectl label node 192.168.31.113 disktype=ssd
    node/192.168.31.113 labeled
    root@k8s-master-01:~# kubectl describe node 192.168.31.113
    Name:               192.168.31.113
    Roles:              node
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        ####### the label added above ####
                        disktype=ssd
                        kubernetes.io/arch=amd64
                        kubernetes.io/hostname=192.168.31.113
                        kubernetes.io/os=linux
                        kubernetes.io/role=node
    Annotations:        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Tue, 02 Aug 2022 12:45:15 +0800
    Taints:             <none>
    Unschedulable:      false
    Lease:
      HolderIdentity:  192.168.31.113
    

    To deploy a Pod onto the labeled node, simply add a nodeSelector to the Pod spec; the Pod will then only be scheduled onto nodes carrying that label.

          nodeSelector:
            disktype: ssd
    
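    For context, here is a minimal Deployment sketch with that selector in place. The deployment name, namespace, labels and image are reused from other examples in this article; treat the sketch as illustrative rather than the exact manifest used:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: wework-tomcat-app2-deployment
      namespace: wework
    spec:
      replicas: 5
      selector:
        matchLabels:
          app: wework-tomcat-app2-selector
      template:
        metadata:
          labels:
            app: wework-tomcat-app2-selector
        spec:
          containers:
          - name: wework-tomcat-app2-container
            image: harbor.intra.com/wework/tomcat-app1:2022-08-25_14_21_11  # image reused from a later example
            ports:
            - containerPort: 8080
          nodeSelector:
            disktype: ssd        # only nodes carrying this label are eligible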
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods -n wework -o wide
    wework-tomcat-app2-deployment-7c7f55fd9d-7xdlk   1/1     Running   0            15m    172.100.76.156    192.168.31.113   <none>           <none>
    wework-tomcat-app2-deployment-7c7f55fd9d-gwqz5   1/1     Running   0            15m    172.100.76.155    192.168.31.113   <none>           <none>
    wework-tomcat-app2-deployment-7c7f55fd9d-k79qd   1/1     Running   0            15m    172.100.76.154    192.168.31.113   <none>           <none>
    wework-tomcat-app2-deployment-7c7f55fd9d-lj9rx   1/1     Running   0            15m    172.100.76.160    192.168.31.113   <none>           <none>
    wework-tomcat-app2-deployment-7c7f55fd9d-xhl2n   1/1     Running   0            15m    172.100.76.158    192.168.31.113   <none>           <none>
    

    1.2 Removing a node label

    kubectl label node 192.168.31.114 disktype-
    
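    To confirm the label is gone, the same describe/grep check used above works (a quick sanity check; it should print nothing once the label has been removed):

    root@k8s-master-01:~# kubectl describe node 192.168.31.114 | grep disktype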

    2. nodeName (affinity)

    Setting nodeName in the Pod template's spec also pins the Pod to a specific node.

      template:
        metadata:
          labels:
            app: wework-tomcat-app2-selector
        spec:
          nodeName: 192.168.31.114
    

    3. Affinity

    Similar to nodeSelector, affinity lets you constrain how Pods are scheduled across nodes. Two modes are supported:
    requiredDuringSchedulingIgnoredDuringExecution: a hard requirement; the Pod is scheduled only if the condition is met, otherwise it is not scheduled.
    preferredDuringSchedulingIgnoredDuringExecution: a soft preference; if no node matches, the Pod can still be scheduled onto a non-matching node.
    IgnoredDuringExecution means that if a node's labels change after the Pod is already running, the running Pod is not affected.
    Affinity and anti-affinity are more expressive than nodeSelector:

    1. Label matching supports AND as well as the operators In, NotIn, Exists, DoesNotExist, Gt and Lt.
    2. Both soft and hard matching are available; with soft matching, if the scheduler cannot find a matching node, the Pod is still scheduled onto a non-matching node.
    3. Affinity policies can also be defined between Pods, e.g. which Pods may or may not be scheduled onto the same node.
    • In: the label's value is in the given list
    • NotIn: the label's value is not in the given list
    • Gt: the label's value is greater than the given value (written as a string, compared as an integer)
    • Lt: the label's value is less than the given value
    • Exists: the specified label exists on the node (a short sketch using Exists and Gt follows this list)
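    The cases below only exercise In and NotIn. As a hedged illustration (the cpu-core-count label is hypothetical), Exists and Gt are written like this:

          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: disktype
                    operator: Exists          # the node must carry a disktype label, value irrelevant
                  - key: cpu-core-count       # hypothetical label holding an integer value
                    operator: Gt
                    values:
                    - "8"                     # Gt/Lt take a single value, compared as an integer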

    3.1 Hard policy

    If nothing matches, the Pod is not scheduled.
    requiredDuringSchedulingIgnoredDuringExecution
    When a matchExpressions term contains a single key, matching any one of that key's values is enough for the Pod to be scheduled onto a matching node.
    Multiple nodeSelectorTerms are ORed together.

          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions: # term 1: matching any of the listed values allows scheduling
                  - key: disktype
                    operator: In
                    values:
                    - ssd # matching just one value is enough
                    - hdd
                - matchExpressions: # term 2: terms are ORed, so matching any single value in any single term is enough
                  - key: project
                    operator: In
                    values:
                    - mmm  # even if neither of term 2's values matches, the Pod can still be scheduled via term 1
                    - nnn
    

    When a single matchExpressions term contains multiple keys, all of the keys must be satisfied for the Pod to be scheduled; within one key, matching any one of its values is enough.
    For the disktype key, either ssd or hdd satisfies the expression.
    The project key must also be satisfied.
    disktype and project are ANDed.
    ssd and hdd are ORed.

          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions: # term 1
                  - key: disktype
                    operator: In
                    values:
                    - ssd
                    - hdd # within one key, matching any single value is enough
                  - key: project # both expressions in this term must match, otherwise the Pod is not scheduled
                    operator: In
                    values:
                    - wework
    

    3.2 Soft policy

    If a preference matches, the Pod is scheduled onto the matching node; even if nothing matches, the Pod is still scheduled somewhere.

          affinity:
            nodeAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 80
                preference:
                  matchExpressions:
                  - key: project
                    operator: In
                    values:
                      - weworkX
              - weight: 60
                preference:
                  matchExpressions:
                  - key: disktype
                    operator: In
                    values:
                      - ssdX
    

    3.3 Combining soft and hard policies

    The hard policy is an anti-affinity (NotIn): the Pod is never scheduled onto master nodes.
    The soft policy is an affinity: prefer nodes whose project label is wework or whose disktype label is ssd; if neither matches, the scheduler scores the remaining nodes and places the Pod elsewhere.
    Each preference carries a weight, and when the preferences match different nodes the higher-weighted one wins. Below, the disktype preference is first given the higher weight and then lowered below project's.
    Label two of the worker nodes:

    kubectl label nodes 192.168.31.113 disktype=hdd project=wework
    kubectl label nodes 192.168.31.114 disktype=ssd project=linux
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl describe node 192.168.31.113|egrep "project|disktype"
                        disktype=hdd
                        project=wework
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl describe node 192.168.31.114|egrep "project|disktype"
                        disktype=ssd
                        project=linux
    

    In the YAML, the project=wework preference is given weight 80 and the disktype=ssd preference weight 100:

          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution: # hard requirement
                nodeSelectorTerms:
                - matchExpressions: # hard match term 1
                  - key: "kubernetes.io/role"
                    operator: NotIn
                    values:
                    - "master" # the node's kubernetes.io/role label must not be master, i.e. never schedule onto master nodes (node anti-affinity)
              preferredDuringSchedulingIgnoredDuringExecution: # soft preference
              - weight: 80
                preference:
                  matchExpressions:
                  - key: project
                    operator: In
                    values:
                      - wework
              - weight: 100
                preference:
                  matchExpressions:
                  - key: disktype
                    operator: In
                    values:
                      - ssd
    

    After deploying, the Pod is scheduled onto 192.168.31.114, the node matching the higher-weighted disktype=ssd preference.

    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods wework-tomcat-app2-deployment-7874c4c896-b8ckm  -n  wework -o wide
    NAME                                             READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
    wework-tomcat-app2-deployment-7874c4c896-b8ckm   1/1     Running   0          48s   172.100.55.139   192.168.31.114   <none>           <none>
    

    Lower the disktype weight from 100 to 70 and deploy again:

          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution: # hard requirement
                nodeSelectorTerms:
                - matchExpressions: # hard match term 1
                  - key: "kubernetes.io/role" 
                    operator: NotIn
                    values:
                    - "master" # the node's kubernetes.io/role label must not be master, i.e. never schedule onto master nodes (node anti-affinity)
              preferredDuringSchedulingIgnoredDuringExecution: # soft preference
              - weight: 80 
                preference: 
                  matchExpressions: 
                  - key: project 
                    operator: In 
                    values: 
                      - wework
              - weight: 70 
                preference: 
                  matchExpressions: 
                  - key: disktype
                    operator: In 
                    values: 
                      - ssd
    

    Now the Pod is scheduled onto 192.168.31.113, the node matching the now higher-weighted project=wework preference.

    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods  wework-tomcat-app2-deployment-765cc8fbc8-8hjfm -n  wework -o wide
    NAME                                             READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
    wework-tomcat-app2-deployment-765cc8fbc8-8hjfm   1/1     Running   0          55s   172.100.76.157   192.168.31.113   <none>           <none>
    

    4. Pod affinity and anti-affinity

    Pod affinity and anti-affinity match against the labels of Pods already running on the nodes, and the Pod labels must be qualified with a namespace.
    Affinity places a new Pod on a node that already runs Pods with the given labels, which can reduce network traffic between the Pods.
    Anti-affinity avoids placing a new Pod on nodes that run Pods with the given labels, which is useful for partitioning resources between projects or for high availability.

    Valid operators for Pod affinity and anti-affinity are In, NotIn, Exists and DoesNotExist.
    topologyKey must not be empty.

    4.1 Pod affinity

    Create an nginx Pod (a fuller sketch of its Deployment manifest is shown after the label snippet below).

    The nginx Pod carries the following labels:

      template:
        metadata:
          labels:
            app: nginx
            project: wework
    
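    For reference, a minimal sketch of the Deployment part of case4-4.1-nginx.yaml (the deployment name and labels come from the snippet above and the output below; the image tag and port are assumptions, and the accompanying Service is not shown):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: python-nginx-deployment
      namespace: wework
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
          project: wework
      template:
        metadata:
          labels:
            app: nginx
            project: wework
        spec:
          containers:
          - name: nginx
            image: nginx:1.22        # illustrative image tag
            ports:
            - containerPort: 80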

    The Pod is scheduled onto the 192.168.31.111 node:

    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl apply -f case4-4.1-nginx.yaml
    deployment.apps/python-nginx-deployment created
    service/python-nginx-service configured
    
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods python-nginx-deployment-79498fbd8-mcb9f -n wework  -o wide
    NAME                                      READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
    python-nginx-deployment-79498fbd8-mcb9f   1/1     Running   0          69s   172.100.109.93   192.168.31.111   <none>           <none>
    

    4.1.1 Soft limit

    preferredDuringSchedulingIgnoredDuringExecution
    The Tomcat Pod prefers the node that runs Pods in the wework namespace with label project=wework; if that node lacks resources or the label match fails, the Pod is created on another node.

          affinity:
            podAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                    - key: project
                      operator: In
                      values:
                        - wework
                  topologyKey: kubernetes.io/hostname
                  namespaces:
                    - wework
    

    The newly created Tomcat Pod is scheduled onto the same node as the nginx Pod.

    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods -n wework -o wide|grep app2 
    wework-tomcat-app2-deployment-847575c495-svqp7   1/1     Running   0              40s    172.100.109.81    192.168.31.111   <none>           <none>
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods -n wework -o wide|grep nginx
    python-nginx-deployment-79498fbd8-mcb9f          1/1     Running   0              18m    172.100.109.93    192.168.31.111   <none>           <none>
    

    Now scale up the Tomcat deployment (see the command sketch below).
    Because this is a soft preference, once 192.168.31.111 runs out of resources, new Pods are created on other nodes.
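    One way to do the scale-up, mirroring the scale commands used later in this article (the replica count of 3 matches the output below):

    kubectl scale --replicas=3 deploy/wework-tomcat-app2-deployment -n wework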

    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods -n wework -o wide|grep app2
    wework-tomcat-app2-deployment-847575c495-42zjp   1/1     Running   0              98s    172.100.109.99    192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-847575c495-d4lbm   1/1     Running   0              78s    172.100.109.90    192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-847575c495-mmsgz   1/1     Running   0              78s    172.100.76.164    192.168.31.113   <none>           <none>
    

    4.1.2 Hard limit

    requiredDuringSchedulingIgnoredDuringExecution
    The Tomcat Pod must run on the same node as Pods in the wework namespace with label project=wework; if that node lacks resources or the match fails, the Pod cannot be created.

          affinity:
            podAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: project
                    operator: In
                    values:
                      - wework
                topologyKey: "kubernetes.io/hostname"
                namespaces:
                  - wework
    
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl apply -f case4-4.3-podaffinity-requiredDuring.yaml 
    deployment.apps/wework-tomcat-app2-deployment created
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods -n wework -o wide|grep app2
    wework-tomcat-app2-deployment-5d7457bc49-6mvn2   1/1     Running   0               3s     172.100.109.96    192.168.31.111   <none>           <none>
    

    Now give each Pod a minimum request of 512Mi of memory and scale up; some Pods can no longer be created because the target node runs out of memory (a sketch of the request follows).
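    A minimal sketch of where that request goes (container name and image as in the earlier manifests; only the resources block matters here):

          containers:
          - name: wework-tomcat-app2-container
            image: harbor.intra.com/wework/tomcat-app1:2022-08-25_14_21_11
            resources:
              requests:
                memory: "512Mi"    # each replica must be able to reserve 512Mi on the target node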

    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods -n wework -o wide|grep app2
    wework-tomcat-app2-deployment-5b6c577ff7-2pt2b   0/1     ContainerCreating   0               36s    <none>            192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5b6c577ff7-798zz   1/1     Running             0               41s    172.100.109.118   192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5b6c577ff7-8j6vl   1/1     Running             0               41s    172.100.109.121   192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5b6c577ff7-8vnms   1/1     Running             0               41s    172.100.109.126   192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5b6c577ff7-jz9x9   1/1     Running             0               41s    172.100.109.117   192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5b6c577ff7-nw9bd   1/1     Running             0               39s    172.100.109.119   192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5b6c577ff7-p69pd   1/1     Running             0               36s    172.100.109.69    192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5b6c577ff7-v5wrw   1/1     Running             0               38s    172.100.109.124   192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5b6c577ff7-vp7vz   1/1     Running             0               41s    172.100.109.127   192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5b6c577ff7-xnrn6   1/1     Running             0               39s    172.100.109.123   192.168.31.111   <none>           <none>
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl describe pods/wework-tomcat-app2-deployment-5b6c577ff7-2pt2b -n wework
    ... (output omitted)
    Events:
      Type     Reason            Age   From               Message
      ----     ------            ----  ----               -------
      Warning  FailedScheduling  27s   default-scheduler  0/7 nodes are available: 1 Insufficient memory, 3 node(s) didn't match pod affinity rules, 3 node(s) were unschedulable.
      Warning  FailedScheduling  26s   default-scheduler  0/7 nodes are available: 1 Insufficient memory, 3 node(s) didn't match pod affinity rules, 3 node(s) were unschedulable.
    

    4.2 Pod anti-affinity

    4.2.1 Hard limit

    requiredDuringSchedulingIgnoredDuringExecution

        spec:
          containers:
          - name: wework-tomcat-app2-container
            image: harbor.intra.com/wework/tomcat-app1:2022-08-25_14_21_11
            imagePullPolicy: IfNotPresent
            #imagePullPolicy: Always
            ports:
            - containerPort: 8080
              protocol: TCP
              name: http
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: project
                    operator: In
                    values:
                      - wework
                topologyKey: "kubernetes.io/hostname"
                namespaces:
                  - wework
    

    Because of the anti-affinity rule, the Tomcat Pod is created on a node other than the one running nginx.

    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl apply -f case4-4.4-podAntiAffinity-requiredDuring.yaml
    deployment.apps/wework-tomcat-app2-deployment created
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods -n wework -o wide|egrep "nginx|app2"
    python-nginx-deployment-79498fbd8-mcb9f          1/1     Running   0               110m   172.100.109.93    192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-6b758866f-4h9gl    1/1     Running   0               35s    172.100.76.166    192.168.31.113   <none>           <none>
    

    Even when more replicas are created, none of these Pods are ever placed on the same node as nginx.

    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl apply -f case4-4.4-podAntiAffinity-requiredDuring.yaml
    deployment.apps/wework-tomcat-app2-deployment configured
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods -n wework -o wide|egrep "nginx|app2"
    python-nginx-deployment-79498fbd8-mcb9f          1/1     Running   0               113m   172.100.109.93    192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-6b758866f-4h9gl    1/1     Running   0               3m4s   172.100.76.166    192.168.31.113   <none>           <none>
    wework-tomcat-app2-deployment-6b758866f-7btdc    1/1     Running   0               4s     172.100.140.71    192.168.31.112   <none>           <none>
    wework-tomcat-app2-deployment-6b758866f-7v5c5    1/1     Running   0               4s     172.100.55.141    192.168.31.114   <none>           <none>
    wework-tomcat-app2-deployment-6b758866f-dsm76    1/1     Running   0               4s     172.100.76.167    192.168.31.113   <none>           <none>
    wework-tomcat-app2-deployment-6b758866f-zn2c2    1/1     Running   0               4s     172.100.55.142    192.168.31.114   <none>           <none>
    

    4.2.2 Soft limit

    preferredDuringSchedulingIgnoredDuringExecution
    The soft limit differs from the hard limit only in the extra weight field.

          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                    - key: project
                      operator: In
                      values:
                        - wework
                  topologyKey: kubernetes.io/hostname
                  namespaces:
                    - wework
    

    With only a few replicas the soft behaviour is not yet visible; the Pods still avoid the nginx node:

    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl scale --replicas=4 deploy/wework-tomcat-app2-deployment -n wework
    deployment.apps/wework-tomcat-app2-deployment scaled
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods -n wework -o wide|egrep "nginx|app2"
    python-nginx-deployment-79498fbd8-mcb9f          1/1     Running   0               117m   172.100.109.93    192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-576vz   1/1     Running   0               3s     172.100.140.69    192.168.31.112   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-7xxxp   1/1     Running   0               52s    172.100.76.168    192.168.31.113   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-dwh64   1/1     Running   0               3s     172.100.76.169    192.168.31.113   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-kkprl   1/1     Running   0               3s     172.100.55.143    192.168.31.114   <none>           <none>
    

    When the replica count is raised high enough and the other nodes run out of resources, Pods are also created on the same node as nginx (192.168.31.111):

    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl scale --replicas=50 deploy/wework-tomcat-app2-deployment -n wework
    deployment.apps/wework-tomcat-app2-deployment scaled
    
    root@k8s-master-01:/opt/k8s-data/yaml/wework/affinit/Affinit-case# kubectl get pods -n wework -o wide|egrep "nginx|app2"
    python-nginx-deployment-79498fbd8-mcb9f          1/1     Running   0               118m    172.100.109.93    192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-2tlg8   1/1     Running   0               22s     172.100.55.155    192.168.31.114   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-4jpbk   1/1     Running   0               22s     172.100.76.180    192.168.31.113   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-5296d   1/1     Running   0               23s     172.100.109.66    192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-576vz   1/1     Running   0               102s    172.100.140.69    192.168.31.112   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-5z594   1/1     Running   0               23s     172.100.140.66    192.168.31.112   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-7lqf4   1/1     Running   0               23s     172.100.109.75    192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-7xxxp   1/1     Running   0               2m31s   172.100.76.168    192.168.31.113   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-8p7l2   1/1     Running   0               22s     172.100.140.122   192.168.31.112   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-8qwp4   1/1     Running   0               23s     172.100.55.154    192.168.31.114   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-9p7gr   1/1     Running   0               23s     172.100.109.72    192.168.31.111   <none>           <none>
    wework-tomcat-app2-deployment-5886f4d479-c6xqc   1/1     Running   0               22s     172.100.140.113   192.168.31.112   <none>           <none>
    
  • Original article: https://blog.csdn.net/qq_29974229/article/details/126541511