• 【Cloud Native | Kubernetes Series】Kubernetes Resource Limits


    Kubernetes Resource Limits

    1. Kubernetes resource limits on CPU and memory for a single container
    2. Kubernetes resource limits on CPU and memory for a single Pod
    3. Kubernetes resource limits on CPU and memory for an entire Namespace

    1. Resource Limits in Kubernetes

    • CPU is measured in cores
    • Memory is measured in bytes
    • requests: the minimum resources a node must have free for the Kubernetes scheduler to place the pod on it.
    • limits: the upper bound of resources the Pod may use once it is running.
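    A minimal sketch of these units (hypothetical values, for illustration only): CPU can be written in whole cores or millicores (500m = 0.5 core), and memory with binary suffixes such as Mi and Gi.

```yaml
# Hypothetical container fragment illustrating resource units only
resources:
  requests:
    cpu: "500m"      # 0.5 core; the scheduler only picks a node with this much free
    memory: "128Mi"  # 128 * 2^20 bytes
  limits:
    cpu: "1"         # at most one full core
    memory: "256Mi"  # the container is OOM-killed if it exceeds this
```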

    2. Container Resource Limits

    Limit memory to at most 512Mi, and require at least 100Mi of memory for scheduling:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: limit-test-deployment
      namespace: wework
    spec:
      replicas: 1
      selector:
        matchLabels: #rs or deployment
          app: limit-test-pod
      template:
        metadata:
          labels:
            app: limit-test-pod
        spec:
          containers:
          - name: limit-test-container
            image: lorel/docker-stress-ng
            resources:
              limits:
                memory: "512Mi"
              requests:
                memory: "100Mi"
            args: ["--vm", "2", "--vm-bytes", "256M"]
    

    Create the pod.
    This stress pod runs 2 workers, each consuming 256M of memory, about 512M in total, which sits right at the 512M limit:

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl apply -f case1-pod-memory-limit.yml 
    deployment.apps/limit-test-deployment created
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get pods -n wework|grep limit
    limit-test-deployment-597c4f5466-x65f4           1/1     Running   0          8m43s
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl top pods -n wework|grep limit
    limit-test-deployment-597c4f5466-x65f4           1863m        510Mi
    

    Modify the YAML, lowering the memory limit to 375Mi:

              limits:
                memory: "375Mi"
    

    Re-apply; now the Pod's memory peaks around 350Mi and cannot exceed 375Mi, but with no CPU limit the CPU climbs to about 1.8 cores:

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl apply -f case1-pod-memory-limit.yml 
    deployment.apps/limit-test-deployment configured
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl top pods -n wework|grep limit
    limit-test-deployment-7db68c5584-qg468           1808m        350Mi 
    

    Add a CPU limit:

              limits:
                memory: "200Mi"
                cpu: "1.2"
    

    Checking the pod again, CPU never exceeds 1.2 cores:

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl apply -f  case1-pod-memory-limit.yml 
    deployment.apps/limit-test-deployment created
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl top pods -n wework|grep lim
    limit-test-deployment-69d988487c-pz884           1199m        198Mi   
    

    This prevents a single pod from consuming excessive resources and destabilizing the system.
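    Combining the steps above, the final resources stanza might look like the sketch below; the requests values here are illustrative assumptions, not taken from the test.

```yaml
# Sketch of the combined memory and CPU limits from the steps above;
# the requests values are hypothetical additions
resources:
  limits:
    memory: "200Mi"
    cpu: "1.2"
  requests:
    memory: "100Mi"   # assumed request, not shown in the original steps
    cpu: "500m"       # assumed request, not shown in the original steps
```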

    3. Pod and Container Resource Limits

    Before any namespace-level limits exist, create a container with a 3-CPU limit:

            resources:
              limits:
                cpu: "3"
                memory: "512Mi"
              requests:
                memory: "100Mi"
                cpu: "500m"

    The Pod is created normally, with a CPU limit of 3:

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get pods -n wework -o wide|grep lim
    limit-test-deployment-69ffd466b9-rw75s           1/1     Running   0          105s    172.100.109.76    192.168.31.111   <none>           <none>
    
    
    
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl describe pods limit-test-deployment-69ffd466b9-rw75s -n wework
    Name:         limit-test-deployment-69ffd466b9-rw75s
    Namespace:    wework
    Priority:     0
    Node:         192.168.31.111/192.168.31.111
    Start Time:   Fri, 19 Aug 2022 14:01:25 +0800
    Labels:       app=limit-test-pod
                  pod-template-hash=69ffd466b9
    Annotations:  <none>
    Status:       Running
    IP:           172.100.109.76
    IPs:
      IP:           172.100.109.76
    Controlled By:  ReplicaSet/limit-test-deployment-69ffd466b9
    Containers:
      limit-test-container:
        Container ID:  docker://1200280f7dc02596773bf1d124428c47bb8cb4292495de8b7e16f1f9750d74b4
        Image:         lorel/docker-stress-ng
        Image ID:      docker-pullable://lorel/docker-stress-ng@sha256:c8776b750869e274b340f8e8eb9a7d8fb2472edd5b25ff5b7d55728bca681322
        Port:          <none>
        Host Port:     <none>
        Args:
          --vm
          2
          --vm-bytes
          256M
        State:          Running
          Started:      Fri, 19 Aug 2022 14:01:44 +0800
        Ready:          True
        Restart Count:  0
        Limits:
          cpu:     3
          memory:  512Mi
        Requests:
          cpu:        500m
          memory:     100Mi
    
    
    Parameter reference:
    type: Container               applies limits to individual containers
    type: Pod                     applies limits to individual pods
    type: PersistentVolumeClaim   limits PVC sizes within the namespace
    max                           maximum allowed value
    min                           minimum allowed value
    default                       default limit value
    defaultRequest                default request value
    maxLimitRequestRatio          maximum allowed limit/request ratio
    cpu                           CPU constraint
    memory                        memory constraint
    storage                       storage capacity constraint
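    To make maxLimitRequestRatio concrete, here is a hypothetical container fragment that a ratio cap of 2 would reject, since 2 CPU / 500m = 4:

```yaml
# Hypothetical fragment: rejected when maxLimitRequestRatio.cpu is 2
resources:
  requests:
    cpu: "500m"
  limits:
    cpu: "2"   # 2 / 0.5 = 4, which exceeds the allowed ratio of 2
```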

    With these limits in effect, in the wework Namespace:
    Containers:
    may use at most 2 CPUs and 2Gi of memory;
    containers requesting less than 500m CPU or 512Mi memory will not be created;
    defaults are 500m CPU and 512Mi memory, and a limit may be at most 2x its request.
    Pods:
    may use at most 4 CPUs and 4Gi of memory.
    PVCs:
    may request at most 50Gi and at least 30Gi.

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: limitrange-wework
      namespace: wework
    spec:
      limits:
      - type: Container       # resource type being constrained
        max:
          cpu: "2"            # max CPU per container
          memory: "2Gi"       # max memory per container
        min:
          cpu: "500m"         # min CPU per container
          memory: "512Mi"     # min memory per container
        default:
          cpu: "500m"         # default CPU limit per container
          memory: "512Mi"     # default memory limit per container
        defaultRequest:
          cpu: "500m"         # default CPU request per container
          memory: "512Mi"     # default memory request per container
        maxLimitRequestRatio:
          cpu: 2              # max CPU limit/request ratio is 2
          memory: 2           # max memory limit/request ratio is 2
      - type: Pod
        max:
          cpu: "4"            # max CPU per Pod
          memory: "4Gi"       # max memory per Pod
      - type: PersistentVolumeClaim
        max:
          storage: 50Gi       # max requests.storage per PVC
        min:
          storage: 30Gi       # min requests.storage per PVC
    

    Apply the configuration:

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl apply -f case3-LimitRange.yaml 
    limitrange/limitrange-wework created
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get limitranges -n  wework
    NAME                CREATED AT
    limitrange-wework   2022-08-19T05:55:17Z
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl describe limitranges limitrange-wework -n wework
    Name:                  limitrange-wework
    Namespace:             wework
    Type                   Resource  Min    Max   Default Request  Default Limit  Max Limit/Request Ratio
    ----                   --------  ---    ---   ---------------  -------------  -----------------------
    Container              memory    512Mi  2Gi   512Mi            512Mi          2
    Container              cpu       500m   2     500m             500m           2
    Pod                    cpu       -      4     -                -              -
    Pod                    memory    -      4Gi   -                -              -
    PersistentVolumeClaim  storage   30Gi   50Gi  -                -              -
    

    Now re-creating the earlier pod appears to succeed, but the pod is never actually created:

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl apply -f case2-pod-memory-and-cpu-limit.yml
    deployment.apps/limit-test-deployment created
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get pods -n wework |grep lim
    

    Checking the deployment shows it stuck at 0/1.
    The deployment's status output explains why:
    the CPU limit exceeds the maximum (the max is 2, but our limit is 3),
    and the limit/request ratio exceeds the maximum of 2 (ours is 6):

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get deployments.apps limit-test-deployment -n wework
    NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
    limit-test-deployment   0/1     0            0           23m
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get deployments.apps limit-test-deployment -n wework -o yaml
    (output omitted)
      - lastTransitionTime: "2022-08-19T06:31:19Z"
        lastUpdateTime: "2022-08-19T06:31:19Z"
        message: 'pods "limit-test-deployment-789fffc8f4-cq5wq" is forbidden: [maximum
          cpu usage per Container is 2, but limit is 3, cpu max limit to request ratio
          per Container is 2, but provided ratio is 6.000000]'
    

    After modifying the YAML, redeploy:

            resources:
              limits:
                cpu: "2"
                memory: "1000Mi"
              requests:
                memory: "1000Mi"
                cpu: "1"
    

    This time the pod is created normally:

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl apply -f case2-pod-memory-and-cpu-limit.yml 
    deployment.apps/limit-test-deployment configured
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get deploy limit-test-deployment -n wework 
    NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
    limit-test-deployment   1/1     1            1           6m25s
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get pod limit-test-deployment-d7cb67577-dtcp7 -n wework 
    NAME                                    READY   STATUS    RESTARTS   AGE
    limit-test-deployment-d7cb67577-dtcp7   1/1     Running   0          43s
    ## The deployment no longer reports errors
      - lastTransitionTime: "2022-08-19T06:31:19Z"
        lastUpdateTime: "2022-08-19T06:37:37Z"
        message: ReplicaSet "limit-test-deployment-d7cb67577" has successfully progressed.
        reason: NewReplicaSetAvailable
    

    4. Pod Resource Limits

    Limit each container to 3 CPUs and 3Gi, and each Pod to 4 CPUs and 4Gi:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: limitrange-wework
      namespace: wework
    spec:
      limits:
      - type: Container       # resource type being constrained
        max:
          cpu: "3"            # max CPU per container
          memory: "3Gi"       # max memory per container
        min:
          cpu: "500m"         # min CPU per container
          memory: "512Mi"     # min memory per container
        default:
          cpu: "500m"         # default CPU limit per container
          memory: "512Mi"     # default memory limit per container
        defaultRequest:
          cpu: "500m"         # default CPU request per container
          memory: "512Mi"     # default memory request per container
        maxLimitRequestRatio:
          cpu: 2              # max CPU limit/request ratio is 2
          memory: 2           # max memory limit/request ratio is 2
      - type: Pod
        max:
          cpu: "4"            # max CPU per Pod
          memory: "4Gi"       # max memory per Pod
      - type: PersistentVolumeClaim
        max:
          storage: 50Gi       # max requests.storage per PVC
        min:
          storage: 30Gi       # min requests.storage per PVC
    

    One Pod contains 2 containers, each with a 3Gi memory limit, 6Gi in total:

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        app: wework-wordpress-deployment-label
      name: wework-wordpress-deployment
      namespace: wework
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: wework-wordpress-selector
      template:
        metadata:
          labels:
            app: wework-wordpress-selector
        spec:
          containers:
          - name: wework-wordpress-nginx-container
            image: nginx:1.16.1
            imagePullPolicy: Always
            ports:
            - containerPort: 80
              protocol: TCP
              name: http
            env:
            - name: "password"
              value: "123456"
            - name: "age"
              value: "18"
            resources:
              limits:
                cpu: 1
                memory: 3Gi
              requests:
                cpu: 500m
                memory: 2Gi
    
          - name: wework-wordpress-php-container
            image: php:5.6-fpm-alpine 
            imagePullPolicy: Always
            ports:
            - containerPort: 80
              protocol: TCP
              name: http
            env:
            - name: "password"
              value: "123456"
            - name: "age"
              value: "18"
            resources:
              limits:
                cpu: 1
                #cpu: 2
                memory: 3Gi
              requests:
                cpu: 500m
                memory: 2Gi
    
    
    ---
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        app: wework-wordpress-service-label
      name: wework-wordpress-service
      namespace: wework
    spec:
      type: NodePort
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: 8080
        nodePort: 30033
      selector:
        app: wework-wordpress-selector
    

    Apply the configuration.
    The error message shows the per-Pod memory maximum is 4Gi, but the combined limit is 6442450944 bytes (6Gi):

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl apply -f  case4-pod-RequestRatio-limit.yaml 
    deployment.apps/wework-wordpress-deployment configured
    service/wework-wordpress-service created
    
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get deployments.apps wework-wordpress-deployment -n wework -o yaml
    (output omitted)
      - lastTransitionTime: "2022-08-19T07:04:17Z"
        lastUpdateTime: "2022-08-19T07:04:17Z"
        message: 'pods "wework-wordpress-deployment-577c6775d8-bxwm6" is forbidden: maximum
          memory usage per Pod is 4Gi, but limit is 6442450944'
        reason: FailedCreate
    

    Reduce both containers' memory to 2Gi and try again; this time creation succeeds:

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl apply -f case4-pod-RequestRatio-limit.yaml
    deployment.apps/wework-wordpress-deployment configured
    service/wework-wordpress-service unchanged
    
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get deploy wework-wordpress-deployment -n wework 
    NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
    wework-wordpress-deployment   1/1     1            1           5m27s
    
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get pod  wework-wordpress-deployment-684df659bd-56ss8 -n wework 
    NAME                                           READY   STATUS    RESTARTS   AGE
    wework-wordpress-deployment-684df659bd-56ss8   2/2     Running   0          74s
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get deploy wework-wordpress-deployment -n wework  -o yaml
    (output omitted)
      - lastTransitionTime: "2022-08-19T07:04:17Z"
        lastUpdateTime: "2022-08-19T07:09:37Z"
        message: ReplicaSet "wework-wordpress-deployment-684df659bd" has successfully
          progressed.
        reason: NewReplicaSetAvailable
        status: "True"
        type: Progressing
    

    5. Namespace Resource Limits

    A ResourceQuota restricts resources across an entire Namespace.
    It mainly limits CPU and memory, and can also cap GPUs, the number of Pods, the number of Services, and other objects:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: quota-linux
      namespace: linux
    spec:
      hard:
        requests.cpu: "8"
        limits.cpu: "8"
        requests.memory: 4Gi
        limits.memory: 4Gi
        requests.nvidia.com/gpu: 4
        pods: "2"
        services: "6"
    

    Apply the configuration:

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl apply -f case6-ResourceQuota-ns.yaml 
    resourcequota/quota-linux created
    
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get resourcequotas -n linux
    NAME          AGE   REQUEST                                                                                             LIMIT
    quota-linux   63s   pods: 2/2, requests.cpu: 0/8, requests.memory: 0/4Gi, requests.nvidia.com/gpu: 0/4, services: 2/6   limits.cpu: 0/8, limits.memory: 0/4Gi
    

    Test creating more pods:

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get resourcequotas -n linux
    NAME          AGE    REQUEST                                                                                             LIMIT
    quota-linux   3m9s   pods: 2/2, requests.cpu: 0/8, requests.memory: 0/4Gi, requests.nvidia.com/gpu: 0/4, services: 2/6   limits.cpu: 0/8, limits.memory: 0/4Gi
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get pods -n linux 
    NAME                                            READY   STATUS    RESTARTS   AGE
    linux-nginx-deployment-5cd9566d7f-npdw2         1/1     Running   0          7h37m
    linux-tomcat-app1-deployment-6f8864d5d9-wdwv8   1/1     Running   0          7h37m
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get deploy -n linux 
    NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
    linux-nginx-deployment         1/5     0            1           2d3h
    linux-tomcat-app1-deployment   1/1     1            1           2d3h
    ## Creation failed: the pod count has reached its quota
      - lastTransitionTime: "2022-08-19T07:31:01Z"
        lastUpdateTime: "2022-08-19T07:31:01Z"
        message: 'pods "linux-nginx-deployment-6477f56f7f-4pgnm" is forbidden: exceeded
          quota: quota-linux, requested: pods=1, used: pods=2, limited: pods=2'
        reason: FailedCreate
        status: "True"
        type: ReplicaFailure
    

    Raise the maximum Pod count to 20 and scale the deployment to 3 Pods; creation now succeeds:

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl apply -f case6-ResourceQuota-ns.yaml 
    resourcequota/quota-linux configured
            
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get resourcequotas -n linux
    NAME          AGE    REQUEST                                                                                              LIMIT
    quota-linux   10m   pods: 4/20, requests.cpu: 1500m/8, requests.memory: 1536Mi/4Gi, requests.nvidia.com/gpu: 0/4, services: 2/6   limits.cpu: 3/8, limits.memory: 3Gi/4Gi
    
      - lastTransitionTime: "2022-08-19T07:36:22Z"
        lastUpdateTime: "2022-08-19T07:37:34Z"
        message: ReplicaSet "linux-nginx-deployment-6477f56f7f" has successfully progressed.
        reason: NewReplicaSetAvailable
        status: "True"
        type: Progressing
      observedGeneration: 2
      readyReplicas: 3
      replicas: 3
      updatedReplicas: 3
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get deploy -n linux
    NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
    linux-nginx-deployment         3/3     3            3           86s
    linux-tomcat-app1-deployment   1/1     1            1           2d3h
    

    Scaling the deployment up to 5 Pods, limits.memory reaches its ceiling:

    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get resourcequotas -n linux
    NAME          AGE   REQUEST                                                                                                LIMIT
    quota-linux   11m   pods: 5/20, requests.cpu: 2/8, requests.memory: 2Gi/4Gi, requests.nvidia.com/gpu: 0/4, services: 2/6   limits.cpu: 4/8, limits.memory: 4Gi/4Gi
    

    The deployment status confirms it: limits.memory=4Gi has been exhausted.
    But the Pods created before the quota was hit are still running normally:

      - lastTransitionTime: "2022-08-19T07:39:31Z"
        lastUpdateTime: "2022-08-19T07:39:31Z"
        message: 'pods "linux-nginx-deployment-6477f56f7f-vzht2" is forbidden: exceeded
          quota: quota-linux, requested: limits.memory=1Gi, used: limits.memory=4Gi, limited:
          limits.memory=4Gi'
        reason: FailedCreate
        status: "True"
        type: ReplicaFailure
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get pods -n linux
    NAME                                            READY   STATUS    RESTARTS   AGE
    linux-nginx-deployment-6477f56f7f-4r5hg         1/1     Running   0          6m22s
    linux-nginx-deployment-6477f56f7f-6cvdl         1/1     Running   0          6m22s
    linux-nginx-deployment-6477f56f7f-dffz9         1/1     Running   0          3m13s
    linux-nginx-deployment-6477f56f7f-v4sjp         1/1     Running   0          6m22s
    linux-tomcat-app1-deployment-6f8864d5d9-wdwv8   1/1     Running   0          7h48m
    root@k8s-master-01:/opt/k8s-data/yaml/limit# kubectl get deployment -n linux
    NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
    linux-nginx-deployment         4/5     4            4           6m31s
    linux-tomcat-app1-deployment   1/1     1            1           2d4h
    

    This concludes the walkthrough of common Kubernetes resource-limit configurations.

  • Original article: https://blog.csdn.net/qq_29974229/article/details/126426638