• Common K8s Commands


    Common Commands

    Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. When working with Kubernetes, you will constantly use the kubectl command-line tool to interact with the cluster. Here are some commonly used kubectl commands:

    1. Getting resource information

      • kubectl get nodes - list all nodes.
      • kubectl get pods - list all Pods in the current namespace.
      • kubectl get deployments - list all Deployments in the current namespace.
      • kubectl get services - list all Services in the current namespace.
      • kubectl get namespaces - list all namespaces in the cluster.
    2. Creating and deleting resources

      • kubectl create -f <file.yaml> - create resources from a file.
      • kubectl delete -f <file.yaml> - delete the resources defined in a file.
      • kubectl delete pods,services -l <key>=<value> - delete all Pods and Services carrying a given label.
    3. Describing and inspecting resources

      • kubectl describe nodes <node-name> - show detailed information about a node.
      • kubectl describe pods <pod-name> - show detailed information about a Pod.
      • kubectl logs <pod-name> - show a Pod's logs.
      • kubectl exec -it <pod-name> -- <command> - run a command inside a Pod.
    4. Editing and updating resources

      • kubectl edit <resource-type>/<resource-name> - edit a resource and apply the change.
      • kubectl apply -f <file.yaml> - apply the configuration changes in a file.
    5. Scaling resources

      • kubectl scale deployment <deployment-name> --replicas=<count> - adjust the number of replicas of a Deployment.
    6. Labels and annotations

      • kubectl label pods <pod-name> <key>=<value> - add a new label to a Pod.
      • kubectl annotate pods <pod-name> <key>=<value> - add a new annotation to a Pod.
    7. Port forwarding and proxying

      • kubectl port-forward <pod-name> <local-port>:<pod-port> - forward a local port to a Pod port.
      • kubectl proxy - run a proxy to the Kubernetes API server.
    8. Contexts and cluster configuration

      • kubectl config view - show the kubectl configuration.
      • kubectl config current-context - show the current context.
      • kubectl config use-context <context-name> - switch the active context (use-context selects the current context; set-context creates or modifies a context entry).
    9. Rolling back resources

      • kubectl rollout undo deployment/<deployment-name> - roll a Deployment back to its previous revision.
    10. Checking status and events

      • kubectl get events - view cluster events.
      • kubectl rollout status deployment/<deployment-name> - check the status of a Deployment rollout.

    Note that the commands above may need adjusting for your cluster and namespace. For example, to query resources in a specific namespace, append -n <namespace> to the command.
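As a hedged end-to-end sketch of how the commands above chain together (the deployment name web and the image nginx:1.25 are hypothetical, not from the article):

```shell
# Hypothetical lifecycle example; "web" and "nginx:1.25" are illustrative
# assumptions, not names from the article. Requires a reachable cluster.
kubectl create deployment web --image=nginx:1.25   # create
kubectl scale deployment web --replicas=3          # scale out
kubectl rollout status deployment/web              # wait for rollout
kubectl rollout undo deployment/web                # roll back if needed
kubectl delete deployment web                      # clean up
```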

    Live execution output

    1. Show node information
      kubectl get node
      kubectl get nodes -o wide
    root@ab-P10S-WS:/home/ab# kubectl get nodes -o wide
    NAME              STATUS     ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                    CONTAINER-RUNTIME
    bitmain-p10s-ws   Ready      control-plane   19d   v1.25.4   172.28.9.241   <none>        Ubuntu 20.04.6 LTS   5.15.0-92-generic                 containerd://1.6.26
    se9-6t            Ready      <none>          19d   v1.25.4   172.28.9.40    <none>        Ubuntu 20.04 LTS     5.10.4-tag--00042-g04fcbe819955   containerd://1.6.26
    sophon            NotReady   <none>          19d   v1.25.4   172.28.9.130   <none>        Ubuntu 20.04 LTS     5.10.4-tag--00042-g04fcbe819955   containerd://1.6.26
    root@bitmain-P10S-WS:/home/bitmain# kubectl get node
    NAME              STATUS     ROLES           AGE   VERSION
    bitmain-p10s-ws   Ready      control-plane   19d   v1.25.4
    se9-6t            Ready      <none>          19d   v1.25.4
    sophon            NotReady   <none>  
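The NotReady node can also be picked out mechanically. The sketch below parses a captured copy of the `kubectl get nodes` output above, so it runs without a cluster; on a live cluster you would pipe `kubectl get nodes --no-headers` into the same awk filter.

```shell
# Captured copy of the `kubectl get nodes` output shown above.
# Live-cluster equivalent (assumption: cluster reachable):
#   kubectl get nodes --no-headers | awk '$2 != "Ready" {print $1}'
sample='NAME              STATUS     ROLES           AGE   VERSION
bitmain-p10s-ws   Ready      control-plane   19d   v1.25.4
se9-6t            Ready      <none>          19d   v1.25.4
sophon            NotReady   <none>          19d   v1.25.4'

# Skip the header row, print the names of nodes whose STATUS is not Ready.
not_ready=$(printf '%s\n' "$sample" | awk 'NR > 1 && $2 != "Ready" {print $1}')
echo "$not_ready"
```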
    
    2. Show pod information
      kubectl get pods -A
      kubectl get pods -A -o wide
    root@ab-P10S-WS:/home/ab# kubectl get pods -A
    NAMESPACE      NAME                                      READY   STATUS             RESTARTS   AGE
    kube-flannel   kube-flannel-ds-m795j                     1/1     Running            0          19d
    kube-flannel   kube-flannel-ds-prbnz                     1/1     Running            0          19d
    kube-flannel   kube-flannel-ds-zvtfk                     1/1     Running            0          19d
    kube-system    bitmain-tpu-plugin-mgxvm                  0/1     ErrImagePull       0          19d
    kube-system    bitmain-tpu-plugin-pjnvz                  0/1     ImagePullBackOff   0          19d
    kube-system    coredns-565d847f94-8h7zw                  1/1     Running            0          19d
    kube-system    coredns-565d847f94-s4rsm                  1/1     Running            0          19d
    kube-system    etcd-bitmain-p10s-ws                      1/1     Running            0          19d
    kube-system    kube-apiserver-bitmain-p10s-ws            1/1     Running            0          19d
    kube-system    kube-controller-manager-bitmain-p10s-ws   1/1     Running            0          19d
    kube-system    kube-proxy-5d496                          1/1     Running            0          19d
    kube-system    kube-proxy-fzc5p                          1/1     Running            0          19d
    kube-system    kube-proxy-jnjkp                          1/1     Running            0          19d
    kube-system    kube-scheduler-bitmain-p10s-ws            1/1     Running            0          19d
    kube-system    nginx-t-5dc4b58b8-fjfsx                   0/1     Pending            0          16d
    kube-system    nginx-t-5dc4b58b8-qtxcp                   1/1     Running            0          19d
    kube-system    nginx-t-5dc4b58b8-zfw4c                   1/1     Terminating        0          19d
    root@bitmain-P10S-WS:/home/bitmain# kubectl get pods -A -o wide
    NAMESPACE      NAME                                      READY   STATUS             RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
    kube-flannel   kube-flannel-ds-m795j                     1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-flannel   kube-flannel-ds-prbnz                     1/1     Running            0          19d   172.28.9.130   sophon            <none>           <none>
    kube-flannel   kube-flannel-ds-zvtfk                     1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
    kube-system    bitmain-tpu-plugin-mgxvm                  0/1     ErrImagePull       0          19d   10.244.2.5     se9-6t            <none>           <none>
    kube-system    bitmain-tpu-plugin-pjnvz                  0/1     ImagePullBackOff   0          19d   10.244.1.5     sophon            <none>           <none>
    kube-system    coredns-565d847f94-8h7zw                  1/1     Running            0          19d   10.244.0.2     bitmain-p10s-ws   <none>           <none>
    kube-system    coredns-565d847f94-s4rsm                  1/1     Running            0          19d   10.244.0.3     bitmain-p10s-ws   <none>           <none>
    kube-system    etcd-bitmain-p10s-ws                      1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-apiserver-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-controller-manager-bitmain-p10s-ws   1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-proxy-5d496                          1/1     Running            0          19d   172.28.9.130   sophon            <none>           <none>
    kube-system    kube-proxy-fzc5p                          1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-proxy-jnjkp                          1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
    kube-system    kube-scheduler-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    nginx-t-5dc4b58b8-fjfsx                   0/1     Pending            0          16d   <none>         <none>            <none>           <none>
    kube-system    nginx-t-5dc4b58b8-qtxcp                   1/1     Running            0          19d   10.244.2.3     se9-6t            <none>           <none>
    kube-system    nginx-t-5dc4b58b8-zfw4c                   1/1     Terminating        0          19d   10.244.1.3     sophon            <none>           <none>
    root@ab-P10S-WS:/home/ab#
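The unhealthy pods above (ErrImagePull, ImagePullBackOff, Pending, Terminating) can be counted the same way. This sketch runs on a captured excerpt of the output; against a live cluster you would use `kubectl get pods -A --no-headers` instead.

```shell
# Excerpt of the `kubectl get pods -A` output above
# (columns: NAMESPACE, NAME, READY, STATUS, RESTARTS, AGE).
# Live-cluster equivalent: kubectl get pods -A --no-headers | awk '$4 != "Running"'
sample='kube-flannel   kube-flannel-ds-m795j      1/1   Running            0   19d
kube-system    bitmain-tpu-plugin-mgxvm   0/1   ErrImagePull       0   19d
kube-system    bitmain-tpu-plugin-pjnvz   0/1   ImagePullBackOff   0   19d
kube-system    nginx-t-5dc4b58b8-fjfsx    0/1   Pending            0   16d'

# STATUS is column 4; count the rows that are not Running.
bad=$(printf '%s\n' "$sample" | awk '$4 != "Running" {n++} END {print n+0}')
echo "$bad"
```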
    
    
    3. Show pods in a specific namespace

    kubectl get pods -n kube-flannel
    kubectl get pods -n kube-flannel -o wide

    root@ab-P10S-WS:/home/ab# kubectl get pods -n kube-flannel
    NAME                    READY   STATUS    RESTARTS   AGE
    kube-flannel-ds-m795j   1/1     Running   0          19d
    kube-flannel-ds-prbnz   1/1     Running   0          19d
    kube-flannel-ds-zvtfk   1/1     Running   0          19d
    root@bitmain-P10S-WS:/home/bitmain# kubectl get pods -n kube-flannel -o wide
    NAME                    READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
    kube-flannel-ds-m795j   1/1     Running   0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-flannel-ds-prbnz   1/1     Running   0          19d   172.28.9.130   sophon            <none>           <none>
    kube-flannel-ds-zvtfk   1/1     Running   0          19d   172.28.9.40    se9-6t            <none>           <none>
    
    
    4. Show detailed node information
      kubectl describe nodes se9-6t
    root@bitmain-P10S-WS:/home/bitmain# kubectl describe nodes se9-6t
    Name:               se9-6t
    Roles:              <none>
    Labels:             beta.kubernetes.io/arch=arm64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/arch=arm64
                        kubernetes.io/hostname=se9-6t
                        kubernetes.io/os=linux
    Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"16:7b:8f:9e:63:06"}
                        flannel.alpha.coreos.com/backend-type: vxlan
                        flannel.alpha.coreos.com/kube-subnet-manager: true
                        flannel.alpha.coreos.com/public-ip: 172.28.9.40
                        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Wed, 31 Jan 2024 09:42:42 +0800
    Taints:             <none>
    Unschedulable:      false
    Lease:
      HolderIdentity:  se9-6t
      AcquireTime:     <unset>
      RenewTime:       Mon, 19 Feb 2024 17:24:41 +0800
    Conditions:
      Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
      ----                 ------  -----------------                 ------------------                ------                       -------
      NetworkUnavailable   False   Wed, 31 Jan 2024 09:44:27 +0800   Wed, 31 Jan 2024 09:44:27 +0800   FlannelIsUp                  Flannel is running on this node
      MemoryPressure       False   Mon, 19 Feb 2024 17:21:17 +0800   Wed, 31 Jan 2024 09:42:42 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
      DiskPressure         False   Mon, 19 Feb 2024 17:21:17 +0800   Wed, 31 Jan 2024 09:42:42 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
      PIDPressure          False   Mon, 19 Feb 2024 17:21:17 +0800   Wed, 31 Jan 2024 09:42:42 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
      Ready                True    Mon, 19 Feb 2024 17:21:17 +0800   Wed, 31 Jan 2024 09:44:35 +0800   KubeletReady                 kubelet is posting ready status
    Addresses:
      InternalIP:  172.28.9.40
      Hostname:    se9-6t
    Capacity:
      cpu:                     6
      ephemeral-storage:       9079836Ki
      hugepages-1Gi:           0
      hugepages-2Mi:           0
      hugepages-32Mi:          0
      hugepages-64Ki:          0
      memory:                  1212680Ki
      pods:                    110
      tpu.bitmain.com/bm1688:  0
    Allocatable:
      cpu:                     6
      ephemeral-storage:       8367976844
      hugepages-1Gi:           0
      hugepages-2Mi:           0
      hugepages-32Mi:          0
      hugepages-64Ki:          0
      memory:                  1110280Ki
      pods:                    110
      tpu.bitmain.com/bm1688:  0
    System Info:
      Machine ID:                 1eef0b5f42d64d108c592bddfbc61a20
      System UUID:                1eef0b5f42d64d108c592bddfbc61a20
      Boot ID:                    26141532-3000-45c2-a5cb-d0a4d96b41e5
      Kernel Version:             5.10.4-tag--00042-g04fcbe819955
      OS Image:                   Ubuntu 20.04 LTS
      Operating System:           linux
      Architecture:               arm64
      Container Runtime Version:  containerd://1.6.26
      Kubelet Version:            v1.25.4
      Kube-Proxy Version:         v1.25.4
    PodCIDR:                      10.244.2.0/24
    PodCIDRs:                     10.244.2.0/24
    Non-terminated Pods:          (4 in total)
      Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
      ---------                   ----                        ------------  ----------  ---------------  -------------  ---
      kube-flannel                kube-flannel-ds-zvtfk       100m (1%)     100m (1%)   50Mi (4%)        50Mi (4%)      19d
      kube-system                 bitmain-tpu-plugin-mgxvm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
      kube-system                 kube-proxy-jnjkp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
      kube-system                 nginx-t-5dc4b58b8-qtxcp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource                Requests   Limits
      --------                --------   ------
      cpu                     100m (1%)  100m (1%)
      memory                  50Mi (4%)  50Mi (4%)
      ephemeral-storage       0 (0%)     0 (0%)
      hugepages-1Gi           0 (0%)     0 (0%)
      hugepages-2Mi           0 (0%)     0 (0%)
      hugepages-32Mi          0 (0%)     0 (0%)
      hugepages-64Ki          0 (0%)     0 (0%)
      tpu.bitmain.com/bm1688  1          1
    Events:                   <none>
    root@ab-P10S-WS:/home/ab#
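Individual fields can be scraped out of `kubectl describe` output with standard text tools. The sketch below works on a captured fragment of the Allocatable block above; on a live cluster you would pipe `kubectl describe nodes se9-6t` directly, or more robustly query the field with `kubectl get node se9-6t -o jsonpath='{.status.allocatable.cpu}'`.

```shell
# Fragment of the `kubectl describe nodes se9-6t` output shown above.
sample='Allocatable:
  cpu:                     6
  ephemeral-storage:       8367976844
  memory:                  1110280Ki
  pods:                    110'

# Grab the value of the "cpu:" line inside the Allocatable block.
cpu=$(printf '%s\n' "$sample" | awk '$1 == "cpu:" {print $2}')
echo "$cpu"
```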
    
    
    5. Show detailed pod information
      kubectl describe pods kube-flannel-ds-zvtfk -n kube-flannel
    root@ab-P10S-WS:/home/ab# kubectl describe pods kube-flannel-ds-zvtfk -n kube-flannel
    Name:                 kube-flannel-ds-zvtfk
    Namespace:            kube-flannel
    Priority:             2000001000
    Priority Class Name:  system-node-critical
    Service Account:      flannel
    Node:                 se9-6t/172.28.9.40
    Start Time:           Wed, 31 Jan 2024 09:42:51 +0800
    Labels:               app=flannel
                          controller-revision-hash=7f4d65bc74
                          pod-template-generation=1
                          tier=node
    Annotations:          <none>
    Status:               Running
    IP:                   172.28.9.40
    IPs:
      IP:           172.28.9.40
    Controlled By:  DaemonSet/kube-flannel-ds
    Init Containers:
      install-cni-plugin:
        Container ID:  containerd://f588bfdd8eb6255562c398e3da95337238f126425b1bea0a42f236e639237424
        Image:         docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        Image ID:      docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b
        Port:          <none>
        Host Port:     <none>
        Command:
          cp
        Args:
          -f
          /flannel
          /opt/cni/bin/flannel
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Wed, 31 Jan 2024 09:43:39 +0800
          Finished:     Wed, 31 Jan 2024 09:43:39 +0800
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /opt/cni/bin from cni-plugin (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qkttm (ro)
      install-cni:
        Container ID:  containerd://1cb07980a3f30aac58fca774323edb7e2aacf84554b213332834af4dbc4ef7f9
        Image:         docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        Image ID:      docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832
        Port:          <none>
        Host Port:     <none>
        Command:
          cp
        Args:
          -f
          /etc/kube-flannel/cni-conf.json
          /etc/cni/net.d/10-flannel.conflist
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Wed, 31 Jan 2024 09:44:22 +0800
          Finished:     Wed, 31 Jan 2024 09:44:22 +0800
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /etc/cni/net.d from cni (rw)
          /etc/kube-flannel/ from flannel-cfg (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qkttm (ro)
    Containers:
      kube-flannel:
        Container ID:  containerd://7005b97fe8473c2469a7b4b97cf219673e0ef8b26855879b4e2f2e24722aef97
        Image:         docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        Image ID:      docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832
        Port:          <none>
        Host Port:     <none>
        Command:
          /opt/bin/flanneld
        Args:
          --ip-masq
          --kube-subnet-mgr
        State:          Running
          Started:      Wed, 31 Jan 2024 09:44:24 +0800
        Ready:          True
        Restart Count:  0
        Limits:
          cpu:     100m
          memory:  50Mi
        Requests:
          cpu:     100m
          memory:  50Mi
        Environment:
          POD_NAME:           kube-flannel-ds-zvtfk (v1:metadata.name)
          POD_NAMESPACE:      kube-flannel (v1:metadata.namespace)
          EVENT_QUEUE_DEPTH:  5000
        Mounts:
          /etc/kube-flannel/ from flannel-cfg (rw)
          /run/flannel from run (rw)
          /run/xtables.lock from xtables-lock (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qkttm (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             True
      ContainersReady   True
      PodScheduled      True
    Volumes:
      run:
        Type:          HostPath (bare host directory volume)
        Path:          /run/flannel
        HostPathType:
      cni-plugin:
        Type:          HostPath (bare host directory volume)
        Path:          /opt/cni/bin
        HostPathType:
      cni:
        Type:          HostPath (bare host directory volume)
        Path:          /etc/cni/net.d
        HostPathType:
      flannel-cfg:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      kube-flannel-cfg
        Optional:  false
      xtables-lock:
        Type:          HostPath (bare host directory volume)
        Path:          /run/xtables.lock
        HostPathType:  FileOrCreate
      kube-api-access-qkttm:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   Burstable
    Node-Selectors:              <none>
    Tolerations:                 :NoSchedule op=Exists
                                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                                 node.kubernetes.io/not-ready:NoExecute op=Exists
                                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                                 node.kubernetes.io/unreachable:NoExecute op=Exists
                                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
    Events:                      <none>
    root@ab-P10S-WS:/home/ab#
    
    
    6. Delete a node
      kubectl delete node sophon
      After this, the node no longer appears in kubectl get node, but its pods still show up in kubectl get pods -A -o wide, so you also need to run:
      kubectl delete pod nginx-t-5dc4b58b8-zfw4c --grace-period=0 --force -n kube-system
      This prints:
      Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
      Error from server (NotFound): pods "nginx-t-5dc4b58b8-zfw4c" not found
      Running kubectl get pods -A -o wide again now shows no pods left from the sophon node.
    root@ab-P10S-WS:/home/ab# kubectl get node
    NAME              STATUS     ROLES           AGE   VERSION
    bitmain-p10s-ws   Ready      control-plane   19d   v1.25.4
    se9-6t            Ready      <none>          19d   v1.25.4
    sophon            NotReady   <none>          19d   v1.25.4
    root@ab-P10S-WS:/home/ab# kubectl delete node sophon
    node "sophon" deleted
    root@ab-P10S-WS:/home/ab# kubectl get node
    NAME              STATUS   ROLES           AGE   VERSION
    bitmain-p10s-ws   Ready    control-plane   19d   v1.25.4
    se9-6t            Ready    <none>          19d   v1.25.4
    root@ab-P10S-WS:/home/ab# kubectl get pods -A -o wide
    NAMESPACE      NAME                                      READY   STATUS             RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
    kube-flannel   kube-flannel-ds-m795j                     1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-flannel   kube-flannel-ds-prbnz                     1/1     Running            0          19d   172.28.9.130   sophon            <none>           <none>
    kube-flannel   kube-flannel-ds-zvtfk                     1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
    kube-system    bitmain-tpu-plugin-mgxvm                  0/1     ImagePullBackOff   0          19d   10.244.2.5     se9-6t            <none>           <none>
    kube-system    bitmain-tpu-plugin-pjnvz                  0/1     ImagePullBackOff   0          19d   10.244.1.5     sophon            <none>           <none>
    kube-system    coredns-565d847f94-8h7zw                  1/1     Running            0          19d   10.244.0.2     bitmain-p10s-ws   <none>           <none>
    kube-system    coredns-565d847f94-s4rsm                  1/1     Running            0          19d   10.244.0.3     bitmain-p10s-ws   <none>           <none>
    kube-system    etcd-bitmain-p10s-ws                      1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-apiserver-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-controller-manager-bitmain-p10s-ws   1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-proxy-5d496                          1/1     Running            0          19d   172.28.9.130   sophon            <none>           <none>
    kube-system    kube-proxy-fzc5p                          1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-proxy-jnjkp                          1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
    kube-system    kube-scheduler-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    nginx-t-5dc4b58b8-fjfsx                   0/1     Pending            0          16d   <none>         <none>            <none>           <none>
    kube-system    nginx-t-5dc4b58b8-qtxcp                   1/1     Running            0          19d   10.244.2.3     se9-6t            <none>           <none>
    kube-system    nginx-t-5dc4b58b8-zfw4c                   1/1     Terminating        0          19d   10.244.1.3     sophon            <none>           <none>
    root@ab-P10S-WS:/home/ab# kubectl delete pod <pod-name> --grace-period=0 --force -n <namespace>
    bash: syntax error near unexpected token `newline'
    root@ab-P10S-WS:/home/ab# kubectl delete pod nginx-t-5dc4b58b8-zfw4c --grace-period=0 --force -n kube-system
    Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
    Error from server (NotFound): pods "nginx-t-5dc4b58b8-zfw4c" not found
    root@ab-P10S-WS:/home/ab# kubectl get pods -A -o wide
    NAMESPACE      NAME                                      READY   STATUS             RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
    kube-flannel   kube-flannel-ds-m795j                     1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-flannel   kube-flannel-ds-zvtfk                     1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
    kube-system    bitmain-tpu-plugin-mgxvm                  0/1     ImagePullBackOff   0          19d   10.244.2.5     se9-6t            <none>           <none>
    kube-system    coredns-565d847f94-8h7zw                  1/1     Running            0          19d   10.244.0.2     bitmain-p10s-ws   <none>           <none>
    kube-system    coredns-565d847f94-s4rsm                  1/1     Running            0          19d   10.244.0.3     bitmain-p10s-ws   <none>           <none>
    kube-system    etcd-bitmain-p10s-ws                      1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-apiserver-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-controller-manager-bitmain-p10s-ws   1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-proxy-fzc5p                          1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-proxy-jnjkp                          1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
    kube-system    kube-scheduler-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    nginx-t-5dc4b58b8-fjfsx                   0/1     Pending            0          16d   <none>         <none>            <none>           <none>
    kube-system    nginx-t-5dc4b58b8-qtxcp                   1/1     Running            0          19d   10.244.2.3     se9-6t            <none>           <none>
    root@ab-P10S-WS:/home/ab#
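For completeness: the forced pod deletion above was only needed because the node was removed while NotReady. On a healthy node, the conventional decommissioning order (a hedged sketch, not part of the original session) is to cordon and drain first so pods are evicted gracefully:

```shell
# Conventional node decommissioning sketch (assumes the node is reachable).
kubectl cordon sophon                                              # stop new scheduling
kubectl drain sophon --ignore-daemonsets --delete-emptydir-data    # evict pods gracefully
kubectl delete node sophon                                         # remove from the cluster
```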
    
    7. Exec into a pod's container
      kubectl exec nginx-t-5dc4b58b8-qtxcp -n kube-system -it -- "bash"
    root@ab-P10S-WS:/home/ab# kubectl get pods -A -o wide
    NAMESPACE      NAME                                      READY   STATUS             RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
    kube-flannel   kube-flannel-ds-m795j                     1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-flannel   kube-flannel-ds-zvtfk                     1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
    kube-system    bitmain-tpu-plugin-mgxvm                  0/1     ImagePullBackOff   0          19d   10.244.2.5     se9-6t            <none>           <none>
    kube-system    coredns-565d847f94-8h7zw                  1/1     Running            0          19d   10.244.0.2     bitmain-p10s-ws   <none>           <none>
    kube-system    coredns-565d847f94-s4rsm                  1/1     Running            0          19d   10.244.0.3     bitmain-p10s-ws   <none>           <none>
    kube-system    etcd-bitmain-p10s-ws                      1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-apiserver-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-controller-manager-bitmain-p10s-ws   1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-proxy-fzc5p                          1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    kube-proxy-jnjkp                          1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
    kube-system    kube-scheduler-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
    kube-system    nginx-t-5dc4b58b8-fjfsx                   0/1     Pending            0          16d   <none>         <none>            <none>           <none>
    kube-system    nginx-t-5dc4b58b8-qtxcp                   1/1     Running            0          19d   10.244.2.3     se9-6t            <none>           <none>
    root@ab-P10S-WS:/home/ab# kubectl exec  nginx-t-5dc4b58b8-qtxcp -n kube-system -it -- "bash"
    root@nginx-t-5dc4b58b8-qtxcp:/# ls /dev
    bm-tpu0    fd    ion     null  pts     shm        soph-dpu  soph-ive  soph-stitch  soph-tde0  soph-vpss    soph_vc_enc  stdin   termination-log  urandom
    bmdev-ctl  full  mqueue  ptmx  random  soph-base  soph-dwa  soph-ldc  soph-sys     soph-tde1  soph_vc_dec  stderr       stdout  tty              zero
    root@nginx-t-5dc4b58b8-qtxcp:/#
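Two common variations on the `kubectl exec` call above (the container name nginx in the second command is a hypothetical example, not from the session):

```shell
# Run a one-off command without an interactive shell:
kubectl exec nginx-t-5dc4b58b8-qtxcp -n kube-system -- ls /dev

# If the pod has several containers, pick one with -c
# ("nginx" is an assumed container name for illustration):
kubectl exec -it nginx-t-5dc4b58b8-qtxcp -n kube-system -c nginx -- bash
```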
    

    Viewing K8s services

    When you use systemctl to inspect Kubernetes services, you are usually checking the state of the cluster's control-plane and node components. These components may run as system services, especially in clusters deployed with kubeadm or similar tools.

    Here are some common systemctl commands for viewing and managing Kubernetes services:

    Check the kubelet service status:

    systemctl status kubelet
    kubelet is the primary "node agent" that runs on every node; it is responsible for maintaining and managing the containers on that node.

    List all Kubernetes-related services:

    systemctl list-units --type=service | grep kube
    This lists every service whose name contains "kube".

    Start/stop/restart the kubelet service:

    systemctl start kubelet
    systemctl stop kubelet
    systemctl restart kubelet
    These commands start, stop, and restart the kubelet service, respectively.

    View a service's logs:

    journalctl -u kubelet
    journalctl shows the logs of the kubelet service. Substitute another service name for kubelet, such as docker, containerd, or etcd, to see that service's logs.

    Enable/disable starting a service at boot:

    systemctl enable kubelet
    systemctl disable kubelet
    These commands enable or disable automatic startup of the kubelet service at boot.

    Note that the Kubernetes control-plane components (the API server, controller manager, scheduler, and so on) may not run as system services. In some installations, particularly those set up with kubeadm, they run as containers instead, so you will not see them as individual services in systemctl output. If your control-plane components run as Pods, manage and inspect them with kubectl rather than systemctl.
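The checks above can be combined into a small watchdog sketch (assumes a systemd-managed kubelet, as on a kubeadm-installed node):

```shell
# Restart kubelet only if it is not currently active, then show recent logs.
if ! systemctl is-active --quiet kubelet; then
    systemctl restart kubelet
fi
journalctl -u kubelet --since "10 minutes ago" --no-pager | tail -n 20
```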

  • Original source: https://blog.csdn.net/pj_wxyjxy/article/details/136173644