• Kubernetes node environment deployment (part 3)


    1. Adding the node1 and node2 environments

    Copy the kubeadm join command printed at the end of the master setup (previous part) and run it on each node host:

    kubeadm join 192.168.37.132:6443 --token p5omh3.cqjqt8ymrwkdn2fc \
            --discovery-token-ca-cert-hash sha256:608a1cbadd060cfdeac2fae84c19609061b750ab51bf9a19887ff7ea0b849985
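
    If the saved join command is lost, or the token has expired (the default TTL is 24 hours), a fresh one can be printed on the master with `kubeadm token create --print-join-command`. The hash part of the command is simply the SHA-256 digest of the cluster CA's public key. The sketch below shows how that value is derived; it generates a throwaway self-signed CA so it can run anywhere, whereas on a real master the input certificate is /etc/kubernetes/pki/ca.crt:

    ```shell
    # Generate a throwaway CA so this runs outside the cluster; on the real
    # master the input certificate is /etc/kubernetes/pki/ca.crt.
    openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
            -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null
    # The --discovery-token-ca-cert-hash value is "sha256:" + this hex digest.
    openssl x509 -pubkey -in /tmp/demo-ca.crt -noout \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 | sed 's/^.* //'
    ```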

    k8s-node1

    [root@k8s-node1 ~]# netstat -pltun
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
    tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1303/master
    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 950/sshd
    tcp6 0 0 ::1:25 :::* LISTEN 1303/master
    tcp6 0 0 :::22 :::* LISTEN 950/sshd
    [root@k8s-node1 ~]# kubeadm join 192.168.37.132:6443 --token p5omh3.cqjqt8ymrwkdn2fc \
    > --discovery-token-ca-cert-hash sha256:608a1cbadd060cfdeac2fae84c19609061b750ab51bf9a19887ff7ea0b849985
    [preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 20.10
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    [root@k8s-node1 ~]# netstat -pltun
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
    tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1303/master
    tcp 0 0 127.0.0.1:41244 0.0.0.0:* LISTEN 10223/kubelet
    tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 10223/kubelet
    tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 10663/kube-proxy
    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 950/sshd
    tcp6 0 0 ::1:25 :::* LISTEN 1303/master
    tcp6 0 0 :::10250 :::* LISTEN 10223/kubelet
    tcp6 0 0 :::10256 :::* LISTEN 10663/kube-proxy
    tcp6 0 0 :::22 :::* LISTEN 950/sshd

     k8s-node2

    [root@k8s-node2 ~]# kubeadm join 192.168.37.132:6443 --token p5omh3.cqjqt8ymrwkdn2fc \
    > --discovery-token-ca-cert-hash sha256:608a1cbadd060cfdeac2fae84c19609061b750ab51bf9a19887ff7ea0b849985
    [preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 20.10
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

    Verify on the master

    [root@k8s-master ~]# kubectl get nodes
    NAME         STATUS     ROLES                  AGE     VERSION
    k8s-master   NotReady   control-plane,master   18m     v1.23.6
    k8s-node1    NotReady   <none>                 5m28s   v1.23.6
    [root@k8s-master ~]# kubectl get nodes
    NAME         STATUS     ROLES                  AGE     VERSION
    k8s-master   NotReady   control-plane,master   21m     v1.23.6
    k8s-node1    NotReady   <none>                 8m6s    v1.23.6
    k8s-node2    NotReady   <none>                 2m31s   v1.23.6

     Fixing the NotReady node status

    [root@k8s-master ~]# kubectl get pod -n kube-system
    NAME                                 READY   STATUS    RESTARTS   AGE
    coredns-6d8c4cb4d-hjbtl              0/1     Pending   0          28m
    coredns-6d8c4cb4d-pwhjz              0/1     Pending   0          28m

    Run on the master

      The network plugin (Calico) needs to be downloaded:

    mkdir /opt/k8s/ && cd /opt/k8s/

    curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O

    Edit these two lines in calico.yaml: find the commented-out CALICO_IPV4POOL_CIDR entry, uncomment it, and change it to:

                - name: CALICO_IPV4POOL_CIDR
                  value: "10.244.0.0/16"  # must match the pod network CIDR
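
    The edit above can also be scripted. The sketch below applies the same uncomment-and-replace with sed to a throwaway two-line sample (leading indentation flattened for the demo) rather than the real calico.yaml:

    ```shell
    # Sample of the commented-out block as it ships in the manifest
    # (indentation omitted for the demo; a stand-in for calico.yaml).
    cat > /tmp/calico-snippet.yaml <<'EOF'
    # - name: CALICO_IPV4POOL_CIDR
    #   value: "192.168.0.0/16"
    EOF
    # Uncomment both lines and swap in the pod network CIDR that was
    # passed to kubeadm init (--pod-network-cidr=10.244.0.0/16).
    sed -i 's|^# ||; s|192.168.0.0/16|10.244.0.0/16|' /tmp/calico-snippet.yaml
    cat /tmp/calico-snippet.yaml
    ```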

    # Strip the docker.io/ prefix from the image names, so slow pulls don't cause failures
    [root@k8s-master k8s]# sed -i 's#docker.io/##g' calico.yaml
    [root@k8s-master k8s]# grep image calico.yaml
              image: calico/cni:v3.25.0
              imagePullPolicy: IfNotPresent
              image: calico/cni:v3.25.0
              imagePullPolicy: IfNotPresent
              image: calico/node:v3.25.0
              imagePullPolicy: IfNotPresent
              image: calico/node:v3.25.0
              imagePullPolicy: IfNotPresent
              image: calico/kube-controllers:v3.25.0
              imagePullPolicy: IfNotPresent
    [root@k8s-master k8s]# kubectl apply -f calico.yaml
    poddisruptionbudget.policy/calico-kube-controllers created
    serviceaccount/calico-kube-controllers created
    serviceaccount/calico-node created
    configmap/calico-config created
    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrole.rbac.authorization.k8s.io/calico-node created
    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrolebinding.rbac.authorization.k8s.io/calico-node created
    daemonset.apps/calico-node created
    deployment.apps/calico-kube-controllers created
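
    The prefix-stripping sed above can be sanity-checked on a small sample before touching the real manifest (the snippet below uses a throwaway file, not the actual calico.yaml):

    ```shell
    # Two representative image lines as they appear before the edit.
    cat > /tmp/images-sample.yaml <<'EOF'
              image: docker.io/calico/cni:v3.25.0
              image: docker.io/calico/node:v3.25.0
    EOF
    # Same substitution as applied to calico.yaml above.
    sed -i 's#docker.io/##g' /tmp/images-sample.yaml
    grep image /tmp/images-sample.yaml
    ```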

               

     Check that the pods have started

    # The nodes are now in Ready status

    [root@k8s-master k8s]# kubectl get nodes
    NAME         STATUS   ROLES                  AGE   VERSION
    k8s-master   Ready    control-plane,master   52m   v1.23.6
    k8s-node1    Ready    <none>                 39m   v1.23.6
    k8s-node2    Ready    <none>                 33m   v1.23.6
     

    Making the kubectl command usable on the node machines

    [root@k8s-node1 ~]# kubectl get pod
    The connection to the server localhost:8080 was refused - did you specify the right host or port?

    [root@k8s-master ~]# cd /etc/kubernetes/
    [root@k8s-master kubernetes]# scp admin.conf k8s-node1:/etc/kubernetes/
    The authenticity of host 'k8s-node1 (192.168.37.133)' can't be established.
    ECDSA key fingerprint is SHA256:v+UeuPf/k3LWyRvPn3fm67FDoU4yIZ7IprAUBLqxnFQ.
    ECDSA key fingerprint is MD5:7c:dd:0f:66:0b:a8:a9:53:ec:25:80:50:80:06:b6:48.
    Are you sure you want to continue connecting (yes/no)? yes
    [root@k8s-master kubernetes]# scp admin.conf k8s-node2:/etc/kubernetes/
    The authenticity of host 'k8s-node2 (192.168.37.134)' can't be established.
    ECDSA key fingerprint is SHA256:wQKSzrjeA0e37M7zYFsiHZH2n+kO43VYr/NR+pXvwlk.
    ECDSA key fingerprint is MD5:ff:80:7d:af:61:3a:75:88:ef:e3:0e:48:2e:51:53:0b.
    Are you sure you want to continue connecting (yes/no)? yes

    Run the following on node1 and node2:
     

    echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
    source ~/.bash_profile
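
    An optional refinement (not in the original steps): guard the append so that re-running the setup does not leave duplicate export lines in the profile. The sketch below uses a stand-in file instead of the real ~/.bash_profile so it can run anywhere:

    ```shell
    profile=/tmp/demo_bash_profile   # stand-in for ~/.bash_profile
    rm -f "$profile"
    line='export KUBECONFIG=/etc/kubernetes/admin.conf'
    # Append only if the exact line is not already present; run it twice
    # to show the guard keeps the file to a single copy.
    grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
    grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
    cat "$profile"
    ```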

    Verify:

    [root@k8s-node2 ~]# kubectl get nodes
    NAME         STATUS   ROLES                  AGE   VERSION
    k8s-master   Ready    control-plane,master   83m   v1.23.6
    k8s-node1    Ready    <none>                 70m   v1.23.6
    k8s-node2    Ready    <none>                 64m   v1.23.6

    kubectl works by sending requests to the apiserver. A freshly joined node has no kubeconfig, so the client does not know the apiserver's address and falls back to localhost:8080, which is refused.

    Copying the master's admin.conf to the nodes gives kubectl the target address (and the admin credentials) it needs to send its requests.
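
    The piece of admin.conf that the node-side kubectl was missing is exactly this server address. The sketch below pulls it out of a minimal sample kubeconfig (a stand-in; on a real node the file is /etc/kubernetes/admin.conf):

    ```shell
    # Minimal sample kubeconfig with the field kubectl needs most:
    # the apiserver endpoint.
    cat > /tmp/sample-kubeconfig <<'EOF'
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        server: https://192.168.37.132:6443
      name: kubernetes
    EOF
    grep 'server:' /tmp/sample-kubeconfig
    ```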

    That completes the deployment of the Kubernetes cluster.
     

  • Original article: https://blog.csdn.net/m0_52454621/article/details/132736016