Kubernetes 1.25 Quick Installation Guide for Ubuntu


    Environment

    Node name     IP address
    k8s-master    192.168.2.180
    k8s-node1     192.168.2.181

    Versions

    Software      Version
    Ubuntu        22.04 LTS
    Docker        20.10.18
    Kubernetes    1.25.0

    Operating system

    Download the 22.04 LTS system image from the official website and install Ubuntu in VMware. Since the downloaded image is a live version, the installer needs network access; the installation steps themselves are omitted here.

    Set the root password
    The installation wizard creates a regular login user (test here), and the root account has no usable password by default. After installation, log in as test and set the root password:

    test@k8s-node1:~$ sudo passwd root
    [sudo] password for test: 
    New password: 
    Retype new password: 
    passwd: password updated successfully
    

    Configure sshd
    By default the system does not allow root to log in over SSH; edit the configuration file to enable it.

    Add the following line to /etc/ssh/sshd_config:

    PermitRootLogin yes
    

    Restart the sshd service:

    systemctl restart sshd
    

    Disable swap
    Comment out the swap line at the end of /etc/fstab:

    #/swap.img      none    swap    sw      0       0
    
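    Commenting out the fstab entry only keeps swap from coming back at the next boot. To turn it off in the running session as well, a quick sketch using standard util-linux commands:

```shell
# Disable all active swap devices immediately (the fstab edit only affects future boots)
sudo swapoff -a

# Verify: this should print nothing once swap is off
swapon --show

# kubelet refuses to start by default while swap is active, so confirm before kubeadm init
free -h
```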

    Install Docker

    Install prerequisites

    sudo apt-get update
    sudo apt-get install ca-certificates curl gnupg lsb-release
    

    Configure the GPG key

    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    

    Install the Docker packages

    sudo apt-get update
    sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
    

    Enable IPv4 forwarding

    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF
    
    sudo sysctl --system
    
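    The two net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so if sysctl --system reports them as unknown, load the modules first. This is the standard step from the upstream Kubernetes container-runtime setup docs:

```shell
# Load the modules that let bridged pod traffic traverse iptables
sudo modprobe overlay
sudo modprobe br_netfilter

# Persist them across reboots
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Re-apply the sysctl settings now that the keys exist
sudo sysctl --system
```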

    Install cri-dockerd

    Kubernetes removed dockershim in v1.24, so Docker is no longer the default container runtime. To keep using Docker as the runtime, cri-dockerd must be installed.

    Download the package
    Downloading directly from GitHub can be slow, so a proxy mirror is used here:

    wget https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.5/cri-dockerd_0.2.5.3-0.ubuntu-jammy_amd64.deb
    

    Install the package

    sudo dpkg -i cri-dockerd_0.2.5.3-0.ubuntu-jammy_amd64.deb
    

    Adjust the startup parameters

    sed -i -e 's#ExecStart=.*#ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7#g' /usr/lib/systemd/system/cri-docker.service
    

    Enable autostart at boot

    systemctl daemon-reload
    systemctl enable cri-docker
    
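    Before moving on to kubeadm, it is worth confirming cri-dockerd is actually up. Assuming the .deb installed both the service and its socket unit, a quick check might look like:

```shell
# Start the service now rather than waiting for the next boot
sudo systemctl start cri-docker

# The service should report active, and the socket kubeadm will talk to should exist
systemctl is-active cri-docker
ls -l /var/run/cri-dockerd.sock
```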

    Install Kubernetes

    Install prerequisites

    sudo apt-get install -y apt-transport-https ca-certificates curl
    

    Install the GPG key

    sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg  https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg
    
    echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] http://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    

    Install the Kubernetes packages

    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    

    Hold the packages
    Mark the packages as held so they are not upgraded automatically.

    sudo apt-mark hold kubelet kubeadm kubectl
    

    Reboot before initializing the cluster.

    Initialize the Kubernetes cluster

    Run kubeadm init to initialize the cluster:

    kubeadm init --image-repository registry.aliyuncs.com/google_containers \
                 --apiserver-advertise-address=192.168.2.180 \
                 --service-cidr=192.168.200.0/21 \
                 --pod-network-cidr=10.0.0.0/16 \
                 --cri-socket unix:///var/run/cri-dockerd.sock
    

    After initialization completes, you will see the following output, which lists the follow-up steps to run and the command for joining worker nodes to the cluster:

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.2.180:6443 --token dc4wxa.qar86v4pb1b2umvm \
            --discovery-token-ca-cert-hash sha256:1df0074a2226ed1a56f53b9d33bf263c51d3794b4c4b9d6132f07b68592ac38a
    
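    The bootstrap token in the output above expires (after 24 hours by default), so if a node is joined later, a fresh join command can be generated on the control-plane node:

```shell
# Create a new bootstrap token and print the full kubeadm join command for it
kubeadm token create --print-join-command

# List existing tokens and their expiry times
kubeadm token list
```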

    Configure the nodes

    Using the join command printed during control-plane initialization, run the following on each worker node to join it to the cluster:

    kubeadm join 192.168.2.180:6443 --token dc4wxa.qar86v4pb1b2umvm \
            --discovery-token-ca-cert-hash sha256:1df0074a2226ed1a56f53b9d33bf263c51d3794b4c4b9d6132f07b68592ac38a \
            --cri-socket unix:///var/run/cri-dockerd.sock
    

    If the --cri-socket flag is omitted, the following error appears:

    Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
    

    Install command completion

    Install the package

    apt install bash-completion
    

    Add the configuration

    source /usr/share/bash-completion/bash_completion
    source <(kubectl completion bash)
    echo "source <(kubectl completion bash)" >> ~/.bashrc
    
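    Optionally, the same completion can be wired to a short alias; the __start_kubectl function used below is defined by the completion script sourced above:

```shell
# Use "k" as shorthand for kubectl, with the same tab completion
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc
```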

    Install the Calico network plugin

    Configure the host name mapping

    echo "185.199.108.133 raw.githubusercontent.com" | tee -a /etc/hosts
    

    Download the tigera-operator manifest

    wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/tigera-operator.yaml
    

    Create the tigera-operator resources

    kubectl create -f tigera-operator.yaml
    

    Download the custom-resources manifest

    wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/custom-resources.yaml
    

    Adjust the cidr setting
    Set spec.calicoNetwork.ipPools.cidr to the network passed to --pod-network-cidr when the cluster was initialized:

    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec:
      # Configures Calico networking.
      calicoNetwork:
        # Note: The ipPools section cannot be modified post-install.
        ipPools:
        - blockSize: 26
          cidr: 10.0.0.0/16
          encapsulation: VXLANCrossSubnet
          natOutgoing: Enabled
          nodeSelector: all()
    
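    Assuming the manifest still ships with Calico's stock default of 192.168.0.0/16, the edit can also be done with a one-line sed instead of opening an editor:

```shell
# Replace Calico's default pod CIDR with the one passed to kubeadm init
sed -i 's#192.168.0.0/16#10.0.0.0/16#g' custom-resources.yaml

# Confirm the change took effect
grep 'cidr:' custom-resources.yaml
```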

    Create the custom resources

    kubectl create -f custom-resources.yaml
    

    Check the Pod status

    root@k8s-master:~# kubectl get pod -A
    NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
    calico-apiserver   calico-apiserver-548cbc944d-4pft2          1/1     Running   0          115s
    calico-apiserver   calico-apiserver-548cbc944d-j8rdv          1/1     Running   0          115s
    calico-system      calico-kube-controllers-864f96fccc-8bjqc   1/1     Running   0          5m18s
    calico-system      calico-node-cjgbb                          1/1     Running   0          5m18s
    calico-system      calico-node-kkzpp                          1/1     Running   0          5m18s
    calico-system      calico-typha-6885c858c9-ljxb4              1/1     Running   0          5m18s
    calico-system      csi-node-driver-kfknd                      2/2     Running   0          3m16s
    calico-system      csi-node-driver-n5mdb                      2/2     Running   0          3m51s
    kube-system        coredns-c676cc86f-k7jzh                    1/1     Running   0          54m
    kube-system        coredns-c676cc86f-zcrp4                    1/1     Running   0          54m
    kube-system        etcd-k8s-master                            1/1     Running   0          54m
    kube-system        kube-apiserver-k8s-master                  1/1     Running   0          54m
    kube-system        kube-controller-manager-k8s-master         1/1     Running   0          54m
    kube-system        kube-proxy-pmqt5                           1/1     Running   0          39m
    kube-system        kube-proxy-rslz5                           1/1     Running   0          54m
    kube-system        kube-scheduler-k8s-master                  1/1     Running   0          54m
    tigera-operator    tigera-operator-6675dc47f4-k8ps2           1/1     Running   0          12m
    
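    With the network plugin running, the nodes should also move from NotReady to Ready. A sketch of that check (the deployment name is taken from the pod listing above):

```shell
# Block until the Calico controller deployment reports Available (up to 5 minutes)
kubectl wait --for=condition=Available --timeout=300s -n calico-system deployment/calico-kube-controllers

# Both nodes should now show STATUS Ready
kubectl get nodes -o wide
```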

    All Calico network component Pods are now in the Running state.

    Verify the cluster

    To verify, create an nginx Pod plus a Service resource and check that the nginx page is reachable.

    Write the resource file
    nginx.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      selector:
        matchLabels:
          app: nginx
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.23.1
            ports:
            - containerPort: 80
            
    ---
    
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      selector:
        app: nginx
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
        nodePort: 30080
      type: NodePort
    

    Create the nginx service

    kubectl create -f nginx.yaml
    

    Check the status

    root@k8s-master:~# kubectl get pod -o wide
    NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
    nginx-deployment-665fc7dc59-lzw98   1/1     Running   0          4m50s   10.0.36.69   k8s-node1   <none>           <none>
    
    root@k8s-master:~# kubectl get svc -o wide
    NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE     SELECTOR
    kubernetes      ClusterIP   192.168.200.1    <none>        443/TCP        77m     <none>
    nginx-service   NodePort    192.168.202.74   <none>        80:30080/TCP   5m56s   app=nginx
    

    Access nginx

    root@k8s-master:~# curl -I http://192.168.2.180:30080
    HTTP/1.1 200 OK
    Server: nginx/1.23.1
    Date: Sun, 11 Sep 2022 15:15:17 GMT
    Content-Type: text/html
    Content-Length: 615
    Last-Modified: Tue, 19 Jul 2022 14:05:27 GMT
    Connection: keep-alive
    ETag: "62d6ba27-267"
    Accept-Ranges: bytes
    

    The output shows that nginx is now serving traffic externally on port 30080.


    Reset the cluster

    kubeadm reset -f --cri-socket unix:///var/run/cri-dockerd.sock
    
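    kubeadm reset does not remove everything; per the hints the reset command itself prints, leftover kubeconfig, CNI state, and iptables rules usually need cleaning by hand. A sketch using the conventional default paths:

```shell
# Remove the admin kubeconfig copied during setup
rm -rf $HOME/.kube

# Remove CNI configuration left behind by the network plugin
sudo rm -rf /etc/cni/net.d

# Flush iptables rules created by kube-proxy and the CNI plugin
sudo iptables -F && sudo iptables -t nat -F
```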
    Original article: https://blog.csdn.net/ldjjbzh626/article/details/126808731