• Kubernetes (K8s) 1.24.3 Installation on Ubuntu


    Steps for both Master and Node

    ---------------- Run-once steps ----------------

    Reference: https://blog.csdn.net/weixin_43501172/article/details/125869017

    Add /etc/hosts entries so the machines can reach each other by name:

    echo 192.168.50.61 k8s-master >> /etc/hosts
    echo 192.168.50.62 k8s-node-01 >> /etc/hosts
    echo 192.168.50.63 k8s-node-02 >> /etc/hosts
    
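A quick sanity check for the entries above (hostnames assumed from this tutorial's example cluster): each name should now be present in /etc/hosts.

```shell
# Verify the /etc/hosts entries exist on this machine
for h in k8s-master k8s-node-01 k8s-node-02; do
    grep -qw "$h" /etc/hosts && echo "$h ok" || echo "$h MISSING"
done
```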

    Add the apt source

    curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list
    

    Install Docker

    apt install -y docker.io
    

    Make sure the Docker version is identical on the master and every node, otherwise nodes may fail to pull images.

    docker version
    
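To compare versions across hosts quickly, you can print just the version strings (`--format` is a standard docker CLI flag; run this on the master and on each node and diff the output):

```shell
# Print only the Docker client/server versions for easy comparison across hosts
docker version --format 'client={{.Client.Version}} server={{.Server.Version}}' 2>/dev/null \
    || echo "docker not running or not installed"
```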


    Remove Docker (if you need to start over)

    apt-get purge docker-ce docker-ce-cli containerd.io docker-compose-plugin
    

    Disable swap permanently (takes effect after a reboot):

    sed -i 's#\/swap.img#\#\/swap.img#g' /etc/fstab
    
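After the reboot you can confirm swap is really gone; the `Swap:` line from `free` should report a zero total:

```shell
# Swap total should read 0 once the fstab entry is commented out and the host rebooted
free -h | awk '/^Swap:/ {print "swap total:", $2}'
```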

    Configure iptables bridging

    Reference (kubeadm docs): https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF
    sudo modprobe br_netfilter

    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system

    Switch Docker's cgroup driver to systemd

    Reference: https://www.jianshu.com/p/8a62750c0eef

    sudo mkdir -p /etc/docker
    cat <<EOF | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF

    Install cri-dockerd

    Download the package matching your Ubuntu codename, e.g. cri-dockerd_0.2.3.3-0.ubuntu-jammy_amd64.deb from
    https://github.com/Mirantis/cri-dockerd/tags
    Check your release codename with:

    lsb_release -a
    

    Install the package:

    dpkg -i cri-dockerd_0.2.3.3-0.ubuntu-jammy_amd64.deb
    

    Replace cri-docker.service

    cri-docker.service

    [Unit]
    Description=CRI Interface for Docker Application Container Engine
    Documentation=https://docs.mirantis.com
    After=network-online.target firewalld.service docker.service
    Wants=network-online.target
    Requires=cri-docker.socket
    
    [Service]
    Type=notify
    ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
    ExecReload=/bin/kill -s HUP $MAINPID
    TimeoutSec=0
    RestartSec=2
    Restart=always
    StartLimitBurst=3
    
    StartLimitInterval=60s
    
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    
    TasksMax=infinity
    Delegate=yes
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    

    Copy it into place:

    cp -rf cri-docker.service /etc/systemd/system/multi-user.target.wants/cri-docker.service
    

    Replace cri-docker.socket

    cri-docker.socket

    [Unit]
    Description=CRI Docker Socket for the API
    PartOf=cri-docker.service
    
    [Socket]
    ListenStream=%t/cri-dockerd.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker
    
    [Install]
    WantedBy=sockets.target
    

    Copy it into place:

    cp -rf cri-docker.socket /usr/lib/systemd/system/cri-docker.socket 
    

    Run ipvs.modules

    ipvs.modules

    #!/bin/bash
    ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
    for kernel_module in ${ipvs_modules}; do
        /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
        # only load modules that actually exist for this kernel
        if [ $? -eq 0 ]; then
            /sbin/modprobe ${kernel_module}
        fi
    done
    

    Run it:

    ./ipvs.modules && lsmod | grep ip_vs
    

    ---------------- Steps to run on every (re)install ----------------

    Save as install.sh:

    #!/bin/bash
    # Mostly boilerplate; only the IPs and hostnames need changing per cluster
    # close firewall
    ufw disable

    # close swap
    # see https://blog.csdn.net/weixin_42599091/article/details/107164366
    # temporarily disable swap
    swapoff -a
    
    sudo systemctl enable docker
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    
    
    # install k8s apt packages
    # kubeadm docs (same as the iptables step): https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
    sudo apt-get update
    sudo apt-get upgrade -y
    sudo apt-get install -y apt-transport-https ca-certificates curl
    
    # install kubelet kubeadm kubectl
    # kubeadm docs: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
    apt install -y kubeadm kubectl kubelet
    apt-mark hold kubelet kubeadm kubectl
    
    
    # start cri-docker and enable it at boot
    systemctl daemon-reload
    systemctl restart cri-docker
    systemctl enable cri-docker --now
    
    service containerd restart
    service kubelet restart
    service docker restart
    
    
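The title targets 1.24.3, but `apt install kubeadm` in the script pulls whatever is newest in the repo, which can break the "identical versions everywhere" rule. A sketch of pinning the versions instead (the `1.24.3-00` package suffix is an assumption; confirm the exact string with `apt-cache madison kubeadm`):

```shell
# Pin kubelet/kubeadm/kubectl to the 1.24.3 release before holding them
VER="1.24.3-00"   # assumed package version; verify with: apt-cache madison kubeadm
sudo apt-get install -y kubelet="$VER" kubeadm="$VER" kubectl="$VER"
sudo apt-mark hold kubelet kubeadm kubectl
```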

    ---------------- ----------------

    Install the master 【master-install.sh】

    After completing the steps above, run the following script.
    master-install.sh

    #!/bin/bash
    
    # initialize the master node
    kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock --ignore-preflight-errors=NumCPU
    
    # wget --no-check-certificate https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f ./include/kube-flannel.yml
    
    # print the node join command (token + hash)
    echo "kubeadm token create --print-join-command"

    # coredns and similar pods should all be Running
    echo "kubectl get pods -n kube-system -o wide"

    # list nodes
    echo "kubectl get nodes"

    # list the flannel pods
    echo "kubectl get pods -n kube-flannel"

    # inspect node 01
    echo  "kubectl describe nodes k8s-node-01"

    # reset kubeadm
    # kubeadm reset

    # check kube-proxy status
    echo "kubectl describe po kube-proxy-9x6fr -n kube-system"
    
    
    mkdir -p $HOME/.kube
    rm -rf  /root/.kube/*
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config
    export KUBECONFIG=/etc/kubernetes/admin.conf
    
    
    apt install -y bash-completion
    source /usr/share/bash-completion/bash_completion
    source <(kubectl completion bash)
    echo "source <(kubectl completion bash)" >> ~/.bashrc
    
    

    Set up flannel

    kube-flannel.yml

    ---
    kind: Namespace
    apiVersion: v1
    metadata:
      name: kube-flannel
      labels:
        pod-security.kubernetes.io/enforce: privileged
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: flannel
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes/status
      verbs:
      - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-flannel
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-flannel
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-flannel
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds
      namespace: kube-flannel
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                    - linux
          hostNetwork: true
          priorityClassName: system-node-critical
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni-plugin
           #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
            image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
            command:
            - cp
            args:
            - -f
            - /flannel
            - /opt/cni/bin/flannel
            volumeMounts:
            - name: cni-plugin
              mountPath: /opt/cni/bin
          - name: install-cni
           #image: flannelcni/flannel:v0.19.0 for ppc64le and mips64le (dockerhub limitations may apply)
            image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.0
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
           #image: flannelcni/flannel:v0.19.0 for ppc64le and mips64le (dockerhub limitations may apply)
            image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.0
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "100Mi"
              limits:
                cpu: "100m"
                memory: "200Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN", "NET_RAW"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: EVENT_QUEUE_DEPTH
              value: "5000"
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
            - name: xtables-lock
              mountPath: /run/xtables.lock
          volumes:
          - name: run
            hostPath:
              path: /run/flannel
          - name: cni-plugin
            hostPath:
              path: /opt/cni/bin
          - name: cni
            hostPath:
              path: /etc/cni/net.d
          - name: flannel-cfg
            configMap:
              name: kube-flannel-cfg
          - name: xtables-lock
            hostPath:
              path: /run/xtables.lock
              type: FileOrCreate
    

    Apply kube-flannel.yml

    kubectl apply -f ./kube-flannel.yml
    

    On the master, the following commands should now show some output.

    # print the node join command (token + hash)
    kubeadm token create --print-join-command

    # coredns and similar pods should all be Running
    kubectl get pods -n kube-system -o wide
    kubectl get pods -n kube-flannel  -o wide
    kubectl get nodes  -o wide

    kubectl get pod -n kubernetes-dashboard -o wide
    kubectl get svc  -n kubernetes-dashboard -o wide

    # inspect node 01
    # kubectl describe nodes k8s-node-01

    # reset kubeadm
    # kubeadm reset

    # check kube-proxy status
    # kubectl describe po kube-proxy-9x6fr -n kube-system
    
    

    ---------------- ----------------

    Install the nodes (same procedure on every node)

    On the master, print the node join command:

    kubeadm token create --print-join-command
    

    Run the printed command on the node, remembering to append --cri-socket unix:///var/run/cri-dockerd.sock. For example:

    kubeadm join 192.168.50.61:6443 --token qr0dir.kfuer8ovhhyx6maa --discovery-token-ca-cert-hash sha256:28a80b14f648afe1b884efec0a4cf8131b41333e04c1252790c3e2997e9e85af   --cri-socket unix:///var/run/cri-dockerd.sock
    
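Copying the token and hash by hand is error-prone. A small sketch that builds the full join command on the master, including the cri-dockerd socket flag, so it can be copy-pasted to each node:

```shell
# Build the complete join command, socket flag included
JOIN="$(kubeadm token create --print-join-command) --cri-socket unix:///var/run/cri-dockerd.sock"
echo "Run this on each node as root:"
echo "$JOIN"
```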

    Back on the master, the new node should now appear:

    kubectl get nodes  -o wide
    

    ---------------- ----------------

    Reinstall the master

    First restart the services:

    systemctl restart docker
    systemctl restart kubelet
    systemctl restart cri-docker
    systemctl restart containerd
    

    Then run the master-install.sh from above:

    ./master-install.sh
    

    If problems occur, the following commands may help:

    mkdir -p $HOME/.kube
    cp -rf /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config
    
    kubectl apply -f ./kube-flannel.yml
    

    ---------------- ----------------

    Reinstall a node

    First remove all Docker containers:

    docker rm -f $(docker ps -a -q)
    
    

    Then run the install.sh script from above:

    ./install.sh
    

    Restart the services and rejoin the master:

    systemctl restart docker
    systemctl restart kubelet
    systemctl restart cri-docker
    systemctl restart containerd
    
    kubeadm join 192.168.50.61:6443 --token qr0dir.kfuer8ovhhyx6maa --discovery-token-ca-cert-hash sha256:28a80b14f648afe1b884efec0a4cf8131b41333e04c1252790c3e2997e9e85af   --cri-socket unix:///var/run/cri-dockerd.sock
    

    ---------------- ----------------

    Reinstall Kubernetes

    #!/bin/bash

    # stop kubelet first so pods are not recreated while we clean up
    systemctl stop kubelet

    # force-remove all containers (must happen while docker is still running)
    docker rm -f $(docker ps -a -q)

    kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock

    systemctl stop cri-docker
    systemctl stop containerd
    systemctl stop docker
    

    Install the Kubernetes dashboard

    References:
    https://blog.csdn.net/wuxingge/article/details/125487736
    https://blog.51cto.com/u_15098527/3592147

    Fixing the dashboard not showing CPU and memory

    Reference:
    https://blog.csdn.net/xujiamin0022016/article/details/107676240

    Make the dashboard reachable from outside

    kubectl proxy --address='0.0.0.0'  --accept-hosts='^*$' --port=8001
    
    # kubectl port-forward -n kubernetes-dashboard --address 0.0.0.0 service/kubernetes-dashboard 8001:443
    
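With `kubectl proxy` listening on 0.0.0.0:8001, the dashboard is served through the standard apiserver proxy path (service and namespace names assumed to be the defaults from the dashboard manifest):

```shell
# Dashboard URL via kubectl proxy (replace <host> with the master's address)
echo "http://<host>:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/"
```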

    Uninstall the dashboard

    #!/bin/bash
    kubectl delete deployment kubernetes-dashboard --namespace=kube-system
    kubectl delete service kubernetes-dashboard  --namespace=kube-system
    kubectl delete role kubernetes-dashboard-minimal --namespace=kube-system
    kubectl delete rolebinding kubernetes-dashboard-minimal --namespace=kube-system
    kubectl delete sa kubernetes-dashboard --namespace=kube-system
    kubectl delete secret kubernetes-dashboard-certs --namespace=kube-system
    kubectl delete secret kubernetes-dashboard-csrf --namespace=kube-system
    kubectl delete secret kubernetes-dashboard-key-holder --namespace=kube-system
    

    Install Harbor

    Installer: https://github.com/goharbor/harbor/releases
    Reference: https://www.freesion.com/article/16241153409/
    Edit /etc/docker/daemon.json:

    {
        "exec-opts": ["native.cgroupdriver=systemd"],
        "log-driver": "json-file",
        "log-opts": {
            "max-size": "100m"
        },
        "storage-driver": "overlay2",
        "insecure-registries": ["https://hub.xxx.cn:30002"]
    }
    

    After extracting the package, rename harbor.simple.yml to harbor.yml; the content looks roughly like:

    hostname: hub.xxx.cn
    http:
      port: 80
    https:
      # https port for harbor, default is 443
      port: 443
      # The path of cert and key files for nginx
      certificate: /etc/pki/server.crt
      private_key: /etc/pki/harbor/server.key
    
    harbor_admin_password: Harbor12345
    ......
    

    Create the HTTPS certificate and set permissions on the relevant directories

    openssl genrsa -des3 -out server.key 2048
    openssl req -new -key server.key -out server.csr
    cp server.key server.key.org
    openssl rsa -in server.key.org -out server.key
    openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
    mkdir /data/cert
    chmod -R 777 /data/cert
    
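Before pointing harbor.yml at these files, it is worth confirming the generated certificate's subject and validity window:

```shell
# Inspect the freshly generated self-signed certificate
openssl x509 -in server.crt -noout -subject -dates
```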

    Run the installer script:

    ./install.sh
    

    The default admin username / password is admin / Harbor12345.

  • Original article: https://blog.csdn.net/u011643449/article/details/126241671