• A Beginner's Guide to Deploying Kubernetes (K8s Deployment Tutorial)


    Recommended configuration

    Master node: 2 vCPUs / 4 GB RAM; two worker nodes with 2 vCPUs / 2 GB RAM each are enough.
    The operating system used here is CentOS.

    1. Docker Environment Installation (Alibaba Cloud environment)

    1.1 Download the Docker repository configuration

    sudo wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    

    1.2 Install

    sudo yum -y install docker-ce
    

    1.3 Verify the installation

    sudo docker -v
    

    1.4 Start the Docker service

    sudo systemctl start docker
    

    1.5 Enable Docker to start on boot

    sudo systemctl enable docker
    

    1.6 Check the Docker service status

    sudo systemctl status docker
    
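    As an optional sanity check, you can run a throwaway container; this pulls the small hello-world image, so it assumes the host has outbound network access:

    sudo docker run --rm hello-world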

    2. Environment Configuration Before Installing K8s

    2.1 Set a hostname on each node

    hostnamectl set-hostname <node-name>
    
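    For example, with the recommended one-master/two-worker layout, the hostnames could be set like this (k8s-master, k8s-node1 and k8s-node2 are just illustrative names; pick your own):

    # on the master
    hostnamectl set-hostname k8s-master
    # on the first worker
    hostnamectl set-hostname k8s-node1
    # on the second worker
    hostnamectl set-hostname k8s-node2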

    2.2 Disable SELinux

    2.2.1 Temporary

    sudo setenforce 0
    # (If the output is "setenforce: SELinux is disabled", that is fine; it means SELinux is already off)
    

    2.2.2 Permanent

    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    
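    To confirm the current mode, getenforce should report Permissive (or Disabled) after the steps above:

    getenforce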

    2.3 Disable the swap partition

    2.3.1 Temporary

    swapoff -a  
    

    2.3.2 Permanent

    If you are not the root user, prefix the command with sudo.

    sed -ri 's/.*swap.*/#&/' /etc/fstab
    
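    To verify that swap is off, the Swap line reported by free should show all zeros:

    free -m | grep -i swap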

    2.4 Allow iptables to see bridged traffic

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF
    
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    

    2.5 Apply the configuration

    sudo sysctl --system
    
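    To double-check that the bridge settings from step 2.4 are active, query them directly; both values should print 1 (if the br_netfilter module is not loaded yet, load it with modprobe first):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables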

    3. Installing the Core Kubernetes Dependencies

    3.1 Configure the download source for kubelet, kubeadm, and kubectl

    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    exclude=kubelet kubeadm kubectl
    EOF
    

    3.2 Download and install

    sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
    
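    To confirm that the pinned 1.20.9 packages were installed, the version commands should all report v1.20.9:

    kubeadm version
    kubelet --version
    kubectl version --client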

    3.3 Start kubelet and enable it on boot

    sudo systemctl enable --now kubelet
    

    3.4 Check the kubelet status (since kubeadm init has not been run yet, either running or exited is normal)

    systemctl status kubelet
    

    3.5 Download the required images

    Create the download script:

    sudo tee ./images.sh <<-'EOF'
    #!/bin/bash
    images=(
    kube-apiserver:v1.20.9
    kube-proxy:v1.20.9
    kube-controller-manager:v1.20.9
    kube-scheduler:v1.20.9
    coredns:1.7.0
    etcd:3.4.13-0
    pause:3.2
    )
    for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
    done
    EOF
    

    Make it executable and run it:

    chmod +x ./images.sh && ./images.sh
    
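    Once the script finishes, the seven images should be present locally; a quick check (assuming the same Aliyun mirror prefix used in the script):

    sudo docker images | grep lfy_k8s_images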

    4. Initializing the Kubernetes Nodes

    4.1 Add the master node's hostname mapping on all nodes (use ip a to look up the master's IP address)

    echo "172.26.132.136  cluster-endpoint" >> /etc/hosts
    
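    Note that 172.26.132.136 is the example master IP; replace it with your own. To confirm the mapping resolves on each node (ICMP must be allowed between the machines for the ping to succeed):

    ping -c 3 cluster-endpoint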

    4.2 Initialize the master node (run on the master node only)

    Set --apiserver-advertise-address to your master's IP address; the --service-cidr and --pod-network-cidr ranges do not need to be changed.

    kubeadm init \
    --apiserver-advertise-address=172.26.132.136 \
    --control-plane-endpoint=cluster-endpoint \
    --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
    --kubernetes-version v1.20.9 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=192.168.0.0/16
    

    Successful installation output (be sure to save it; you will need it later to join the worker nodes):

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join cluster-endpoint:6443 --token 4612mg.3xomnt3l1zfhc6ye \
        --discovery-token-ca-cert-hash sha256:93c235c16e18e8f0db8cd4990343c70ec9ad16397154e52d50b6529e51b0514e \
        --control-plane 
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join cluster-endpoint:6443 --token 4612mg.3xomnt3l1zfhc6ye \
        --discovery-token-ca-cert-hash sha256:93c235c16e18e8f0db8cd4990343c70ec9ad16397154e52d50b6529e51b0514e 
    
    

    4.3 Configure kubectl access as suggested in the init output (run on the master node only)

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    Check the node registration status:

    kubectl get nodes
    

    4.4 Install the network plugin (run on the master node only)

    Download the manifest:

    curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
    

    Apply the manifest (important):

    kubectl apply -f calico.yaml
    

    List the applications deployed in the k8s cluster:

    kubectl get pods -A
    
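    It can take a few minutes for the calico and coredns pods to reach Running; you can watch their progress and then confirm the master node shows Ready (Ctrl+C stops the watch):

    kubectl get pods -A -w
    kubectl get nodes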

    If anything looks abnormal, check the component status (for troubleshooting only; skip if there are no issues):

    kubectl get cs
    

    If controller-manager and scheduler show Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused, this is because their manifest files set port to 0 by default, so the setting needs to be changed (for troubleshooting only; skip if there are no issues).

    # Go to the manifest directory
    cd /etc/kubernetes/manifests
    # List the files
    ls
    

    Edit kube-controller-manager.yaml and kube-scheduler.yaml (back them up before editing) and comment out the - --port=0 line (for troubleshooting only; skip if there are no issues).

    # For reference only, do not copy: kube-controller-manager.yaml
    spec:
      containers:
      - command:
        - kube-controller-manager
        - --allocate-node-cidrs=true
        - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
        - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
        - --bind-address=127.0.0.1
        - --client-ca-file=/etc/kubernetes/pki/ca.crt
        - --cluster-cidr=10.244.0.0/16
        - --cluster-name=kubernetes
        - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
        - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
        - --controllers=*,bootstrapsigner,tokencleaner
        - --experimental-cluster-signing-duration=876000h
        - --kubeconfig=/etc/kubernetes/controller-manager.conf
        - --leader-elect=true
        - --node-cidr-mask-size=24
      #    - --port=0   <- comment out this line
    
    # For reference only, do not copy: kube-scheduler.yaml
    spec:
      containers:
      - command:
        - kube-scheduler
        - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
        - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
        - --bind-address=127.0.0.1
        - --kubeconfig=/etc/kubernetes/scheduler.conf
        - --leader-elect=true
          #    - --port=0
    
    Restart kubelet (for troubleshooting only; skip if there are no issues):
    systemctl restart kubelet
    

    Finally, check the status again; if it is normal, you are done.

    4.5 Join the worker nodes

    Run the join command that kubeadm printed during initialization (look for the line "Then you can join any number of worker nodes by running the following on each as root"):

    kubeadm join cluster-endpoint:6443 --token 4612mg.3xomnt3l1zfhc6ye \
        --discovery-token-ca-cert-hash sha256:93c235c16e18e8f0db8cd4990343c70ec9ad16397154e52d50b6529e51b0514e 
    

    If you forgot to save the token printed earlier, you can generate a new one on the master node:

    kubeadm token create --print-join-command
    
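    After running the join command on each worker, switch back to the master and confirm that the new nodes appear; they move from NotReady to Ready once calico is running on them:

    kubectl get nodes -o wide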

    5. Deploying the Dashboard (run on the master node)

    5.1 Fetch the dashboard manifest and deploy it

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
    

    5.2 Change the dashboard access port configuration

    kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
    # Change type: ClusterIP to type: NodePort
    # Press i to enter insert mode; press Esc, then type :wq to save and quit
    
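    If you prefer a non-interactive alternative to editing the service in vi, the same change can be made with kubectl patch (an equivalent shortcut, not a required extra step):

    kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'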

    5.3 Find the random port generated for the dashboard and allow it through the firewall (security group)

    kubectl get svc -A |grep kubernetes-dashboard
    ## Find the port and allow it in the security group (this can be done in the cloud console)
    

    Access URL: https://<master node public IP>:<generated random port>

    If the browser shows a security warning, refresh the page and type thisisunsafe (with an English input method) to continue.

    5.4 Create an access account

    vi dash.yaml
    
    #Paste the following configuration into the file
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard
    

    Apply the manifest:

    kubectl apply -f dash.yaml
    
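    To confirm that the account and its cluster-admin binding were created before fetching the token:

    kubectl -n kubernetes-dashboard get serviceaccount admin-user
    kubectl get clusterrolebinding admin-user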

    5.5 Get the access token

    #Get the access token (copy everything up to, but not including, the root@ prompt that follows it)
    kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
    

    That concludes the deployment guide; a follow-up guide on using k8s will come later.

  • Original article: https://blog.csdn.net/xiaoai1994/article/details/134648633