• (2022 Edition) One Course from K8s Installation to Practice | K8s Cluster Installation (Binary)


    Video source: the Bilibili course 《(2022版)最新、最全、最详细的Kubernetes(K8s)教程,从K8s安装到实战一套搞定》

    These are the instructor's course contents and my lab notes, organized as I studied and shared here. Content will be removed on request from the rights holder. Thanks for your support!

    Index post: (2022版)一套教程搞定k8s安装到实战 | 汇总 (COCOgsta's blog, CSDN)


    Basic Environment Configuration

    Host information. Do not assign server IP addresses via DHCP; configure static IPs.

    The VIP (virtual IP) must not collide with an IP already in use on the company intranet: ping it first, and only use it if there is no reply. The VIP must be on the same LAN as the hosts!

    192.168.1.107 k8s-master01 # 2C2G 20G
    192.168.1.108 k8s-master02 # 2C2G 20G
    192.168.1.109 k8s-master03 # 2C2G 20G
    192.168.1.236 k8s-master-lb # VIP: a floating IP that consumes no machine resources # if this is not an HA cluster, use Master01's IP here
    192.168.1.110 k8s-node01 # 2C2G 20G
    192.168.1.111 k8s-node02 # 2C2G 20G

    K8s Service CIDR: 10.96.0.0/12

    K8s Pod CIDR: 172.16.0.0/12

    OS environment:

    [root@master01 ~]# cat /etc/redhat-release
    CentOS Linux release 7.9.2009 (Core)
    [root@master01 ~]#

    VM environment:

    Configure the hosts file on all nodes

    [root@master01 ~]# cat /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.1.107 k8s-master01
    192.168.1.108 k8s-master02
    192.168.1.109 k8s-master03
    192.168.1.236 k8s-master-lb
    192.168.1.110 k8s-node01
    192.168.1.111 k8s-node02
    [root@master01 ~]#

    Configure the CentOS 7 yum repositories as follows:

    curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

    Install the required tools

    yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

    Disable firewalld, dnsmasq, and SELinux on all nodes

    systemctl disable --now firewalld
    systemctl disable --now dnsmasq
    setenforce 0
    sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
    sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

    Disable swap on all nodes and comment out the swap entry in fstab

    swapoff -a && sysctl -w vm.swappiness=0
    sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

    Synchronize time on all nodes

    Install ntpdate

    rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
    yum install ntpdate -y

    Time synchronization is configured on all nodes as follows:

    ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    echo 'Asia/Shanghai' > /etc/timezone
    ntpdate time2.aliyun.com
    # add to crontab
    crontab -e
    */5 * * * * ntpdate time2.aliyun.com
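    If you prefer not to open an interactive editor on every node, a non-interactive equivalent is sketched below (an assumption on my part, not from the course: it appends the same entry to root's crontab and assumes ntpdate is at /usr/sbin/ntpdate):

    # append the sync job to the existing crontab without opening an editor
    (crontab -l 2>/dev/null; echo '*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com >/dev/null 2>&1') | crontab -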

    Configure limits on all nodes:

    ulimit -SHn 65535
    vim /etc/security/limits.conf
    # append the following at the end
    * soft nofile 655360
    * hard nofile 655360
    * soft nproc 655350
    * hard nproc 655350
    * soft memlock unlimited
    * hard memlock unlimited

    Set up key-based (passwordless) SSH from Master01 to the other nodes. The configuration files and certificates generated during installation are all created on Master01, and the cluster is also managed from Master01; on Alibaba Cloud or AWS a separate kubectl server is required. Generate the key as follows:

    [root@k8s-master01 ~]# ssh-keygen -t rsa

    Configure passwordless login from Master01 to the other nodes

    for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

    Download the installation files on Master01

    git config --global --unset http.proxy
    git config --global --unset https.proxy
    git clone https://github.com/dotbalo/k8s-ha-install.git

    Upgrade the system on all nodes and reboot. The kernel is not upgraded here; for a kernel upgrade see the earlier post in 一套教程搞定k8s安装到实战.

    yum update -y --exclude=kernel* --skip-broken && reboot # CentOS 7 needs this upgrade; on CentOS 8, upgrade as needed

    Basic Component Installation

    This section installs the various components the cluster depends on, such as Docker CE and the Kubernetes components.

    Install Docker

    Install Docker CE 19.03 on all nodes

    yum install docker-ce-19.03.* -y

    Tip:

    Newer kubelet versions recommend systemd as the cgroup driver, so change Docker's CgroupDriver to systemd

    mkdir /etc/docker
    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF

    Enable Docker to start on boot on all nodes:

    systemctl daemon-reload && systemctl enable --now docker
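    To confirm the cgroup driver change took effect, a quick check is sketched below (expected output is "Cgroup Driver: systemd"):

    # query Docker's runtime info and filter for the cgroup driver line
    docker info 2>/dev/null | grep -i 'cgroup driver'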

    Install K8s and etcd

    Download the Kubernetes packages on Master01

    wget https://dl.k8s.io/v1.20.0/kubernetes-server-linux-amd64.tar.gz

    Download the etcd package

    wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz

    Extract the Kubernetes binaries

    tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

    Extract the etcd binaries

    tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.13-linux-amd64/etcd{,ctl}

    Check the versions

    [root@k8s-master01 ~]# kubelet --version
    Kubernetes v1.20.0
    [root@k8s-master01 ~]#
    [root@k8s-master01 ~]# etcdctl version
    etcdctl version: 3.4.13
    API version: 3.4
    [root@k8s-master01 ~]#

    Send the binaries to the other nodes

    MasterNodes='k8s-master02 k8s-master03'
    WorkNodes='k8s-node01 k8s-node02'
    for NODE in $MasterNodes;do echo $NODE;scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/;scp /usr/local/bin/etcd* $NODE:/usr/local/bin;done
    for NODE in $WorkNodes;do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin;done

    Create the /opt/cni/bin directory on all nodes

    mkdir -p /opt/cni/bin

    Switch the k8s-ha-install repo to the matching branch

    git branch -a
    git checkout manual-installation-v1.20.x

    Generate Certificates

    This is the most critical part of a binary installation: one wrong step ruins everything, so make sure every single step is correct.

    Download the certificate generation tools on Master01

    wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
    wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
    chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

    Create the etcd certificate directory on all Master nodes

    mkdir /etc/etcd/ssl -p

    Create the Kubernetes directories on all nodes

    mkdir -p /etc/kubernetes/pki

    Generate the etcd certificates on Master01

    The CSR files are certificate signing requests; they configure things like domain names, company, and organizational unit

    cd /root/k8s-ha-install/pki
    # generate the CA certificate and its key
    cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
    # I once hit an etcd startup failure here caused by a hostname problem; it was resolved by removing unused IP addresses and replacing node domain names with IP addresses
    cfssl gencert -ca=/etc/etcd/ssl/etcd-ca.pem -ca-key=/etc/etcd/ssl/etcd-ca-key.pem -config=ca-config.json -hostname=127.0.0.1,192.168.1.107,192.168.1.108,192.168.1.109 -profile=kubernetes etcd-csr.json|cfssljson -bare /etc/etcd/ssl/etcd
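    To avoid the hostname pitfall mentioned in the comment above, you can verify which IPs and names actually ended up in the certificate. A quick inspection is sketched below (an addition of mine, using the openssl that ships with CentOS):

    # print the SAN entries baked into the etcd server certificate
    openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'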

    Copy the certificates to the other Master nodes

    MasterNodes='k8s-master02 k8s-master03'
    WorkNodes='k8s-node01 k8s-node02'
    for NODE in $MasterNodes;do
        ssh $NODE "mkdir -p /etc/etcd/ssl"
        for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem;do
            scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
        done
    done

    Generate the Kubernetes certificates on Master01

    cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
    # 10.96.0.1 is the first IP of the k8s Service CIDR; if you change the Service CIDR, change 10.96.0.1 accordingly
    cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,192.168.1.236,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.1.107,192.168.1.108,192.168.1.109 -profile=kubernetes apiserver-csr.json|cfssljson -bare /etc/kubernetes/pki/apiserver
    cfssl gencert -initca front-proxy-ca-csr.json|cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
    # generate the apiserver aggregation certificate. Requestheader-client-xxx request-allowed-xxx: aggregator
    cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json|cfssljson -bare /etc/kubernetes/pki/front-proxy-client
    # generate the controller-manager certificate
    cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes manager-csr.json|cfssljson -bare /etc/kubernetes/pki/controller-manager
    # set-cluster: define a cluster entry
    kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.236:8443 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
    # set-context: define a context entry
    kubectl config set-context system:kube-controller-manager@kubernetes --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
    # set-credentials: define a user entry
    kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/controller-manager.pem --client-key=/etc/kubernetes/pki/controller-manager-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
    # use-context: make a context the default
    kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
    cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
    kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.236:8443 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
    kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/pki/scheduler.pem --client-key=/etc/kubernetes/pki/scheduler-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
    kubectl config set-context system:kube-scheduler@kubernetes --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
    kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
    cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
    kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.236:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig
    kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig
    kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig
    kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
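    A generated kubeconfig can be sanity-checked before moving on (a small extra step, not from the course; the same command works for the controller-manager and scheduler files):

    # print the cluster, user, and context entries of the admin kubeconfig
    kubectl config view --kubeconfig=/etc/kubernetes/admin.kubeconfig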

    Create the ServiceAccount Key

    openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
    openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
    for NODE in k8s-master02 k8s-master03;do
        for FILE in $(ls /etc/kubernetes/pki | grep -v etcd);do
            scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
        done;
        for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig;do
            scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
        done;
    done

    # confirm that 23 files were generated
    [root@k8s-master01 pki]# ls /etc/kubernetes/pki | wc -l
    23
    [root@k8s-master01 pki]#

    Kubernetes System Component Configuration

    etcd configuration

    The etcd configuration is largely the same on every node; just adjust the hostname and IP addresses in each Master node's etcd config

    Master01

    vim /etc/etcd/etcd.config.yml
    name: 'k8s-master01'
    data-dir: /var/lib/etcd
    wal-dir: /var/lib/etcd/wal
    snapshot-count: 5000
    heartbeat-interval: 100
    election-timeout: 1000
    quota-backend-bytes: 0
    listen-peer-urls: 'https://192.168.1.107:2380'
    listen-client-urls: 'https://192.168.1.107:2379,http://127.0.0.1:2379'
    max-snapshots: 3
    max-wals: 5
    cors:
    initial-advertise-peer-urls: 'https://192.168.1.107:2380'
    advertise-client-urls: 'https://192.168.1.107:2379'
    discovery:
    discovery-fallback: 'proxy'
    discovery-proxy:
    discovery-srv:
    initial-cluster: 'k8s-master01=https://192.168.1.107:2380,k8s-master02=https://192.168.1.108:2380,k8s-master03=https://192.168.1.109:2380'
    initial-cluster-token: 'etcd-k8s-cluster'
    initial-cluster-state: 'new'
    strict-reconfig-check: false
    enable-v2: true
    enable-pprof: true
    proxy: 'off'
    proxy-failure-wait: 5000
    proxy-refresh-interval: 30000
    proxy-dial-timeout: 1000
    proxy-write-timeout: 0
    client-transport-security:
      cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
      key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
      client-cert-auth: true
      trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
      auto-tls: true
    peer-transport-security:
      cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
      key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
      peer-client-cert-auth: true
      trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
      auto-tls: true
    debug: false
    log-package-levels:
    log-outputs: [default]
    force-new-cluster: false

    Master02

    vim /etc/etcd/etcd.config.yml
    name: 'k8s-master02'
    data-dir: /var/lib/etcd
    wal-dir: /var/lib/etcd/wal
    snapshot-count: 5000
    heartbeat-interval: 100
    election-timeout: 1000
    quota-backend-bytes: 0
    listen-peer-urls: 'https://192.168.1.108:2380'
    listen-client-urls: 'https://192.168.1.108:2379,http://127.0.0.1:2379'
    max-snapshots: 3
    max-wals: 5
    cors:
    initial-advertise-peer-urls: 'https://192.168.1.108:2380'
    advertise-client-urls: 'https://192.168.1.108:2379'
    discovery:
    discovery-fallback: 'proxy'
    discovery-proxy:
    discovery-srv:
    initial-cluster: 'k8s-master01=https://192.168.1.107:2380,k8s-master02=https://192.168.1.108:2380,k8s-master03=https://192.168.1.109:2380'
    initial-cluster-token: 'etcd-k8s-cluster'
    initial-cluster-state: 'new'
    strict-reconfig-check: false
    enable-v2: true
    enable-pprof: true
    proxy: 'off'
    proxy-failure-wait: 5000
    proxy-refresh-interval: 30000
    proxy-dial-timeout: 1000
    proxy-write-timeout: 0
    client-transport-security:
      cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
      key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
      client-cert-auth: true
      trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
      auto-tls: true
    peer-transport-security:
      cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
      key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
      peer-client-cert-auth: true
      trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
      auto-tls: true
    debug: false
    log-package-levels:
    log-outputs: [default]
    force-new-cluster: false

    Master03

    vim /etc/etcd/etcd.config.yml
    name: 'k8s-master03'
    data-dir: /var/lib/etcd
    wal-dir: /var/lib/etcd/wal
    snapshot-count: 5000
    heartbeat-interval: 100
    election-timeout: 1000
    quota-backend-bytes: 0
    listen-peer-urls: 'https://192.168.1.109:2380'
    listen-client-urls: 'https://192.168.1.109:2379,http://127.0.0.1:2379'
    max-snapshots: 3
    max-wals: 5
    cors:
    initial-advertise-peer-urls: 'https://192.168.1.109:2380'
    advertise-client-urls: 'https://192.168.1.109:2379'
    discovery:
    discovery-fallback: 'proxy'
    discovery-proxy:
    discovery-srv:
    initial-cluster: 'k8s-master01=https://192.168.1.107:2380,k8s-master02=https://192.168.1.108:2380,k8s-master03=https://192.168.1.109:2380'
    initial-cluster-token: 'etcd-k8s-cluster'
    initial-cluster-state: 'new'
    strict-reconfig-check: false
    enable-v2: true
    enable-pprof: true
    proxy: 'off'
    proxy-failure-wait: 5000
    proxy-refresh-interval: 30000
    proxy-dial-timeout: 1000
    proxy-write-timeout: 0
    client-transport-security:
      cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
      key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
      client-cert-auth: true
      trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
      auto-tls: true
    peer-transport-security:
      cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
      key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
      peer-client-cert-auth: true
      trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
      auto-tls: true
    debug: false
    log-package-levels:
    log-outputs: [default]
    force-new-cluster: false

    Create the Service

    Create and start the etcd service on all Master nodes

    vi /usr/lib/systemd/system/etcd.service
    [Unit]
    Description=Etcd Service
    Documentation=https://coreos.com/etcd/docs/latest/
    After=network.target

    [Service]
    Type=notify
    ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
    Restart=on-failure
    RestartSec=10
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    Alias=etcd3.service

    Create the etcd certificate directory on all Master nodes and start etcd

    mkdir /etc/kubernetes/pki/etcd
    ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd
    systemctl daemon-reload
    systemctl enable --now etcd

    Check etcd status

    [root@k8s-master01 etcd]# export ETCDCTL_API=3
    [root@k8s-master01 pki]# etcdctl --endpoints="192.168.1.109:2379,192.168.1.108:2379,192.168.1.107:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
    +--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
    |      ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
    +--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
    | 192.168.1.109:2379 | 40ebe70dffb00505 | 3.4.13  |   20 kB |     false |      false |        14 |          9 |                  9 |        |
    | 192.168.1.108:2379 | 4087302bf9046cf1 | 3.4.13  |   20 kB |     false |      false |        14 |          9 |                  9 |        |
    | 192.168.1.107:2379 | 8af8195104313994 | 3.4.13  |   20 kB |      true |      false |        14 |          9 |                  9 |        |
    +--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
    [root@k8s-master01 pki]#
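    A complementary liveness check uses the same TLS flags with the health subcommand (a sketch I'm adding; all three members should report healthy):

    # probe each member and report whether it answers within the timeout
    etcdctl --endpoints="192.168.1.109:2379,192.168.1.108:2379,192.168.1.107:2379" \
        --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
        --cert=/etc/kubernetes/pki/etcd/etcd.pem \
        --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health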

    High Availability Configuration

    High availability configuration (note: if this is not an HA cluster, haproxy and keepalived do not need to be installed)

    If you are installing in the cloud, this section can also be skipped; use the cloud provider's load balancer instead, e.g. Alibaba Cloud SLB or Tencent Cloud ELB.

    Install keepalived and haproxy on all Master nodes

    yum install keepalived haproxy -y

    Configure HAProxy on all Masters; the configuration is identical on every node

    vi /etc/haproxy/haproxy.cfg
    global
      maxconn 2000
      ulimit-n 16384
      log 127.0.0.1 local0 err
      stats timeout 30s

    defaults
      log global
      mode http
      option httplog
      timeout connect 5000
      timeout client 50000
      timeout server 50000
      timeout http-request 15s
      timeout http-keep-alive 15s

    frontend k8s-master
      bind 0.0.0.0:8443
      bind 127.0.0.1:8443
      mode tcp
      option tcplog
      tcp-request inspect-delay 5s
      default_backend k8s-master

    backend k8s-master
      mode tcp
      option tcplog
      option tcp-check
      balance roundrobin
      default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
      server k8s-master01 192.168.1.107:6443 check
      server k8s-master02 192.168.1.108:6443 check
      server k8s-master03 192.168.1.109:6443 check

    Master01 keepalived

    Configure keepalived on all Master nodes. Unlike HAProxy, the configuration differs per node, so keep them distinct. Edit /etc/keepalived/keepalived.conf, paying attention to each node's IP and network interface (the interface parameter).

    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script chk_apiserver {
        script "/etc/keepalived/check_apiserver.sh"
        interval 5
        weight -5
        fall 2
        rise 1
    }
    vrrp_instance VI_1 {
        state MASTER
        interface eno16777736
        mcast_src_ip 192.168.1.107
        virtual_router_id 51
        priority 101
        nopreempt
        advert_int 2
        authentication {
            auth_type PASS
            auth_pass K8SHA_KA_AUTH
        }
        virtual_ipaddress {
            192.168.1.236
        }
        track_script {
            chk_apiserver
        }
    }

    Master02 keepalived

    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script chk_apiserver {
        script "/etc/keepalived/check_apiserver.sh"
        interval 5
        weight -5
        fall 2
        rise 1
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface eno16777736
        mcast_src_ip 192.168.1.108
        virtual_router_id 51
        priority 100
        nopreempt
        advert_int 2
        authentication {
            auth_type PASS
            auth_pass K8SHA_KA_AUTH
        }
        virtual_ipaddress {
            192.168.1.236
        }
        track_script {
            chk_apiserver
        }
    }

    Master03 keepalived

    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script chk_apiserver {
        script "/etc/keepalived/check_apiserver.sh"
        interval 5
        weight -5
        fall 2
        rise 1
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface eno16777736
        mcast_src_ip 192.168.1.109
        virtual_router_id 51
        priority 100
        nopreempt
        advert_int 2
        authentication {
            auth_type PASS
            auth_pass K8SHA_KA_AUTH
        }
        virtual_ipaddress {
            192.168.1.236
        }
        track_script {
            chk_apiserver
        }
    }

    Health check configuration

    On all Master nodes

    vi /etc/keepalived/check_apiserver.sh
    #!/bin/bash
    err=0
    for k in $(seq 1 3)
    do
        check_code=$(pgrep haproxy)
        if [[ $check_code == "" ]]; then
            err=$(expr $err + 1)
            sleep 1
            continue
        else
            err=0
            break
        fi
    done

    if [[ $err != "0" ]]; then
        echo "systemctl stop keepalived"
        /usr/bin/systemctl stop keepalived
        exit 1
    else
        exit 0
    fi

    chmod +x /etc/keepalived/check_apiserver.sh

    Start haproxy and keepalived on all Master nodes

    systemctl daemon-reload
    systemctl enable --now haproxy
    systemctl enable --now keepalived

    Test the VIP

    [root@k8s-master01 pki]# ping 192.168.1.236
    PING 192.168.1.236 (192.168.1.236) 56(84) bytes of data.
    64 bytes from 192.168.1.236: icmp_seq=1 ttl=64 time=0.033 ms
    64 bytes from 192.168.1.236: icmp_seq=2 ttl=64 time=0.081 ms
    64 bytes from 192.168.1.236: icmp_seq=3 ttl=64 time=0.077 ms
    ^C
    --- 192.168.1.236 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2042ms
    rtt min/avg/max/mdev = 0.033/0.063/0.081/0.023 ms
    [root@k8s-master01 pki]#

    Important: if keepalived and haproxy are installed, verify that keepalived is actually working

    [root@k8s-master01 pki]# telnet 192.168.1.236 8443
    Trying 192.168.1.236...
    Connected to 192.168.1.236.
    Escape character is '^]'.
    Connection closed by foreign host.
    [root@k8s-master01 pki]#

    If the VIP cannot be pinged and telnet does not show the '^]' escape prompt, treat the VIP as unusable and do not continue; troubleshoot keepalived first, e.g. the firewall and SELinux, the haproxy and keepalived service status, the listening ports, and so on.

    On all nodes, the firewall must be disabled and inactive: systemctl status firewalld

    On all nodes, SELinux must be disabled: getenforce

    On the Master nodes, check the haproxy and keepalived status: systemctl status keepalived haproxy

    Kubernetes System Component Configuration (continued)

    Apiserver

    Create the required directories on all nodes

    mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

    Create the kube-apiserver service on all Master nodes. # Note: if this is not an HA cluster, change 192.168.1.236 to Master01's address

    Master01 configuration

    Note: this document uses 10.96.0.0/12 as the k8s Service CIDR; it must not overlap with the host network or the Pod CIDR. Adjust as needed.

    vi /usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/kube-apiserver \
          --v=2 \
          --logtostderr=true \
          --allow-privileged=true \
          --bind-address=0.0.0.0 \
          --secure-port=6443 \
          --insecure-port=0 \
          --advertise-address=192.168.1.107 \
          --service-cluster-ip-range=10.96.0.0/12 \
          --service-node-port-range=30000-32767 \
          --etcd-servers=https://192.168.1.107:2379,https://192.168.1.108:2379,https://192.168.1.109:2379 \
          --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
          --etcd-certfile=/etc/etcd/ssl/etcd.pem \
          --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
          --client-ca-file=/etc/kubernetes/pki/ca.pem \
          --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
          --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
          --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
          --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
          --service-account-key-file=/etc/kubernetes/pki/sa.pub \
          --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
          --service-account-issuer=https://kubernetes.default.svc.cluster.local \
          --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
          --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
          --authorization-mode=Node,RBAC \
          --enable-bootstrap-token-auth=true \
          --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
          --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
          --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
          --requestheader-allowed-names=aggregator \
          --requestheader-group-headers=X-Remote-Group \
          --requestheader-extra-headers-prefix=X-Remote-Extra- \
          --requestheader-username-headers=X-Remote-User
    # --token-auth-file=/etc/kubernetes/token.csv
    Restart=on-failure
    RestartSec=10s
    LimitNOFILE=65535

    [Install]
    WantedBy=multi-user.target

    Master02 configuration

    vi /usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/kube-apiserver \
          --v=2 \
          --logtostderr=true \
          --allow-privileged=true \
          --bind-address=0.0.0.0 \
          --secure-port=6443 \
          --insecure-port=0 \
          --advertise-address=192.168.1.108 \
          --service-cluster-ip-range=10.96.0.0/12 \
          --service-node-port-range=30000-32767 \
          --etcd-servers=https://192.168.1.107:2379,https://192.168.1.108:2379,https://192.168.1.109:2379 \
          --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
          --etcd-certfile=/etc/etcd/ssl/etcd.pem \
          --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
          --client-ca-file=/etc/kubernetes/pki/ca.pem \
          --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
          --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
          --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
          --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
          --service-account-key-file=/etc/kubernetes/pki/sa.pub \
          --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
          --service-account-issuer=https://kubernetes.default.svc.cluster.local \
          --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
          --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
          --authorization-mode=Node,RBAC \
          --enable-bootstrap-token-auth=true \
          --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
          --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
          --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
          --requestheader-allowed-names=aggregator \
          --requestheader-group-headers=X-Remote-Group \
          --requestheader-extra-headers-prefix=X-Remote-Extra- \
          --requestheader-username-headers=X-Remote-User
    # --token-auth-file=/etc/kubernetes/token.csv
    Restart=on-failure
    RestartSec=10s
    LimitNOFILE=65535

    [Install]
    WantedBy=multi-user.target

    Master03 configuration

    vi /usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/kube-apiserver \
          --v=2 \
          --logtostderr=true \
          --allow-privileged=true \
          --bind-address=0.0.0.0 \
          --secure-port=6443 \
          --insecure-port=0 \
          --advertise-address=192.168.1.109 \
          --service-cluster-ip-range=10.96.0.0/12 \
          --service-node-port-range=30000-32767 \
          --etcd-servers=https://192.168.1.107:2379,https://192.168.1.108:2379,https://192.168.1.109:2379 \
          --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
          --etcd-certfile=/etc/etcd/ssl/etcd.pem \
          --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
          --client-ca-file=/etc/kubernetes/pki/ca.pem \
          --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
          --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
          --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
          --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
          --service-account-key-file=/etc/kubernetes/pki/sa.pub \
          --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
          --service-account-issuer=https://kubernetes.default.svc.cluster.local \
          --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
          --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
          --authorization-mode=Node,RBAC \
          --enable-bootstrap-token-auth=true \
          --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
          --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
          --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
          --requestheader-allowed-names=aggregator \
          --requestheader-group-headers=X-Remote-Group \
          --requestheader-extra-headers-prefix=X-Remote-Extra- \
          --requestheader-username-headers=X-Remote-User
    # --token-auth-file=/etc/kubernetes/token.csv
    Restart=on-failure
    RestartSec=10s
    LimitNOFILE=65535

    [Install]
    WantedBy=multi-user.target

    Start the apiserver

    Enable kube-apiserver on all Master nodes

    systemctl daemon-reload && systemctl enable --now kube-apiserver

    Check the kube-apiserver status

    [root@k8s-master01 pki]# systemctl status kube-apiserver
    ● kube-apiserver.service - Kubernetes API Server
    Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
    Active: active (running) since Tue 2022-06-21 21:15:19 CST; 11s ago
    Docs: https://github.com/kubernetes/kubernetes
    Main PID: 4604 (kube-apiserver)
    Tasks: 8
    Memory: 276.4M
    CGroup: /system.slice/kube-apiserver.service
    └─4604 /usr/local/bin/kube-apiserver --v=2 --logtostderr=true --allow-privileged=true --bind-address=0.0.0.0 --secure-port=6443 --insecure-port=0 --advertise-address=192.168.1.107 --serv...
    Jun 21 21:15:25 k8s-master-lb kube-apiserver[4604]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
    Jun 21 21:15:25 k8s-master-lb kube-apiserver[4604]: I0621 21:15:25.359623 4604 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
    Jun 21 21:15:25 k8s-master-lb kube-apiserver[4604]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
    Jun 21 21:15:25 k8s-master-lb kube-apiserver[4604]: I0621 21:15:25.457019 4604 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
    Jun 21 21:15:25 k8s-master-lb kube-apiserver[4604]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
    Jun 21 21:15:25 k8s-master-lb kube-apiserver[4604]: I0621 21:15:25.557168 4604 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
    Jun 21 21:15:25 k8s-master-lb kube-apiserver[4604]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
    Jun 21 21:15:25 k8s-master-lb kube-apiserver[4604]: W0621 21:15:25.714977 4604 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.1.107 192.168.1.108 192.168.1.109]
    Jun 21 21:15:25 k8s-master-lb kube-apiserver[4604]: I0621 21:15:25.718182 4604 controller.go:606] quota admission added evaluator for: endpoints
    Jun 21 21:15:25 k8s-master-lb kube-apiserver[4604]: I0621 21:15:25.726940 4604 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
    [root@k8s-master01 pki]#

    ControllerManager

    Configure the kube-controller-manager service on all Master nodes

    vi /usr/lib/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/kube-controller-manager \
          --v=2 \
          --logtostderr=true \
          --address=127.0.0.1 \
          --root-ca-file=/etc/kubernetes/pki/ca.pem \
          --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
          --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
          --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
          --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
          --leader-elect=true \
          --use-service-account-credentials=true \
          --node-monitor-grace-period=40s \
          --node-monitor-period=5s \
          --pod-eviction-timeout=2m0s \
          --controllers=*,bootstrapsigner,tokencleaner \
          --allocate-node-cidrs=true \
          --cluster-cidr=172.16.0.0/12 \
          --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
          --node-cidr-mask-size=24
    Restart=always
    RestartSec=10s

    [Install]
    WantedBy=multi-user.target

    Start kube-controller-manager on all Master nodes

    systemctl daemon-reload
    systemctl enable --now kube-controller-manager

    Check the startup status

    [root@k8s-master01 pki]# systemctl status kube-controller-manager
    ● kube-controller-manager.service - Kubernetes Controller Manager
    Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
    Active: active (running) since Tue 2022-06-21 21:16:16 CST; 11s ago
    Docs: https://github.com/kubernetes/kubernetes
    Main PID: 4721 (kube-controller)
    Tasks: 5
    Memory: 27.4M
    CGroup: /system.slice/kube-controller-manager.service
    └─4721 /usr/local/bin/kube-controller-manager --v=2 --logtostderr=true --address=127.0.0.1 --root-ca-file=/etc/kubernetes/pki/ca.pem --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pe...
    Jun 21 21:16:20 k8s-master-lb kube-controller-manager[4721]: I0621 21:16:20.705608 4721 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
    Jun 21 21:16:20 k8s-master-lb kube-controller-manager[4721]: I0621 21:16:20.855590 4721 controllermanager.go:554] Started "pv-protection"
    Jun 21 21:16:20 k8s-master-lb kube-controller-manager[4721]: I0621 21:16:20.855687 4721 controllermanager.go:539] Starting "replicaset"
    Jun 21 21:16:20 k8s-master-lb kube-controller-manager[4721]: I0621 21:16:20.855787 4721 pv_protection_controller.go:83] Starting PV protection controller
    Jun 21 21:16:20 k8s-master-lb kube-controller-manager[4721]: I0621 21:16:20.855807 4721 shared_informer.go:240] Waiting for caches to sync for PV protection
    Jun 21 21:16:21 k8s-master-lb kube-controller-manager[4721]: I0621 21:16:21.015750 4721 controllermanager.go:554] Started "replicaset"
    Jun 21 21:16:21 k8s-master-lb kube-controller-manager[4721]: I0621 21:16:21.015803 4721 controllermanager.go:539] Starting "nodeipam"
    Jun 21 21:16:21 k8s-master-lb kube-controller-manager[4721]: I0621 21:16:21.015894 4721 replica_set.go:182] Starting replicaset controller
    Jun 21 21:16:21 k8s-master-lb kube-controller-manager[4721]: I0621 21:16:21.015908 4721 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
    Jun 21 21:16:21 k8s-master-lb kube-controller-manager[4721]: I0621 21:16:21.055523 4721 node_ipam_controller.go:91] Sending events to api server.
    [root@k8s-master01 pki]#

    Scheduler

    Configure the kube-scheduler service on all Master nodes

    vi /usr/lib/systemd/system/kube-scheduler.service
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/kube-scheduler \
          --v=2 \
          --logtostderr=true \
          --address=127.0.0.1 \
          --leader-elect=true \
          --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
    Restart=always
    RestartSec=10s

    [Install]
    WantedBy=multi-user.target

    Then start kube-scheduler on all Master nodes

    systemctl daemon-reload
    systemctl enable --now kube-scheduler
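    With the apiserver, controller-manager, and scheduler all running, control-plane health can be spot-checked from Master01. This check is my addition, not from the course; admin.kubeconfig is only copied to /root/.kube/config in the bootstrap step below, so the --kubeconfig flag is passed explicitly here:

    # all three componentstatuses should report Healthy
    kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get cs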

    TLS Bootstrapping Configuration

    Create the bootstrap configuration on Master01

    # Note: if this is not an HA cluster, change 192.168.1.236:8443 to Master01's address and 8443 to the apiserver port, which is 6443 by default

    cd /root/k8s-ha-install/bootstrap
    kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.236:8443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
    kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
    kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
    kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

    Note: if you modify the token-id and token-secret in bootstrap.secret.yaml, the two strings (highlighted in the original post's screenshot, which is not reproduced here) must match each other and keep the same lengths, and the token used in the command above, c8ad9c.2e4d610cf3e7426e, must also match your modified values.

    mkdir -p /root/.kube; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
    kubectl create -f bootstrap.secret.yaml

    Node Configuration

    Copy certificates

    Copy the certificates from Master01 to the Node nodes

    cd /etc/kubernetes/
    for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
        ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
        for FILE in etcd-ca.pem etcd.pem etcd-key.pem;do
            scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
        done
        for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
            scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
        done
    done

    Kubelet configuration

    Create the required directories on all nodes

    mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

    Configure the kubelet service on all nodes

    vi /usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/kubernetes/kubernetes
    After=docker.service
    Requires=docker.service

    [Service]
    ExecStart=/usr/local/bin/kubelet
    Restart=always
    StartLimitInterval=0
    RestartSec=10

    [Install]
    WantedBy=multi-user.target

    Configure the kubelet service drop-in file on all nodes

    vi /etc/systemd/system/kubelet.service.d/10-kubelet.conf
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
    Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
    Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
    Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node=''"
    ExecStart=
    ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

    Create the kubelet configuration file

    Note: if you changed the k8s Service CIDR, update the clusterDNS setting in kubelet-conf.yml to the tenth address of the Service CIDR, e.g. 10.96.0.10

    vi /etc/kubernetes/kubelet-conf.yml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    address: 0.0.0.0
    port: 10250
    readOnlyPort: 10255
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.pem
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    cgroupDriver: systemd
    cgroupsPerQOS: true
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local
    containerLogMaxFiles: 5
    containerLogMaxSize: 10Mi
    contentType: application/vnd.kubernetes.protobuf
    cpuCFSQuota: true
    cpuManagerPolicy: none
    cpuManagerReconcilePeriod: 10s
    enableControllerAttachDetach: true
    enableDebuggingHandlers: true
    enforceNodeAllocatable:
    - pods
    eventBurst: 10
    eventRecordQPS: 5
    evictionHard:
      imagefs.available: 15%
      memory.available: 100Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    evictionPressureTransitionPeriod: 5m0s
    failSwapOn: true
    fileCheckFrequency: 20s
    hairpinMode: promiscuous-bridge
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 20s
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    imageMinimumGCAge: 2m0s
    iptablesDropBit: 15
    iptablesMasqueradeBit: 14
    kubeAPIBurst: 10
    kubeAPIQPS: 5
    makeIPTablesUtilChains: true
    maxOpenFiles: 1000000
    maxPods: 110
    nodeStatusUpdateFrequency: 10s
    oomScoreAdj: -999
    podPidsLimit: -1
    registryBurst: 10
    registryPullQPS: 5
    resolvConf: /etc/resolv.conf
    rotateCertificates: true
    runtimeRequestTimeout: 2m0s
    serializeImagePulls: true
    staticPodPath: /etc/kubernetes/manifests
    streamingConnectionIdleTimeout: 4h0m0s
    syncFrequency: 1m0s
    volumeStatsAggPeriod: 1m0s

    Start kubelet on all nodes

    systemctl daemon-reload
    systemctl enable --now kubelet

    Check the cluster status
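    The original post shows a screenshot here; the equivalent check from Master01 is sketched below (my addition: nodes remain NotReady until the CNI plugin, Calico, is installed later):

    # list the registered nodes and their readiness
    kubectl get node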

    kube-proxy configuration

    Run the following on Master01

    cd /root/k8s-ha-install
    kubectl -n kube-system create serviceaccount kube-proxy
    kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
    SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')
    JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET --output=jsonpath='{.data.token}'|base64 -d)
    PKI_DIR=/etc/kubernetes/pki
    K8S_DIR=/etc/kubernetes
    kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.236:8443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
    kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
    kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
    kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

    Send the kube-proxy systemd service file from Master01 to the other nodes

    If you changed the cluster's Pod CIDR, update the clusterCIDR: 172.16.0.0/12 parameter in kube-proxy/kube-proxy.conf to your Pod CIDR: vi kube-proxy/kube-proxy.conf (run from the k8s-ha-install directory)

    for NODE in k8s-master02 k8s-master03;do
        scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
        scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
        scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
    done
    for NODE in k8s-node01 k8s-node02;do
        scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
        scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
        scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
    done

    Start kube-proxy on all nodes

    systemctl daemon-reload
    systemctl enable --now kube-proxy

    Install Calico

    Run the following steps only on Master01

    cd /root/k8s-ha-install/calico/

    Modify the following places in calico-etcd.yaml

    sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.1.107:2379,https://192.168.1.108:2379,https://192.168.1.109:2379"#g' calico-etcd.yaml
    ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
    ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
    ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`
    sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
    sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
    # change this to your own Pod CIDR
    POD_SUBNET="172.16.0.0/12"
    sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
    kubectl apply -f calico-etcd.yaml

    Check the pod status
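    The original screenshot is omitted here; the check is sketched below (the calico-node and calico-kube-controllers pods should reach Running):

    # watch the Calico pods come up in kube-system
    kubectl get pods -n kube-system -o wide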

    Check the node status
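    Likewise, with the CNI in place all nodes should now report Ready:

    # all five nodes should show STATUS Ready
    kubectl get node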

    Install CoreDNS

    Install the matching version (recommended)

    cd /root/k8s-ha-install/

    If you changed the k8s Service CIDR, the CoreDNS service IP must be changed to the tenth IP of the Service CIDR

    sed -i "s#10.96.0.10#10.96.0.10#g" CoreDNS/coredns.yaml # replace the second 10.96.0.10 with your own cluster DNS IP; with the default CIDR this command is a no-op

    Install coredns

    kubectl create -f CoreDNS/coredns.yaml

    Install the latest CoreDNS

    git clone https://github.com/coredns/deployment.git
    cd deployment/kubernetes
    ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -

    # check the status
    kubectl get pods -n kube-system -l k8s-app=kube-dns

    Install Metrics Server

    In newer Kubernetes versions, system resource metrics are collected by metrics-server, which gathers memory, disk, CPU, and network usage for nodes and Pods.

    Install metrics-server

    cd /root/k8s-ha-install/metrics-server-0.4.x/
    kubectl create -f .

    Wait for metrics-server to start, then check the status
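    The status screenshot is omitted; once the metrics-server pod is Running, node metrics can be queried as sketched below (my addition; the first scrape can take a minute or so):

    # show CPU and memory usage per node, served by metrics-server
    kubectl top node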

    Install the Dashboard

    Dashboard deployment

    The Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and execute commands inside containers.

    Install the pinned Dashboard version (recommended)

    cd /root/k8s-ha-install/dashboard/
    kubectl create -f .

    Install the latest version

    Official GitHub repo:
    https://github.com/kubernetes/dashboard

    The latest Dashboard release can be found on the official GitHub page

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

    Create an admin user: vim admin.yaml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kube-system

    kubectl apply -f admin.yaml -n kube-system

    Log in to the Dashboard

    Add the following launch parameters to the Chrome startup shortcut to work around being unable to access the Dashboard

    --test-type --ignore-certificate-errors

    Change the Dashboard svc to NodePort:

    kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

    Change type: ClusterIP to type: NodePort (skip this step if it is already NodePort):

    Check the port number:
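    The port screenshot is omitted; the assigned NodePort can be read from the service as sketched below (my addition):

    # the PORT(S) column shows 443:<NodePort>/TCP
    kubectl get svc kubernetes-dashboard -n kubernetes-dashboard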

    Using your own instance's port number, the Dashboard can be reached through any host running kube-proxy, or through the VIP, at IP:port:

    Access the Dashboard at
    https://192.168.1.236:30825/#/login and choose Token as the sign-in method

    Retrieve the token

    [root@k8s-master-lb dashboard]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
    Name: admin-user-token-hzblx
    Namespace: kube-system
    Labels: <none>
    Annotations: kubernetes.io/service-account.name: admin-user
    kubernetes.io/service-account.uid: efd2583f-2cb9-4829-96e5-428901a2f931
    Type: kubernetes.io/service-account-token
    Data
    ====
    ca.crt: 1411 bytes
    namespace: 11 bytes
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IllRQVIxdm1zLWowdUp5aXdBdU9KNnBhbTQtMkhBRkJ1U092c0FvN0VPU3cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWh6Ymx4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlZmQyNTgzZi0yY2I5LTQ4MjktOTZlNS00Mjg5MDFhMmY5MzEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.gKfex-XB0Kj1B07d4duvKS3UHmXh2k17tda1HENefHGO92wz4mrYVdG6iwZ1L-_lbLgSYh-nD7XLr0DezvVgu-lb8JXYSlKNqoKlaG2w9FUtVgOxfrTJ_9o4PfiZUMbykGBUl4TwnPO4z6QQNEuCpuf1MEuxlCiXrMenYTTg4mtIMZMxXiZevTt69CNK5HboorCQ-_jArbzdRwevVuWzOP_3Sjv939X6pKyhUB0NE6tESywmvnZblpk8BNnY6wfWEPcJxwPYMPrlbZ3XHYH0Y63bo14MB-sC7lMzEOXTQioGA2IXNVDALcwwpvJ-1Ln7zqEtDfHJKg1cQsVr2tiKWA
    [root@k8s-master-lb dashboard]#

    Paste the token into the Token field and click Sign in to access the Dashboard

  • Original post: https://blog.csdn.net/guolianggsta/article/details/125409347