• Installing Rancher and Deploying a Highly Available Kubernetes Cluster


    Contents

    I. What is Rancher?

    II. Preparation

    1. Hardware requirements (minimum)

    2. Set hostnames

    3. Disable SELinux, the firewall, and the swap partition

    4. Configure time synchronization

    5. Configure a directory for persistent system logs

    6. Install Docker

    III. Deploy Rancher

    1. Pull the rancher image

    2. Start Rancher

    3. Log in to Rancher

    4. Create a Rancher user

    IV. Deploy a highly available Kubernetes cluster

    1. Create the cluster

    2. Deploy and run an Nginx service

    3. Configure the kubectl command on the host

    I. What is Rancher?

             Rancher is an open-source, enterprise-grade multi-cluster Kubernetes management platform. It centralizes the deployment and management of Kubernetes clusters across hybrid clouds and on-premises data centers, helps keep those clusters secure, and accelerates enterprise digital transformation. With Rancher, an enterprise no longer has to assemble a container service platform from scratch out of a pile of open-source components: Rancher provides a full-stack platform for deploying and managing Docker and Kubernetes in production. Rancher was created by a US technology company and was later acquired by SUSE.

    II. Preparation

    1. Hardware requirements (minimum)

    Role            CPU   Memory   OS           IP
    rancher-server  2C    5G       CentOS 7.9   10.10.10.130
    k8s-master01    2C    3G       CentOS 7.9   10.10.10.120
    k8s-master02    2C    3G       CentOS 7.9   10.10.10.110
    k8s-node01      1C    2G       CentOS 7.9   10.10.10.100
    k8s-node02      1C    2G       CentOS 7.9   10.10.10.90

    2. Set hostnames

    # Set the machine name with the following command
    hostnamectl set-hostname <name>
    # Edit the hosts file (the format is IP followed by hostname)
    cat > /etc/hosts <<-'EOF'
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.10.10.130 rancher-server
    10.10.10.120 k8s-master01
    10.10.10.110 k8s-master02
    10.10.10.100 k8s-node01
    10.10.10.90  k8s-node02
    EOF
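    A quick, optional sanity check to confirm the hostname and /etc/hosts entries took effect on each node:

    # Should print the name set with hostnamectl above
    hostname
    # Should resolve to the IP recorded in /etc/hosts
    getent hosts k8s-master01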

    3. Disable SELinux, the firewall, and the swap partition

    # Disable the swap partition
    swapoff -a
    sed -ri 's/.*swap.*/#&/' /etc/fstab
    # Put SELinux into permissive mode (takes effect immediately, not persistent across reboots)
    setenforce 0
    # Disable the firewall
    systemctl stop firewalld
    systemctl disable firewalld

    4. Configure time synchronization

    # Install the time service (install and configure it on every machine)
    yum -y install chrony
    # Edit the configuration file
    cat > /etc/chrony.conf <<-'EOF'
    pool 10.10.10.130 iburst
    driftfile /var/lib/chrony/drift
    makestep 1.0 3
    rtcsync
    allow 10.10.10.0/24
    local stratum 10
    keyfile /etc/chrony.keys
    leapsectz right/UTC
    logdir /var/log/chrony
    EOF
    # Restart the time service and enable it at boot
    systemctl restart chronyd && systemctl enable chronyd
    # Step the system clock immediately
    chronyc -a makestep
    # Check the time sources and their statistics
    chronyc sources -v
    chronyc sourcestats -v
    # Restart services that depend on the system time
    systemctl restart rsyslog && systemctl restart crond
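    Optionally, verify that the nodes are actually synchronizing against rancher-server (10.10.10.130):

    # Reference ID, stratum and offset of the currently selected source
    chronyc tracking
    # The NTP / system clock synchronized field should read "yes"
    timedatectl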

    5. Configure a directory for persistent system logs

      Since CentOS 7 the system is booted with systemd, so two logging systems (rsyslogd and systemd-journald) run side by side. The steps below configure rsyslogd and systemd-journald so that only one log (journald) is kept.

    mkdir /var/log/journal
    cat > /etc/systemd/journald.conf <<-'EOF'
    [Journal]
    # Persist logs to disk
    Storage=persistent
    # Compress historical logs
    Compress=yes
    SyncIntervalSec=5m
    RateLimitInterval=30s
    RateLimitBurst=1000
    # Maximum disk usage: 10G
    SystemMaxUse=10G
    # Maximum size of a single log file: 200M
    SystemMaxFileSize=200M
    # Keep logs for 2 weeks
    MaxRetentionSec=2week
    # Do not forward logs to syslog
    ForwardToSyslog=no
    EOF
    # Restart journald to apply the configuration
    systemctl restart systemd-journald
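    To confirm journald is now persisting logs to disk:

    # Journal files should appear under /var/log/journal/<machine-id>/
    ls /var/log/journal
    # Reports how much disk space the persistent journal is using
    journalctl --disk-usage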

    6. Install Docker

    # Install common tools
    yum -y install vim net-tools lrzsz iptables curl wget git yum-utils device-mapper-persistent-data lvm2
    # Configure the Docker repository
    yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
    # Refresh the cache and install docker-ce (pinning a relatively stable version here)
    yum makecache fast
    yum -y install docker-ce-20.10.10-3.el7
    # Configure a Docker registry mirror
    mkdir -p /etc/docker
    cat > /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://26ahzfln.mirror.aliyuncs.com"]
    }
    EOF
    # Start Docker
    systemctl daemon-reload && systemctl restart docker && systemctl enable docker
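    To confirm the installation and that the daemon picked up the registry mirror:

    docker --version
    # The "Registry Mirrors" section should list the mirror configured above
    docker info | grep -A1 -i "registry mirrors"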

    III. Deploy Rancher

    1. Pull the rancher image

    # I chose Rancher 2.5; install whichever version you prefer (personally I find the 2.6 UI less friendly)
    # Image tags: https://hub.docker.com/r/rancher/rancher/tags?page=1&ordering=last_updated&name=2.5
    docker pull rancher/rancher:v2.5.15-linux-amd64

    2. Start Rancher

    # Create the rancher data directory
    mkdir -p /home/rancher
    # Start the container
    docker run -d --privileged -p 80:80 -p 443:443 -v /home/rancher:/var/lib/rancher/ --restart=always --name rancher2.5 rancher/rancher:v2.5.15-linux-amd64
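    Before moving on, confirm the container is up and watch the startup logs (the container name rancher2.5 comes from the command above):

    docker ps | grep rancher2.5
    # Follow the startup logs until the API and web UI are up
    docker logs -f rancher2.5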

    3. Log in to Rancher
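    On first login, open https://10.10.10.130 in a browser (accept the self-signed certificate), set a password for the default admin user, and confirm the Rancher server URL when prompted. A quick reachability check from the shell:

    # -k skips verification of Rancher's self-signed certificate
    curl -k -I https://10.10.10.130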

    4. Create a Rancher user

     

     Grant the new user the appropriate permissions based on its role.

    IV. Deploy a highly available Kubernetes cluster

    1. Create the cluster
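    Roughly, in the Rancher 2.5 UI this is done from Cluster Manager: Add Cluster -> "Existing nodes" (custom), give the cluster a name (ha-k8s here), pick the Kubernetes version and network provider, tick the roles each node should run (etcd / Control Plane / Worker), and Rancher then generates the docker run registration command shown below.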

      

     1) Add the master nodes

    # With two master nodes, run the copied command on both of them
    sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.5.15 --server https://10.10.10.130 --token 4r9nqk2c8v8cv7tb6krkz6lzmdcb4m5pxq5pj589r9bshv64nndgkm --ca-checksum 401d8c9f9c9d95ad9f330a051c2298301bcd82130dcb6842a02f9ed037496ff8 --etcd --controlplane --worker
    # This can take a while; the speed depends on your hardware and network

    2) Add the worker nodes

    # Run the copied command on each worker node
    sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.5.15 --server https://10.10.10.130 --token 4r9nqk2c8v8cv7tb6krkz6lzmdcb4m5pxq5pj589r9bshv64nndgkm --ca-checksum 401d8c9f9c9d95ad9f330a051c2298301bcd82130dcb6842a02f9ed037496ff8 --worker
    # Wait for the nodes to finish registering and become ready
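    While the nodes register, progress can be watched on each machine; the cluster should eventually show as Active in the Rancher UI:

    # The rancher-agent container should be in the "Up" state
    docker ps | grep rancher-agent
    # Registration progress and errors appear in the agent logs
    docker logs -f $(docker ps -q --filter ancestor=rancher/rancher-agent:v2.5.15)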

     2. Deploy and run an Nginx service

    You can define the workload to suit your own needs; below, the workload has already been created.

    # Write a test file
    [root@k8s-master01 ~]# cd /home/data/
    [root@k8s-master01 data]# echo "welcome to china" > index.html
    # Then access the page via the node IP and port
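    For reference, a roughly equivalent workload can also be created from the command line instead of the Rancher UI. This is only a sketch under a few assumptions: the pod mounts the hostPath directory /home/data (where index.html was written above) as the Nginx web root and is exposed through a NodePort; the name nginx-test, the image tag, and port 30080 are illustrative and not taken from the original setup.

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-test
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx-test
      template:
        metadata:
          labels:
            app: nginx-test
        spec:
          # hostPath content is node-local, so pin the pod to the node where index.html was written
          nodeName: k8s-master01
          containers:
          - name: nginx
            image: nginx:1.21
            ports:
            - containerPort: 80
            volumeMounts:
            - name: web
              # Serve the files written under /home/data on the node
              mountPath: /usr/share/nginx/html
          volumes:
          - name: web
            hostPath:
              path: /home/data
              type: DirectoryOrCreate
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-test
    spec:
      type: NodePort
      selector:
        app: nginx-test
      ports:
      - port: 80
        targetPort: 80
        nodePort: 30080
    EOF
    # Access the page through any node IP on the NodePort
    curl http://10.10.10.120:30080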

    Other operations on the container

    3. Configure the kubectl command on the host

    A Kubernetes cluster installed with Rancher cannot, by default, be managed directly from the command line with kubectl on the hosts. That becomes a real headache once Rancher itself goes down.

    1) Download the kubectl binary on the host

    [root@k8s-master02]# curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    [root@k8s-master02]# chmod +x ./kubectl
    [root@k8s-master02]# mv ./kubectl /usr/local/bin/kubectl
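    A quick check that the binary is on the PATH (there is no cluster access yet at this point):

    kubectl version --client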

    2) Copy the cluster's kubeconfig file from Rancher

     

    [root@k8s-master02 ~]# mkdir ~/.kube
    [root@k8s-master02 ~]# vim ~/.kube/config
    apiVersion: v1
    kind: Config
    clusters:
    - name: "ha-k8s"
      cluster:
        server: "https://10.10.10.130/k8s/clusters/c-4x285"
        certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJwekNDQ\
          VUyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQTdNUnd3R2dZRFZRUUtFeE5rZVc1aGJXbGoKY\
          kdsemRHVnVaWEl0YjNKbk1Sc3dHUVlEVlFRREV4SmtlVzVoYldsamJHbHpkR1Z1WlhJdFkyRXdIa\
          GNOTWpJdwpOekF4TURJeE1qRXlXaGNOTXpJd05qSTRNREl4TWpFeVdqQTdNUnd3R2dZRFZRUUtFe\
          E5rZVc1aGJXbGpiR2x6CmRHVnVaWEl0YjNKbk1Sc3dHUVlEVlFRREV4SmtlVzVoYldsamJHbHpkR\
          1Z1WlhJdFkyRXdXVEFUQmdjcWhrak8KUFFJQkJnZ3Foa2pPUFFNQkJ3TkNBQVRTdnozV21oNG5GW\
          Wl2VDlmb3dDaFR2c0ZPWi8zVmVidldGMkZna3BwNQpFVXZSck5VeWczcVVLVnBicmJBc0xCKzd4S\
          FpaRHQrR0MwSDdnbE0xR2dvcW8wSXdRREFPQmdOVkhROEJBZjhFCkJBTUNBcVF3RHdZRFZSMFRBU\
          UgvQkFVd0F3RUIvekFkQmdOVkhRNEVGZ1FVTFd3RTNrZlV5bTBJSUwvWGVZaDQKT1U2L2o0SXdDZ\
          1lJS29aSXpqMEVBd0lEU0FBd1JRSWhBSUMrY1FHNU9mZzN1Wm9tTmpFNTRSM245Vi9WS2J1bAppb\
          CtvcHVzZ1pHc2lBaUExbzF3RUkvOUluVVpNOXRJYkJpL2poMVdBNVduVUdnODNkU244UXVnWC93P\
          T0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ=="
    - name: "ha-k8s-k8s-master01"
      cluster:
        server: "https://10.10.10.120:6443"
        certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0VENDQ\
          WNtZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKT\
          FdOaE1CNFhEVEl5TURjd01UQXlNemt3TlZvWERUTXlNRFl5T0RBeU16a3dOVm93RWpFUU1BNEdBM\
          VVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ\
          0VCQU01UVAwemllb1U5ClZtN0VyU1YwZ2ZRZWJoKzBIUXFXcE4vdHdzNXQrd1BMRG04amFkZ1g3a\
          E43d2dtMmRZV3M0UE9QQjYralg3WmkKb1pnRVR1dHVDZU9MZTQ3dmY4aGIzdXhKeU4zYXlBODZ5T\
          GhncFI2YXpBNUwyUG45eFZWNDlBT01ManZTYVRWeAowQU1NWHVDd2tsM3ZTVVYzWVYwbVlXS21IQ\
          3RFbU9nTitCS3JPUjYxbEcveEVVU0M3SHpGOWxXS25iVlhNZVJQCnBTK3R4REo2U0tOL0E5ZTdOM\
          lFtUUN1M0xZVFVVU0dNQzYvQjlML0dIZTZBUVVjVTZ3VHdzVkZBRVdZZWdnWEYKandpQzBuUkFxb\
          FdoOFpmaC9wOTV5VWdyekdJOXduNy9TcURIWFBYdFhraTltK1FoOHJYc3dLRnptdkRZOGNRMgpvU\
          Hc5cEhqN1RFMENBd0VBQWFOQ01FQXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93U\
          UZNQU1CCkFmOHdIUVlEVlIwT0JCWUVGSFJxR1JtS1ZFUUVQNjh6azlYR1Iwd2IyUFA3TUEwR0NTc\
          UdTSWIzRFFFQkN3VUEKQTRJQkFRQnB5N3FRd3NyYVdhbkZLSWFTZDJTN3ZzQnVoS2hvdTBYNUhxU\
          FFOa1FNZFJwYVlZQVRQclc0Y0p4WQpzVHZOT0E4Y09GREdYVlc2U3NvN0JnalBlWWdTY08xQnBCV\
          0xlbEYrUFA1YkhDYVExODhTZEFtbmlGSCszSWczClB1Q0RFc1RuUlArZlMxNmFxeGRGZzRkT0FyW\
          mF5Rzk2K2VLclVGdzNxd0hzWjBqUTdVR1pzcFRDQ3Ard0ZLN0IKUVNCSktkRnMxS2FXd1FHSUxqU\
          nY4ZVg1R2lhY0lJUXB5ZXBiVno3TEtDNXlFUGRONUM2bGJaanVwWlRhY2RFVAowcDAvRDU0RS9RW\
          TlhM0RWaHYxVGJtdjJHT2xWUGhGalp5cnVWaGpuOUpGeEkrZHZxSXNKMEhIOUNUbXJKUHhDCndGa\
          3JuU25UK3FMcnUwQ1FOdG02SVlrc05KYzEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
    - name: "ha-k8s-k8s-master02"
      cluster:
        server: "https://10.10.10.110:6443"
        certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0VENDQ\
          WNtZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKT\
          FdOaE1CNFhEVEl5TURjd01UQXlNemt3TlZvWERUTXlNRFl5T0RBeU16a3dOVm93RWpFUU1BNEdBM\
          VVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ\
          0VCQU01UVAwemllb1U5ClZtN0VyU1YwZ2ZRZWJoKzBIUXFXcE4vdHdzNXQrd1BMRG04amFkZ1g3a\
          E43d2dtMmRZV3M0UE9QQjYralg3WmkKb1pnRVR1dHVDZU9MZTQ3dmY4aGIzdXhKeU4zYXlBODZ5T\
          GhncFI2YXpBNUwyUG45eFZWNDlBT01ManZTYVRWeAowQU1NWHVDd2tsM3ZTVVYzWVYwbVlXS21IQ\
          3RFbU9nTitCS3JPUjYxbEcveEVVU0M3SHpGOWxXS25iVlhNZVJQCnBTK3R4REo2U0tOL0E5ZTdOM\
          lFtUUN1M0xZVFVVU0dNQzYvQjlML0dIZTZBUVVjVTZ3VHdzVkZBRVdZZWdnWEYKandpQzBuUkFxb\
          FdoOFpmaC9wOTV5VWdyekdJOXduNy9TcURIWFBYdFhraTltK1FoOHJYc3dLRnptdkRZOGNRMgpvU\
          Hc5cEhqN1RFMENBd0VBQWFOQ01FQXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93U\
          UZNQU1CCkFmOHdIUVlEVlIwT0JCWUVGSFJxR1JtS1ZFUUVQNjh6azlYR1Iwd2IyUFA3TUEwR0NTc\
          UdTSWIzRFFFQkN3VUEKQTRJQkFRQnB5N3FRd3NyYVdhbkZLSWFTZDJTN3ZzQnVoS2hvdTBYNUhxU\
          FFOa1FNZFJwYVlZQVRQclc0Y0p4WQpzVHZOT0E4Y09GREdYVlc2U3NvN0JnalBlWWdTY08xQnBCV\
          0xlbEYrUFA1YkhDYVExODhTZEFtbmlGSCszSWczClB1Q0RFc1RuUlArZlMxNmFxeGRGZzRkT0FyW\
          mF5Rzk2K2VLclVGdzNxd0hzWjBqUTdVR1pzcFRDQ3Ard0ZLN0IKUVNCSktkRnMxS2FXd1FHSUxqU\
          nY4ZVg1R2lhY0lJUXB5ZXBiVno3TEtDNXlFUGRONUM2bGJaanVwWlRhY2RFVAowcDAvRDU0RS9RW\
          TlhM0RWaHYxVGJtdjJHT2xWUGhGalp5cnVWaGpuOUpGeEkrZHZxSXNKMEhIOUNUbXJKUHhDCndGa\
          3JuU25UK3FMcnUwQ1FOdG02SVlrc05KYzEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
    users:
    - name: "ha-k8s"
      user:
        token: "kubeconfig-user-m25dm.c-4x285:kxvdw7jdqpc8qgm9ttpxg49m4vz2s249b6sb49cfp6n4lr9nmc7nzm"
    contexts:
    - name: "ha-k8s"
      context:
        user: "ha-k8s"
        cluster: "ha-k8s"
    - name: "ha-k8s-k8s-master01"
      context:
        user: "ha-k8s"
        cluster: "ha-k8s-k8s-master01"
    - name: "ha-k8s-k8s-master02"
      context:
        user: "ha-k8s"
        cluster: "ha-k8s-k8s-master02"
    current-context: "ha-k8s"

    3) Test from the host

    [root@k8s-master02 ~]# kubectl get node
    NAME           STATUS   ROLES                      AGE     VERSION
    k8s-master01   Ready    controlplane,etcd,worker   3h27m   v1.19.16
    k8s-master02   Ready    controlplane,etcd,worker   3h27m   v1.19.16
    k8s-node01     Ready    worker                     150m    v1.19.16
    k8s-node02     Ready    worker                     170m    v1.19.16
    [root@k8s-master02 ~]# kubectl get ns
    NAME              STATUS   AGE
    cattle-system     Active   3h27m
    default           Active   3h28m
    fleet-system      Active   3h26m
    ingress-nginx     Active   3h27m
    kube-node-lease   Active   3h28m
    kube-public       Active   3h28m
    kube-system       Active   3h28m
    security-scan     Active   3h26m
    [root@k8s-master02 ~]# kubectl get pod -A
    NAMESPACE       NAME                                    READY   STATUS    RESTARTS   AGE
    cattle-system   cattle-cluster-agent-7697fd77df-vfssz   1/1     Running   4          3h27m
    cattle-system   cattle-node-agent-5kkkx                 1/1     Running   0          3h27m
  • Original article: https://blog.csdn.net/yeyslspi59/article/details/125540549