1. The Kubernetes zone is installed with kubeadm
1.3 Install kubeadm, kubelet, and kubectl on all nodes
1.4.1 Copy the images and scripts to the node machines and run the script on the nodes
Run the kubeadm join command on node01 and node02 to join the cluster
3. Write the yaml file for the Service and publish the Nginx service through a NodePort-type Service on TCP port 30000.
| Host | IP |
| --- | --- |
| k8s cluster node01 | 192.168.246.11 |
| k8s cluster node02 | 192.168.246.12 |
| k8s cluster master | 192.168.246.10 |
| Load balancer nginx+keepalived01 (master) | 192.168.246.13 |
| Load balancer nginx+keepalived02 (backup) | 192.168.246.14 |
| VIP | 192.168.246.100 |
| iptables firewall server | 192.168.246.7 |
| Client | 12.0.0.12 |
(1) The Kubernetes zone is installed with kubeadm.
(2) In the Kubernetes environment, create 2 Nginx Pods from a yaml file and place them on two different nodes. Each Pod mounts a hostPath volume sharing the node-local directory /data, and the two Pod replicas' test pages must differ so they can be told apart; the pages can be anything you define.
(3) Write the yaml for the corresponding Service and publish the Nginx service through a NodePort-type Service on TCP port 30000.
(4) In the load-balancing zone, configure Keepalived+Nginx for highly available load balancing, so the service published by K8S is reachable through VIP 192.168.246.100 and the chosen port.
(5) On the iptables firewall server, set up dual NICs and configure SNAT and DNAT translation so an external client can reach the internal web service via 12.0.0.1.
Disable firewall rules, disable selinux, and turn off swap
- systemctl stop firewalld
- systemctl disable firewalld
- setenforce 0
- sed -i 's/enforcing/disabled/' /etc/selinux/config
- iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
-
- swapoff -a                              #the swap partition must be turned off
- sed -ri 's/.*swap.*/#&/' /etc/fstab     #permanently disable swap; in sed, & stands for the matched text
-
- #load the ip_vs modules
- for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
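A quick way to confirm the modules actually loaded (a check of my own, not part of the original steps):
- lsmod | grep ip_vs          #ip_vs plus scheduler modules such as ip_vs_rr should be listed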
-
- #set the hostname (run the matching command on each machine)
- hostnamectl set-hostname master01
- hostnamectl set-hostname node01
- hostnamectl set-hostname node02
-
- #edit the hosts file on all nodes
- vim /etc/hosts
- 192.168.246.10 master01
- 192.168.246.11 node01
- 192.168.246.12 node02
-
- #adjust kernel parameters on all nodes (see the reconstructed sketch below)
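The original names this kernel-parameter step but its commands did not survive; a typical kubeadm prerequisite block (a reconstruction; verify the values against your own baseline) is:
- modprobe br_netfilter
- cat > /etc/sysctl.d/kubernetes.conf << EOF
- net.bridge.bridge-nf-call-ip6tables = 1
- net.bridge.bridge-nf-call-iptables = 1
- net.ipv4.ip_forward = 1
- EOF
- sysctl --system             #apply the new parameters
-
- #install Docker on all nodes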
- yum install -y yum-utils device-mapper-persistent-data lvm2
- yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
- yum install -y docker-ce docker-ce-cli containerd.io
-
- mkdir /etc/docker
-
- cat > /etc/docker/daemon.json <<EOF
- {
-   "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
-   "exec-opts": ["native.cgroupdriver=systemd"],
-   "log-driver": "json-file",
-   "log-opts": {
-     "max-size": "100m"
-   }
- }
- EOF
- #Use the systemd-managed cgroup for resource control: compared with cgroupfs, systemd is simpler, more mature, and more stable at limiting CPU, memory, and other resources.
- #Logs are stored as json-file, capped at 100M, under /var/log/containers, which makes them easy for log systems such as ELK to collect and manage.
-
- systemctl daemon-reload
- systemctl restart docker.service
- systemctl enable docker.service
-
- docker info | grep "Cgroup Driver"
- Cgroup Driver: systemd
- #define the kubernetes repo
- cat > /etc/yum.repos.d/kubernetes.repo << EOF
- [kubernetes]
- name=Kubernetes
- baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
- enabled=1
- gpgcheck=0
- repo_gpgcheck=0
- gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
- EOF
-
- yum install -y kubelet-1.20.11 kubeadm-1.20.11 kubectl-1.20.11
-
- #start kubelet at boot
- systemctl enable kubelet.service
- #after a kubeadm install, the K8S components all run as Pods, i.e. as containers underneath, so kubelet must be enabled to start at boot
- #list the images needed for initialization
- kubeadm config images list
-
- #on master01, upload the v1.20.11.zip archive to /opt
- unzip v1.20.11.zip -d /opt/k8s
- cd /opt/k8s/v1.20.11
- for i in $(ls *.tar); do docker load -i $i; done
-
- #copy the images and scripts to the node machines, then run the load script on each node
- scp -r /opt/k8s root@node01:/opt
- scp -r /opt/k8s root@node02:/opt
-
- #then run on each node
- for i in $(ls *.tar); do docker load -i $i; done
-
- Initialize kubeadm
-
- Method 1:
- kubeadm config print init-defaults > /opt/kubeadm-config.yaml
-
- cd /opt/
- vim kubeadm-config.yaml
- ......
- 11 localAPIEndpoint:
- 12   advertiseAddress: 192.168.246.10   #the master node's IP address
- 13   bindPort: 6443
- ......
- 34 kubernetesVersion: v1.20.11          #the kubernetes version number
- 35 networking:
- 36   dnsDomain: cluster.local
- 37   podSubnet: "10.244.0.0/16"         #pod network; 10.244.0.0/16 matches flannel's default subnet
- 38   serviceSubnet: 10.96.0.0/16        #service network
- 39 scheduler: {}                        #then append the following at the very end of the file
- ---
- apiVersion: kubeproxy.config.k8s.io/v1alpha1
- kind: KubeProxyConfiguration
- mode: ipvs                              #switch the default kube-proxy mode to ipvs
-
- kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
-
- #the --experimental-upload-certs flag distributes the certificate files automatically when nodes join later; since K8S v1.16 it has been renamed --upload-certs
- #tee kubeadm-init.log saves the output as a log
-
- #view the kubeadm-init log
- less kubeadm-init.log
-
- #kubernetes configuration directory
- ls /etc/kubernetes/
-
- #directory holding the CA and other certificates and keys
- ls /etc/kubernetes/pki
-
- #the tail of kubeadm-init.log looks like this:
- Your Kubernetes control-plane has initialized successfully!
-
- To start using your cluster, you need to run the following as a regular user:
-
- mkdir -p $HOME/.kube
- sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- sudo chown $(id -u):$(id -g) $HOME/.kube/config
-
- Alternatively, if you are the root user, you can run:
-
- export KUBECONFIG=/etc/kubernetes/admin.conf
-
- You should now deploy a pod network to the cluster.
- Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
- https://kubernetes.io/docs/concepts/cluster-administration/addons/
-
- Then you can join any number of worker nodes by running the following on each as root:
-
- kubeadm join 192.168.246.10:6443 --token abcdef.0123456789abcdef \
- --discovery-token-ca-cert-hash sha256:764f6f1c1cce7eef4fbfb1585c2e1353e04429de7c6163a4fab16a1f96a6b9a5
-
-
- #set up kubectl
- kubectl can only perform management operations after being authenticated and authorized by the API server. The kubeadm deployment generates an admin-privileged kubeconfig, /etc/kubernetes/admin.conf, which kubectl loads from the default path "$HOME/.kube/config".
-
- mkdir -p $HOME/.kube
- cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- chown $(id -u):$(id -g) $HOME/.kube/config
- #on node01 and node02, join the cluster
- kubeadm join 192.168.246.10:6443 --token abcdef.0123456789abcdef \
- --discovery-token-ca-cert-hash sha256:764f6f1c1cce7eef4fbfb1585c2e1353e04429de7c6163a4fab16a1f96a6b9a5
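With both nodes joined, membership can be confirmed from master01; a minimal check (the nodes stay NotReady until the flannel network add-on is deployed below):
- kubectl get nodes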
If kubectl get cs reports the cluster as unhealthy, edit the following two files
-
- vim /etc/kubernetes/manifests/kube-scheduler.yaml
-
- vim /etc/kubernetes/manifests/kube-controller-manager.yaml
- #make the following changes
- change --bind-address=127.0.0.1 to --bind-address=192.168.246.10
- #i.e. to the IP of the control-plane node master01
-
- change the host under the httpGet: fields from 127.0.0.1 to 192.168.246.10 (two places)
- #- --port=0          #search for port=0 and comment that line out
- systemctl restart kubelet
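After kubelet restarts, the components should report healthy again; a quick check:
- kubectl get cs              #controller-manager and scheduler should both show Healthy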
-
- cd /opt
- #upload the kuadmin.zip package
- unzip kuadmin.zip           #unpack
-
- #copy the images to the node machines
- scp flannel-cni-v1.2.0.tar flannel-v0.22.2.tar 192.168.246.11:/opt
- scp flannel-cni-v1.2.0.tar flannel-v0.22.2.tar 192.168.246.12:/opt
-
- #load the images (on each machine)
- docker load -i flannel-v0.22.2.tar
- docker load -i flannel-cni-v1.2.0.tar
-
- kubectl apply -f kube-flannel.yml
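To confirm the network add-on came up, something like the following can be run on master01 (exact pod names will differ):
- kubectl get pods -A | grep flannel     #one kube-flannel pod per node, all Running
- kubectl get nodes                      #all three nodes should now be Ready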
In the Kubernetes environment, create 2 Nginx Pods from a yaml file and place them on two different nodes. Each Pod mounts a hostPath volume sharing the node-local directory /data, and the two Pod replicas' test pages must differ so they can be told apart; the pages can be anything you define.
- [root@master01 ~]#kubectl run mynginx --image=nginx:1.14 --port=80 --dry-run=client -o yaml > nginx-pod.yaml
- [root@master01 ~]#cd /opt
- [root@master01 opt]#mkdir /opt/kaoshi
- [root@master01 opt]#cd
- [root@master01 ~]#mv nginx-pod.yaml /opt/kaoshi/
- [root@master01 ~]#cd /opt/kaoshi/
- [root@master01 kaoshi]#ls
- nginx-pod.yaml
- [root@master01 kaoshi]#vim nginx-pod.yaml
- apiVersion: v1
- kind: Pod
- metadata:
-   labels:
-     run: nginx
-   name: nginx01
- spec:
-   nodeName: node01
-   containers:
-   - image: nginx
-     name: nginx
-     ports:
-     - containerPort: 80
-     volumeMounts:
-     - name: node01-html
-       mountPath: /usr/share/nginx/html
-       readOnly: false
-   volumes:
-   - name: node01-html
-     hostPath:
-       path: /data
-       type: DirectoryOrCreate
-
- ---
- apiVersion: v1
- kind: Pod
- metadata:
-   labels:
-     run: nginx
-   name: nginx02
- spec:
-   nodeName: node02
-   containers:
-   - image: nginx
-     name: nginx
-     ports:
-     - containerPort: 80
-     volumeMounts:
-     - name: node02-html
-       mountPath: /usr/share/nginx/html
-       readOnly: false
-   volumes:
-   - name: node02-html
-     hostPath:
-       path: /data
-       type: DirectoryOrCreate
- [root@master01 kaoshi]#kubectl apply -f nginx-pod.yaml
- pod/nginx01 created
- pod/nginx02 created
Check the scheduling
#the two pods were created and scheduled onto different nodes
- [root@master01 kaoshi]#kubectl get pods -o wide|grep nginx
- nginx01 1/1 Running 0 3m46s 10.244.1.14 node01 <none> <none>
- nginx02 1/1 Running 0 3m46s 10.244.2.9 node02 <none> <none>
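The assignment requires the two test pages to differ. Since hostPath means each pod serves whatever sits in its own node's /data directory, writing a distinct index.html on each node is enough; the page text below is just an example:
- #on node01
- echo 'this is node01 page' > /data/index.html
- #on node02
- echo 'this is node02 page' > /data/index.html
- #back on master01, each pod now answers with its own node's page
- curl 10.244.1.14            #this is node01 page
- curl 10.244.2.9             #this is node02 page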
[root@master01 kaoshi]#vim pod-nginx-service.yaml
- apiVersion: v1
- kind: Service
- metadata:
-   labels:
-     run: nginx
-   name: nginx-service
- spec:
-   type: NodePort
-   ports:
-   - protocol: TCP
-     port: 80
-     targetPort: 80
-     nodePort: 30000
-   selector:
-     run: nginx
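Apply the Service and test it; with the selector matching the pods' run: nginx label, either node's IP on TCP 30000 should reach one of the pods:
- kubectl apply -f pod-nginx-service.yaml
- kubectl get svc nginx-service          #TYPE NodePort, PORT(S) 80:30000/TCP
- curl http://192.168.246.11:30000
- curl http://192.168.246.12:30000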
In the load-balancing zone, configure Keepalived+Nginx for highly available load balancing; the service published by K8S is reachable through VIP 192.168.246.100 and the chosen port.
| nginx+keepalived cluster | IP |
| --- | --- |
| Load balancer nginx+keepalived01 (master) | 192.168.246.13 |
| Load balancer nginx+keepalived02 (backup) | 192.168.246.14 |
Perform the following steps on both machines
- cat > /etc/yum.repos.d/nginx.repo << 'EOF'
- [nginx]
- name=nginx repo
- baseurl=http://nginx.org/packages/centos/7/$basearch/
- gpgcheck=0
- EOF
-
- yum install nginx -y
nginx's layer-7 upstream module would also work here; this article uses the stream module for layer-4 load balancing. Note that the stream {} block sits at the top level of /etc/nginx/nginx.conf, alongside the http {} block, not inside it.
- stream {
-     log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
-
-     access_log /var/log/nginx/k8s-access.log main;
-
-     upstream k8s-nodes {
-         server 192.168.246.11:30000;
-         server 192.168.246.12:30000;
-     }
-     server {
-         listen 30000;
-         proxy_pass k8s-nodes;
-     }
- }
- nginx -t
-
- systemctl start nginx
- systemctl enable nginx
- netstat -natp | grep nginx
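At this point each load balancer should already proxy port 30000 through to the NodePorts on the k8s nodes; a quick local test (my own check):
- curl http://192.168.246.13:30000       #answers with one of the two test pages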
yum install keepalived -y
-
- #edit /etc/keepalived/keepalived.conf on both machines:
- ! Configuration File for keepalived
-
- global_defs {
-     notification_email {                  #addresses to notify
-         acassen@firewall.loc
-         failover@firewall.loc
-         sysadmin@firewall.loc
-     }
-     notification_email_from Alexandre.Cassen@firewall.loc   #sender address
-     smtp_server 127.0.0.1                 #change this
-     smtp_connect_timeout 30
-     router_id NGINX_MASTER                #NGINX_MASTER on nginx01, NGINX_BACKUP on nginx02
- }
- vrrp_script check_nginx {                 #define a script to run periodically
-     script "/etc/nginx/check_nginx.sh"    #path of the script that checks whether nginx is alive
- }
-
- vrrp_instance VI_1 {
-     state MASTER                          #MASTER on nginx01, BACKUP on nginx02
-     interface ens33                       #NIC name, ens33
-     virtual_router_id 51                  #vrid, must match on both nodes
-     priority 100                          #100 on nginx01, 80 on nginx02
-     advert_int 1
-     authentication {
-         auth_type PASS
-         auth_pass 1111
-     }
-     virtual_ipaddress {                   #the virtual IP
-         192.168.246.100/24
-     }
-
-     track_script {                        #reference the vrrp_script defined above
-         check_nginx
-     }
- }
- #the check script, /etc/nginx/check_nginx.sh
- #!/bin/bash
- killall -0 nginx
- if [ $? -ne 0 ];then
-     systemctl stop keepalived
- fi
This script has a drawback: once nginx dies it stops keepalived outright, so keepalived has to be restarted by hand every time before the node can serve again. Method 2 below lets the VIP drift automatically instead.
-
- chmod +x /etc/nginx/check_nginx.sh
- #restart the services
- systemctl restart keepalived.service
- systemctl enable keepalived.service
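Whether the VIP landed on the master can be checked on nginx01 (a check of my own; ens33 as configured above):
- ip addr show ens33          #192.168.246.100/24 should appear as a secondary address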
After nginx on the master is restored and keepalived restarted, the virtual IP returns to the master.
Method 2: here is the keepalived configuration file
-
- ! Configuration File for keepalived
-
- global_defs {
-     notification_email {                  #addresses to notify
-         acassen@firewall.loc
-         failover@firewall.loc
-         sysadmin@firewall.loc
-     }
-     notification_email_from Alexandre.Cassen@firewall.loc   #sender address
-     smtp_server 127.0.0.1                 #change this
-     smtp_connect_timeout 30
-     router_id NGINX_MASTER                #NGINX_MASTER on nginx01, NGINX_BACKUP on nginx02
- }
-
- vrrp_script check_down {                  #periodic health check for nginx
-     script "/etc/nginx/check_nginx.sh"    #path of the script that checks whether nginx is alive
-     interval 1                            #run every second
-     weight -30                            #subtract 30 from the priority while the check fails
-     fall 3                                #3 consecutive failures mark nginx as down
-     rise 2                                #2 consecutive successes mark it as up again
-     timeout 2
- }
-
- vrrp_instance VI_1 {
-     state MASTER                          #MASTER on nginx01, BACKUP on nginx02
-     interface ens33                       #NIC name, ens33
-     virtual_router_id 51                  #vrid, must match on both nodes
-     priority 100                          #100 on nginx01, 80 on nginx02
-     advert_int 1
-     authentication {
-         auth_type PASS
-         auth_pass 1111
-     }
-     virtual_ipaddress {                   #the virtual IP
-         192.168.246.100/24
-     }
-
-     track_script {                        #track the health-check script; its weight drives the failover
-         check_down
-     }
- }
Write the script
- vim /etc/nginx/check_nginx.sh
- #!/bin/bash
- killall -0 nginx
-
- chmod +x /etc/nginx/check_nginx.sh
-
- #killall -0 nginx sends signal 0 to every process named nginx: it kills nothing and only tests that the process exists (and that we may signal it); the script's exit status is what keepalived's vrrp_script evaluates
-
With method 2, when nginx is stopped on the master the virtual IP drifts automatically, and when nginx is started again it drifts back to the master, with no need to restart keepalived; method 2 is therefore the recommended configuration.
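A minimal failover drill to confirm that behavior (my own sketch):
- #on nginx01 (master): stop nginx; after 3 failed checks the priority drops to 70 and the VIP moves
- systemctl stop nginx
- ip addr show ens33                      #the VIP is gone here
- #on nginx02 (backup): the VIP has arrived
- ip addr show ens33
- #start nginx on nginx01 again and the VIP drifts back, keepalived untouched
- systemctl start nginx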
iptables firewall server: set up dual NICs and configure SNAT and DNAT translation so an external client can reach the internal web service via 12.0.0.1.
| iptables firewall server | 192.168.246.7 |
On the firewall server, comment out the DNS entry in the NIC config files; name resolution goes through the gateway's IP.
systemctl restart network
- vim /etc/sysctl.conf
- net.ipv4.ip_forward = 1                  #let the firewall route between its two NICs
- sysctl -p
-
- #SNAT: outbound LAN traffic leaving ens36 gets the public source address 12.0.0.1
- [root@localhost network-scripts]#iptables -t nat -A POSTROUTING -o ens36 -s 192.168.246.0/24 -j SNAT --to 12.0.0.1
- #DNAT: traffic arriving on ens36 for 12.0.0.1:80 is forwarded to the VIP's NodePort, 192.168.246.100:30000
- [root@localhost network-scripts]#iptables -t nat -A PREROUTING -i ens36 -d 12.0.0.1 -p tcp --dport 80 -j DNAT --to 192.168.246.100:30000
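The rules can be inspected before testing from the client (my own check):
- iptables -t nat -nL --line-numbers     #one SNAT rule in POSTROUTING, one DNAT rule in PREROUTING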
| Client | 12.0.0.12 |
systemctl restart network
On the nginx+keepalived machines, change the NIC's gateway address to point at the iptables machine's IP
Adjust kernel parameters on the nginx+keepalived machines (they proxy rather than route, and must not send ICMP redirects)
- vim /etc/sysctl.conf
-
- net.ipv4.ip_forward = 0                       #no packet forwarding needed on the LB nodes
- net.ipv4.conf.all.send_redirects = 0          #suppress ICMP redirects
- net.ipv4.conf.default.send_redirects = 0
- net.ipv4.conf.ens33.send_redirects = 0
-
- sysctl -p
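With everything in place, the whole chain (DNAT on the firewall, the VIP on the load balancers, the NodePort on the k8s nodes) can be tested end to end from the client at 12.0.0.12; a minimal check:
- curl http://12.0.0.1                   #returns one of the two test pages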