Architecture diagram:
k8s environment planning:
podSubnet (Pod CIDR): 10.244.0.0/16
serviceSubnet (Service CIDR): 10.10.0.0/16
Lab environment planning:
OS: CentOS 7.9
Specs: 2 GB RAM / 4 vCPU / 40 GB disk
Network: NAT mode
Enable virtualization for the VMs
Cluster IPs and the common components to install
There are two common installation methods: kubeadm and binary installation.
This guide installs with kubeadm.
# Edit the /etc/sysconfig/network-scripts/ifcfg-ens33 file
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.48.180
NETMASK=255.255.255.0
GATEWAY=192.168.48.2
DNS1=192.168.48.2
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
# Restart the network service for the configuration change to take effect
service network restart
Set the hostname on each of the three machines:
hostnamectl set-hostname liaowenmaster1 && bash
hostnamectl set-hostname liaowenmaster2 && bash
hostnamectl set-hostname liaowennode1 && bash
Add the following three lines to /etc/hosts on every node:
192.168.48.180 liaowenmaster1
192.168.48.181 liaowenmaster2
192.168.48.182 liaowennode1
# 1. ssh-keygen
#    (press Enter through every prompt)
# 2. Copy the generated public key to the remote hosts
ssh-copy-id liaowenmaster1
ssh-copy-id liaowenmaster2
ssh-copy-id liaowennode1
# 3. Repeat steps 1 and 2 on all three machines
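A quick check that passwordless login works; any remote command will do, hostname is just an example:
ssh liaowenmaster2 hostname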
# Turn swap off temporarily
swapoff -a
Swap must be turned off on all three machines.
# Turn it off permanently: open /etc/fstab and comment out the swap mount line by adding # at the start.
Do this on all three machines.
Note: for performance reasons, kubeadm checks during initialization whether the swap partition is off; if it is not, initialization fails. You can leave swap on, but then you must pass --ignore-preflight-errors=Swap during installation.
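A minimal sketch of the permanent step; it comments out every /etc/fstab line that mentions swap, so review the file afterwards:
sed -i '/swap/ s/^/#/' /etc/fstab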
# Configure netfilter
modprobe br_netfilter
# Make the module load persist (runs on login via /etc/profile); packet forwarding itself is enabled via sysctl below
echo "modprobe br_netfilter" >> /etc/profile
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the kernel parameters at runtime
sysctl -p /etc/sysctl.d/k8s.conf
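To verify the settings took effect, query one of them; it should print the value 1:
sysctl net.ipv4.ip_forward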
After this completes, repeat the same operation on all three machines.
Disable firewalld:
systemctl stop firewalld ; systemctl disable firewalld
三台机器都需要操作
# Disable SELinux permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
The permanent change takes effect after a reboot.
# Check the status
getenforce
# Set permissive mode temporarily
setenforce 0
Install the rz/sz commands:
yum install lrzsz -y
Install scp:
yum install openssh-clients
Back up the stock repo files:
mkdir /root/repo.bak
cd /etc/yum.repos.d/
mv * /root/repo.bak/
Download the Aliyun repo file into the /etc/yum.repos.d/ directory.
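A typical command, assuming the standard Aliyun CentOS 7 mirror file (swap the URL for your preferred mirror):
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo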
Configure Aliyun's domestic Docker repo:
yum install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Create the config file /etc/yum.repos.d/kubernetes.repo:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
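One way to confirm the repo works and see which package versions it offers (the exact output depends on the mirror's current state):
yum list kubelet --showduplicates | tail -5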
Every node in the cluster needs this repo.
Install the ntpdate command:
yum install ntpdate -y
Sync with an internet time source:
ntpdate cn.pool.ntp.org
Turn the time sync into a cron job:
crontab -e
Then restart the crond service.
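A sample crontab entry, assuming an hourly sync is acceptable (the schedule is a choice, not a requirement), followed by the restart command:
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
systemctl restart crond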
Write the script /etc/sysconfig/modules/ipvs.modules:
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
    /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        /sbin/modprobe ${kernel_module}
    fi
done
Grant execute permission and run it:
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
Install the base packages; the same operation on all three machines:
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet
Install iptables:
yum install iptables-services -y
Stop and disable iptables:
service iptables stop && systemctl disable iptables
Flush the firewall rules:
iptables -F
Install the Docker service:
yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io -y && systemctl start docker && systemctl enable docker && systemctl status docker
All three machines need Docker installed.
Create the daemon startup configuration /etc/docker/daemon.json:
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://xbmqrz1y.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl status docker
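The native.cgroupdriver=systemd option aligns Docker's cgroup driver with the one kubelet will use; one way to confirm Docker picked it up:
docker info | grep -i cgroup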
yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6 && systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet
It is normal for kubelet not to show a running state at this point; it becomes healthy once the k8s control plane is up.
kubeadm: a tool used to initialize the k8s cluster
kubelet: installed on every cluster node; it starts and manages Pods
kubectl: used to deploy and manage applications, inspect all kinds of resources, and create, delete, and update components
All of the operations above are common to any kubeadm-based k8s installation.
Configure the EPEL repo on all three machines.
Install nginx and keepalived on the control-plane nodes:
yum install nginx keepalived -y
Edit the nginx.conf configuration:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
# Layer-4 load balancing for the apiserver components on the two masters
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.48.180:6443;   # Master1 APISERVER IP:PORT
        server 192.168.48.181:6443;   # Master2 APISERVER IP:PORT
    }
    server {
        listen 16443;   # nginx shares these hosts with the masters, so this port cannot be 6443 or it would conflict with the apiserver
        proxy_pass k8s-apiserver;
    }
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server {
        listen 80 default_server;
        server_name _;
        location / {
        }
    }
}
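Once the stream module is installed (the nginx-mod-stream step below), the file can be validated before starting:
nginx -t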
Configure keepalived on the primary (MASTER) node:
vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33          # change to the actual NIC name
    virtual_router_id 51     # VRRP router ID; must be unique per instance
    priority 100             # priority; set 90 on the backup server
    advert_int 1             # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.48.199/24
    }
    track_script {
        check_nginx
    }
}
# vrrp_script: the script that checks nginx health (its result decides whether to fail over)
# virtual_ipaddress: the virtual IP (VIP)
Write the script /etc/keepalived/check_nginx.sh:
#!/bin/bash
count=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
if [ "$count" -eq 0 ]; then
    systemctl stop keepalived
fi
chmod +x /etc/keepalived/check_nginx.sh
Start nginx:
systemctl daemon-reload && yum install nginx-mod-stream -y && systemctl start nginx
Start keepalived:
systemctl start keepalived && systemctl enable nginx keepalived && systemctl status keepalived
Note: if keepalived fails to start, check the permissions on keepalived.conf:
chmod 644 /etc/keepalived/keepalived.conf
Start them on both control-plane nodes.
On master1, check with ip addr; the VIP should be bound there.
Stop nginx on master1 and the VIP drifts to the master2 node:
service nginx stop
Run ip addr again: the VIP disappears from master1 and shows up on master2.
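A quick check on either node, assuming the VIP and interface from the keepalived config above:
ip addr show ens33 | grep 192.168.48.199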
Restart on master1:
systemctl daemon-reload && systemctl start nginx && systemctl start keepalived
Create the kubeadm configuration. On the master1 control-plane node:
cd /root/
Create the kubeadm-config.yaml configuration:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
controlPlaneEndpoint: 192.168.48.199:16443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.48.180
  - 192.168.48.181
  - 192.168.48.182
  - 192.168.48.199
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.10.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
Initialize the k8s cluster with kubeadm
# Upload the k8s offline image package to master1, master2, and node1, then load it manually
docker load -i k8simage-1-20-6.tar.gz   # all three servers need to load it
# Initialize
On master1:
kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=SystemVerification
When the command finishes, its output includes the join commands. To add a master to the cluster:
kubeadm join 192.168.48.199:16443 --token si8kqe.aqqbxj4hw6yk8tu0 \
    --discovery-token-ca-cert-hash sha256:d5fbefcf40c636f205c391f736de1bcf4a12944d15adb253662619019f1d1eb8 \
    --control-plane
To add a worker node to the cluster:
kubeadm join 192.168.48.199:16443 --token si8kqe.aqqbxj4hw6yk8tu0 \
    --discovery-token-ca-cert-hash sha256:d5fbefcf40c636f205c391f736de1bcf4a12944d15adb253662619019f1d1eb8
Configure the kubectl config file:
mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
View the cluster nodes (they report NotReady until a Pod network add-on is installed):
kubectl get nodes
On master2: create the certificate and kubeconfig directories
cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
Copy the certificates from master1 to master2. Run on master1.
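The exact scp commands are not listed here; a typical set for kubeadm's standard PKI layout (assumption: the hostnames configured earlier resolve and root SSH trust is in place):
scp /etc/kubernetes/pki/ca.crt liaowenmaster2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key liaowenmaster2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key liaowenmaster2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub liaowenmaster2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt liaowenmaster2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key liaowenmaster2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt liaowenmaster2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key liaowenmaster2:/etc/kubernetes/pki/etcd/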
Generate the join command. On master1:
kubeadm token create --print-join-command
The output looks like:
kubeadm join 192.168.48.199:16443 --token ewchm4.uxqm5jacqvyez5vl --discovery-token-ca-cert-hash sha256:d5fbefcf40c636f205c391f736de1bcf4a12944d15adb253662619019f1d1eb8
On master2, join the cluster by running the command master1 generated, appending the --control-plane parameter.
On node1, run the command master1 generated as-is:
kubeadm join 192.168.48.199:16443 --token ewchm4.uxqm5jacqvyez5vl --discovery-token-ca-cert-hash sha256:d5fbefcf40c636f205c391f736de1bcf4a12944d15adb253662619019f1d1eb8
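To confirm the joins succeeded, list the nodes again on master1; all three should appear:
kubectl get nodes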