Cluster management covers more than just provisioning a cluster; it also includes the ongoing operations that follow, such as scaling and upgrades.
In a cloud-native setting, a cluster should run according to the state we declare for it, which means cluster management should itself be built on top of declarative APIs.
If clusters are to be managed through a declarative API, how should the management model be abstracted?
Below, using the diagram from the official site as a guide, we walk through how to manage clusters on top of a declarative API.
References:
https://www.cnblogs.com/longtds/p/15998001.html
https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-a-package-manager
https://github.com/kubernetes-sigs/cluster-api/releases
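To make "declarative" concrete: with Cluster API, the desired cluster topology is expressed as Kubernetes objects (Cluster, KubeadmControlPlane, MachineDeployment, ...), and controllers reconcile the real infrastructure toward that spec. As a minimal sketch, assuming the capi-quickstart MachineDeployment created later in this walkthrough, scaling the workload cluster is just a change to the declared replica count:
# declare three workers; the MachineDeployment controller adds/removes machines to match
kubectl scale machinedeployment capi-quickstart-md-0 --replicas=3
# equivalently, patch spec.replicas and let reconciliation catch up
kubectl patch machinedeployment capi-quickstart-md-0 --type merge -p '{"spec":{"replicas":3}}'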
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.12.0/kind-linux-amd64
chmod +x ./kind
mv ./kind /some-dir-in-your-PATH/kind
wget https://github.91chi.fun//https://github.com//kubernetes-sigs/cluster-api/releases/download/v1.1.3/clusterctl-linux-amd64
chmod +x /root/clusterctl-linux-amd64
cp /root/clusterctl-linux-amd64 /usr/bin/clusterctl
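Before moving on, a quick check that both binaries are on the PATH and runnable:
kind version
clusterctl version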
2. Install the kind cluster
The deployment script is as follows:
# cat create_cluster.sh
env KIND_EXPERIMENTAL_DOCKER_NETWORK=bridge kind create cluster --config ./kind.conf
# cat kind.conf
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
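The /var/run/docker.sock mount is what later allows the CAPD (Cluster API Provider Docker) controllers running inside the kind cluster to create workload-cluster "machines" as sibling Docker containers on the host. Once the cluster is up, the mount can be verified from inside the node (assuming the default node container name kind-control-plane):
docker exec kind-control-plane ls -l /var/run/docker.sock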
Note: image pulls can be very slow at this step, so you may need to configure a Docker registry mirror.
cat /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://0bo4n5zl.mirror.aliyuncs.com",
    "https://hub-mirror.c.163.com/",
    "https://reg-mirror.qiniu.com"
  ]
}
systemctl restart docker
Then run the installation again.
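To confirm the mirror configuration took effect after the restart, the configured mirrors should show up in the daemon info:
docker info | grep -A 3 'Registry Mirrors'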
Check the cluster information:
kubectl cluster-info
kubectl get node -o wide
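Note that the kind node itself is just a Docker container on the host; this is the same mechanism CAPD will later use for the workload cluster's nodes. A quick way to see it:
docker ps --format 'table {{.Names}}\t{{.Image}}'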
3. Install the management cluster
# use Docker as the infrastructure provider
clusterctl init --infrastructure docker
The controller images need to be replaced here, since the default images can be hard to pull; repoint each deployment to a mirrored image:
kubectl get deploy -A -o wide
kubectl set image deploy/capd-controller-manager -n capd-system manager=docker.io/cncamp/capd-manager:v0.4.2
kubectl set image deploy/capi-kubeadm-bootstrap-controller-manager -n capi-kubeadm-bootstrap-system manager=docker.io/cncamp/kubeadm-bootstrap-controller:v0.4.2
kubectl set image deploy/capi-kubeadm-control-plane-controller-manager -n capi-kubeadm-control-plane-system manager=docker.io/cncamp/kubeadm-control-plane-controller:v0.4.2
kubectl set image deploy/capi-controller-manager -n capi-system manager=docker.io/cncamp/cluster-api-controller:v0.4.2
Wait until all the pods are in the Running state.
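Rather than polling by hand, kubectl wait can block until the controller deployments patched above become available, for example:
kubectl wait --for=condition=Available --timeout=300s -n capi-system deployment/capi-controller-manager
kubectl wait --for=condition=Available --timeout=300s -n capd-system deployment/capd-controller-manager
kubectl wait --for=condition=Available --timeout=300s -n capi-kubeadm-bootstrap-system deployment/capi-kubeadm-bootstrap-controller-manager
kubectl wait --for=condition=Available --timeout=300s -n capi-kubeadm-control-plane-system deployment/capi-kubeadm-control-plane-controller-manager
kubectl get pods -A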
4. Install the workload cluster
Create the cluster configuration file:
# cat generate_workload_cluster.sh
clusterctl generate cluster capi-quickstart --flavor development \
--kubernetes-version v1.22.2 \
--control-plane-machine-count=1 \
--worker-machine-count=1 \
> capi-quickstart.yaml
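Optionally, before generating, clusterctl can list the variables the development flavor expects (same flavor and Kubernetes version as above), which helps when defaults such as the CIDRs need overriding:
clusterctl generate cluster capi-quickstart --flavor development \
  --kubernetes-version v1.22.2 --list-variables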
# cat capi-quickstart.yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
      - 10.128.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: capi-quickstart-control-plane
    namespace: default
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: capi-quickstart
    namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerCluster
metadata:
  name: capi-quickstart
  namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: capi-quickstart-control-plane
  namespace: default
spec:
  template:
    spec:
      extraMounts:
      - containerPath: /var/run/docker.sock
        hostPath: /var/run/docker.sock
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: capi-quickstart-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        certSANs:
        - localhost
        - 127.0.0.1
      controllerManager:
        extraArgs:
          enable-hostpath-provisioner: "true"
    initConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cgroup-driver: cgroupfs
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
    joinConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cgroup-driver: cgroupfs
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate
      name: capi-quickstart-control-plane
      namespace: default
  replicas: 1
  version: v1.22.2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  template:
    spec: {}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cgroup-driver: cgroupfs
            eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  clusterName: capi-quickstart
  replicas: 1
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: capi-quickstart-md-0
          namespace: default
      clusterName: capi-quickstart
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        name: capi-quickstart-md-0
        namespace: default
      version: v1.22.2
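Before applying the manifest, it is worth confirming that the CRDs installed by clusterctl init are registered on the management cluster:
kubectl api-resources --api-group=cluster.x-k8s.io
kubectl api-resources --api-group=infrastructure.cluster.x-k8s.io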
Install:
./generate_workload_cluster.sh
kubectl apply -f capi-quickstart.yaml
The install reported an error on my machine; I have not resolved it yet.
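If you hit a similar failure, the cluster objects themselves are the first place to look, since all state is surfaced declaratively (the commands below assume the capi-quickstart names from the manifest above):
kubectl get clusters,machines,kubeadmcontrolplanes -A
clusterctl describe cluster capi-quickstart --show-conditions all
kubectl logs -n capd-system deployment/capd-controller-manager
One thing worth checking: clusterctl v1.1.3 installs v1beta1 (v1.1.x) providers, while the images patched in above are v0.4.2 (v1alpha4), so the controller images may simply not match the installed CRDs and manifests.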
To be continued...