• Kubernetes cluster environment setup


    1. Kubernetes cluster environment setup

    1.1 Prerequisites

    There are currently two main ways to deploy a Kubernetes cluster for production:

    kubeadm

    kubeadm is a K8s deployment tool that provides the kubeadm init and kubeadm join commands for quickly deploying a Kubernetes cluster.

    Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

    Binary packages

    Download the release binaries from GitHub and deploy each component by hand to assemble a Kubernetes cluster.

    kubeadm lowers the barrier to entry but hides many details, which makes problems hard to troubleshoot. If you want something easier to control, deploying from binary packages is recommended: it is more work, but along the way you learn a lot about how the pieces fit together, which also helps with later maintenance.

    1.2 Overview of the kubeadm deployment method

    kubeadm is a tool from the official community for quickly deploying a Kubernetes cluster. It can stand up a cluster with two commands:

    • Create a Master node: kubeadm init
    • Join Node machines to the cluster: kubeadm join

    1.3 Installation requirements

    Before starting, the machines for the Kubernetes cluster must meet the following conditions:

    • One or more machines running CentOS 7.x x86_64
    • Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more
    • Full network connectivity between all machines in the cluster
    • Internet access, in order to pull images
    • Swap disabled

    1.4 Goals

    • Install Docker and kubeadm on all nodes
    • Deploy the Kubernetes Master
    • Deploy a container network plugin
    • Deploy Kubernetes Nodes and join them to the cluster
    • Deploy the Dashboard web UI for a visual view of Kubernetes resources

    1.5 Environment

    Role      IP address       Components
    master01  192.167.0.128    docker, kubectl, kubeadm, kubelet
    node01    192.167.0.129    docker, kubectl, kubeadm, kubelet
    node02    192.167.0.130    docker, kubectl, kubeadm, kubelet

    2. Installing with kubeadm: method 1

    2.1 Install Docker

    This article installs Docker on 64-bit CentOS 7.9. CentOS 7.x or later is recommended: on CentOS 6.x a lot of extra environment setup is needed before installing, and many Docker patches are no longer supported there.

    For other systems, refer to the documentation:

    https://docs.docker.com/engine/install/centos/

    2.1.1 Remove old Docker packages

    sudo yum remove docker*
    

    2.1.2 Configure the yum repository

    sudo yum install -y yum-utils
    sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    

    2.1.3 Install Docker

    sudo yum install -y docker-ce docker-ce-cli containerd.io

    #or install a specific docker version
    yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7  containerd.io-1.4.6

    2.1.4 Start Docker

    systemctl enable docker --now
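    A quick sanity check (not part of the original walkthrough) to confirm the daemon is running:

    systemctl status docker
    docker version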

    2.1.5 Configure a registry mirror

    Log in to Aliyun and open the Container Registry (容器镜像服务) console.

    Click the image accelerator (镜像加速器) entry in the left-hand menu; the accelerator address and configuration are shown on the right.

    The commands are as follows:

    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://6c6b34mj.mirror.aliyuncs.com"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    
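    To confirm the mirror is active (an optional check):

    docker info | grep -A1 "Registry Mirrors"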

    2.2 Install Kubernetes

    2.2.1 Prepare the base environment

    Run all of the following on every machine:

    #Set each machine's own hostname
    hostnamectl set-hostname xxxx
    # e.g.: hostnamectl set-hostname k8s-master / k8s-node1 / k8s-node2 ...


    # Set SELinux to permissive mode (effectively disabling it)
    sudo setenforce 0
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

    #Disable swap
    swapoff -a
    sed -ri 's/.*swap.*/#&/' /etc/fstab

    #Allow iptables to see bridged traffic
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF

    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sudo sysctl --system
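    A quick verification that the module and sysctls took effect (not in the original text):

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables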

    Install kubelet, kubeadm, and kubectl

    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    exclude=kubelet kubeadm kubectl
    EOF


    sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

    sudo systemctl enable --now kubelet

    kubelet will now restart every few seconds, crash-looping while it waits for instructions from kubeadm.
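    You can observe the crash loop (optional):

    systemctl status kubelet
    # "activating (auto-restart)" is expected here until kubeadm init/join runs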

    2.2.2 Bootstrap the cluster with kubeadm

    Download the images each machine needs:

    sudo tee ./images.sh <<-'EOF'
    #!/bin/bash
    images=(
    kube-apiserver:v1.20.9
    kube-proxy:v1.20.9
    kube-controller-manager:v1.20.9
    kube-scheduler:v1.20.9
    coredns:1.7.0
    etcd:3.4.13-0
    pause:3.2
    )
    for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
    done
    EOF
       
    chmod +x ./images.sh && ./images.sh
    
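    To confirm all seven images arrived (an optional check):

    docker images | grep lfy_k8s_images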

    Initialize the master node

    #Add a hosts entry for the master on all machines; change the IP to your own
    echo "192.167.0.128  cluster-endpoint" >> /etc/hosts

    #Initialize the master node
    kubeadm init \
    --apiserver-advertise-address=192.167.0.128 \
    --control-plane-endpoint=cluster-endpoint \
    --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
    --kubernetes-version v1.20.9 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=192.168.0.0/16

    #None of the network ranges (node network, service-cidr, pod-network-cidr) may overlap

    2.2.3 Continue as prompted

    When the master initializes successfully, it prints output like the following:

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join cluster-endpoint:6443 --token t689lm.bk3zu0vm23nengzq \
        --discovery-token-ca-cert-hash sha256:48c75ac33feb48bc5241138417ad14dcde09977d488e5004fc8584ba57df7d17 \
        --control-plane 
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join cluster-endpoint:6443 --token t689lm.bk3zu0vm23nengzq \
        --discovery-token-ca-cert-hash sha256:48c75ac33feb48bc5241138417ad14dcde09977d488e5004fc8584ba57df7d17
    

    Configure kubectl on the master:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    Install the network plugin

    See the system requirements for the exact version compatibility matrix.

    Calico website

    #The Calico version must match the Kubernetes version; v3.20 is used here
    curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
    kubectl apply -f calico.yaml
    
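    To wait for the network plugin to come up, you can watch the kube-system pods (an optional check; Ctrl-C to stop):

    kubectl get pods -n kube-system -w
    # the calico-node and calico-kube-controllers pods should reach Running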

    2.2.4 Join the worker nodes

    kubeadm join cluster-endpoint:6443 --token t689lm.bk3zu0vm23nengzq \
        --discovery-token-ca-cert-hash sha256:48c75ac33feb48bc5241138417ad14dcde09977d488e5004fc8584ba57df7d17
    

    If the token has expired, generate a new join command:

    kubeadm token create --print-join-command

    For a high-availability deployment, this is also the step where additional master nodes are joined, using the --control-plane join command printed above.

    2.2.5 Verify the cluster

    • Check the cluster node status (sample output below)

      • kubectl get nodes
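    Once the network plugin is up, all nodes should report Ready. A sketch of what to expect (names and ages will differ):

    NAME         STATUS   ROLES                  AGE   VERSION
    k8s-master   Ready    control-plane,master   30m   v1.20.9
    k8s-node1    Ready    <none>                 25m   v1.20.9
    k8s-node2    Ready    <none>                 25m   v1.20.9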

    2.2.6 Deploy the Dashboard

    2.2.6.1 Deploy

    The Dashboard is the official Kubernetes web UI:
    https://github.com/kubernetes/dashboard

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

    If the URL above is unreachable, create a file xxx.yaml with vi, paste the manifest below into it, and install the Dashboard with kubectl apply -f xxx.yaml.

    # Copyright 2017 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: v1
    kind: Namespace
    metadata:
      name: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      ports:
        - port: 443
          targetPort: 8443
      selector:
        k8s-app: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-certs
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-csrf
      namespace: kubernetes-dashboard
    type: Opaque
    data:
      csrf: ""
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-key-holder
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    kind: ConfigMap
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-settings
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    rules:
      # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
      - apiGroups: [""]
        resources: ["secrets"]
        resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
        verbs: ["get", "update", "delete"]
        # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
      - apiGroups: [""]
        resources: ["configmaps"]
        resourceNames: ["kubernetes-dashboard-settings"]
        verbs: ["get", "update"]
        # Allow Dashboard to get metrics.
      - apiGroups: [""]
        resources: ["services"]
        resourceNames: ["heapster", "dashboard-metrics-scraper"]
        verbs: ["proxy"]
      - apiGroups: [""]
        resources: ["services/proxy"]
        resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
        verbs: ["get"]
    
    ---
    
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
    rules:
      # Allow Metrics Scraper to get metrics from the Metrics server
      - apiGroups: ["metrics.k8s.io"]
        resources: ["pods", "nodes"]
        verbs: ["get", "list", "watch"]
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: kubernetes-dashboard
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
        spec:
          containers:
            - name: kubernetes-dashboard
              image: kubernetesui/dashboard:v2.3.1
              imagePullPolicy: Always
              ports:
                - containerPort: 8443
                  protocol: TCP
              args:
                - --auto-generate-certificates
                - --namespace=kubernetes-dashboard
                # Uncomment the following line to manually specify Kubernetes API server Host
                # If not specified, Dashboard will attempt to auto discover the API server and connect
                # to it. Uncomment only if the default does not work.
                # - --apiserver-host=http://my-address:port
              volumeMounts:
                - name: kubernetes-dashboard-certs
                  mountPath: /certs
                  # Create on-disk volume to store exec logs
                - mountPath: /tmp
                  name: tmp-volume
              livenessProbe:
                httpGet:
                  scheme: HTTPS
                  path: /
                  port: 8443
                initialDelaySeconds: 30
                timeoutSeconds: 30
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
          volumes:
            - name: kubernetes-dashboard-certs
              secret:
                secretName: kubernetes-dashboard-certs
            - name: tmp-volume
              emptyDir: {}
          serviceAccountName: kubernetes-dashboard
          nodeSelector:
            "kubernetes.io/os": linux
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      ports:
        - port: 8000
          targetPort: 8000
      selector:
        k8s-app: dashboard-metrics-scraper
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: dashboard-metrics-scraper
      template:
        metadata:
          labels:
            k8s-app: dashboard-metrics-scraper
          annotations:
            seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
        spec:
          containers:
            - name: dashboard-metrics-scraper
              image: kubernetesui/metrics-scraper:v1.0.6
              ports:
                - containerPort: 8000
                  protocol: TCP
              livenessProbe:
                httpGet:
                  scheme: HTTP
                  path: /
                  port: 8000
                initialDelaySeconds: 30
                timeoutSeconds: 30
              volumeMounts:
              - mountPath: /tmp
                name: tmp-volume
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
          serviceAccountName: kubernetes-dashboard
          nodeSelector:
            "kubernetes.io/os": linux
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
          volumes:
            - name: tmp-volume
              emptyDir: {}
    
    2.2.6.2 Expose an access port
    kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
    

    Change type: ClusterIP to type: NodePort.

    kubectl get svc -A |grep kubernetes-dashboard
    ## find the assigned NodePort and open it in your cloud security group
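    The output should look roughly like this (the cluster IP and the NodePort, 31859 here, will differ per cluster):

    kubernetes-dashboard   kubernetes-dashboard   NodePort   10.96.30.35   <none>   443:31859/TCP   5m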

    Access: https://<any cluster node IP>:<NodePort>

    For example, visiting https://192.167.0.129:31859/#/login brings up the Dashboard login page.

    2.2.6.3 Create an access account

    #create a new yaml file: vi dash.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard
    

    Apply it:

    kubectl apply -f dash.yaml

    2.2.6.4 Get the login token
    #fetch the access token
    kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
    
    eyJhbGciOiJSUzI1NiIsImtpZCI6IkhjSHBRYWZhbktYeUpYajNlR3BkOERaTTJaOXZPeFhlVmp2b0djNDB4eGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXN4MjV4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3NGFkYWRiZC1lM2NkLTQ1OTItYjdmZi0yODY2MmU5ODQzNTAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.TU2QazuZYVyG3Bi05Bkz7GNPv1E41_h40TTyJSns6aYxBrMQMkqkKeaifX2wJUdFfkqPRJmbxAgssjP4avALbzajzwxmPSPa-Epo888VzJ2lSa6BJSis5OL2o7S5vpQViN0YnJ6IY4oy34yOCg19fFtyV1FruaBXECIb5TCP9LiU4R-VoRitwnGE1FffahUnOj3s5HQ-glWML2WEbiJBuMDj1F4AixYXmHd8QHHHdAZtV7bKJMdIWW2XpguTI_ysWJCKOUtJ6YQuNdHwoQcv0J2dV3dYMgDhEhoZc46nRFH-GPZcSWGGhwz87PxsnTdj9-9E-uRVfFSyVrimrp7Kgg
    
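    Note: on Kubernetes 1.24 and newer, ServiceAccount token Secrets are no longer created automatically, so the command above would return nothing; there you would mint a token instead (not needed for the v1.20.9 cluster built here):

    kubectl -n kubernetes-dashboard create token admin-user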

    Copy the token and use it to log in.

    3. Installing with kubeadm: method 2

    3.1 Prepare the base environment

    Run all of the following on every machine:

    #Set each machine's own hostname
    hostnamectl set-hostname xxxx
    # e.g.: hostnamectl set-hostname k8s-master / k8s-node1 / k8s-node2 ...


    # Set SELinux to permissive mode (effectively disabling it)
    sudo setenforce 0
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

    #Disable swap
    swapoff -a
    sed -ri 's/.*swap.*/#&/' /etc/fstab

    #Allow iptables to see bridged traffic
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF

    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sudo sysctl --system

    Install kubelet, kubeadm, and kubectl

    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    exclude=kubelet kubeadm kubectl
    EOF
    
    
    sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
    
    sudo systemctl enable --now kubelet
    

    kubelet will now restart every few seconds, crash-looping while it waits for instructions from kubeadm.

    3.2 Bootstrap the cluster with kubeadm

    Download the images each machine needs:

    sudo tee ./images.sh <<-'EOF'
    #!/bin/bash
    images=(
    kube-apiserver:v1.20.9
    kube-proxy:v1.20.9
    kube-controller-manager:v1.20.9
    kube-scheduler:v1.20.9
    coredns:1.7.0
    etcd:3.4.13-0
    pause:3.2
    )
    for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
    done
    EOF
       
    chmod +x ./images.sh && ./images.sh
    

    Initialize the master node

    #Add a hosts entry for the master on all machines; change the IP to your own
    echo "192.167.0.128  cluster-endpoint" >> /etc/hosts

    #Initialize the master node
    kubeadm init \
    --apiserver-advertise-address=192.167.0.128 \
    --control-plane-endpoint=cluster-endpoint \
    --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
    --kubernetes-version v1.20.9 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=192.168.0.0/16

    #None of the network ranges (node network, service-cidr, pod-network-cidr) may overlap

    3.3 Continue as prompted

    When the master initializes successfully, it prints output like the following:

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join cluster-endpoint:6443 --token t689lm.bk3zu0vm23nengzq \
        --discovery-token-ca-cert-hash sha256:48c75ac33feb48bc5241138417ad14dcde09977d488e5004fc8584ba57df7d17 \
        --control-plane 
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join cluster-endpoint:6443 --token t689lm.bk3zu0vm23nengzq \
        --discovery-token-ca-cert-hash sha256:48c75ac33feb48bc5241138417ad14dcde09977d488e5004fc8584ba57df7d17
    

    Configure kubectl on the master:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    Install the network plugin

    See the system requirements for the exact version compatibility matrix.

    Calico website

    #The Calico version must match the Kubernetes version; v3.20 is used here
    curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
    kubectl apply -f calico.yaml
    

    3.4 Join the worker nodes

    kubeadm join cluster-endpoint:6443 --token t689lm.bk3zu0vm23nengzq \
        --discovery-token-ca-cert-hash sha256:48c75ac33feb48bc5241138417ad14dcde09977d488e5004fc8584ba57df7d17
    

    If the token has expired, generate a new join command:

    kubeadm token create --print-join-command

    For a high-availability deployment, this is also the step where additional master nodes are joined, using the --control-plane join command printed above.

    3.5 Verify the cluster

    • Check the cluster node status

      • kubectl get nodes

    3.6 Deploy the Dashboard

    3.6.1 Deploy

    The Dashboard is the official Kubernetes web UI:
    https://github.com/kubernetes/dashboard

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

    If the manifest cannot be pulled, create a local yaml file with vi, paste in the Dashboard manifest shown in section 2.2.6.1 above (the content is identical), and run kubectl apply -f xxx.yaml.

    3.6.2 Expose an access port

    kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

    Change type: ClusterIP to type: NodePort.

    kubectl get svc -A |grep kubernetes-dashboard
    ## find the assigned NodePort and open it in your cloud security group

    Access: https://<any cluster node IP>:<NodePort>

    For example, visiting https://192.167.0.129:31859/#/login brings up the Dashboard login page.

    3.6.3 Create an access account

    #create a new yaml file: vi dash.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard
    

    Apply it:

    kubectl apply -f dash.yaml

    3.6.4 Get the login token

    #fetch the access token
    kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
    
    eyJhbGciOiJSUzI1NiIsImtpZCI6IkhjSHBRYWZhbktYeUpYajNlR3BkOERaTTJaOXZPeFhlVmp2b0djNDB4eGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXN4MjV4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3NGFkYWRiZC1lM2NkLTQ1OTItYjdmZi0yODY2MmU5ODQzNTAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.TU2QazuZYVyG3Bi05Bkz7GNPv1E41_h40TTyJSns6aYxBrMQMkqkKeaifX2wJUdFfkqPRJmbxAgssjP4avALbzajzwxmPSPa-Epo888VzJ2lSa6BJSis5OL2o7S5vpQViN0YnJ6IY4oy34yOCg19fFtyV1FruaBXECIb5TCP9LiU4R-VoRitwnGE1FffahUnOj3s5HQ-glWML2WEbiJBuMDj1F4AixYXmHd8QHHHdAZtV7bKJMdIWW2XpguTI_ysWJCKOUtJ6YQuNdHwoQcv0J2dV3dYMgDhEhoZc46nRFH-GPZcSWGGhwz87PxsnTdj9-9E-uRVfFSyVrimrp7Kgg
    

    Copy the token and use it to log in.

    3.7 Environment initialization

    3.7.1 Check the OS version

    # This installation method requires CentOS 7.5 or later
    [root@master ~]# cat /etc/redhat-release
    CentOS Linux 7.5.1804 (Core)
    

    3.7.2 Hostname resolution

    To make it easy for the cluster nodes to reach one another, configure hostname resolution here. In production an internal DNS server is recommended.

    # Hostname resolution: edit /etc/hosts on all three servers and add the following
    192.168.90.100 master
    192.168.90.106 node1
    192.168.90.107 node2
    

    3.7.3 Time synchronization

    Kubernetes requires the clocks of all nodes in the cluster to be exactly synchronized. Here the chronyd service syncs time from the network.

    In production, an internal time-sync server is recommended.

    # Start and enable the chronyd service
    [root@master ~]# systemctl start chronyd
    [root@master ~]# systemctl enable chronyd
    [root@master ~]# date
    

    3.7.4 Disable the iptables and firewalld services

    Kubernetes and Docker generate large numbers of iptables rules at runtime. To keep the system's own rules from getting mixed in with them, disable both services outright:

    # 1. Stop and disable the firewalld service
    [root@master ~]# systemctl stop firewalld
    [root@master ~]# systemctl disable firewalld
    # 2. Stop and disable the iptables service
    [root@master ~]# systemctl stop iptables
    [root@master ~]# systemctl disable iptables
    

    3.7.5 Disable SELinux

    SELinux is a Linux security service; leaving it enabled leads to all kinds of obscure problems during cluster installation.

    # Edit /etc/selinux/config and set SELINUX to disabled
    # Note: reboot Linux after making this change
    SELINUX=disabled
    
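    After the reboot, verify the change (optional check):

    getenforce
    # should print Disabled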

    3.7.6 Disable the swap partition

    Swap is virtual memory: when physical memory runs out, disk space is used as memory instead. Having swap enabled hurts system performance badly, so Kubernetes requires it to be disabled on every node. If swap really cannot be turned off, that must be declared explicitly via parameters during cluster installation.

    # Edit /etc/fstab and comment out the swap line
    # Note: reboot Linux after making this change
    vim /etc/fstab
    # comment out the swap entry:
    # /dev/mapper/centos-swap swap
    
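    After the reboot, confirm swap is off (optional check):

    free -h
    # the Swap line should read 0B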

    3.7.7 Tune the Linux kernel parameters

    # Tune the kernel parameters: add bridge filtering and IP forwarding
    # Edit /etc/sysctl.d/kubernetes.conf and add the following
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1

    # Reload the configuration
    [root@master ~]# sysctl -p
    # Load the bridge-filter module
    [root@master ~]# modprobe br_netfilter
    # Check that the module loaded
    [root@master ~]# lsmod | grep br_netfilter
    

    3.7.8 Enable ipvs

    Kubernetes Services have two proxy modes, one based on iptables and one based on ipvs. Comparing the two, ipvs performs noticeably better, but using it requires loading the ipvs kernel modules manually.

    # 1. Install ipset and ipvsadm
    [root@master ~]# yum install ipset ipvsadm -y
    # 2. Write the modules to load into a script file
    [root@master ~]# cat <<EOF > /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF
    # 3. Make the script executable
    [root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
    # 4. Run the script
    [root@master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
    # 5. Check that the modules loaded
    [root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    

    3.7.9 Install Docker

    # 1. Switch to the Aliyun mirror repo
    [root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

    # 2. List the docker versions the repo provides
    [root@master ~]# yum list docker-ce --showduplicates

    # 3. Install a specific docker-ce version
    # --setopt=obsoletes=0 is required, otherwise yum installs the latest version
    [root@master ~]# yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y

    # 4. Add a config file
    # Docker defaults to the cgroupfs cgroup driver, while Kubernetes recommends systemd instead
    [root@master ~]# mkdir /etc/docker
    [root@master ~]# cat <<EOF > /etc/docker/daemon.json
    {
    	"exec-opts": ["native.cgroupdriver=systemd"],
    	"registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
    }
    EOF

    # 5. Start docker
    [root@master ~]# systemctl restart docker
    [root@master ~]# systemctl enable docker
    

    3.7.10 Install the Kubernetes components

    # 1. The Kubernetes images are hosted abroad and download slowly, so switch to a domestic mirror
    # 2. Edit /etc/yum.repos.d/kubernetes.repo and add the following
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
    			http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

    # 3. Install kubeadm, kubelet and kubectl
    [root@master ~]# yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y

    # 4. Configure the kubelet cgroup driver
    # Edit /etc/sysconfig/kubelet and add the following
    KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
    KUBE_PROXY_MODE="ipvs"

    # 5. Enable kubelet at boot
    [root@master ~]# systemctl enable kubelet
    

    3.7.11 Prepare the cluster images

    # Before installing the cluster, the required images must be available; list them with
    [root@master ~]# kubeadm config images list

    # Download the images
    # They live in a registry that is unreachable here for network reasons; the loop below is a workaround
    images=(
    	kube-apiserver:v1.17.4
    	kube-controller-manager:v1.17.4
    	kube-scheduler:v1.17.4
    	kube-proxy:v1.17.4
    	pause:3.1
    	etcd:3.4.3-0
    	coredns:1.6.5
    )

    for imageName in ${images[@]};do
    	docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    	docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    	docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName 
    done
    
    
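    To confirm the images were pulled and retagged (an optional check):

    docker images | grep k8s.gcr.io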

    3.7.12 Initialize the cluster

    The following only needs to be run on the master node.

    # Create the cluster
    [root@master ~]# kubeadm init \
    	--apiserver-advertise-address=192.168.90.100 \
    	--image-repository registry.aliyuncs.com/google_containers \
    	--kubernetes-version=v1.17.4 \
    	--service-cidr=10.96.0.0/12 \
    	--pod-network-cidr=10.244.0.0/16
    # Create the kubectl config files
    [root@master ~]# mkdir -p $HOME/.kube
    [root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    The following only needs to be run on the worker nodes.

    kubeadm join 192.168.0.100:6443 --token awk15p.t6bamck54w69u4s8 \
        --discovery-token-ca-cert-hash sha256:a94fa09562466d32d29523ab6cff122186f1127599fa4dcd5fa0152694f17117 

    Check the node status on the master:

    [root@master ~]# kubectl get nodes
    NAME     STATUS     ROLES    AGE   VERSION
    master   NotReady   master   6m    v1.17.4
    node1    NotReady   <none>   22s   v1.17.4
    node2    NotReady   <none>   19s   v1.17.4
    

    3.7.13 Install the network plugin (master node only)

    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    The external site can be hard to reach; if the download fails, use the copy below instead. The file must be named kube-flannel.yml and placed at /root/kube-flannel.yml. Contents:

    https://github.com/flannel-io/flannel/tree/master/Documentation/kube-flannel.yml

    You can also pull a specific flannel version manually:

    docker pull quay.io/coreos/flannel:v0.14.0   # pull the flannel image, on all three hosts
    docker images                                # confirm the image is present locally

    If the cluster status stays NotReady, look for the cause with:

    journalctl -f -u kubelet.service

    If the cause is "cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d":

    mkdir -p /etc/cni/net.d             # create the directory for the flannel config file
    vim /etc/cni/net.d/10-flannel.conf  # write the config file

    {
     "name":"cbr0",
     "cniVersion":"0.3.1",
     "type":"flannel",
     "delegate":{
        "hairpinMode":true,
        "isDefaultGateway":true
      }
    }
    

    3.7.14 Reset the cluster with kubeadm reset

    #run on every node except the master
    kubeadm reset
    systemctl stop kubelet
    systemctl stop docker
    rm -rf /var/lib/cni/
    rm -rf /var/lib/kubelet/*
    rm -rf /etc/cni/
    ifconfig cni0 down
    ifconfig flannel.1 down
    ifconfig docker0 down
    ip link delete cni0
    ip link delete flannel.1
    ##restart kubelet
    systemctl restart kubelet
    ##restart docker
    systemctl restart docker
    

    3.7.15 Restart kubelet and docker

    # restart kubelet
    systemctl restart kubelet
    # restart docker
    systemctl restart docker
    

    Start flannel from its config file:

    kubectl apply -f kube-flannel.yml

    Wait for it to finish installing; the cluster status then shows Ready.


    3.7.16 kubeadm commands

    # generate a new token and print the join command
    [root@master ~]# kubeadm token create --print-join-command
    

    3.8 Test the cluster

    3.8.1 Create an nginx deployment

    kubectl create deployment nginx  --image=nginx:1.14-alpine
    

    3.8.2 Expose a port

    kubectl expose deploy nginx  --port=80 --target-port=80  --type=NodePort
    

    3.8.3 Check the service

    kubectl get pod,svc
    

    3.8.4 Check the pods and test in a browser
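    A quick way to inspect the pods and hit the service (a sketch; your pod name and NodePort will differ):

    kubectl get pod -o wide
    # take the NodePort from `kubectl get svc`, then test from any node:
    curl http://<node-ip>:<nodeport>
    # or open http://<node-ip>:<nodeport> in a browser; the nginx welcome page should appear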

  • Original article: https://blog.csdn.net/Learning_xzj/article/details/126391090