• 猿创征文 | Cloud Native | The Complete minikube Deployment and Installation Handbook


    Preface:

    To learn a new platform you first need a working instance of that platform, and deploying kubernetes undeniably raises the bar for getting started. Whether you install from binaries or with kubeadm, a fair amount of operations expertise is required, and the hardware demands are substantial: a proper deployment needs at least three servers.

    Tools like kind and minikube exist precisely to spin up a learning environment quickly. They are simple and easy to use, a single server is enough (a standalone node does need more memory — at least 8G is recommended), the level of automation is high (almost everything is configured for you), and they support several common virtualization engines, such as docker, containerd and KVM. The downside is that there is essentially no room for customization.

    Virtualization engines supported by minikube:

     Most of the material in this tutorial is taken from the official docs: Welcome! | minikube




    Related installation files (after extracting conntrack.tar.gz, just run rpm -ivh * — these are the dependencies; minikube-images.tar.gz is the image bundle — extract it and import the images into docker; place the three executables into /root/.minikube/cache/linux/amd64/v1.18.8/):

    Link: https://pan.baidu.com/s/14-r59VfpZRpfiVGj4IadxA?pwd=k8ss
    Extraction code: k8ss
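    The preparation steps above can be sketched as a few shell functions. The archive layout (a conntrack/ directory of .rpm files) is an assumption — adjust the paths to match the actual bundle:

```shell
#!/bin/sh
# Sketch of the offline preparation described above.

install_deps() {             # dependencies from conntrack.tar.gz
  tar zxf conntrack.tar.gz
  rpm -ivh conntrack/*.rpm   # directory name is an assumption
}

import_images() {            # image bundle -> local docker
  tar zxf minikube-images.tar.gz
  for img in minikube-images/*; do
    docker load < "$img"
  done
}

stage_binaries() {           # kubeadm/kubectl/kubelet -> minikube's cache dir
  dir=${1:-/root/.minikube/cache/linux/amd64/v1.18.8}
  mkdir -p "$dir"
  cp kubeadm kubectl kubelet "$dir"/
  chmod a+x "$dir"/kube*
}

# On the real host: install_deps && import_images && stage_binaries
```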

     

    1.

    Getting started with minikube

    Prerequisites

    At least 2 CPUs, 2G of memory, 20G of free disk space, internet access, and one virtualization engine: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware Fusion/Workstation. Docker is the easiest of these to install, so this guide uses docker; the operating system is CentOS.

    What you’ll need:
    - 2 CPUs or more
    - 2GB of free memory
    - 20GB of free disk space
    - Internet connection
    - Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware Fusion/Workstation

    For an offline docker installation, follow the CSDN post "docker的离线安装以及本地化配置" by zsk_john; make sure the docker environment is set up before continuing.

    The docker version must be between 18.09 and 20.10.
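    A small helper can sanity-check the installed version against that range before you start (pure major.minor arithmetic, shown here only as an illustration):

```shell
# Succeeds if the given docker version (major.minor[.patch]) falls
# within the supported 18.09-20.10 range.
version_ok() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  v=$((10#$major * 100 + 10#$minor))   # e.g. 19.03 -> 1903
  [ "$v" -ge 1809 ] && [ "$v" -le 2010 ]
}

# Usage (needs docker installed):
# version_ok "$(docker version --format '{{.Server.Version}}')" && echo supported
```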

    2.

    Installing minikube

    Download the minikube binary:

    curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
    sudo install minikube-linux-amd64 /usr/local/bin/minikube

    3.

    Importing the images

    When minikube installs kubernetes it pulls images from registries outside China, which are blocked and therefore unreachable from inside the country; hence this offline image bundle.

    [root@slave3 ~]# tar zxf minikube-images.tar.gz
    [root@slave3 ~]# cd minikube-images
    [root@slave3 minikube-images]# for i in `ls ./*`;do docker load <$i;done
    dfccba63d0cc: Loading layer [==================================================>] 80.82MB/80.82MB
    Loaded image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
    225df95e717c: Loading layer [==================================================>] 336.4kB/336.4kB
    c965b38a6629: Loading layer [==================================================>] 43.58MB/43.58MB
    ...... (output truncated)

    4.

    Initializing the kubernetes cluster

    A quick walkthrough of the flags: --image-repository makes minikube pull its images from Alibaba Cloud, --cni selects flannel as the network plugin (delete that line if you don't want it), and the pod network CIDR is handed down to kubeadm and the kubelet via --extra-config. Nothing else needs special attention.

    minikube config set driver none
    minikube start --extra-config=kubeadm.pod-network-cidr='10.244.0.0/16' \
    --extra-config=kubelet.pod-cidr=10.244.0.0/16 \
    --network-plugin=cni \
    --image-repository='registry.aliyuncs.com/google_containers' \
    --cni=flannel \
    --apiserver-ips=192.168.217.23 \
    --kubernetes-version=1.18.8 \
    --vm-driver=none

    Cluster startup log:

    [root@slave3 conntrack]# minikube start --driver=none --kubernetes-version=1.18.8
    * minikube v1.26.1 on Centos 7.4.1708
    * Using the none driver based on user configuration
    * Starting control plane node minikube in cluster minikube
    * Running on localhost (CPUs=4, Memory=7983MB, Disk=51175MB) ...
    * OS release is CentOS Linux 7 (Core)
    E0911 11:23:25.121495 14039 docker.go:148] "Failed to enable" err=<
    sudo systemctl enable docker.socket: exit status 1
    stdout:
    stderr:
    Failed to execute operation: No such file or directory
    > service="docker.socket"
    ! This bare metal machine is having trouble accessing https://k8s.gcr.io
    * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
    > kubectl.sha256: 65 B / 65 B [-------------------------] 100.00% ? p/s 0s
    > kubelet: 108.05 MiB / 108.05 MiB [--------] 100.00% 639.49 KiB p/s 2m53s
    - Generating certificates and keys ...
    - Booting up control plane ...
    ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.8:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
    stdout:
    [init] Using Kubernetes version: v1.18.8
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [certs] Using certificateDir folder "/var/lib/minikube/certs"
    [certs] Using existing ca certificate authority
    [certs] Using existing apiserver certificate and key on disk
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [slave3 localhost] and IPs [192.168.217.136 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [slave3 localhost] and IPs [192.168.217.136 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.
    [kubelet-check] It seems like the kubelet isn't running or healthy.
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
    [kubelet-check] It seems like the kubelet isn't running or healthy.
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
    [kubelet-check] It seems like the kubelet isn't running or healthy.
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
    [kubelet-check] It seems like the kubelet isn't running or healthy.
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
    [kubelet-check] It seems like the kubelet isn't running or healthy.
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
    Unfortunately, an error has occurred:
    timed out waiting for the condition
    This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'
    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.
    Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
    stderr:
    W0911 11:26:38.783101 14450 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING FileExisting-socat]: socat not found in system path
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    W0911 11:26:48.464749 14450 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    W0911 11:26:48.466754 14450 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
    To see the stack trace of this error execute with --v=5 or higher
    - Generating certificates and keys ...
    - Booting up control plane ...
    - Configuring RBAC rules ...
    * Configuring local host environment ...
    *
    ! The 'none' driver is designed for experts who need to integrate with an existing VM
    * Most users should use the newer 'docker' driver instead, which does not require root!
    * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
    *
    ! kubectl and minikube configuration will be stored in /root
    ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
    *
    - sudo mv /root/.kube /root/.minikube $HOME
    - sudo chown -R $USER $HOME/.kube $HOME/.minikube
    *
    * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
    * Verifying Kubernetes components...
    - Using image gcr.io/k8s-minikube/storage-provisioner:v5
    * Enabled addons: storage-provisioner, default-storageclass
    * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

    Stopping and deleting minikube:

    If you want to stop the cluster, the command could not be simpler:

    minikube stop

    Output:

    * Stopping "minikube" in none ...
    * Node "minikube" stopped.

    If the server is rebooted, just run the command again with start to bring minikube back up. Deleting minikube is equally simple: change the argument to delete. This removes the configuration files and so on, provided minikube created them itself; files it did not create are left alone.

    Output of start:

    [root@node3 manifests]# minikube start
    * minikube v1.12.0 on Centos 7.4.1708
    * Using the none driver based on existing profile
    * Starting control plane node minikube in cluster minikube
    * Restarting existing none bare metal machine for "minikube" ...
    * OS release is CentOS Linux 7 (Core)
    * Preparing Kubernetes v1.18.8 on Docker 19.03.9 ...
    * Configuring local host environment ...
    *
    ! The 'none' driver is designed for experts who need to integrate with an existing VM
    * Most users should use the newer 'docker' driver instead, which does not require root!
    * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
    *
    ! kubectl and minikube configuration will be stored in /root
    ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
    *
    - sudo mv /root/.kube /root/.minikube $HOME
    - sudo chown -R $USER $HOME/.kube $HOME/.minikube
    *
    * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
    * Verifying Kubernetes components...
    * Enabled addons: default-storageclass, storage-provisioner
    * Done! kubectl is now configured to use "minikube"

     

    The output above shows that the single-node kubernetes cluster was installed successfully, but a few warnings still need to be handled:

    (1)

    Caching the kubeadm, kubelet and kubectl binaries

    > kubectl.sha256: 65 B / 65 B [-------------------------] 100.00% ? p/s 0s
    > kubelet: 108.05 MiB / 108.05 MiB [--------] 100.00% 639.49 KiB p/s 2m53s

    These binaries are downloaded into /root/.minikube/cache/linux/amd64/v1.18.8/, so to speed things up and keep the deployment fully offline, do the following:

    Create the directory:

    mkdir -p /root/.minikube/cache/linux/amd64/v1.18.8/

    Make the files executable and copy them into that directory:

    chmod a+x kube*   # make the binaries executable
    [root@node3 v1.18.8]# pwd
    /root/.minikube/cache/linux/amd64/v1.18.8
    [root@slave3 v1.18.8]# ll
    total 192544
    -rwxr-xr-x 1 root root 39821312 Sep 11 11:24 kubeadm
    -rwxr-xr-x 1 root root 44040192 Sep 11 11:24 kubectl
    -rwxr-xr-x 1 root root 113300248 Sep 11 11:26 kubelet

    (2)

    Fixing the cluster health-check errors

    [root@slave3 ~]# kubectl get cs
    NAME                 STATUS      MESSAGE                                                                                     ERROR
    controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
    scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
    etcd-0               Healthy     {"health":"true"}

    Solution:

    Delete the --port=0 line from both /etc/kubernetes/manifests/kube-scheduler.yaml and /etc/kubernetes/manifests/kube-controller-manager.yaml.
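    Removing the flag by hand works; it can also be scripted (the sed pattern assumes the flag sits on its own "- --port=0" line, as it does in a stock kubeadm manifest; the backup is written outside the manifests directory because the kubelet would otherwise try to run the .bak file as another static pod):

```shell
# Drop the --port=0 line from a static-pod manifest, keeping a backup.
strip_port_zero() {
  f=$1
  cp "$f" "/tmp/$(basename "$f").bak"   # backup outside /etc/kubernetes/manifests
  sed -i '/--port=0/d' "$f"             # kubelet picks up the change automatically
}

# On the real node:
# strip_port_zero /etc/kubernetes/manifests/kube-scheduler.yaml
# strip_port_zero /etc/kubernetes/manifests/kube-controller-manager.yaml
```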

    Wait a moment, then query again and everything is healthy:

    [root@slave3 ~]# kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    controller-manager   Healthy   ok
    scheduler            Healthy   ok
    etcd-0               Healthy   {"health":"true"}

    5.

    Installing the dashboard

    [root@slave3 ~]# minikube dashboard
    * Enabling dashboard ...
    - Using image kubernetesui/metrics-scraper:v1.0.8
    - Using image kubernetesui/dashboard:v2.6.0
    * Verifying dashboard health ...
    * Launching proxy ...
    * Verifying proxy health ...
    http://127.0.0.1:32844/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

    Setting up a proxy

    [root@slave3 v1.18.8]# kubectl proxy --port=45396 --address='0.0.0.0' --disable-filter=true --accept-hosts='^.*'
    W0911 12:49:38.664081 8709 proxy.go:167] Request filter disabled, your proxy is vulnerable to XSRF attacks, please be cautious
    Starting to serve on [::]:45396

    The URL to open in a browser

    The host's IP is 192.168.217.11. Take the path from the http://127.0.0.1:32844/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
    URL above and join it with the host IP and the proxy port:

    http://192.168.217.11:45396/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

    At this point, minikube is fully installed.

    Appendix:

    About addons

    You can see that the default StorageClass addon is installed, but many other addons are not yet:

    [root@slave3 v1.18.8]# minikube addons list
    |-----------------------------|----------|--------------|--------------------------------|
    | ADDON NAME | PROFILE | STATUS | MAINTAINER |
    |-----------------------------|----------|--------------|--------------------------------|
    | ambassador | minikube | disabled | 3rd party (Ambassador) |
    | auto-pause | minikube | disabled | Google |
    | csi-hostpath-driver | minikube | disabled | Kubernetes |
    | dashboard | minikube | enabled ✅ | Kubernetes |
    | default-storageclass | minikube | enabled ✅ | Kubernetes |
    | efk | minikube | disabled | 3rd party (Elastic) |
    | freshpod | minikube | disabled | Google |
    | gcp-auth | minikube | disabled | Google |
    | gvisor | minikube | disabled | Google |
    | headlamp | minikube | disabled | 3rd party (kinvolk.io) |
    | helm-tiller | minikube | disabled | 3rd party (Helm) |
    | inaccel | minikube | disabled | 3rd party (InAccel |
    | | | | [info@inaccel.com]) |
    | ingress | minikube | disabled | Kubernetes |
    | ingress-dns | minikube | disabled | Google |
    | istio | minikube | disabled | 3rd party (Istio) |
    | istio-provisioner | minikube | disabled | 3rd party (Istio) |
    | kong | minikube | disabled | 3rd party (Kong HQ) |
    | kubevirt | minikube | disabled | 3rd party (KubeVirt) |
    | logviewer | minikube | disabled | 3rd party (unknown) |
    | metallb | minikube | disabled | 3rd party (MetalLB) |
    | metrics-server | minikube | disabled | Kubernetes |
    | nvidia-driver-installer | minikube | disabled | Google |
    | nvidia-gpu-device-plugin | minikube | disabled | 3rd party (Nvidia) |
    | olm | minikube | disabled | 3rd party (Operator Framework) |
    | pod-security-policy | minikube | disabled | 3rd party (unknown) |
    | portainer | minikube | disabled | 3rd party (Portainer.io) |
    | registry | minikube | disabled | Google |
    | registry-aliases | minikube | disabled | 3rd party (unknown) |
    | registry-creds | minikube | disabled | 3rd party (UPMC Enterprises) |
    | storage-provisioner | minikube | enabled ✅ | Google |
    | storage-provisioner-gluster | minikube | disabled | 3rd party (Gluster) |
    | volumesnapshots | minikube | disabled | Kubernetes |
    |-----------------------------|----------|--------------|--------------------------------|

     Take enabling ingress as an example (the command below also prints the installation log to stderr):

    [root@slave3 v1.18.8]# minikube addons enable ingress --alsologtostderr
    I0911 13:09:08.559523 14428 out.go:296] Setting OutFile to fd 1 ...
    I0911 13:09:08.572541 14428 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
    I0911 13:09:08.572593 14428 out.go:309] Setting ErrFile to fd 2...
    I0911 13:09:08.572609 14428 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
    I0911 13:09:08.572908 14428 root.go:333] Updating PATH: /root/.minikube/bin
    I0911 13:09:08.577988 14428 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
    You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
    You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    I0911 13:09:08.580137 14428 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.18.8
    I0911 13:09:08.580198 14428 addons.go:65] Setting ingress=true in profile "minikube"
    I0911 13:09:08.580243 14428 addons.go:153] Setting addon ingress=true in "minikube"
    I0911 13:09:08.580572 14428 host.go:66] Checking if "minikube" exists ...
    I0911 13:09:08.581080 14428 exec_runner.go:51] Run: systemctl --version
    I0911 13:09:08.584877 14428 kubeconfig.go:92] found "minikube" server: "https://192.168.217.136:8443"
    I0911 13:09:08.584942 14428 api_server.go:165] Checking apiserver status ...
    I0911 13:09:08.584982 14428 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
    I0911 13:09:08.611630 14428 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15576/cgroup
    I0911 13:09:08.626851 14428 api_server.go:181] apiserver freezer: "9:freezer:/kubepods/burstable/pod1a4a24f29bac3cef528a8b328b9798b5/c8a589a612154591de984664d86a3ad96a449f3d0b1145527ceea9c5ed267124"
    I0911 13:09:08.626952 14428 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod1a4a24f29bac3cef528a8b328b9798b5/c8a589a612154591de984664d86a3ad96a449f3d0b1145527ceea9c5ed267124/freezer.state
    I0911 13:09:08.638188 14428 api_server.go:203] freezer state: "THAWED"
    I0911 13:09:08.638329 14428 api_server.go:240] Checking apiserver healthz at https://192.168.217.136:8443/healthz ...
    I0911 13:09:08.649018 14428 api_server.go:266] https://192.168.217.136:8443/healthz returned 200:
    ok
    I0911 13:09:08.650082 14428 out.go:177] - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
    - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
    I0911 13:09:08.652268 14428 out.go:177] - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    I0911 13:09:08.653129 14428 out.go:177] - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    I0911 13:09:08.654440 14428 addons.go:345] installing /etc/kubernetes/addons/ingress-deploy.yaml
    I0911 13:09:08.654528 14428 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
    I0911 13:09:08.654720 14428 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4099945938 /etc/kubernetes/addons/ingress-deploy.yaml
    I0911 13:09:08.668351 14428 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.8/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
    I0911 13:09:09.748481 14428 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.8/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.080019138s)
    I0911 13:09:09.748552 14428 addons.go:383] Verifying addon ingress=true in "minikube"
    I0911 13:09:09.751805 14428 out.go:177] * Verifying ingress addon...

    As you can see, the resource manifest used for the installation is applied with:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.8/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml

    The file is very long, and because it references image registries outside China, the installation usually fails.

    The solution is to find the images it references and replace them with mirrors that can actually be pulled from inside China.
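    One way to do that substitution in bulk is sketched below. The mirror prefix is an assumption — use whichever domestic registry really hosts the same tags, and verify with docker pull afterwards:

```shell
# Rewrite the k8s.gcr.io image references in the addon manifest to a mirror.
relocate_images() {
  manifest=$1
  mirror=${2:-registry.cn-hangzhou.aliyuncs.com/google_containers}  # assumed mirror
  sed -i "s#k8s.gcr.io/ingress-nginx#${mirror}#g" "$manifest"
}

# On the real node:
# relocate_images /etc/kubernetes/addons/ingress-deploy.yaml
# grep 'image:' /etc/kubernetes/addons/ingress-deploy.yaml   # confirm the result
```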

    There is also a permission problem that may produce an error like this:

    F0911 05:24:52.171825       6 ssl.go:389] unexpected error storing fake SSL Cert: could not create PEM certificate file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied

    The solution:

    Edit the same file, change the value of runAsUser to 33, and then re-apply it:

    kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
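    The runAsUser edit can also be scripted. This assumes the stock manifest sets runAsUser: 101, the UID recent ingress-nginx manifests use — check the file first if unsure:

```shell
# Change the controller's runAsUser in the addon manifest (stock UID assumed 101,
# replaced by 33, the www-data UID).
set_run_as_user() {
  manifest=$1
  uid=${2:-33}
  sed -i "s/runAsUser: 101/runAsUser: ${uid}/" "$manifest"
}

# On the real node:
# set_run_as_user /etc/kubernetes/addons/ingress-deploy.yaml
# kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
```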

    [root@slave3 v1.18.8]# cat /etc/kubernetes/addons/ingress-deploy.yaml
    # Copyright 2021 The Kubernetes Authors All rights reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    # ref: https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/kind/deploy.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx
    ---
    apiVersion: v1
    automountServiceAccountToken: true
    kind: ServiceAccount
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx
      namespace: ingress-nginx
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx-admission
      namespace: ingress-nginx
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx
      namespace: ingress-nginx
    rules:
    - apiGroups:
      - ""
      resources:
      - namespaces
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - configmaps
      - pods
      - secrets
      - endpoints
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - services
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - extensions
      - networking.k8s.io
      resources:
      - ingresses
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - extensions
      - networking.k8s.io
      resources:
      - ingresses/status
      verbs:
      - update
    - apiGroups:
      - networking.k8s.io
      resources:
      - ingressclasses
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resourceNames:
      - ingress-controller-leader
      resources:
      - configmaps
      verbs:
      - get
      - update
    - apiGroups:
      - ""
      resources:
      - configmaps
      verbs:
      - create
    - apiGroups:
      - ""
      resources:
      - events
      verbs:
      - create
      - patch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx-admission
      namespace: ingress-nginx
    rules:
    - apiGroups:
      - ""
      resources:
      - secrets
      verbs:
      - get
      - create
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx
    rules:
    - apiGroups:
      - ""
      resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
      verbs:
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - services
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - extensions
      - networking.k8s.io
      resources:
      - ingresses
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - events
      verbs:
      - create
      - patch
    - apiGroups:
      - extensions
      - networking.k8s.io
      resources:
      - ingresses/status
      verbs:
      - update
    - apiGroups:
      - networking.k8s.io
      resources:
      - ingressclasses
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx-admission
    rules:
    - apiGroups:
      - admissionregistration.k8s.io
      resources:
      - validatingwebhookconfigurations
      verbs:
      - get
      - update
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx
      namespace: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: ingress-nginx
    subjects:
    - kind: ServiceAccount
      name: ingress-nginx
      namespace: ingress-nginx
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx-admission
      namespace: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: ingress-nginx-admission
    subjects:
    - kind: ServiceAccount
      name: ingress-nginx-admission
      namespace: ingress-nginx
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: ingress-nginx
    subjects:
    - kind: ServiceAccount
      name: ingress-nginx
      namespace: ingress-nginx
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx-admission
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: ingress-nginx-admission
    subjects:
    - kind: ServiceAccount
      name: ingress-nginx-admission
      namespace: ingress-nginx
    ---
    apiVersion: v1
    data:
      # see https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md for all possible options and their description
      hsts: "false"
      # see https://github.com/kubernetes/minikube/pull/12702#discussion_r727519180: 'allow-snippet-annotations' should be used only if strictly required by another part of the deployment
      # allow-snippet-annotations: "true"
    kind: ConfigMap
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx-controller
      namespace: ingress-nginx
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: udp-services
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: http
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
      selector:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      type: NodePort
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
    spec:
      ports:
      - name: https-webhook
        port: 443
        targetPort: webhook
      selector:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
    380. labels:
    381. app.kubernetes.io/component: controller
    382. app.kubernetes.io/instance: ingress-nginx
    383. app.kubernetes.io/name: ingress-nginx
    384. name: ingress-nginx-controller
    385. namespace: ingress-nginx
    386. spec:
    387. minReadySeconds: 0
    388. revisionHistoryLimit: 10
    389. selector:
    390. matchLabels:
    391. app.kubernetes.io/component: controller
    392. app.kubernetes.io/instance: ingress-nginx
    393. app.kubernetes.io/name: ingress-nginx
    394. strategy:
    395. rollingUpdate:
    396. maxUnavailable: 1
    397. type: RollingUpdate
    398. template:
    399. metadata:
    400. labels:
    401. app.kubernetes.io/component: controller
    402. app.kubernetes.io/instance: ingress-nginx
    403. app.kubernetes.io/name: ingress-nginx
    404. gcp-auth-skip-secret: "true"
    405. spec:
    406. containers:
    407. - args:
    408. - /nginx-ingress-controller
    409. - --election-id=ingress-controller-leader
    410. - --ingress-class=nginx
    411. - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
    412. - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
    413. - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
    414. - --validating-webhook=:8443
    415. - --validating-webhook-certificate=/usr/local/certificates/cert
    416. - --validating-webhook-key=/usr/local/certificates/key
    417. env:
    418. - name: POD_NAME
    419. valueFrom:
    420. fieldRef:
    421. fieldPath: metadata.name
    422. - name: POD_NAMESPACE
    423. valueFrom:
    424. fieldRef:
    425. fieldPath: metadata.namespace
    426. - name: LD_PRELOAD
    427. value: /usr/local/lib/libmimalloc.so
    428. image: k8s.gcr.io/ingress-nginx/controller:v0.49.3@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324
    429. imagePullPolicy: IfNotPresent
    430. lifecycle:
    431. preStop:
    432. exec:
    433. command:
    434. - /wait-shutdown
    435. livenessProbe:
    436. failureThreshold: 5
    437. httpGet:
    438. path: /healthz
    439. port: 10254
    440. scheme: HTTP
    441. initialDelaySeconds: 10
    442. periodSeconds: 10
    443. successThreshold: 1
    444. timeoutSeconds: 1
    445. name: controller
    446. ports:
    447. - containerPort: 80
    448. hostPort: 80
    449. name: http
    450. protocol: TCP
    451. - containerPort: 443
    452. hostPort: 443
    453. name: https
    454. protocol: TCP
    455. - containerPort: 8443
    456. name: webhook
    457. protocol: TCP
    458. readinessProbe:
    459. failureThreshold: 3
    460. httpGet:
    461. path: /healthz
    462. port: 10254
    463. scheme: HTTP
    464. initialDelaySeconds: 10
    465. periodSeconds: 10
    466. successThreshold: 1
    467. timeoutSeconds: 1
    468. resources:
    469. requests:
    470. cpu: 100m
    471. memory: 90Mi
    472. securityContext:
    473. allowPrivilegeEscalation: true
    474. capabilities:
    475. add:
    476. - NET_BIND_SERVICE
    477. drop:
    478. - ALL
    479. runAsUser: 101
    480. volumeMounts:
    481. - mountPath: /usr/local/certificates/
    482. name: webhook-cert
    483. readOnly: true
    484. dnsPolicy: ClusterFirst
    485. nodeSelector:
    486. minikube.k8s.io/primary: "true"
    487. kubernetes.io/os: linux
    488. serviceAccountName: ingress-nginx
    489. terminationGracePeriodSeconds: 0
    490. tolerations:
    491. - effect: NoSchedule
    492. key: node-role.kubernetes.io/master
    493. operator: Equal
    494. volumes:
    495. - name: webhook-cert
    496. secret:
    497. secretName: ingress-nginx-admission
    498. ---
    499. apiVersion: batch/v1
    500. kind: Job
    501. metadata:
    502. labels:
    503. app.kubernetes.io/component: admission-webhook
    504. app.kubernetes.io/instance: ingress-nginx
    505. app.kubernetes.io/name: ingress-nginx
    506. name: ingress-nginx-admission-create
    507. namespace: ingress-nginx
    508. spec:
    509. template:
    510. metadata:
    511. labels:
    512. app.kubernetes.io/component: admission-webhook
    513. app.kubernetes.io/instance: ingress-nginx
    514. app.kubernetes.io/name: ingress-nginx
    515. name: ingress-nginx-admission-create
    516. spec:
    517. containers:
    518. - args:
    519. - create
    520. - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
    521. - --namespace=$(POD_NAMESPACE)
    522. - --secret-name=ingress-nginx-admission
    523. env:
    524. - name: POD_NAMESPACE
    525. valueFrom:
    526. fieldRef:
    527. fieldPath: metadata.namespace
    528. image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
    529. imagePullPolicy: IfNotPresent
    530. name: create
    531. securityContext:
    532. allowPrivilegeEscalation: false
    533. nodeSelector:
    534. minikube.k8s.io/primary: "true"
    535. kubernetes.io/os: linux
    536. restartPolicy: OnFailure
    537. securityContext:
    538. runAsNonRoot: true
    539. runAsUser: 2000
    540. serviceAccountName: ingress-nginx-admission
    541. ---
    542. apiVersion: batch/v1
    543. kind: Job
    544. metadata:
    545. labels:
    546. app.kubernetes.io/component: admission-webhook
    547. app.kubernetes.io/instance: ingress-nginx
    548. app.kubernetes.io/name: ingress-nginx
    549. name: ingress-nginx-admission-patch
    550. namespace: ingress-nginx
    551. spec:
    552. template:
    553. metadata:
    554. labels:
    555. app.kubernetes.io/component: admission-webhook
    556. app.kubernetes.io/instance: ingress-nginx
    557. app.kubernetes.io/name: ingress-nginx
    558. name: ingress-nginx-admission-patch
    559. spec:
    560. containers:
    561. - args:
    562. - patch
    563. - --webhook-name=ingress-nginx-admission
    564. - --namespace=$(POD_NAMESPACE)
    565. - --patch-mutating=false
    566. - --secret-name=ingress-nginx-admission
    567. - --patch-failure-policy=Fail
    568. env:
    569. - name: POD_NAMESPACE
    570. valueFrom:
    571. fieldRef:
    572. fieldPath: metadata.namespace
    573. image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
    574. imagePullPolicy: IfNotPresent
    575. name: patch
    576. securityContext:
    577. allowPrivilegeEscalation: false
    578. nodeSelector:
    579. minikube.k8s.io/primary: "true"
    580. kubernetes.io/os: linux
    581. restartPolicy: OnFailure
    582. securityContext:
    583. runAsNonRoot: true
    584. runAsUser: 2000
    585. serviceAccountName: ingress-nginx-admission
    586. ---
    587. apiVersion: admissionregistration.k8s.io/v1
    588. kind: ValidatingWebhookConfiguration
    589. metadata:
    590. labels:
    591. app.kubernetes.io/component: admission-webhook
    592. app.kubernetes.io/instance: ingress-nginx
    593. app.kubernetes.io/name: ingress-nginx
    594. name: ingress-nginx-admission
    595. webhooks:
    596. - admissionReviewVersions:
    597. - v1
    598. - v1beta1
    599. clientConfig:
    600. service:
    601. name: ingress-nginx-controller-admission
    602. namespace: ingress-nginx
    603. path: /networking/v1beta1/ingresses
    604. failurePolicy: Fail
    605. matchPolicy: Equivalent
    606. name: validate.nginx.ingress.kubernetes.io
    607. rules:
    608. - apiGroups:
    609. - networking.k8s.io
    610. apiVersions:
    611. - v1beta1
    612. operations:
    613. - CREATE
    614. - UPDATE
    615. resources:
    616. - ingresses
    617. sideEffects: None
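    The ValidatingWebhookConfiguration at the end of this manifest intercepts CREATE and UPDATE of `networking.k8s.io/v1beta1` Ingress objects, which is the Ingress API version used on Kubernetes v1.18. As a minimal sketch (the host `demo.example.com` and the backend Service `web` on port 8080 are hypothetical names, not part of this article), an Ingress that this webhook would validate looks like:

```yaml
# Minimal test Ingress (hypothetical host and service names).
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    # matches the controller's --ingress-class=nginx argument above
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web        # assumed pre-existing Service
          servicePort: 8080
```

    Requests carrying the header `Host: demo.example.com` would then be routed by the controller through the NodePort Service defined above.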

    After the installation completes, you should see something like this:

    [root@slave3 v1.18.8]# kubectl get all -n ingress-nginx
    NAME                                            READY   STATUS      RESTARTS   AGE
    pod/ingress-nginx-admission-create-n5hc5        0/1     Completed   0          28m
    pod/ingress-nginx-admission-patch-cgzl9         0/1     Completed   0          28m
    pod/ingress-nginx-controller-54b856d6d7-7fr7q   1/1     Running     0          9m54s

    NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
    service/ingress-nginx-controller             NodePort    10.107.186.74   <none>        80:31411/TCP,443:32683/TCP   28m
    service/ingress-nginx-controller-admission   ClusterIP   10.106.184.40   <none>        443/TCP                      28m

    NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/ingress-nginx-controller   1/1     1            1           28m

    NAME                                                  DESIRED   CURRENT   READY   AGE
    replicaset.apps/ingress-nginx-controller-54b856d6d7   1         1         1       9m54s
    replicaset.apps/ingress-nginx-controller-7689b8b4f9   0         0         0       17m
    replicaset.apps/ingress-nginx-controller-77cc874b76   0         0         0       28m

    NAME                                       COMPLETIONS   DURATION   AGE
    job.batch/ingress-nginx-admission-create   1/1           21s        28m
    job.batch/ingress-nginx-admission-patch    1/1           22s        28m
    [root@slave3 v1.18.8]#

    The ingress addon is now installed.
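    To reach the controller from outside the cluster, read the PORT(S) column of the `kubectl get all` output above: `80:31411/TCP` means Service port 80 is published on node port 31411. A small sketch (the IP and port are hypothetical sample values; substitute your own from `minikube ip` and the PORT(S) column):

```shell
# Compose the external URL for the NodePort service.
NODE_IP=192.168.49.2      # sample value; use the output of `minikube ip`
HTTP_NODE_PORT=31411      # sample value; node port from "80:31411/TCP"
URL="http://${NODE_IP}:${HTTP_NODE_PORT}/"
echo "${URL}"
```

    You could then test routing with something like `curl -H "Host: <your-ingress-host>" "${URL}"`.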

  • Original article: https://blog.csdn.net/alwaysbefine/article/details/126802883