SuperEdge Made Easy Series: Building a SuperEdge Cluster with One Command


    Environment:

    OS          Public IP        Private IP     Location    K8s version
    CentOS 7.9  114.116.101.254  192.168.0.245  Beijing     v1.22.6
    CentOS 7.9  119.8.1.96       192.168.0.83   Hong Kong   v1.22.6
    Ubuntu 22   94.74.108.152    192.168.0.154  New York    v1.22.6

    1. Starting the deployment

    1.1 Build an edge cluster from scratch with two commands

    1.1.1 Download the installation package

    The two most recent edgeadm releases [v0.9.0, v0.8.2] support the following combinations of CPU architecture arch [amd64, arm64] and Kubernetes version [1.22.6, 1.20.6]; download the one you need:

        CPU arch [amd64, arm64], kubernetes version [1.22.6], version: v0.9.0
        CPU arch [amd64, arm64], kubernetes version [1.22.6, 1.20.6], version: v0.8.2

    Adjust the arch/version/kubernetesVersion variables to download the matching tgz package:
    
    arch=amd64 version=v0.9.0 kubernetesVersion=1.22.6 && rm -rf edgeadm-linux-* && wget https://superedge-1253687700.cos.ap-guangzhou.myqcloud.com/$version/$arch/edgeadm-linux-$arch-$version-k8s-$kubernetesVersion.tgz && tar -xzvf edgeadm-linux-* && cd edgeadm-linux-$arch-$version-k8s-$kubernetesVersion && ./edgeadm
    # Choose arch to match your system's CPU architecture, e.g. arch=amd64
    
    [root@cloucore1 ~]# arch=amd64 version=v0.9.0 kubernetesVersion=1.22.6 && rm -rf edgeadm-linux-* && wget https://superedge-1253687700.cos.ap-guangzhou.myqcloud.com/$version/$arch/edgeadm-linux-$arch-$version-k8s-$kubernetesVersion.tgz && tar -xzvf edgeadm-linux-* && cd edgeadm-linux-$arch-$version-k8s-$kubernetesVersion && ./edgeadm
    --2023-10-12 20:38:36--  https://superedge-1253687700.cos.ap-guangzhou.myqcloud.com/v0.9.0/amd64/edgeadm-linux-amd64-v0.9.0-k8s-1.22.6.tgz
    Resolving superedge-1253687700.cos.ap-guangzhou.myqcloud.com (superedge-1253687700.cos.ap-guangzhou.myqcloud.com)... 159.75.57.36, 159.75.57.69
    Connecting to superedge-1253687700.cos.ap-guangzhou.myqcloud.com (superedge-1253687700.cos.ap-guangzhou.myqcloud.com)|159.75.57.36|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 245866753 (234M) [application/octet-stream]
    Saving to: ‘edgeadm-linux-amd64-v0.9.0-k8s-1.22.6.tgz’
    
    100%[============================================================================================================>] 245,866,753 10.9MB/s   in 21s    
    
    2023-10-12 20:38:57 (11.4 MB/s) - ‘edgeadm-linux-amd64-v0.9.0-k8s-1.22.6.tgz’ saved [245866753/245866753]
    
    ./edgeadm-linux-amd64-v0.9.0-k8s-1.22.6/
    ./edgeadm-linux-amd64-v0.9.0-k8s-1.22.6/edgeadm
    ./edgeadm-linux-amd64-v0.9.0-k8s-1.22.6/kube-linux-amd64-v1.22.6.tar.gz
    edgeadm use to manage edge kubernetes cluster
    
    Usage:
      edgeadm COMMAND [arg...] [flags]
      edgeadm [command]
    
    Available Commands:
      addon       Addon apps to Kubernetes cluster
      change      Change kubernetes cluster to edge cluster
      detach      Detach apps from Kubernetes cluster
      help        Help about any command
      init        Run this command in order to set up the Kubernetes control plane
      join        Run this on any machine you wish to join an existing cluster
      manifests   Output edge cluster manifest yaml files
      reset       Performs a best effort revert of changes made to this host by 'edgeadm init' or 'edgeadm join'
      revert      Revert edge cluster to your original cluster
      token       Manage bootstrap tokens
      version     Output edgeadm build info
    
    Flags:
          --add-dir-header                   If true, adds the file directory to the header of the log messages
          --alsologtostderr                  log to standard error as well as files
          --enable-edge                      Enable of install edge kubernetes cluster. (default true)
      -h, --help                             help for edgeadm
          --kubeconfig string                The path to the kubeconfig file. [necessary] (default "~/.kube/config")
          --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
          --log-dir string                   If non-empty, write log files in this directory
          --log-file string                  If non-empty, use this log file
          --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
          --logtostderr                      log to standard error instead of files (default true)
          --one-output                       If true, only write logs to their native severity level (vs also writing to each lower severity level)
          --skip-headers                     If true, avoid header prefixes in the log messages
          --skip-log-headers                 If true, avoid headers when opening log files
          --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
      -v, --v Level                          number for the log level verbosity
          --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging
          --worker-path string               Worker path of install edge kubernetes cluster. (default "/tmp/edgeadm-tmp/")
    
    Use "edgeadm [command] --help" for more information about a command.
    [root@cloucore1 edgeadm-linux-amd64-v0.9.0-k8s-1.22.6]# ll
    total 276400
    -rwxr-xr-x 1 root root  61731777 Apr 17 19:42 edgeadm
    -rw-r--r-- 1 root root 221297932 Apr 17 19:42 kube-linux-amd64-v1.22.6.tar.gz
    [root@cloucore1 edgeadm-linux-amd64-v0.9.0-k8s-1.22.6]# 
    

    This static installation package can also be downloaded from the GitHub Releases page.
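The download URL in the command above follows a fixed pattern built from the three variables. As a convenience, it can be wrapped in a small helper function (a sketch; only the arch/version/kubernetesVersion parameters vary, the rest of the path is taken verbatim from the command shown):

```shell
# build_edgeadm_url: compose the tgz download URL from the CPU arch,
# edgeadm version, and Kubernetes version, matching the pattern above.
build_edgeadm_url() {
  local arch="$1" version="$2" k8s="$3"
  echo "https://superedge-1253687700.cos.ap-guangzhou.myqcloud.com/${version}/${arch}/edgeadm-linux-${arch}-${version}-k8s-${k8s}.tgz"
}

# Example: URL for amd64, edgeadm v0.9.0, Kubernetes 1.22.6
build_edgeadm_url amd64 v0.9.0 1.22.6
```

On arm64 machines you would call it with `arm64` instead; the rest of the install flow is unchanged.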

    2. Install the edge Kubernetes master node

    After extracting the downloaded package, enter the directory and run the command below:

    # Keep the output of a successful run; you will need it when joining nodes to the cluster
    ./edgeadm init --kubernetes-version=1.22.6 --image-repository superedge.tencentcloudcr.com/superedge --service-cidr=10.244.0.0/16 --pod-network-cidr=10.233.0.0/16 --install-pkg-path ./kube-linux-*.tar.gz --apiserver-cert-extra-sans=114.116.101.254  --apiserver-advertise-address=192.168.0.245 --enable-edge=true --edge-version=0.9.0
    

    --enable-edge=true: whether to deploy the edge capability components; defaults to true. --enable-edge=false installs a native Kubernetes cluster, exactly the same as one built with kubeadm;

    --install-pkg-path: may be a path on the machine or a network address (for example http://xxx/xxx/kube-linux-arm64/amd64-*.tar.gz, anything that can be fetched with wget without credentials); be sure to use the Kubernetes static installation package that matches the machine's architecture;

    --image-repository: the image registry address;

    --apiserver-cert-extra-sans=<IP/domain>: the public IP or external domain name of the cloud control plane that the edge nodes will connect through; the apiserver signs its serving certificate for these SANs so edge nodes can reach it;

    --apiserver-advertise-address=<IP>: the internal node IP that edgeadm binds etcd and the apiserver to during initialization;

    --edge-version=0.9.0: to use the latest Kins capability, specify the latest v0.9.0 release (which supports Kubernetes 1.22.6 only). If you do not need Kins but want an older Kubernetes version such as 1.20, use v0.8.2; it supports the latest cloud-edge tunnel, with layer-7 interconnection among the three node types (cloud master, worker, and edge nodes) and more complete adaptation.
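Since the IPs differ per environment, it can help to parameterize the init command instead of editing it inline. A minimal sketch (the IP values below are this article's example machines; substitute your own):

```shell
PUBLIC_IP=114.116.101.254   # cloud control-plane public IP, used as a cert SAN
PRIVATE_IP=192.168.0.245    # internal node IP for etcd/apiserver binding
K8S_VERSION=1.22.6
EDGE_VERSION=0.9.0

# Assemble the init command shown above from the variables.
INIT_CMD="./edgeadm init \
  --kubernetes-version=${K8S_VERSION} \
  --image-repository superedge.tencentcloudcr.com/superedge \
  --service-cidr=10.244.0.0/16 \
  --pod-network-cidr=10.233.0.0/16 \
  --install-pkg-path ./kube-linux-*.tar.gz \
  --apiserver-cert-extra-sans=${PUBLIC_IP} \
  --apiserver-advertise-address=${PRIVATE_IP} \
  --enable-edge=true \
  --edge-version=${EDGE_VERSION}"

# Print the command for review before running it.
echo "$INIT_CMD"
```

Printing before executing makes it easy to double-check that the SAN and advertise address really point at the right interfaces.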

    [root@cloucore1 edgeadm-linux-amd64-v0.9.0-k8s-1.22.6]# pwd
    /root/edgeadm-linux-amd64-v0.9.0-k8s-1.22.6
    [root@cloucore1 edgeadm-linux-amd64-v0.9.0-k8s-1.22.6]# ./edgeadm init --kubernetes-version=1.22.6 --image-repository superedge.tencentcloudcr.com/superedge --service-cidr=10.244.0.0/16 --pod-network-cidr=10.233.0.0/16 --install-pkg-path ./kube-linux-*.tar.gz --apiserver-cert-extra-sans=114.116.101.254  --apiserver-advertise-address=192.168.0.245 --enable-edge=true --edge-version=0.9.0
    [init] Deploy Kubernetes kubeadm-config: {"BootstrapTokens":[{"token":"qgpl1h.3ttnx38x8oe1oztp","description":"The default bootstrap token generated by 'edgeadm init'.","ttl":"24h0m0s","usages":["signing","authentication"],"groups":["system:bootstrappers:kubeadm:default-node-token"]}],"NodeRegistration":{"Name":"cloucore1","CRISocket":"/run/containerd/containerd.sock","Taints":[{"key":"node-role.kubernetes.io/master","effect":"NoSchedule"}],"KubeletExtraArgs":null,"IgnorePreflightErrors":[],"imagePullPolicy":"IfNotPresent"},"LocalAPIEndpoint":{"AdvertiseAddress":"192.168.0.245","BindPort":6443},"CertificateKey":"","SkipPhases":null,"Patches":null}
    I1012 20:44:22.725008    8688 runtime.go:64] Installed container runtime containerd successfully
    [preflight] Running pre-flight checks
    	[WARNING FileExisting-socat]: socat not found in system path
    	[WARNING Hostname]: hostname "cloucore1" could not be reached
    	[WARNING Hostname]: hostname "cloucore1": lookup cloucore1 on 100.125.21.250:53: no such host
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [cloucore1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.244.0.1 192.168.0.245 114.116.101.254]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [cloucore1 localhost] and IPs [192.168.0.245 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [cloucore1 localhost] and IPs [192.168.0.245 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Using existing ca certificate authority
    [certs] Generating "tunnel-anp-client" certificate and key
    [certs] Using the existing "sa" key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [patches] Reading patches from path "/etc/kubernetes/patch/"
    [patches] Found the following patch files: [kube-apiserver0+merge.yaml]
    [patches] Applied patch of type "application/merge-patch+json" to target "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 18.004611 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node cloucore1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    [mark-control-plane] Marking the node cloucore1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: qgpl1h.3ttnx38x8oe1oztp
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    I1012 20:45:49.600288    8688 cni_addon.go:91] Deploy kube-flannel.yaml success!
    I1012 20:45:50.386959    8688 edge_app.go:281] Deploy service-group success!
    Create tunnel-cloud.yaml success!
    I1012 20:45:52.599666    8688 edge_app.go:197] Deploy tunnel-cloud.yaml success!
    I1012 20:45:52.999891    8688 deploy_tunnel.go:123] Deploy tunnel-edge.yaml success!
    I1012 20:45:52.999910    8688 edge_app.go:216] Deploy tunnel-edge.yaml success!
    I1012 20:45:54.408108    8688 deploy_edge_health.go:66] Create edge-health.yaml success!
    I1012 20:45:54.578420    8688 deploy_edge_health.go:66] Create edge-health-admission.yaml success!
    I1012 20:45:54.587455    8688 deploy_edge_health.go:66] Create edge-health-webhook.yaml success!
    I1012 20:45:55.415176    8688 deploy_site_manager.go:40] Deploy site-manager.yaml success!
    I1012 20:45:56.210365    8688 deploy_edge_coredns.go:55] Deploy edge-coredns.yaml success!
    I1012 20:45:57.571537    8688 edge_app.go:390] Prepare Config Join Node configMap success
    W1012 20:45:58.974177    8688 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
    W1012 20:45:58.981270    8688 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
    I1012 20:45:59.570894    8688 edge_app.go:364] Update Kubernetes cluster config support marginal autonomy success
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    edgeadm join 114.116.101.254:6443 --token qgpl1h.3ttnx38x8oe1oztp \
    	--discovery-token-ca-cert-hash sha256:a093916ca55608f318b608617821a976c012c271ba40081d097a977922f1640a  \
        --install-pkg-path <Path of edgeadm kube-* install package>
    
    [root@cloucore1 edgeadm-linux-amd64-v0.9.0-k8s-1.22.6]# 
    
    
    # Run these 3 commands on the edge cluster's master node
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    # [root@edge-node1 ~] run this on every node to be joined
    arch=amd64 version=v0.9.0 kubernetesVersion=1.22.6 && rm -rf edgeadm-linux-* && wget https://superedge-1253687700.cos.ap-guangzhou.myqcloud.com/$version/$arch/edgeadm-linux-$arch-$version-k8s-$kubernetesVersion.tgz && tar -xzvf edgeadm-linux-* && cd edgeadm-linux-$arch-$version-k8s-$kubernetesVersion && ./edgeadm
    # Choose arch to match your system's CPU architecture, e.g. arch=amd64
    
    # [root@edge-node1 ~] run this on every node to be joined
    ./edgeadm  join 114.116.101.254:6443 --token qgpl1h.3ttnx38x8oe1oztp --discovery-token-ca-cert-hash sha256:a093916ca55608f318b608617821a976c012c271ba40081d097a977922f1640a   --install-pkg-path  ./kube-linux-amd64-v1.22.6.tar.gz
    
    [root@edge-node1 ~]# ll
    total 276408
    -rwxr-xr-x 1 root root  61731777 Oct 12 20:54 edgeadm
    -rw-r--r-- 1 root root 221297932 Oct 12 20:56 kube-linux-amd64-v1.22.6.tar.gz
    [root@edge-node1 ~]# ./edgeadm  join 114.116.101.254:6443 --token qgpl1h.3ttnx38x8oe1oztp \
    > --discovery-token-ca-cert-hash sha256:a093916ca55608f318b608617821a976c012c271ba40081d097a977922f1640a  \
    >     --install-pkg-path ./kube-linux-amd64-v1.22.6.tar.gz 
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    W1012 20:58:50.152755    8170 join_master.go:168] Get cluster-info configMap node-delay-domain value nil
    W1012 20:58:50.152811    8170 join_master.go:186] Get cluster-info configMap host-config value nil
    W1012 20:58:50.157688    8170 join_master.go:197] Get cluster-info configMap insecure-registries value nil
    I1012 20:58:54.215269    8170 runtime.go:64] Installed container runtime containerd successfully
    W1012 20:58:54.223056    8170 runcmd.go:52] Wait command: systemctl status lite-apiserver.service exec finish error: exit status 4
    W1012 20:58:54.223082    8170 lite_apiserver_init.go:108] Running linux command: systemctl status lite-apiserver.service error: exit status 4
    I1012 20:58:54.587444    8170 lite_apiserver_init.go:170] Deploy lite-apiserver success!
    [preflight] Running pre-flight checks
    	[WARNING FileExisting-socat]: socat not found in system path
    	[WARNING Hostname]: hostname "edge-node1" could not be reached
    	[WARNING Hostname]: hostname "edge-node1": lookup edge-node1 on 100.125.1.22:53: no such host
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    
    [root@edge-node1 ~]#
    # Seeing the output above means the node was installed and joined successfully.
    

    Likewise, if there are other nodes to add, repeat step 2 on each of them.
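When several nodes need to join, the per-node command can be generated in a loop. A sketch that only prints the command to run on each node (the node list is hypothetical; the token, CA hash, and endpoint are the ones printed by this cluster's init):

```shell
MASTER=114.116.101.254:6443
TOKEN=qgpl1h.3ttnx38x8oe1oztp
CA_HASH=sha256:a093916ca55608f318b608617821a976c012c271ba40081d097a977922f1640a

# Hypothetical list of node IPs still waiting to join.
NODES="192.168.0.83 192.168.0.154"

for node in $NODES; do
  echo "# on $node, from the extracted edgeadm directory:"
  echo "./edgeadm join $MASTER --token $TOKEN --discovery-token-ca-cert-hash $CA_HASH --install-pkg-path ./kube-linux-amd64-v1.22.6.tar.gz"
done
```

Note that bootstrap tokens expire (24h by default here), so for nodes joined later you may need `edgeadm token` to issue a fresh one.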

    3. Check that the cluster is healthy

     kubectl get nodes
     kubectl get pods -n kube-system
     kubectl get pods -A
     kubectl get svc -A
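For scripted health checks, the `kubectl get nodes` output can be scanned for any node that is not Ready. A sketch that does only the text processing, demonstrated on a canned sample modeled on the output below (in a live cluster you would pipe `kubectl get nodes` into it):

```shell
# not_ready_count: given `kubectl get nodes` output on stdin, count
# nodes whose STATUS column is anything other than "Ready".
not_ready_count() {
  awk 'NR > 1 && $2 != "Ready" { n++ } END { print n + 0 }'
}

# Live usage:   kubectl get nodes | not_ready_count
# Canned sample (edge-node2 deliberately marked NotReady for illustration):
sample='NAME         STATUS     ROLES                  AGE   VERSION
cloucore1    Ready      control-plane,master   41m   v1.22.6
edge-node1   Ready      <none>                 28m   v1.22.6
edge-node2   NotReady   <none>                 2m    v1.22.6'

echo "$sample" | not_ready_count   # → 1
```

A result of 0 means every node is Ready and the join succeeded everywhere.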
    
    [root@cloucore1 ~]# kubectl get nodes
    NAME         STATUS   ROLES                  AGE     VERSION
    cloucore1    Ready    control-plane,master   41m     v1.22.6
    edge-node1   Ready    <none>                 28m     v1.22.6
    edge-node2   Ready    <none>                 2m27s   v1.22.6
    [root@cloucore1 ~]#
    [root@cloucore1 ~]# kubectl get pods -n kube-system -o wide
    NAME                                READY   STATUS    RESTARTS       AGE   IP              NODE         NOMINATED NODE   READINESS GATES
    coredns-5b686c9994-g6nh2            1/1     Running   0              42m   10.233.0.2      cloucore1    <none>           <none>
    coredns-5b686c9994-pz7t2            1/1     Running   0              42m   10.233.0.7      cloucore1    <none>           <none>
    etcd-cloucore1                      1/1     Running   0              43m   192.168.0.245   cloucore1    <none>           <none>
    kube-apiserver-cloucore1            1/1     Running   0              43m   192.168.0.245   cloucore1    <none>           <none>
    kube-controller-manager-cloucore1   1/1     Running   0              43m   192.168.0.245   cloucore1    <none>           <none>
    kube-flannel-ds-76j2x               1/1     Running   0              42m   192.168.0.245   cloucore1    <none>           <none>
    kube-flannel-ds-dclmr               1/1     Running   0              29m   192.168.0.83    edge-node1   <none>           <none>
    kube-flannel-ds-hv726               1/1     Running   1 (102s ago)   4m    192.168.0.154   edge-node2   <none>           <none>
    kube-proxy-fbj2g                    1/1     Running   0              42m   192.168.0.245   cloucore1    <none>           <none>
    kube-scheduler-cloucore1            1/1     Running   0              43m   192.168.0.245   cloucore1    <none>           <none>
    
    [root@cloucore1 ~]# kubectl get pods -A -o wide
    NAMESPACE     NAME                                           READY   STATUS             RESTARTS       AGE     IP              NODE         NOMINATED NODE   READINESS GATES
    default       nginx-7cf7d6dbc8-r87hl                         1/1     Running            0              22m     10.233.1.3      edge-node1   <none>           <none>
    edge-system   application-grid-controller-5f89c65b48-tbhsn   1/1     Running            0              43m     10.233.0.3      cloucore1    <none>           <none>
    edge-system   application-grid-wrapper-node-7cmck            1/1     Running            2 (41s ago)    4m36s   192.168.0.154   edge-node2   <none>           <none>
    edge-system   application-grid-wrapper-node-j9wrx            1/1     Running            0              30m     192.168.0.83    edge-node1   <none>           <none>
    edge-system   edge-coredns-pjm27                             1/1     Running            2 (116s ago)   3m46s   192.168.0.154   edge-node2   <none>           <none>
    edge-system   edge-coredns-t4s9r                             1/1     Running            0              29m     192.168.0.83    edge-node1   <none>           <none>
    edge-system   edge-health-admission-8579779db9-td5fw         1/1     Running            0              43m     10.233.0.6      cloucore1    <none>           <none>
    edge-system   edge-health-mj24g                              1/1     Running            0              29m     10.233.1.2      edge-node1   <none>           <none>
    edge-system   edge-health-p9rks                              1/1     Running            1 (63s ago)    3m46s   10.233.2.2      edge-node2   <none>           <none>
    edge-system   edge-kube-proxy-5j7xs                          1/1     Running            4 (45s ago)    4m36s   192.168.0.154   edge-node2   <none>           <none>
    edge-system   edge-kube-proxy-zjvk6                          1/1     Running            0              30m     192.168.0.83    edge-node1   <none>           <none>
    edge-system   site-manager-7c884d6f98-l5sp2                  1/1     Running            0              43m     10.233.0.5      cloucore1    <none>           <none>
    edge-system   tunnel-cloud-6d4d79dc94-4zzrz                  1/1     Running            0              43m     10.233.0.4      cloucore1    <none>           <none>
    edge-system   tunnel-edge-b9phq                              1/1     Running            0              29m     192.168.0.83    edge-node1   <none>           <none>
    edge-system   tunnel-edge-n5t78                              1/1     Running            1 (6s ago)     3m46s   192.168.0.154   edge-node2   <none>           <none>
    kube-system   coredns-5b686c9994-g6nh2                       1/1     Running            0              43m     10.233.0.2      cloucore1    <none>           <none>
    kube-system   coredns-5b686c9994-pz7t2                       1/1     Running            0              43m     10.233.0.7      cloucore1    <none>           <none>
    kube-system   etcd-cloucore1                                 1/1     Running            0              43m     192.168.0.245   cloucore1    <none>           <none>
    kube-system   kube-apiserver-cloucore1                       1/1     Running            0              43m     192.168.0.245   cloucore1    <none>           <none>
    kube-system   kube-controller-manager-cloucore1              1/1     Running            0              43m     192.168.0.245   cloucore1    <none>           <none>
    kube-system   kube-flannel-ds-76j2x                          1/1     Running            0              43m     192.168.0.245   cloucore1    <none>           <none>
    kube-system   kube-flannel-ds-dclmr                          1/1     Running            0              30m     192.168.0.83    edge-node1   <none>           <none>
    kube-system   kube-flannel-ds-hv726                          1/1     Running            2 (34s ago)    4m36s   192.168.0.154   edge-node2   <none>           <none>
    kube-system   kube-proxy-fbj2g                               1/1     Running            0              43m     192.168.0.245   cloucore1    <none>           <none>
    kube-system   kube-scheduler-cloucore1                       1/1     Running            0              43m     192.168.0.245   cloucore1    <none>           <none>
    

    4. Deploy a service to test the cluster

    cat > nginx.yaml << EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx
            name: nginx
            imagePullPolicy: IfNotPresent
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      type: NodePort
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: nginx
    EOF
    
    kubectl apply -f nginx.yaml 
    kubectl get pods,svc
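Because the Service is of type NodePort, curling it from outside requires the randomly assigned node port. A sketch that extracts it from `kubectl get svc` output, demonstrated on a hypothetical sample line (the real port, and the `31234` below, are cluster-assigned):

```shell
# nodeport_of: read `kubectl get svc` lines on stdin and print the
# NodePort of the named service (the middle part of "80:31234/TCP").
nodeport_of() {
  awk -v svc="$1" '$1 == svc { split($5, p, "[:/]"); print p[2] }'
}

# Live usage:   kubectl get svc | nodeport_of nginx
# Hypothetical sample line for illustration:
sample='NAME    TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.244.0.99   <none>        80:31234/TCP   1m'

echo "$sample" | nodeport_of nginx   # → 31234
```

With the port in hand, `curl http://<node-ip>:<nodeport>` against any node should return the nginx welcome page.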
    
    [root@ecs-f30f edgeadm-linux-amd64-v0.9.0-k8s-1.22.6]# crictl images ls
    IMAGE                                                                TAG                 IMAGE ID            SIZE
    ccr.ccs.tencentyun.com/library/pause                                 latest              80d28bedfe5de       299kB
    superedge.tencentcloudcr.com/superedge/application-grid-controller   v0.9.0              9244dd60a6669       23.6MB
    superedge.tencentcloudcr.com/superedge/coredns                       v1.8.4              8d147537fb7d1       13.7MB
    superedge.tencentcloudcr.com/superedge/edge-health-admission         v0.9.0              782c112a5560b       21MB
    superedge.tencentcloudcr.com/superedge/etcd                          3.5.0-0             0048118155842       99.9MB
    superedge.tencentcloudcr.com/superedge/flannel                       v0.12.0-edge-2.0    6eedad6577f88       17.6MB
    superedge.tencentcloudcr.com/superedge/init-cni-plugins              v0.8.3-edge-1.2     f06424df9aa39       40.2MB
    superedge.tencentcloudcr.com/superedge/kube-apiserver                v1.22.6             d35b182b4200a       31.3MB
    superedge.tencentcloudcr.com/superedge/kube-controller-manager       v1.22.6             3618e4ab750f2       29.8MB
    superedge.tencentcloudcr.com/superedge/kube-proxy                    v1.22.6             63f3f385dcfed       35.9MB
    superedge.tencentcloudcr.com/superedge/kube-scheduler                v1.22.6             9fe44a6192d1e       15MB
    superedge.tencentcloudcr.com/superedge/pause                         3.5                 ed210e3e4a5ba       301kB
    superedge.tencentcloudcr.com/superedge/site-manager                  v0.9.0              bad8dad27fa67       23.8MB
    superedge.tencentcloudcr.com/superedge/tunnel                        v0.9.0              1a4fc5d77191c       46.9MB
    [root@ecs-f30f edgeadm-linux-amd64-v0.9.0-k8s-1.22.6]# crictl ps -a
    CONTAINER ID        IMAGE               CREATED             STATE               NAME                          ATTEMPT             POD ID
    06a71a6b83464       9244dd60a6669       21 minutes ago      Running             application-grid-controller   0                   184f58d0995d1
    612c083922fec       782c112a5560b       21 minutes ago      Running             edge-health-admission         0                   0071a4eab2b04
    33ea12e71d24b       1a4fc5d77191c       21 minutes ago      Running             tunnel-cloud                  0                   1fd7a92548043
    6f448dfd6ae80       bad8dad27fa67       21 minutes ago      Running             site-manager                  0                   ebbcbef4b797e
    577cc60f62b28       8d147537fb7d1       22 minutes ago      Running             coredns                       0                   3af22e5ab4685
    5befc10cbbe0e       8d147537fb7d1       22 minutes ago      Running             coredns                       0                   87cf88f687731
    79c2b3532aaf4       6eedad6577f88       22 minutes ago      Running             kube-flannel                  0                   884ff8a94a504
    cea40f8759924       6eedad6577f88       22 minutes ago      Exited              install-cni                   0                   884ff8a94a504
    770cd29fb6080       f06424df9aa39       22 minutes ago      Exited              install-cni-plugins           0                   884ff8a94a504
    ffdcb0ab5c500       63f3f385dcfed       22 minutes ago      Running             kube-proxy                    0                   58c6df602d3bb
    50ba204c45bd7       0048118155842       22 minutes ago      Running             etcd                          0                   d506abd4d442c
    8d106db7ae961       9fe44a6192d1e       22 minutes ago      Running             kube-scheduler                0                   0f45cd27528fb
    b2a8c8d79f96c       d35b182b4200a       22 minutes ago      Running             kube-apiserver                0                   d9de8eb9a4d0f
    445e6586ac892       3618e4ab750f2       22 minutes ago      Running             kube-controller-manager       0                   dc30b07e7e07e
    [root@ecs-f30f edgeadm-linux-amd64-v0.9.0-k8s-1.22.6]# 
    
    

    Note: the edge K8s cluster here was installed in one step with edgeadm, released by Tencent.

    The network plugin is flannel, and the container runtime is containerd.

    Reference: https://github.com/superedge/edgeadm

  • Original article: https://blog.csdn.net/qq_14910065/article/details/133798885