KubeSphere is a multi-tenant container platform built on top of Kubernetes. It is application-centric, provides full-stack IT automation and operations capabilities, and streamlines enterprise DevOps workflows. KubeSphere not only helps enterprises quickly stand up Kubernetes clusters in public clouds or private data centers, it also ships a feature-rich, wizard-style web console. In short, it is much more than a web management UI for Kubernetes: it integrates DevOps tooling (Jenkins), Prometheus, MinIO and other components. Compared with the Kubernetes Dashboard it offers far more functionality, but it is also noticeably harder to deploy.
Environment preparation before deployment:
Hardware: the requirements are fairly high. Recommended minimums per node are 8 GB of memory, roughly 50 GB of disk space, and 4 CPU cores.
Software: a healthy, working Kubernetes cluster. Here it is a single-master cluster of three servers: one master (which also runs workloads as a node) plus workers, three nodes in total.
The operating system is CentOS 7 with the kernel upgraded to 5.16.
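Before going further, a quick sanity check of each node can save time. A minimal sketch (the 4-core / 8 GB figures are the recommendations above; Linux only):

```shell
# Print the resources the sizing guidance above cares about.
echo "CPU cores : $(nproc)"
awk '/MemTotal/ {printf "Memory GiB: %.1f\n", $2/1048576}' /proc/meminfo
echo "Kernel    : $(uname -r)"
df -h / | awk 'NR==2 {print "Root disk : " $2 " total, " $4 " free"}'
```

Run this on every node and compare the output against the minimums before starting the install.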
Deployment
1.
Installing KubeSphere requires Metrics Server, and Metrics Server in turn requires the API aggregation feature to be enabled. The dependency chain is: API aggregation ---> Metrics Server ---> KubeSphere.
One more thing to note: a prerequisite for installing KubeSphere is a default StorageClass in the cluster.
(1) What API aggregation is for:
The API aggregation mechanism was introduced in Kubernetes 1.7. It allows user-defined extension APIs to be registered with kube-apiserver while still being accessed and operated on through the API Server's HTTP URLs. To implement this, Kubernetes added an API Aggregation Layer to the kube-apiserver service, which forwards requests for extension APIs to the user's backing service.
In short, the goal of the API aggregation mechanism is to provide centralized API discovery and secure proxying, so that developers' new APIs can be registered with the Kubernetes API Server dynamically and seamlessly for testing and use.
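Concretely, an extension API is hooked into the aggregation layer by creating an APIService object that names the group/version and the backing Service; the Metrics Server manifest applied later in this post ends with exactly such an object. A minimal sketch:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io   # naming convention: <version>.<group>
spec:
  group: metrics.k8s.io
  version: v1beta1
  groupPriorityMinimum: 100
  versionPriority: 100
  service:                       # the in-cluster Service the aggregator proxies to
    name: metrics-server
    namespace: kube-system
```

Once registered, requests to /apis/metrics.k8s.io/v1beta1/... hit kube-apiserver first and are proxied on to the metrics-server Service.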
(2)
What Metrics Server is for:
Metrics Server collects resource metrics (CPU and memory) from each node's kubelet and exposes them through the metrics.k8s.io aggregated API; it is what backs kubectl top and the Horizontal Pod Autoscaler (HPA).
(3)
Therefore, the current Kubernetes cluster first needs the API aggregation feature enabled, that is, AA mode (API Aggregation), and then needs Metrics Server deployed.
2.
Enable AA mode on the Kubernetes cluster. (The cluster here was deployed from binaries. For a kubeadm-deployed cluster, you only need to edit /etc/kubernetes/manifests/kube-apiserver.yaml and add the flag - --enable-aggregator-routing=true; the apiserver restarts automatically and aggregation takes effect.)
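For reference, on a kubeadm cluster the static-pod manifest edit mentioned above looks roughly like this (excerpt only; every other flag in your manifest stays as it is):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    # ... your existing flags ...
    - --enable-aggregator-routing=true   # route aggregated-API requests to endpoint IPs
```

The kubelet notices the manifest change and recreates the apiserver Pod on its own; no systemctl restart is needed.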
(1) Generate the certificates
- [root@master AA]# cat proxy-client-csr.json
- {
-   "CN": "aggregator",
-   "hosts": [],
-   "key": {
-     "algo": "rsa",
-     "size": 2048
-   },
-   "names": [
-     {
-       "C": "CN",
-       "ST": "Hangzhou",
-       "L": "Hangzhou",
-       "O": "system:masters",
-       "OU": "System"
-     }
-   ]
- }
The location fields in this file can be anything you like: Hangzhou, Beijing, Tianjin, whatever. From the directory containing the file, run the certificate generation command:
cfssl gencert -ca=/root/k8s/ca.pem -ca-key=/root/k8s/ca-key.pem -config=/root/k8s/ca-config.json -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
ca.pem and ca-key.pem are the cluster's root CA certificate and key; ca-config.json, like the files above, comes from the original cluster deployment, so fill in the paths to match your own setup. On success the command produces two pem files (proxy-client.pem and proxy-client-key.pem); copy them to wherever the cluster keeps its certificates, /opt/kubernetes/ssl in this example.
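If cfssl succeeded, the resulting certificate's subject carries CN=aggregator (matching --requestheader-allowed-names) and O=system:masters. As an illustration of what that subject looks like, here is a throwaway self-signed certificate built with plain openssl. This is for demonstration only: the real proxy-client certificate must of course be signed by the cluster CA via cfssl as above.

```shell
# Illustration only: create a throwaway self-signed cert with the same subject
# fields as proxy-client-csr.json (NOT a substitute for signing with the cluster CA).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-proxy-client-key.pem \
  -out /tmp/demo-proxy-client.pem -days 1 \
  -subj "/C=CN/ST=Hangzhou/L=Hangzhou/O=system:masters/OU=System/CN=aggregator"

# Inspect the subject: CN must read "aggregator", the name allowed by
# --requestheader-allowed-names in the apiserver flags below.
openssl x509 -noout -subject -in /tmp/demo-proxy-client.pem
```

You can run the same `openssl x509 -noout -subject` check against the real proxy-client.pem to confirm the CN before restarting the apiserver.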
(2)
Edit the configuration file of the kube-apiserver service and add the following flags:
- --proxy-client-cert-file=/opt/kubernetes/ssl/proxy-client.pem \
- --proxy-client-key-file=/opt/kubernetes/ssl/proxy-client-key.pem \
- --runtime-config=api/all=true \
- --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
- --requestheader-allowed-names=aggregator \
- --requestheader-extra-headers-prefix=X-Remote-Extra- \
- --requestheader-group-headers=X-Remote-Group \
- --requestheader-username-headers=X-Remote-User \
Fill in the certificate paths according to your own layout, and make sure the kube-proxy service is running normally. Then reload and restart the apiserver:
systemctl daemon-reload && systemctl restart kube-apiserver
Check the service status; output like this means it is working:
- [root@master ssl]# systemctl status kube-apiserver -l
- ● kube-apiserver.service - Kubernetes API Server
- Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
- Active: active (running) since Wed 2022-08-31 13:57:57 CST; 3h 40min ago
- Docs: https://github.com/kubernetes/kubernetes
- Main PID: 3412 (kube-apiserver)
- Memory: 942.9M
- CGroup: /system.slice/kube-apiserver.service
- └─3412 /opt/kubernetes/bin/kube-apiserver --v=2 --logtostderr=false --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168.217.16:2379,https://192.168.217.17:2379,https://192.168.217.18:2379 --bind-address=192.168.217.16 --secure-port=6443 --advertise-address=192.168.217.16 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth=true --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-32767 --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/opt/kubernetes/logs/k8s-audit.log --proxy-client-cert-file=/opt/kubernetes/ssl/proxy-client.pem --proxy-client-key-file=/opt/kubernetes/ssl/proxy-client-key.pem --runtime-config=api/all=true --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem --requestheader-allowed-names=aggregator --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User
After Metrics Server is installed in step 3 below, another check is to run kubectl top node: if it prints per-node usage like below, the aggregation layer is working; if the command fails, aggregation is not enabled:
- # kubectl top node
- NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
- develop-master-1 1980m 50% 7220Mi 46%
- develop-worker-1 2170m 55% 6803Mi 43%
- develop-worker-2 1239m 31% 6344Mi 40%
3.
Deploy Metrics Server: simply apply the manifest below.
- [root@master mnt]# cat components-metrics.yaml
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- labels:
- k8s-app: metrics-server
- name: metrics-server
- namespace: kube-system
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRole
- metadata:
- labels:
- k8s-app: metrics-server
- rbac.authorization.k8s.io/aggregate-to-admin: "true"
- rbac.authorization.k8s.io/aggregate-to-edit: "true"
- rbac.authorization.k8s.io/aggregate-to-view: "true"
- name: system:aggregated-metrics-reader
- rules:
- - apiGroups:
- - metrics.k8s.io
- resources:
- - pods
- - nodes
- verbs:
- - get
- - list
- - watch
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRole
- metadata:
- labels:
- k8s-app: metrics-server
- name: system:metrics-server
- rules:
- - apiGroups:
- - ""
- resources:
- - pods
- - nodes
- - nodes/stats
- - namespaces
- - configmaps
- verbs:
- - get
- - list
- - watch
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: RoleBinding
- metadata:
- labels:
- k8s-app: metrics-server
- name: metrics-server-auth-reader
- namespace: kube-system
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: Role
- name: extension-apiserver-authentication-reader
- subjects:
- - kind: ServiceAccount
- name: metrics-server
- namespace: kube-system
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRoleBinding
- metadata:
- labels:
- k8s-app: metrics-server
- name: metrics-server:system:auth-delegator
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: system:auth-delegator
- subjects:
- - kind: ServiceAccount
- name: metrics-server
- namespace: kube-system
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRoleBinding
- metadata:
- labels:
- k8s-app: metrics-server
- name: system:metrics-server
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: system:metrics-server
- subjects:
- - kind: ServiceAccount
- name: metrics-server
- namespace: kube-system
- ---
- apiVersion: v1
- kind: Service
- metadata:
- labels:
- k8s-app: metrics-server
- name: metrics-server
- namespace: kube-system
- spec:
- ports:
- - name: https
- port: 443
- protocol: TCP
- targetPort: https
- selector:
- k8s-app: metrics-server
- ---
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- labels:
- k8s-app: metrics-server
- name: metrics-server
- namespace: kube-system
- spec:
- selector:
- matchLabels:
- k8s-app: metrics-server
- strategy:
- rollingUpdate:
- maxUnavailable: 0
- template:
- metadata:
- labels:
- k8s-app: metrics-server
- spec:
- containers:
- - args:
- - --cert-dir=/tmp
- - --secure-port=4443
- - --kubelet-preferred-address-types=InternalIP # ExternalIP and Hostname removed from this list; already done here
- - --kubelet-use-node-status-port
- - --kubelet-insecure-tls # add this flag (skips verification of kubelet serving certificates)
- image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.4.1
- imagePullPolicy: IfNotPresent
- livenessProbe:
- failureThreshold: 3
- httpGet:
- path: /livez
- port: https
- scheme: HTTPS
- periodSeconds: 10
- name: metrics-server
- ports:
- - containerPort: 4443
- name: https
- protocol: TCP
- readinessProbe:
- failureThreshold: 3
- httpGet:
- path: /readyz
- port: https
- scheme: HTTPS
- periodSeconds: 10
- securityContext:
- readOnlyRootFilesystem: true
- runAsNonRoot: true
- runAsUser: 1000
- volumeMounts:
- - mountPath: /tmp
- name: tmp-dir
- nodeSelector:
- kubernetes.io/os: linux
- priorityClassName: system-cluster-critical
- serviceAccountName: metrics-server
- volumes:
- - emptyDir: {}
- name: tmp-dir
- ---
- apiVersion: apiregistration.k8s.io/v1
- kind: APIService
- metadata:
- labels:
- k8s-app: metrics-server
- name: v1beta1.metrics.k8s.io
- spec:
- group: metrics.k8s.io
- groupPriorityMinimum: 100
- insecureSkipTLSVerify: true
- service:
- name: metrics-server
- namespace: kube-system
- version: v1beta1
- versionPriority: 100
4. Deploy and set a default StorageClass
I won't repeat the deployment and default-class setup here, to keep this post from getting too long; see my earlier post: kubernetes学习之持久化存储StorageClass(4) (zsk_john's blog on CSDN).
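For completeness, marking an existing StorageClass as the default is just the standard annotation. A sketch, assuming an NFS-backed class named nfs-client (the name and provisioner are placeholders; adjust to your own setup):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # makes this the cluster default
provisioner: example.com/nfs-client   # placeholder; use your provisioner's actual name
reclaimPolicy: Delete
```

An existing class can also be patched in place with kubectl patch storageclass using that same annotation; verify with kubectl get sc, where the default class is shown with a "(default)" suffix.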
5.
Start the KubeSphere installation:
This first file needs no changes:
- [root@master media]# cat kubesphere-installer.yaml
- ---
- apiVersion: apiextensions.k8s.io/v1beta1
- kind: CustomResourceDefinition
- metadata:
- name: clusterconfigurations.installer.kubesphere.io
- spec:
- group: installer.kubesphere.io
- versions:
- - name: v1alpha1
- served: true
- storage: true
- scope: Namespaced
- names:
- plural: clusterconfigurations
- singular: clusterconfiguration
- kind: ClusterConfiguration
- shortNames:
- - cc
-
- ---
- apiVersion: v1
- kind: Namespace
- metadata:
- name: kubesphere-system
-
- ---
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: ks-installer
- namespace: kubesphere-system
-
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRole
- metadata:
- name: ks-installer
- rules:
- - apiGroups:
- - ""
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - apps
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - extensions
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - batch
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - rbac.authorization.k8s.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - apiregistration.k8s.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - apiextensions.k8s.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - tenant.kubesphere.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - certificates.k8s.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - devops.kubesphere.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - monitoring.coreos.com
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - logging.kubesphere.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - jaegertracing.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - storage.k8s.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - admissionregistration.k8s.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - policy
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - autoscaling
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - networking.istio.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - config.istio.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - iam.kubesphere.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - notification.kubesphere.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - auditing.kubesphere.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - events.kubesphere.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - core.kubefed.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - installer.kubesphere.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - storage.kubesphere.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - security.istio.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - monitoring.kiali.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - kiali.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - networking.k8s.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - kubeedge.kubesphere.io
- resources:
- - '*'
- verbs:
- - '*'
- - apiGroups:
- - types.kubefed.io
- resources:
- - '*'
- verbs:
- - '*'
-
- ---
- kind: ClusterRoleBinding
- apiVersion: rbac.authorization.k8s.io/v1
- metadata:
- name: ks-installer
- subjects:
- - kind: ServiceAccount
- name: ks-installer
- namespace: kubesphere-system
- roleRef:
- kind: ClusterRole
- name: ks-installer
- apiGroup: rbac.authorization.k8s.io
-
- ---
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: ks-installer
- namespace: kubesphere-system
- labels:
- app: ks-install
- spec:
- replicas: 1
- selector:
- matchLabels:
- app: ks-install
- template:
- metadata:
- labels:
- app: ks-install
- spec:
- serviceAccountName: ks-installer
- containers:
- - name: installer
- image: kubesphere/ks-installer:v3.1.1
- imagePullPolicy: "IfNotPresent"
- resources:
- limits:
- cpu: "1"
- memory: 1Gi
- requests:
- cpu: 20m
- memory: 100Mi
- volumeMounts:
- - mountPath: /etc/localtime
- name: host-time
- volumes:
- - hostPath:
- path: /etc/localtime
- type: ""
- name: host-time
The configuration file for cluster settings and feature selection:
- [root@master media]# cat cluster-configuration.yaml
- ---
- apiVersion: installer.kubesphere.io/v1alpha1
- kind: ClusterConfiguration
- metadata:
- name: ks-installer
- namespace: kubesphere-system
- labels:
- version: v3.1.1
- spec:
- persistence:
- storageClass: "" # keep the default empty value here, since the cluster already has a default StorageClass
- authentication:
- jwtSecret: ""
-
- local_registry: "" # Add your private registry address if it is needed, i.e. the registry used for offline installs if you have an internal one.
- etcd:
- monitoring: true # set to "true" to enable etcd monitoring
- endpointIps: 192.168.217.16 # change to your own master node IP address
- port: 2379 # etcd port.
- tlsEnable: true
- common:
- redis:
- enabled: true # set to "true" to enable Redis
- openldap:
- enabled: true # set to "true" to enable OpenLDAP (lightweight directory access protocol)
- minioVolumeSize: 20Gi # Minio PVC size.
- openldapVolumeSize: 2Gi # openldap PVC size.
- redisVolumSize: 2Gi # Redis PVC size.
- monitoring:
- endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
- es: # Storage backend for logging, events and auditing
-
- elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
- elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
- logMaxAge: 7 # Log retention time in built-in Elasticsearch. It is 7 days by default.
- elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
- basicAuth:
- enabled: false # leave this "false"; it only matters when connecting to an external Elasticsearch with a username and password, which is not needed here
- username: ""
- password: ""
- externalElasticsearchUrl: ""
- externalElasticsearchPort: ""
- console:
- enableMultiLogin: true # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
- port: 30880
- alerting: # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
- enabled: true # set to "true" to enable alerting
-
- auditing:
- enabled: true # set to "true" to enable auditing
- devops: # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
- enabled: true # set to "true" to enable DevOps
- jenkinsMemoryLim: 2Gi # Jenkins memory limit.
- jenkinsMemoryReq: 1500Mi # Jenkins memory request.
- jenkinsVolumeSize: 8Gi # Jenkins volume size.
- jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters.
- jenkinsJavaOpts_Xmx: 512m
- jenkinsJavaOpts_MaxRAM: 2g
- events: # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
- enabled: true # set to "true" to enable the cluster events feature
- ruler:
- enabled: true
- replicas: 2
- logging: # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
- enabled: true # set to "true" to enable logging
- logsidecar:
- enabled: true
- replicas: 2
- metrics_server: # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
- enabled: false # do not change this: we already installed Metrics Server above, and enabling it here would pull the official image, which usually fails to pull
- monitoring:
- storageClass: ""
- prometheusMemoryRequest: 400Mi # Prometheus request memory.
- prometheusVolumeSize: 20Gi # Prometheus PVC size.
-
- multicluster:
- clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the Host or Member Cluster.
- network:
- networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
- enabled: true # set to "true" to enable network policies
- ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
- type: none # if your CNI plugin is Calico, change this to "calico"; Flannel is used here, so keep the default
- topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
- type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
- openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
- store:
- enabled: true # set to "true" to enable the App Store
- servicemesh: # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
- enabled: true # set to "true" to enable service mesh (microservice governance)
- kubeedge: # Add edge nodes to your cluster and deploy workloads on edge nodes.
- enabled: false # left as-is; this is the edge-computing feature and we have no edge devices
- cloudCore:
- nodeSelector: {"node-role.kubernetes.io/worker": ""}
- tolerations: []
- cloudhubPort: "10000"
- cloudhubQuicPort: "10001"
- cloudhubHttpsPort: "10002"
- cloudstreamPort: "10003"
- tunnelPort: "10004"
- cloudHub:
- advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
- - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
- nodeLimit: "100"
- service:
- cloudhubNodePort: "30000"
- cloudhubQuicNodePort: "30001"
- cloudhubHttpsNodePort: "30002"
- cloudstreamNodePort: "30003"
- tunnelNodePort: "30004"
- edgeWatcher:
- nodeSelector: {"node-role.kubernetes.io/worker": ""}
- tolerations: []
- edgeWatcherAgent:
- nodeSelector: {"node-role.kubernetes.io/worker": ""}
- tolerations: []
Apply the two files above and wait; it takes quite a while, around an hour. Output like the following at the end means the installation succeeded:
- Start installing servicemesh
- **************************************************
- Waiting for all tasks to be completed ...
- task alerting status is successful (1/10)
- task multicluster status is successful (2/10)
- task network status is successful (3/10)
- task openpitrix status is successful (4/10)
- task auditing status is successful (5/10)
- task logging status is successful (6/10)
- task events status is successful (7/10)
- task devops status is successful (8/10)
- task monitoring status is successful (9/10)
- task servicemesh status is successful (10/10)
- **************************************************
- Collecting installation results ...
- #####################################################
- ### Welcome to KubeSphere! ###
- #####################################################
-
- Console: http://192.168.217.16:30880
- Account: admin
- Password: P@88w0rd
-
- NOTES:
- 1. After you log into the console, please check the
- monitoring status of service components in
- "Cluster Management". If any service is not
- ready, please wait patiently until all components
- are up and running.
- 2. Please change the default password after login.
-
- #####################################################
- https://kubesphere.io 2022-08-31 13:33:27
- #####################################################
Roughly the following gets installed:
- [root@master ~]# k get po -A
- NAMESPACE NAME READY STATUS RESTARTS AGE
- default nginx1 1/1 Running 4 3d19h
- istio-system istio-ingressgateway-76df6567c6-xf4k8 0/1 ContainerCreating 0 40m
- istio-system jaeger-operator-587999bcb9-rx25x 1/1 Running 0 22m
- istio-system kiali-operator-855cc4486d-w2gz9 0/1 ContainerCreating 0 22m
- kube-system coredns-76648cbfc9-lb75g 1/1 Running 4 3d16h
- kube-system kube-flannel-ds-mhkdq 1/1 Running 11 3d21h
- kube-system kube-flannel-ds-mlb7l 1/1 Running 9 3d21h
- kube-system kube-flannel-ds-sl4qv 1/1 Running 6 3d21h
- kube-system metrics-server-7d594964f5-nf6nz 1/1 Running 2 148m
- kube-system nfs-client-provisioner-9c9f9bd86-ffc4c 0/1 CrashLoopBackOff 8 107m
- kube-system snapshot-controller-0 1/1 Running 0 51m
- kubesphere-controls-system default-http-backend-857d7b6856-mdtnz 1/1 Running 0 49m
- kubesphere-controls-system kubectl-admin-db9fc54f5-cxwqq 1/1 Running 0 20m
- kubesphere-devops-system ks-jenkins-58fffc7489-h8cnz 0/1 Init:0/1 0 40m
- kubesphere-devops-system s2ioperator-0 1/1 Running 0 44m
- kubesphere-logging-system elasticsearch-logging-data-0 1/1 Running 0 50m
- kubesphere-logging-system elasticsearch-logging-data-1 1/1 Running 4 44m
- kubesphere-logging-system elasticsearch-logging-discovery-0 1/1 Running 1 50m
- kubesphere-logging-system fluentbit-operator-85cbc8c7b6-9dvn4 0/1 PodInitializing 0 49m
- kubesphere-logging-system ks-events-operator-8dbf7fccc-rkzvs 0/1 ContainerCreating 0 43m
- kubesphere-logging-system kube-auditing-operator-697658f8d-cdh79 1/1 Running 0 45m
- kubesphere-logging-system kube-auditing-webhook-deploy-9484b5ff-8jjdr 1/1 Running 0 29m
- kubesphere-logging-system kube-auditing-webhook-deploy-9484b5ff-t4qpn 1/1 Running 0 29m
- kubesphere-logging-system logsidecar-injector-deploy-74c66bfd85-jj4f6 0/2 ContainerCreating 0 44m
- kubesphere-logging-system logsidecar-injector-deploy-74c66bfd85-sks5d 2/2 Running 0 44m
- kubesphere-monitoring-system alertmanager-main-0 2/2 Running 0 36m
- kubesphere-monitoring-system alertmanager-main-1 2/2 Running 0 36m
- kubesphere-monitoring-system alertmanager-main-2 0/2 ContainerCreating 0 36m
- kubesphere-monitoring-system kube-state-metrics-d6645c6b-ttv7g 3/3 Running 0 39m
- kubesphere-monitoring-system node-exporter-4vz5v 2/2 Running 0 39m
- kubesphere-monitoring-system node-exporter-fhwwl 0/2 ContainerCreating 0 39m
- kubesphere-monitoring-system node-exporter-m84q5 2/2 Running 0 39m
- kubesphere-monitoring-system notification-manager-deployment-674dddcbd9-4xgs5 1/1 Running 0 34m
- kubesphere-monitoring-system notification-manager-deployment-674dddcbd9-9xfx7 1/1 Running 0 34m
- kubesphere-monitoring-system notification-manager-operator-7877c6574f-6nj58 2/2 Running 3 35m
- kubesphere-monitoring-system prometheus-k8s-0 0/3 ContainerCreating 0 38m
- kubesphere-monitoring-system prometheus-k8s-1 0/3 ContainerCreating 0 38m
- kubesphere-monitoring-system prometheus-operator-7d7684fc68-wkhlr 2/2 Running 0 39m
- kubesphere-monitoring-system thanos-ruler-kubesphere-0 2/2 Running 0 35m
- kubesphere-monitoring-system thanos-ruler-kubesphere-1 2/2 Running 0 35m
- kubesphere-system ks-apiserver-5dd69b75b9-qgc2v 1/1 Running 0 20m
- kubesphere-system ks-console-7494896c94-c4jkj 1/1 Running 0 48m
- kubesphere-system ks-controller-manager-579b4c7847-xzgtw 1/1 Running 1 20m
- kubesphere-system ks-installer-7568684bbc-jsshg 1/1 Running 0 54m
- kubesphere-system minio-597cb64f44-6wkdr 1/1 Running 0 50m
- kubesphere-system openldap-0 1/1 Running 1 50m
- kubesphere-system openpitrix-import-job-qmk6r 0/1 Completed 1 46m
- kubesphere-system redis-5566549765-k8kks 1/1 Running 0 51m
As you can see, KubeSphere ships a great many components: OpenLDAP, Prometheus, node-exporter, Elasticsearch, MinIO and so on. KubeSphere can therefore reasonably be regarded as a PaaS platform.