• KubeSphere installation and deployment (offline deployment supported)


    Preface

    KubeSphere is a multi-tenant container platform built on top of Kubernetes. It is application-centric, provides full-stack IT automation and operations capabilities, and simplifies enterprise DevOps workflows. KubeSphere not only helps enterprises quickly build Kubernetes clusters in public clouds or private data centers, it also provides a feature-rich, wizard-style web console. In short, it is very powerful: far more than just a web management tool for Kubernetes, it also integrates DevOps tooling, Jenkins, Prometheus, MinIO, and more. Compared with Dashboard it offers much more functionality, and accordingly it is also harder to deploy.

    Environment preparation before deployment:

    Hardware: the requirements are fairly high. At least 8 GB of memory is recommended, roughly 50 GB of disk space, and at least 4 CPU cores.

    Software: a working single-master Kubernetes cluster. It consists of three servers: one master (which also acts as a node) plus worker nodes, for three nodes in total.

    The operating system is CentOS 7, with the kernel upgraded to 5.16.

    Deployment

    I.

    Installing KubeSphere requires Metrics Server, and Metrics Server in turn requires the API aggregation feature to be enabled. The dependency chain is: API aggregation ---> Metrics Server ---> KubeSphere.

    One more prerequisite to note: installing KubeSphere requires a default StorageClass.
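    Before starting, you can quickly confirm whether a default StorageClass exists. A minimal check; the class name and provisioner in the sample output are purely illustrative:

    # The default class is marked "(default)" in the NAME column
    kubectl get storageclass
    # NAME                    PROVISIONER      RECLAIMPOLICY   AGE
    # nfs-storage (default)   fuseim.pri/ifs   Delete          1d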

    (1) What API aggregation is for:

    The API aggregation mechanism was introduced in Kubernetes 1.7. It allows user-defined extension APIs to be registered with kube-apiserver, so that the new APIs can still be accessed and operated through the API Server's HTTP URLs. To implement this, Kubernetes introduced an API Aggregation Layer into the kube-apiserver service, which forwards requests for extension APIs to the user's own service.

    The main design goals of the API aggregation mechanism are as follows.

    • Greater API extensibility: developers can write their own API Servers to publish their APIs without modifying any Kubernetes core code.
    • No waiting on the Kubernetes core team's lengthy review: developers can publish their APIs as separate API Servers, so cluster administrators can adopt new APIs without changing Kubernetes core code, and without waiting on the community's complex review process.
    • Support for experimental new APIs: new APIs can be developed in an independent aggregated API service without affecting existing system functionality.
    • Ensure new APIs follow Kubernetes conventions: without the aggregation mechanism, developers might be forced to roll out their own designs, which may not follow Kubernetes conventions.

    In short, the goal of the API aggregation mechanism is to provide centralized API discovery and a secure proxy, so that developers' new APIs can be registered with the Kubernetes API Server dynamically and seamlessly for testing and use.
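    On a running cluster you can observe the aggregation layer directly: every aggregated API is represented by an APIService object. A quick look, using nothing beyond standard kubectl:

    # APIServices marked "Local" are served by kube-apiserver itself; entries that
    # point at a Service (e.g. kube-system/metrics-server) go through the aggregation layer
    kubectl get apiservices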

    (2)

    What Metrics Server is for:

    • Kubernetes Metrics Server is the aggregator of the cluster's core monitoring data; kubeadm does not deploy it by default.
    • Metrics Server is consumed by Dashboard and other components. It is itself an extension APIServer and depends on the API Aggregator, so the API Aggregator must be enabled in kube-apiserver before installing Metrics Server.
    • The Metrics API only serves current measurements; it keeps no historical data.
    • The Metrics API URI is /apis/metrics.k8s.io/ and it is maintained under k8s.io/metrics.
    • metrics-server must be deployed before this API can be used; it gathers its data by calling the kubelet Summary API. (A raw query example follows this list.)
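    Once Metrics Server is running (step III below), the aggregated API can also be queried directly, which is a handy sanity check:

    # Fetch current node metrics straight from the aggregated Metrics API
    kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
    # Per-pod metrics live under the same group:
    kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"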

    (3)

    Therefore, the current Kubernetes cluster first needs API aggregation enabled, i.e. AA mode (API Aggregation) must be turned on, and then Metrics Server needs to be deployed.

    II.

    Enable AA mode on the Kubernetes cluster. (The cluster here was deployed from binaries. For a kubeadm-deployed cluster, just edit /etc/kubernetes/manifests/kube-apiserver.yaml and add the flag --enable-aggregator-routing=true; the apiserver restarts automatically and API aggregation takes effect. A sketch of that edit follows.)
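    For the kubeadm case, the change is a one-line addition to the static Pod manifest. A sketch, assuming the default kubeadm manifest path:

    # Edit the static Pod manifest; the kubelet recreates kube-apiserver when the file changes
    vi /etc/kubernetes/manifests/kube-apiserver.yaml
    #   under spec.containers[0].command, add:
    #     - --enable-aggregator-routing=true
    # Afterwards, confirm the flag is live on the running process:
    ps -ef | grep [k]ube-apiserver | tr ' ' '\n' | grep enable-aggregator-routing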

    (1) Create the certificates

    [root@master AA]# cat proxy-client-csr.json
    {
      "CN": "aggregator",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Hangzhou",
          "L": "Hangzhou",
          "O": "system:masters",
          "OU": "System"
        }
      ]
    }

    The location names in this file can be anything: Hangzhou, Beijing, Tianjin, whatever you like. In the directory containing this file, run the certificate generation command:

    cfssl gencert -ca=/root/k8s/ca.pem  -ca-key=/root/k8s/ca-key.pem  -config=/root/k8s/ca-config.json -profile=kubernetes  proxy-client-csr.json | cfssljson -bare proxy-client

    ca.pem and ca-key.pem are the cluster's root certificate and key; ca-config.json and the files above were all used when the cluster was deployed, so there is nothing special to say about them. Fill in the paths according to your own environment. When the command succeeds it generates two pem files; place them in the cluster's certificate directory, which in this example is /opt/kubernetes/ssl.
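    Optionally, verify the generated client certificate before placing it; plain openssl, nothing specific to this setup:

    # The CN must be "aggregator" to match --requestheader-allowed-names below
    openssl x509 -in proxy-client.pem -noout -subject -dates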

    (2)

    Modify the configuration file of the kube-apiserver service and add the following flags:

    --proxy-client-cert-file=/opt/kubernetes/ssl/proxy-client.pem \
    --proxy-client-key-file=/opt/kubernetes/ssl/proxy-client-key.pem \
    --runtime-config=api/all=true \
    --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
    --requestheader-allowed-names=aggregator \
    --requestheader-extra-headers-prefix=X-Remote-Extra- \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \

    Fill in the certificate paths according to your actual environment. Also make sure the kube-proxy service is running normally, then restart the apiserver:

    systemctl daemon-reload && systemctl restart kube-apiserver

    Check the service status; output like this means it is working:

    [root@master ssl]# systemctl status kube-apiserver -l
    ● kube-apiserver.service - Kubernetes API Server
       Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
       Active: active (running) since Wed 2022-08-31 13:57:57 CST; 3h 40min ago
         Docs: https://github.com/kubernetes/kubernetes
     Main PID: 3412 (kube-apiserver)
       Memory: 942.9M
       CGroup: /system.slice/kube-apiserver.service
               └─3412 /opt/kubernetes/bin/kube-apiserver --v=2 --logtostderr=false --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168.217.16:2379,https://192.168.217.17:2379,https://192.168.217.18:2379 --bind-address=192.168.217.16 --secure-port=6443 --advertise-address=192.168.217.16 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth=true --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-32767 --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/opt/kubernetes/logs/k8s-audit.log --proxy-client-cert-file=/opt/kubernetes/ssl/proxy-client.pem --proxy-client-key-file=/opt/kubernetes/ssl/proxy-client-key.pem --runtime-config=api/all=true --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem --requestheader-allowed-names=aggregator --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User

    After Metrics Server is installed in step III below, another way to verify is to run kubectl top node: seeing per-node output means API aggregation is working, and if the command fails, the setup is broken:

    # kubectl top node
    NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    develop-master-1   1980m        50%    7220Mi          46%
    develop-worker-1   2170m        55%    6803Mi          43%
    develop-worker-2   1239m        31%    6344Mi          40%

    III.

    Deploy Metrics Server by applying the manifest below; the apply and verification commands follow it.

    [root@master mnt]# cat components-metrics.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
        rbac.authorization.k8s.io/aggregate-to-admin: "true"
        rbac.authorization.k8s.io/aggregate-to-edit: "true"
        rbac.authorization.k8s.io/aggregate-to-view: "true"
      name: system:aggregated-metrics-reader
    rules:
    - apiGroups:
      - metrics.k8s.io
      resources:
      - pods
      - nodes
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server-auth-reader
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: extension-apiserver-authentication-reader
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server:system:auth-delegator
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:auth-delegator
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:metrics-server
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      ports:
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
      selector:
        k8s-app: metrics-server
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
      strategy:
        rollingUpdate:
          maxUnavailable: 0
      template:
        metadata:
          labels:
            k8s-app: metrics-server
        spec:
          containers:
          - args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-preferred-address-types=InternalIP  # ExternalIP and Hostname were removed from this list; already done here
            - --kubelet-use-node-status-port
            - --kubelet-insecure-tls  # add this startup flag
            image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.4.1
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 3
              httpGet:
                path: /livez
                port: https
                scheme: HTTPS
              periodSeconds: 10
            name: metrics-server
            ports:
            - containerPort: 4443
              name: https
              protocol: TCP
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /readyz
                port: https
                scheme: HTTPS
              periodSeconds: 10
            securityContext:
              readOnlyRootFilesystem: true
              runAsNonRoot: true
              runAsUser: 1000
            volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
          nodeSelector:
            kubernetes.io/os: linux
          priorityClassName: system-cluster-critical
          serviceAccountName: metrics-server
          volumes:
          - emptyDir: {}
            name: tmp-dir
    ---
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      labels:
        k8s-app: metrics-server
      name: v1beta1.metrics.k8s.io
    spec:
      group: metrics.k8s.io
      groupPriorityMinimum: 100
      insecureSkipTLSVerify: true
      service:
        name: metrics-server
        namespace: kube-system
      version: v1beta1
      versionPriority: 100
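    Applying and verifying the manifest is standard kubectl usage:

    kubectl apply -f components-metrics.yaml
    # Wait until the Deployment is ready
    kubectl -n kube-system rollout status deployment/metrics-server
    # If aggregation and metrics-server are both healthy, this now returns data
    kubectl top node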

    IV. Deploy and set a default StorageClass

    To keep this post from getting too long, I won't repeat how to deploy a StorageClass and mark it as the default; see my other post: kubernetes学习之持久化存储StorageClass(4)_zsk_john的博客-CSDN博客. The one-line patch that marks a class as default is sketched below for reference.
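    Marking an existing StorageClass as the default is a single annotation patch; replace the hypothetical class name nfs-storage with your own:

    kubectl patch storageclass nfs-storage \
      -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'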

    V.

    Start the KubeSphere installation:

    This file needs no changes:

    [root@master media]# cat kubesphere-installer.yaml
    ---
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: clusterconfigurations.installer.kubesphere.io
    spec:
      group: installer.kubesphere.io
      versions:
      - name: v1alpha1
        served: true
        storage: true
      scope: Namespaced
      names:
        plural: clusterconfigurations
        singular: clusterconfiguration
        kind: ClusterConfiguration
        shortNames:
        - cc
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: kubesphere-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: ks-installer
      namespace: kubesphere-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: ks-installer
    rules:
    - apiGroups:
      - ""
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - apps
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - extensions
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - batch
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - rbac.authorization.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - apiregistration.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - apiextensions.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - tenant.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - certificates.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - devops.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - monitoring.coreos.com
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - logging.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - jaegertracing.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - storage.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - admissionregistration.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - policy
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - autoscaling
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - networking.istio.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - config.istio.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - iam.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - notification.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - auditing.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - events.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - core.kubefed.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - installer.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - storage.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - security.istio.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - monitoring.kiali.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - kiali.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - networking.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - kubeedge.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - types.kubefed.io
      resources:
      - '*'
      verbs:
      - '*'
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: ks-installer
    subjects:
    - kind: ServiceAccount
      name: ks-installer
      namespace: kubesphere-system
    roleRef:
      kind: ClusterRole
      name: ks-installer
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ks-installer
      namespace: kubesphere-system
      labels:
        app: ks-install
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ks-install
      template:
        metadata:
          labels:
            app: ks-install
        spec:
          serviceAccountName: ks-installer
          containers:
          - name: installer
            image: kubesphere/ks-installer:v3.1.1
            imagePullPolicy: "IfNotPresent"
            resources:
              limits:
                cpu: "1"
                memory: 1Gi
              requests:
                cpu: 20m
                memory: 100Mi
            volumeMounts:
            - mountPath: /etc/localtime
              name: host-time
          volumes:
          - hostPath:
              path: /etc/localtime
              type: ""
            name: host-time
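    For the record, this file and the cluster configuration below both come from the ks-installer v3.1.1 release; if you have Internet access they can be fetched directly (the URLs below are the ones published in the KubeSphere documentation):

    wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
    wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml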

    The configuration file for cluster settings and feature selection:

    [root@master media]# cat cluster-configuration.yaml
    ---
    apiVersion: installer.kubesphere.io/v1alpha1
    kind: ClusterConfiguration
    metadata:
      name: ks-installer
      namespace: kubesphere-system
      labels:
        version: v3.1.1
    spec:
      persistence:
        storageClass: ""            # Keep the default (empty) value here, since a default StorageClass already exists.
      authentication:
        jwtSecret: ""
      local_registry: ""            # Add your private registry address if it is needed; for offline installs, point this at your internal registry.
      etcd:
        monitoring: true            # Set to "true" to enable etcd monitoring.
        endpointIps: 192.168.217.16 # Change to your own master node IP address.
        port: 2379                  # etcd port.
        tlsEnable: true
      common:
        redis:
          enabled: true             # Set to "true" to enable Redis.
        openldap:
          enabled: true             # Set to "true" to enable LDAP (lightweight directory access protocol).
        minioVolumeSize: 20Gi       # Minio PVC size.
        openldapVolumeSize: 2Gi     # openldap PVC size.
        redisVolumSize: 2Gi         # Redis PVC size.
        monitoring:
          endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
        es:                         # Storage backend for logging, events and auditing.
          elasticsearchMasterVolumeSize: 4Gi  # The volume size of Elasticsearch master nodes.
          elasticsearchDataVolumeSize: 20Gi   # The volume size of Elasticsearch data nodes.
          logMaxAge: 7              # Log retention time in built-in Elasticsearch. It is 7 days by default.
          elkPrefix: logstash       # The string making up index names. The index name will be formatted as ks--log.
          basicAuth:
            enabled: false          # Keep "false"; this only controls whether Elasticsearch requires a username/password after monitoring is enabled, which is not needed here.
            username: ""
            password: ""
          externalElasticsearchUrl: ""
          externalElasticsearchPort: ""
      console:
        enableMultiLogin: true      # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
        port: 30880
      alerting:                     # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
        enabled: true               # Set to "true" to enable alerting.
      auditing:
        enabled: true               # Set to "true" to enable auditing.
      devops:                       # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
        enabled: true               # Set to "true" to enable DevOps.
        jenkinsMemoryLim: 2Gi       # Jenkins memory limit.
        jenkinsMemoryReq: 1500Mi    # Jenkins memory request.
        jenkinsVolumeSize: 8Gi      # Jenkins volume size.
        jenkinsJavaOpts_Xms: 512m   # The following three fields are JVM parameters.
        jenkinsJavaOpts_Xmx: 512m
        jenkinsJavaOpts_MaxRAM: 2g
      events:                       # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
        enabled: true               # Set to "true" to enable cluster events.
        ruler:
          enabled: true
          replicas: 2
      logging:                      # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
        enabled: true               # Set to "true" to enable logging.
        logsidecar:
          enabled: true
          replicas: 2
      metrics_server:               # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
        enabled: false              # Keep "false": Metrics Server was already installed above; if enabled here, the upstream image would be pulled and the pull would fail.
      monitoring:
        storageClass: ""
        prometheusMemoryRequest: 400Mi  # Prometheus request memory.
        prometheusVolumeSize: 20Gi      # Prometheus PVC size.
      multicluster:
        clusterRole: none           # host | member | none # You can install a solo cluster, or specify it as the Host or Member Cluster.
      network:
        networkpolicy:              # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
          enabled: true             # Set to "true" to enable network policies.
        ippool:                     # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
          type: none                # Change to "calico" if your CNI is Calico; this cluster uses Flannel, so keep the default.
        topology:                   # Use Service Topology to view Service-to-Service communication based on Weave Scope.
          type: none                # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
      openpitrix:                   # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
        store:
          enabled: true             # Set to "true" to enable the App Store.
      servicemesh:                  # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
        enabled: true               # Set to "true" to enable service mesh (microservice governance).
      kubeedge:                     # Add edge nodes to your cluster and deploy workloads on edge nodes.
        enabled: false              # Keep "false": this is for edge computing, and there are no edge devices here.
        cloudCore:
          nodeSelector: {"node-role.kubernetes.io/worker": ""}
          tolerations: []
          cloudhubPort: "10000"
          cloudhubQuicPort: "10001"
          cloudhubHttpsPort: "10002"
          cloudstreamPort: "10003"
          tunnelPort: "10004"
          cloudHub:
            advertiseAddress:       # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
            - ""                    # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
            nodeLimit: "100"
          service:
            cloudhubNodePort: "30000"
            cloudhubQuicNodePort: "30001"
            cloudhubHttpsNodePort: "30002"
            cloudstreamNodePort: "30003"
            tunnelNodePort: "30004"
        edgeWatcher:
          nodeSelector: {"node-role.kubernetes.io/worker": ""}
          tolerations: []
          edgeWatcherAgent:
            nodeSelector: {"node-role.kubernetes.io/worker": ""}
            tolerations: []

    Apply the two files above and wait; the whole installation takes quite a while, roughly an hour.
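    A typical invocation; the log-follow command is the one the KubeSphere documentation suggests for watching installer progress:

    kubectl apply -f kubesphere-installer.yaml
    kubectl apply -f cluster-configuration.yaml
    # Follow the installer log until the "Welcome to KubeSphere" banner appears
    kubectl logs -n kubesphere-system \
      $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

    A successful installation ends with output like this: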

    Start installing servicemesh
    **************************************************
    Waiting for all tasks to be completed ...
    task alerting status is successful (1/10)
    task multicluster status is successful (2/10)
    task network status is successful (3/10)
    task openpitrix status is successful (4/10)
    task auditing status is successful (5/10)
    task logging status is successful (6/10)
    task events status is successful (7/10)
    task devops status is successful (8/10)
    task monitoring status is successful (9/10)
    task servicemesh status is successful (10/10)
    **************************************************
    Collecting installation results ...
    #####################################################
    ###              Welcome to KubeSphere!           ###
    #####################################################
    Console: http://192.168.217.16:30880
    Account: admin
    Password: P@88w0rd
    NOTES:
      1. After you log into the console, please check the
         monitoring status of service components in
         "Cluster Management". If any service is not
         ready, please wait patiently until all components
         are up and running.
      2. Please change the default password after login.
    #####################################################
    https://kubesphere.io             2022-08-31 13:33:27
    #####################################################
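    If you ever lose this banner, the console address can be recovered from the ks-console Service, which is exposed as NodePort 30880 as set in cluster-configuration.yaml:

    kubectl get svc ks-console -n kubesphere-system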

    Roughly the following gets installed:

    [root@master ~]# k get po -A
    NAMESPACE                      NAME                                                READY   STATUS              RESTARTS   AGE
    default                        nginx1                                              1/1     Running             4          3d19h
    istio-system                   istio-ingressgateway-76df6567c6-xf4k8               0/1     ContainerCreating   0          40m
    istio-system                   jaeger-operator-587999bcb9-rx25x                    1/1     Running             0          22m
    istio-system                   kiali-operator-855cc4486d-w2gz9                     0/1     ContainerCreating   0          22m
    kube-system                    coredns-76648cbfc9-lb75g                            1/1     Running             4          3d16h
    kube-system                    kube-flannel-ds-mhkdq                               1/1     Running             11         3d21h
    kube-system                    kube-flannel-ds-mlb7l                               1/1     Running             9          3d21h
    kube-system                    kube-flannel-ds-sl4qv                               1/1     Running             6          3d21h
    kube-system                    metrics-server-7d594964f5-nf6nz                     1/1     Running             2          148m
    kube-system                    nfs-client-provisioner-9c9f9bd86-ffc4c              0/1     CrashLoopBackOff    8          107m
    kube-system                    snapshot-controller-0                               1/1     Running             0          51m
    kubesphere-controls-system     default-http-backend-857d7b6856-mdtnz               1/1     Running             0          49m
    kubesphere-controls-system     kubectl-admin-db9fc54f5-cxwqq                       1/1     Running             0          20m
    kubesphere-devops-system       ks-jenkins-58fffc7489-h8cnz                         0/1     Init:0/1            0          40m
    kubesphere-devops-system       s2ioperator-0                                       1/1     Running             0          44m
    kubesphere-logging-system      elasticsearch-logging-data-0                        1/1     Running             0          50m
    kubesphere-logging-system      elasticsearch-logging-data-1                        1/1     Running             4          44m
    kubesphere-logging-system      elasticsearch-logging-discovery-0                   1/1     Running             1          50m
    kubesphere-logging-system      fluentbit-operator-85cbc8c7b6-9dvn4                 0/1     PodInitializing     0          49m
    kubesphere-logging-system      ks-events-operator-8dbf7fccc-rkzvs                  0/1     ContainerCreating   0          43m
    kubesphere-logging-system      kube-auditing-operator-697658f8d-cdh79              1/1     Running             0          45m
    kubesphere-logging-system      kube-auditing-webhook-deploy-9484b5ff-8jjdr         1/1     Running             0          29m
    kubesphere-logging-system      kube-auditing-webhook-deploy-9484b5ff-t4qpn         1/1     Running             0          29m
    kubesphere-logging-system      logsidecar-injector-deploy-74c66bfd85-jj4f6         0/2     ContainerCreating   0          44m
    kubesphere-logging-system      logsidecar-injector-deploy-74c66bfd85-sks5d         2/2     Running             0          44m
    kubesphere-monitoring-system   alertmanager-main-0                                 2/2     Running             0          36m
    kubesphere-monitoring-system   alertmanager-main-1                                 2/2     Running             0          36m
    kubesphere-monitoring-system   alertmanager-main-2                                 0/2     ContainerCreating   0          36m
    kubesphere-monitoring-system   kube-state-metrics-d6645c6b-ttv7g                   3/3     Running             0          39m
    kubesphere-monitoring-system   node-exporter-4vz5v                                 2/2     Running             0          39m
    kubesphere-monitoring-system   node-exporter-fhwwl                                 0/2     ContainerCreating   0          39m
    kubesphere-monitoring-system   node-exporter-m84q5                                 2/2     Running             0          39m
    kubesphere-monitoring-system   notification-manager-deployment-674dddcbd9-4xgs5    1/1     Running             0          34m
    kubesphere-monitoring-system   notification-manager-deployment-674dddcbd9-9xfx7    1/1     Running             0          34m
    kubesphere-monitoring-system   notification-manager-operator-7877c6574f-6nj58      2/2     Running             3          35m
    kubesphere-monitoring-system   prometheus-k8s-0                                    0/3     ContainerCreating   0          38m
    kubesphere-monitoring-system   prometheus-k8s-1                                    0/3     ContainerCreating   0          38m
    kubesphere-monitoring-system   prometheus-operator-7d7684fc68-wkhlr                2/2     Running             0          39m
    kubesphere-monitoring-system   thanos-ruler-kubesphere-0                           2/2     Running             0          35m
    kubesphere-monitoring-system   thanos-ruler-kubesphere-1                           2/2     Running             0          35m
    kubesphere-system              ks-apiserver-5dd69b75b9-qgc2v                       1/1     Running             0          20m
    kubesphere-system              ks-console-7494896c94-c4jkj                         1/1     Running             0          48m
    kubesphere-system              ks-controller-manager-579b4c7847-xzgtw              1/1     Running             1          20m
    kubesphere-system              ks-installer-7568684bbc-jsshg                       1/1     Running             0          54m
    kubesphere-system              minio-597cb64f44-6wkdr                              1/1     Running             0          50m
    kubesphere-system              openldap-0                                          1/1     Running             1          50m
    kubesphere-system              openpitrix-import-job-qmk6r                         0/1     Completed           1          46m
    kubesphere-system              redis-5566549765-k8kks                              1/1     Running             0          51m

    As you can see, KubeSphere ships a great many components: openldap, Prometheus, node-exporter, Elasticsearch, MinIO, and so on. KubeSphere can therefore be regarded as a full PaaS platform.

  • Original article: https://blog.csdn.net/alwaysbefine/article/details/126626789