• Quickly Deploying a Harbor Image Registry with Helm 3


    1. What is Harbor?

    Harbor is an open source, trusted, cloud-native docker registry project hosted by the CNCF. It can store, sign and scan image content. Harbor extends the docker registry project with commonly needed features such as security and identity/permission management, supports replicating images between registries, and provides more advanced security features such as user management, access control and activity auditing. Recent versions also add support for hosting Helm chart repositories.

    Harbor's most essential feature is adding a layer of access control on top of docker registry. To implement it, commands such as docker login, pull and push have to be intercepted so that permission checks run before the actual operation. docker registry v2 already provides the hooks for this: it integrates an authentication mechanism and delegates the actual authentication to an external service.

    Harbor official website: https://goharbor.io/

    Harbor GitHub releases: https://github.com/goharbor/harbor/releases

    We said above that docker registry v2 delegates authentication to an external service. How does that actually work? Let's take running docker login https://registry.qikqiak.com on the command line as an example and walk through the authentication flow:

    1. The docker client receives the docker login command entered by the user and translates it into a call to the engine API's RegistryLogin method;

    2. RegistryLogin in turn calls the auth method of the registry service over HTTP;

    3. Since we are using a v2 registry, the loginV2 method is used, which issues a request to the /v2/ endpoint; that endpoint authenticates the request;

    4. The request does not carry a token yet, so authentication fails with a 401 error, and the response headers tell the client where the authentication server can be reached;

    5. When the registry client receives this response, it sends an authentication request to the returned authentication server, with the encoded username and password in the request header;

    6. The authentication server takes the credentials from the header and validates them against the actual authentication backend, for example by querying a user database or checking against an LDAP service;

    7. On success, a token is returned. The client then repeats the request to the registry, this time carrying the token; the request passes verification and the status code is 200;

    8. The docker client receives the 200 status code, knows the operation succeeded, and prints Login Succeeded on the console. The curl sketch below walks through the same handshake.
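
    To make the handshake concrete, here is a minimal sketch of the same flow done by hand with curl. It assumes a Harbor instance reachable at registry.z0ukun.com with the default admin credentials; the realm and service values come back in the 401 response, so the exact strings on your installation may differ:

    # Step 1: hit /v2/ without a token; the registry answers 401 and the
    # WWW-Authenticate header points at the token service (the "realm")
    curl -ik https://registry.z0ukun.com/v2/
    # < HTTP/1.1 401 Unauthorized
    # < Www-Authenticate: Bearer realm="https://registry.z0ukun.com/service/token",service="harbor-registry"

    # Step 2: ask that realm for a token, passing the credentials as HTTP basic auth
    TOKEN=$(curl -sk -u admin:Harbor12345 \
      "https://registry.z0ukun.com/service/token?service=harbor-registry&scope=repository:library/busybox:pull" \
      | python3 -c 'import sys, json; print(json.load(sys.stdin)["token"])')

    # Step 3: repeat the original request with the Bearer token; this time it returns 200
    curl -sk -o /dev/null -w "%{http_code}\n" -H "Authorization: Bearer $TOKEN" https://registry.z0ukun.com/v2/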

    2. Installing Harbor

    Harbor supports several installation methods. The source tree ships an installation script (make/install.sh) that runs the Harbor components with docker-compose. Here we install Harbor into a Kubernetes cluster instead. If you are very familiar with how the Harbor components fit together, you can also write the resource manifests yourself; and if the basic configuration below does not meet your needs, more advanced customization is possible: all of Harbor's configuration templates live under the /harbor/make/ directory and can be modified as needed. In this article, however, we use a simpler approach: Helm. Harbor officially provides a Helm chart, so installation is straightforward. First, clone the Harbor chart onto the cluster where it will be installed:

    [root@kubernetes-01 ~]# git clone https://gitee.com/z0ukun/harbor-helm.git
    Cloning into 'harbor-helm'...
    remote: Enumerating objects: 4109, done.
    remote: Counting objects: 100% (4109/4109), done.
    remote: Compressing objects: 100% (1446/1446), done.
    remote: Total 4109 (delta 2642), reused 4109 (delta 2642), pack-reused 0
    Receiving objects: 100% (4109/4109), 15.44 MiB | 2.33 MiB/s, done.
    Resolving deltas: 100% (2642/2642), done.
    [root@kubernetes-01 ~]# cd harbor-helm/
    [root@kubernetes-01 harbor-helm]# ls
    cert  Chart.yaml  conf  CONTRIBUTING.md  docs  LICENSE  README.md  templates  test  values.yaml
    [root@kubernetes-01 harbor-helm]#

    Note: we clone from a Gitee mirror here to speed up the download, and we install the latest version; if you want a specific branch or release, switch to it with git checkout.
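
    For example, pinning the chart to a specific release instead of the default branch might look like this (the tag name below is only an illustration; list what actually exists in the mirror first):

    [root@kubernetes-01 harbor-helm]# git tag --list        # or: git branch -a
    [root@kubernetes-01 harbor-helm]# git checkout v1.7.1   # hypothetical tag name, pick one from the list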

    2.1 The Helm Chart package

    The most important part of installing a Helm chart is of course the values.yaml file; we change the configuration by overriding the properties in that file. Below is a walkthrough of the commonly used settings in the Harbor Helm chart's values.yaml:

    expose:
      # How the Harbor service is exposed: ingress, clusterIP, nodePort or loadBalancer
      type: ingress
      # Whether to enable TLS
      tls:
        # Note: if the type is "ingress" and TLS is disabled, the port must be included in the image address when pulling/pushing images; see https://github.com/goharbor/harbor/issues/5291 for details
        enabled: true
        certSource: auto
        auto:
          # The common name used when generating the certificate; only needed when the type is clusterIP or nodePort and secretName is empty
          commonName: ""
        secret:
          # Fill in the name of the secret if you want to use your own TLS certificate and private key; the secret must contain files named tls.crt and tls.key. If not set, the certificate and key are generated automatically
          secretName: ""
          # By default the Notary service uses the same certificate and key as above; fill this in to use a separate one. Only needed when the type is "ingress"
          notarySecretName: ""
      ingress:
        hosts:
          core: core.harbor.domain
          notary: notary.harbor.domain
        controller: default
        annotations:
          ingress.kubernetes.io/ssl-redirect: "true"
          ingress.kubernetes.io/proxy-body-size: "0"
          nginx.ingress.kubernetes.io/ssl-redirect: "true"
          nginx.ingress.kubernetes.io/proxy-body-size: "0"
        notary:
          annotations: {}
        harbor:
          annotations: {}
      # The name of the ClusterIP service
      clusterIP:
        name: harbor
        annotations: {}
        ports:
          httpPort: 80
          httpsPort: 443
          # The port the Notary service listens on; only valid when notary.enabled is true
          notaryPort: 4443
      # The name of the NodePort service
      nodePort:
        name: harbor
        ports:
          http:
            port: 80
            nodePort: 30002
          https:
            port: 443
            nodePort: 30003
          notary:
            port: 4443
            nodePort: 30004
      # The name of the LoadBalancer service
      loadBalancer:
        name: harbor
        # Set the IP if the LoadBalancer supports assigning one
        IP: ""
        ports:
          httpPort: 80
          httpsPort: 443
          notaryPort: 4443
        annotations: {}
        sourceRanges: []
    # The external URL of the Harbor core service, used to:
    # 1) populate the docker/helm commands shown on the portal
    # 2) populate the token service URL returned to docker/notary clients
    # Format: protocol://domain[:port]
    # 1) if expose.type=ingress, "domain" is the value of expose.ingress.hosts.core
    # 2) if expose.type=clusterIP, "domain" is the value of expose.clusterIP.name
    # 3) if expose.type=nodePort, "domain" is the IP address of a Kubernetes node
    # If Harbor is deployed behind a proxy, set this to the URL of the proxy
    externalURL: https://core.harbor.domain
    internalTLS:
      enabled: false
      certSource: "auto"
      trustCa: ""
      core:
        secretName: ""
        crt: ""
        key: ""
      jobservice:
        secretName: ""
        crt: ""
        key: ""
      registry:
        secretName: ""
        crt: ""
        key: ""
      portal:
        secretName: ""
        crt: ""
        key: ""
      chartmuseum:
        secretName: ""
        crt: ""
        key: ""
      # Certificate settings for the trivy image scanner
      trivy:
        secretName: ""
        crt: ""
        key: ""
    # Data persistence is enabled by default; dynamic volume provisioning in the cluster requires a StorageClass object
    # If you already have persistent volumes to use, specify your StorageClass via "storageClass" or set "existingClaim"
    # For storing docker images and Helm charts you can also use "azure", "gcs", "s3", "swift" or "oss"; configure them directly in the "imageChartStorage" section
    persistence:
      enabled: true
      # Set to "keep" to avoid removing the PVCs during a helm delete; leave empty so the PVCs are deleted after the chart is removed
      resourcePolicy: "keep"
      persistentVolumeClaim:
        registry:
          existingClaim: ""
          # Specify a "storageClass", or leave empty to use the default StorageClass; set to "-" to disable dynamic provisioning
          storageClass: ""
          subPath: ""
          accessMode: ReadWriteOnce
          size: 5Gi
        chartmuseum:
          existingClaim: ""
          storageClass: ""
          subPath: ""
          accessMode: ReadWriteOnce
          size: 5Gi
        jobservice:
          existingClaim: ""
          storageClass: ""
          subPath: ""
          accessMode: ReadWriteOnce
          size: 1Gi
        # Ignored when an external database service is used
        database:
          existingClaim: ""
          storageClass: ""
          subPath: ""
          accessMode: ReadWriteOnce
          size: 1Gi
        # Ignored when an external Redis service is used
        redis:
          existingClaim: ""
          storageClass: ""
          subPath: ""
          accessMode: ReadWriteOnce
          size: 1Gi
        trivy:
          existingClaim: ""
          storageClass: ""
          subPath: ""
          accessMode: ReadWriteOnce
          size: 5Gi
      # Define the storage backend used for images and charts; see https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
      imageChartStorage:
        # Whether to disable redirects for image and chart storage; disable it for backends that do not support redirects (for example an `s3` backend backed by minio) by setting disableredirect=true. See https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect
        disableredirect: false
        # The storage type: "filesystem", "azure", "gcs", "s3", "swift" or "oss"; fill in the matching section below.
        # It must be "filesystem" if you want to use persistent volumes
        type: filesystem
        filesystem:
          rootdirectory: /storage
          #maxthreads: 100
        azure:
          accountname: accountname
          accountkey: base64encodedaccountkey
          container: containername
          #realm: core.windows.net
        gcs:
          bucket: bucketname
          # The base64 encoded json file which contains the key
          encodedkey: base64-encoded-json-key-file
          #rootdirectory: /gcs/object/name/prefix
          #chunksize: "5242880"
        s3:
          region: us-west-1
          bucket: bucketname
          #accesskey: awsaccesskey
          #secretkey: awssecretkey
          #regionendpoint: http://myobjects.local
          #encrypt: false
          #keyid: mykeyid
          #secure: true
          #skipverify: false
          #v4auth: true
          #chunksize: "5242880"
          #rootdirectory: /s3/object/name/prefix
          #storageclass: STANDARD
          #multipartcopychunksize: "33554432"
          #multipartcopymaxconcurrency: 100
          #multipartcopythresholdsize: "33554432"
        swift:
          authurl: https://storage.myprovider.com/v3/auth
          username: username
          password: password
          container: containername
          #region: fr
          #tenant: tenantname
          #tenantid: tenantid
          #domain: domainname
          #domainid: domainid
          #trustid: trustid
          #insecureskipverify: false
          #chunksize: 5M
          #prefix:
          #secretkey: secretkey
          #accesskey: accesskey
          #authversion: 3
          #endpointtype: public
          #tempurlcontainerkey: false
          #tempurlmethods:
        oss:
          accesskeyid: accesskeyid
          accesskeysecret: accesskeysecret
          region: regionname
          bucket: bucketname
          #endpoint: endpoint
          #internal: false
          #encrypt: false
          #secure: true
          #chunksize: 10M
          #rootdirectory: rootdirectory
    imagePullPolicy: IfNotPresent
    imagePullSecrets:
    #  - name: docker-registry-secret
    #  - name: internal-registry-secret
    # The update strategy for deployments with persistent volumes (jobservice, registry
    # and chartmuseum): "RollingUpdate" or "Recreate"
    # Set it as "Recreate" when "RWM" for volumes isn't supported
    updateStrategy:
      type: RollingUpdate
    logLevel: info
    # The initial password of the Harbor admin user; change it via the portal after Harbor starts
    harborAdminPassword: "Harbor12345"
    caSecretName: ""
    # The secret key used for encryption; must be a 16-character string
    secretKey: "not-a-secure-key"
    proxy:
      httpProxy:
      httpsProxy:
      noProxy: 127.0.0.1,localhost,.local,.internal
      components:
        - core
        - jobservice
        - trivy
    # If the service is exposed via "ingress", the Nginx below will not be used
    nginx:
      image:
        repository: goharbor/nginx-photon
        tag: dev
      serviceAccountName: ""
      automountServiceAccountToken: false
      replicas: 1
      # resources:
      #   requests:
      #     memory: 256Mi
      #     cpu: 100m
      nodeSelector: {}
      tolerations: []
      affinity: {}
      # Additional annotations to add to the deployed pods
      podAnnotations: {}
      priorityClassName:
    portal:
      image:
        repository: goharbor/harbor-portal
        tag: dev
      serviceAccountName: ""
      automountServiceAccountToken: false
      replicas: 1
      # resources:
      #   requests:
      #     memory: 256Mi
      #     cpu: 100m
      nodeSelector: {}
      tolerations: []
      affinity: {}
      podAnnotations: {}
      priorityClassName:
    core:
      image:
        repository: goharbor/harbor-core
        tag: dev
      serviceAccountName: ""
      automountServiceAccountToken: false
      replicas: 1
      startupProbe:
        enabled: true
        initialDelaySeconds: 10
      # resources:
      #   requests:
      #     memory: 256Mi
      #     cpu: 100m
      nodeSelector: {}
      tolerations: []
      affinity: {}
      podAnnotations: {}
      secret: ""
      secretName: ""
      xsrfKey: ""
      priorityClassName:
    jobservice:
      image:
        repository: goharbor/harbor-jobservice
        tag: dev
      replicas: 1
      serviceAccountName: ""
      automountServiceAccountToken: false
      maxJobWorkers: 10
      # The loggers for jobs: "file", "database" or "stdout"
      jobLoggers:
        - file
        # - database
        # - stdout
      # resources:
      #   requests:
      #     memory: 256Mi
      #     cpu: 100m
      nodeSelector: {}
      tolerations: []
      affinity: {}
      podAnnotations: {}
      secret: ""
      priorityClassName:
    registry:
      serviceAccountName: ""
      automountServiceAccountToken: false
      registry:
        image:
          repository: goharbor/registry-photon
          tag: dev
        # resources:
        #   requests:
        #     memory: 256Mi
        #     cpu: 100m
      controller:
        image:
          repository: goharbor/harbor-registryctl
          tag: dev
        # resources:
        #   requests:
        #     memory: 256Mi
        #     cpu: 100m
      replicas: 1
      nodeSelector: {}
      tolerations: []
      affinity: {}
      podAnnotations: {}
      priorityClassName:
      secret: ""
      relativeurls: false
      credentials:
        username: "harbor_registry_user"
        password: "harbor_registry_password"
        # e.g. "htpasswd -nbBC10 $username $password"
        htpasswd: "harbor_registry_user:$2y$10$9L4Tc0DJbFFMB6RdSCunrOpTHdwhid4ktBJmLD00bYgqkkGOvll3m"
      middleware:
        enabled: false
        type: cloudFront
        cloudFront:
          baseurl: example.cloudfront.net
          keypairid: KEYPAIRID
          duration: 3000s
          ipfilteredby: none
          privateKeySecret: "my-secret"
    chartmuseum:
      enabled: true
      serviceAccountName: ""
      automountServiceAccountToken: false
      absoluteUrl: false
      image:
        repository: goharbor/chartmuseum-photon
        tag: dev
      replicas: 1
      # resources:
      #   requests:
      #     memory: 256Mi
      #     cpu: 100m
      nodeSelector: {}
      tolerations: []
      affinity: {}
      podAnnotations: {}
      priorityClassName:
    trivy:
      enabled: true
      image:
        repository: goharbor/trivy-adapter-photon
        tag: dev
      serviceAccountName: ""
      automountServiceAccountToken: false
      replicas: 1
      debugMode: false
      vulnType: "os,library"
      severity: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
      ignoreUnfixed: false
      insecure: false
      gitHubToken: ""
      skipUpdate: false
      resources:
        requests:
          cpu: 200m
          memory: 512Mi
        limits:
          cpu: 1
          memory: 1Gi
      nodeSelector: {}
      tolerations: []
      affinity: {}
      podAnnotations: {}
      priorityClassName:
    notary:
      enabled: true
      server:
        serviceAccountName: ""
        automountServiceAccountToken: false
        image:
          repository: goharbor/notary-server-photon
          tag: dev
        replicas: 1
        # resources:
        #   requests:
        #     memory: 256Mi
        #     cpu: 100m
        nodeSelector: {}
        tolerations: []
        affinity: {}
        podAnnotations: {}
        priorityClassName:
      signer:
        serviceAccountName: ""
        automountServiceAccountToken: false
        image:
          repository: goharbor/notary-signer-photon
          tag: dev
        replicas: 1
        # resources:
        #   requests:
        #     memory: 256Mi
        #     cpu: 100m
        nodeSelector: {}
        tolerations: []
        affinity: {}
        podAnnotations: {}
        priorityClassName:
      secretName: ""
    database:
      # To use an external database, set type to "external" and fill in the connection details in the "external" section
      type: internal
      internal:
        serviceAccountName: ""
        automountServiceAccountToken: false
        image:
          repository: goharbor/harbor-db
          tag: dev
        # The initial superuser password of the internal database
        password: "changeit"
        shmSizeLimit: 512Mi
        # resources:
        #   requests:
        #     memory: 256Mi
        #     cpu: 100m
        nodeSelector: {}
        tolerations: []
        affinity: {}
        priorityClassName:
        initContainer:
          migrator: {}
          # resources:
          #   requests:
          #     memory: 128Mi
          #     cpu: 100m
          permissions: {}
          # resources:
          #   requests:
          #     memory: 128Mi
          #     cpu: 100m
      external:
        host: "192.168.0.1"
        port: "5432"
        username: "user"
        password: "password"
        coreDatabase: "registry"
        notaryServerDatabase: "notary_server"
        notarySignerDatabase: "notary_signer"
        sslmode: "disable"
      maxIdleConns: 100
      maxOpenConns: 900
      podAnnotations: {}
    redis:
      # To use an external Redis service, set type to "external" and fill in the connection details in the "external" section
      type: internal
      internal:
        serviceAccountName: ""
        automountServiceAccountToken: false
        image:
          repository: goharbor/redis-photon
          tag: dev
        # resources:
        #   requests:
        #     memory: 256Mi
        #     cpu: 100m
        nodeSelector: {}
        tolerations: []
        affinity: {}
        priorityClassName:
      external:
        addr: "192.168.0.2:6379"
        sentinelMasterSet: ""
        # coreDatabaseIndex must be 0
        coreDatabaseIndex: "0"
        jobserviceDatabaseIndex: "1"
        registryDatabaseIndex: "2"
        chartmuseumDatabaseIndex: "3"
        trivyAdapterIndex: "5"
        password: ""
      podAnnotations: {}
    exporter:
      replicas: 1
      # resources:
      #   requests:
      #     memory: 256Mi
      #     cpu: 100m
      podAnnotations: {}
      serviceAccountName: ""
      automountServiceAccountToken: false
      image:
        repository: goharbor/harbor-exporter
        tag: dev
      nodeSelector: {}
      tolerations: []
      affinity: {}
      cacheDuration: 23
      cacheCleanInterval: 14400
      priorityClassName:
    metrics:
      enabled: false
      core:
        path: /metrics
        port: 8001
      registry:
        path: /metrics
        port: 8001
      jobservice:
        path: /metrics
        port: 8001
      exporter:
        path: /metrics
        port: 8001
      serviceMonitor:
        enabled: false
        additionalLabels: {}
        interval: ""
        metricRelabelings: []
        # - action: keep
        #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
        #   sourceLabels: [__name__]
        # Relabel configs to apply to samples before ingestion.
        relabelings: []
        # - sourceLabels: [__meta_kubernetes_pod_node_name]
        #   separator: ;
        #   regex: ^(.*)$
        #   targetLabel: nodename
        #   replacement: $1
        #   action: replace
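
    Rather than reading the annotated listing above, you can also dump the chart's full default values straight from Helm once the official chart repository has been added (the repo add step is shown in section 2.5); pinning the version is optional:

    # Write the chart's default values.yaml to a local file for reference
    helm show values harbor/harbor --version 1.7.1 > harbor-default-values.yaml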

    2.2 Defining the values file

    With the explanation above, we can override the defaults according to our own needs. For example, we create a new values file of our own with the following content:

    [root@kubernetes01 harbor-helm]# cat harbor-z0ukun-values.yaml
    # Ingress configuration for exposing Harbor
    expose:
      type: ingress
      tls:
        # Whether to enable HTTPS; set this to false if you do not want HTTPS
        enabled: true
        # Use the "secret" certificate source and reference the secret created below (optional; with this mode the certificate has to be created by hand)
        certSource: secret
        secret:
          secretName: "register-z0ukun-tls"
          notarySecretName: "register-z0ukun-tls"
      ingress:
        hosts:
          # The domain names used to access Harbor
          core: registry.z0ukun.com
          notary: notary.z0ukun.com
        controller: default
        annotations:
          ingress.kubernetes.io/ssl-redirect: "true"
          ingress.kubernetes.io/proxy-body-size: "0"
          # For a traefik ingress, use the following annotations
          kubernetes.io/ingress.class: "traefik"
          traefik.ingress.kubernetes.io/router.tls: 'true'
          traefik.ingress.kubernetes.io/router.entrypoints: websecure
          # For an nginx ingress, use these instead (optional)
          # nginx.ingress.kubernetes.io/ssl-redirect: "true"
          # nginx.ingress.kubernetes.io/proxy-body-size: "0"
      # If you do not want to use Ingress, configure the parameters below to expose Harbor as a NodePort instead
      #clusterIP:
      #  name: harbor
      #  ports:
      #    httpPort: 80
      #    httpsPort: 443
      #    notaryPort: 4443
      #nodePort:
      #  name: harbor
      #  ports:
      #    http:
      #      port: 80
      #      nodePort: 30011
      #    https:
      #      port: 443
      #      nodePort: 30012
      #    notary:
      #      port: 4443
      #      nodePort: 30013
    # If Harbor is deployed behind a proxy, set this to the URL of the proxy; normally it should match the Ingress host configured above
    externalURL: https://registry.z0ukun.com
    # Persistence for the Harbor components; set each component's existingClaim to the name of a pre-created PVC if you have one
    persistence:
      enabled: true
      # Retention policy: whether the storage data is kept after the PVC/PV is deleted
      resourcePolicy: "keep"
      persistentVolumeClaim:
        registry:
          existingClaim: ""
          storageClass: "harbor-rook-ceph-block"
          subPath: ""
          accessMode: ReadWriteOnce
          size: 5Gi
        chartmuseum:
          existingClaim: ""
          storageClass: "harbor-rook-ceph-block"
          subPath: ""
          accessMode: ReadWriteOnce
          size: 5Gi
        jobservice:
          existingClaim: ""
          storageClass: "harbor-rook-ceph-block"
          subPath: ""
          accessMode: ReadWriteOnce
          size: 1Gi
        database:
          existingClaim: ""
          storageClass: "harbor-rook-ceph-block"
          subPath: ""
          accessMode: ReadWriteOnce
          size: 1Gi
        redis:
          existingClaim: ""
          storageClass: "harbor-rook-ceph-block"
          subPath: ""
          accessMode: ReadWriteOnce
          size: 1Gi
        trivy:
          existingClaim: ""
          storageClass: "harbor-rook-ceph-block"
          subPath: ""
          accessMode: ReadWriteOnce
          size: 5Gi
    # Password for the default admin user (Harbor12345 if not set)
    harborAdminPassword: "z0uKun123456"
    # Log level
    logLevel: info
    [root@kubernetes01 harbor-helm]#
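
    Before installing, it can help to render the chart locally with these overrides to catch indentation or key-name mistakes early. A quick client-side check with helm template might look like this (using the chart reference and file name from the install step later on):

    # Render all manifests locally; a non-zero exit code means the values file has a problem
    helm template harbor harbor/harbor --version 1.7.1 -f harbor-values.yaml -n harbor > /dev/null && echo "values render cleanly"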

    2.3 Mounting a Custom Certificate via Secret (Optional)

    If we choose the secret certificate source (mounting our own certificate), we need to create the certificate and the corresponding secret by hand. The commands are as follows:

    [root@kubernetes01 ca]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout ca.key -x509 -days 3650 -out ca.crt
    Generating a 4096 bit RSA private key
    .........................................................................++
    .......++
    writing new private key to 'ca.key'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [XX]:
    State or Province Name (full name) []:
    Locality Name (eg, city) [Default City]:
    Organization Name (eg, company) [Default Company Ltd]:
    Organizational Unit Name (eg, section) []:
    Common Name (eg, your name or your server's hostname) []:
    Email Address []:
    [root@kubernetes01 ca]#
    [root@kubernetes01 ca]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout tls.key -out tls.csr
    Generating a 4096 bit RSA private key
    .........................
    ...++
    ...............................................................++
    writing new private key to 'tls.key'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [XX]:State or Province Name (full name) []:
    Locality Name (eg, city) [Default City]:
    Organization Name (eg, company) [Default Company Ltd]:
    Organizational Unit Name (eg, section) []:
    Common Name (eg, your name or your server's hostname) []:
    Email Address []:
    Please enter the following 'extra' attributes
    to be sent with your certificate request
    A challenge password []:
    An optional company name []:
    [root@kubernetes01 ca]#
    [root@kubernetes01 ca]# openssl x509 -req -days 3650 -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt
    Signature ok
    subject=/C=XX/L=Default City/O=Default Company Ltd
    Getting CA Private Key
    [root@kubernetes01 ca]# kubectl create secret generic harbor-z0ukun-tls --from-file=tls.crt --from-file=tls.key --from-file=ca.crt -n harbor
    secret/harbor-z0ukun-tls created
    [root@kubernetes01 ca]# kubectl get secret -A
    NAMESPACE   NAME                TYPE     DATA   AGE
    harbor      harbor-z0ukun-tls   Opaque   3      11s

    In my case I did not use the secret-mounted certificate mode; instead I let the Traefik Ingress generate the certificate and the IngressRoute automatically.

    2.4 Creating a StorageClass

    Very little needs to be customized; the main part we configured by hand is data persistence. We need to prepare usable PVCs or a StorageClass object for the services above in advance. Here we use a StorageClass resource named harbor-rook-ceph-block; you can of course adjust the accessMode or the storage sizes to your actual needs (harbor-data-storageclass.yaml):

    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: harbor-replicapool
      namespace: rook-ceph
    spec:
      failureDomain: host
      replicated:
        size: 3
        requireSafeReplicaSize: true
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: harbor-rook-ceph-block
    provisioner: rook-ceph.rbd.csi.ceph.com
    parameters:
      clusterID: rook-ceph
      pool: harbor-replicapool
      imageFormat: "2"
      imageFeatures: layering
      csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
      csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
      csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
      csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
      csi.storage.k8s.io/fstype: xfs
    allowVolumeExpansion: true
    reclaimPolicy: Delete

    First we create the two resources above, the CephBlockPool and the StorageClass:

    [root@kubernetes-01 harbor-helm]# kubectl apply -f harbor-data-storageclass.yaml
    cephblockpool.ceph.rook.io/harbor-replicapool created
    storageclass.storage.k8s.io/harbor-rook-ceph-block created
    [root@kubernetes-01 harbor-helm]# kubectl get CephBlockPool -n rook-ceph
    NAME                 AGE
    harbor-replicapool   7h11m
    replicapool          29h
    [root@kubernetes-01 harbor-helm]#
    [root@kubernetes-01 harbor-helm]# kubectl get storageclass -n harbor
    NAME                     PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    harbor-rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   7h5m
    rook-ceph-block          rook-ceph.rbd.csi.ceph.com   Delete          Immediate           false                  29h
    [root@kubernetes-01 harbor-helm]#
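
    Optionally, before handing the new StorageClass to Harbor, you can confirm that it really provisions volumes by creating a small throw-away PVC (the claim name and size below are arbitrary, and the harbor namespace is assumed to exist already):

    # test-pvc.yaml -- a disposable claim to verify dynamic provisioning
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: harbor-sc-test
      namespace: harbor
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: harbor-rook-ceph-block
      resources:
        requests:
          storage: 1Gi

    Apply it with kubectl apply -f test-pvc.yaml, check that it reaches the Bound state with kubectl get pvc harbor-sc-test -n harbor, and then remove it with kubectl delete -f test-pvc.yaml.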

    2.5 Installing Harbor

    Once these resources are created, install Harbor using the custom values file we defined earlier:
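
    The install below pulls the chart from the harbor/harbor chart repository rather than from the cloned directory, so the official repository has to be registered first and the harbor namespace has to exist; if either is missing on your machine, something like this sets them up:

    helm repo add harbor https://helm.goharbor.io    # official Harbor chart repository
    helm repo update
    kubectl create namespace harbor                  # skip if the namespace already exists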

    [root@kubernetes01 harbor-helm]# helm search repo harbor
    NAME            CHART VERSION   APP VERSION   DESCRIPTION
    apphub/harbor   4.0.0           1.10.1        Harbor is an an open source trusted cloud nativ...
    harbor/harbor   1.7.1           2.3.1         An open source trusted cloud native registry th...
    [root@kubernetes01 harbor-helm]#
    [root@kubernetes01 harbor-helm]# helm install harbor harbor/harbor --version 1.7.1 -f harbor-values.yaml -n harbor
    NAME: harbor
    LAST DEPLOYED: Thu Jul 29 21:44:39 2021
    NAMESPACE: harbor
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    Please wait for several minutes for Harbor deployment to complete.
    Then you should be able to visit the Harbor portal at https://registry.z0ukun.com
    For more details, please visit https://github.com/goharbor/harbor
    [root@kubernetes01 harbor-helm]#

    After the installation, let's verify that all of Harbor's resources have been created and are running properly:

    [root@kubernetes01 harbor-helm]# kubectl get secret -n harbor
    NAME                           TYPE                                  DATA   AGE
    default-token-mvfzn            kubernetes.io/service-account-token   3      16m
    harbor-chartmuseum             Opaque                                1      13m
    harbor-core                    Opaque                                8      13m
    harbor-database                Opaque                                1      13m
    harbor-ingress                 kubernetes.io/tls                     3      13m
    harbor-jobservice              Opaque                                2      13m
    harbor-notary-server           Opaque                                5      13m
    harbor-registry                Opaque                                2      13m
    harbor-registry-htpasswd       Opaque                                1      13m
    harbor-trivy                   Opaque                                2      13m
    sh.helm.release.v1.harbor.v1   helm.sh/release.v1                    1      13m
    [root@kubernetes01 harbor-helm]# kubectl get pod -n harbor
    NAME                                    READY   STATUS    RESTARTS   AGE
    harbor-chartmuseum-9668d67f7-nntlf      1/1     Running   0          13m
    harbor-core-57b48998b5-9cxjh            1/1     Running   0          13m
    harbor-database-0                       1/1     Running   0          13m
    harbor-jobservice-5dcf78dc87-79fxd      1/1     Running   0          13m
    harbor-notary-server-75c9588f76-2krb8   1/1     Running   1          13m
    harbor-notary-signer-b4986db8d-bj4zl    1/1     Running   1          13m
    harbor-portal-c55c48545-9qgnp           1/1     Running   0          13m
    harbor-redis-0                          1/1     Running   0          13m
    harbor-registry-77b85984bc-4zltn        2/2     Running   0          13m
    harbor-trivy-0                          1/1     Running   0          13m
    [root@kubernetes01 harbor-helm]# kubectl get pvc -n harbor
    NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
    data-harbor-redis-0               Bound    pvc-f2341a4a-7780-46eb-83ad-09e5efb8bbd5   1Gi        RWO            harbor-rook-ceph-block   13m
    data-harbor-trivy-0               Bound    pvc-510b78d2-ed21-4a50-96c3-8e3bc81ee888   5Gi        RWO            harbor-rook-ceph-block   13m
    database-data-harbor-database-0   Bound    pvc-e72cd985-e503-4e74-8960-62c36a8be648   1Gi        RWO            harbor-rook-ceph-block   13m
    harbor-chartmuseum                Bound    pvc-10a7b303-027f-44dc-9b85-b4ea249bf91b   5Gi        RWO            harbor-rook-ceph-block   13m
    harbor-jobservice                 Bound    pvc-0184419f-aa5a-4b25-91d4-260e8293f362   1Gi        RWO            harbor-rook-ceph-block   13m
    harbor-registry                   Bound    pvc-c8bf9a72-3dc5-4aa4-ae5c-8c1dc48dc7ba   5Gi        RWO            harbor-rook-ceph-block   13m
    [root@kubernetes01 harbor-helm]# kubectl get svc -n harbor
    NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    harbor-chartmuseum     ClusterIP   10.254.209.202   <none>        80/TCP              13m
    harbor-core            ClusterIP   10.254.198.49    <none>        80/TCP              13m
    harbor-database        ClusterIP   10.254.197.38    <none>        5432/TCP            13m
    harbor-jobservice      ClusterIP   10.254.135.127   <none>        80/TCP              13m
    harbor-notary-server   ClusterIP   10.254.252.0     <none>        4443/TCP            13m
    harbor-notary-signer   ClusterIP   10.254.116.63    <none>        7899/TCP            13m
    harbor-portal          ClusterIP   10.254.118.20    <none>        80/TCP              13m
    harbor-redis           ClusterIP   10.254.71.102    <none>        6379/TCP            13m
    harbor-registry        ClusterIP   10.254.167.186   <none>        5000/TCP,8080/TCP   13m
    harbor-trivy           ClusterIP   10.254.246.126   <none>        8080/TCP            13m
    [root@kubernetes01 harbor-helm]#

    These are all the resource objects created by the Helm installation. After waiting a short while, the installation completes; as shown above, all the resource objects are now in the Running state.

    Note: this is important. If your first attempt to log in to Harbor reports that the username or password is incorrect, don't panic. First check whether you are accessing Harbor over TLS; if not, use TLS. Then restart the redis service, or simply delete its pod (Kubernetes recreates it automatically), and try to log in again; that is what worked for me.
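
    For reference, recreating the Redis pod is a one-liner; the pod name comes from the kubectl get pod output above, and Kubernetes rebuilds it automatically because it belongs to a StatefulSet:

    kubectl delete pod harbor-redis-0 -n harbor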

    3. Accessing Harbor

    We can first check the Traefik dashboard to see whether the Ingress information for Harbor has been generated (or check from the command line with kubectl get ingressroute -n harbor). If it is there, we add the IP-to-domain mapping to the local hosts file and then access Harbor at the registry.z0ukun.com domain.
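
    The hosts entry only needs to map the Harbor domains to wherever your Traefik entry point is reachable; the IP below is a placeholder, so replace it with your actual node or load-balancer address:

    # /etc/hosts -- 192.168.1.100 is a placeholder for the ingress entry point
    192.168.1.100  registry.z0ukun.com  notary.z0ukun.com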

    Then log in with username admin and password Harbor12345 (in our case the password is whatever we overrode harborAdminPassword with during the Helm install) to reach the Portal home page:

    The Harbor home page offers many features. By default there is a project named library with public access. Inside the project there is also Helm chart management: we can upload charts manually here, and we can configure the project's images, for example whether to scan images automatically:

    4. Configuring the Docker Client for the Registry

    Logging in to the registry requires the HTTPS CA certificate ca.crt. On the Harbor home page, open Projects, enter library, and click Registry Certificate to download it. After downloading, copy the certificate content to the designated directory on the server.
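
    If you prefer the command line to the portal download, the CA can usually be extracted from the TLS secret in the harbor namespace instead; assuming the auto-generated harbor-ingress secret (visible in the kubectl get secret output earlier) contains a ca.crt entry, this writes it to a local file:

    kubectl get secret harbor-ingress -n harbor -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt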

    On the server, create a certs.d folder under /etc/docker, and inside it a folder named after the Harbor domain; the commands below create the directories. Then create the CA certificate file in /etc/docker/certs.d/registry.z0ukun.com with the following content:

    [root@kubernetes01 ~]# mkdir -p /etc/docker/certs.d/registry.z0ukun.com/
    [root@kubernetes01 ~]# cd /etc/docker/certs.d/registry.z0ukun.com/
    [root@kubernetes01 registry.z0ukun.com]# cat ca.crt
    -----BEGIN CERTIFICATE-----
    MIIDEzCCAfugAwIBAgIQGbNEDw9kTrrweuqtpfRfLjANBgkqhkiG9w0BAQsFADAU
    MRIwEAYDVQQDEwloYXJib3ItY2EwHhcNMjEwNzI5MTM0NDQzWhcNMjIwNzI5MTM0
    NDQzWjAUMRIwEAYDVQQDEwloYXJib3ItY2EwggEiMA0GCSqGSIb3DQEBAQUAA4IB
    DwAwggEKAoIBAQDCjm9eUHN8S+T1wxkN0Js0katvSpiCj/Pp2s9CKQB7FlikLk6i
    GLzxxkGHJGxa7wa+9xj7c1qGttHkwGKJc7YY163i4DoH6+EhdxBDasEBuwDphh+j
    SoKp+MmI8oACyPfShYYV2RiYV5AGE059a7ODc+TXLKNAT/vNG0f4g40EVhaT8SjD
    KEzmwN6WbkyUQpy3QplqAtv5WX89tJy85tXH1iw/3e3rLebXxJFrrik5jRe7L7Uc
    T6bG7hB2w6D9PX3Ne5OapgAdPFLezHhFbxDQF36bikx2/kWIZZ1Wr1lma/jwk+gp
    iAKnzB5A7ryNH5LjAgwgWWlV90+JwfvX1EZXAgMBAAGjYTBfMA4GA1UdDwEB/wQE
    AwICpDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDwYDVR0TAQH/BAUw
    AwEB/zAdBgNVHQ4EFgQU2Qc33UVujdkSoyYuYVGt9/vF4+AwDQYJKoZIhvcNAQEL
    BQADggEBAFsKe+5rJAzCZKI5WUBuW1U7F+tKmtBajM3X+xmMs3+5VCRqFEEK9PuI
    RLF5sQbnhMv6jPvng0ujCDVp6UytV0srU+kgGr5ey5xWaiN4qAto3E50ojJzvrtg
    GlRKTNyJg4xLh1jWC34ny0OaAQYm3AputpsaLHmQ2OhkP6VtuKlbtk5j9SoY6uR4
    XSoh8cSGngE2VbBXB3q54nT+lY28ltVuUn7tIVJr+uziSeGIXATTJjJseUmSiUvF
    xSvLGZCPtRB92Tr/3LvlCj26nstwOhUR3kU+aKOdHjuFtwPCh4/jBX+kkV2tjAEr
    Wxga0Hg1D3QLRxKL77aL4cPK6/Ub260=
    -----END CERTIFICATE-----
    [root@kubernetes01 registry.z0ukun.com]#
    [root@kubernetes01 ~]# docker login registry.z0ukun.com
    Username: admin
    Password:
    WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
    Configure a credential helper to remove this warning. See
    https://docs.docker.com/engine/reference/commandline/login/#credentials-store
    Login Succeeded
    [root@kubernetes01 ~]#

    Once this is configured, we can log in to the Harbor registry with docker login; images can only be pushed to the registry after a successful login.

    5. Functional Tests

    5.1 Pushing an image

    We pull a busybox image onto the server. Now, how do we push this image to our private registry? First we need to re-tag it with the registry.z0ukun.com prefix, so the push knows which registry to send it to. The detailed steps are as follows:

    [root@kubernetes01 registry.z0ukun.com]# docker pull busybox
    Using default tag: latest
    latest: Pulling from library/busybox
    b71f96345d44: Pull complete
    Digest: sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60
    Status: Downloaded newer image for busybox:latest
    docker.io/library/busybox:latest
    [root@kubernetes01 registry.z0ukun.com]# docker images | grep busybox
    busybox                               latest   69593048aa3a   7 weeks ago   1.24MB
    [root@kubernetes01 registry.z0ukun.com]# docker tag busybox registry.z0ukun.com/library/busybox
    [root@kubernetes01 registry.z0ukun.com]# docker images | grep busybox
    busybox                               latest   69593048aa3a   7 weeks ago   1.24MB
    registry.z0ukun.com/library/busybox   latest   69593048aa3a   7 weeks ago   1.24MB
    [root@kubernetes01 registry.z0ukun.com]# docker push registry.z0ukun.com/library/busybox
    Using default tag: latest
    The push refers to repository [registry.z0ukun.com/library/busybox]
    5b8c72934dfc: Pushed
    latest: digest: sha256:dca71257cd2e72840a21f0323234bb2e33fea6d949fa0f21c5102146f583486b size: 527
    [root@kubernetes01 registry.z0ukun.com]#

    After the push completes, we can also see this image's details on the Portal:

    5.2 Pulling an image

    With the push working, let's also test pulling. We first delete the local registry.z0ukun.com/library/busybox image and then pull it back with docker pull:

    [root@kubernetes01 registry.z0ukun.com]# docker images | grep busybox
    busybox                               latest   69593048aa3a   7 weeks ago   1.24MB
    registry.z0ukun.com/library/busybox   latest   69593048aa3a   7 weeks ago   1.24MB
    [root@kubernetes01 registry.z0ukun.com]# docker rmi registry.z0ukun.com/library/busybox
    Untagged: registry.z0ukun.com/library/busybox:latest
    Untagged: registry.z0ukun.com/library/busybox@sha256:dca71257cd2e72840a21f0323234bb2e33fea6d949fa0f21c5102146f583486b
    [root@kubernetes01 registry.z0ukun.com]# docker images | grep busybox
    busybox                               latest   69593048aa3a   7 weeks ago   1.24MB
    [root@kubernetes01 registry.z0ukun.com]# docker pull registry.z0ukun.com/library/busybox:latest
    latest: Pulling from library/busybox
    Digest: sha256:dca71257cd2e72840a21f0323234bb2e33fea6d949fa0f21c5102146f583486b
    Status: Downloaded newer image for registry.z0ukun.com/library/busybox:latest
    registry.z0ukun.com/library/busybox:latest
    [root@kubernetes01 registry.z0ukun.com]# docker images | grep busybox
    busybox                               latest   69593048aa3a   7 weeks ago   1.24MB
    registry.z0ukun.com/library/busybox   latest   69593048aa3a   7 weeks ago   1.24MB
    [root@kubernetes01 registry.z0ukun.com]#

    From the output above we can see that our private docker registry is up and working. You can try creating a private project and a new user, and then pull/push images with that user; a sketch of that is shown below. Harbor also offers other features, such as image replication, pluggable scanners, and P2P preheating, which you can explore on your own; we won't go into detail here.
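
    As a sketch of that exercise, suppose a private project called myproject and a user devuser with push permission on it have been created in the portal (both names are made up for illustration); pushing to and pulling from it would then look roughly like this:

    docker login registry.z0ukun.com -u devuser      # log in as the new user
    docker tag busybox:latest registry.z0ukun.com/myproject/busybox:latest
    docker push registry.z0ukun.com/myproject/busybox:latest
    docker pull registry.z0ukun.com/myproject/busybox:latest   # private images also require login to pull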

  • Original article: https://blog.csdn.net/zfw_666666/article/details/125217822