Kubernetes Deployment (8): k8s Project Delivery ---- (5) Continuous Deployment


    I. Cloud Computing Model Concepts

    1. ● You manage  # managed by you
    2. ● Managed by vendor  # managed by the vendor
    3. ● Applications  # the business applications your developers write
    4. ● Runtimes  # the runtime (or build/integration environment) the applications need in order to run, e.g. Java needs a JRE, Python needs a python-env
    5. ● Security & integration  # integration and security, e.g. network security, business security
    6. ● Databases  # middleware and databases
    7. ● Service  # the individual virtual machines running on top of the virtualization layer
    8. ● Virtualization  # virtualization resources (KVM, Xen, OpenVZ, Docker as lightweight virtualization)
    9. ● Service HW  # the server hardware resources
    10. ● Storage  # storage resources
    11. ● Networking  # network resources
    • IaaS (Infrastructure as a Service) is the lowest layer of cloud computing. ① You buy an IaaS platform from a vendor, and the vendor provides the infrastructure: Networking (network resources), Storage (storage resources), Service HW (server hardware), Virtualization (virtualization resources such as KVM, Xen, OpenVZ, Docker as lightweight virtualization) and Service (the VMs running on that virtualization). ② The vendor does not provide the software layers (the Applications developed in-house, Runtimes, Security & integration, Databases/middleware). In short: the vendor supplies the underlying hardware, network and the way the hardware is delivered; your own ops and dev teams supply everything on top (runtime environment, databases, code).
    • PaaS: ① the vendor provides the underlying hardware, network, hardware delivery, databases and runtime environment; ② the vendor does not provide the application itself. In short: the vendor supplies everything up to the product runtime and databases, and your own developers only supply the code.
    • SaaS: ① the vendor provides everything (underlying hardware, network, hardware delivery, product runtime, databases and the code). Your company simply pays for it and uses it directly.

    What Kubernetes is not

    By design, Kubernetes is not a traditional PaaS (Platform as a Service); it is closer to a traditional IaaS. But what an enterprise usually wants is the ecosystem around the cloud, i.e. a PaaS: it only wants to supply its code and have the vendor provide everything else. That is why Alibaba Cloud, Tencent Cloud and the like exist; the enterprise just runs its code on those vendor platforms.

    • Kubernetes does not limit the types of applications or the application frameworks it supports, and does not restrict the set of supported language runtimes (for example Java, Python, Ruby), as long as the application fits the 12-factor model. It does not distinguish "apps" from "services". Kubernetes supports very different workloads, stateful, stateless and data-processing alike; if the application can run in a container, it will run well on Kubernetes.
    • Kubernetes does not provide middleware (such as message buses), data-processing frameworks (such as Spark), databases (such as MySQL) or cluster storage systems (such as CephFS) as built-in services, but all of these can run on top of Kubernetes.
    • Kubernetes does not deploy source code and does not build applications. Different users have different needs and preferences for their continuous-integration (CI) workflows, so Kubernetes supports layered CI workflows without dictating how they should work.
    • Kubernetes lets users choose their own logging, monitoring and alerting systems.
    • Kubernetes does not provide or mandate a comprehensive application configuration language/system (for example, jsonnet).
    • Kubernetes does not provide any machine configuration, maintenance, management or self-healing system.

    II. PaaS Platform Introduction

    • Because Kubernetes leans toward being an IaaS-style piece of software, providing container orchestration (it orchestrates storage, security, networking and the business containers/Pods), more and more cloud vendors are building PaaS platforms on top of it. Alibaba Cloud, for example, offers a Kubernetes service that is essentially the same deployment we do ourselves, except that they have wrapped it (e.g. as an Ansible playbook) so that running the playbook brings the cluster up. QingCloud, Tencent Cloud and Microsoft Azure all offer Kubernetes-based PaaS platforms with web consoles where a few clicks are enough to release software.
    • Prerequisites for gaining PaaS capability:
    1.   ● a Docker engine providing a unified application runtime (docker)
    2.   ● IaaS capability (infrastructure orchestration)
    3.   ● reliable middleware clusters and database clusters (the DBA's main job)
    4.   ● a distributed storage cluster (the storage engineer's main job)
    5.   ● suitable monitoring and logging systems (Prometheus, ELK)
    6.   ● a complete CI/CD system (Jenkins, and...?)

    A unified application runtime (Docker):

    • The three core Docker concepts:
      Images, containers and registries are the three core concepts of Docker.
      A Docker image is similar to a virtual-machine image; you can think of it as a read-only template.
      A Docker container is like a lightweight sandbox; Docker uses containers to run and isolate applications.
      A container is a running instance created from an image. It can be started, stopped and deleted, and containers are isolated from, and invisible to, one another.
      The image itself is read-only. When a container starts from an image, a writable layer is created on top of it.
      Simply put, a container is a running instance of an image; the image is a static, read-only file, while the container adds the writable layer needed at runtime.
      If a virtual machine is a complete simulated operating system (kernel, application runtime and the rest of the system environment) plus the applications running on it, then a Docker container is one application (or a group of applications) running independently, together with the environment it requires.
      A Docker registry is similar to a code repository: it is where Docker images are stored centrally.
    • IaaS capability (infrastructure orchestration):
      For example, Kubernetes provides network orchestration: with a CNI network plugin, containers can communicate across hosts. Kubernetes scales horizontally, so cluster size is no longer constrained by the infrastructure; as long as you can buy hardware you are not limited by the technology. It also has its own security capability (the RBAC mechanism) and storage orchestration: this series mainly uses NFS network-attached storage (a kind of NAS), but distributed storage, object storage or SAN storage can equally be plugged into Kubernetes through PV and PVC. Kubernetes orchestrates networking, storage, compute resources and the container lifecycle, and lets you attach readiness probes, liveness probes, postStart hooks, preStop hooks and other advanced features to containers (see the short sketch after this list). Kubernetes has very strong IaaS capabilities and manages IaaS infrastructure well.
    • Reliable middleware clusters and database clusters:
      Stateful applications (ES, MongoDB, Redis with persistence enabled, MySQL, Oracle) are the DBA's main job; the system engineers manage the stateless services, which are all packaged into Kubernetes.
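    A minimal sketch of the probes and lifecycle hooks mentioned above (the container name and image here are only illustrative, not one of this article's manifests):

    # illustrative container spec only, showing readiness/liveness probes and lifecycle hooks
    containers:
    - name: demo
      image: harbor.od.com:180/public/nginx:v1.7.9      # placeholder image
      readinessProbe:                 # is the container ready to receive traffic?
        httpGet:
          path: /
          port: 80
      livenessProbe:                  # is the container still alive? restart it if this keeps failing
        tcpSocket:
          port: 80
      lifecycle:
        postStart:                    # hook run right after the container starts
          exec:
            command: ["/bin/sh", "-c", "echo started >> /tmp/hook.log"]
        preStop:                      # hook run just before the container stops
          exec:
            command: ["/usr/sbin/nginx", "-s", "quit"]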

    What is CD:

    Of the PaaS prerequisites listed above, only the last one, the CD system, is still unexplained. CD means continuous deployment. When we delivered the dubbo microservices earlier, the flow was: developers push code to the git repository, Jenkins pulls the code and does continuous integration (a parameterized pipeline), the result becomes an image that is pushed to the private repository in Harbor, and then we hand-write Kubernetes YAML and deploy it to the cluster manually. Continuous deployment solves exactly that last step: once Jenkins has produced the image and pushed it to Harbor, how do we generate the YAML and deploy it to Kubernetes automatically?
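    In other words, what a CD system automates is the final manual step; expressed as the commands a human would otherwise run (the namespace, deployment and image names below are placeholders, not from this article):

    # what CD replaces: manually rolling a new image into the cluster (names are placeholders)
    kubectl -n app set image deployment/dubbo-demo-service dubbo-demo-service=harbor.od.com:180/app/dubbo-demo-service:v2
    kubectl -n app rollout status deployment/dubbo-demo-service     # wait for the rollout to finish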

    Common Kubernetes-based CD systems:

    ●  Self-built: write your own CD tooling in Python or another language that calls kubectl apply -f, or talks to the apiserver directly to tell it what to do
    ●  Argo CD: a GitOps tool driven by Git repositories; under the hood it also calls the Kubernetes API
    ●  OpenShift: Red Hat's enterprise PaaS platform built on Kubernetes; heavyweight, with CD, a private container registry, pipelines and more
    ●  Spinnaker: fully open source and free; the downside is that it is somewhat complex

    III. Spinnaker Introduction

    Spinnaker is a continuous-delivery platform open-sourced by Netflix in 2015. It inherits the strengths of Asgard, Netflix's previous generation of cluster and deployment management tooling (web-based cloud management and deployment), while dropping some designs that had become outdated as the business and technology evolved: it improves the reusability of the continuous-delivery system, provides stable and reliable APIs, gives a global view of both the infrastructure and the applications, is simpler to configure, manage and operate, and remains fully compatible with Asgard. In short, for Netflix, Spinnaker is the stronger continuous-delivery platform.

    3.1 Main features

    Cluster management:

    Cluster management means that Spinnaker is mainly used to manage cloud resources. The "cloud" Spinnaker talks about can be understood as AWS-style IaaS resources; it can manage OpenStack, Google Cloud, Azure and so on, and it later gained support for managing Kubernetes, still following the same infrastructure-management model.

    Deployment management:

    Managing the deployment process is Spinnaker's core feature: it takes the images produced by the Jenkins pipeline and deploys them into the Kubernetes cluster so that the services actually run.

    3.2 Architecture

    (Spinnaker architecture diagram)

    Component functions:

    Spinnaker itself is also a set of microservices, built on Java and Spring Cloud.

    • Deck is a completely standalone front-end static web project.
    • Gate is the API gateway and the heart of Spinnaker; every Spinnaker UI and every API caller communicates through Spinnaker Gate.
    • Custom Script/API Caller lets users write their own scripts or API callers that hit Gate's interfaces to use Spinnaker's features and implement their own requirements.
    • Fiat is Spinnaker's authentication service (account authentication plus permission management). Logging in to Spinnaker goes through Fiat, which can be connected to AD, OpenLDAP or other unified account systems; for example, an account and password created in OpenLDAP can be used to log in to Spinnaker because Fiat queries OpenLDAP for them. It is not used in this article.
    • Clouddriver is the cloud driver, the engine that drives the underlying cloud platform. It acts as the brain that decides which cloud engine Spinnaker connects to and manages (Kubernetes, Google Cloud, etc.), and it is the hardest component to deploy.
    • Front50 manages data persistence. Spinnaker, as an enterprise Kubernetes operations and automated-deployment platform, naturally needs persistence, and Front50 handles it rather cleverly: instead of a relational or NoSQL database it uses Redis as a cache plus an object store behind it to save the metadata of applications, pipelines, projects and notifications. (A familiar object store would be Alibaba Cloud OSS.) Front50's default object store is Amazon S3, but using S3 requires an Amazon account, and we are not going to create one just for Front50. So in this setup the object store is minio, a small open-source object-storage project, and Front50 connects to minio exactly the way it connects to S3. Note: Redis is only a cache; if Redis goes down you merely lose things like success/failure messages in the UI, the data itself all lives in minio.
    • Orca is the orchestration engine. It handles all ad-hoc operations and pipelines. It is very important, sitting between Gate above and Clouddriver below: every click in the Deck UI that goes through Gate and ends at Clouddriver must pass through Orca. Likewise, for any pipeline or Spinnaker task, Gate calls Orca, and Orca decides whether to call Front50 for data, Clouddriver to drive the cloud engine, Rosco, or Kayenta.
    • Rosco helps manage and schedule virtual machines (VMs, Xen, etc.). Not used in this article.
    • Kayenta provides automated canary analysis. Not used in this article.
    • Echo is the messaging and notification service. It works out which finished task should notify whom, supports sending notifications (e.g. Slack, email, SMS), and handles incoming webhooks from services such as GitHub. In the architecture, Igor depends on Echo.
    • Igor talks to Jenkins. If Spinnaker wants to call Jenkins interfaces it must go through Igor, which integrates with continuous-integration systems.
    • Halyard CLI is Spinnaker's scaffolding tool. Just as construction workers need scaffolding to climb up and lay bricks, Halyard helps install, configure and upgrade Spinnaker. It is not used in this document: Halyard requires learning a large number of Halyard commands, and since it is the official tool it has to pull from the Google registry gcr.io/spinnaker-marketplace, which cannot be reached from inside China unless you have, say, an AWS environment abroad.

    IV. Deploying the Armory Distribution of Spinnaker

    Because the Spinnaker images live in Google's registry, they usually cannot be downloaded from inside China. Many third-party mirrors have therefore appeared: someone downloads the images through some channel and re-uploads them to domestic repositories. But you cannot tell whether those images have been modified or are safe, and because they are not updated promptly you may not find the latest Spinnaker release. As Spinnaker matured and its community grew, some large third-party companies started packaging the original Spinnaker into their own products, so Spinnaker is effectively also distributed commercially or for free, much like Hadoop and its third-party distribution CDH from Cloudera. That is why we deploy the Armory distribution of Spinnaker here.

    Deployment order: minio  →  redis  →  Clouddriver  →  Front50  →  Orca  →  Echo  →  Igor  →  Gate  →  Deck  →  the nginx front-end proxy for Deck (static content)

    1. Deploying Minio

    1.1 Prepare the image

    [root@hdss7-200 ~]# docker pull minio/minio:latest
    [root@hdss7-200 ~]# docker image ls -a |grep minio
    minio/minio                                    latest                          e31e0721a96b   3 months ago    406MB

    [root@hdss7-200 ~]# docker image tag e31e0721a96b harbor.od.com:180/armory/minio:latest

    Create a private project named armory on harbor.od.com.

    Push the image to the armory project:
    [root@hdss7-200 ~]# docker login harbor.od.com:180
    [root@hdss7-200 ~]# docker image push harbor.od.com:180/armory/minio:latest

    1.2 Prepare the resource manifests

    [root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory;cd /data/k8s-yaml/armory
    [root@hdss7-200 armory ]# mkdir minio;cd  minio
    [root@hdss7-200 minio]# vi dp.yaml   # minio provides upload/download, functionally the same as S3. Note: older minio versions served the web UI directly on port 9000, while newer versions separate it and need an explicit console port; for details see k8s 部署 minio_Jerry00713的博客-CSDN博客

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      labels:
        name: minio
      name: minio
      namespace: armory
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 7
      selector:
        matchLabels:
          name: minio
      template:
        metadata:
          labels:
            app: minio
            name: minio
        spec:
          containers:
          - name: minio
            env:
            - name: MINIO_ROOT_USER
              value: "admin"
            - name: MINIO_ROOT_PASSWORD
              value: "admin123"
            image: harbor.od.com:180/armory/minio:latest
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - minio server /data --console-address ":5000"
            ports:
            - name: data
              containerPort: 9000
              protocol: "TCP"
            - name: console
              containerPort: 5000
              protocol: "TCP"
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /minio/health/ready
                port: 9000
                scheme: HTTP
              initialDelaySeconds: 10
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 5
            volumeMounts:
            - mountPath: /data
              name: data
          imagePullSecrets:
          - name: harbor
          volumes:
          - nfs:
              server: hdss7-200
              path: /data/nfs-volume/minio
            name: data

    Explanation:

    progressDeadlineSeconds: 600  # Used mainly for the case where an upgrade gets stuck for some reason during a rollout (at which point it has not yet clearly failed). Once the rollout starts, progressDeadlineSeconds counts down the configured number of seconds, and during the countdown nothing happens. Remember that a Deployment is bound to its Pods through the label selector, and the Pods first have to pull the image; if the image pull goes wrong, the containers stay unhealthy. With progressDeadlineSeconds set, once the deadline expires without the rollout making progress, the Deployment's Progressing condition is marked False with a reason (ProgressDeadlineExceeded); anything that never reaches Running counts as a lack of progress. Kubernetes reports this status; it does not automatically roll the Deployment back.
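    You can watch this with the standard rollout commands:

    [root@hdss7-21 ~]# kubectl -n armory rollout status deployment minio              # blocks until the rollout succeeds or the deadline passes
    [root@hdss7-21 ~]# kubectl -n armory describe deployment minio | grep -A 5 Conditions   # shows Progressing=False / ProgressDeadlineExceeded when stuck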

    replicas: 1  # run one replica

    revisionHistoryLimit: 7  # how many old revisions to keep. Each Deployment update records the previous configuration as a backup; by default Kubernetes keeps the entire rollout history, which makes it easy to roll back to a given revision.

    imagePullPolicy: IfNotPresent   # if the image already exists locally, do not pull it from Harbor

    args:
        - server
        - /data

    (This is the older way of starting minio, kept here for reference: older images were launched with just "server /data" and served the web UI on port 9000, while the newer image used above needs the explicit command with --console-address.)

    readinessProbe:                    # readiness probe
      failureThreshold: 3              # 3 consecutive failures count as failed
      httpGet:                         # an HTTP GET request
        path: /minio/health/ready      # URL to probe
        port: 9000                     # port to probe
        scheme: HTTP                   # plain HTTP
      initialDelaySeconds: 10          # start probing 10s after the Pod starts
      periodSeconds: 10                # while the Pod is running, probe /minio/health/ready on port 9000 every 10s
      successThreshold: 1              # a single success counts as success
      timeoutSeconds: 5                # each probe waits at most 5s for a 2xx/3xx response before counting as failed

    Summary: starting 10s after the Pod starts, Kubernetes probes curl -I 127.0.0.1:9000/minio/health/ready every 10s, waiting up to 5s each time for a 2xx/3xx response. A single success marks the check as passed. If a probe fails, the next attempt happens 10s later; after 3 consecutive failures the check is considered failed and the container will not be routed any traffic.
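    You can run the same check by hand against the Pod IP to see what the kubelet sees (take the real Pod IP from kubectl get pod -n armory -o wide; the path and port match the probe above):

    [root@hdss7-21 ~]# kubectl get pod -n armory -o wide | grep minio
    [root@hdss7-21 ~]# curl -I http://<POD-IP>:9000/minio/health/ready      # an HTTP 200 here means the readiness probe will pass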

    env:
        - name: MINIO_ROOT_USER
   # the minio startup script reads the web UI account from this environment variable (older images used MINIO_ACCESS_KEY)
          value: admin
        - name: MINIO_ROOT_PASSWORD
  # the minio startup script reads the web UI password from this environment variable (older images used MINIO_SECRET_KEY)
          value: admin123

    volumes:
      - nfs:
          server: hdss7-200
          path: /data/nfs-volume/minio
    # persist the data on NFS so that nothing is lost even if the container dies
        name: data

    [root@hdss7-200 minio]# mkdir /data/nfs-volume/minio
    [root@hdss7-200 minio]# vi svc.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: minio
      namespace: armory
    spec:
      ports:
      - name: data
        port: 80
        targetPort: 9000
        protocol: TCP
      - name: console
        port: 5000
        targetPort: 5000
        protocol: TCP
      selector:
        app: minio

    [root@hdss7-200 minio]# vi ingress.yaml    # minio is an object store that exposes an HTTP interface; files can be uploaded and downloaded over HTTP, just like S3

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: minio
      namespace: armory
    spec:
      rules:
      - host: minio.od.com
        http:
          paths:
          - path: /
            backend:
              serviceName: minio
              servicePort: 5000

    1.3 Configure DNS resolution

    [root@hdss7-11 ~]# vi /var/named/od.com.zone

    $ORIGIN od.com.
    $TTL 600        ; 10 minutes
    @       IN SOA  dns.od.com. dnsadmin.od.com. (
                    2020010518 ; serial
                    10800      ; refresh (3 hours)
                    900        ; retry (15 minutes)
                    604800     ; expire (1 week)
                    86400      ; minimum (1 day)
                    )
                    NS   dns.od.com.
    $TTL 60 ; 1 minute
    dns                A    10.4.7.11
    harbor             A    10.4.7.200
    k8s-yaml           A    10.4.7.200
    traefik            A    10.4.7.10
    dashboard          A    10.4.7.10
    zk1                A    10.4.7.11
    zk2                A    10.4.7.12
    zk3                A    10.4.7.21
    jenkins            A    10.4.7.10
    dubbo-monitor      A    10.4.7.10
    demo               A    10.4.7.10
    config             A    10.4.7.10
    mysql              A    10.4.7.11
    portal             A    10.4.7.10
    zk-test            A    10.4.7.11
    zk-prod            A    10.4.7.12
    config-test        A    10.4.7.10
    config-prod        A    10.4.7.10
    demo-test          A    10.4.7.10
    demo-prod          A    10.4.7.10
    blackbox           A    10.4.7.10
    prometheus         A    10.4.7.10
    grafana            A    10.4.7.10
    km                 A    10.4.7.10
    kibana             A    10.4.7.10
    minio              A    10.4.7.10

    (As usual in this series, bump the serial and append the new record, here "minio A 10.4.7.10".)

    [root@hdss7-11 ~]# systemctl restart named
    [root@hdss7-11 ~]# dig -t A minio.od.com +short @10.4.7.11 
    10.4.7.10

    For a deeper look at how the L4/L7 load balancers and the apiserver communicate, see: https://blog.csdn.net/Jerry00713/article/details/124216958

    1.4 Create the armory namespace and the secret

    Because harbor.od.com:180/armory/minio:latest lives in a private repository, the minio Pod cannot pull the image on its own. So we create a secret in the armory namespace that carries the Harbor account, password and related information; the Pod then uses this secret to pull images from the private armory project.

    [root@hdss7-21 ~]# kubectl create ns armory
    namespace/armory created

    [root@hdss7-21 ~]# kubectl create secret docker-registry harbor --docker-server=harbor.od.com:180 --docker-username=admin --docker-password=Harbor12345 -n armory

    1.5 Apply the resource manifests

    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/armory/minio/dp.yaml
    deployment.extensions/minio created
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/armory/minio/svc.yaml
    service/minio created
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/armory/minio/ingress.yaml
    ingress.extensions/minio created

    Check the Pod in the dashboard:

    This Pod takes a while to reach Running, because of the readiness probe.

    1.6 Access minio.od.com

    It is a very simple page. The account is admin and the password admin123, passed in through (- name: MINIO_ROOT_USER    value: admin) and (- name: MINIO_ROOT_PASSWORD    value: admin123); conveniently, S3 credentials are passed exactly the same way. There will be content here once Front50 is installed.
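    Front50 will later talk to this minio exactly the way it would talk to S3. As a rough sketch (awscli is not part of this article's steps and would need to be installed separately), any S3 client pointed at the minio Service's data port (port 80 of the Service, i.e. container port 9000, not the 5000 console port) with the admin/admin123 keys works:

    # illustrative only: exercising the S3-compatible API that Front50 will use
    export AWS_ACCESS_KEY_ID=admin
    export AWS_SECRET_ACCESS_KEY=admin123
    aws --endpoint-url http://<minio-service-ClusterIP> s3 ls      # lists buckets; empty until Front50 creates armory-platform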

    2. Deploying redis

    The redis version does not matter much, but avoid anything too new. This article does not persist redis: redis is just a cache here, so even if it crashes and the data is gone, nothing important is affected, only things like the UI history disappear.

    2.1 Prepare the image

    [root@hdss7-200 ~]# docker pull redis:4.0.14
    [root@hdss7-200 minio]# docker images |grep redis
    redis                   4.0.14    191c4017dcdd   2 years ago   89.3MB
    goharbor/redis-photon   v1.9.4    48c941077683   2 years ago   113MB
    [root@hdss7-200 ~]# docker tag 191c4017dcdd harbor.od.com:180/armory/redis:4.0.14
    [root@hdss7-200 ~]# docker login harbor.od.com:180
    [root@hdss7-200 ~]# docker image push harbor.od.com:180/armory/redis:4.0.14

    2.2 Prepare the resource manifests

    redis does not expose an HTTP service externally; it only exposes a Service inside the cluster, which means the other Kubernetes components connect to redis through the Service name (redis.armory.svc.cluster.local.). Why not connect through the Service's ClusterIP? Because the ClusterIP can change if the Service is ever recreated, while the Service name (redis.armory.svc.cluster.local.) never changes.
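    Once the Service below is applied, you can confirm that the name resolves inside the cluster with a throwaway Pod (a hedged example: the busybox image is not part of this article and must be reachable from the nodes):

    # resolve the redis Service name from inside the cluster (illustrative)
    [root@hdss7-21 ~]# kubectl -n armory run dns-test --rm -it --restart=Never --image=busybox:1.28 \
        -- nslookup redis.armory.svc.cluster.local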

    [root@hdss7-200 ~]# cd /data/k8s-yaml/armory
    [root@hdss7-200 armory ]# mkdir redis;cd redis
    [root@hdss7-200 redis]# vi dp.yaml

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      labels:
        name: redis
      name: redis
      namespace: armory
    spec:
      replicas: 1
      revisionHistoryLimit: 7
      selector:
        matchLabels:
          name: redis
      template:
        metadata:
          labels:
            app: redis
            name: redis
        spec:
          containers:
          - name: redis
            image: harbor.od.com:180/armory/redis:4.0.14
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 6379
              protocol: TCP
          imagePullSecrets:
          - name: harbor

    [root@hdss7-200 redis]# vi svc.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: redis
      namespace: armory
    spec:
      ports:
      - port: 6379
        protocol: TCP
        targetPort: 6379
      selector:
        app: redis

    2.3 Apply the resource manifests

    [root@hdss7-21 minio]# kubectl apply -f http://k8s-yaml.od.com/armory/redis/dp.yaml
    deployment.extensions/redis created
    [root@hdss7-21 minio]# kubectl apply -f http://k8s-yaml.od.com/armory/redis/svc.yaml
    service/redis created

    2.4 Verify that redis started

    [root@hdss7-21 minio]# kubectl get pod -n armory -o wide |grep redis
    redis-58b569cdd-4v5jk   1/1   Running   0   9s   172.7.21.8   hdss7-21.host.com   <none>   <none>
    [root@hdss7-21 minio]# telnet 172.7.21.8 6379       # telnet to port 6379 on the container
    Trying 172.7.21.8...
    Connected to 172.7.21.8.
    Escape character is '^]'.
    or
    [root@hdss7-21 ~]# kubectl get svc -n armory -o wide |grep redis
    redis   ClusterIP   192.168.119.189   <none>   6379/TCP   4m36s   app=redis
    [root@hdss7-21 ~]# telnet 192.168.119.189 6379      # telnet to port 6379 on the cluster IP
    Trying 192.168.119.189...
    Connected to 192.168.119.189.
    Escape character is '^]'.
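    You can also go one step beyond telnet and issue a real command through the Pod (the Pod name comes from the kubectl get pod output above):

    [root@hdss7-21 ~]# kubectl -n armory exec -it redis-58b569cdd-4v5jk -- redis-cli ping
    PONG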

    3. Deploying Spinnaker-clouddriver

    Deploy the clouddriver (cloud driver) component.

    3.1 Prepare the image

    # The image used here is fairly old; it does not need to be new, although you can try a newer version. "slim" is the slimmed-down build.
    [root@hdss7-200 ~]# docker pull armory/spinnaker-clouddriver-slim:release-1.11.x-bee52673a
    [root@hdss7-200 clouddriver]# docker image ls -a |grep clouddriver
    armory/spinnaker-clouddriver-slim   release-1.11.x-bee52673a   f1d52d01e28d   3 years ago   1.05GB
    [root@hdss7-200 ~]# docker tag f1d52d01e28d harbor.od.com:180/armory/clouddriver:v1.11.x
    [root@hdss7-200 ~]# docker push harbor.od.com:180/armory/clouddriver:v1.11.x

    3.2 Prepare the minio secret

    What is a Kubernetes secret? Secrets store and manage sensitive data such as passwords, tokens and keys. There are three types: docker-registry, which specifically stores registry (Harbor) credentials; generic, which stores arbitrary accounts and passwords, much like mounting environment variables with env except that the values are encoded and cannot be read casually without sufficient permissions; and tls, for certificates. Here we create a generic secret holding the minio account and password so that Front50 can log in and talk to minio. Why does having the account and password let Front50 access minio? Because that is exactly how Front50 connects to its default backend, S3, and minio behaves the same way as S3. More about secrets: k8s之Secret详细理解及使用_Jerry00713的博客-CSDN博客

    [root@hdss7-200 ~]# cd /data/k8s-yaml/armory
    [root@hdss7-200 armory]# mkdir clouddriver;cd clouddriver
    [root@hdss7-200 clouddriver]# vi credentials    # the minio account and password; this will later be mounted into Spinnaker

    [default]
    aws_access_key_id=admin
    aws_secret_access_key=admin123

    # On a worker node, download /data/k8s-yaml/armory/clouddriver/credentials from 7-200
    [root@hdss7-21 ~]# wget http://k8s-yaml.od.com/armory/clouddriver/credentials

    # Create a generic secret in the armory namespace
    [root@hdss7-21 ~]# kubectl create secret generic credentials --from-file=./credentials -n armory
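    You can verify from the command line what clouddriver/Front50 will actually read; the secret data is only base64-encoded, not encrypted:

    [root@hdss7-21 ~]# kubectl -n armory get secret credentials -o jsonpath='{.data.credentials}' | base64 -d
    [default]
    aws_access_key_id=admin
    aws_secret_access_key=admin123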

    Check the secret in the dashboard (armory namespace).

    3.3 Prepare the Kubernetes user configuration

    Because the clouddriver component has to manage the Kubernetes cluster, it needs cluster-admin (or at least appropriate) permissions. There are two ways to grant them. One is a ServiceAccount: when we deployed the dashboard, for example, we declared a kubernetes-dashboard-admin ServiceAccount and bound it to the cluster-admin ClusterRole with a ClusterRoleBinding. That approach is rather blunt and is not recommended for clouddriver. The other is a kubeconfig file (a UserAccount-style configuration), the same way kubectl uses its kubeconfig file to access the cluster. This approach requires issuing a certificate so that the apiserver accepts the client key and CA-signed certificate; locally we end up with a file carrying the certificate and user information, and with that file we can talk to the apiserver.

    3.3.1 Issue the certificate and build the kube-config file

    Issue an admin certificate (admin.pem) for clouddriver.

    [root@hdss7-200 ~]# cd /opt/certs/
    [root@hdss7-200 certs]# cp client-csr.json admin-csr.json
    [root@hdss7-200 certs]# vi admin-csr.json   # certificate request; set CN to cluster-admin, because the CN in the certificate is used directly as the request's username. If you want to keep this certificate but use it under a different user, you would create your own role and bind it in the cluster with a ClusterRoleBinding.

    {
        "CN": "cluster-admin",
        "hosts": [
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "beijing",
                "L": "beijing",
                "O": "od",
                "OU": "ops"
            }
        ]
    }

    [root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client admin-csr.json | cfssl-json -bare admin

    Explanation:

    gencert: generate a new key and a signed certificate
     -ca: the CA certificate
     -ca-key: the CA private key
     -config: the JSON configuration used when issuing certificates
     -profile: the profile section inside -config; the certificate is generated according to that profile
    So cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json means: use the CA certificate ca.pem, the CA private key ca-key.pem and ca-config.json to generate a new key and signed certificate for a given request file. All three are needed because cfssl has to prove the CA itself is valid and untampered: ca.pem is the CA certificate (which carries the CA public key), and together with the CA private key it signs the request whose CN we supplied.
    -profile=client reads ca-config.json and uses its client profile
    cfssl-json only reshapes the JSON output; -bare mainly determines the output file name prefix
    Overall: use the CA to turn the admin-csr.json request into a certificate, a private key and a CSR file.

    [root@hdss7-200 certs]# ll admin*     # building the ua (UserAccount) config file for clouddriver will need admin-key.pem and admin.pem

    -rw-r--r--. 1 root root 1001 Jul 27 08:03 admin.csr
    -rw-r--r--. 1 root root  285 Jul 27 07:58 admin-csr.json
    -rw-------. 1 root root 1675 Jul 27 08:03 admin-key.pem       # private key
    -rw-r--r--. 1 root root 1371 Jul 27 08:03 admin.pem           # certificate
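    Since the CN becomes the username the apiserver sees, it is worth double-checking it before building the kubeconfig (the output format may differ slightly between openssl versions):

    [root@hdss7-200 certs]# openssl x509 -in admin.pem -noout -subject
    subject= /C=CN/ST=beijing/L=beijing/O=od/OU=ops/CN=cluster-admin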

    3.3.2 Build the kubeconfig on a worker node

    On any worker (node) host:

    [root@hdss7-21 ~]# scp root@hdss7-200:/opt/certs/ca.pem .
    [root@hdss7-21 ~]# scp root@hdss7-200:/opt/certs/admin.pem .
    [root@hdss7-21 ~]# scp root@hdss7-200:/opt/certs/admin-key.pem .

    # The kube-config file is also used to operate the cluster through the apiserver; it is normally granted cluster-admin permissions

    # It points at 10.4.7.10:7443 (the VIP) rather than one specific apiserver, which is what keeps it highly available. This command defines a cluster entry named myk8s, binds ca.pem to it and records the apiserver address, and writes all of that into the file ./config; afterwards you can cat config and see the CA certificate embedded in it
    [root@hdss7-21 ~]# kubectl config set-cluster myk8s --certificate-authority=./ca.pem --embed-certs=true --server=https://10.4.7.10:7443 --kubeconfig=config

    # This creates a user named cluster-admin in the config and attaches the admin-key.pem/admin.pem pair to it; after running it, cat config shows the CA certificate plus the admin certificate and key (users: cluster-admin, i.e. a UserAccount). The idea is that when we use this config file to create resources, we talk to the apiserver as cluster-admin and present the admin.pem client certificate during the TLS handshake; the apiserver verifies it against the ca.pem added in the previous step, takes the CN from the certificate (cluster-admin) as the username (just as a website's domain has to match its certificate CN), and from then on authorizes our requests under that identity
    [root@hdss7-21 ~]# kubectl config set-credentials cluster-admin --client-certificate=./admin.pem --client-key=./admin-key.pem --embed-certs=true --kubeconfig=config

    # set-context defines a context entry in the kubeconfig. Kubernetes can isolate environments completely by namespace, splitting different business lines into different namespaces, and kubectl switches between them (and between clusters) through contexts. For example, when kubectl connects to a cluster it looks by default for a file named config under /root/.kube and uses the information in it to talk to the apiserver. You may notice that when kubectl runs on the same host as the apiserver it works even without this file: in that case it talks to the local insecure port, 127.0.0.1:8080 (netstat -tulpn |grep 8080 shows 127.0.0.1:8080  LISTEN  kube-apiserver). Rancher, similarly, keeps one kubeconfig per cluster, with matching user, cluster and context entries. Once clusters, users and contexts are defined in one or more config files, kubectl config use-context switches between clusters quickly. Here we define the context myk8s-context, which binds the cluster myk8s to the user cluster-admin, and write it into ./config
    [root@hdss7-21 ~]# kubectl config set-context myk8s-context --cluster=myk8s --user=cluster-admin --kubeconfig=config

    Switch the current context of the config file to myk8s-context and write that into ./config:
    [root@hdss7-21 ~]# kubectl config use-context myk8s-context --kubeconfig=config

    [root@hdss7-21 ~]# cat config        # inspect the generated file
    [root@hdss7-21 ~]# kubectl get clusterrolebindings
    NAME                                                    AGE
    cluster-admin                                           137d
    grafana                                                 108d
    heapster                                                123d
    k8s-node                                                136d
    kube-state-metrics                                      108d
    kubernetes-dashboard-admin                              129d
    prometheus                                              108d
    spinnake                                                35m
    system:basic-user                                       137d
    system:controller:attachdetach-controller               137d
    system:controller:certificate-controller                137d
    system:controller:clusterrole-aggregation-controller    137d
    system:controller:cronjob-controller                    137d
    system:controller:daemon-set-controller                 137d
    system:controller:deployment-controller                 137d
    system:controller:disruption-controller                 137d
    system:controller:endpoint-controller                   137d
    system:controller:endpointslice-controller              101d
    system:controller:endpointslicemirroring-controller     101d
    system:controller:ephemeral-volume-controller           101d
    system:controller:expand-controller                     137d
    system:controller:generic-garbage-collector             137d
    system:controller:horizontal-pod-autoscaler             137d
    system:controller:job-controller                        137d
    system:controller:namespace-controller                  137d
    system:controller:node-controller                       137d
    system:controller:persistent-volume-binder              137d
    system:controller:pod-garbage-collector                 137d
    system:controller:pv-protection-controller              137d
    system:controller:pvc-protection-controller             137d
    system:controller:replicaset-controller                 137d
    system:controller:replication-controller                137d
    system:controller:resourcequota-controller              137d
    system:controller:root-ca-cert-publisher                101d
    system:controller:route-controller                      137d
    system:controller:service-account-controller            137d
    system:controller:service-controller                    137d
    system:controller:statefulset-controller                137d
    system:controller:ttl-after-finished-controller         101d
    system:controller:ttl-controller                        137d
    system:coredns                                          130d
    system:discovery                                        137d
    system:kube-controller-manager                          137d
    system:kube-dns                                         137d
    system:kube-scheduler                                   137d
    system:monitoring                                       101d
    system:node                                             137d
    system:node-proxier                                     137d
    system:public-info-viewer                               137d
    system:service-account-issuer-discovery                 101d
    system:volume-scheduler                                 137d
    traefik-ingress-controller                              129d

    Create a ClusterRoleBinding that binds the user cluster-admin to the ClusterRole cluster-admin. After running it, cat config shows no change, because kubectl create clusterrolebinding only creates the binding inside the cluster; it does not touch the config file.
    [root@hdss7-21 ~]# kubectl create clusterrolebinding myk8s-admin --clusterrole=cluster-admin --user=cluster-admin

    # Test whether the kube-config works. As noted above, kubectl first looks for /root/.kube/config to talk to the apiserver and only falls back to 127.0.0.1:8080 if the file is missing.   Note: the dashboard can only be logged into with a ServiceAccount token

    [root@hdss7-200 ~]# mkdir /root/.kube;cd /root/.kube/
    [root@hdss7-21 ~]# scp config root@10.4.7.200:/root/.kube/
    [root@hdss7-21 ~]# scp /opt/kubernetes/server/bin/kubectl root@10.4.7.200:/usr/bin/
    [root@hdss7-200 .kube]# kubectl get pod  # verify that it works

    # By default kubectl looks for /root/.kube/config; to make it use a different file instead (for reference only, no need to run this):

    echo "export KUBECONFIG=<path-to-kubeconfig>" >>~/.bash_profile

    3.3.3 Create the ConfigMap resource

    Load the kubeconfig file (config) we just created into Kubernetes as a ConfigMap; it will be mounted into the Spinnaker-clouddriver Pod so that clouddriver can use it to talk to the apiserver.

    [root@hdss7-21 ~]# kubectl create configmap default-kubeconfig --from-file=default-kubeconfig=config -n armory
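    The clouddriver Deployment (covered in the next part of this series) consumes this ConfigMap by mounting it as a file; a minimal sketch of that mechanism, with an illustrative mount path rather than the one from the real manifest:

    # sketch only: how a ConfigMap ends up as a file inside a Pod
    spec:
      containers:
      - name: clouddriver
        volumeMounts:
        - name: default-kubeconfig
          mountPath: /home/spinnaker/.kube      # illustrative path
      volumes:
      - name: default-kubeconfig
        configMap:
          name: default-kubeconfig
          defaultMode: 420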

    3.3.4 Delete the config-related files

    For safety, delete config, ca.pem, admin.pem and admin-key.pem once this step is finished:

     [root@hdss7-21 ~]# rm -f config ca.pem admin.pem admin-key.pem

    3.4 Resource manifests

    Spinnaker's configuration is fairly involved. One of the ConfigMaps, default-config.yaml, is very complex but normally does not need to be modified, because the Armory distribution of Spinnaker already bundles Spinnaker's complicated configuration together.

    [root@hdss7-200 ~]# cd /data/k8s-yaml/armory/clouddriver
    [root@hdss7-200 clouddriver]# vi init-env.yaml

    # init-env.yaml
    # includes the redis address, the externally exposed API domain, and so on
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: init-env
      namespace: armory
    data:
      API_HOST: http://spinnaker.od.com/api
      ARMORY_ID: c02f0781-92f5-4e80-86db-0ba8fe7b8544
      ARMORYSPINNAKER_CONF_STORE_BUCKET: armory-platform
      ARMORYSPINNAKER_CONF_STORE_PREFIX: front50
      ARMORYSPINNAKER_GCS_ENABLED: "false"
      ARMORYSPINNAKER_S3_ENABLED: "true"
      AUTH_ENABLED: "false"
      AWS_REGION: us-east-1
      BASE_IP: 127.0.0.1
      CLOUDDRIVER_OPTS: -Dspring.profiles.active=armory,configurator,local
      CONFIGURATOR_ENABLED: "false"
      DECK_HOST: http://spinnaker.od.com
      ECHO_OPTS: -Dspring.profiles.active=armory,configurator,local
      GATE_OPTS: -Dspring.profiles.active=armory,configurator,local
      IGOR_OPTS: -Dspring.profiles.active=armory,configurator,local
      PLATFORM_ARCHITECTURE: k8s
      REDIS_HOST: redis://redis:6379
      SERVER_ADDRESS: 0.0.0.0
      SPINNAKER_AWS_DEFAULT_REGION: us-east-1
      SPINNAKER_AWS_ENABLED: "false"
      SPINNAKER_CONFIG_DIR: /home/spinnaker/config
      SPINNAKER_GOOGLE_PROJECT_CREDENTIALS_PATH: ""
      SPINNAKER_HOME: /home/spinnaker
      SPRING_PROFILES_ACTIVE: armory,configurator,local

    Notes:

    1. Under data: this manifest defines a series of environment variables (for example API_HOST: http://spinnaker.od.com/api; REDIS_HOST: redis://redis:6379 connects to port 6379 of the Service named redis).
    2. API_HOST: http://spinnaker.od.com/api    API_HOST is the address of the Gate component; every request must go through Gate.
    3. ARMORYSPINNAKER_CONF_STORE_BUCKET: armory-platform    once spinnaker-clouddriver is deployed, the clouddriver Pod uses this value to create a bucket named armory-platform in minio; change it if you need a different name.
    4. ARMORY_ID: c02f0781-92f5-4e80-86db-0ba8fe7b8544    a random string generated when this scheme was exported with the scaffolding tool; reusing this exact ID causes no problems.
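    The Spinnaker Deployments pick these key/value pairs up as container environment variables. A minimal sketch of the mechanism (envFrom is standard Kubernetes; whether the Armory manifests use envFrom or individual env entries, the effect is the same):

    # sketch only: every key of the init-env ConfigMap becomes an environment variable in the container
    spec:
      containers:
      - name: clouddriver
        envFrom:
        - configMapRef:
            name: init-env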

    [root@hdss7-200 clouddriver]# vi default-config.yaml

    1. kind: ConfigMap
    2. apiVersion: v1
    3. metadata:
    4. name: default-config
    5. namespace: armory
    6. data:
    7. barometer.yml: |
    8. server:
    9. port: 9092
    10. spinnaker:
    11. redis:
    12. host: ${services.redis.host}
    13. port: ${services.redis.port}
    14. clouddriver-armory.yml: |
    15. aws:
    16. defaultAssumeRole: role/${SPINNAKER_AWS_DEFAULT_ASSUME_ROLE:SpinnakerManagedProfile}
    17. accounts:
    18. - name: default-aws-account
    19. accountId: ${SPINNAKER_AWS_DEFAULT_ACCOUNT_ID:none}
    20. client:
    21. maxErrorRetry: 20
    22. serviceLimits:
    23. cloudProviderOverrides:
    24. aws:
    25. rateLimit: 15.0
    26. implementationLimits:
    27. AmazonAutoScaling:
    28. defaults:
    29. rateLimit: 3.0
    30. AmazonElasticLoadBalancing:
    31. defaults:
    32. rateLimit: 5.0
    33. security.basic.enabled: false
    34. management.security.enabled: false
    35. clouddriver-dev.yml: |
    36. serviceLimits:
    37. defaults:
    38. rateLimit: 2
    39. clouddriver.yml: |
    40. server:
    41. port: ${services.clouddriver.port:7002}
    42. address: ${services.clouddriver.host:localhost}
    43. redis:
    44. connection: ${REDIS_HOST:redis://localhost:6379}
    45. udf:
    46. enabled: ${services.clouddriver.aws.udf.enabled:true}
    47. udfRoot: /opt/spinnaker/config/udf
    48. defaultLegacyUdf: false
    49. default:
    50. account:
    51. env: ${providers.aws.primaryCredentials.name}
    52. aws:
    53. enabled: ${providers.aws.enabled:false}
    54. defaults:
    55. iamRole: ${providers.aws.defaultIAMRole:BaseIAMRole}
    56. defaultRegions:
    57. - name: ${providers.aws.defaultRegion:us-east-1}
    58. defaultFront50Template: ${services.front50.baseUrl}
    59. defaultKeyPairTemplate: ${providers.aws.defaultKeyPairTemplate}
    60. azure:
    61. enabled: ${providers.azure.enabled:false}
    62. accounts:
    63. - name: ${providers.azure.primaryCredentials.name}
    64. clientId: ${providers.azure.primaryCredentials.clientId}
    65. appKey: ${providers.azure.primaryCredentials.appKey}
    66. tenantId: ${providers.azure.primaryCredentials.tenantId}
    67. subscriptionId: ${providers.azure.primaryCredentials.subscriptionId}
    68. google:
    69. enabled: ${providers.google.enabled:false}
    70. accounts:
    71. - name: ${providers.google.primaryCredentials.name}
    72. project: ${providers.google.primaryCredentials.project}
    73. jsonPath: ${providers.google.primaryCredentials.jsonPath}
    74. consul:
    75. enabled: ${providers.google.primaryCredentials.consul.enabled:false}
    76. cf:
    77. enabled: ${providers.cf.enabled:false}
    78. accounts:
    79. - name: ${providers.cf.primaryCredentials.name}
    80. api: ${providers.cf.primaryCredentials.api}
    81. console: ${providers.cf.primaryCredentials.console}
    82. org: ${providers.cf.defaultOrg}
    83. space: ${providers.cf.defaultSpace}
    84. username: ${providers.cf.account.name:}
    85. password: ${providers.cf.account.password:}
    86. kubernetes:
    87. enabled: ${providers.kubernetes.enabled:false}
    88. accounts:
    89. - name: ${providers.kubernetes.primaryCredentials.name}
    90. dockerRegistries:
    91. - accountName: ${providers.kubernetes.primaryCredentials.dockerRegistryAccount}
    92. openstack:
    93. enabled: ${providers.openstack.enabled:false}
    94. accounts:
    95. - name: ${providers.openstack.primaryCredentials.name}
    96. authUrl: ${providers.openstack.primaryCredentials.authUrl}
    97. username: ${providers.openstack.primaryCredentials.username}
    98. password: ${providers.openstack.primaryCredentials.password}
    99. projectName: ${providers.openstack.primaryCredentials.projectName}
    100. domainName: ${providers.openstack.primaryCredentials.domainName:Default}
    101. regions: ${providers.openstack.primaryCredentials.regions}
    102. insecure: ${providers.openstack.primaryCredentials.insecure:false}
    103. userDataFile: ${providers.openstack.primaryCredentials.userDataFile:}
    104. lbaas:
    105. pollTimeout: 60
    106. pollInterval: 5
    107. dockerRegistry:
    108. enabled: ${providers.dockerRegistry.enabled:false}
    109. accounts:
    110. - name: ${providers.dockerRegistry.primaryCredentials.name}
    111. address: ${providers.dockerRegistry.primaryCredentials.address}
    112. username: ${providers.dockerRegistry.primaryCredentials.username:}
    113. passwordFile: ${providers.dockerRegistry.primaryCredentials.passwordFile}
    114. credentials:
    115. primaryAccountTypes: ${providers.aws.primaryCredentials.name}, ${providers.google.primaryCredentials.name}, ${providers.cf.primaryCredentials.name}, ${providers.azure.primaryCredentials.name}
    116. challengeDestructiveActionsEnvironments: ${providers.aws.primaryCredentials.name}, ${providers.google.primaryCredentials.name}, ${providers.cf.primaryCredentials.name}, ${providers.azure.primaryCredentials.name}
    117. spectator:
    118. applicationName: ${spring.application.name}
    119. webEndpoint:
    120. enabled: ${services.spectator.webEndpoint.enabled:false}
    121. prototypeFilter:
    122. path: ${services.spectator.webEndpoint.prototypeFilter.path:}
    123. stackdriver:
    124. enabled: ${services.stackdriver.enabled}
    125. projectName: ${services.stackdriver.projectName}
    126. credentialsPath: ${services.stackdriver.credentialsPath}
    127. stackdriver:
    128. hints:
    129. - name: controller.invocations
    130. labels:
    131. - account
    132. - region
    133. dinghy.yml: ""
    134. echo-armory.yml: |
    135. diagnostics:
    136. enabled: true
    137. id: ${ARMORY_ID:unknown}
    138. armorywebhooks:
    139. enabled: false
    140. forwarding:
    141. baseUrl: http://armory-dinghy:8081
    142. endpoint: v1/webhooks
    143. echo-noncron.yml: |
    144. scheduler:
    145. enabled: false
    146. echo.yml: |
    147. server:
    148. port: ${services.echo.port:8089}
    149. address: ${services.echo.host:localhost}
    150. cassandra:
    151. enabled: ${services.echo.cassandra.enabled:false}
    152. embedded: ${services.cassandra.embedded:false}
    153. host: ${services.cassandra.host:localhost}
    154. spinnaker:
    155. baseUrl: ${services.deck.baseUrl}
    156. cassandra:
    157. enabled: ${services.echo.cassandra.enabled:false}
    158. inMemory:
    159. enabled: ${services.echo.inMemory.enabled:true}
    160. front50:
    161. baseUrl: ${services.front50.baseUrl:http://localhost:8080 }
    162. orca:
    163. baseUrl: ${services.orca.baseUrl:http://localhost:8083 }
    164. endpoints.health.sensitive: false
    165. slack:
    166. enabled: ${services.echo.notifications.slack.enabled:false}
    167. token: ${services.echo.notifications.slack.token}
    168. spring:
    169. mail:
    170. host: ${mail.host}
    171. mail:
    172. enabled: ${services.echo.notifications.mail.enabled:false}
    173. host: ${services.echo.notifications.mail.host}
    174. from: ${services.echo.notifications.mail.fromAddress}
    175. hipchat:
    176. enabled: ${services.echo.notifications.hipchat.enabled:false}
    177. baseUrl: ${services.echo.notifications.hipchat.url}
    178. token: ${services.echo.notifications.hipchat.token}
    179. twilio:
    180. enabled: ${services.echo.notifications.sms.enabled:false}
    181. baseUrl: ${services.echo.notifications.sms.url:https://api.twilio.com/ }
    182. account: ${services.echo.notifications.sms.account}
    183. token: ${services.echo.notifications.sms.token}
    184. from: ${services.echo.notifications.sms.from}
    185. scheduler:
    186. enabled: ${services.echo.cron.enabled:true}
    187. threadPoolSize: 20
    188. triggeringEnabled: true
    189. pipelineConfigsPoller:
    190. enabled: true
    191. pollingIntervalMs: 30000
    192. cron:
    193. timezone: ${services.echo.cron.timezone}
    194. spectator:
    195. applicationName: ${spring.application.name}
    196. webEndpoint:
    197. enabled: ${services.spectator.webEndpoint.enabled:false}
    198. prototypeFilter:
    199. path: ${services.spectator.webEndpoint.prototypeFilter.path:}
    200. stackdriver:
    201. enabled: ${services.stackdriver.enabled}
    202. projectName: ${services.stackdriver.projectName}
    203. credentialsPath: ${services.stackdriver.credentialsPath}
    204. webhooks:
    205. artifacts:
    206. enabled: true
    207. fetch.sh: |+
    208. CONFIG_LOCATION=${SPINNAKER_HOME:-"/opt/spinnaker"}/config
    209. CONTAINER=$1
    210. rm -f /opt/spinnaker/config/*.yml
    211. mkdir -p ${CONFIG_LOCATION}
    212. for filename in /opt/spinnaker/config/default/*.yml; do
    213. cp $filename ${CONFIG_LOCATION}
    214. done
    215. if [ -d /opt/spinnaker/config/custom ]; then
    216. for filename in /opt/spinnaker/config/custom/*; do
    217. cp $filename ${CONFIG_LOCATION}
    218. done
    219. fi
    220. add_ca_certs() {
    221. ca_cert_path="$1"
    222. jks_path="$2"
    223. alias="$3"
    224. if [[ "$(whoami)" != "root" ]]; then
    225. echo "INFO: I do not have proper permisions to add CA roots"
    226. return
    227. fi
    228. if [[ ! -f ${ca_cert_path} ]]; then
    229. echo "INFO: No CA cert found at ${ca_cert_path}"
    230. return
    231. fi
    232. keytool -importcert \
    233. -file ${ca_cert_path} \
    234. -keystore ${jks_path} \
    235. -alias ${alias} \
    236. -storepass changeit \
    237. -noprompt
    238. }
    239. if [ `which keytool` ]; then
    240. echo "INFO: Keytool found adding certs where appropriate"
    241. add_ca_certs "${CONFIG_LOCATION}/ca.crt" "/etc/ssl/certs/java/cacerts" "custom-ca"
    242. else
    243. echo "INFO: Keytool not found, not adding any certs/private keys"
    244. fi
    245. saml_pem_path="/opt/spinnaker/config/custom/saml.pem"
    246. saml_pkcs12_path="/tmp/saml.pkcs12"
    247. saml_jks_path="${CONFIG_LOCATION}/saml.jks"
    248. x509_ca_cert_path="/opt/spinnaker/config/custom/x509ca.crt"
    249. x509_client_cert_path="/opt/spinnaker/config/custom/x509client.crt"
    250. x509_jks_path="${CONFIG_LOCATION}/x509.jks"
    251. x509_nginx_cert_path="/opt/nginx/certs/ssl.crt"
    252. if [ "${CONTAINER}" == "gate" ]; then
    253. if [ -f ${saml_pem_path} ]; then
    254. echo "Loading ${saml_pem_path} into ${saml_jks_path}"
    255. openssl pkcs12 -export -out ${saml_pkcs12_path} -in ${saml_pem_path} -password pass:changeit -name saml
    256. keytool -genkey -v -keystore ${saml_jks_path} -alias saml \
    257. -keyalg RSA -keysize 2048 -validity 10000 \
    258. -storepass changeit -keypass changeit -dname "CN=armory"
    259. keytool -importkeystore \
    260. -srckeystore ${saml_pkcs12_path} \
    261. -srcstoretype PKCS12 \
    262. -srcstorepass changeit \
    263. -destkeystore ${saml_jks_path} \
    264. -deststoretype JKS \
    265. -storepass changeit \
    266. -alias saml \
    267. -destalias saml \
    268. -noprompt
    269. else
    270. echo "No SAML IDP pemfile found at ${saml_pem_path}"
    271. fi
    272. if [ -f ${x509_ca_cert_path} ]; then
    273. echo "Loading ${x509_ca_cert_path} into ${x509_jks_path}"
    274. add_ca_certs ${x509_ca_cert_path} ${x509_jks_path} "ca"
    275. else
    276. echo "No x509 CA cert found at ${x509_ca_cert_path}"
    277. fi
    278. if [ -f ${x509_client_cert_path} ]; then
    279. echo "Loading ${x509_client_cert_path} into ${x509_jks_path}"
    280. add_ca_certs ${x509_client_cert_path} ${x509_jks_path} "client"
    281. else
    282. echo "No x509 Client cert found at ${x509_client_cert_path}"
    283. fi
    284. if [ -f ${x509_nginx_cert_path} ]; then
    285. echo "Creating a self-signed CA (EXPIRES IN 360 DAYS) with java keystore: ${x509_jks_path}"
    286. echo -e "\n\n\n\n\n\ny\n" | keytool -genkey -keyalg RSA -alias server -keystore keystore.jks -storepass changeit -validity 360 -keysize 2048
    287. keytool -importkeystore \
    288. -srckeystore keystore.jks \
    289. -srcstorepass changeit \
    290. -destkeystore "${x509_jks_path}" \
    291. -storepass changeit \
    292. -srcalias server \
    293. -destalias server \
    294. -noprompt
    295. else
    296. echo "No x509 nginx cert found at ${x509_nginx_cert_path}"
    297. fi
    298. fi
    299. if [ "${CONTAINER}" == "nginx" ]; then
    300. nginx_conf_path="/opt/spinnaker/config/default/nginx.conf"
    301. if [ -f ${nginx_conf_path} ]; then
    302. cp ${nginx_conf_path} /etc/nginx/nginx.conf
    303. fi
    304. fi
    305. fiat.yml: |-
    306. server:
    307. port: ${services.fiat.port:7003}
    308. address: ${services.fiat.host:localhost}
    309. redis:
    310. connection: ${services.redis.connection:redis://localhost:6379}
    311. spectator:
    312. applicationName: ${spring.application.name}
    313. webEndpoint:
    314. enabled: ${services.spectator.webEndpoint.enabled:false}
    315. prototypeFilter:
    316. path: ${services.spectator.webEndpoint.prototypeFilter.path:}
    317. stackdriver:
    318. enabled: ${services.stackdriver.enabled}
    319. projectName: ${services.stackdriver.projectName}
    320. credentialsPath: ${services.stackdriver.credentialsPath}
    321. hystrix:
    322. command:
    323. default.execution.isolation.thread.timeoutInMilliseconds: 20000
    324. logging:
    325. level:
    326. com.netflix.spinnaker.fiat: DEBUG
    327. front50-armory.yml: |
    328. spinnaker:
    329. redis:
    330. enabled: true
    331. host: redis
    332. front50.yml: |
    333. server:
    334. port: ${services.front50.port:8080}
    335. address: ${services.front50.host:localhost}
    336. hystrix:
    337. command:
    338. default.execution.isolation.thread.timeoutInMilliseconds: 15000
    339. cassandra:
    340. enabled: ${services.front50.cassandra.enabled:false}
    341. embedded: ${services.cassandra.embedded:false}
    342. host: ${services.cassandra.host:localhost}
    343. aws:
    344. simpleDBEnabled: ${providers.aws.simpleDBEnabled:false}
    345. defaultSimpleDBDomain: ${providers.aws.defaultSimpleDBDomain}
    346. spinnaker:
    347. cassandra:
    348. enabled: ${services.front50.cassandra.enabled:false}
    349. host: ${services.cassandra.host:localhost}
    350. port: ${services.cassandra.port:9042}
    351. cluster: ${services.cassandra.cluster:CASS_SPINNAKER}
    352. keyspace: front50
    353. name: global
    354. redis:
    355. enabled: ${services.front50.redis.enabled:false}
    356. gcs:
    357. enabled: ${services.front50.gcs.enabled:false}
    358. bucket: ${services.front50.storage_bucket:}
    359. bucketLocation: ${services.front50.bucket_location:}
    360. rootFolder: ${services.front50.rootFolder:front50}
    361. project: ${providers.google.primaryCredentials.project}
    362. jsonPath: ${providers.google.primaryCredentials.jsonPath}
    363. s3:
    364. enabled: ${services.front50.s3.enabled:false}
    365. bucket: ${services.front50.storage_bucket:}
    366. rootFolder: ${services.front50.rootFolder:front50}
    367. spectator:
    368. applicationName: ${spring.application.name}
    369. webEndpoint:
    370. enabled: ${services.spectator.webEndpoint.enabled:false}
    371. prototypeFilter:
    372. path: ${services.spectator.webEndpoint.prototypeFilter.path:}
    373. stackdriver:
    374. enabled: ${services.stackdriver.enabled}
    375. projectName: ${services.stackdriver.projectName}
    376. credentialsPath: ${services.stackdriver.credentialsPath}
    377. stackdriver:
    378. hints:
    379. - name: controller.invocations
    380. labels:
    381. - application
    382. - cause
    383. - name: aws.request.httpRequestTime
    384. labels:
    385. - status
    386. - exception
    387. - AWSErrorCode
    388. - name: aws.request.requestSigningTime
    389. labels:
    390. - exception
    391. gate-armory.yml: |+
    392. lighthouse:
    393. baseUrl: http://${DEFAULT_DNS_NAME:lighthouse}:5000
    394. gate.yml: |
    395. server:
    396. port: ${services.gate.port:8084}
    397. address: ${services.gate.host:localhost}
    398. redis:
    399. connection: ${REDIS_HOST:redis://localhost:6379}
    400. configuration:
    401. secure: true
    402. spectator:
    403. applicationName: ${spring.application.name}
    404. webEndpoint:
    405. enabled: ${services.spectator.webEndpoint.enabled:false}
    406. prototypeFilter:
    407. path: ${services.spectator.webEndpoint.prototypeFilter.path:}
    408. stackdriver:
    409. enabled: ${services.stackdriver.enabled}
    410. projectName: ${services.stackdriver.projectName}
    411. credentialsPath: ${services.stackdriver.credentialsPath}
    412. stackdriver:
    413. hints:
    414. - name: EurekaOkClient_Request
    415. labels:
    416. - cause
    417. - reason
    418. - status
    419. igor-nonpolling.yml: |
    420. jenkins:
    421. polling:
    422. enabled: false
    423. igor.yml: |
    424. server:
    425. port: ${services.igor.port:8088}
    426. address: ${services.igor.host:localhost}
    427. jenkins:
    428. enabled: ${services.jenkins.enabled:false}
    429. masters:
    430. - name: ${services.jenkins.defaultMaster.name}
    431. address: ${services.jenkins.defaultMaster.baseUrl}
    432. username: ${services.jenkins.defaultMaster.username}
    433. password: ${services.jenkins.defaultMaster.password}
    434. csrf: ${services.jenkins.defaultMaster.csrf:false}
    435. travis:
    436. enabled: ${services.travis.enabled:false}
    437. masters:
    438. - name: ${services.travis.defaultMaster.name}
    439. baseUrl: ${services.travis.defaultMaster.baseUrl}
    440. address: ${services.travis.defaultMaster.address}
    441. githubToken: ${services.travis.defaultMaster.githubToken}
    442. dockerRegistry:
    443. enabled: ${providers.dockerRegistry.enabled:false}
    444. redis:
    445. connection: ${REDIS_HOST:redis://localhost:6379}
    446. spectator:
    447. applicationName: ${spring.application.name}
    448. webEndpoint:
    449. enabled: ${services.spectator.webEndpoint.enabled:false}
    450. prototypeFilter:
    451. path: ${services.spectator.webEndpoint.prototypeFilter.path:}
    452. stackdriver:
    453. enabled: ${services.stackdriver.enabled}
    454. projectName: ${services.stackdriver.projectName}
    455. credentialsPath: ${services.stackdriver.credentialsPath}
    456. stackdriver:
    457. hints:
    458. - name: controller.invocations
    459. labels:
    460. - master
    461. kayenta-armory.yml: |
    462. kayenta:
    463. aws:
    464. enabled: ${ARMORYSPINNAKER_S3_ENABLED:false}
    465. accounts:
    466. - name: aws-s3-storage
    467. bucket: ${ARMORYSPINNAKER_CONF_STORE_BUCKET}
    468. rootFolder: kayenta
    469. supportedTypes:
    470. - OBJECT_STORE
    471. - CONFIGURATION_STORE
    472. s3:
    473. enabled: ${ARMORYSPINNAKER_S3_ENABLED:false}
    474. google:
    475. enabled: ${ARMORYSPINNAKER_GCS_ENABLED:false}
    476. accounts:
    477. - name: cloud-armory
    478. bucket: ${ARMORYSPINNAKER_CONF_STORE_BUCKET}
    479. rootFolder: kayenta-prod
    480. supportedTypes:
    481. - METRICS_STORE
    482. - OBJECT_STORE
    483. - CONFIGURATION_STORE
    484. gcs:
    485. enabled: ${ARMORYSPINNAKER_GCS_ENABLED:false}
    486. kayenta.yml: |2
    487. server:
    488. port: 8090
    489. kayenta:
    490. atlas:
    491. enabled: false
    492. google:
    493. enabled: false
    494. aws:
    495. enabled: false
    496. datadog:
    497. enabled: false
    498. prometheus:
    499. enabled: false
    500. gcs:
    501. enabled: false
    502. s3:
    503. enabled: false
    504. stackdriver:
    505. enabled: false
    506. memory:
    507. enabled: false
    508. configbin:
    509. enabled: false
    510. keiko:
    511. queue:
    512. redis:
    513. queueName: kayenta.keiko.queue
    514. deadLetterQueueName: kayenta.keiko.queue.deadLetters
    515. redis:
    516. connection: ${REDIS_HOST:redis://localhost:6379}
    517. spectator:
    518. applicationName: ${spring.application.name}
    519. webEndpoint:
    520. enabled: true
    521. swagger:
    522. enabled: true
    523. title: Kayenta API
    524. description:
    525. contact:
    526. patterns:
    527. - /admin.*
    528. - /canary.*
    529. - /canaryConfig.*
    530. - /canaryJudgeResult.*
    531. - /credentials.*
    532. - /fetch.*
    533. - /health
    534. - /judges.*
    535. - /metadata.*
    536. - /metricSetList.*
    537. - /metricSetPairList.*
    538. - /pipeline.*
    539. security.basic.enabled: false
    540. management.security.enabled: false
    541. nginx.conf: |
    542. user nginx;
    543. worker_processes 1;
    544. error_log /var/log/nginx/error.log warn;
    545. pid /var/run/nginx.pid;
    546. events {
    547. worker_connections 1024;
    548. }
    549. http {
    550. include /etc/nginx/mime.types;
    551. default_type application/octet-stream;
    552. log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    553. '$status $body_bytes_sent "$http_referer" '
    554. '"$http_user_agent" "$http_x_forwarded_for"';
    555. access_log /var/log/nginx/access.log main;
    556. sendfile on;
    557. keepalive_timeout 65;
    558. include /etc/nginx/conf.d/*.conf;
    559. }
    560. stream {
    561. upstream gate_api {
    562. server armory-gate:8085;
    563. }
    564. server {
    565. listen 8085;
    566. proxy_pass gate_api;
    567. }
    568. }
    569. nginx.http.conf: |
    570. gzip on;
    571. gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
    572. server {
    573. listen 80;
    574. listen [::]:80;
    575. location / {
    576. proxy_pass http://armory-deck/;
    577. }
    578. location /api/ {
    579. proxy_pass http://armory-gate:8084/;
    580. }
    581. location /slack/ {
    582. proxy_pass http://armory-platform:10000/;
    583. }
    584. rewrite ^/login(.*)$ /api/login$1 last;
    585. rewrite ^/auth(.*)$ /api/auth$1 last;
    586. }
    587. nginx.https.conf: |
    588. gzip on;
    589. gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
    590. server {
    591. listen 80;
    592. listen [::]:80;
    593. return 301 https://$host$request_uri;
    594. }
    595. server {
    596. listen 443 ssl;
    597. listen [::]:443 ssl;
    598. ssl on;
    599. ssl_certificate /opt/nginx/certs/ssl.crt;
    600. ssl_certificate_key /opt/nginx/certs/ssl.key;
    601. location / {
    602. proxy_pass http://armory-deck/;
    603. }
    604. location /api/ {
    605. proxy_pass http://armory-gate:8084/;
    606. proxy_set_header Host $host;
    607. proxy_set_header X-Real-IP $proxy_protocol_addr;
    608. proxy_set_header X-Forwarded-For $proxy_protocol_addr;
    609. proxy_set_header X-Forwarded-Proto $scheme;
    610. }
    611. location /slack/ {
    612. proxy_pass http://armory-platform:10000/;
    613. }
    614. rewrite ^/login(.*)$ /api/login$1 last;
    615. rewrite ^/auth(.*)$ /api/auth$1 last;
    616. }
    617. orca-armory.yml: |
    618. mine:
    619. baseUrl: http://${services.barometer.host}:${services.barometer.port}
    620. pipelineTemplate:
    621. enabled: ${features.pipelineTemplates.enabled:false}
    622. jinja:
    623. enabled: true
    624. kayenta:
    625. enabled: ${services.kayenta.enabled:false}
    626. baseUrl: ${services.kayenta.baseUrl}
    627. jira:
    628. enabled: ${features.jira.enabled:false}
    629. basicAuth: "Basic ${features.jira.basicAuthToken}"
    630. url: ${features.jira.createIssueUrl}
    631. webhook:
    632. preconfigured:
    633. - label: Enforce Pipeline Policy
    634. description: Checks pipeline configuration against policy requirements
    635. type: enforcePipelinePolicy
    636. enabled: ${features.certifiedPipelines.enabled:false}
    637. url: "http://lighthouse:5000/v1/pipelines/${execution.application}/${execution.pipelineConfigId}?check_policy=yes"
    638. headers:
    639. Accept:
    640. - application/json
    641. method: GET
    642. waitForCompletion: true
    643. statusUrlResolution: getMethod
    644. statusJsonPath: $.status
    645. successStatuses: pass
    646. canceledStatuses:
    647. terminalStatuses: TERMINAL
    648. - label: "Jira: Create Issue"
    649. description: Enter a Jira ticket when this pipeline runs
    650. type: createJiraIssue
    651. enabled: ${jira.enabled}
    652. url: ${jira.url}
    653. customHeaders:
    654. "Content-Type": application/json
    655. Authorization: ${jira.basicAuth}
    656. method: POST
    657. parameters:
    658. - name: summary
    659. label: Issue Summary
    660. description: A short summary of your issue.
    661. - name: description
    662. label: Issue Description
    663. description: A longer description of your issue.
    664. - name: projectKey
    665. label: Project key
    666. description: The key of your JIRA project.
    667. - name: type
    668. label: Issue Type
    669. description: The type of your issue, e.g. "Task", "Story", etc.
    670. payload: |
    671. {
    672. "fields" : {
    673. "description": "${parameterValues['description']}",
    674. "issuetype": {
    675. "name": "${parameterValues['type']}"
    676. },
    677. "project": {
    678. "key": "${parameterValues['projectKey']}"
    679. },
    680. "summary": "${parameterValues['summary']}"
    681. }
    682. }
    683. waitForCompletion: false
    684. - label: "Jira: Update Issue"
    685. description: Update a previously created Jira Issue
    686. type: updateJiraIssue
    687. enabled: ${jira.enabled}
    688. url: "${execution.stages.?[type == 'createJiraIssue'][0]['context']['buildInfo']['self']}"
    689. customHeaders:
    690. "Content-Type": application/json
    691. Authorization: ${jira.basicAuth}
    692. method: PUT
    693. parameters:
    694. - name: summary
    695. label: Issue Summary
    696. description: A short summary of your issue.
    697. - name: description
    698. label: Issue Description
    699. description: A longer description of your issue.
    700. payload: |
    701. {
    702. "fields" : {
    703. "description": "${parameterValues['description']}",
    704. "summary": "${parameterValues['summary']}"
    705. }
    706. }
    707. waitForCompletion: false
    708. - label: "Jira: Transition Issue"
    709. description: Change state of existing Jira Issue
    710. type: transitionJiraIssue
    711. enabled: ${jira.enabled}
    712. url: "${execution.stages.?[type == 'createJiraIssue'][0]['context']['buildInfo']['self']}/transitions"
    713. customHeaders:
    714. "Content-Type": application/json
    715. Authorization: ${jira.basicAuth}
    716. method: POST
    717. parameters:
    718. - name: newStateID
    719. label: New State ID
    720. description: The ID of the state you want to transition the issue to.
    721. payload: |
    722. {
    723. "transition" : {
    724. "id" : "${parameterValues['newStateID']}"
    725. }
    726. }
    727. waitForCompletion: false
    728. - label: "Jira: Add Comment"
    729. description: Add a comment to an existing Jira Issue
    730. type: commentJiraIssue
    731. enabled: ${jira.enabled}
    732. url: "${execution.stages.?[type == 'createJiraIssue'][0]['context']['buildInfo']['self']}/comment"
    733. customHeaders:
    734. "Content-Type": application/json
    735. Authorization: ${jira.basicAuth}
    736. method: POST
    737. parameters:
    738. - name: body
    739. label: Comment body
    740. description: The text body of the component.
    741. payload: |
    742. {
    743. "body" : "${parameterValues['body']}"
    744. }
    745. waitForCompletion: false
    746. orca.yml: |
    747. server:
    748. port: ${services.orca.port:8083}
    749. address: ${services.orca.host:localhost}
    750. oort:
    751. baseUrl: ${services.oort.baseUrl:localhost:7002}
    752. front50:
    753. baseUrl: ${services.front50.baseUrl:localhost:8080}
    754. mort:
    755. baseUrl: ${services.mort.baseUrl:localhost:7002}
    756. kato:
    757. baseUrl: ${services.kato.baseUrl:localhost:7002}
    758. bakery:
    759. baseUrl: ${services.bakery.baseUrl:localhost:8087}
    760. extractBuildDetails: ${services.bakery.extractBuildDetails:true}
    761. allowMissingPackageInstallation: ${services.bakery.allowMissingPackageInstallation:true}
    762. echo:
    763. enabled: ${services.echo.enabled:false}
    764. baseUrl: ${services.echo.baseUrl:8089}
    765. igor:
    766. baseUrl: ${services.igor.baseUrl:8088}
    767. flex:
    768. baseUrl: http://not-a-host
    769. default:
    770. bake:
    771. account: ${providers.aws.primaryCredentials.name}
    772. securityGroups:
    773. vpc:
    774. securityGroups:
    775. redis:
    776. connection: ${REDIS_HOST:redis://localhost:6379}
    777. tasks:
    778. executionWindow:
    779. timezone: ${services.orca.timezone}
    780. spectator:
    781. applicationName: ${spring.application.name}
    782. webEndpoint:
    783. enabled: ${services.spectator.webEndpoint.enabled:false}
    784. prototypeFilter:
    785. path: ${services.spectator.webEndpoint.prototypeFilter.path:}
    786. stackdriver:
    787. enabled: ${services.stackdriver.enabled}
    788. projectName: ${services.stackdriver.projectName}
    789. credentialsPath: ${services.stackdriver.credentialsPath}
    790. stackdriver:
    791. hints:
    792. - name: controller.invocations
    793. labels:
    794. - application
    795. rosco-armory.yml: |
    796. redis:
    797. timeout: 50000
    798. rosco:
    799. jobs:
    800. local:
    801. timeoutMinutes: 60
    802. rosco.yml: |
    803. server:
    804. port: ${services.rosco.port:8087}
    805. address: ${services.rosco.host:localhost}
    806. redis:
    807. connection: ${REDIS_HOST:redis://localhost:6379}
    808. aws:
    809. enabled: ${providers.aws.enabled:false}
    810. docker:
    811. enabled: ${services.docker.enabled:false}
    812. bakeryDefaults:
    813. targetRepository: ${services.docker.targetRepository}
    814. google:
    815. enabled: ${providers.google.enabled:false}
    816. accounts:
    817. - name: ${providers.google.primaryCredentials.name}
    818. project: ${providers.google.primaryCredentials.project}
    819. jsonPath: ${providers.google.primaryCredentials.jsonPath}
    820. gce:
    821. bakeryDefaults:
    822. zone: ${providers.google.defaultZone}
    823. rosco:
    824. configDir: ${services.rosco.configDir}
    825. jobs:
    826. local:
    827. timeoutMinutes: 30
    828. spectator:
    829. applicationName: ${spring.application.name}
    830. webEndpoint:
    831. enabled: ${services.spectator.webEndpoint.enabled:false}
    832. prototypeFilter:
    833. path: ${services.spectator.webEndpoint.prototypeFilter.path:}
    834. stackdriver:
    835. enabled: ${services.stackdriver.enabled}
    836. projectName: ${services.stackdriver.projectName}
    837. credentialsPath: ${services.stackdriver.credentialsPath}
    838. stackdriver:
    839. hints:
    840. - name: bakes
    841. labels:
    842. - success
    843. spinnaker-armory.yml: |
    844. armory:
    845. architecture: 'k8s'
    846. features:
    847. artifacts:
    848. enabled: true
    849. pipelineTemplates:
    850. enabled: ${PIPELINE_TEMPLATES_ENABLED:false}
    851. infrastructureStages:
    852. enabled: ${INFRA_ENABLED:false}
    853. certifiedPipelines:
    854. enabled: ${CERTIFIED_PIPELINES_ENABLED:false}
    855. configuratorEnabled:
    856. enabled: true
    857. configuratorWizard:
    858. enabled: true
    859. configuratorCerts:
    860. enabled: true
    861. loadtestStage:
    862. enabled: ${LOADTEST_ENABLED:false}
    863. jira:
    864. enabled: ${JIRA_ENABLED:false}
    865. basicAuthToken: ${JIRA_BASIC_AUTH}
    866. url: ${JIRA_URL}
    867. login: ${JIRA_LOGIN}
    868. password: ${JIRA_PASSWORD}
    869. slaEnabled:
    870. enabled: ${SLA_ENABLED:false}
    871. chaosMonkey:
    872. enabled: ${CHAOS_ENABLED:false}
    873. armoryPlatform:
    874. enabled: ${PLATFORM_ENABLED:false}
    875. uiEnabled: ${PLATFORM_UI_ENABLED:false}
    876. services:
    877. default:
    878. host: ${DEFAULT_DNS_NAME:localhost}
    879. clouddriver:
    880. host: ${DEFAULT_DNS_NAME:armory-clouddriver}
    881. entityTags:
    882. enabled: false
    883. configurator:
    884. baseUrl: http://${CONFIGURATOR_HOST:armory-configurator}:8069
    885. echo:
    886. host: ${DEFAULT_DNS_NAME:armory-echo}
    887. deck:
    888. gateUrl: ${API_HOST:service.default.host}
    889. baseUrl: ${DECK_HOST:armory-deck}
    890. dinghy:
    891. enabled: ${DINGHY_ENABLED:false}
    892. host: ${DEFAULT_DNS_NAME:armory-dinghy}
    893. baseUrl: ${services.default.protocol}://${services.dinghy.host}:${services.dinghy.port}
    894. port: 8081
    895. front50:
    896. host: ${DEFAULT_DNS_NAME:armory-front50}
    897. cassandra:
    898. enabled: false
    899. redis:
    900. enabled: true
    901. gcs:
    902. enabled: ${ARMORYSPINNAKER_GCS_ENABLED:false}
    903. s3:
    904. enabled: ${ARMORYSPINNAKER_S3_ENABLED:false}
    905. storage_bucket: ${ARMORYSPINNAKER_CONF_STORE_BUCKET}
    906. rootFolder: ${ARMORYSPINNAKER_CONF_STORE_PREFIX:front50}
    907. gate:
    908. host: ${DEFAULT_DNS_NAME:armory-gate}
    909. igor:
    910. host: ${DEFAULT_DNS_NAME:armory-igor}
    911. kayenta:
    912. enabled: true
    913. host: ${DEFAULT_DNS_NAME:armory-kayenta}
    914. canaryConfigStore: true
    915. port: 8090
    916. baseUrl: ${services.default.protocol}://${services.kayenta.host}:${services.kayenta.port}
    917. metricsStore: ${METRICS_STORE:stackdriver}
    918. metricsAccountName: ${METRICS_ACCOUNT_NAME}
    919. storageAccountName: ${STORAGE_ACCOUNT_NAME}
    920. atlasWebComponentsUrl: ${ATLAS_COMPONENTS_URL:}
    921. lighthouse:
    922. host: ${DEFAULT_DNS_NAME:armory-lighthouse}
    923. port: 5000
    924. baseUrl: ${services.default.protocol}://${services.lighthouse.host}:${services.lighthouse.port}
    925. orca:
    926. host: ${DEFAULT_DNS_NAME:armory-orca}
    927. platform:
    928. enabled: ${PLATFORM_ENABLED:false}
    929. host: ${DEFAULT_DNS_NAME:armory-platform}
    930. baseUrl: ${services.default.protocol}://${services.platform.host}:${services.platform.port}
    931. port: 5001
    932. rosco:
    933. host: ${DEFAULT_DNS_NAME:armory-rosco}
    934. enabled: true
    935. configDir: /opt/spinnaker/config/packer
    936. bakery:
    937. allowMissingPackageInstallation: true
    938. barometer:
    939. enabled: ${BAROMETER_ENABLED:false}
    940. host: ${DEFAULT_DNS_NAME:armory-barometer}
    941. baseUrl: ${services.default.protocol}://${services.barometer.host}:${services.barometer.port}
    942. port: 9092
    943. newRelicEnabled: ${NEW_RELIC_ENABLED:false}
    944. redis:
    945. host: redis
    946. port: 6379
    947. connection: ${REDIS_HOST:redis://localhost:6379}
    948. fiat:
    949. enabled: ${FIAT_ENABLED:false}
    950. host: ${DEFAULT_DNS_NAME:armory-fiat}
    951. port: 7003
    952. baseUrl: ${services.default.protocol}://${services.fiat.host}:${services.fiat.port}
    953. providers:
    954. aws:
    955. enabled: ${SPINNAKER_AWS_ENABLED:true}
    956. defaultRegion: ${SPINNAKER_AWS_DEFAULT_REGION:us-west-2}
    957. defaultIAMRole: ${SPINNAKER_AWS_DEFAULT_IAM_ROLE:SpinnakerInstanceProfile}
    958. defaultAssumeRole: ${SPINNAKER_AWS_DEFAULT_ASSUME_ROLE:SpinnakerManagedProfile}
    959. primaryCredentials:
    960. name: ${SPINNAKER_AWS_DEFAULT_ACCOUNT:default-aws-account}
    961. kubernetes:
    962. proxy: localhost:8001
    963. apiPrefix: api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#
    964. spinnaker.yml: |2
    965. global:
    966. spinnaker:
    967. timezone: 'America/Los_Angeles'
    968. architecture: ${PLATFORM_ARCHITECTURE}
    969. services:
    970. default:
    971. host: localhost
    972. protocol: http
    973. clouddriver:
    974. host: ${services.default.host}
    975. port: 7002
    976. baseUrl: ${services.default.protocol}://${services.clouddriver.host}:${services.clouddriver.port}
    977. aws:
    978. udf:
    979. enabled: true
    980. echo:
    981. enabled: true
    982. host: ${services.default.host}
    983. port: 8089
    984. baseUrl: ${services.default.protocol}://${services.echo.host}:${services.echo.port}
    985. cassandra:
    986. enabled: false
    987. inMemory:
    988. enabled: true
    989. cron:
    990. enabled: true
    991. timezone: ${global.spinnaker.timezone}
    992. notifications:
    993. mail:
    994. enabled: false
    995. host: # the smtp host
    996. fromAddress: # the address for which emails are sent from
    997. hipchat:
    998. enabled: false
    999. url: # the hipchat server to connect to
    1000. token: # the hipchat auth token
    1001. botName: # the username of the bot
    1002. sms:
    1003. enabled: false
    1004. account: # twilio account id
    1005. token: # twilio auth token
    1006. from: # phone number by which sms messages are sent
    1007. slack:
    1008. enabled: false
    1009. token: # the API token for the bot
    1010. botName: # the username of the bot
    1011. deck:
    1012. host: ${services.default.host}
    1013. port: 9000
    1014. baseUrl: ${services.default.protocol}://${services.deck.host}:${services.deck.port}
    1015. gateUrl: ${API_HOST:services.gate.baseUrl}
    1016. bakeryUrl: ${services.bakery.baseUrl}
    1017. timezone: ${global.spinnaker.timezone}
    1018. auth:
    1019. enabled: ${AUTH_ENABLED:false}
    1020. fiat:
    1021. enabled: false
    1022. host: ${services.default.host}
    1023. port: 7003
    1024. baseUrl: ${services.default.protocol}://${services.fiat.host}:${services.fiat.port}
    1025. front50:
    1026. host: ${services.default.host}
    1027. port: 8080
    1028. baseUrl: ${services.default.protocol}://${services.front50.host}:${services.front50.port}
    1029. storage_bucket: ${SPINNAKER_DEFAULT_STORAGE_BUCKET:}
    1030. bucket_location:
    1031. bucket_root: front50
    1032. cassandra:
    1033. enabled: false
    1034. redis:
    1035. enabled: false
    1036. gcs:
    1037. enabled: false
    1038. s3:
    1039. enabled: false
    1040. gate:
    1041. host: ${services.default.host}
    1042. port: 8084
    1043. baseUrl: ${services.default.protocol}://${services.gate.host}:${services.gate.port}
    1044. igor:
    1045. enabled: false
    1046. host: ${services.default.host}
    1047. port: 8088
    1048. baseUrl: ${services.default.protocol}://${services.igor.host}:${services.igor.port}
    1049. kato:
    1050. host: ${services.clouddriver.host}
    1051. port: ${services.clouddriver.port}
    1052. baseUrl: ${services.clouddriver.baseUrl}
    1053. mort:
    1054. host: ${services.clouddriver.host}
    1055. port: ${services.clouddriver.port}
    1056. baseUrl: ${services.clouddriver.baseUrl}
    1057. orca:
    1058. host: ${services.default.host}
    1059. port: 8083
    1060. baseUrl: ${services.default.protocol}://${services.orca.host}:${services.orca.port}
    1061. timezone: ${global.spinnaker.timezone}
    1062. enabled: true
    1063. oort:
    1064. host: ${services.clouddriver.host}
    1065. port: ${services.clouddriver.port}
    1066. baseUrl: ${services.clouddriver.baseUrl}
    1067. rosco:
    1068. host: ${services.default.host}
    1069. port: 8087
    1070. baseUrl: ${services.default.protocol}://${services.rosco.host}:${services.rosco.port}
    1071. configDir: /opt/rosco/config/packer
    1072. bakery:
    1073. host: ${services.rosco.host}
    1074. port: ${services.rosco.port}
    1075. baseUrl: ${services.rosco.baseUrl}
    1076. extractBuildDetails: true
    1077. allowMissingPackageInstallation: false
    1078. docker:
    1079. targetRepository: # Optional, but expected in spinnaker-local.yml if specified.
    1080. jenkins:
    1081. enabled: ${services.igor.enabled:false}
    1082. defaultMaster:
    1083. name: Jenkins
    1084. baseUrl: # Expected in spinnaker-local.yml
    1085. username: # Expected in spinnaker-local.yml
    1086. password: # Expected in spinnaker-local.yml
    1087. redis:
    1088. host: redis
    1089. port: 6379
    1090. connection: ${REDIS_HOST:redis://localhost:6379}
    1091. cassandra:
    1092. host: ${services.default.host}
    1093. port: 9042
    1094. embedded: false
    1095. cluster: CASS_SPINNAKER
    1096. travis:
    1097. enabled: false
    1098. defaultMaster:
    1099. name: ci # The display name for this server. Gets prefixed with "travis-"
    1100. baseUrl: https://travis-ci.com
    1101. address: https://api.travis-ci.org
    1102. githubToken: # GitHub scopes currently required by Travis is required.
    1103. spectator:
    1104. webEndpoint:
    1105. enabled: false
    1106. stackdriver:
    1107. enabled: ${SPINNAKER_STACKDRIVER_ENABLED:false}
    1108. projectName: ${SPINNAKER_STACKDRIVER_PROJECT_NAME:${providers.google.primaryCredentials.project}}
    1109. credentialsPath: ${SPINNAKER_STACKDRIVER_CREDENTIALS_PATH:${providers.google.primaryCredentials.jsonPath}}
    1110. providers:
    1111. aws:
    1112. enabled: ${SPINNAKER_AWS_ENABLED:false}
    1113. simpleDBEnabled: false
    1114. defaultRegion: ${SPINNAKER_AWS_DEFAULT_REGION:us-west-2}
    1115. defaultIAMRole: BaseIAMRole
    1116. defaultSimpleDBDomain: CLOUD_APPLICATIONS
    1117. primaryCredentials:
    1118. name: default
    1119. defaultKeyPairTemplate: "{{name}}-keypair"
    1120. google:
    1121. enabled: ${SPINNAKER_GOOGLE_ENABLED:false}
    1122. defaultRegion: ${SPINNAKER_GOOGLE_DEFAULT_REGION:us-central1}
    1123. defaultZone: ${SPINNAKER_GOOGLE_DEFAULT_ZONE:us-central1-f}
    1124. primaryCredentials:
    1125. name: my-account-name
    1126. project: ${SPINNAKER_GOOGLE_PROJECT_ID:}
    1127. jsonPath: ${SPINNAKER_GOOGLE_PROJECT_CREDENTIALS_PATH:}
    1128. consul:
    1129. enabled: ${SPINNAKER_GOOGLE_CONSUL_ENABLED:false}
    1130. cf:
    1131. enabled: false
    1132. defaultOrg: spinnaker-cf-org
    1133. defaultSpace: spinnaker-cf-space
    1134. primaryCredentials:
    1135. name: my-cf-account
    1136. api: my-cf-api-uri
    1137. console: my-cf-console-base-url
    1138. azure:
    1139. enabled: ${SPINNAKER_AZURE_ENABLED:false}
    1140. defaultRegion: ${SPINNAKER_AZURE_DEFAULT_REGION:westus}
    1141. primaryCredentials:
    1142. name: my-azure-account
    1143. clientId:
    1144. appKey:
    1145. tenantId:
    1146. subscriptionId:
    1147. titan:
    1148. enabled: false
    1149. defaultRegion: us-east-1
    1150. primaryCredentials:
    1151. name: my-titan-account
    1152. kubernetes:
    1153. enabled: ${SPINNAKER_KUBERNETES_ENABLED:false}
    1154. primaryCredentials:
    1155. name: my-kubernetes-account
    1156. namespace: default
    1157. dockerRegistryAccount: ${providers.dockerRegistry.primaryCredentials.name}
    1158. dockerRegistry:
    1159. enabled: ${SPINNAKER_KUBERNETES_ENABLED:false}
    1160. primaryCredentials:
    1161. name: my-docker-registry-account
    1162. address: ${SPINNAKER_DOCKER_REGISTRY:https://index.docker.io/ }
    1163. repository: ${SPINNAKER_DOCKER_REPOSITORY:}
    1164. username: ${SPINNAKER_DOCKER_USERNAME:}
    1165. passwordFile: ${SPINNAKER_DOCKER_PASSWORD_FILE:}
    1166. openstack:
    1167. enabled: false
    1168. defaultRegion: ${SPINNAKER_OPENSTACK_DEFAULT_REGION:RegionOne}
    1169. primaryCredentials:
    1170. name: my-openstack-account
    1171. authUrl: ${OS_AUTH_URL}
    1172. username: ${OS_USERNAME}
    1173. password: ${OS_PASSWORD}
    1174. projectName: ${OS_PROJECT_NAME}
    1175. domainName: ${OS_USER_DOMAIN_NAME:Default}
    1176. regions: ${OS_REGION_NAME:RegionOne}
    1177. insecure: false

    Notes:

    Notice that on top of the base configuration (clouddriver-armory.yml) there are per-environment files (clouddriver-dev.yml, clouddriver.yml), and the settings are further split by provider (aws, azure, google).
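    As a concrete illustration of that layering, the *-local.yml files override the base values. A minimal example assembled from the two files in this article (spinnaker.yml in default-config above disables Igor by default; spinnaker-local.yml in custom-config below switches it on):

    # spinnaker.yml (base default, from default-config)
    services:
      igor:
        enabled: false
    # spinnaker-local.yml (override, from custom-config)
    services:
      igor:
        enabled: true    # the -local value wins, so Igor is started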

    [root@hdss7-200 armory]# vi custom-config.yaml  # custom configuration

    1. # custom-config.yaml
    2. # This ConfigMap defines how to reach k8s, Harbor, MinIO and Jenkins
    3. # Some addresses can use short service names, depending on whether the caller is inside k8s and in the same namespace
    4. kind: ConfigMap
    5. apiVersion: v1
    6. metadata:
    7. name: custom-config
    8. namespace: armory
    9. data:
    10. clouddriver-local.yml: |
    11. kubernetes:
    12. enabled: true
    13. accounts:
    14. - name: cluster-admin
    15. serviceAccount: false
    16. dockerRegistries:
    17. - accountName: harbor
    18. namespace: []
    19. namespaces:
    20. - test
    21. - prod
    22. kubeconfigFile: /opt/spinnaker/credentials/custom/default-kubeconfig
    23. primaryAccount: cluster-admin
    24. dockerRegistry:
    25. enabled: true
    26. accounts:
    27. - name: harbor
    28. requiredGroupMembership: []
    29. providerVersion: V1
    30. insecureRegistry: true
    31. address: http://harbor.od.com:180
    32. username: admin
    33. password: Harbor12345
    34. primaryAccount: harbor
    35. artifacts:
    36. s3:
    37. enabled: true
    38. accounts:
    39. - name: armory-config-s3-account
    40. apiEndpoint: http://minio
    41. apiRegion: us-east-1
    42. gcs:
    43. enabled: false
    44. accounts:
    45. - name: armory-config-gcs-account
    46. custom-config.json: ""
    47. echo-configurator.yml: |
    48. diagnostics:
    49. enabled: true
    50. front50-local.yml: |
    51. spinnaker:
    52. s3:
    53. endpoint: http://minio
    54. igor-local.yml: |
    55. jenkins:
    56. enabled: true
    57. masters:
    58. - name: jenkins-admin
    59. address: http://jenkins.od.com
    60. username: admin
    61. password: admin123
    62. primaryAccount: jenkins-admin
    63. nginx.conf: |
    64. gzip on;
    65. gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
    66. server {
    67. listen 80;
    68. location / {
    69. proxy_pass http://armory-deck/;
    70. }
    71. location /api/ {
    72. proxy_pass http://armory-gate:8084/;
    73. }
    74. rewrite ^/login(.*)$ /api/login$1 last;
    75. rewrite ^/auth(.*)$ /api/auth$1 last;
    76. }
    77. spinnaker-local.yml: |
    78. services:
    79. igor:
    80. enabled: true

    Notes:

    clouddriver-local.yml:
      kubernetes:
        accounts:
          - name: cluster-admin       # the account named cluster-admin, i.e. the user account (UA) created earlier
            dockerRegistries:
              - accountName: harbor   # use the registry named harbor, defined under dockerRegistry below
            namespaces:               # the two namespaces this account manages
              - test
              - prod
      dockerRegistry:                 # the registry configuration referenced above by accountName: harbor
        accounts:
          - name: harbor

    echo-configurator.yml:
      only enables diagnostics; nothing needs to be changed here.

    front50-local.yml:
      endpoint: http://minio   # connects to MinIO. The endpoint matches a Service name: earlier we created a Service named minio (kind: Service, name: minio, port: 80, targetPort: 9000), so http://minio is really http://minio:80, and the Service forwards that to port 9000 of the MinIO pod (see the sketch after these notes).

    igor-local.yml:
      address: http://jenkins.od.com   # the Jenkins address used in the config above
          username: admin              # Jenkins account
          password: admin123           # Jenkins password
        primaryAccount: jenkins-admin  # the primary account is jenkins-admin

    nginx.conf: the outermost reverse proxy
          proxy_pass http://armory-deck/;       # targets the Service named armory-deck
          proxy_pass http://armory-gate:8084/;  # targets the Service named armory-gate
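    For reference, a minimal sketch of the MinIO Service that the front50 endpoint resolves to, reconstructed from the fields quoted above (the authoritative manifest was applied in an earlier step; the selector label is an assumption):

    apiVersion: v1
    kind: Service
    metadata:
      name: minio
      namespace: armory
    spec:
      ports:
      - port: 80          # http://minio is really http://minio:80
        protocol: TCP
        targetPort: 9000  # forwarded to port 9000 of the MinIO pod
      selector:
        app: minio        # assumed label; check the earlier minio Deployment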

    [root@hdss7-200 armory]# vi dp.yaml

    1. apiVersion: apps/v1
    2. kind: Deployment
    3. metadata:
    4. labels:
    5. app: armory-clouddriver
    6. name: armory-clouddriver
    7. namespace: armory
    8. spec:
    9. replicas: 1
    10. revisionHistoryLimit: 7
    11. selector:
    12. matchLabels:
    13. app: armory-clouddriver
    14. template:
    15. metadata:
    16. annotations:
    17. artifact.spinnaker.io/location: '"armory"'
    18. artifact.spinnaker.io/name: '"armory-clouddriver"'
    19. artifact.spinnaker.io/type: '"kubernetes/deployment"'
    20. moniker.spinnaker.io/application: '"armory"'
    21. moniker.spinnaker.io/cluster: '"clouddriver"'
    22. labels:
    23. app: armory-clouddriver
    24. spec:
    25. containers:
    26. - name: armory-clouddriver
    27. image: harbor.od.com:180/armory/clouddriver:v1.11.x
    28. imagePullPolicy: IfNotPresent
    29. command:
    30. - bash
    31. - -c
    32. args:
    33. - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
    34. && /opt/clouddriver/bin/clouddriver
    35. ports:
    36. - containerPort: 7002
    37. protocol: TCP
    38. env:
    39. - name: JAVA_OPTS
    40. value: -Xmx2048M
    41. envFrom:
    42. - configMapRef:
    43. name: init-env
    44. livenessProbe:
    45. failureThreshold: 5
    46. httpGet:
    47. path: /health
    48. port: 7002
    49. scheme: HTTP
    50. initialDelaySeconds: 600
    51. periodSeconds: 3
    52. successThreshold: 1
    53. timeoutSeconds: 1
    54. readinessProbe:
    55. failureThreshold: 5
    56. httpGet:
    57. path: /health
    58. port: 7002
    59. scheme: HTTP
    60. initialDelaySeconds: 180
    61. periodSeconds: 3
    62. successThreshold: 5
    63. timeoutSeconds: 1
    64. securityContext:
    65. runAsUser: 0
    66. volumeMounts:
    67. - mountPath: /etc/podinfo
    68. name: podinfo
    69. - mountPath: /home/spinnaker/.aws
    70. name: credentials
    71. - mountPath: /opt/spinnaker/credentials/custom
    72. name: default-kubeconfig
    73. - mountPath: /opt/spinnaker/config/default
    74. name: default-config
    75. - mountPath: /opt/spinnaker/config/custom
    76. name: custom-config
    77. imagePullSecrets:
    78. - name: harbor
    79. volumes:
    80. - configMap:
    81. defaultMode: 420
    82. name: default-kubeconfig
    83. name: default-kubeconfig
    84. - configMap:
    85. defaultMode: 420
    86. name: custom-config
    87. name: custom-config
    88. - configMap:
    89. defaultMode: 420
    90. name: default-config
    91. name: default-config
    92. - name: credentials
    93. secret:
    94. defaultMode: 420
    95. secretName: credentials
    96. - downwardAPI:
    97. defaultMode: 420
    98. items:
    99. - fieldRef:
    100. apiVersion: v1
    101. fieldPath: metadata.labels
    102. path: labels
    103. - fieldRef:
    104. apiVersion: v1
    105. fieldPath: metadata.annotations
    106. path: annotations
    107. name: podinfo

    Notes:

    1. The template carries Spinnaker artifact/moniker annotations:
       annotations:
         artifact.spinnaker.io/location: '"armory"'
         artifact.spinnaker.io/name: '"armory-clouddriver"'
    2. How it starts — the fetch.sh script is defined in default-config.yaml:
       command:
         - bash
         - -c
       args:
         - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
           && /opt/clouddriver/bin/clouddriver
       In effect: bash -c "bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config && /opt/clouddriver/bin/clouddriver"
    3. It exposes port 7002; clouddriver itself is a Java application:
       ports:
         - containerPort: 7002
           protocol: TCP
    4. JVM tuning (this manifest already sets 2048M; 2048–4096M is typical in production):
       - name: JAVA_OPTS
         value: -Xmx2048M
    5. downwardAPI mounts the pod's own labels and annotations into the container at /etc/podinfo, so the process can read its own metadata (a quick check from inside the pod is sketched after this list):
       volumeMounts:
         - mountPath: /etc/podinfo
           name: podinfo
       volumes:
         - downwardAPI:
           name: podinfo
    6. credentials: the configuration for connecting to the MinIO object storage, mounted at /home/spinnaker/.aws:
       volumeMounts:
         - mountPath: /home/spinnaker/.aws
           name: credentials
    7. The kubeconfig for the cluster-admin user account (UA) created earlier:
       volumeMounts:
         - mountPath: /opt/spinnaker/credentials/custom
           name: default-kubeconfig
    8. envFrom imports a list of environment variables: step one deployed a ConfigMap named init-env that defines many environment variables, and envFrom mounts them all here:
       envFrom:
         - configMapRef:
             name: init-env
    9. defaultMode: 420 is the permission of the mounted files (420 in decimal = 0644 in octal):
       volumes:
         - configMap:
             defaultMode: 420
             name: default-kubeconfig
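    A quick way to see what the downward API actually exposes, assuming the clouddriver pod name from the verification step below (substitute your own pod name):

    kubectl -n armory exec armory-clouddriver-c45d94c59-4h87z -- cat /etc/podinfo/labels
    kubectl -n armory exec armory-clouddriver-c45d94c59-4h87z -- cat /etc/podinfo/annotations
    # each file lists the pod's labels/annotations as key="value" lines,
    # matching the items[].path entries (labels, annotations) in the podinfo volume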

    [root@hdss7-200 armory]# vi svc.yaml   # clouddriver exposes an HTTP interface but is not served outside the cluster, so no Ingress is needed

    1. apiVersion: v1
    2. kind: Service
    3. metadata:
    4. name: armory-clouddriver
    5. namespace: armory
    6. spec:
    7. ports:
    8. - port: 7002
    9. protocol: TCP
    10. targetPort: 7002
    11. selector:
    12. app: armory-clouddriver
    1. [root@hdss7-200 clouddriver]# kubectl apply -f init-env.yaml
    2. [root@hdss7-200 clouddriver]# kubectl apply -f default-config.yaml
    3. [root@hdss7-200 clouddriver]# kubectl apply -f custom-config.yaml
    4. [root@hdss7-200 clouddriver]# kubectl apply -f dp.yaml
    5. [root@hdss7-200 clouddriver]# kubectl apply -f svc.yaml

    Verification (required):

    1. [root@hdss7-22 ~]# kubectl get pod -n armory -owide
    2. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    3. armory-clouddriver-c45d94c59-4h87z 1/1 Running 0 4m20s 172.7.21.11 hdss7-21.host.com <none> <none>
    4. minio-847ffc9ccd-mskl2 1/1 Running 3 40h 172.7.21.8 hdss7-21.host.com <none> <none>
    5. redis-58b569cdd-4v5jk 1/1 Running 3 39h 172.7.21.4 hdss7-21.host.com <none> <none>
    1. [root@hdss7-22 ~]# curl 172.7.21.11:7002/health
    2. {"status":"UP","kubernetes":{"status":"UP"},"dockerRegistry":{"status":"UP"},"redisHealth":{"status":"UP","maxIdle":100,"minIdle":25,"numActive":0,"numIdle":3,"numWaiters":0},"diskSpace":{"status":"UP","total":71897190400,"free":61508448256,"threshold":10485760}}
    3. [root@hdss7-22 ~]# kubectl exec -it minio-847ffc9ccd-mskl2 -n armory sh
    4. sh-4.4# curl armory-clouddriver:7002/health
    5. {"status":"UP","kubernetes":{"status":"UP"},"dockerRegistry":{"status":"UP"},"redisHealth":{"status":"UP","maxIdle":100,"minIdle":25,"numActive":0,"numIdle":3,"numWaiters":0},"diskSpace":{"status":"UP","total":71897190400,"free":61508444160,"threshold":10485760}}

    4. Deploy the remaining Spinnaker components

    4.1 Deploy Front50

    4.1.1 Prepare the image

    1. [root@hdss7-200 ~]# docker pull armory/spinnaker-front50-slim:release-1.8.x-93febf2
    2. [root@hdss7-200 ~]# docker image ls -a |grep front50
    3. armory/spinnaker-front50-slim release-1.8.x-93febf2 0d353788f4f2 3 years ago 273MB
    4. [root@hdss7-200 ~]# docker tag 0d353788f4f2 harbor.od.com:180/armory/front50:v1.8.x
    5. [root@hdss7-200 ~]# docker push harbor.od.com:180/armory/front50:v1.8.x

    4.1.2 Prepare the resource manifests

    [root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory/front50
    [root@hdss7-200 ~]# cd /data/k8s-yaml/armory/front50/
    Deployment:

    1. cat >dp.yaml <<'EOF'
    2. apiVersion: apps/v1
    3. kind: Deployment
    4. metadata:
    5. labels:
    6. app: armory-front50
    7. name: armory-front50
    8. namespace: armory
    9. spec:
    10. replicas: 1
    11. revisionHistoryLimit: 7
    12. selector:
    13. matchLabels:
    14. app: armory-front50
    15. template:
    16. metadata:
    17. annotations:
    18. artifact.spinnaker.io/location: '"armory"'
    19. artifact.spinnaker.io/name: '"armory-front50"'
    20. artifact.spinnaker.io/type: '"kubernetes/deployment"'
    21. moniker.spinnaker.io/application: '"armory"'
    22. moniker.spinnaker.io/cluster: '"front50"'
    23. labels:
    24. app: armory-front50
    25. spec:
    26. containers:
    27. - name: armory-front50
    28. image: harbor.od.com:180/armory/front50:v1.8.x
    29. imagePullPolicy: IfNotPresent
    30. command:
    31. - bash
    32. - -c
    33. args:
    34. - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
    35. && /opt/front50/bin/front50
    36. ports:
    37. - containerPort: 8080
    38. protocol: TCP
    39. env:
    40. - name: JAVA_OPTS
    41. value: -javaagent:/opt/front50/lib/jamm-0.2.5.jar -Xmx1000M
    42. envFrom:
    43. - configMapRef:
    44. name: init-env
    45. livenessProbe:
    46. failureThreshold: 3
    47. httpGet:
    48. path: /health
    49. port: 8080
    50. scheme: HTTP
    51. initialDelaySeconds: 600
    52. periodSeconds: 3
    53. successThreshold: 1
    54. timeoutSeconds: 1
    55. readinessProbe:
    56. failureThreshold: 3
    57. httpGet:
    58. path: /health
    59. port: 8080
    60. scheme: HTTP
    61. initialDelaySeconds: 180
    62. periodSeconds: 5
    63. successThreshold: 8
    64. timeoutSeconds: 1
    65. volumeMounts:
    66. - mountPath: /etc/podinfo
    67. name: podinfo
    68. - mountPath: /home/spinnaker/.aws
    69. name: credentials
    70. - mountPath: /opt/spinnaker/config/default
    71. name: default-config
    72. - mountPath: /opt/spinnaker/config/custom
    73. name: custom-config
    74. imagePullSecrets:
    75. - name: harbor
    76. volumes:
    77. - configMap:
    78. defaultMode: 420
    79. name: custom-config
    80. name: custom-config
    81. - configMap:
    82. defaultMode: 420
    83. name: default-config
    84. name: default-config
    85. - name: credentials
    86. secret:
    87. defaultMode: 420
    88. secretName: credentials
    89. - downwardAPI:
    90. defaultMode: 420
    91. items:
    92. - fieldRef:
    93. apiVersion: v1
    94. fieldPath: metadata.labels
    95. path: labels
    96. - fieldRef:
    97. apiVersion: v1
    98. fieldPath: metadata.annotations
    99. path: annotations
    100. name: podinfo
    101. EOF

    Service:

    1. cat >svc.yaml <<'EOF'
    2. apiVersion: v1
    3. kind: Service
    4. metadata:
    5. name: armory-front50
    6. namespace: armory
    7. spec:
    8. ports:
    9. - port: 8080
    10. protocol: TCP
    11. targetPort: 8080
    12. selector:
    13. app: armory-front50
    14. EOF
    1. [root@hdss7-200 front50]# kubectl apply -f dp.yaml
    2. deployment.apps/armory-front50 created
    3. [root@hdss7-200 front50]# kubectl apply -f svc.yaml
    4. service/armory-front50 created

    4.1.3 Verify Front50's health endpoint:

    [root@hdss7-200 clouddriver]# kubectl get pod -n armory

    1. NAME READY STATUS RESTARTS AGE
    2. armory-clouddriver-c45d94c59-4h87z 1/1 Running 2 3d20h
    3. armory-front50-c57d59db-8fjdl 1/1 Running 0 7m46s
    4. minio-847ffc9ccd-lwng4 1/1 Running 1 3d12h
    5. redis-58b569cdd-4v5jk 1/1 Running 5 5d11h

     # From inside the minio container, curl Front50's health endpoint

    1. [root@hdss7-200 clouddriver]# kubectl exec minio-847ffc9ccd-lwng4 -n armory -- curl -s 'http://armory-front50:8080/health'
    2. {"status":"UP"}[root@hdss7-200 clouddriver]#

    Visit http://minio.od.com/buckets: a bucket has been created. It comes from ARMORYSPINNAKER_CONF_STORE_BUCKET: armory-platform in init-env.yaml. Front50 is the component that writes to MinIO; everything configured in Spinnaker, pipelines included, is stored in MinIO.

    So from now on, backing up /data/nfs-volume/minio/ is enough to back up the Spinnaker configuration.

    1. [root@hdss7-200 clouddriver]# cd /data/nfs-volume/minio/
    2. [root@hdss7-200 minio]# ll
    3. total 0
    4. drwxr-xr-x. 2 root root 6 Aug 1 19:12 armory-platform
    5. [root@hdss7-200 minio]#
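    A simple way to take that backup, for example from cron on the NFS host (the destination path /data/backup is only an illustration):

    # hypothetical destination directory; any local path or remote target works
    mkdir -p /data/backup
    tar czf /data/backup/minio_$(date +%F).tar.gz -C /data/nfs-volume minio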

    4.2 Deploy Orca

    4.2.1 Prepare the image

    1. [root@hdss7-200 ~]# docker pull armory/spinnaker-orca-slim:release-1.8.x-de4ab55
    2. [root@hdss7-200 clouddriver]# docker image ls |grep orca
    3. armory/spinnaker-orca-slim release-1.8.x-de4ab55 5103b1f73e04 3 years ago 141MB
    4. [root@hdss7-200 clouddriver]# docker tag 5103b1f73e04 harbor.od.com:180/armory/orca:v1.8.x
    5. [root@hdss7-200 clouddriver]# docker push harbor.od.com:180/armory/orca:v1.8.x

    4.2.2 Prepare the resource manifests

    [root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory/orca
    [root@hdss7-200 ~]# cd /data/k8s-yaml/armory/orca
    Deployment:

    1. cat >dp.yaml <<'EOF'
    2. apiVersion: apps/v1
    3. kind: Deployment
    4. metadata:
    5. labels:
    6. app: armory-orca
    7. name: armory-orca
    8. namespace: armory
    9. spec:
    10. replicas: 1
    11. revisionHistoryLimit: 7
    12. selector:
    13. matchLabels:
    14. app: armory-orca
    15. template:
    16. metadata:
    17. annotations:
    18. artifact.spinnaker.io/location: '"armory"'
    19. artifact.spinnaker.io/name: '"armory-orca"'
    20. artifact.spinnaker.io/type: '"kubernetes/deployment"'
    21. moniker.spinnaker.io/application: '"armory"'
    22. moniker.spinnaker.io/cluster: '"orca"'
    23. labels:
    24. app: armory-orca
    25. spec:
    26. containers:
    27. - name: armory-orca
    28. image: harbor.od.com:180/armory/orca:v1.8.x
    29. imagePullPolicy: IfNotPresent
    30. command:
    31. - bash
    32. - -c
    33. args:
    34. - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
    35. && /opt/orca/bin/orca
    36. ports:
    37. - containerPort: 8083
    38. protocol: TCP
    39. env:
    40. - name: JAVA_OPTS
    41. value: -Xmx1000M
    42. envFrom:
    43. - configMapRef:
    44. name: init-env
    45. livenessProbe:
    46. failureThreshold: 5
    47. httpGet:
    48. path: /health
    49. port: 8083
    50. scheme: HTTP
    51. initialDelaySeconds: 600
    52. periodSeconds: 5
    53. successThreshold: 1
    54. timeoutSeconds: 1
    55. readinessProbe:
    56. failureThreshold: 3
    57. httpGet:
    58. path: /health
    59. port: 8083
    60. scheme: HTTP
    61. initialDelaySeconds: 180
    62. periodSeconds: 3
    63. successThreshold: 5
    64. timeoutSeconds: 1
    65. volumeMounts:
    66. - mountPath: /etc/podinfo
    67. name: podinfo
    68. - mountPath: /opt/spinnaker/config/default
    69. name: default-config
    70. - mountPath: /opt/spinnaker/config/custom
    71. name: custom-config
    72. imagePullSecrets:
    73. - name: harbor
    74. volumes:
    75. - configMap:
    76. defaultMode: 420
    77. name: custom-config
    78. name: custom-config
    79. - configMap:
    80. defaultMode: 420
    81. name: default-config
    82. name: default-config
    83. - downwardAPI:
    84. defaultMode: 420
    85. items:
    86. - fieldRef:
    87. apiVersion: v1
    88. fieldPath: metadata.labels
    89. path: labels
    90. - fieldRef:
    91. apiVersion: v1
    92. fieldPath: metadata.annotations
    93. path: annotations
    94. name: podinfo
    95. EOF

    Notes:

    You will notice that the startup command is almost identical to the clouddriver and Front50 deployments above. That is because these manifests were exported by the scaffolding tooling of Armory's Spinnaker distribution: Armory puts all of the startup parameters into /opt/spinnaker/config/default/fetch.sh, and that script works out which component to start.
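    To make the pattern explicit, these are the effective startup one-liners taken from the Deployments in this article; only the component binary at the end (and, for gate, the argument passed to fetch.sh) changes:

    # clouddriver
    bash -c "bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config && /opt/clouddriver/bin/clouddriver"
    # front50
    bash -c "bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config && /opt/front50/bin/front50"
    # orca
    bash -c "bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config && /opt/orca/bin/orca"
    # gate (deployed later) additionally passes its component name:
    bash -c "bash /opt/spinnaker/config/default/fetch.sh gate && cd /home/spinnaker/config && /opt/gate/bin/gate"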

    Service:

    1. cat >svc.yaml <<'EOF'
    2. apiVersion: v1
    3. kind: Service
    4. metadata:
    5. name: armory-orca
    6. namespace: armory
    7. spec:
    8. ports:
    9. - port: 8083
    10. protocol: TCP
    11. targetPort: 8083
    12. selector:
    13. app: armory-orca
    14. EOF
    1. [root@hdss7-200 orca]# kubectl apply -f dp.yaml
    2. deployment.apps/armory-orca created
    3. [root@hdss7-200 orca]# kubectl apply -f svc.yaml
    4. service/armory-orca created

    4.2.3 Verify Orca's health endpoint:

    [root@hdss7-200 clouddriver]# kubectl get pod -n armory

    1. NAME READY STATUS RESTARTS AGE
    2. armory-clouddriver-c45d94c59-4h87z 1/1 Running 2 3d20h
    3. armory-front50-c57d59db-8fjdl 1/1 Running 0 43m
    4. armory-orca-86466cc5b4-x9d2g 1/1 Running 0 8m52s
    5. minio-847ffc9ccd-lwng4 1/1 Running 1 3d12h
    6. redis-58b569cdd-4v5jk 1/1 Running 5 5d12h

     # From inside the minio container, curl Orca's health endpoint

    1. [root@hdss7-200 orca]# kubectl exec minio-847ffc9ccd-lwng4 -n armory -- curl -s 'http://armory-orca:8083/health'
    2. {"status":"UP"}[root@hdss7-200 orca]#

    4.3 Deploy Echo

    4.3.1 Prepare the image

    1. [root@hdss7-200 orca]# docker pull docker.io/armory/echo-armory:c36d576-release-1.8.x-617c567
    2. [root@hdss7-200 orca]# docker image ls |grep echo
    3. armory/echo-armory c36d576-release-1.8.x-617c567 415efd46f474 4 years ago 287MB
    4. [root@hdss7-200 orca]# docker tag 415efd46f474 harbor.od.com:180/armory/echo:v1.8.x
    5. [root@hdss7-200 orca]# docker push harbor.od.com:180/armory/echo:v1.8.x

    4.3.2 Prepare the resource manifests

    [root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory/echo
    [root@hdss7-200 ~]# cd /data/k8s-yaml/armory/echo/
    Deployment:

    1. cat >dp.yaml <<'EOF'
    2. apiVersion: apps/v1
    3. kind: Deployment
    4. metadata:
    5. labels:
    6. app: armory-echo
    7. name: armory-echo
    8. namespace: armory
    9. spec:
    10. replicas: 1
    11. revisionHistoryLimit: 7
    12. selector:
    13. matchLabels:
    14. app: armory-echo
    15. template:
    16. metadata:
    17. annotations:
    18. artifact.spinnaker.io/location: '"armory"'
    19. artifact.spinnaker.io/name: '"armory-echo"'
    20. artifact.spinnaker.io/type: '"kubernetes/deployment"'
    21. moniker.spinnaker.io/application: '"armory"'
    22. moniker.spinnaker.io/cluster: '"echo"'
    23. labels:
    24. app: armory-echo
    25. spec:
    26. containers:
    27. - name: armory-echo
    28. image: harbor.od.com:180/armory/echo:v1.8.x
    29. imagePullPolicy: IfNotPresent
    30. command:
    31. - bash
    32. - -c
    33. args:
    34. - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
    35. && /opt/echo/bin/echo
    36. ports:
    37. - containerPort: 8089
    38. protocol: TCP
    39. env:
    40. - name: JAVA_OPTS
    41. value: -javaagent:/opt/echo/lib/jamm-0.2.5.jar -Xmx1000M
    42. envFrom:
    43. - configMapRef:
    44. name: init-env
    45. livenessProbe:
    46. failureThreshold: 3
    47. httpGet:
    48. path: /health
    49. port: 8089
    50. scheme: HTTP
    51. initialDelaySeconds: 600
    52. periodSeconds: 3
    53. successThreshold: 1
    54. timeoutSeconds: 1
    55. readinessProbe:
    56. failureThreshold: 3
    57. httpGet:
    58. path: /health
    59. port: 8089
    60. scheme: HTTP
    61. initialDelaySeconds: 180
    62. periodSeconds: 3
    63. successThreshold: 5
    64. timeoutSeconds: 1
    65. volumeMounts:
    66. - mountPath: /etc/podinfo
    67. name: podinfo
    68. - mountPath: /opt/spinnaker/config/default
    69. name: default-config
    70. - mountPath: /opt/spinnaker/config/custom
    71. name: custom-config
    72. imagePullSecrets:
    73. - name: harbor
    74. volumes:
    75. - configMap:
    76. defaultMode: 420
    77. name: custom-config
    78. name: custom-config
    79. - configMap:
    80. defaultMode: 420
    81. name: default-config
    82. name: default-config
    83. - downwardAPI:
    84. defaultMode: 420
    85. items:
    86. - fieldRef:
    87. apiVersion: v1
    88. fieldPath: metadata.labels
    89. path: labels
    90. - fieldRef:
    91. apiVersion: v1
    92. fieldPath: metadata.annotations
    93. path: annotations
    94. name: podinfo
    95. EOF

    Service:

    1. cat >svc.yaml <<'EOF'
    2. apiVersion: v1
    3. kind: Service
    4. metadata:
    5. name: armory-echo
    6. namespace: armory
    7. spec:
    8. ports:
    9. - port: 8089
    10. protocol: TCP
    11. targetPort: 8089
    12. selector:
    13. app: armory-echo
    14. EOF
    1. [root@hdss7-200 echo]# kubectl apply -f dp.yaml
    2. deployment.apps/armory-echo created
    3. [root@hdss7-200 echo]# kubectl apply -f svc.yaml
    4. service/armory-echo created
    5. [root@hdss7-200 echo]#

    4.3.3 Verify Echo's health endpoint:

    1. [root@hdss7-200 echo]# kubectl get pod -n armory
    2. NAME READY STATUS RESTARTS AGE
    3. armory-clouddriver-c45d94c59-4h87z 1/1 Running 2 3d21h
    4. armory-echo-64c9ffb959-j4svr 1/1 Running 0 7m30s
    5. armory-front50-c57d59db-8fjdl 1/1 Running 0 61m
    6. armory-orca-86466cc5b4-x9d2g 1/1 Running 0 27m
    7. minio-847ffc9ccd-lwng4 1/1 Running 1 3d13h
    8. redis-58b569cdd-4v5jk 1/1 Running 5 5d12h
    1. [root@hdss7-200 echo]# kubectl exec minio-847ffc9ccd-lwng4 -n armory -- curl -s 'http://armory-echo:8089/health'
    2. {"status":"UP"}[root@hdss7-200 echo]#

    4.4 Deploy Igor

    Igor is a very important component: if you want Spinnaker to talk to Jenkins and read Jenkins jobs and pipelines, you must install Igor, and installing Igor requires Echo. Igor supports two CI tools, Jenkins and Travis (GitHub - spinnaker/igor: Integration with Jenkins and Git for Spinnaker).

    4.4.1 Prepare the image

    1. [root@hdss7-200 echo]# docker pull docker.io/armory/spinnaker-igor-slim:release-1.8-x-new-install-healthy-ae2b329
    2. [root@hdss7-200 echo]# docker image ls |grep igor
    3. armory/spinnaker-igor-slim release-1.8-x-new-install-healthy-ae2b329 23984f5b43f6 4 years ago 135MB
    4. [root@hdss7-200 echo]# docker tag 23984f5b43f6 harbor.od.com:180/armory/igor:v1.8.x
    5. [root@hdss7-200 echo]# docker push harbor.od.com:180/armory/igor:v1.8.x

    4.4.2 Prepare the resource manifests

    [root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory/igor
    [root@hdss7-200 ~]# cd /data/k8s-yaml/armory/igor/
    Deployment:

    1. cat >dp.yaml <<'EOF'
    2. apiVersion: apps/v1
    3. kind: Deployment
    4. metadata:
    5. labels:
    6. app: armory-igor
    7. name: armory-igor
    8. namespace: armory
    9. spec:
    10. replicas: 1
    11. revisionHistoryLimit: 7
    12. selector:
    13. matchLabels:
    14. app: armory-igor
    15. template:
    16. metadata:
    17. annotations:
    18. artifact.spinnaker.io/location: '"armory"'
    19. artifact.spinnaker.io/name: '"armory-igor"'
    20. artifact.spinnaker.io/type: '"kubernetes/deployment"'
    21. moniker.spinnaker.io/application: '"armory"'
    22. moniker.spinnaker.io/cluster: '"igor"'
    23. labels:
    24. app: armory-igor
    25. spec:
    26. containers:
    27. - name: armory-igor
    28. image: harbor.od.com:180/armory/igor:v1.8.x
    29. imagePullPolicy: IfNotPresent
    30. command:
    31. - bash
    32. - -c
    33. args:
    34. - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
    35. && /opt/igor/bin/igor
    36. ports:
    37. - containerPort: 8088
    38. protocol: TCP
    39. env:
    40. - name: IGOR_PORT_MAPPING
    41. value: -8088:8088
    42. - name: JAVA_OPTS
    43. value: -Xmx1000M
    44. envFrom:
    45. - configMapRef:
    46. name: init-env
    47. livenessProbe:
    48. failureThreshold: 3
    49. httpGet:
    50. path: /health
    51. port: 8088
    52. scheme: HTTP
    53. initialDelaySeconds: 600
    54. periodSeconds: 3
    55. successThreshold: 1
    56. timeoutSeconds: 1
    57. readinessProbe:
    58. failureThreshold: 3
    59. httpGet:
    60. path: /health
    61. port: 8088
    62. scheme: HTTP
    63. initialDelaySeconds: 180
    64. periodSeconds: 5
    65. successThreshold: 5
    66. timeoutSeconds: 1
    67. volumeMounts:
    68. - mountPath: /etc/podinfo
    69. name: podinfo
    70. - mountPath: /opt/spinnaker/config/default
    71. name: default-config
    72. - mountPath: /opt/spinnaker/config/custom
    73. name: custom-config
    74. imagePullSecrets:
    75. - name: harbor
    76. securityContext:
    77. runAsUser: 0
    78. volumes:
    79. - configMap:
    80. defaultMode: 420
    81. name: custom-config
    82. name: custom-config
    83. - configMap:
    84. defaultMode: 420
    85. name: default-config
    86. name: default-config
    87. - downwardAPI:
    88. defaultMode: 420
    89. items:
    90. - fieldRef:
    91. apiVersion: v1
    92. fieldPath: metadata.labels
    93. path: labels
    94. - fieldRef:
    95. apiVersion: v1
    96. fieldPath: metadata.annotations
    97. path: annotations
    98. name: podinfo
    99. EOF

    Service:

    1. cat >svc.yaml <<'EOF'
    2. apiVersion: v1
    3. kind: Service
    4. metadata:
    5. name: armory-igor
    6. namespace: armory
    7. spec:
    8. ports:
    9. - port: 8088
    10. protocol: TCP
    11. targetPort: 8088
    12. selector:
    13. app: armory-igor
    14. EOF
    1. [root@hdss7-200 igor]# kubectl apply -f dp.yaml
    2. deployment.apps/armory-igor created
    3. [root@hdss7-200 igor]# kubectl apply -f svc.yaml
    4. service/armory-igor created

    4.4.3 Verify Igor's health endpoint:

    1. [root@hdss7-200 igor]# kubectl get pod -n armory
    2. NAME READY STATUS RESTARTS AGE
    3. armory-clouddriver-c45d94c59-4h87z 1/1 Running 2 3d21h
    4. armory-echo-64c9ffb959-j4svr 1/1 Running 0 28m
    5. armory-front50-c57d59db-8fjdl 1/1 Running 0 82m
    6. armory-igor-5f4f87d864-hc4qz 1/1 Running 0 3m42s
    7. armory-orca-86466cc5b4-x9d2g 1/1 Running 0 47m
    8. minio-847ffc9ccd-lwng4 1/1 Running 1 3d13h
    9. redis-58b569cdd-4v5jk 1/1 Running 5 5d13h
    1. [root@hdss7-200 igor]# kubectl exec minio-847ffc9ccd-lwng4 -n armory -- curl -s 'http://armory-igor:8088/health'
    2. {"status":"UP"}[root@hdss7-200 igor]#

    4.5 Deploy Gate

    4.5.1 Prepare the image

    1. [root@hdss7-200 igor]# docker pull docker.io/armory/gate-armory:dfafe73-release-1.8.x-5d505ca
    2. [root@hdss7-200 igor]# docker image ls |grep gate
    3. armory/gate-armory dfafe73-release-1.8.x-5d505ca b092d4665301 4 years ago 179MB
    4. [root@hdss7-200 igor]# docker tag b092d4665301 harbor.od.com:180/armory/gate:v1.8.x
    5. [root@hdss7-200 igor]# docker push harbor.od.com:180/armory/gate:v1.8.x

    4.5.2 Prepare the resource manifests

    [root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory/gate
    [root@hdss7-200 ~]# cd /data/k8s-yaml/armory/gate

    Deployment:

    1. cat >dp.yaml <<'EOF'
    2. apiVersion: apps/v1
    3. kind: Deployment
    4. metadata:
    5. labels:
    6. app: armory-gate
    7. name: armory-gate
    8. namespace: armory
    9. spec:
    10. replicas: 1
    11. revisionHistoryLimit: 7
    12. selector:
    13. matchLabels:
    14. app: armory-gate
    15. template:
    16. metadata:
    17. annotations:
    18. artifact.spinnaker.io/location: '"armory"'
    19. artifact.spinnaker.io/name: '"armory-gate"'
    20. artifact.spinnaker.io/type: '"kubernetes/deployment"'
    21. moniker.spinnaker.io/application: '"armory"'
    22. moniker.spinnaker.io/cluster: '"gate"'
    23. labels:
    24. app: armory-gate
    25. spec:
    26. containers:
    27. - name: armory-gate
    28. image: harbor.od.com:180/armory/gate:v1.8.x
    29. imagePullPolicy: IfNotPresent
    30. command:
    31. - bash
    32. - -c
    33. args:
    34. - bash /opt/spinnaker/config/default/fetch.sh gate && cd /home/spinnaker/config
    35. && /opt/gate/bin/gate
    36. ports:
    37. - containerPort: 8084
    38. name: gate-port
    39. protocol: TCP
    40. - containerPort: 8085
    41. name: gate-api-port
    42. protocol: TCP
    43. env:
    44. - name: GATE_PORT_MAPPING
    45. value: -8084:8084
    46. - name: GATE_API_PORT_MAPPING
    47. value: -8085:8085
    48. - name: JAVA_OPTS
    49. value: -Xmx1000M
    50. envFrom:
    51. - configMapRef:
    52. name: init-env
    53. livenessProbe:
    54. exec:
    55. command:
    56. - /bin/bash
    57. - -c
    58. - wget -O - http://localhost:8084/health || wget -O - https://localhost:8084/health
    59. failureThreshold: 5
    60. initialDelaySeconds: 600
    61. periodSeconds: 5
    62. successThreshold: 1
    63. timeoutSeconds: 1
    64. readinessProbe:
    65. exec:
    66. command:
    67. - /bin/bash
    68. - -c
    69. - wget -O - http://localhost:8084/health?checkDownstreamServices=true&downstreamServices=true
    70. || wget -O - https://localhost:8084/health?checkDownstreamServices=true&downstreamServices=true
    71. failureThreshold: 3
    72. initialDelaySeconds: 180
    73. periodSeconds: 5
    74. successThreshold: 10
    75. timeoutSeconds: 1
    76. volumeMounts:
    77. - mountPath: /etc/podinfo
    78. name: podinfo
    79. - mountPath: /opt/spinnaker/config/default
    80. name: default-config
    81. - mountPath: /opt/spinnaker/config/custom
    82. name: custom-config
    83. imagePullSecrets:
    84. - name: harbor
    85. securityContext:
    86. runAsUser: 0
    87. volumes:
    88. - configMap:
    89. defaultMode: 420
    90. name: custom-config
    91. name: custom-config
    92. - configMap:
    93. defaultMode: 420
    94. name: default-config
    95. name: default-config
    96. - downwardAPI:
    97. defaultMode: 420
    98. items:
    99. - fieldRef:
    100. apiVersion: v1
    101. fieldPath: metadata.labels
    102. path: labels
    103. - fieldRef:
    104. apiVersion: v1
    105. fieldPath: metadata.annotations
    106. path: annotations
    107. name: podinfo
    108. EOF

    Service:

    1. cat >svc.yaml <<'EOF'
    2. apiVersion: v1
    3. kind: Service
    4. metadata:
    5. name: armory-gate
    6. namespace: armory
    7. spec:
    8. ports:
    9. - name: gate-port
    10. port: 8084
    11. protocol: TCP
    12. targetPort: 8084
    13. - name: gate-api-port
    14. port: 8085
    15. protocol: TCP
    16. targetPort: 8085
    17. selector:
    18. app: armory-gate
    19. EOF
    1. [root@hdss7-200 gate]# kubectl apply -f dp.yaml
    2. deployment.apps/armory-gate created
    3. [root@hdss7-200 gate]# kubectl apply -f svc.yaml
    4. service/armory-gate created

    4.5.3 Verify Gate's health endpoint:

    1. [root@hdss7-200 gate]# kubectl get pod -n armory
    2. NAME READY STATUS RESTARTS AGE
    3. armory-clouddriver-c45d94c59-4h87z 1/1 Running 2 3d21h
    4. armory-echo-64c9ffb959-j4svr 1/1 Running 0 48m
    5. armory-front50-c57d59db-8fjdl 1/1 Running 0 102m
    6. armory-gate-5b954d9bd4-xc2jk 1/1 Running 0 4m12s
    7. armory-igor-5f4f87d864-hc4qz 1/1 Running 0 23m
    8. armory-orca-86466cc5b4-x9d2g 1/1 Running 0 68m
    9. minio-847ffc9ccd-lwng4 1/1 Running 1 3d13h
    10. redis-58b569cdd-4v5jk 1/1 Running 5 5d13h
    11. [root@hdss7-200 gate]# kubectl exec minio-847ffc9ccd-lwng4 -n armory -- curl -s 'http://armory-gate:8084/health'
    12. {"status":"UP"}[root@hdss7-200 gate]#

    4.6 Deploy Deck

    4.6.1 Prepare the image

    1. [root@hdss7-200 gate]# docker pull docker.io/armory/deck-armory:d4bf0cf-release-1.8.x-0a33f94
    2. [root@hdss7-200 gate]# docker image ls |grep deck
    3. armory/deck-armory d4bf0cf-release-1.8.x-0a33f94
    4. [root@hdss7-200 gate]# docker tag 9a87ba3b319f harbor.od.com:180/armory/deck:v1.8.x
    5. [root@hdss7-200 gate]# docker push harbor.od.com:180/armory/deck:v1.8.x

    4.6.2 Prepare the resource manifests

    [root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory/deck
    [root@hdss7-200 ~]# cd /data/k8s-yaml/armory/deck

    Deployment:

    1. cat >dp.yaml <<'EOF'
    2. apiVersion: apps/v1
    3. kind: Deployment
    4. metadata:
    5. labels:
    6. app: armory-deck
    7. name: armory-deck
    8. namespace: armory
    9. spec:
    10. replicas: 1
    11. revisionHistoryLimit: 7
    12. selector:
    13. matchLabels:
    14. app: armory-deck
    15. template:
    16. metadata:
    17. annotations:
    18. artifact.spinnaker.io/location: '"armory"'
    19. artifact.spinnaker.io/name: '"armory-deck"'
    20. artifact.spinnaker.io/type: '"kubernetes/deployment"'
    21. moniker.spinnaker.io/application: '"armory"'
    22. moniker.spinnaker.io/cluster: '"deck"'
    23. labels:
    24. app: armory-deck
    25. spec:
    26. containers:
    27. - name: armory-deck
    28. image: harbor.od.com:180/armory/deck:v1.8.x
    29. imagePullPolicy: IfNotPresent
    30. command:
    31. - bash
    32. - -c
    33. args:
    34. - bash /opt/spinnaker/config/default/fetch.sh && /entrypoint.sh
    35. ports:
    36. - containerPort: 9000
    37. protocol: TCP
    38. envFrom:
    39. - configMapRef:
    40. name: init-env
    41. livenessProbe:
    42. failureThreshold: 3
    43. httpGet:
    44. path: /
    45. port: 9000
    46. scheme: HTTP
    47. initialDelaySeconds: 180
    48. periodSeconds: 3
    49. successThreshold: 1
    50. timeoutSeconds: 1
    51. readinessProbe:
    52. failureThreshold: 5
    53. httpGet:
    54. path: /
    55. port: 9000
    56. scheme: HTTP
    57. initialDelaySeconds: 30
    58. periodSeconds: 3
    59. successThreshold: 5
    60. timeoutSeconds: 1
    61. volumeMounts:
    62. - mountPath: /etc/podinfo
    63. name: podinfo
    64. - mountPath: /opt/spinnaker/config/default
    65. name: default-config
    66. - mountPath: /opt/spinnaker/config/custom
    67. name: custom-config
    68. imagePullSecrets:
    69. - name: harbor
    70. volumes:
    71. - configMap:
    72. defaultMode: 420
    73. name: custom-config
    74. name: custom-config
    75. - configMap:
    76. defaultMode: 420
    77. name: default-config
    78. name: default-config
    79. - downwardAPI:
    80. defaultMode: 420
    81. items:
    82. - fieldRef:
    83. apiVersion: v1
    84. fieldPath: metadata.labels
    85. path: labels
    86. - fieldRef:
    87. apiVersion: v1
    88. fieldPath: metadata.annotations
    89. path: annotations
    90. name: podinfo
    91. EOF

    Service:

    1. cat >svc.yaml <<'EOF'
    2. apiVersion: v1
    3. kind: Service
    4. metadata:
    5. name: armory-deck
    6. namespace: armory
    7. spec:
    8. ports:
    9. - port: 80
    10. protocol: TCP
    11. targetPort: 9000
    12. selector:
    13. app: armory-deck
    14. EOF
    1. [root@hdss7-200 deck]# kubectl apply -f dp.yaml
    2. deployment.apps/armory-deck created
    3. [root@hdss7-200 deck]# kubectl apply -f svc.yaml
    4. service/armory-deck created
    5. [root@hdss7-200 deck]#

    4.6.3 Check that the armory-deck pod is Running and serving the UI

    1. [root@hdss7-21 ~]# kubectl get pod -n armory -o wide |grep "armory-deck"
    2. armory-deck-67b6d6db4-pcz9r 1/1 Running 1 65m 172.7.22.12 hdss7-22.host.com <none> <none>
    3. [root@hdss7-21 ~]# curl 172.7.22.12:9000
    4. <!DOCTYPE html>
    5. <html class="no-js" ng-app="netflix.spinnaker">
    6. <head>
    7. <title>Armory Platform | Spinnaker</title>
    8. <meta charset="utf-8">
    9. <meta name="description" content="">
    10. <meta name="viewport" content="width=device-width">
    11. <style>
    12. body {
    13. margin: 0;
    14. background-color: #f5f7fa;
    15. background-size: cover;
    16. background-position: 100% 100%;
    17. overflow: auto;
    18. [root@hdss7-21 ~]# kubectl get svc -n armory -owide |grep "armory-deck"
    19. armory-deck ClusterIP 192.168.180.28 <none> 80/TCP 26m app=armory-deck
    20. [root@hdss7-21 ~]# curl 192.168.180.28:80
    21. <!DOCTYPE html>
    22. <html class="no-js" ng-app="netflix.spinnaker">
    23. <head>
    24. <title>Armory Platform | Spinnaker</title>
    25. <meta charset="utf-8">
    26. <meta name="description" content="">
    27. <meta name="viewport" content="width=device-width">
    28. <style>
    29. body {
    30. margin: 0;
    31. background-color: #f5f7fa;
    32. background-size: cover;

    4.7 Deploy Nginx

    4.7.1 Prepare the docker image

    1. [root@hdss7-200 deck]# docker pull nginx:1.12.2
    2. [root@hdss7-200 deck]# docker image ls |grep nginx
    3. nginx 1.12.2 4037a5562b03 4 years ago 108MB
    4. [root@hdss7-200 deck]# docker tag 4037a5562b03 harbor.od.com:180/armory/nginx:v1.12.2
    5. [root@hdss7-200 deck]# docker push harbor.od.com:180/armory/nginx:v1.12.2

    4.7.2 Prepare the resource manifests

    [root@hdss7-200 deck]# mkdir /data/k8s-yaml/armory/nginx
    [root@hdss7-200 deck]# cd /data/k8s-yaml/armory/nginx
    Deployment:

    1. apiVersion: apps/v1
    2. kind: Deployment
    3. metadata:
    4. labels:
    5. app: armory-nginx
    6. name: armory-nginx
    7. namespace: armory
    8. spec:
    9. replicas: 1
    10. revisionHistoryLimit: 7
    11. selector:
    12. matchLabels:
    13. app: armory-nginx
    14. template:
    15. metadata:
    16. annotations:
    17. artifact.spinnaker.io/location: '"armory"'
    18. artifact.spinnaker.io/name: '"armory-nginx"'
    19. artifact.spinnaker.io/type: '"kubernetes/deployment"'
    20. moniker.spinnaker.io/application: '"armory"'
    21. moniker.spinnaker.io/cluster: '"nginx"'
    22. labels:
    23. app: armory-nginx
    24. spec:
    25. containers:
    26. - name: armory-nginx
    27. image: harbor.od.com:180/armory/nginx:v1.12.2
    28. imagePullPolicy: Always
    29. command:
    30. - bash
    31. - -c
    32. args:
    33. - bash /opt/spinnaker/config/default/fetch.sh nginx && nginx -g 'daemon off;'
    34. ports:
    35. - containerPort: 80
    36. name: http
    37. protocol: TCP
    38. - containerPort: 443
    39. name: https
    40. protocol: TCP
    41. - containerPort: 8085
    42. name: api
    43. protocol: TCP
    44. livenessProbe:
    45. failureThreshold: 3
    46. httpGet:
    47. path: /
    48. port: 80
    49. scheme: HTTP
    50. initialDelaySeconds: 180
    51. periodSeconds: 3
    52. successThreshold: 1
    53. timeoutSeconds: 1
    54. readinessProbe:
    55. failureThreshold: 3
    56. httpGet:
    57. path: /
    58. port: 80
    59. scheme: HTTP
    60. initialDelaySeconds: 30
    61. periodSeconds: 3
    62. successThreshold: 5
    63. timeoutSeconds: 1
    64. volumeMounts:
    65. - mountPath: /opt/spinnaker/config/default
    66. name: default-config
    67. - mountPath: /etc/nginx/conf.d
    68. name: custom-config
    69. imagePullSecrets:
    70. - name: harbor
    71. volumes:
    72. - configMap:
    73. defaultMode: 420
    74. name: custom-config
    75. name: custom-config
    76. - configMap:
    77. defaultMode: 420
    78. name: default-config
    79. name: default-config

    Service:

    1. apiVersion: v1
    2. kind: Service
    3. metadata:
    4. name: armory-nginx
    5. namespace: armory
    6. spec:
    7. ports:
    8. - name: http
    9. port: 80
    10. protocol: TCP
    11. targetPort: 80
    12. - name: https
    13. port: 443
    14. protocol: TCP
    15. targetPort: 443
    16. - name: api
    17. port: 8085
    18. protocol: TCP
    19. targetPort: 8085
    20. selector:
    21. app: armory-nginx
    1. [root@hdss7-200 nginx]# kubectl apply -f dp.yaml
    2. deployment.apps/armory-nginx created
    3. [root@hdss7-200 nginx]# kubectl apply -f svc.yaml
    4. service/armory-nginx created
    5. [root@hdss7-200 nginx]# kubectl apply -f ingress.yaml
    6. ingress.extensions/armory-nginx created
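    The apply above references an ingress.yaml that is not listed in this section. A plausible sketch, assuming the host spinnaker.od.com (added to DNS in the next step), the armory-nginx Service as backend, and the traefik ingress class used elsewhere in this series; the apiVersion matches the ingress.extensions output above:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: armory-nginx
      namespace: armory
      annotations:
        kubernetes.io/ingress.class: traefik   # assumed; matches the rest of this series
    spec:
      rules:
      - host: spinnaker.od.com
        http:
          paths:
          - path: /
            backend:
              serviceName: armory-nginx
              servicePort: 80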

    4.7.3 Configure the named DNS record

    [root@hdss7-11 ~]# vi /var/named/od.com.zone

    $ORIGIN od.com.
    $TTL 600        ; 10 minutes
    @               IN SOA dns.od.com. dnsadmin.od.com. (
                            2020010501 ; serial
                            10800      ; refresh (3 hours)
                            900        ; retry (15 minutes)
                            604800     ; expire (1 week)
                            86400      ; minimum (1 day)
                            )
                    NS   dns.od.com.
    $TTL 60 ; 1 minute
    dns                A    10.4.7.11
    harbor             A    10.4.7.200
    k8s-yaml           A    10.4.7.200
    traefik            A    10.4.7.10
    dashboard          A    10.4.7.10
    zk1                A    10.4.7.11
    zk2                A    10.4.7.12
    zk3                A    10.4.7.21
    jenkins            A    10.4.7.10
    dubbo-monitor      A    10.4.7.10
    demo               A    10.4.7.10
    config             A    10.4.7.10
    mysql              A    10.4.7.11
    portal             A    10.4.7.10
    zk-test            A    10.4.7.11
    zk-prod            A    10.4.7.12
    config-test        A    10.4.7.10
    config-prod        A    10.4.7.10
    demo-test          A    10.4.7.10
    demo-prod          A    10.4.7.10
    blackbox           A    10.4.7.10
    prometheus         A    10.4.7.10
    grafana            A    10.4.7.10
    km                 A    10.4.7.10
    kibana             A    10.4.7.10
    minio              A    10.4.7.10
    nginx              A    10.4.7.10
    spinnaker          A    10.4.7.10

    [root@hdss7-11 ~]# systemctl restart named
    [root@hdss7-11 ~]# dig -t A spinnaker.od.com @10.4.7.11 +short
    10.4.7.10
     

    4.7.4 Verification

    Visit http://spinnaker.od.com


    V. Using Spinnaker

    1. View applications

     Applications

    Click Applications; apollo and dubbo will appear, which corresponds to the test and prod namespaces being managed.

    If the page keeps loading, check whether gate or clouddriver has a problem.

    2. Create an application

    Let's do something drastic: in the test and prod namespaces, keep only the apollo manifests, delete the dubbo consumer and provider, then recreate the dubbo consumer and provider resource manifests through the Spinnaker UI and apply them.

    test: 

    prod: 

    After deleting, check Applications again. With no dubbo-related resources left in the test and prod namespaces, the entries should normally disappear from this page; sometimes, for unknown reasons, they remain even after a refresh, but this does not affect the following steps.

    Then click

    Check minio

    3. Create a Pipeline

    Click PIPELINES in the test0dubbo project in Spinnaker; it shows "No pipelines configured for this application".

    So click Configure a new pipeline to create the first pipeline. We start with the provider project dubbo-demo-service, because without a provider the consumer has nothing to consume.

    It displays the following

    It includes Triggers

     Add parameters

    Add notifications

    Click to add parameters

    # Add the following four parameters. This pipeline builds and releases dubbo-demo-service, so the default project name and image name are largely fixed.
    1. name: app_name
       required: true
       default: dubbo-demo-service
       description: project name in the Git repository
    2. name: git_ver
       required: true
       description: project version, commit ID, or branch
    3. name: image_name
       required: true
       default: app/dubbo-demo-service
       description: image name, repository/image
    4. name: add_tag
       required: true
       description: part of the tag, appended after git_ver, format YYYYmmdd_HHMM

     Click Save

     

    4. Create the Jenkins build stage

     At the top, click Add stage; we are adding a stage to the pipeline.

    In Type, find Jenkins.

    Select Jenkins-admin. Make sure http://jenkins.od.com is reachable and that the credentials are username: admin, password: admin123, because this is what is configured in custom-config.yaml.

    Spinnaker automatically discovers the two Jenkins pipelines; since we are building dubbo-demo-service, we use dubbo-demo.

    Click dubbo-demo and the Jenkins parameterized build fields are shown directly.

    Note: the Jenkins pipeline already defines add_tag, and in step one we also added an add_tag parameter, so here we can reference the add_tag from step one directly. The benefit: for the same project, only add_tag, git_ver, app_name and image_name change from run to run; everything else stays the same.

    In a test environment Jenkins is usually part of the pipeline, while in production this Jenkins step is usually skipped.

    ${ parameters.add_tag }
    ${ parameters.app_name }
    https://gitee.com/jerry00713/dubbo-demo-service.git
    ${ parameters.git_ver }
    ${ parameters.image_name }
    ./dubbo-server/target
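
    For reference, these six values fill, in order, the parameterized-build fields of the dubbo-demo Jenkins job built earlier in this series. The labels below are descriptive only, not the literal Jenkins parameter names:

    # Approximate mapping of the values above to the dubbo-demo job's build fields
    #   add_tag    -> ${ parameters.add_tag }                               # image tag suffix
    #   app_name   -> ${ parameters.app_name }                              # project name
    #   git repo   -> https://gitee.com/jerry00713/dubbo-demo-service.git   # source repository
    #   git_ver    -> ${ parameters.git_ver }                               # branch / commit ID
    #   image_name -> ${ parameters.image_name }                            # harbor repository/image
    #   target dir -> ./dubbo-server/target                                 # path of the built jar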

    5. Run the Jenkins build

    After clicking Save in the previous step, go back to the test0dubbo Pipelines page; the pipeline has been created.

    Click to run this pipeline.

    git_ver: apollo   # use the latest commit on the apollo branch
    add_tag: usually a timestamp is used as the tag
    Click Run

    Spinnaker now calls Jenkins and the build starts.

    Note: if the Jenkins API returns a 403 error, see the fix in the post "Spinnaker调用Jenkins API 返回403错误" (Jerry00713's blog, CSDN).

    Click the number after build, e.g. #8; this #8 is Jenkins build number 8.

    Clicking #8 jumps to Jenkins.

    Wait for SUCCESS.

    Check test0dubbo.

    Check whether the image is in Harbor.

     6. Create the provider resources

    We already have Jenkins, and so far the steps above do not show much advantage over Jenkins alone. Spinnaker's value is that it can also create Deployment resources. Click Configure.

    Click Add stage again.

    Select Deploy.

    Click Add server group.

    This is where you build the Deployment by pointing and clicking.

    Basic Settings: the container's basic information, such as the image to use.

    1. Account: select the cluster-admin account created earlier (cluster administrator permissions); it is used to create the Deployment.

    2. namespace: select test, i.e. release into the test environment; prod can also be chosen.

    3. Stack can be left empty; it is typically used for grey/canary releases, e.g. a "c" (Canary).

    4. Detail: the project name; it is recommended to keep it consistent with the project name in Gitee or GitLab.

    5. Containers: enter the image in Harbor.

        Since we use the ELK sidecar pattern, also load filebeat.

     6. Init Containers: an advanced feature; start a container before the business container starts.

    7. Strategy: the release strategy, e.g. canary or rolling update. We use None for now, because Kayenta is not deployed and the other strategies may not work.

    Deployment: this part configures the Deployment.

    1. Check the box, which means a Deployment will be generated.

    2. Strategy: since None was chosen earlier, the update strategy falls back to the defaults (RollingUpdate rolling upgrade or Recreate; RollingUpdate is the Deployment default).

    3. History Limit: 7, the number of old revisions kept after updates.
    4. Max Surge: maximum number of extra pods during an update; Max Unavailable: maximum number of unavailable pods.

    Replicas: the number of replicas.

    1. Capacity: the replica count, set via Capacity.

    2. Autoscaling: automatic scaling; set the minimum and maximum replica counts based on CPU usage, similar to an HPA, and it depends on the metrics service (see the sketch below).
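
    In plain Kubernetes terms, the Autoscaling option corresponds roughly to an HPA like the one below. This is a minimal sketch with hypothetical names and thresholds, and it requires metrics-server to be running:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: dubbo-demo-service          # hypothetical name
      namespace: test
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: dubbo-demo-service
      minReplicas: 1                    # "min" in the Spinnaker form
      maxReplicas: 3                    # "max" in the Spinnaker form
      targetCPUUtilizationPercentage: 80   # scale out when average CPU exceeds 80%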

    Volume Sources: mounts.

    1. Volume Sources: the mount type. As described in the log-collection section, filebeat and the business logs need to be mounted together through an emptyDir{} volume (see the sketch below).

    2. Config Map Name: which ConfigMap to mount into this Deployment.
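
    A minimal sketch of what these two volume types translate to in the generated pod template, following the filebeat sidecar pattern used earlier in this series (container names, mount paths and the ConfigMap name are illustrative assumptions):

    # Illustrative pod-template fragment for Volume Sources and Config Map Name
    spec:
      containers:
      - name: dubbo-demo-service          # business container (hypothetical name)
        volumeMounts:
        - name: logm
          mountPath: /opt/logs            # business logs are written here
      - name: filebeat                    # log-collection sidecar
        volumeMounts:
        - name: logm
          mountPath: /logm                # filebeat reads the same logs
      volumes:
      - name: logm
        emptyDir: {}                      # Volume Sources: emptyDir shared by both containers
      - name: custom-config               # Config Map Name: mount a ConfigMap as a volume
        configMap:
          name: custom-config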

    Advanced Settings: advanced configuration.

    1. DNS Policy:
         ClusterFirst: resolve through the cluster DNS (coredns) first
         Default: inherit the DNS configuration of the node the pod runs on
         ClusterFirstWithHostNet: for pods on the host network that still need cluster DNS
    2. Service Account Name: the service account, "default" by default, unless the container needs to access cluster resources.
    3. Termination Grace Period: in seconds, how long to wait for the container to shut down gracefully before it is force-killed.

    4. Replica Set Annotations: add annotations on the controller (the ReplicaSet created by the Deployment).


    5. Pod Annotations: add annotations in the pod template; here they are used to hook into Prometheus monitoring.

     ​​

    Key: blackbox_scheme         Value: tcp
    Key: blackbox_port           Value: 20880      # the port defined in Apollo
    Key: prometheus_io_scrape    Value: true       # collect the JVM metrics
    Key: prometheus_io_path      Value: /          # collect the JVM metrics
    Key: prometheus_io_port      Value: 12346      # collect the JVM metrics

    # Annotations to hook the dubbo-service provider into Prometheus monitoring
    "annotations": {
        "prometheus_io_scrape": "true",
        "prometheus_io_port": "12346",
        "prometheus_io_path": "/",
        "blackbox_port": "20880",
        "blackbox_scheme": "tcp"
    }

    6. Node Selector: manually pin scheduling to a specific node.
    7. Tolerations: tolerations for node taints (see the sketch after this list).
    Operator: Exists (the key exists) or Equal (key and value match)
    Effect: the effect of the taint being tolerated

    • NoSchedule: do not schedule new pods onto the tainted node
    • PreferNoSchedule: try not to schedule; if every node carries the taint, the scheduler may still place the pod rather than leave it pending
    • NoExecute: do not schedule, and evict pods already running on the node
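
    As shown below, these scheduling options map onto standard pod-spec fields; the node label and taint key/value are hypothetical:

    # Illustrative pod-spec fragment for Node Selector and Tolerations
    spec:
      nodeSelector:
        kubernetes.io/hostname: hdss7-21.host.com   # hypothetical: pin to one node
      tolerations:
      - key: "dedicated"                            # hypothetical taint key
        operator: "Equal"                           # Equal: key and value must match; Exists: key only
        value: "dubbo"
        effect: "NoSchedule"                        # NoSchedule / PreferNoSchedule / NoExecute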

    Container: detailed container configuration

    1. Image: the container image.
    2. Pull Policy: the image pull policy.
    3. Commands: the startup command.
    4. Args: arguments to the startup command.
    5. Environment Variables:
     Name: JAR_BALL
     Source: Explicit
     Value: dubbo-server.jar
     Name: C_OPTS
     Source: Explicit
     Value: -Denv=fat -Dapollo.meta=http://config-test.od.com
    6. Resources:
        CPU        Requests: resources requested at startup; Limits: the maximum allowed
        Memory     Requests: resources requested at startup; Limits: the maximum allowed

    7. Ports: ports exposed by the container.
    8. Volume Mounts: mounts, paired with Volume Sources.
    9. Probes: probes.
    10. Security Context: which user the container runs as; 0 means root, because root's uid is 0.
    11. SELinux Options: SELinux settings, not used here.
    12. Lifecycle hooks: lifecycle hook functions (see the sketch below).

    • PostStart: runs right after the container starts; it can run an exec (e.g. a script) or an http GET.
    • PreStop: the last command run before the container is stopped.
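
    Put together, the container options above correspond roughly to a container spec like the following. This is only a sketch: the image tag and the lifecycle commands are hypothetical, while the environment values follow the ones shown above:

    # Illustrative container spec covering env, resources, security context and lifecycle hooks
    containers:
    - name: dubbo-demo-service
      image: harbor.od.com:180/app/dubbo-demo-service:apollo_20210101_1200   # hypothetical tag
      imagePullPolicy: IfNotPresent
      env:
      - name: JAR_BALL
        value: dubbo-server.jar
      - name: C_OPTS
        value: "-Denv=fat -Dapollo.meta=http://config-test.od.com"
      resources:
        requests:
          cpu: 100m             # reserved at startup
          memory: 256Mi
        limits:
          cpu: 500m             # hard ceiling
          memory: 512Mi
      ports:
      - containerPort: 20880
        protocol: TCP
      securityContext:
        runAsUser: 0            # 0 = root
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "echo started >> /tmp/lifecycle.log"]   # hypothetical
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 5"]                              # hypothetical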

    Summary: the generated manifest

     ​​

     

    Then click Add to save; one stage is generated.

    Problem: our principle is to configure once and avoid repeated manual work. In step one we defined add_tag, git_ver, app_name and image_name as the only values that change for each run, and in step two the Jenkins build reads them as parameters; this third step should do the same, so we modify the Deployment stage.

     1. What if you find the image address cannot be edited?

    Use Edit stage as JSON: click Edit stage as JSON.

    You will find it is just our resource manifest in JSON form.

    # Change the following fields in the JSON into the parameterized form
    "imageId": "harbor.od.com:180/${parameters.image_name}:${parameters.git_ver}_${parameters.add_tag}",
    "registry": "harbor.od.com:180",
    "repository": "${parameters.image_name}",
    "tag": "${parameters.git_ver}_${parameters.add_tag}"

    Finally click Update stage.

    Then click Save Changes and run the pipeline to apply the Deployment.

    The pipeline now has two stages.

    Click the first stage, then the number after build, e.g. #9; this #9 is the Jenkins build number.

    Wait for SUCCESS.

    The second stage then runs right after.

    7. How to modify resources

     If the Deployment resource was written wrong and the run reports errors, how do you fix it? You can stop the run.

     Click Stop.

    Go into Configure.

    Note: if you want to modify anything involving add_tag, git_ver, app_name or image_name, you cannot change it through Edit, because these have already been turned into parameterized variables; you can only use Edit stage as JSON.

    After fixing, rerun the pipeline; note that it can only be rerun from the beginning.

    8. Create the consumer resources

    In the dubbo0demo application, add another pipeline for the dubbo consumer project; click add.

    Pipeline name: dubbo-demo-web

    Click to add parameters.

    Click to add the Jenkins stage and select demo-tomcat.

     Add the parameterized build values.

    ${ parameters.add_tag }
    ${ parameters.app_name }
    git@gitee.com:jerry00713/dubbo-demo-web-tomcat.git
    ${ parameters.git_ver }
    ${ parameters.image_name }

    Run the parameterized build now. Why not wait and configure it together with the deploy resources? Doing it separately makes it easier to catch problems with the parameterized build early.

     

     

    Note: the consumer project needs Service and Ingress resources, so configure those two first and the Deployment afterwards, because when configuring the Deployment you can choose which Service to bind to.

     

    Detail: the project name; it is combined with the application name to form this resource's name. Note that special characters are not allowed and the name cannot exceed 24 characters.

     

     

    Note: don't forget to delete the previously configured dubbo consumer Service resource.

    Add the Ingress resource

     

    Choose which Service resource to bind to

     

     

     

     

     

     Note: don't forget to delete the previously configured dubbo consumer Ingress resource.
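
    For reference, the Ingress created through the UI here should be roughly equivalent to the manifest below; the host follows demo-test.od.com used in the final check, while the resource name and the 8080 service port are assumptions based on the tomcat consumer used earlier in this series:

    # Hypothetical equivalent of the consumer Ingress created via Spinnaker
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: dubbo-demo-web            # assumed resource name
      namespace: test
      annotations:
        kubernetes.io/ingress.class: traefik
    spec:
      rules:
      - host: demo-test.od.com
        http:
          paths:
          - path: /
            backend:
              serviceName: dubbo-demo-web
              servicePort: 8080       # assumed tomcat service port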

    Configure the Deployment

     

     

     Select the associated Service; this is the binding we deliberately left unset when creating the Service resource.

     

    # Annotations to hook the dubbo-consumer service into Prometheus monitoring
    "annotations": {
        "blackbox_scheme": "http",
        "blackbox_path": "/hello?name=health",
        "blackbox_port": "8080",
        "prometheus_io_scrape": "true",
        "prometheus_io_port": "12346",
        "prometheus_io_path": "/"
    }

     Name:C_OPTS
     Source:  Explicit
     Value: -Denv=fat -Dapollo.meta=http://config-test.od.com

    Volume Mounts

     

    filebeat

     

    # Change the following fields in the JSON into the parameterized form
    "imageId": "harbor.od.com:180/${parameters.image_name}:${parameters.git_ver}_${parameters.add_tag}",
    "registry": "harbor.od.com:180",
    "repository": "${parameters.image_name}",
    "tag": "${parameters.git_ver}_${parameters.add_tag}"

    9. Check

    Visit http://demo-test.od.com/hello?name=tomcat to check that everything works, and check Kibana.

  • Original article: https://blog.csdn.net/Jerry00713/article/details/120266638