【Cloud Native | Kubernetes Series】-- GitOps Continuous Delivery and Tekton Triggers


    Tekton Triggers

    The Tekton sub-projects

    • Tekton Pipelines

      • Task

        • Steps: the concrete actions, i.e. commands or scripts executed inside a container
          • command
          • script
      • TaskRun

        Runs as a single pod; each step in that pod runs as its own container, executed sequentially in the defined order.

        The scripts or programs a user specifies via command or script are in fact launched through the entrypoint.

      • Pipeline

        Combines one or more Tasks.

      • PipelineRun

        Creates a concrete run instance of a given Pipeline.

        Runs the Tasks defined in the Pipeline as TaskRuns, in the order the user configured.

    • Tekton Triggers

    • Tekton Dashboard

    • Tekton CLI
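    The Task/TaskRun relationship above can be sketched minimally as follows (the Task name hello is the demo Task used later in this series; the TaskRun manifest itself is illustrative):

    ```yaml
    apiVersion: tekton.dev/v1beta1
    kind: TaskRun
    metadata:
      generateName: hello-run-     # a unique run name is generated per execution (use kubectl create, not apply)
    spec:
      taskRef:
        name: hello                # the Task to run; the run becomes one pod, one container per step
    ```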

    A key element of making Tasks modular: Parameters. Define a parameter on the Task and reference it in a Step,

    inside the step's command or script.

    Data passing:

    Step to Step, Task to Task

    Results: its backing store is created and managed by the pipeline itself; it can only pass data smaller than 4096 bytes

    Workspace: a shared workspace, exposed as a filesystem
    Its backing storage must be defined by the user when running the TaskRun or PipelineRun
    Can be shared across Steps and across Tasks
    The volumes used are ordinary Kubernetes volumes:
    emptyDir: lifecycle tied to the Pod
    pvc/pv: independent k8s resources with their own lifecycle; usable across Tasks
    PV provisioning:
    Static PV
    Dynamic PV: requires the backend storage service to create storage space on demand, matching the characteristics declared in the corresponding PVC
    Ceph RBD
    attached to k8s through the CSI interface
    nfs-csi driver
    PVC: Claim
    a namespaced resource

    ConfigMap: provides read-only configuration data

    Secrets: provide sensitive data (credentials and the like)
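    ConfigMaps and Secrets can themselves be bound as (read-only) workspaces. A minimal TaskRun sketch, assuming a ConfigMap named my-config exists (that name is illustrative; workspace-demo and its messages workspace are defined in section 3 below):

    ```yaml
    apiVersion: tekton.dev/v1beta1
    kind: TaskRun
    metadata:
      generateName: workspace-demo-run-
    spec:
      taskRef:
        name: workspace-demo
      workspaces:
      - name: messages
        configMap:
          name: my-config    # mounted read-only into the step containers
    ```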

    1. Data Sharing Across Pipelines and Tasks

    1.1 Workspace

    Declared by the Task; bound to a volume at TaskRun time and used as a filesystem
    Typically a configmap, secret, emptyDir, or a static or dynamic PVC
    emptyDir shares the Pod's lifecycle and can only share data between Steps of the same Task
    Sharing data across Tasks requires a PVC

    1.2 Results

    Declared by the Task
    A Step writes the result to a temporary file (under /tekton/results/), which later Steps of the same Task, or Steps of subsequent Tasks, can then reference, e.g. "$(results.<name>.path)"
    Implemented by Tekton's Results API; only usable for sharing small pieces of data under 4096 bytes
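    A minimal sketch of a Task declaring a result and writing it (the task name results-demo, the result name commit-id, and the value written are all illustrative):

    ```yaml
    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: results-demo
    spec:
      results:
      - name: commit-id                  # declared result, at most 4096 bytes
        description: dummy commit id produced by the step
      steps:
      - name: print-commit
        image: alpine:3.16
        script: |
          #!/bin/sh
          # Tekton materializes the result file under /tekton/results/
          printf 'abc123' > $(results.commit-id.path)
    ```

    In a Pipeline, a later task would consume the value as $(tasks.results-demo.results.commit-id).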

    2. Declaring a Workspace on a Task

    Workspaces are defined under the spec.workspaces field.

    Each entry nests the following fields:

    - name: required; the workspace's unique identifier
    - description: informational; usually states what the workspace is for
    - readOnly: whether it is read-only; defaults to false (configmap and secret are always read-only)
    - optional: whether it is optional; defaults to false
    - mountPath: the mount path inside each step; defaults to /workspace/<name>, where <name> is the workspace's name
    

    Available workspace variables:

    • $(workspaces.<name>.path): the mount path of the workspace identified by name; empty for an optional workspace that the TaskRun did not declare

      There are two ways to reference the path:

      • /workspace/<name>
      • $(workspaces.<name>.path)
    • $(workspaces.<name>.bound): true or false, indicating whether the workspace has been bound; always true for a workspace whose optional is false

    • $(workspaces.<name>.claim): the name of the PVC used by the workspace identified by name; empty for non-PVC volume types

    • $(workspaces.<name>.volume): the name of the volume used by the workspace identified by name

    3. EmptyDir: Single-Step Test

    # cat 01-task-workspace-demo.yaml 
    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: workspace-demo
    spec:
      params:
      - name: target
        type: string
        default: pana
      steps:
        - name: write-message
          image: alpine:3.16
          script: |
            #!/bin/sh
            set -xe
            if [ "$(workspaces.messages.bound)" == "true" ] ; then
              echo "Hello $(params.target)" > $(workspaces.messages.path)/message
              cat $(workspaces.messages.path)/message
            fi
            echo "Mount Path: $(workspaces.messages.path)"
            echo "Volume Name: $(workspaces.messages.volume)"
      workspaces:
        - name: messages
          description: |
            The folder where we write the message to. If no workspace
            is provided then the message will not be written.
          optional: true
          mountPath: /data
    

    Create the task

    root@k8s-master-01:/apps/tekton-and-argocd-in-practise/03-tekton-advanced# kubectl apply -f 01-task-workspace-demo.yaml 
    task.tekton.dev/workspace-demo created
    root@k8s-master-01:/apps/tekton-and-argocd-in-practise/03-tekton-advanced# tkn task ls
    NAME             DESCRIPTION   AGE
    hello                          3 days ago
    hello-params                   1 day ago
    logger                         22 hours ago
    multiple                       23 hours ago
    script                         23 hours ago
    workspace-demo                 16 seconds ago
    

    Run the task

    root@k8s-master-01:~# tkn task start workspace-demo --showlog \
    > -p target="www.baidu.com" \	## passes a value to the target param; without it, the default "pana" applies
    > -w name=messages,emptyDir=""		## binds the messages workspace to an emptyDir; emptyDir takes no value, hence the empty ""
    TaskRun started: workspace-demo-run-qwsnh
    Waiting for logs to be available...
    [write-message] + '[' true '==' true ]
    [write-message] + echo 'Hello www.baidu.com'
    [write-message] + cat /data/message
    [write-message] Hello www.baidu.com
    [write-message] + echo 'Mount Path: /data'
    [write-message] Mount Path: /data
    [write-message] + echo 'Volume Name: ws-4mljp'		## the emptyDir volume is named ws-4mljp
    [write-message] Volume Name: ws-4mljp
    
    root@k8s-master-01:~# kubectl describe pods workspace-demo-run-qwsnh-pod|grep 4mljp
          /data from ws-4mljp (rw)
      ws-4mljp:
    root@k8s-master-01:~# 
    

    4. EmptyDir: Sharing Files Across Multiple Steps

    4.1 Public git repository

    One caveat: the repository must be public; otherwise set up passwordless access, or embed credentials in the URL as http://user:password@repo.

    root@k8s-master-01:/apps/tekton-and-argocd-in-practise/03-tekton-advanced# cat 02-task-with-workspace.yaml
    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: source-lister
    spec:
      params:
      - name: git-repo
        type: string
        description: Git repository to be cloned
      workspaces:
      - name: source
      steps:
      - name: git-clone
        image: alpine/git:v2.36.1
        script: git clone -v $(params.git-repo) $(workspaces.source.path)/source
      - name: list-files
        image: alpine:3.16
        command:
        - /bin/sh
        args:
        - '-c'
        - 'cat $(workspaces.source.path)/source/issue'
    

    Create the task

    root@k8s-master-01:/apps/tekton-and-argocd-in-practise/03-tekton-advanced# kubectl apply -f 02-task-with-workspace.yaml 
    task.tekton.dev/source-lister created
    root@k8s-master-01:/apps/tekton-and-argocd-in-practise/03-tekton-advanced# tkn task ls |grep source-lister
    source-lister                  49 seconds ago
    

    Run the task

    root@k8s-master-01:~# tkn task start source-lister --showlog -p git-repo=http://192.168.31.199/tekton/app01.git -w name=source,emptyDir=""
    TaskRun started: source-lister-run-b8n4d
    Waiting for logs to be available...
    [git-clone] Cloning into '/workspace/source/source'...
    [git-clone] POST git-upload-pack (175 bytes)
    [git-clone] POST git-upload-pack (217 bytes)
    
    [list-files] Ubuntu 18.04.6 LTS \n \l
    

    4.2 Private git repository

    Note that special characters in the password must be URL-encoded:

    !    #    $     &    '    (    )    *    +    ,    /    :    ;    =    ?    @    [    ]
    %21  %23  %24   %26  %27  %28  %29  %2A  %2B  %2C  %2F  %3A  %3B  %3D  %3F  %40  %5B  %5D
    
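    Rather than consulting the table, the encoding can be generated with Python's standard library; a small sketch using the password from the example below (Pana#infra8):

    ```python
    from urllib.parse import quote

    # Percent-encode every reserved character in the password (safe="" means
    # nothing is exempt from encoding, not even "/").
    password = "Pana#infra8"
    encoded = quote(password, safe="")
    print(encoded)  # Pana%23infra8

    # Embed the encoded password in the clone URL.
    repo_url = f"http://root:{encoded}@192.168.31.199/wework/app1.git"
    print(repo_url)
    ```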

    The result:

    root@k8s-master-01:~# tkn task start source-lister --showlog \
    > -p git-repo=http://root:Pana%23infra8@192.168.31.199/wework/app1.git \
    > -w name=source,emptyDir=""
    TaskRun started: source-lister-run-jgrx6
    Waiting for logs to be available...
    [git-clone] Cloning into '/workspace/source/source'...
    [git-clone] POST git-upload-pack (175 bytes)
    [git-clone] POST git-upload-pack (217 bytes)
    
    [list-files] Ubuntu 18.04.6 LTS \n \l
    

    Because the emptyDir is recreated on every run, data cannot be persisted this way; note that the volume changes on every execution:

    root@k8s-master-01:~# tkn task start source-lister --showlog -p git-repo=http://root:Pana%23infra8@192.168.31.199/wework/app1.git -w name=source,emptyDir=""
    TaskRun started: source-lister-run-h95rn
    Waiting for logs to be available...
    [git-clone] Cloning into '/workspace/source/source'...
    [git-clone] POST git-upload-pack (175 bytes)
    [git-clone] POST git-upload-pack (217 bytes)
    
    [list-files] Volume Name: ws-pnk58
    
    root@k8s-master-01:~# tkn task start source-lister --showlog -p git-repo=http://root:Pana%23infra8@192.168.31.199/wework/app1.git -w name=source,emptyDir=""
    TaskRun started: source-lister-run-kt2gc
    Waiting for logs to be available...
    [git-clone] Cloning into '/workspace/source/source'...
    [git-clone] POST git-upload-pack (175 bytes)
    [git-clone] POST git-upload-pack (217 bytes)
    
    [list-files] Volume Name: ws-qr666
    

    5. Pipeline Example

    # cat 03-pipeline-workspace.yaml 
    apiVersion: tekton.dev/v1beta1
    kind: Pipeline			# the resource kind is Pipeline
    metadata:
      name: pipeline-source-lister		# name of the Pipeline
    spec:
      workspaces:
      - name: codebase			# Pipeline-level workspace
      params:
      - name: git-url			# Pipeline-level parameter
        type: string
        description: Git repository url to be cloned
      tasks:
      - name: git-clone				# the name this Pipeline gives the task
        taskRef:					# reference an existing Task
          name: source-lister		# the source-lister Task created earlier
        workspaces:
        - name: source				# source-lister's workspace is named source
          workspace: codebase		# map the Pipeline's codebase workspace onto it
        params:
        - name: git-repo			
          value: $(params.git-url)	# pass the Pipeline's git-url value to the task's git-repo param
    

    Deploy the pipeline

    # kubectl apply -f 03-pipeline-workspace.yaml 
    pipeline.tekton.dev/pipeline-source-lister created
    
    root@k8s-master-01:~# tkn pipeline ls
    NAME                     AGE              LAST RUN                           STARTED        DURATION   STATUS
    pipeline-demo            1 day ago        pipeline-demo-run-20221025         23 hours ago   17s        Succeeded
    pipeline-source-lister   11 seconds ago   ---                                ---            ---        ---
    pipeline-task-ordering   22 hours ago     pipeline-task-ordering-run-fzb5j   22 hours ago   3m5s       Succeeded
    pipeline-with-params     23 hours ago     pipeline-with-params-run-1025-2    22 hours ago   7s         Succeeded
    
    

    Run the pipeline

    root@k8s-master-01:~# tkn pipeline start pipeline-source-lister --showlog \
    > -p git-url=http://192.168.31.199/tekton/app01.git \
    > -w name=codebase,emptyDir=""
    PipelineRun started: pipeline-source-lister-run-blc9x
    Waiting for logs to be available...
    [git-clone : git-clone] Cloning into '/workspace/source/source'...
    [git-clone : git-clone] POST git-upload-pack (175 bytes)
    [git-clone : git-clone] POST git-upload-pack (217 bytes)
    
    [git-clone : list-files] Ubuntu 18.04.6 LTS \n \l
    

    The PipelineRun generated by the same start command, shown with --dry-run:

    root@k8s-master-01:~# tkn pipeline start pipeline-source-lister --showlog -p git-url=http://192.168.31.199/tekton/app01.git -w name=codebase,emptyDir="" --dry-run
    apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      creationTimestamp: null
      generateName: pipeline-source-lister-run-
      namespace: default
    spec:
      params:
      - name: git-url
        value: http://192.168.31.199/tekton/app01.git
      pipelineRef:
        name: pipeline-source-lister
      workspaces:
      - emptyDir: {}
        name: codebase
    status: {}
    

    Instead, a volumeClaimTemplate should be used to bind a PVC dynamically.

    6. PVC Sharing via nfs-csi

    # Project home: https://github.com/kubernetes-csi/csi-driver-nfs
    

    6.1 Enable the NFS service

    On the NFS server:

    # cat /etc/exports
    /data/k8s *(rw,sync,no_root_squash)
    root@haproxy-1:~# systemctl enable --now nfs-server.service
    

    Run on all node machines, or label the nodes and use a nodeSelector:

    # mkdir /nfs-vol && mount -t nfs 192.168.31.109:/data/k8s/nfs /nfs-vol
    

    6.2 Run the NFS server inside the k8s cluster

    root@k8s-master-01:~# kubectl create ns nfs
    namespace/nfs created
    root@k8s-master-01:~# kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/nfs-provisioner/nfs-server.yaml -n nfs
    service/nfs-server created
    deployment.apps/nfs-server created
    root@k8s-master-01:~# kubectl get deployment -n nfs
    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    nfs-server   1/1     1            1           55s
    root@k8s-master-01:~# kubectl get svc -n nfs
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)            AGE
    nfs-server   ClusterIP   10.200.73.254   <none>        2049/TCP,111/UDP   58s
    root@k8s-master-01:~# 
    

    6.3 Install NFS CSI v3.1.0

    Clone the repository for an offline install:

    # git clone https://github.com/kubernetes-csi/csi-driver-nfs.git
    Cloning into 'csi-driver-nfs'...
    remote: Enumerating objects: 21474, done.
    remote: Counting objects: 100% (880/880), done.
    remote: Compressing objects: 100% (484/484), done.
    remote: Total 21474 (delta 423), reused 734 (delta 364), pack-reused 20594
    Receiving objects: 100% (21474/21474), 22.10 MiB | 114.00 KiB/s, done.
    Resolving deltas: 100% (11719/11719), done.
    

    Pull the images (relayed from GitHub via an Alibaba Cloud registry first):

    docker pull registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:csi-provisioner
    docker pull registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:livenessprobe
    docker pull registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:nfsplugin
    docker pull registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:csi-node-driver-registrar
    docker tag registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:csi-provisioner harbor.intra.com/csi-driver-nfs/csi-provisioner:v2.2.2
    docker push harbor.intra.com/csi-driver-nfs/csi-provisioner:v2.2.2
    docker tag registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:livenessprobe harbor.intra.com/csi-driver-nfs/livenessprobe:v2.5.0
    docker push harbor.intra.com/csi-driver-nfs/livenessprobe:v2.5.0
    docker tag registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:nfsplugin harbor.intra.com/csi-driver-nfs/nfsplugin:v3.1.0
    docker push harbor.intra.com/csi-driver-nfs/nfsplugin:v3.1.0
    docker tag registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:csi-node-driver-registrar harbor.intra.com/csi-driver-nfs/csi-node-driver-registrar:v2.4.0
    docker push harbor.intra.com/csi-driver-nfs/csi-node-driver-registrar:v2.4.0
    
    

    Replace the image references:

    sed -i 's#registry.k8s.io/sig-storage#harbor.intra.com/csi-driver-nfs#g' *.yaml
    

    The result after replacement:

    root@k8s-master-01:/data/csi-driver-nfs/deploy/v3.1.0# grep image *.yaml
    csi-nfs-controller.yaml:          image: registry.k8s.io/sig-storage/csi-provisioner:v2.2.2
    csi-nfs-controller.yaml:          image: registry.k8s.io/sig-storage/livenessprobe:v2.5.0
    csi-nfs-controller.yaml:          image: registry.k8s.io/sig-storage/nfsplugin:v3.1.0
    csi-nfs-controller.yaml:          imagePullPolicy: IfNotPresent
    csi-nfs-node.yaml:          image: registry.k8s.io/sig-storage/livenessprobe:v2.5.0
    csi-nfs-node.yaml:          image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.4.0
    csi-nfs-node.yaml:          image: registry.k8s.io/sig-storage/nfsplugin:v3.1.0
    csi-nfs-node.yaml:          imagePullPolicy: "IfNotPresent"
    root@k8s-master-01:/data/csi-driver-nfs/deploy/v3.1.0# sed -i 's#registry.k8s.io/sig-storage#harbor.intra.com/csi-driver-nfs#g' *.yaml
    root@k8s-master-01:/data/csi-driver-nfs/deploy/v3.1.0# grep image *.yaml
    csi-nfs-controller.yaml:          image: harbor.intra.com/csi-driver-nfs/csi-provisioner:v2.2.2
    csi-nfs-controller.yaml:          image: harbor.intra.com/csi-driver-nfs/livenessprobe:v2.5.0
    csi-nfs-controller.yaml:          image: harbor.intra.com/csi-driver-nfs/nfsplugin:v3.1.0
    csi-nfs-controller.yaml:          imagePullPolicy: IfNotPresent
    csi-nfs-node.yaml:          image: harbor.intra.com/csi-driver-nfs/livenessprobe:v2.5.0
    csi-nfs-node.yaml:          image: harbor.intra.com/csi-driver-nfs/csi-node-driver-registrar:v2.4.0
    csi-nfs-node.yaml:          image: harbor.intra.com/csi-driver-nfs/nfsplugin:v3.1.0
    csi-nfs-node.yaml:          imagePullPolicy: "IfNotPresent"
    

    Run the local install. One small gotcha: installing v3.1.0 complains that ./deploy/v3.1.0/rbac-csi-nfs.yaml does not exist; just copy ./deploy/v3.1.0/rbac-csi-nfs-controller.yaml to that path.

    root@k8s-master-01:/data/csi-driver-nfs# ./deploy/install-driver.sh v3.1.0 local
    use local deploy
    Installing NFS CSI driver, version: v3.1.0 ...
    error: the path "./deploy/v3.1.0/rbac-csi-nfs.yaml" does not exist
    root@k8s-master-01:/data/csi-driver-nfs# cp ./deploy/v3.1.0/rbac-csi-nfs-controller.yaml ./deploy/v3.1.0/rbac-csi-nfs.yaml
    root@k8s-master-01:/data/csi-driver-nfs# ./deploy/install-driver.sh v3.1.0 local
    use local deploy
    Installing NFS CSI driver, version: v3.1.0 ...
    serviceaccount/csi-nfs-controller-sa unchanged
    clusterrole.rbac.authorization.k8s.io/nfs-external-provisioner-role unchanged
    clusterrolebinding.rbac.authorization.k8s.io/nfs-csi-provisioner-binding unchanged
    csidriver.storage.k8s.io/nfs.csi.k8s.io created
    deployment.apps/csi-nfs-controller created
    daemonset.apps/csi-nfs-node configured
    NFS CSI driver installed successfully.
    

    The controller pods under kube-system are now running:

    root@k8s-master-01:/data/csi-driver-nfs# kubectl get pods -n kube-system 
    NAME                                       READY   STATUS    RESTARTS         AGE
    calico-kube-controllers-76586bcfb6-tf6kg   1/1     Running   7 (4d9h ago)     43d
    calico-node-48nd2                          1/1     Running   4 (52d ago)      134d
    calico-node-6csj7                          1/1     Running   8 (4d22h ago)    134d
    calico-node-brphr                          1/1     Running   34 (107s ago)    182d
    calico-node-jd8xr                          1/1     Running   35 (4d22h ago)   134d
    calico-node-pl8sz                          1/1     Running   33 (95s ago)     85d
    calico-node-szqd6                          1/1     Running   11 (4d22h ago)   62d
    calico-node-xzfgw                          1/1     Running   35 (3d23h ago)   182d
    coredns-55556fc4d7-92ptb                   1/1     Running   18 (95s ago)     43d
    csi-nfs-controller-7d758f5b6b-6847h        3/3     Running   0                5m35s
    csi-nfs-controller-7d758f5b6b-k7m9z        3/3     Running   0                5m35s
    kube-state-metrics-8444bbc459-mw4zk        1/1     Running   13 (4d9h ago)    43d
    metrics-server-5b5f4d4dd4-m7xc9            1/1     Running   19 (4d9h ago)    40d
    

    Deploy csi-nfs-node

    Prepare the images first:

    docker pull registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:livenessprobe
    docker pull registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:nfsplugin
    docker pull registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:csi-node-driver-registrar
    docker tag registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:livenessprobe harbor.intra.com/csi-driver-nfs/livenessprobe:v2.7.0
    docker push harbor.intra.com/csi-driver-nfs/livenessprobe:v2.7.0
    docker tag registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:nfsplugin harbor.intra.com/csi-driver-nfs/nfsplugin:canary
    docker push harbor.intra.com/csi-driver-nfs/nfsplugin:canary
    docker tag registry.cn-shanghai.aliyuncs.com/qiuqin/csi-driver-nfs:csi-node-driver-registrar harbor.intra.com/csi-driver-nfs/csi-node-driver-registrar:v2.5.1
    docker push harbor.intra.com/csi-driver-nfs/csi-node-driver-registrar:v2.5.1
    

    Replace the image references:

    sed -i 's#registry.k8s.io/sig-storage#harbor.intra.com/csi-driver-nfs#g' deploy/csi-nfs-node.yaml
    sed -i 's#gcr.io/k8s-staging-sig-storage#harbor.intra.com/csi-driver-nfs#g' csi-nfs-node.yaml
    

    Another gotcha here…

    With csi-node-driver-registrar:v2.5.1 the Pod ends up in CrashLoopBackOff; the failing container is node-driver-registrar, which errors with: flag provided but not defined: -csi-address

    Downgrading the node-driver-registrar image to v2.4.0 fixes it:

    # grep image deploy/csi-nfs-node.yaml 
              image: harbor.intra.com/csi-driver-nfs/livenessprobe:v2.7.0
              image: harbor.intra.com/csi-driver-nfs/csi-node-driver-registrar:v2.4.0
              image: harbor.intra.com/csi-driver-nfs/nfsplugin:canary
              imagePullPolicy: "IfNotPresent"
    

    Apply the deployment:

    # kubectl apply -f deploy/rbac-csi-nfs.yaml
    # kubectl apply -f deploy/csi-nfs-node.yaml
    

    The csi-nfs-node pods under kube-system are now created as well.

    # kubectl get pods -n kube-system 
    NAME                                       READY   STATUS    RESTARTS       AGE
    calico-kube-controllers-76586bcfb6-tf6kg   1/1     Running   8 (54m ago)    44d
    calico-node-48nd2                          1/1     Running   5 (29m ago)    135d
    calico-node-6csj7                          1/1     Running   9 (54m ago)    135d
    calico-node-brphr                          1/1     Running   34 (56m ago)   182d
    calico-node-jd8xr                          1/1     Running   36 (32m ago)   135d
    calico-node-pl8sz                          1/1     Running   33 (56m ago)   85d
    calico-node-szqd6                          1/1     Running   12 (32m ago)   62d
    calico-node-xzfgw                          1/1     Running   36 (54m ago)   182d
    coredns-55556fc4d7-92ptb                   1/1     Running   18 (56m ago)   44d
    csi-nfs-controller-7d758f5b6b-6847h        3/3     Running   3 (29m ago)    60m
    csi-nfs-controller-7d758f5b6b-k7m9z        3/3     Running   0              60m
    csi-nfs-node-84g5n                         3/3     Running   0              7m39s
    csi-nfs-node-92gr2                         3/3     Running   0              7m39s
    csi-nfs-node-k9kwq                         3/3     Running   0              7m39s
    csi-nfs-node-klmb4                         3/3     Running   0              7m39s
    csi-nfs-node-mjm6r                         3/3     Running   0              7m39s
    csi-nfs-node-pxs8r                         3/3     Running   0              7m39s
    csi-nfs-node-r4j8f                         3/3     Running   0              7m39s
    kube-state-metrics-8444bbc459-mw4zk        1/1     Running   15 (53m ago)   44d
    metrics-server-5b5f4d4dd4-m7xc9            1/1     Running   33 (53m ago)   40d
    

    Without internet access this is all rather roundabout.

    With internet access you can simply run:

    curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/v3.1.0/deploy/install-driver.sh | bash -s v3.1.0 --
    

    6.4 Wire the nfs-csi driver to the NFS server

    Create a StorageClass:

    # cat nfs-csi-driver-sc.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: nfs-csi
    provisioner: nfs.csi.k8s.io
    parameters:
      server: nfs-server.nfs.svc.cluster.local
      share: /
      # csi.storage.k8s.io/provisioner-secret is only needed for providing mountOptions in DeleteVolume
      # csi.storage.k8s.io/provisioner-secret-name: "mount-options"
      # csi.storage.k8s.io/provisioner-secret-namespace: "default"
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    mountOptions:
      - nconnect=8
      - nfsvers=4.1
    

    Apply it:

    root@k8s-master-01:/data/nfs# kubectl apply -f nfs-csi-driver-sc.yaml 
    storageclass.storage.k8s.io/nfs-csi created
    root@k8s-master-01:/data/nfs# kubectl get sc
    NAME      PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    nfs-csi   nfs.csi.k8s.io   Delete          Immediate           false                  4s
    
    ## If you do not want the data reclaimed as soon as the PVC is deleted, set reclaimPolicy to Retain
    reclaimPolicy: Retain
    
    root@k8s-master-01:/data/nfs# kubectl get sc
    NAME      PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    nfs-csi   nfs.csi.k8s.io   Retain          Immediate           false                  2s
    

    Test dynamic PVC provisioning:

    # kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/pvc-nfs-csi-dynamic.yaml
    root@k8s-master-01:/data/nfs# cat pvc-nfs-csi-dynamic.yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-nfs-dynamic
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: nfs-csi
    root@k8s-master-01:/data/nfs# kubectl get pvc
    NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    pvc-nfs-dynamic   Bound    pvc-f0d481de-14a3-4428-92db-02736d895f4c   10Gi       RWX            nfs-csi        5s
    root@k8s-master-01:/data/nfs# kubectl get pv|grep pvc-f0d481de-14a3-4428-92db-02736d895f4c
    pvc-f0d481de-14a3-4428-92db-02736d895f4c   10Gi       RWX            Delete           Bound       default/pvc-nfs-dynamic          nfs-csi                 55s
    ## Cleanup
    root@k8s-master-01:/data/nfs# kubectl delete pvc pvc-nfs-dynamic
    persistentvolumeclaim "pvc-nfs-dynamic" deleted
    root@k8s-master-01:/data/nfs# kubectl get pv|grep pvc-f0d481de-14a3-4428-92db-02736d895f4c
    

    At this point PVCs can be requested dynamically via a volumeClaimTemplate.

    6.5 Using a PVC from a PipelineRun

    The manifest:

    # cat pipelinerun-with-pvc-demo.yaml 
    apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      name: pipeline-source-lister-run-001
    spec:
      pipelineRef:
        name: pipeline-source-lister
      params:
      - name: git-url
        value: http://192.168.31.199/tekton/app01.git
      workspaces:
      - name: codebase
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 1Gi
            storageClassName: nfs-csi
    

    Deploy and run:

    # kubectl apply -f pipelinerun-with-pvc-demo.yaml
    pipelinerun.tekton.dev/pipeline-source-lister-run-001 created
    # tkn pipelinerun ls|grep pipeline-source-lister-run-001
    pipeline-source-lister-run-001     1 minute ago   15s        Succeeded
    # tkn pipelinerun logs pipeline-source-lister-run-001
    [git-clone : git-clone] Cloning into '/workspace/source/source'...
    [git-clone : git-clone] POST git-upload-pack (175 bytes)
    [git-clone : git-clone] POST git-upload-pack (217 bytes)
    
    [git-clone : list-files] Ubuntu 18.04.6 LTS \n \l
    # kubectl get pvc
    NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    pvc-05b2130fe7   Bound    pvc-8b7a15ba-f38c-42b0-9b5d-986c4bb9fa6f   1Gi        RWO            nfs-csi        16m
    

    At this point the pod created by the PipelineRun has ended its lifecycle, but the PVC is not deleted along with it.

    7. Putting It All Together

    Sharing data between two Tasks through a PVC:

    apiVersion: tekton.dev/v1beta1
    kind: Pipeline
    metadata:
      name: volume-share
    spec:
      params:
        - name: git-url
          type: string
      workspaces:
        - name: codebase
      tasks:
        - name: fetch-from-source
          params:
            - name: url
              value: $(params.git-url)
          taskSpec:
            workspaces:
              - name: source
            params:
              - name: url
            steps:
              - name: git-clone
                image: alpine/git:v2.36.1
                script: git clone -v $(params.url) $(workspaces.source.path)/source
          workspaces:
            - name: source
              workspace: codebase
        - name: source-lister
          runAfter:
            - fetch-from-source
          taskSpec:
            steps:
              - name: list-files
                image: alpine:3.16
                script: ls $(workspaces.source.path)/source
            workspaces:
              - name: source
          workspaces:
            - name: source
              workspace: codebase
    ---
    apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      name: volume-share-run-xxxx
    spec:
      pipelineRef:
        name: volume-share
      params:
        - name: git-url
          value: http://192.168.31.199/tekton/app01.git
      workspaces:
        - name: codebase
          volumeClaimTemplate:
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 1Gi
              storageClassName: nfs-csi
    

    Run it:

    # kubectl apply -f 04-pipeline-worlspace-02.yaml 
    pipeline.tekton.dev/volume-share created
    pipelinerun.tekton.dev/volume-share-run-xxxx created
    root@k8s-master-01:/apps/tekton-and-argocd-in-practise/03-tekton-advanced# tkn pipeline ls
    NAME                     AGE            LAST RUN                           STARTED          DURATION   STATUS
    pipeline-demo            1 day ago      pipeline-demo-run-20221025         1 day ago        17s        Succeeded
    pipeline-source-lister   22 hours ago   pipeline-source-lister-run-001     1 hour ago       15s        Succeeded
    pipeline-task-ordering   1 day ago      pipeline-task-ordering-run-fzb5j   1 day ago        3m5s       Succeeded
    pipeline-with-params     1 day ago      pipeline-with-params-run-1025-2    1 day ago        7s         Succeeded
    volume-share             1 minute ago   volume-share-run-xxxx              59 seconds ago   23s        Succeeded
    root@k8s-master-01:/apps/tekton-and-argocd-in-practise/03-tekton-advanced# tkn pipelinerun ls
    NAME                               STARTED          DURATION   STATUS
    volume-share-run-xxxx              28 seconds ago   23s        Succeeded
    pipeline-source-lister-run-001     1 hour ago       15s        Succeeded
    pipeline-source-lister-run-blc9x   22 hours ago     6s         Succeeded
    pipeline-task-ordering-run-fzb5j   1 day ago        3m5s       Succeeded
    pipeline-with-params-run-1025-2    1 day ago        7s         Succeeded
    pipeline-with-params-run-1025      1 day ago        7s         Succeeded
    pipeline-with-params-run-22w2m     1 day ago        8s         Succeeded
    pipeline-with-params-run-rhp75     1 day ago        14s        Succeeded
    pipeline-with-params-run-xxxxx     1 day ago        8s         Succeeded
    pipeline-with-params-run-dwt8h     1 day ago        8s         Succeeded
    pipeline-with-params-run-jrvrh     1 day ago        1m22s      Succeeded
    pipeline-demo-run-20221025         1 day ago        17s        Succeeded
    pipeline-demo-run-p2g6t            1 day ago        20s        Succeeded
    pipeline-demo-run-7chp9            1 day ago        18s        Succeeded
    pipeline-demo-run-rlgzp            1 day ago        9s         Succeeded
    pipeline-demo-run-2tjlz            1 day ago        9s         Succeeded
    root@k8s-master-01:/apps/tekton-and-argocd-in-practise/03-tekton-advanced# kubectl get pvc
    NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    pvc-05b2130fe7   Bound    pvc-8b7a15ba-f38c-42b0-9b5d-986c4bb9fa6f   1Gi        RWO            nfs-csi        73m
    pvc-b496e3a247   Bound    pvc-ea19b117-6406-400a-8004-45293fd4a23e   1Gi        RWO            nfs-csi        33s
    root@k8s-master-01:/apps/tekton-and-argocd-in-practise/03-tekton-advanced# tkn pipeline logs volume-share
    Cloning into '/workspace/source/source'...
    POST git-upload-pack (175 bytes)
    POST git-upload-pack (217 bytes)
    
    README.md
    issue
    root@k8s-master-01:/apps/tekton-and-argocd-in-practise/03-tekton-advanced# tkn pipelinerun  logs volume-share-run-xxxx
    [fetch-from-source : git-clone] Cloning into '/workspace/source/source'...
    [fetch-from-source : git-clone] POST git-upload-pack (175 bytes)
    [fetch-from-source : git-clone] POST git-upload-pack (217 bytes)
    
    [source-lister : list-files] README.md
    [source-lister : list-files] issue
    

With an emptyDir workspace, data cannot be shared across different Tasks (each Task runs as its own Pod, and emptyDir lives and dies with a single Pod), so the second Task fails:

    root@k8s-master-01:/apps/tekton-and-argocd-in-practise/03-tekton-advanced# tkn pipeline start volume-share -p git-url=http://192.168.31.199/tekton/app01.git -w name=codebase,emptyDir="" --showlog
    PipelineRun started: volume-share-run-ptgcm
    Waiting for logs to be available...
    [fetch-from-source : git-clone] Cloning into '/workspace/source/source'...
    [fetch-from-source : git-clone] POST git-upload-pack (175 bytes)
    [fetch-from-source : git-clone] POST git-upload-pack (217 bytes)
    
    [source-lister : list-files] ls: /workspace/source/source: No such file or directory
    
    failed to get logs for task source-lister : container step-list-files has failed  : [{"key":"StartedAt","value":"2022-10-27T04:35:32.423Z","type":3}]
    
    
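The same emptyDir binding can also be expressed declaratively in a PipelineRun manifest instead of via the `-w` flag on the `tkn` command line. A sketch (the run name is assumed), which reproduces the failure above for the same reason:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: volume-share-run-emptydir   # hypothetical run name
spec:
  pipelineRef:
    name: volume-share
  params:
    - name: git-url
      value: http://192.168.31.199/tekton/app01.git
  workspaces:
    - name: codebase
      emptyDir: {}   # Pod-scoped volume: the clone from fetch-from-source
                     # is gone when source-lister starts in its own Pod
```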
  • Original article: https://blog.csdn.net/qq_29974229/article/details/127550069