• Ops (39): A Case Study of Deploying Spring Boot to K8s via KubeSphere DevOps


    DevOps

    Demo source code: https://gitee.com/zhengqingya/java-workspace
    Based on KubeSphere 3.2.1

    The pipeline automatically checks out the code, then tests, analyzes, builds, deploys, and releases it.


    I. Create a DevOps Project


    II. DevOps Credentials

    1. Gitee repository credential: gitee-auth


    2. Alibaba Cloud Docker registry credential: aliyun-docker-registry-auth


    3. K8s credential: kubeconfig-auth
    # K8s access configuration file
    cat /root/.kube/config


    Replace https://lb.kubesphere.local:6443 in the file content with https://<your-node-IP>:6443, otherwise the deployment step may fail later...
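    A hedged sketch of that rewrite, demonstrated on a throwaway sample file (the IP 192.168.1.100 is a placeholder for one of your control-plane node addresses):

    ```shell
    # Create a small sample kubeconfig fragment to demonstrate the rewrite
    cat > /tmp/kubeconfig-demo <<'EOF'
    clusters:
    - cluster:
        server: https://lb.kubesphere.local:6443
      name: cluster.local
    EOF

    # Rewrite the in-cluster hostname to a concrete API server IP (placeholder)
    sed -i 's#https://lb.kubesphere.local:6443#https://192.168.1.100:6443#' /tmp/kubeconfig-demo

    # Show the rewritten server line
    grep 'server:' /tmp/kubeconfig-demo
    ```

    Run the same `sed` against a copy of /root/.kube/config before pasting its content into the KubeSphere credential form.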



    III. Point Maven at the Aliyun Central Repository Mirror

    Platform -> Cluster Management -> default -> Configuration -> ConfigMaps -> ks-devops-agent

    Edit the settings and add the Aliyun mirror:

    
        
        
    <mirrors>
        <mirror>
            <id>nexus-aliyun</id>
            <mirrorOf>central</mirrorOf>
            <name>Nexus aliyun</name>
            <url>http://maven.aliyun.com/nexus/content/groups/public</url>
        </mirror>
    </mirrors>


    IV. K8s Project Configuration

    Alibaba Cloud Docker registry credential: aliyun-docker-registry-auth


    # The credentials can also be listed from the command line
    kubectl get secrets -n my-project
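    If you prefer the CLI to the KubeSphere UI, an equivalent registry credential can be created directly with kubectl (a sketch; the username and password placeholders are yours to fill in, and the command needs access to a live cluster):

    ```shell
    # Create a docker-registry Secret usable as an imagePullSecret
    # in the my-project namespace (equivalent to the UI step above)
    kubectl create secret docker-registry aliyun-docker-registry-auth \
      --docker-server=registry.cn-hangzhou.aliyuncs.com \
      --docker-username=<your-aliyun-username> \
      --docker-password=<your-aliyun-password> \
      -n my-project
    ```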

    V. Create the Pipeline


    After opening the pipeline you can click Edit Pipeline; several templates are provided.


    For example, the first stage pulls the code:


    Click around and experiment; it is straightforward, and you can tailor the stages to your own needs.


    VI. Miscellaneous

    Jenkinsfile
    pipeline {
        agent {
            node {
                label 'maven'
            }
        }

        environment {
            DOCKER_REGISTRY_AUTH = "aliyun-docker-registry-auth"
            DOCKER_REGISTRY = 'registry.cn-hangzhou.aliyuncs.com'
            DOCKER_REGISTRY_NAMESPACE = 'zhengqingya'
            PROJECT_GIT_URL = 'https://gitee.com/zhengqingya/test.git'
            APP_NAME = 'test'
            BRANCH_NAME = 'master'
            // defined after APP_NAME/BRANCH_NAME so the references below resolve
            APP_DOCKER_IMAGE = "${DOCKER_REGISTRY}/${DOCKER_REGISTRY_NAMESPACE}/${APP_NAME}:${BRANCH_NAME}"
            IS_SKIP_BUILD = 'false'
            JAVA_OPTS = "-XX:+UseG1GC -Xms100m -Xmx100m -Dserver.port=8080"
        }

    //    parameters {
    //        string(name: 'BRANCH_NAME', defaultValue: 'master', description: 'Git branch name')
    //        choice(name: 'IS_SKIP_BUILD', choices: ['false', 'true'], description: 'Skip the build and deploy directly')
    //        choice(name: 'SERVICE_NAMES', choices: ['test', 'system', 'all'], description: 'Select the service(s) to build; a single service or all services can be released')
    //    }

        stages {

            stage('Validate parameters') {
                agent none
                steps {
                    container('maven') {
                        sh """
                            echo "Branch: ${BRANCH_NAME}"
                            echo "Skip build and deploy directly (for images built in a previous run): ${IS_SKIP_BUILD}"
                            echo "App image: ${APP_DOCKER_IMAGE}"
                            echo "Build number: ${BUILD_NUMBER}"
                            echo "JAVA_OPTS: ${JAVA_OPTS}"
                        """
                    }
                }
            }

            stage('Pull code') {
                agent none
                steps {
                    container('maven') {
                        git(credentialsId: 'gitee-auth', url: "${PROJECT_GIT_URL}", branch: "${BRANCH_NAME}", changelog: true, poll: false)
                        sh 'ls -al'
                    }
                }
            }

            stage('Build project') {
                agent none
                steps {
                    container('maven') {
                        sh 'mvn clean package -Dmaven.test.skip=true'
                        sh 'ls -al'
                    }
                }
            }

            stage('Build & push Docker image') {
                agent none
                steps {
                    container('maven') {
                        sh 'cp target/*.jar docker'
                        sh """
                            cd docker
                            ls
                            echo "App image: ${APP_DOCKER_IMAGE}"
                            docker build -f Dockerfile -t ${APP_DOCKER_IMAGE} . --no-cache
                        """
                        withCredentials([usernamePassword(credentialsId: "${DOCKER_REGISTRY_AUTH}", passwordVariable: 'DOCKER_PASSWORD', usernameVariable: 'DOCKER_USERNAME')]) {
                            sh 'echo "$DOCKER_PASSWORD" | docker login $DOCKER_REGISTRY -u "$DOCKER_USERNAME" --password-stdin'
                            sh 'docker push ${APP_DOCKER_IMAGE}'
                            sh "echo Image pushed: ${APP_DOCKER_IMAGE}"
                            sh 'ls -al'
                        }
                    }
                }
            }

            stage('Deploy to k8s') {
                agent none
                steps {
                    container('maven') {
                        sh 'ls -al'
                        withCredentials([kubeconfigFile(credentialsId: 'kubeconfig-auth', variable: 'KUBECONFIG')]) {
                            // envsubst substitutes the environment variables into the YAML file
                            sh 'envsubst < k8s/k8s-deploy.yml | kubectl apply -f -'
                        }
                    }
                }
            }

        }
    }
    
    k8s-deploy.yml
    ---
    # Workload definition
    apiVersion: apps/v1
    kind: Deployment  # stateless deployment
    metadata:
      name: ${APP_NAME}
      namespace: my-project   # TODO namespace
      labels:
        app: ${APP_NAME}
    spec:
      replicas: 3 # TODO 3 replicas
      strategy:
        rollingUpdate: # with replicas=3, the pod count stays between 2 and 4 during an upgrade
          maxSurge: 1        # at most 1 extra pod is started during the rolling upgrade
          maxUnavailable: 1  # at most 1 pod may be unavailable during the rolling upgrade
      selector:
        matchLabels:
          app: ${APP_NAME}
      template:
        metadata:
          labels:
            app: ${APP_NAME}
        spec:
          imagePullSecrets:
            - name: aliyun-docker-registry-auth  # TODO registry credential configured in the project beforehand
          containers:
            - name: ${APP_NAME}
              image: ${APP_DOCKER_IMAGE} # TODO image address
              imagePullPolicy: Always
              env: # environment variables
                - name: JAVA_OPTS
                  value: ${JAVA_OPTS}
              ports:
                - name: http
                  containerPort: 8080
                  protocol: TCP
              # CPU/memory limits
              resources:
                limits:
                  cpu: 300m
                  memory: 600Mi
              # readiness probe
    #          readinessProbe:
    #            httpGet:
    #              path: /actuator/health
    #              port: 8080
    #            timeoutSeconds: 10
    #            failureThreshold: 30
    #            periodSeconds: 5

    ---
    # Service definition
    apiVersion: v1
    kind: Service
    metadata:
      name: ${APP_NAME}  # TODO service name
      namespace: my-project   # TODO namespace
    spec:
      selector:
        app: ${APP_NAME} # TODO label selector; pods carrying this label are managed by the service
      type: ClusterIP # access mode: ClusterIP/NodePort
      ports:
        - name: http            # port name
          port: 8080
          protocol: TCP    # port protocol, TCP or UDP; defaults to TCP
          targetPort: 8080
          # nodePort: 30666  # TODO externally exposed port when `type = NodePort` (must be in 30000-32767)
      sessionAffinity: None  # session affinity
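    Once the pipeline has run, a few sanity checks can confirm the rollout (a sketch assuming the defaults above, APP_NAME=test and namespace my-project; these commands need access to the cluster):

    ```shell
    # Wait for the Deployment to finish rolling out
    kubectl -n my-project rollout status deployment/test

    # Inspect the pods selected by app=test and the ClusterIP service
    kubectl -n my-project get pods -l app=test
    kubectl -n my-project get svc test

    # ClusterIP services are only reachable inside the cluster,
    # so probe the service from a temporary pod
    kubectl -n my-project run curl-check --rm -it --image=curlimages/curl --restart=Never -- \
      curl -s http://test:8080
    ```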
    Online k8s YAML editor:
    • https://k8syaml.com

    Error encountered: ERROR: java.lang.RuntimeException: io.kubernetes.client.openapi.ApiException: java.net.UnknownHostException: lb.kubesphere.local: Name or service not known


    Deploy to Kubernetes (14.66 s) failed
    Starting Kubernetes deployment
    Loading configuration: /home/jenkins/agent/workspace/devops-testp5hsh/test/k8s/k8s-deploy.yml
    ERROR: ERROR: java.lang.RuntimeException: io.kubernetes.client.openapi.ApiException: java.net.UnknownHostException: lb.kubesphere.local: Name or service not known
    hudson.remoting.ProxyException: java.lang.RuntimeException: io.kubernetes.client.openapi.ApiException: java.net.UnknownHostException: lb.kubesphere.local: Name or service not known
    	at com.microsoft.jenkins.kubernetes.wrapper.ResourceManager.handleApiExceptionExceptNotFound(ResourceManager.java:180)
    	at com.microsoft.jenkins.kubernetes.wrapper.V1ResourceManager$DeploymentUpdater.getCurrentResource(V1ResourceManager.java:213)
    	at com.microsoft.jenkins.kubernetes.wrapper.V1ResourceManager$DeploymentUpdater.getCurrentResource(V1ResourceManager.java:201)
    	at com.microsoft.jenkins.kubernetes.wrapper.ResourceManager$ResourceUpdater.createOrApply(ResourceManager.java:93)
    	at com.microsoft.jenkins.kubernetes.wrapper.KubernetesClientWrapper.handleResource(KubernetesClientWrapper.java:289)
    	at com.microsoft.jenkins.kubernetes.wrapper.KubernetesClientWrapper.apply(KubernetesClientWrapper.java:256)
    	at com.microsoft.jenkins.kubernetes.command.DeploymentCommand$DeploymentTask.doCall(DeploymentCommand.java:172)
    	at com.microsoft.jenkins.kubernetes.command.DeploymentCommand$DeploymentTask.call(DeploymentCommand.java:124)
    	at com.microsoft.jenkins.kubernetes.command.DeploymentCommand$DeploymentTask.call(DeploymentCommand.java:106)
    	at hudson.remoting.UserRequest.perform(UserRequest.java:212)
    	at hudson.remoting.UserRequest.perform(UserRequest.java:54)
    	at hudson.remoting.Request$2.run(Request.java:369)
    	at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:93)
    	at java.lang.Thread.run(Thread.java:748)
    	Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from 10.233.70.143/10.233.70.143:51962
    		at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1800)
    		at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
    		at hudson.remoting.Channel.call(Channel.java:1001)
    		at hudson.FilePath.act(FilePath.java:1160)
    		at com.microsoft.jenkins.kubernetes.command.DeploymentCommand.execute(DeploymentCommand.java:68)
    		at com.microsoft.jenkins.kubernetes.command.DeploymentCommand.execute(DeploymentCommand.java:45)
    		at com.microsoft.jenkins.azurecommons.command.CommandService.runCommand(CommandService.java:88)
    		at com.microsoft.jenkins.azurecommons.command.CommandService.execute(CommandService.java:96)
    		at com.microsoft.jenkins.azurecommons.command.CommandService.executeCommands(CommandService.java:75)
    		at com.microsoft.jenkins.azurecommons.command.BaseCommandContext.executeCommands(BaseCommandContext.java:77)
    		at com.microsoft.jenkins.kubernetes.KubernetesDeploy.perform(KubernetesDeploy.java:42)
    		at com.microsoft.jenkins.azurecommons.command.SimpleBuildStepExecution.run(SimpleBuildStepExecution.java:54)
    		at com.microsoft.jenkins.azurecommons.command.SimpleBuildStepExecution.run(SimpleBuildStepExecution.java:35)
    		at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
    		at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    		at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    		... 1 more
    Caused by: hudson.remoting.ProxyException: io.kubernetes.client.openapi.ApiException: java.net.UnknownHostException: lb.kubesphere.local: Name or service not known
    	at io.kubernetes.client.openapi.ApiClient.execute(ApiClient.java:898)
    	at io.kubernetes.client.openapi.apis.AppsV1Api.readNamespacedDeploymentWithHttpInfo(AppsV1Api.java:7299)
    	at io.kubernetes.client.openapi.apis.AppsV1Api.readNamespacedDeployment(AppsV1Api.java:7275)
    	at com.microsoft.jenkins.kubernetes.wrapper.V1ResourceManager$DeploymentUpdater.getCurrentResource(V1ResourceManager.java:210)
    	... 16 more
    Caused by: hudson.remoting.ProxyException: java.net.UnknownHostException: lb.kubesphere.local: Name or service not known
    	at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
    	at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
    	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
    	at java.net.InetAddress.getAllByName0(InetAddress.java:1277)
    	at java.net.InetAddress.getAllByName(InetAddress.java:1193)
    	at java.net.InetAddress.getAllByName(InetAddress.java:1127)
    	at okhttp3.Dns.lambda$static$0(Dns.java:39)
    	at okhttp3.internal.connection.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:171)
    	at okhttp3.internal.connection.RouteSelector.nextProxy(RouteSelector.java:135)
    	at okhttp3.internal.connection.RouteSelector.next(RouteSelector.java:84)
    	at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:187)
    	at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java:108)
    	at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.java:88)
    	at okhttp3.internal.connection.Transmitter.newExchange(Transmitter.java:169)
    	at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:41)
    	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
    	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
    	at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
    	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
    	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
    	at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
    	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
    	at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
    	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
    	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
    	at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:221)
    	at okhttp3.RealCall.execute(RealCall.java:81)
    	at io.kubernetes.client.openapi.ApiClient.execute(ApiClient.java:894)
    	... 19 more
    Api call failed with code 0, detailed message: null
    Kubernetes deployment ended with HasError
    
    Solution:

    Pipeline adjustments for KubeSphere 3.2.1

    Step 1: replace the kubernetesDeploy deployment step

    https://github.com/kubesphere/website/pull/2098

    stage('Deploy to k8s') {
        agent none
        steps {
            container('maven') {
                // Deprecated:
                //     kubernetesDeploy(enableConfigSubstitution: true, deleteResource: false, kubeconfigId: 'kubeconfig-auth', configs: 'k8s/**')
                // Use the approach below instead
                withCredentials([kubeconfigFile(credentialsId: 'kubeconfig-auth', variable: 'KUBECONFIG')]) {
                    // envsubst substitutes the environment variables into the YAML file
                    sh 'envsubst < k8s/k8s-deploy.yml | kubectl apply -f -'
                }
            }
        }
    }
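    To see what the `envsubst ... | kubectl apply` line does, here is the substitution half in isolation (the template and values are made-up stand-ins; kubectl is omitted):

    ```shell
    # envsubst (from GNU gettext) replaces ${VAR} references in the
    # input with the values of exported environment variables
    export APP_NAME=test
    export APP_DOCKER_IMAGE=registry.cn-hangzhou.aliyuncs.com/zhengqingya/test:master

    cat > /tmp/deploy-demo.yml <<'EOF'
    metadata:
      name: ${APP_NAME}
    spec:
      image: ${APP_DOCKER_IMAGE}
    EOF

    # Prints the template with both variables filled in
    envsubst < /tmp/deploy-demo.yml
    ```

    In the pipeline this substituted output is piped straight into `kubectl apply -f -` instead of being printed.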
    
    Step 2: update the kubeconfig DevOps credential

    Replace https://lb.kubesphere.local:6443 in the credential content with https://<your-node-IP>:6443


    The deployment finally succeeds.



    Quote of the day:
    No matter how high the sky is, standing on tiptoe brings you that much closer to the sunlight.

  • Original article: https://blog.csdn.net/qq_38225558/article/details/127800944