Deploying DolphinScheduler on Kubernetes with MySQL (without Helm)


    A few words up front

    • The official docs offer several deployment methods, but unfortunately the Kubernetes one is Helm-based, which does not fit how our company's product is delivered, so it has to be converted into plain YAML (crossing the river by feeling for the stones on my own).

    About DolphinScheduler

    • Apache DolphinScheduler is a distributed, easily extensible, open-source visual DAG workflow task scheduling system.
    • It tackles the tangled dependencies of data/ETL development and the lack of an intuitive view of task health.
    • DolphinScheduler assembles Tasks into DAGs, monitors task status in real time, and supports retrying, resuming from a specified node, pausing, and killing tasks.

    Easy to use

    • DAG monitoring UI: all process definitions are visualized, DAGs are built by dragging and dropping tasks, third-party systems integrate via the API, and deployment is one-click.

    High reliability

    • Decentralized multi-Master / multi-Worker architecture with built-in HA; task queues prevent overload so machines never lock up.

    Rich usage scenarios

    • Supports pause/resume operations and multi-tenancy, which suits big-data scenarios well.
    • Supports many task types, such as Spark, Hive, MR, Python, sub_process and Shell.

    High scalability

    • Supports custom task types; scheduling is distributed and scales linearly with the cluster; Masters and Workers can be brought online or taken offline dynamically.

    Default ports

    Component               Default port
    MasterServer            5678
    WorkerServer            1234
    ApiApplicationServer    12345

    Modules

    • dolphinscheduler-alert - alert module, provides the AlertServer service
    • dolphinscheduler-api - web application module, provides the ApiServer service
    • dolphinscheduler-common - shared constants, enums, utility classes, data structures and base classes
    • dolphinscheduler-dao - database access layer
    • dolphinscheduler-remote - Netty-based client and server
    • dolphinscheduler-server - MasterServer and WorkerServer services
    • dolphinscheduler-service - service module,
      • contains the Quartz, ZooKeeper and log client access services, for the server and api modules to call
    • dolphinscheduler-ui - front-end module

    Building the image

    • DolphinScheduler stores its metadata in a relational database; PostgreSQL and MySQL are currently supported. With MySQL you have to download the mysql-connector-java (8.0.16) driver manually and put it in DolphinScheduler's lib directory.
    • Download the MySQL driver jar mysql-connector-java-8.0.16.jar (a version >= 8.0.1 is required); one way to fetch it is sketched below.
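
    As a reference, the jar can be pulled straight from Maven Central (the path below follows the standard Maven Central layout for mysql:mysql-connector-java:8.0.16):

    # download the MySQL JDBC driver next to the Dockerfile
    wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar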

    Prepare an Aliyun apt source list for Debian, save it as sources.list, and keep it next to the downloaded MySQL driver jar

    deb http://mirrors.cloud.aliyuncs.com/debian stable main contrib non-free
    deb http://mirrors.cloud.aliyuncs.com/debian stable-proposed-updates main contrib non-free
    deb http://mirrors.cloud.aliyuncs.com/debian stable-updates main contrib non-free
    deb-src http://mirrors.cloud.aliyuncs.com/debian stable main contrib non-free
    deb-src http://mirrors.cloud.aliyuncs.com/debian stable-proposed-updates main contrib non-free
    deb-src http://mirrors.cloud.aliyuncs.com/debian stable-updates main contrib non-free
    
    deb http://mirrors.aliyun.com/debian stable main contrib non-free
    deb http://mirrors.aliyun.com/debian stable-proposed-updates main contrib non-free
    deb http://mirrors.aliyun.com/debian stable-updates main contrib non-free
    deb-src http://mirrors.aliyun.com/debian stable main contrib non-free
    deb-src http://mirrors.aliyun.com/debian stable-proposed-updates main contrib non-free
    deb-src http://mirrors.aliyun.com/debian stable-updates main contrib non-free
    

    Add a few tools that the power users need

    FROM apache/dolphinscheduler:2.0.6
    
    ENV PIP_CMD='pip3 install --no-cache-dir -i https://pypi.tuna.tsinghua.edu.cn/simple'
    
    COPY mysql-connector-java-8.0.16.jar /opt/apache-dolphinscheduler-2.0.6-bin/lib/mysql-connector-java-8.0.16.jar
    COPY ./sources.list /tmp/
    
    RUN cat /tmp/sources.list > /etc/apt/sources.list && \
        apt-get update && \
        apt-get install -y libsasl2-dev python3-pip && \
        apt-get autoclean
    
    RUN ${PIP_CMD} \
        pyhive \
        thrift \
        thrift-sasl \
        pymysql \
        pandas \
        faker \
        sasl \
        setuptools_rust \
        wheel \
        rust \
        oss2
    

    Build the image

    docker build -t dolphinscheduler_mysql:2.0.6 .
    
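    If the cluster nodes cannot load the image locally, it also has to be tagged and pushed to a registry they can pull from; the registry address below is only a placeholder. A quick smoke test of the extra Python libraries baked into the image does not hurt either:

    # smoke test: make sure the extra Python libraries import cleanly
    docker run --rm --entrypoint python3 dolphinscheduler_mysql:2.0.6 -c "import pyhive, pymysql, pandas, oss2"

    # tag and push to a private registry (replace harbor.example.com/bigdata with your own)
    docker tag dolphinscheduler_mysql:2.0.6 harbor.example.com/bigdata/dolphinscheduler_mysql:2.0.6
    docker push harbor.example.com/bigdata/dolphinscheduler_mysql:2.0.6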

    Prepare the YAML files

    The YAML below was exported from a Helm-started DolphinScheduler. Many parameters were left untouched, so adjust them to your own environment before use; it is provided for reference only.

    All of the YAML files below use the bigdata namespace and assume MySQL and ZooKeeper are already running there.

    The default MySQL username/password is dolphinscheduler/dolphinscheduler.
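
    Once the files below have been adjusted, applying them is a plain kubectl apply; a minimal sketch (assuming the bigdata namespace, MySQL and ZooKeeper already exist, and that the database has been initialized as described in the MySQL initialization section at the end):

    kubectl apply -f dolphinscheduler-master.yaml
    kubectl apply -f dolphinscheduler-alert.yaml
    kubectl apply -f dolphinscheduler-worker.yaml
    kubectl apply -f dolphinscheduler-api.yaml
    kubectl apply -f dolphinscheduler-ingress.yaml

    # watch everything come up
    kubectl -n bigdata get pods -w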

    dolphinscheduler-master.yaml

    ---
    apiVersion: v1
    data:
      LOGGER_SERVER_OPTS: -Xms512m -Xmx512m -Xmn256m
      MASTER_DISPATCH_TASK_NUM: "3"
      MASTER_EXEC_TASK_NUM: "20"
      MASTER_EXEC_THREADS: "100"
      MASTER_FAILOVER_INTERVAL: "10"
      MASTER_HEARTBEAT_INTERVAL: "10"
      MASTER_HOST_SELECTOR: LowerWeight
      MASTER_KILL_YARN_JOB_WHEN_HANDLE_FAILOVER: "true"
      MASTER_MAX_CPULOAD_AVG: "-1"
      MASTER_PERSIST_EVENT_STATE_THREADS: "10"
      MASTER_RESERVED_MEMORY: "0.3"
      MASTER_SERVER_OPTS: -Xms1g -Xmx1g -Xmn512m
      MASTER_TASK_COMMIT_INTERVAL: "1000"
      MASTER_TASK_COMMIT_RETRYTIMES: "5"
      ORG_QUARTZ_SCHEDULER_BATCHTRIGGERACQUISTITIONMAXCOUNT: "1"
      ORG_QUARTZ_THREADPOOL_THREADCOUNT: "25"
    kind: ConfigMap
    metadata:
      labels:
        app.kubernetes.io/name: dolphinscheduler-master
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-master
      namespace: bigdata
    ---
    apiVersion: v1
    data:
      DATA_BASEDIR_PATH: /tmp/dolphinscheduler
      DATASOURCE_ENCRYPTION_ENABLE: "false"
      DATASOURCE_ENCRYPTION_SALT: '!@#$%^&*'
      DATAX_HOME: /opt/soft/datax
      DOLPHINSCHEDULER_OPTS: ""
      HADOOP_CONF_DIR: /opt/soft/hadoop/etc/hadoop
      HADOOP_HOME: /opt/soft/hadoop
      HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE: "false"
      HDFS_ROOT_USER: hdfs
      HIVE_HOME: /opt/soft/hive
      JAVA_HOME: /usr/local/openjdk-8
      LOGIN_USER_KEYTAB_USERNAME: hdfs@HADOOP.COM
      ORG_QUARTZ_SCHEDULER_BATCHTRIGGERACQUISTITIONMAXCOUNT: "1"
      ORG_QUARTZ_THREADPOOL_THREADCOUNT: "25"
      PYTHON_HOME: /usr/bin/python
      RESOURCE_MANAGER_HTTPADDRESS_PORT: "8088"
      RESOURCE_STORAGE_TYPE: HDFS
      RESOURCE_UPLOAD_PATH: /dolphinscheduler
      SESSION_TIMEOUT_MS: "60000"
      SPARK_HOME1: /opt/soft/spark1
      SPARK_HOME2: /opt/soft/spark2
      SUDO_ENABLE: "true"
      YARN_APPLICATION_STATUS_ADDRESS: http://ds1:%s/ws/v1/cluster/apps/%s
      YARN_JOB_HISTORY_STATUS_ADDRESS: http://ds1:19888/ws/v1/history/mapreduce/jobs/%s
      YARN_RESOURCEMANAGER_HA_RM_IDS: ""
    kind: ConfigMap
    metadata:
      labels:
        app.kubernetes.io/name: dolphinscheduler-common
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-common
      namespace: bigdata
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/instance: dolphinscheduler
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: dolphinscheduler-master-headless
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-master-svc
      namespace: bigdata
    spec:
      ports:
      - name: master-port
        port: 5678
        protocol: TCP
      selector:
        app.kubernetes.io/name: dolphinscheduler-master
        app.kubernetes.io/version: 2.0.6
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      labels:
        app.kubernetes.io/name: dolphinscheduler-master
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-master
      namespace: bigdata
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: dolphinscheduler-master
          app.kubernetes.io/version: 2.0.6
      serviceName: dolphinscheduler-master-svc
      template:
        metadata:
          creationTimestamp: null
          labels:
            app.kubernetes.io/name: dolphinscheduler-master
            app.kubernetes.io/version: 2.0.6
        spec:
          containers:
          - args:
            - master-server
            env:
            - name: TZ
              value: Asia/Shanghai
            - name: DATABASE_TYPE
              value: mysql
            # The official docs require the 8.0 JDBC driver,
            ## so the driver class has to be com.mysql.cj.jdbc.Driver.
            ## With a 5.x driver it would be com.mysql.jdbc.Driver instead.
            - name: DATABASE_DRIVER
              value: com.mysql.cj.jdbc.Driver
            # Change the value to match your own environment.
            ## My MySQL runs inside k8s, so the service (svc) address is used directly.
            - name: DATABASE_HOST
              value: mysql-svc.bigdata.svc.cluster.local
            - name: DATABASE_PORT
              value: "3306"
            # If the MySQL user you created is not dolphinscheduler,
            ## change this value accordingly.
            - name: DATABASE_USERNAME
              value: dolphinscheduler
            # Same as above: change the value if the password differs.
            - name: DATABASE_PASSWORD
              value: dolphinscheduler
            # Same as above: change the value if the database name differs.
            - name: DATABASE_DATABASE
              value: dolphinscheduler
            # With a 6.x or newer JDBC driver, useSSL=false&serverTimezone=Asia/Shanghai must be appended.
            # With a 5.x or older JDBC driver it is not needed.
            - name: DATABASE_PARAMS
              value: useSSL=false&serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=UTF-8
            - name: REGISTRY_PLUGIN_NAME
              value: zookeeper
            # Same as the MySQL address: ZooKeeper runs inside k8s, so its svc address is used here.
            - name: REGISTRY_SERVERS
              value: zk-svc.bigdata.svc.cluster.local:2181
            envFrom:
            - configMapRef:
                name: dolphinscheduler-common
            - configMapRef:
                name: dolphinscheduler-master
            image: dolphinscheduler_mysql:2.0.6
            imagePullPolicy: IfNotPresent
            livenessProbe:
              exec:
                command:
                - bash
                - /root/checkpoint.sh
                - MasterServer
              failureThreshold: 3
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
            name: dolphinscheduler-master
            ports:
            - containerPort: 5678
              name: master-port
              protocol: TCP
            readinessProbe:
              exec:
                command:
                - bash
                - /root/checkpoint.sh
                - MasterServer
              failureThreshold: 3
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
            volumeMounts:
            - mountPath: /opt/dolphinscheduler/logs
              name: dolphinscheduler-master
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          volumes:
          - emptyDir: {}
            name: dolphinscheduler-master
    

    dolphinscheduler-alert.yaml

    ---
    apiVersion: v1
    data:
      ALERT_SERVER_OPTS: -Xms512m -Xmx512m -Xmn256m
    kind: ConfigMap
    metadata:
      labels:
        app.kubernetes.io/name: dolphinscheduler-alert
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-alert
      namespace: bigdata
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/name: dolphinscheduler-alert
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-alert
      namespace: bigdata
    spec:
      ports:
      - name: alert-port
        port: 50052
        protocol: TCP
      selector:
        app.kubernetes.io/component: alert
        app.kubernetes.io/name: dolphinscheduler-alert
        app.kubernetes.io/version: 2.0.6
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        deployment.kubernetes.io/revision: "1"
      labels:
        app.kubernetes.io/name: dolphinscheduler-alert
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-alert
      namespace: bigdata
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app.kubernetes.io/name: dolphinscheduler-alert
          app.kubernetes.io/version: 2.0.6
      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
        type: RollingUpdate
      template:
        metadata:
          creationTimestamp: null
          labels:
            app.kubernetes.io/name: dolphinscheduler-alert
            app.kubernetes.io/version: 2.0.6
        spec:
          containers:
          - args:
            - alert-server
            env:
            - name: TZ
              value: Asia/Shanghai
            - name: DATABASE_TYPE
              value: mysql
            # The official docs require the 8.0 JDBC driver,
            ## so the driver class has to be com.mysql.cj.jdbc.Driver.
            ## With a 5.x driver it would be com.mysql.jdbc.Driver instead.
            - name: DATABASE_DRIVER
              value: com.mysql.cj.jdbc.Driver
            # Change the value to match your own environment.
            ## My MySQL runs inside k8s, so the service (svc) address is used directly.
            - name: DATABASE_HOST
              value: mysql-svc.bigdata.svc.cluster.local
            - name: DATABASE_PORT
              value: "3306"
            # If the MySQL user you created is not dolphinscheduler,
            ## change this value accordingly.
            - name: DATABASE_USERNAME
              value: dolphinscheduler
            # Same as above: change the value if the password differs.
            - name: DATABASE_PASSWORD
              value: dolphinscheduler
            # Same as above: change the value if the database name differs.
            - name: DATABASE_DATABASE
              value: dolphinscheduler
            # With a 6.x or newer JDBC driver, useSSL=false&serverTimezone=Asia/Shanghai must be appended.
            # With a 5.x or older JDBC driver it is not needed.
            - name: DATABASE_PARAMS
              value: useSSL=false&serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=UTF-8
            envFrom:
            - configMapRef:
                name: dolphinscheduler-common
            - configMapRef:
                name: dolphinscheduler-alert
            image: dolphinscheduler_mysql:2.0.6
            imagePullPolicy: IfNotPresent
            livenessProbe:
              exec:
                command:
                - bash
                - /root/checkpoint.sh
                - AlertServer
              failureThreshold: 3
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
            name: dolphinscheduler-alert
            ports:
            - containerPort: 50052
              name: alert-port
              protocol: TCP
            readinessProbe:
              exec:
                command:
                - bash
                - /root/checkpoint.sh
                - AlertServer
              failureThreshold: 3
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /opt/dolphinscheduler/logs
              name: dolphinscheduler-alert
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
          volumes:
          - emptyDir: {}
            name: dolphinscheduler-alert
    

    dolphinscheduler-worker.yaml

    ---
    apiVersion: v1
    data:
      LOGGER_SERVER_OPTS: -Xms512m -Xmx512m -Xmn256m
      WORKER_EXEC_THREADS: "100"
      WORKER_GROUPS: default
      WORKER_HEARTBEAT_INTERVAL: "10"
      WORKER_HOST_WEIGHT: "100"
      WORKER_MAX_CPULOAD_AVG: "-1"
      WORKER_RESERVED_MEMORY: "0.3"
      WORKER_RETRY_REPORT_TASK_STATUS_INTERVAL: "600"
      WORKER_SERVER_OPTS: -Xms1g -Xmx1g -Xmn512m
    kind: ConfigMap
    metadata:
      labels:
        app.kubernetes.io/name: dolphinscheduler-worker
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-worker
      namespace: bigdata
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/name: dolphinscheduler-worker-headless
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-worker-headless
      namespace: bigdata
    spec:
      ports:
      - name: worker-port
        port: 1234
        protocol: TCP
      - name: logger-port
        port: 50051
        protocol: TCP
      selector:
        app.kubernetes.io/component: worker
        app.kubernetes.io/instance: dolphinscheduler
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: dolphinscheduler-worker
        app.kubernetes.io/version: 2.0.6
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      labels:
        app.kubernetes.io/component: worker
        app.kubernetes.io/instance: dolphinscheduler
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: dolphinscheduler-worker
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-worker
      namespace: bigdata
    spec:
      replicas: 3
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app.kubernetes.io/component: worker
          app.kubernetes.io/instance: dolphinscheduler
          app.kubernetes.io/managed-by: Helm
          app.kubernetes.io/name: dolphinscheduler-worker
          app.kubernetes.io/version: 2.0.6
      serviceName: dolphinscheduler-worker-headless
      template:
        metadata:
          creationTimestamp: null
          labels:
            app.kubernetes.io/component: worker
            app.kubernetes.io/instance: dolphinscheduler
            app.kubernetes.io/managed-by: Helm
            app.kubernetes.io/name: dolphinscheduler-worker
            app.kubernetes.io/version: 2.0.6
        spec:
          containers:
          - args:
            - worker-server
            env:
            - name: TZ
              value: Asia/Shanghai
            - name: ALERT_LISTEN_HOST
              value: dolphinscheduler-alert
            - name: DATABASE_TYPE
              value: mysql
            # The official docs require the 8.0 JDBC driver,
            ## so the driver class has to be com.mysql.cj.jdbc.Driver.
            ## With a 5.x driver it would be com.mysql.jdbc.Driver instead.
            - name: DATABASE_DRIVER
              value: com.mysql.cj.jdbc.Driver
            # Change the value to match your own environment.
            ## My MySQL runs inside k8s, so the service (svc) address is used directly.
            - name: DATABASE_HOST
              value: mysql-svc.bigdata.svc.cluster.local
            - name: DATABASE_PORT
              value: "3306"
            # If the MySQL user you created is not dolphinscheduler,
            ## change this value accordingly.
            - name: DATABASE_USERNAME
              value: dolphinscheduler
            # Same as above: change the value if the password differs.
            - name: DATABASE_PASSWORD
              value: dolphinscheduler
            # Same as above: change the value if the database name differs.
            - name: DATABASE_DATABASE
              value: dolphinscheduler
            # With a 6.x or newer JDBC driver, useSSL=false&serverTimezone=Asia/Shanghai must be appended.
            # With a 5.x or older JDBC driver it is not needed.
            - name: DATABASE_PARAMS
              value: useSSL=false&serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=UTF-8
            - name: REGISTRY_PLUGIN_NAME
              value: zookeeper
            # Same as the MySQL address: ZooKeeper runs inside k8s, so its svc address is used here.
            - name: REGISTRY_SERVERS
              value: zk-svc.bigdata.svc.cluster.local:2181
            envFrom:
            - configMapRef:
                name: dolphinscheduler-common
            - configMapRef:
                name: dolphinscheduler-worker
            - configMapRef:
                name: dolphinscheduler-alert
            image: dolphinscheduler_mysql:2.0.6
            imagePullPolicy: IfNotPresent
            livenessProbe:
              exec:
                command:
                - bash
                - /root/checkpoint.sh
                - WorkerServer
              failureThreshold: 3
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
            name: dolphinscheduler-worker
            ports:
            - containerPort: 1234
              name: worker-port
              protocol: TCP
            - containerPort: 50051
              name: logger-port
              protocol: TCP
            readinessProbe:
              exec:
                command:
                - bash
                - /root/checkpoint.sh
                - WorkerServer
              failureThreshold: 3
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /tmp/dolphinscheduler
              name: dolphinscheduler-worker-data
            - mountPath: /opt/dolphinscheduler/logs
              name: dolphinscheduler-worker-logs
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
          volumes:
          # Persist the data directory as a hostPath volume.
          - hostPath:
              path: /data/k8s_data/dolphinscheduler
              type: DirectoryOrCreate
            name: dolphinscheduler-worker-data
          - emptyDir: {}
            name: dolphinscheduler-worker-logs
    
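    The worker StatefulSet starts with replicas: 3; because workers register themselves in ZooKeeper, they can be scaled afterwards without touching the other components, for example:

    # scale the workers to 5 replicas; DirectoryOrCreate creates the hostPath data dir on each new node
    kubectl -n bigdata scale statefulset dolphinscheduler-worker --replicas=5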

    dolphinscheduler-api.yaml

    ---
    apiVersion: v1
    data:
      API_SERVER_OPTS: -Xms512m -Xmx512m -Xmn256m
    kind: ConfigMap
    metadata:
      labels:
        app.kubernetes.io/name: dolphinscheduler-api
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-api
      namespace: bigdata
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/name: dolphinscheduler-api
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-api
      namespace: bigdata
    spec:
      ports:
      - name: api-port
        port: 12345
        protocol: TCP
      selector:
        app.kubernetes.io/name: dolphinscheduler-api
        app.kubernetes.io/version: 2.0.6
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        deployment.kubernetes.io/revision: "1"
      labels:
        app.kubernetes.io/name: dolphinscheduler-api
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler-api
      namespace: bigdata
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app.kubernetes.io/component: api
          app.kubernetes.io/instance: dolphinscheduler
          app.kubernetes.io/managed-by: Helm
          app.kubernetes.io/name: dolphinscheduler-api
          app.kubernetes.io/version: 2.0.6
      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
        type: RollingUpdate
      template:
        metadata:
          creationTimestamp: null
          labels:
            app.kubernetes.io/component: api
            app.kubernetes.io/instance: dolphinscheduler
            app.kubernetes.io/managed-by: Helm
            app.kubernetes.io/name: dolphinscheduler-api
            app.kubernetes.io/version: 2.0.6
        spec:
          containers:
          - args:
            - api-server
            env:
            - name: TZ
              value: Asia/Shanghai
            - name: DATABASE_TYPE
              value: mysql
            # The official docs require the 8.0 JDBC driver,
            ## so the driver class has to be com.mysql.cj.jdbc.Driver.
            ## With a 5.x driver it would be com.mysql.jdbc.Driver instead.
            - name: DATABASE_DRIVER
              value: com.mysql.cj.jdbc.Driver
            # Change the value to match your own environment.
            ## My MySQL runs inside k8s, so the service (svc) address is used directly.
            - name: DATABASE_HOST
              value: mysql-svc.bigdata.svc.cluster.local
            - name: DATABASE_PORT
              value: "3306"
            # If the MySQL user you created is not dolphinscheduler,
            ## change this value accordingly.
            - name: DATABASE_USERNAME
              value: dolphinscheduler
            # Same as above: change the value if the password differs.
            - name: DATABASE_PASSWORD
              value: dolphinscheduler
            # Same as above: change the value if the database name differs.
            - name: DATABASE_DATABASE
              value: dolphinscheduler
            # With a 6.x or newer JDBC driver, useSSL=false&serverTimezone=Asia/Shanghai must be appended.
            # With a 5.x or older JDBC driver it is not needed.
            - name: DATABASE_PARAMS
              value: useSSL=false&serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=UTF-8
            - name: REGISTRY_PLUGIN_NAME
              value: zookeeper
            # Same as the MySQL address: ZooKeeper runs inside k8s, so its svc address is used here.
            - name: REGISTRY_SERVERS
              value: zk-svc.bigdata.svc.cluster.local:2181
            envFrom:
            - configMapRef:
                name: dolphinscheduler-common
            - configMapRef:
                name: dolphinscheduler-api
            image: dolphinscheduler_mysql:2.0.6
            imagePullPolicy: IfNotPresent
            livenessProbe:
              exec:
                command:
                - bash
                - /root/checkpoint.sh
                - ApiApplicationServer
              failureThreshold: 3
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
            name: dolphinscheduler-api
            ports:
            - containerPort: 12345
              name: api-port
              protocol: TCP
            readinessProbe:
              exec:
                command:
                - bash
                - /root/checkpoint.sh
                - ApiApplicationServer
              failureThreshold: 3
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /opt/dolphinscheduler/logs
              name: dolphinscheduler-api
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
          volumes:
          - emptyDir: {}
            name: dolphinscheduler-api
    
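    Before putting the Ingress in front of it, the API service can be checked with a port-forward; in DolphinScheduler 2.0.x the web UI answers under the /dolphinscheduler context path (default login admin / dolphinscheduler123):

    kubectl -n bigdata get pods -l app.kubernetes.io/name=dolphinscheduler-api
    kubectl -n bigdata port-forward svc/dolphinscheduler-api 12345:12345
    # then open http://127.0.0.1:12345/dolphinscheduler in a browser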

    dolphinscheduler-ingress.yaml

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      generation: 1
      labels:
        app.kubernetes.io/name: dolphinscheduler
        app.kubernetes.io/version: 2.0.6
      name: dolphinscheduler
      namespace: bigdata
    spec:
      rules:
      - host: dolphinscheduler.org
        http:
          paths:
          - backend:
              serviceName: dolphinscheduler-api
              servicePort: api-port
            path: /dolphinscheduler
    
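    The manifest above uses extensions/v1beta1, which Kubernetes removed in 1.22. On newer clusters the same rule has to be expressed in networking.k8s.io/v1; a rough equivalent, applied inline here purely as a sketch, would be:

    # assumption: a cluster >= 1.22 where extensions/v1beta1 Ingress no longer exists
    kubectl apply -n bigdata -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: dolphinscheduler
      labels:
        app.kubernetes.io/name: dolphinscheduler
        app.kubernetes.io/version: 2.0.6
    spec:
      rules:
      - host: dolphinscheduler.org
        http:
          paths:
          - path: /dolphinscheduler
            pathType: Prefix
            backend:
              service:
                name: dolphinscheduler-api
                port:
                  name: api-port
EOF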

    MySQL initialization

    Create the user

    GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%' IDENTIFIED BY 'dolphinscheduler';
    FLUSH PRIVILEGES;
    
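    Note that GRANT ... IDENTIFIED BY creates the user implicitly only on MySQL 5.7 and older; on MySQL 8 the user must exist before it can be granted privileges. A sketch of running the initialization against the in-cluster MySQL, where deploy/mysql and the root password are assumptions to replace with your own:

    # create the user and grant privileges (CREATE USER IF NOT EXISTS works on both 5.7 and 8.0)
    kubectl -n bigdata exec -it deploy/mysql -- mysql -uroot -p \
      -e "CREATE USER IF NOT EXISTS 'dolphinscheduler'@'%' IDENTIFIED BY 'dolphinscheduler'; GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%'; FLUSH PRIVILEGES;"

    # feed the table-creation script below into MySQL in one go (saved locally, e.g. as dolphinscheduler_mysql.sql)
    kubectl -n bigdata exec -i deploy/mysql -- mysql -uroot -p'<root password>' < dolphinscheduler_mysql.sql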

    Create the database and tables

    /*
     * Licensed to the Apache Software Foundation (ASF) under one or more
     * contributor license agreements.  See the NOTICE file distributed with
     * this work for additional information regarding copyright ownership.
     * The ASF licenses this file to You under the Apache License, Version 2.0
     * (the "License"); you may not use this file except in compliance with
     * the License.  You may obtain a copy of the License at
     *
     *    http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
    */
    
    SET FOREIGN_KEY_CHECKS=0;
    
    -- ----------------------------
    -- Database of dolphinscheduler
    -- ----------------------------
    CREATE DATABASE IF NOT EXISTS dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
    
    USE dolphinscheduler;
    
    -- ----------------------------
    -- Table structure for QRTZ_BLOB_TRIGGERS
    -- ----------------------------
    DROP TABLE IF EXISTS `QRTZ_BLOB_TRIGGERS`;
    CREATE TABLE `QRTZ_BLOB_TRIGGERS` (
      `SCHED_NAME` varchar(120) NOT NULL,
      `TRIGGER_NAME` varchar(200) NOT NULL,
      `TRIGGER_GROUP` varchar(200) NOT NULL,
      `BLOB_DATA` blob,
      PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
      KEY `SCHED_NAME` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
      CONSTRAINT `QRTZ_BLOB_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of QRTZ_BLOB_TRIGGERS
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for QRTZ_CALENDARS
    -- ----------------------------
    DROP TABLE IF EXISTS `QRTZ_CALENDARS`;
    CREATE TABLE `QRTZ_CALENDARS` (
      `SCHED_NAME` varchar(120) NOT NULL,
      `CALENDAR_NAME` varchar(200) NOT NULL,
      `CALENDAR` blob NOT NULL,
      PRIMARY KEY (`SCHED_NAME`,`CALENDAR_NAME`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of QRTZ_CALENDARS
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for QRTZ_CRON_TRIGGERS
    -- ----------------------------
    DROP TABLE IF EXISTS `QRTZ_CRON_TRIGGERS`;
    CREATE TABLE `QRTZ_CRON_TRIGGERS` (
      `SCHED_NAME` varchar(120) NOT NULL,
      `TRIGGER_NAME` varchar(200) NOT NULL,
      `TRIGGER_GROUP` varchar(200) NOT NULL,
      `CRON_EXPRESSION` varchar(120) NOT NULL,
      `TIME_ZONE_ID` varchar(80) DEFAULT NULL,
      PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
      CONSTRAINT `QRTZ_CRON_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of QRTZ_CRON_TRIGGERS
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for QRTZ_FIRED_TRIGGERS
    -- ----------------------------
    DROP TABLE IF EXISTS `QRTZ_FIRED_TRIGGERS`;
    CREATE TABLE `QRTZ_FIRED_TRIGGERS` (
      `SCHED_NAME` varchar(120) NOT NULL,
      `ENTRY_ID` varchar(200) NOT NULL,
      `TRIGGER_NAME` varchar(200) NOT NULL,
      `TRIGGER_GROUP` varchar(200) NOT NULL,
      `INSTANCE_NAME` varchar(200) NOT NULL,
      `FIRED_TIME` bigint(13) NOT NULL,
      `SCHED_TIME` bigint(13) NOT NULL,
      `PRIORITY` int(11) NOT NULL,
      `STATE` varchar(16) NOT NULL,
      `JOB_NAME` varchar(200) DEFAULT NULL,
      `JOB_GROUP` varchar(200) DEFAULT NULL,
      `IS_NONCONCURRENT` varchar(1) DEFAULT NULL,
      `REQUESTS_RECOVERY` varchar(1) DEFAULT NULL,
      PRIMARY KEY (`SCHED_NAME`,`ENTRY_ID`),
      KEY `IDX_QRTZ_FT_TRIG_INST_NAME` (`SCHED_NAME`,`INSTANCE_NAME`),
      KEY `IDX_QRTZ_FT_INST_JOB_REQ_RCVRY` (`SCHED_NAME`,`INSTANCE_NAME`,`REQUESTS_RECOVERY`),
      KEY `IDX_QRTZ_FT_J_G` (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`),
      KEY `IDX_QRTZ_FT_JG` (`SCHED_NAME`,`JOB_GROUP`),
      KEY `IDX_QRTZ_FT_T_G` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
      KEY `IDX_QRTZ_FT_TG` (`SCHED_NAME`,`TRIGGER_GROUP`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of QRTZ_FIRED_TRIGGERS
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for QRTZ_JOB_DETAILS
    -- ----------------------------
    DROP TABLE IF EXISTS `QRTZ_JOB_DETAILS`;
    CREATE TABLE `QRTZ_JOB_DETAILS` (
      `SCHED_NAME` varchar(120) NOT NULL,
      `JOB_NAME` varchar(200) NOT NULL,
      `JOB_GROUP` varchar(200) NOT NULL,
      `DESCRIPTION` varchar(250) DEFAULT NULL,
      `JOB_CLASS_NAME` varchar(250) NOT NULL,
      `IS_DURABLE` varchar(1) NOT NULL,
      `IS_NONCONCURRENT` varchar(1) NOT NULL,
      `IS_UPDATE_DATA` varchar(1) NOT NULL,
      `REQUESTS_RECOVERY` varchar(1) NOT NULL,
      `JOB_DATA` blob,
      PRIMARY KEY (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`),
      KEY `IDX_QRTZ_J_REQ_RECOVERY` (`SCHED_NAME`,`REQUESTS_RECOVERY`),
      KEY `IDX_QRTZ_J_GRP` (`SCHED_NAME`,`JOB_GROUP`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of QRTZ_JOB_DETAILS
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for QRTZ_LOCKS
    -- ----------------------------
    DROP TABLE IF EXISTS `QRTZ_LOCKS`;
    CREATE TABLE `QRTZ_LOCKS` (
      `SCHED_NAME` varchar(120) NOT NULL,
      `LOCK_NAME` varchar(40) NOT NULL,
      PRIMARY KEY (`SCHED_NAME`,`LOCK_NAME`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of QRTZ_LOCKS
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for QRTZ_PAUSED_TRIGGER_GRPS
    -- ----------------------------
    DROP TABLE IF EXISTS `QRTZ_PAUSED_TRIGGER_GRPS`;
    CREATE TABLE `QRTZ_PAUSED_TRIGGER_GRPS` (
      `SCHED_NAME` varchar(120) NOT NULL,
      `TRIGGER_GROUP` varchar(200) NOT NULL,
      PRIMARY KEY (`SCHED_NAME`,`TRIGGER_GROUP`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of QRTZ_PAUSED_TRIGGER_GRPS
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for QRTZ_SCHEDULER_STATE
    -- ----------------------------
    DROP TABLE IF EXISTS `QRTZ_SCHEDULER_STATE`;
    CREATE TABLE `QRTZ_SCHEDULER_STATE` (
      `SCHED_NAME` varchar(120) NOT NULL,
      `INSTANCE_NAME` varchar(200) NOT NULL,
      `LAST_CHECKIN_TIME` bigint(13) NOT NULL,
      `CHECKIN_INTERVAL` bigint(13) NOT NULL,
      PRIMARY KEY (`SCHED_NAME`,`INSTANCE_NAME`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of QRTZ_SCHEDULER_STATE
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for QRTZ_SIMPLE_TRIGGERS
    -- ----------------------------
    DROP TABLE IF EXISTS `QRTZ_SIMPLE_TRIGGERS`;
    CREATE TABLE `QRTZ_SIMPLE_TRIGGERS` (
      `SCHED_NAME` varchar(120) NOT NULL,
      `TRIGGER_NAME` varchar(200) NOT NULL,
      `TRIGGER_GROUP` varchar(200) NOT NULL,
      `REPEAT_COUNT` bigint(7) NOT NULL,
      `REPEAT_INTERVAL` bigint(12) NOT NULL,
      `TIMES_TRIGGERED` bigint(10) NOT NULL,
      PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
      CONSTRAINT `QRTZ_SIMPLE_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of QRTZ_SIMPLE_TRIGGERS
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for QRTZ_SIMPROP_TRIGGERS
    -- ----------------------------
    DROP TABLE IF EXISTS `QRTZ_SIMPROP_TRIGGERS`;
    CREATE TABLE `QRTZ_SIMPROP_TRIGGERS` (
      `SCHED_NAME` varchar(120) NOT NULL,
      `TRIGGER_NAME` varchar(200) NOT NULL,
      `TRIGGER_GROUP` varchar(200) NOT NULL,
      `STR_PROP_1` varchar(512) DEFAULT NULL,
      `STR_PROP_2` varchar(512) DEFAULT NULL,
      `STR_PROP_3` varchar(512) DEFAULT NULL,
      `INT_PROP_1` int(11) DEFAULT NULL,
      `INT_PROP_2` int(11) DEFAULT NULL,
      `LONG_PROP_1` bigint(20) DEFAULT NULL,
      `LONG_PROP_2` bigint(20) DEFAULT NULL,
      `DEC_PROP_1` decimal(13,4) DEFAULT NULL,
      `DEC_PROP_2` decimal(13,4) DEFAULT NULL,
      `BOOL_PROP_1` varchar(1) DEFAULT NULL,
      `BOOL_PROP_2` varchar(1) DEFAULT NULL,
      PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
      CONSTRAINT `QRTZ_SIMPROP_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of QRTZ_SIMPROP_TRIGGERS
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for QRTZ_TRIGGERS
    -- ----------------------------
    DROP TABLE IF EXISTS `QRTZ_TRIGGERS`;
    CREATE TABLE `QRTZ_TRIGGERS` (
      `SCHED_NAME` varchar(120) NOT NULL,
      `TRIGGER_NAME` varchar(200) NOT NULL,
      `TRIGGER_GROUP` varchar(200) NOT NULL,
      `JOB_NAME` varchar(200) NOT NULL,
      `JOB_GROUP` varchar(200) NOT NULL,
      `DESCRIPTION` varchar(250) DEFAULT NULL,
      `NEXT_FIRE_TIME` bigint(13) DEFAULT NULL,
      `PREV_FIRE_TIME` bigint(13) DEFAULT NULL,
      `PRIORITY` int(11) DEFAULT NULL,
      `TRIGGER_STATE` varchar(16) NOT NULL,
      `TRIGGER_TYPE` varchar(8) NOT NULL,
      `START_TIME` bigint(13) NOT NULL,
      `END_TIME` bigint(13) DEFAULT NULL,
      `CALENDAR_NAME` varchar(200) DEFAULT NULL,
      `MISFIRE_INSTR` smallint(2) DEFAULT NULL,
      `JOB_DATA` blob,
      PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
      KEY `IDX_QRTZ_T_J` (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`),
      KEY `IDX_QRTZ_T_JG` (`SCHED_NAME`,`JOB_GROUP`),
      KEY `IDX_QRTZ_T_C` (`SCHED_NAME`,`CALENDAR_NAME`),
      KEY `IDX_QRTZ_T_G` (`SCHED_NAME`,`TRIGGER_GROUP`),
      KEY `IDX_QRTZ_T_STATE` (`SCHED_NAME`,`TRIGGER_STATE`),
      KEY `IDX_QRTZ_T_N_STATE` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`,`TRIGGER_STATE`),
      KEY `IDX_QRTZ_T_N_G_STATE` (`SCHED_NAME`,`TRIGGER_GROUP`,`TRIGGER_STATE`),
      KEY `IDX_QRTZ_T_NEXT_FIRE_TIME` (`SCHED_NAME`,`NEXT_FIRE_TIME`),
      KEY `IDX_QRTZ_T_NFT_ST` (`SCHED_NAME`,`TRIGGER_STATE`,`NEXT_FIRE_TIME`),
      KEY `IDX_QRTZ_T_NFT_MISFIRE` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`),
      KEY `IDX_QRTZ_T_NFT_ST_MISFIRE` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`,`TRIGGER_STATE`),
      KEY `IDX_QRTZ_T_NFT_ST_MISFIRE_GRP` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`,`TRIGGER_GROUP`,`TRIGGER_STATE`),
      CONSTRAINT `QRTZ_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `JOB_NAME`, `JOB_GROUP`) REFERENCES `QRTZ_JOB_DETAILS` (`SCHED_NAME`, `JOB_NAME`, `JOB_GROUP`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of QRTZ_TRIGGERS
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_access_token
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_access_token`;
    CREATE TABLE `t_ds_access_token` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `user_id` int(11) DEFAULT NULL COMMENT 'user id',
      `token` varchar(64) DEFAULT NULL COMMENT 'token',
      `expire_time` datetime DEFAULT NULL COMMENT 'end time of token ',
      `create_time` datetime DEFAULT NULL COMMENT 'create time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_access_token
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_alert
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_alert`;
    CREATE TABLE `t_ds_alert` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `title` varchar(64) DEFAULT NULL COMMENT 'title',
      `content` text COMMENT 'Message content (can be email, can be SMS. Mail is stored in JSON map, and SMS is string)',
      `alert_status` tinyint(4) DEFAULT '0' COMMENT '0:wait running,1:success,2:failed',
      `log` text COMMENT 'log',
      `alertgroup_id` int(11) DEFAULT NULL COMMENT 'alert group id',
      `create_time` datetime DEFAULT NULL COMMENT 'create time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_alert
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_alertgroup
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_alertgroup`;
    CREATE TABLE `t_ds_alertgroup`(
      `id`             int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `alert_instance_ids` varchar (255) DEFAULT NULL COMMENT 'alert instance ids',
      `create_user_id` int(11) DEFAULT NULL COMMENT 'create user id',
      `group_name`     varchar(255) DEFAULT NULL COMMENT 'group name',
      `description`    varchar(255) DEFAULT NULL,
      `create_time`    datetime     DEFAULT NULL COMMENT 'create time',
      `update_time`    datetime     DEFAULT NULL COMMENT 'update time',
      PRIMARY KEY (`id`),
      UNIQUE KEY `t_ds_alertgroup_name_un` (`group_name`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_alertgroup
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_command
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_command`;
    CREATE TABLE `t_ds_command` (
      `id`                        int(11)    NOT NULL AUTO_INCREMENT COMMENT 'key',
      `command_type`              tinyint(4) DEFAULT NULL COMMENT 'Command type: 0 start workflow, 1 start execution from current node, 2 resume fault-tolerant workflow, 3 resume pause process, 4 start execution from failed node, 5 complement, 6 schedule, 7 rerun, 8 pause, 9 stop, 10 resume waiting thread',
      `process_definition_code`   bigint(20) NOT NULL COMMENT 'process definition code',
      `process_definition_version` int(11) DEFAULT '0' COMMENT 'process definition version',
      `process_instance_id`       int(11) DEFAULT '0' COMMENT 'process instance id',
      `command_param`             text COMMENT 'json command parameters',
      `task_depend_type`          tinyint(4) DEFAULT NULL COMMENT 'Node dependency type: 0 current node, 1 forward, 2 backward',
      `failure_strategy`          tinyint(4) DEFAULT '0' COMMENT 'Failed policy: 0 end, 1 continue',
      `warning_type`              tinyint(4) DEFAULT '0' COMMENT 'Alarm type: 0 is not sent, 1 process is sent successfully, 2 process is sent failed, 3 process is sent successfully and all failures are sent',
      `warning_group_id`          int(11) DEFAULT NULL COMMENT 'warning group',
      `schedule_time`             datetime DEFAULT NULL COMMENT 'schedule time',
      `start_time`                datetime DEFAULT NULL COMMENT 'start time',
      `executor_id`               int(11) DEFAULT NULL COMMENT 'executor id',
      `update_time`               datetime DEFAULT NULL COMMENT 'update time',
      `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority: 0 Highest,1 High,2 Medium,3 Low,4 Lowest',
      `worker_group`              varchar(64)  COMMENT 'worker group',
      `environment_code`          bigint(20) DEFAULT '-1' COMMENT 'environment code',
      `dry_run`                   tinyint(4) DEFAULT '0' COMMENT 'dry run flag:0 normal, 1 dry run',
      PRIMARY KEY (`id`),
      KEY `priority_id_index` (`process_instance_priority`,`id`) USING BTREE
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_command
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_datasource
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_datasource`;
    CREATE TABLE `t_ds_datasource` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `name` varchar(64) NOT NULL COMMENT 'data source name',
      `note` varchar(255) DEFAULT NULL COMMENT 'description',
      `type` tinyint(4) NOT NULL COMMENT 'data source type: 0:mysql,1:postgresql,2:hive,3:spark',
      `user_id` int(11) NOT NULL COMMENT 'the creator id',
      `connection_params` text NOT NULL COMMENT 'json connection params',
      `create_time` datetime NOT NULL COMMENT 'create time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      PRIMARY KEY (`id`),
      UNIQUE KEY `t_ds_datasource_name_un` (`name`, `type`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_datasource
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_error_command
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_error_command`;
    CREATE TABLE `t_ds_error_command` (
      `id` int(11) NOT NULL COMMENT 'key',
      `command_type` tinyint(4) DEFAULT NULL COMMENT 'command type',
      `executor_id` int(11) DEFAULT NULL COMMENT 'executor id',
      `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
      `process_definition_version` int(11) DEFAULT '0' COMMENT 'process definition version',
      `process_instance_id` int(11) DEFAULT '0' COMMENT 'process instance id: 0',
      `command_param` text COMMENT 'json command parameters',
      `task_depend_type` tinyint(4) DEFAULT NULL COMMENT 'task depend type',
      `failure_strategy` tinyint(4) DEFAULT '0' COMMENT 'failure strategy',
      `warning_type` tinyint(4) DEFAULT '0' COMMENT 'warning type',
      `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group id',
      `schedule_time` datetime DEFAULT NULL COMMENT 'scheduler time',
      `start_time` datetime DEFAULT NULL COMMENT 'start time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority, 0 Highest,1 High,2 Medium,3 Low,4 Lowest',
      `worker_group` varchar(64)  COMMENT 'worker group',
      `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
      `message` text COMMENT 'message',
      `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag: 0 normal, 1 dry run',
      PRIMARY KEY (`id`) USING BTREE
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC;
    
    -- ----------------------------
    -- Records of t_ds_error_command
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_process_definition
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_process_definition`;
    CREATE TABLE `t_ds_process_definition` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
      `code` bigint(20) NOT NULL COMMENT 'encoding',
      `name` varchar(255) DEFAULT NULL COMMENT 'process definition name',
      `version` int(11) DEFAULT '0' COMMENT 'process definition version',
      `description` text COMMENT 'description',
      `project_code` bigint(20) NOT NULL COMMENT 'project code',
      `release_state` tinyint(4) DEFAULT NULL COMMENT 'process definition release state:0:offline,1:online',
      `user_id` int(11) DEFAULT NULL COMMENT 'process definition creator id',
      `global_params` text COMMENT 'global parameters',
      `flag` tinyint(4) DEFAULT NULL COMMENT '0 not available, 1 available',
      `locations` text COMMENT 'Node location information',
      `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id',
      `timeout` int(11) DEFAULT '0' COMMENT 'time out, unit: minute',
      `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id',
      `execution_type` tinyint(4) DEFAULT '0' COMMENT 'execution_type 0:parallel,1:serial wait,2:serial discard,3:serial priority',
      `create_time` datetime NOT NULL COMMENT 'create time',
      `update_time` datetime NOT NULL COMMENT 'update time',
      PRIMARY KEY (`id`,`code`),
      UNIQUE KEY `process_unique` (`name`,`project_code`) USING BTREE
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_process_definition
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_process_definition_log
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_process_definition_log`;
    CREATE TABLE `t_ds_process_definition_log` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
      `code` bigint(20) NOT NULL COMMENT 'encoding',
      `name` varchar(200) DEFAULT NULL COMMENT 'process definition name',
      `version` int(11) DEFAULT '0' COMMENT 'process definition version',
      `description` text COMMENT 'description',
      `project_code` bigint(20) NOT NULL COMMENT 'project code',
      `release_state` tinyint(4) DEFAULT NULL COMMENT 'process definition release state:0:offline,1:online',
      `user_id` int(11) DEFAULT NULL COMMENT 'process definition creator id',
      `global_params` text COMMENT 'global parameters',
      `flag` tinyint(4) DEFAULT NULL COMMENT '0 not available, 1 available',
      `locations` text COMMENT 'Node location information',
      `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id',
      `timeout` int(11) DEFAULT '0' COMMENT 'time out,unit: minute',
      `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id',
      `execution_type` tinyint(4) DEFAULT '0' COMMENT 'execution_type 0:parallel,1:serial wait,2:serial discard,3:serial priority',
      `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
      `operate_time` datetime DEFAULT NULL COMMENT 'operate time',
      `create_time` datetime NOT NULL COMMENT 'create time',
      `update_time` datetime NOT NULL COMMENT 'update time',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Table structure for t_ds_task_definition
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_task_definition`;
    CREATE TABLE `t_ds_task_definition` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
      `code` bigint(20) NOT NULL COMMENT 'encoding',
      `name` varchar(200) DEFAULT NULL COMMENT 'task definition name',
      `version` int(11) DEFAULT '0' COMMENT 'task definition version',
      `description` text COMMENT 'description',
      `project_code` bigint(20) NOT NULL COMMENT 'project code',
      `user_id` int(11) DEFAULT NULL COMMENT 'task definition creator id',
      `task_type` varchar(50) NOT NULL COMMENT 'task type',
      `task_params` longtext COMMENT 'job custom parameters',
      `flag` tinyint(2) DEFAULT NULL COMMENT '0 not available, 1 available',
      `task_priority` tinyint(4) DEFAULT NULL COMMENT 'job priority',
      `worker_group` varchar(200) DEFAULT NULL COMMENT 'worker grouping',
      `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
      `fail_retry_times` int(11) DEFAULT NULL COMMENT 'number of failed retries',
      `fail_retry_interval` int(11) DEFAULT NULL COMMENT 'failed retry interval',
      `timeout_flag` tinyint(2) DEFAULT '0' COMMENT 'timeout flag:0 close, 1 open',
      `timeout_notify_strategy` tinyint(4) DEFAULT NULL COMMENT 'timeout notification policy: 0 warning, 1 fail',
      `timeout` int(11) DEFAULT '0' COMMENT 'timeout length,unit: minute',
      `delay_time` int(11) DEFAULT '0' COMMENT 'delay execution time,unit: minute',
      `resource_ids` text COMMENT 'resource id, separated by comma',
      `create_time` datetime NOT NULL COMMENT 'create time',
      `update_time` datetime NOT NULL COMMENT 'update time',
      PRIMARY KEY (`id`,`code`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Table structure for t_ds_task_definition_log
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_task_definition_log`;
    CREATE TABLE `t_ds_task_definition_log` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
      `code` bigint(20) NOT NULL COMMENT 'encoding',
      `name` varchar(200) DEFAULT NULL COMMENT 'task definition name',
      `version` int(11) DEFAULT '0' COMMENT 'task definition version',
      `description` text COMMENT 'description',
      `project_code` bigint(20) NOT NULL COMMENT 'project code',
      `user_id` int(11) DEFAULT NULL COMMENT 'task definition creator id',
      `task_type` varchar(50) NOT NULL COMMENT 'task type',
      `task_params` longtext COMMENT 'job custom parameters',
      `flag` tinyint(2) DEFAULT NULL COMMENT '0 not available, 1 available',
      `task_priority` tinyint(4) DEFAULT NULL COMMENT 'job priority',
      `worker_group` varchar(200) DEFAULT NULL COMMENT 'worker grouping',
      `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
      `fail_retry_times` int(11) DEFAULT NULL COMMENT 'number of failed retries',
      `fail_retry_interval` int(11) DEFAULT NULL COMMENT 'failed retry interval',
      `timeout_flag` tinyint(2) DEFAULT '0' COMMENT 'timeout flag:0 close, 1 open',
      `timeout_notify_strategy` tinyint(4) DEFAULT NULL COMMENT 'timeout notification policy: 0 warning, 1 fail',
      `timeout` int(11) DEFAULT '0' COMMENT 'timeout length,unit: minute',
      `delay_time` int(11) DEFAULT '0' COMMENT 'delay execution time,unit: minute',
      `resource_ids` text DEFAULT NULL COMMENT 'resource id, separated by comma',
      `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
      `operate_time` datetime DEFAULT NULL COMMENT 'operate time',
      `create_time` datetime NOT NULL COMMENT 'create time',
      `update_time` datetime NOT NULL COMMENT 'update time',
      PRIMARY KEY (`id`),
      KEY `idx_code_version` (`code`,`version`),
      KEY `idx_project_code` (`project_code`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Table structure for t_ds_process_task_relation
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_process_task_relation`;
    CREATE TABLE `t_ds_process_task_relation` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
      `name` varchar(200) DEFAULT NULL COMMENT 'relation name',
      `project_code` bigint(20) NOT NULL COMMENT 'project code',
      `process_definition_code` bigint(20) NOT NULL COMMENT 'process code',
      `process_definition_version` int(11) NOT NULL COMMENT 'process version',
      `pre_task_code` bigint(20) NOT NULL COMMENT 'pre task code',
      `pre_task_version` int(11) NOT NULL COMMENT 'pre task version',
      `post_task_code` bigint(20) NOT NULL COMMENT 'post task code',
      `post_task_version` int(11) NOT NULL COMMENT 'post task version',
      `condition_type` tinyint(2) DEFAULT NULL COMMENT 'condition type : 0 none, 1 judge 2 delay',
      `condition_params` text COMMENT 'condition params(json)',
      `create_time` datetime NOT NULL COMMENT 'create time',
      `update_time` datetime NOT NULL COMMENT 'update time',
      PRIMARY KEY (`id`),
      KEY `idx_code` (`project_code`,`process_definition_code`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Table structure for t_ds_process_task_relation_log
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_process_task_relation_log`;
    CREATE TABLE `t_ds_process_task_relation_log` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
      `name` varchar(200) DEFAULT NULL COMMENT 'relation name',
      `project_code` bigint(20) NOT NULL COMMENT 'project code',
      `process_definition_code` bigint(20) NOT NULL COMMENT 'process code',
      `process_definition_version` int(11) NOT NULL COMMENT 'process version',
      `pre_task_code` bigint(20) NOT NULL COMMENT 'pre task code',
      `pre_task_version` int(11) NOT NULL COMMENT 'pre task version',
      `post_task_code` bigint(20) NOT NULL COMMENT 'post task code',
      `post_task_version` int(11) NOT NULL COMMENT 'post task version',
      `condition_type` tinyint(2) DEFAULT NULL COMMENT 'condition type : 0 none, 1 judge 2 delay',
      `condition_params` text COMMENT 'condition params(json)',
      `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
      `operate_time` datetime DEFAULT NULL COMMENT 'operate time',
      `create_time` datetime NOT NULL COMMENT 'create time',
      `update_time` datetime NOT NULL COMMENT 'update time',
      PRIMARY KEY (`id`),
      KEY `idx_process_code_version` (`process_definition_code`,`process_definition_version`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Table structure for t_ds_process_instance
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_process_instance`;
    CREATE TABLE `t_ds_process_instance` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `name` varchar(255) DEFAULT NULL COMMENT 'process instance name',
      `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
      `process_definition_version` int(11) DEFAULT '0' COMMENT 'process definition version',
      `state` tinyint(4) DEFAULT NULL COMMENT 'process instance Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete',
      `recovery` tinyint(4) DEFAULT NULL COMMENT 'process instance failover flag:0:normal,1:failover instance',
      `start_time` datetime DEFAULT NULL COMMENT 'process instance start time',
      `end_time` datetime DEFAULT NULL COMMENT 'process instance end time',
      `run_times` int(11) DEFAULT NULL COMMENT 'process instance run times',
      `host` varchar(135) DEFAULT NULL COMMENT 'process instance host',
      `command_type` tinyint(4) DEFAULT NULL COMMENT 'command type',
      `command_param` text COMMENT 'json command parameters',
      `task_depend_type` tinyint(4) DEFAULT NULL COMMENT 'task depend type. 0: only current node,1:before the node,2:later nodes',
      `max_try_times` tinyint(4) DEFAULT '0' COMMENT 'max try times',
      `failure_strategy` tinyint(4) DEFAULT '0' COMMENT 'failure strategy. 0:end the process when node failed,1:continue running the other nodes when node failed',
      `warning_type` tinyint(4) DEFAULT '0' COMMENT 'warning type. 0:no warning,1:warning if process success,2:warning if process failed,3:warning if success',
      `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group id',
      `schedule_time` datetime DEFAULT NULL COMMENT 'schedule time',
      `command_start_time` datetime DEFAULT NULL COMMENT 'command start time',
      `global_params` text COMMENT 'global parameters',
      `flag` tinyint(4) DEFAULT '1' COMMENT 'flag',
      `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      `is_sub_process` int(11) DEFAULT '0' COMMENT 'flag, whether the process is sub process',
      `executor_id` int(11) NOT NULL COMMENT 'executor id',
      `history_cmd` text COMMENT 'history commands of process instance operation',
      `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority. 0 Highest,1 High,2 Medium,3 Low,4 Lowest',
      `worker_group` varchar(64) DEFAULT NULL COMMENT 'worker group id',
      `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
      `timeout` int(11) DEFAULT '0' COMMENT 'time out',
      `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id',
      `var_pool` longtext COMMENT 'var_pool',
      `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag:0 normal, 1 dry run',
      `next_process_instance_id` int(11) DEFAULT '0' COMMENT 'serial queue next processInstanceId',
      `restart_time` datetime DEFAULT NULL COMMENT 'process instance restart time',
      PRIMARY KEY (`id`),
      KEY `process_instance_index` (`process_definition_code`,`id`) USING BTREE,
      KEY `start_time_index` (`start_time`,`end_time`) USING BTREE
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_process_instance
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_project
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_project`;
    CREATE TABLE `t_ds_project` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `name` varchar(100) DEFAULT NULL COMMENT 'project name',
      `code` bigint(20) NOT NULL COMMENT 'encoding',
      `description` varchar(200) DEFAULT NULL,
      `user_id` int(11) DEFAULT NULL COMMENT 'creator id',
      `flag` tinyint(4) DEFAULT '1' COMMENT '0 not available, 1 available',
      `create_time` datetime NOT NULL COMMENT 'create time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      PRIMARY KEY (`id`),
      KEY `user_id_index` (`user_id`) USING BTREE
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_project
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_queue
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_queue`;
    CREATE TABLE `t_ds_queue` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `queue_name` varchar(64) DEFAULT NULL COMMENT 'queue name',
      `queue` varchar(64) DEFAULT NULL COMMENT 'yarn queue name',
      `create_time` datetime DEFAULT NULL COMMENT 'create time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_queue
    -- ----------------------------
    INSERT INTO `t_ds_queue` VALUES ('1', 'default', 'default', null, null);
    
    -- ----------------------------
    -- Table structure for t_ds_relation_datasource_user
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_relation_datasource_user`;
    CREATE TABLE `t_ds_relation_datasource_user` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `user_id` int(11) NOT NULL COMMENT 'user id',
      `datasource_id` int(11) DEFAULT NULL COMMENT 'data source id',
      `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
      `create_time` datetime DEFAULT NULL COMMENT 'create time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_relation_datasource_user
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_relation_process_instance
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_relation_process_instance`;
    CREATE TABLE `t_ds_relation_process_instance` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `parent_process_instance_id` int(11) DEFAULT NULL COMMENT 'parent process instance id',
      `parent_task_instance_id` int(11) DEFAULT NULL COMMENT 'parent process instance id',
      `process_instance_id` int(11) DEFAULT NULL COMMENT 'child process instance id',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_relation_process_instance
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_relation_project_user
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_relation_project_user`;
    CREATE TABLE `t_ds_relation_project_user` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `user_id` int(11) NOT NULL COMMENT 'user id',
      `project_id` int(11) DEFAULT NULL COMMENT 'project id',
      `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
      `create_time` datetime DEFAULT NULL COMMENT 'create time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      PRIMARY KEY (`id`),
      KEY `user_id_index` (`user_id`) USING BTREE
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_relation_project_user
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_relation_resources_user
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_relation_resources_user`;
    CREATE TABLE `t_ds_relation_resources_user` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `user_id` int(11) NOT NULL COMMENT 'user id',
      `resources_id` int(11) DEFAULT NULL COMMENT 'resource id',
      `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
      `create_time` datetime DEFAULT NULL COMMENT 'create time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_relation_resources_user
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_relation_udfs_user
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_relation_udfs_user`;
    CREATE TABLE `t_ds_relation_udfs_user` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `user_id` int(11) NOT NULL COMMENT 'userid',
      `udf_id` int(11) DEFAULT NULL COMMENT 'udf id',
      `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
      `create_time` datetime DEFAULT NULL COMMENT 'create time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Table structure for t_ds_resources
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_resources`;
    CREATE TABLE `t_ds_resources` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `alias` varchar(64) DEFAULT NULL COMMENT 'alias',
      `file_name` varchar(64) DEFAULT NULL COMMENT 'file name',
      `description` varchar(255) DEFAULT NULL,
      `user_id` int(11) DEFAULT NULL COMMENT 'user id',
      `type` tinyint(4) DEFAULT NULL COMMENT 'resource type,0:FILE,1:UDF',
      `size` bigint(20) DEFAULT NULL COMMENT 'resource size',
      `create_time` datetime DEFAULT NULL COMMENT 'create time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      `pid` int(11) DEFAULT NULL,
      `full_name` varchar(128) DEFAULT NULL,
      `is_directory` tinyint(4) DEFAULT NULL,
      PRIMARY KEY (`id`),
      UNIQUE KEY `t_ds_resources_un` (`full_name`,`type`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_resources
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_schedules
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_schedules`;
    CREATE TABLE `t_ds_schedules` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
      `start_time` datetime NOT NULL COMMENT 'start time',
      `end_time` datetime NOT NULL COMMENT 'end time',
      `timezone_id` varchar(40) DEFAULT NULL COMMENT 'schedule timezone id',
      `crontab` varchar(255) NOT NULL COMMENT 'crontab description',
      `failure_strategy` tinyint(4) NOT NULL COMMENT 'failure strategy. 0:end,1:continue',
      `user_id` int(11) NOT NULL COMMENT 'user id',
      `release_state` tinyint(4) NOT NULL COMMENT 'release state. 0:offline,1:online ',
      `warning_type` tinyint(4) NOT NULL COMMENT 'Alarm type: 0 is not sent, 1 process is sent successfully, 2 process is sent failed, 3 process is sent successfully and all failures are sent',
      `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id',
      `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest',
      `worker_group` varchar(64) DEFAULT '' COMMENT 'worker group id',
      `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
      `create_time` datetime NOT NULL COMMENT 'create time',
      `update_time` datetime NOT NULL COMMENT 'update time',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_schedules
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_session
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_session`;
    CREATE TABLE `t_ds_session` (
      `id` varchar(64) NOT NULL COMMENT 'key',
      `user_id` int(11) DEFAULT NULL COMMENT 'user id',
      `ip` varchar(45) DEFAULT NULL COMMENT 'ip',
      `last_login_time` datetime DEFAULT NULL COMMENT 'last login time',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_session
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_task_instance
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_task_instance`;
    CREATE TABLE `t_ds_task_instance` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `name` varchar(255) DEFAULT NULL COMMENT 'task name',
      `task_type` varchar(50) NOT NULL COMMENT 'task type',
      `task_code` bigint(20) NOT NULL COMMENT 'task definition code',
      `task_definition_version` int(11) DEFAULT '0' COMMENT 'task definition version',
      `process_instance_id` int(11) DEFAULT NULL COMMENT 'process instance id',
      `state` tinyint(4) DEFAULT NULL COMMENT 'Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete',
      `submit_time` datetime DEFAULT NULL COMMENT 'task submit time',
      `start_time` datetime DEFAULT NULL COMMENT 'task start time',
      `end_time` datetime DEFAULT NULL COMMENT 'task end time',
      `host` varchar(135) DEFAULT NULL COMMENT 'host of task running on',
      `execute_path` varchar(200) DEFAULT NULL COMMENT 'task execute path in the host',
      `log_path` varchar(200) DEFAULT NULL COMMENT 'task log path',
      `alert_flag` tinyint(4) DEFAULT NULL COMMENT 'whether alert',
      `retry_times` int(4) DEFAULT '0' COMMENT 'task retry times',
      `pid` int(4) DEFAULT NULL COMMENT 'pid of task',
      `app_link` longtext COMMENT 'yarn app id',
      `task_params` longtext COMMENT 'job custom parameters',
      `flag` tinyint(4) DEFAULT '1' COMMENT '0 not available, 1 available',
      `retry_interval` int(4) DEFAULT NULL COMMENT 'retry interval when task failed ',
      `max_retry_times` int(2) DEFAULT NULL COMMENT 'max retry times',
      `task_instance_priority` int(11) DEFAULT NULL COMMENT 'task instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest',
      `worker_group` varchar(64) DEFAULT NULL COMMENT 'worker group id',
      `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
      `environment_config` text COMMENT 'this config contains many environment variables config',
      `executor_id` int(11) DEFAULT NULL,
      `first_submit_time` datetime DEFAULT NULL COMMENT 'task first submit time',
      `delay_time` int(4) DEFAULT '0' COMMENT 'task delay execution time',
      `var_pool` longtext COMMENT 'var_pool',
      `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag: 0 normal, 1 dry run',
      PRIMARY KEY (`id`),
      KEY `process_instance_id` (`process_instance_id`) USING BTREE,
      KEY `idx_code_version` (`task_code`, `task_definition_version`) USING BTREE,
      CONSTRAINT `foreign_key_instance_id` FOREIGN KEY (`process_instance_id`) REFERENCES `t_ds_process_instance` (`id`) ON DELETE CASCADE
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_task_instance
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_tenant
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_tenant`;
    CREATE TABLE `t_ds_tenant` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `tenant_code` varchar(64) DEFAULT NULL COMMENT 'tenant code',
      `description` varchar(255) DEFAULT NULL,
      `queue_id` int(11) DEFAULT NULL COMMENT 'queue id',
      `create_time` datetime DEFAULT NULL COMMENT 'create time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_tenant
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_udfs
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_udfs`;
    CREATE TABLE `t_ds_udfs` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
      `user_id` int(11) NOT NULL COMMENT 'user id',
      `func_name` varchar(100) NOT NULL COMMENT 'UDF function name',
      `class_name` varchar(255) NOT NULL COMMENT 'class of udf',
      `type` tinyint(4) NOT NULL COMMENT 'Udf function type',
      `arg_types` varchar(255) DEFAULT NULL COMMENT 'arguments types',
      `database` varchar(255) DEFAULT NULL COMMENT 'data base',
      `description` varchar(255) DEFAULT NULL,
      `resource_id` int(11) NOT NULL COMMENT 'resource id',
      `resource_name` varchar(255) NOT NULL COMMENT 'resource name',
      `create_time` datetime NOT NULL COMMENT 'create time',
      `update_time` datetime NOT NULL COMMENT 'update time',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_udfs
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_user
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_user`;
    CREATE TABLE `t_ds_user` (
      `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'user id',
      `user_name` varchar(64) DEFAULT NULL COMMENT 'user name',
      `user_password` varchar(64) DEFAULT NULL COMMENT 'user password',
      `user_type` tinyint(4) DEFAULT NULL COMMENT 'user type, 0:administrator,1:ordinary user',
      `email` varchar(64) DEFAULT NULL COMMENT 'email',
      `phone` varchar(11) DEFAULT NULL COMMENT 'phone',
      `tenant_id` int(11) DEFAULT NULL COMMENT 'tenant id',
      `create_time` datetime DEFAULT NULL COMMENT 'create time',
      `update_time` datetime DEFAULT NULL COMMENT 'update time',
      `queue` varchar(64) DEFAULT NULL COMMENT 'queue',
      `state` tinyint(4) DEFAULT '1' COMMENT 'state 0:disable 1:enable',
      PRIMARY KEY (`id`),
      UNIQUE KEY `user_name_unique` (`user_name`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_user
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_worker_group
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_worker_group`;
    CREATE TABLE `t_ds_worker_group` (
      `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id',
      `name` varchar(255) NOT NULL COMMENT 'worker group name',
      `addr_list` text NULL DEFAULT NULL COMMENT 'worker addr list. split by [,]',
      `create_time` datetime NULL DEFAULT NULL COMMENT 'create time',
      `update_time` datetime NULL DEFAULT NULL COMMENT 'update time',
      PRIMARY KEY (`id`),
      UNIQUE KEY `name_unique` (`name`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Records of t_ds_worker_group
    -- ----------------------------
    
    -- ----------------------------
    -- Table structure for t_ds_version
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_version`;
    CREATE TABLE `t_ds_version` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `version` varchar(200) NOT NULL,
      PRIMARY KEY (`id`),
      UNIQUE KEY `version_UNIQUE` (`version`)
    ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COMMENT='version';
    
    -- ----------------------------
    -- Records of t_ds_version
    -- ----------------------------
    INSERT INTO `t_ds_version` VALUES ('1', '2.0.6');
    
    
    -- ----------------------------
    -- Records of t_ds_alertgroup
    -- ----------------------------
    INSERT INTO `t_ds_alertgroup`(alert_instance_ids, create_user_id, group_name, description, create_time, update_time)
    VALUES ("1,2", 1, 'default admin warning group', 'default admin warning group', '2018-11-29 10:20:39', '2018-11-29 10:20:39');
    
    -- ----------------------------
    -- Records of t_ds_user
    -- ----------------------------
    INSERT INTO `t_ds_user`
    VALUES ('1', 'admin', '7ad2410b2f4c074479a8937a28a22b8f', '0', 'xxx@qq.com', '', '0', '2018-03-27 15:48:50', '2018-10-24 17:40:22', null, 1);
    
    -- ----------------------------
    -- Table structure for t_ds_plugin_define
    -- ----------------------------
    SET sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY',''));
    DROP TABLE IF EXISTS `t_ds_plugin_define`;
    CREATE TABLE `t_ds_plugin_define` (
      `id` int NOT NULL AUTO_INCREMENT,
      `plugin_name` varchar(100) NOT NULL COMMENT 'the name of plugin eg: email',
      `plugin_type` varchar(100) NOT NULL COMMENT 'plugin type . alert=alert plugin, job=job plugin',
      `plugin_params` text COMMENT 'plugin params',
      `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
      `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      PRIMARY KEY (`id`),
      UNIQUE KEY `t_ds_plugin_define_UN` (`plugin_name`,`plugin_type`)
    ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Table structure for t_ds_alert_plugin_instance
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_alert_plugin_instance`;
    CREATE TABLE `t_ds_alert_plugin_instance` (
      `id` int NOT NULL AUTO_INCREMENT,
      `plugin_define_id` int NOT NULL,
      `plugin_instance_params` text COMMENT 'plugin instance params. Also contain the params value which user input in web ui.',
      `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
      `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      `instance_name` varchar(200) DEFAULT NULL COMMENT 'alert instance name',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Table structure for t_ds_environment
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_environment`;
    CREATE TABLE `t_ds_environment` (
      `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id',
      `code` bigint(20)  DEFAULT NULL COMMENT 'encoding',
      `name` varchar(100) NOT NULL COMMENT 'environment name',
      `config` text NULL DEFAULT NULL COMMENT 'this config contains many environment variables config',
      `description` text NULL DEFAULT NULL COMMENT 'the details',
      `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
      `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
      `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      PRIMARY KEY (`id`),
      UNIQUE KEY `environment_name_unique` (`name`),
      UNIQUE KEY `environment_code_unique` (`code`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    -- ----------------------------
    -- Table structure for t_ds_environment_worker_group_relation
    -- ----------------------------
    DROP TABLE IF EXISTS `t_ds_environment_worker_group_relation`;
    CREATE TABLE `t_ds_environment_worker_group_relation` (
      `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id',
      `environment_code` bigint(20) NOT NULL COMMENT 'environment code',
      `worker_group` varchar(255) NOT NULL COMMENT 'worker group id',
      `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
      `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
      `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      PRIMARY KEY (`id`),
      UNIQUE KEY `environment_worker_group_unique` (`environment_code`,`worker_group`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
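    With the schema above saved as a file (e.g. dolphinscheduler_mysql.sql), the metadata database can be initialized with the mysql client. The sketch below is only an illustration: the host, port, account and database name are assumptions and must be replaced with the values of your own MySQL deployment.

    # Assumed connection parameters -- adjust to your environment
    MYSQL_HOST=mysql-service
    MYSQL_PORT=3306
    # Create the metadata database (name assumed to be "dolphinscheduler")
    mysql -h "${MYSQL_HOST}" -P "${MYSQL_PORT}" -uroot -p -e \
      "CREATE DATABASE IF NOT EXISTS dolphinscheduler DEFAULT CHARACTER SET utf8;"
    # Import the schema saved from the SQL above
    mysql -h "${MYSQL_HOST}" -P "${MYSQL_PORT}" -uroot -p dolphinscheduler < dolphinscheduler_mysql.sql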

    Start DolphinScheduler

    kubectl apply -f dolphinscheduler-master.yaml
    kubectl apply -f dolphinscheduler-alert.yaml
    kubectl apply -f dolphinscheduler-worker.yaml
    kubectl apply -f dolphinscheduler-api.yaml
    kubectl apply -f dolphinscheduler-ingress.yaml
    
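    Before opening the web UI, it is worth confirming that every component reaches the Running state. A minimal check, assuming the resources were created in the current namespace:

    # Watch the DolphinScheduler pods until they are all Running and Ready
    kubectl get pods -w
    # If a pod keeps restarting, inspect its logs (the pod name is a placeholder)
    kubectl logs -f <dolphinscheduler-api-pod-name>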

    Once all pods are Running, visit dolphinscheduler.org/dolphinscheduler. If you changed the ingress, use the domain name you configured instead; without a DNS server or DNS resolution, you will need to add the hostname to your local hosts file yourself.
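    For example, without a DNS record you can map the ingress host name to the address exposed by your ingress controller in the local hosts file. The IP below is a placeholder and the host name should follow your own ingress configuration:

    # List ingresses to find the address they expose
    kubectl get ingress
    # Point the configured host name at that address locally (placeholder IP)
    echo "192.168.1.100  dolphinscheduler.org" | sudo tee -a /etc/hosts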

    Default username/password: admin/dolphinscheduler123

  • Original article: https://blog.csdn.net/u010383467/article/details/126233377