• 【kyuubi k8s】Deploying Kyuubi to Kubernetes to Run Spark SQL


    Background

    The previous post covered integrating Kyuubi with Spark and submitting Spark SQL to a k8s cluster, with the Kyuubi and Spark environments sitting on a single local server. For high availability, this post packages that environment into an image and deploys it to k8s.

    In essence, the local Kyuubi, Spark, JDK, etc. are bundled into an image and the environment variables are set up. The process is simple; refer to the Dockerfile below.

    Directory layout for the Dockerfile

    Prepare the Kyuubi tarball, the Spark tarball, the JDK tarball, and the AWS jars highlighted in the previous post (the jars under the awsicelib directory). One more important point: the k8s config (kubeconfig) must also be baked into the image, otherwise the container cannot authenticate against k8s and cannot submit jobs. A sketch of the build context layout follows.
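    A minimal sketch of that build context, assuming all the artifacts referenced by the Dockerfile sit next to it (file names come from the Dockerfile below; the annotations are assumptions):

    # Assumed layout of the Docker build context
    $ ls
    Dockerfile
    config                                        # kubeconfig copied from ~/.kube/config
    busycommand.sh                                # keep-alive/restart script, shown below
    apache-kyuubi-1.9.0-bin.tgz
    spark-3.5.0-bin-hadoop3.tgz
    zulu11.52.13-ca-jdk11.0.13-linux_x64.tar.gz
    awsicelib/                                    # hadoop-aws / AWS SDK / Iceberg jars for MinIO access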

    Dockerfile

    FROM centos
    MAINTAINER 682556
    RUN mkdir -p /opt
    RUN mkdir -p /root/.kube
    # Bake the k8s config (kubeconfig) into the image
    ADD config /root/.kube
    ADD apache-kyuubi-1.9.0-bin.tgz /opt
    ADD spark-3.5.0-bin-hadoop3.tgz /opt
    ADD zulu11.52.13-ca-jdk11.0.13-linux_x64.tar.gz /opt/java
    # Keep-alive script: after starting kyuubi it keeps the container running so the pod does not go to Completed
    ADD busycommand.sh /opt/rcommand.sh
    RUN ls /opt/java
    RUN ln -s /opt/apache-kyuubi-1.9.0-bin /opt/kyuubi
    RUN ln -s /opt/spark-3.5.0-bin-hadoop3 /opt/spark
    # Add the AWS jars needed to access MinIO
    COPY awsicelib/ /opt/spark/jars
    ENV JAVA_HOME /opt/java/zulu11.52.13-ca-jdk11.0.13-linux_x64
    ENV SPARK_HOME /opt/spark
    ENV KYUUBI_HOME /opt/kyuubi
    ENV PATH $PATH:$JAVA_HOME/bin:$SPARK_HOME/bin:$KYUUBI_HOME/bin
    ENV AWS_REGION=us-east-1
    EXPOSE 10009
    #RUN chmod -R 777 /opt
    ENTRYPOINT ["/opt/rcommand.sh"]

    My busycommand.sh: it checks every 10 seconds, and if the Kyuubi process has died it restarts it.

    #!/bin/bash
    # Keep-alive loop: start Kyuubi once, then every 10 seconds check whether the
    # java process is still alive and restart Kyuubi if it is not.
    i=1
    while ((i < 100000000))
    do
        if ((i < 2)); then
            $KYUUBI_HOME/bin/kyuubi start
            echo "$(date '+%Y-%m-%d %H:%M:%S') first started" >> /opt/kyuubi.log
        else
            PID=$(ps -e | grep java | awk '{print $1}' | head -n 1)
            if [ -n "$PID" ]; then
                echo "$(date '+%Y-%m-%d %H:%M:%S') process id:$PID" >> /opt/kyuubi.log
            else
                echo "$(date '+%Y-%m-%d %H:%M:%S') process kyuubi-server does not exist" >> /opt/kyuubi.log
                $KYUUBI_HOME/bin/kyuubi start
                echo "$(date '+%Y-%m-%d %H:%M:%S') process kyuubi-server restarted" >> /opt/kyuubi.log
            fi
            sleep 10s
        fi
        let i++
    done

    Build the image and tag it as 10.38.199.203:1443/fhc/kyuubi:v1.0.
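    A minimal sketch of the build and push, assuming the build context above and a reachable registry at 10.38.199.203:1443 (the chmod matters because the ENTRYPOINT runs /opt/rcommand.sh directly):

    # Make sure the keep-alive script carries the executable bit into the image
    chmod +x busycommand.sh

    # Build and push with the tag used throughout this post
    docker build -t 10.38.199.203:1443/fhc/kyuubi:v1.0 .
    docker push 10.38.199.203:1443/fhc/kyuubi:v1.0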

    Write the YAML manifests for deployment

    kyuubi-configMap.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kyuubi-config-cm
      #namespace: ingress-nginx
      labels:
        app: kyuubi
    data:
      kyuubi-env.sh: |-
        export JAVA_HOME=/opt/java/zulu11.52.13-ca-jdk11.0.13-linux_x64
        export SPARK_HOME=/opt/spark
        #export KYUUBI_JAVA_OPTS="-Xmx10g -XX:MaxMetaspaceSize=512m -XX:MaxDirectMemorySize=1024m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+UnlockDiagnosticVMOptions -XX:+UseCondCardMark -XX:+UseGCOverheadLimit -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./logs -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -verbose:gc -Xloggc:./logs/kyuubi-server-gc-%t.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=20M"
        #export KYUUBI_BEELINE_OPTS="-Xmx2g -XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+UseCondCardMark"
      kyuubi-defaults.conf: |-
        spark.master=k8s://https://10.38.199.201:443/k8s/clusters/c-m-l7gflsx7
        spark.home=/opt/spark
        spark.hadoop.fs.s3a.access.key=apPeWWr5KpXkzEW2jNKW
        spark.hadoop.fs.s3a.secret.key=cRt3inWAhDYtuzsDnKGLGg9EJSbJ083ekuW7PejM
        spark.hadoop.fs.s3a.endpoint=http://wuxdihadl01b.seagate.com:30009
        spark.hadoop.fs.s3a.path.style.access=true
        spark.hadoop.fs.s3a.aws.region=us-east-1
        spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
        spark.sql.catalog.default=spark_catalog
        spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkCatalog
        spark.sql.catalog.spark_catalog.type=rest
        #spark.sql.catalog.spark_catalog.catalog-impl=org.apache.iceberg.rest.RESTCatalog
        spark.sql.catalog.spark_catalog.uri=http://10.40.8.42:31000
        spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
        spark.sql.catalog.spark_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO
        spark.sql.catalog.spark_catalog.warehouse=s3a://wux-hoo-dev-01/ice_warehouse
        spark.sql.catalog.spark_catalog.s3.endpoint=http://wuxdihadl01b.seagate.com:30009
        spark.sql.catalog.spark_catalog.s3.path-style-access=true
        spark.sql.catalog.spark_catalog.s3.access-key-id=apPeWWr5KpXkzEW2jNKW
        spark.sql.catalog.spark_catalog.s3.secret-access-key=cRt3inWAhDYtuzsDnKGLGg9EJSbJ083ekuW7PejM
        spark.sql.catalog.spark_catalog.region=us-east-1
        spark.kubernetes.container.image=10.38.199.203:1443/fhc/spark350:v1.0
        spark.kubernetes.namespace=default
        spark.kubernetes.authenticate.driver.serviceAccountName=spark
        spark.ssl.kubernetes.enabled=false

    spark-configMap.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: spark-config-cm
      #namespace: ingress-nginx
      labels:
        app: kyuubi
    data:
      spark-env.sh: |-
      spark-defaults.conf: |-
        spark.master=k8s://https://10.38.199.201:443/k8s/clusters/c-m-l7gflsx7
        spark.home=/opt/spark
        spark.hadoop.fs.s3a.access.key=apPeWWr5KpXkzEW2jNKW
        spark.hadoop.fs.s3a.secret.key=cRt3inWAhDYtuzsDnKGLGg9EJSbJ083ekuW7PejM
        spark.hadoop.fs.s3a.endpoint=http://wuxdihadl01b.seagate.com:30009
        spark.hadoop.fs.s3a.path.style.access=true
        spark.hadoop.fs.s3a.aws.region=us-east-1
        spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
        spark.sql.catalog.default=spark_catalog
        spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkCatalog
        spark.sql.catalog.spark_catalog.type=rest
        #spark.sql.catalog.spark_catalog.catalog-impl=org.apache.iceberg.rest.RESTCatalog
        spark.sql.catalog.spark_catalog.uri=http://10.40.8.42:31000
        spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
        spark.sql.catalog.spark_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO
        spark.sql.catalog.spark_catalog.warehouse=s3a://wux-hoo-dev-01/ice_warehouse
        spark.sql.catalog.spark_catalog.s3.endpoint=http://wuxdihadl01b.seagate.com:30009
        spark.sql.catalog.spark_catalog.s3.path-style-access=true
        spark.sql.catalog.spark_catalog.s3.access-key-id=apPeWWr5KpXkzEW2jNKW
        spark.sql.catalog.spark_catalog.s3.secret-access-key=cRt3inWAhDYtuzsDnKGLGg9EJSbJ083ekuW7PejM
        spark.sql.catalog.spark_catalog.region=us-east-1
        spark.kubernetes.container.image=10.38.199.203:1443/fhc/spark350:v1.0
        spark.kubernetes.namespace=default
        spark.kubernetes.authenticate.driver.serviceAccountName=spark
        spark.ssl.kubernetes.enabled=false
      log4j2.properties: |-
        # Set everything to be logged to the console
        rootLogger.level = info
        rootLogger.appenderRef.stdout.ref = console
        appender.console.type = Console
        appender.console.name = console
        appender.console.target = SYSTEM_ERR
        appender.console.layout.type = PatternLayout
        appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex
        logger.repl.name = org.apache.spark.repl.Main
        logger.repl.level = warn
        logger.thriftserver.name = org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver
        logger.thriftserver.level = warn
        logger.jetty1.name = org.sparkproject.jetty
        logger.jetty1.level = warn
        logger.jetty2.name = org.sparkproject.jetty.util.component.AbstractLifeCycle
        logger.jetty2.level = error
        logger.replexprTyper.name = org.apache.spark.repl.SparkIMain$exprTyper
        logger.replexprTyper.level = info
        logger.replSparkILoopInterpreter.name = org.apache.spark.repl.SparkILoop$SparkILoopInterpreter
        logger.replSparkILoopInterpreter.level = info
        logger.parquet1.name = org.apache.parquet
        logger.parquet1.level = error
        logger.parquet2.name = parquet
        logger.parquet2.level = error
        logger.RetryingHMSHandler.name = org.apache.hadoop.hive.metastore.RetryingHMSHandler
        logger.RetryingHMSHandler.level = fatal
        logger.FunctionRegistry.name = org.apache.hadoop.hive.ql.exec.FunctionRegistry
        logger.FunctionRegistry.level = error
        appender.console.filter.1.type = RegexFilter
        appender.console.filter.1.regex = .*Thrift error occurred during processing of message.*
        appender.console.filter.1.onMatch = deny
        appender.console.filter.1.onMismatch = neutral

    kyuubi-dev.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: kyuubi
      #namespace: ingress-nginx
    spec:
      replicas: 1
      revisionHistoryLimit: 1
      selector:
        matchLabels:
          app: kyuubi
      template:
        metadata:
          labels:
            app: kyuubi
        spec:
          #nodeSelector:
          #  type: lab1
          #hostNetwork: true
          containers:
          - name: kyuubi
            imagePullPolicy: Always
            image: 10.38.199.203:1443/fhc/kyuubi:v1.0
            resources:
              requests:
                cpu: 1000m
                memory: 5000Mi
              limits:
                cpu: 2000m
                memory: 20240Mi
            ports:
            - containerPort: 10009
            env:
            - name: COORDINATOR_NODE
              value: "true"
            volumeMounts:
            - name: kyuubi-config-volume
              mountPath: /opt/kyuubi/conf
              readOnly: false
            - name: spark-config-volume
              mountPath: /opt/spark/conf
              readOnly: false
          volumes:
          - name: kyuubi-config-volume
            configMap:
              name: kyuubi-config-cm
          - name: spark-config-volume
            configMap:
              name: spark-config-cm
          - name: kyuubi-data-volume
            emptyDir: {}
          imagePullSecrets:
          - name: harbor
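    The Deployment references an image pull secret named harbor. If it does not exist yet in the target namespace, it can be created along these lines (the username and password are placeholders, not values from this post):

    # Hypothetical example: create the 'harbor' pull secret referenced in imagePullSecrets
    kubectl create secret docker-registry harbor \
      --docker-server=10.38.199.203:1443 \
      --docker-username=<your-harbor-user> \
      --docker-password=<your-harbor-password> \
      -n default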

    kyuubi-service.yaml

    kind: Service
    apiVersion: v1
    metadata:
      labels:
        app: kyuubi
      name: kyuubi-service
      #namespace: ingress-nginx
    spec:
      type: NodePort
      #sessionAffinity: ClientIP
      ports:
      - port: 10009
        targetPort: 10009
        name: http
        #protocol: TCP
        nodePort: 30009
      selector:
        app: kyuubi

    Apply the YAML manifests

    kubectl apply -f kyuubi-configMap.yaml
    kubectl apply -f spark-configMap.yaml
    kubectl apply -f kyuubi-dev.yaml
    kubectl apply -f kyuubi-service.yaml

    Check the k8s pods
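    For example (a sketch; the original post shows the pod list as a screenshot):

    # The kyuubi pod should be Running, and the Service should expose NodePort 30009
    kubectl get pods -o wide
    kubectl get svc kyuubi-service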

    Connect to Kyuubi via JDBC from a local machine and run SQL

    ./beeline -u 'jdbc:hive2://10.38.199.201:30009'
    1. [root@t001 bin]# ./beeline -u 'jdbc:hive2://10.38.199.201:30009'
    2. Connecting to jdbc:hive2://10.38.199.201:30009
    3. 2024-06-17 07:33:45.668 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.operation.LaunchEngine: Processing anonymous's query[c612b8d0-8bc7-4efa-acc8-e6c477fbda20]: PENDING_STATE -> RUNNING_STATE, statement:
    4. LaunchEngine
    5. 2024-06-17 07:33:45.673 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.shaded.curator.framework.imps.CuratorFrameworkImpl: Starting
    6. 2024-06-17 07:33:45.673 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.shaded.zookeeper.ZooKeeper: Initiating client connection, connectString=10.42.46.241:2181 sessionTimeout=60000 watcher=org.apache.kyuubi.shaded.curator.ConnectionState@59e92870
    7. 2024-06-17 07:33:45.676 INFO KyuubiSessionManager-exec-pool: Thread-54-SendThread(kyuubi-59b89b87b7-bjckz:2181) org.apache.kyuubi.shaded.zookeeper.ClientCnxn: Opening socket connection to server kyuubi-59b89b87b7-bjckz/10.42.46.241:2181. Will not attempt to authenticate using SASL (unknown error)
    8. 2024-06-17 07:33:45.678 INFO KyuubiSessionManager-exec-pool: Thread-54-SendThread(kyuubi-59b89b87b7-bjckz:2181) org.apache.kyuubi.shaded.zookeeper.ClientCnxn: Socket connection established to kyuubi-59b89b87b7-bjckz/10.42.46.241:2181, initiating session
    9. 2024-06-17 07:33:45.702 INFO KyuubiSessionManager-exec-pool: Thread-54-SendThread(kyuubi-59b89b87b7-bjckz:2181) org.apache.kyuubi.shaded.zookeeper.ClientCnxn: Session establishment complete on server kyuubi-59b89b87b7-bjckz/10.42.46.241:2181, sessionid = 0x101113150210001, negotiated timeout = 60000
    10. 2024-06-17 07:33:45.703 INFO KyuubiSessionManager-exec-pool: Thread-54-EventThread org.apache.kyuubi.shaded.curator.framework.state.ConnectionStateManager: State change: CONNECTED
    11. 2024-06-17 07:33:45.728 WARN KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.shaded.curator.utils.ZKPaths: The version of ZooKeeper being used doesn't support Container nodes. CreateMode.PERSISTENT will be used instead.
    12. 2024-06-17 07:33:45.829 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.engine.ProcBuilder: Creating anonymous's working directory at /opt/kyuubi/work/anonymous
    13. 2024-06-17 07:33:45.852 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.engine.ProcBuilder: Logging to /opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.0
    14. 2024-06-17 07:33:45.860 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.Utils: Loading Kyuubi properties from /opt/spark/conf/spark-defaults.conf
    15. 2024-06-17 07:33:45.869 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.engine.EngineRef: Launching engine:
    16. /opt/spark-3.5.0-bin-hadoop3/bin/spark-submit \
    17. --class org.apache.kyuubi.engine.spark.SparkSQLEngine \
    18. --conf spark.hive.server2.thrift.resultset.default.fetch.size=1000 \
    19. --conf spark.kyuubi.client.ipAddress=10.38.199.201 \
    20. --conf spark.kyuubi.client.version=1.9.0 \
    21. --conf spark.kyuubi.engine.engineLog.path=/opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.0 \
    22. --conf spark.kyuubi.engine.submit.time=1718609625821 \
    23. --conf spark.kyuubi.ha.addresses=10.42.46.241:2181 \
    24. --conf spark.kyuubi.ha.engine.ref.id=50850fdf-6a15-4c47-bc5f-d8816d255ef8 \
    25. --conf spark.kyuubi.ha.namespace=/kyuubi_1.9.0_USER_SPARK_SQL/anonymous/default \
    26. --conf spark.kyuubi.ha.zookeeper.auth.type=NONE \
    27. --conf spark.kyuubi.server.ipAddress=10.42.46.241 \
    28. --conf spark.kyuubi.session.connection.url=kyuubi-59b89b87b7-bjckz:10009 \
    29. --conf spark.kyuubi.session.real.user=anonymous \
    30. --conf spark.app.name=kyuubi_USER_SPARK_SQL_anonymous_default_50850fdf-6a15-4c47-bc5f-d8816d255ef8 \
    31. --conf spark.hadoop.fs.s3a.access.key=apPeWWr5KpXkzEW2jNKW \
    32. --conf spark.hadoop.fs.s3a.aws.region=us-east-1 \
    33. --conf spark.hadoop.fs.s3a.endpoint=http://wuxdihadl01b.seagate.com:30009 \
    34. --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
    35. --conf spark.hadoop.fs.s3a.path.style.access=true \
    36. --conf spark.hadoop.fs.s3a.secret.key=cRt3inWAhDYtuzsDnKGLGg9EJSbJ083ekuW7PejM \
    37. --conf spark.home=/opt/spark \
    38. --conf spark.kubernetes.authenticate.driver.oauthToken=kubeconfig-user-8g2wl8jd6v:t6fm8dphfzr8r2dhhz9f9m57nzcmbcjrmmx6txj4vpfvv799c4sj84 \
    39. --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
    40. --conf spark.kubernetes.container.image=10.38.199.203:1443/fhc/spark350:v1.0 \
    41. --conf spark.kubernetes.driver.label.kyuubi-unique-tag=50850fdf-6a15-4c47-bc5f-d8816d255ef8 \
    42. --conf spark.kubernetes.executor.podNamePrefix=kyuubi-user-spark-sql-anonymous-default-50850fdf-6a15-4c47-bc5f-d8816d255ef8 \
    43. --conf spark.kubernetes.namespace=default \
    44. --conf spark.master=k8s://https://10.38.199.201:443/k8s/clusters/c-m-l7gflsx7 \
    45. --conf spark.sql.catalog.default=spark_catalog \
    46. --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkCatalog \
    47. --conf spark.sql.catalog.spark_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO \
    48. --conf spark.sql.catalog.spark_catalog.region=us-east-1 \
    49. --conf spark.sql.catalog.spark_catalog.s3.access-key-id=apPeWWr5KpXkzEW2jNKW \
    50. --conf spark.sql.catalog.spark_catalog.s3.endpoint=http://wuxdihadl01b.seagate.com:30009 \
    51. --conf spark.sql.catalog.spark_catalog.s3.path-style-access=true \
    52. --conf spark.sql.catalog.spark_catalog.s3.secret-access-key=cRt3inWAhDYtuzsDnKGLGg9EJSbJ083ekuW7PejM \
    53. --conf spark.sql.catalog.spark_catalog.type=rest \
    54. --conf spark.sql.catalog.spark_catalog.uri=http://10.40.8.42:31000 \
    55. --conf spark.sql.catalog.spark_catalog.warehouse=s3a://wux-hoo-dev-01/ice_warehouse \
    56. --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
    57. --conf spark.ssl.kubernetes.enabled=false \
    58. --conf spark.kubernetes.driverEnv.SPARK_USER_NAME=anonymous \
    59. --conf spark.executorEnv.SPARK_USER_NAME=anonymous \
    60. --proxy-user anonymous /opt/apache-kyuubi-1.9.0-bin/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.9.0.jar
    61. 24/06/17 07:33:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    62. 24/06/17 07:33:49 INFO SignalRegister: Registering signal handler for TERM
    63. 24/06/17 07:33:49 INFO SignalRegister: Registering signal handler for HUP
    64. 24/06/17 07:33:49 INFO SignalRegister: Registering signal handler for INT
    65. 24/06/17 07:33:49 INFO HiveConf: Found configuration file null
    66. 24/06/17 07:33:49 INFO SparkContext: Running Spark version 3.5.0
    67. 24/06/17 07:33:49 INFO SparkContext: OS info Linux, 3.10.0-1160.114.2.el7.x86_64, amd64
    68. 24/06/17 07:33:49 INFO SparkContext: Java version 11.0.13
    69. 24/06/17 07:33:49 INFO ResourceUtils: ==============================================================
    70. 24/06/17 07:33:49 INFO ResourceUtils: No custom resources configured for spark.driver.
    71. 24/06/17 07:33:49 INFO ResourceUtils: ==============================================================
    72. 24/06/17 07:33:49 INFO SparkContext: Submitted application: kyuubi_USER_SPARK_SQL_anonymous_default_50850fdf-6a15-4c47-bc5f-d8816d255ef8
    73. 24/06/17 07:33:49 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
    74. 24/06/17 07:33:49 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
    75. 24/06/17 07:33:49 INFO ResourceProfileManager: Added ResourceProfile id: 0
    76. 24/06/17 07:33:49 INFO SecurityManager: Changing view acls to: root,anonymous
    77. 24/06/17 07:33:49 INFO SecurityManager: Changing modify acls to: root,anonymous
    78. 24/06/17 07:33:49 INFO SecurityManager: Changing view acls groups to:
    79. 24/06/17 07:33:49 INFO SecurityManager: Changing modify acls groups to:
    80. 24/06/17 07:33:49 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: root, anonymous; groups with view permissions: EMPTY; users with modify permissions: root, anonymous; groups with modify permissions: EMPTY
    81. 24/06/17 07:33:50 INFO Utils: Successfully started service 'sparkDriver' on port 38583.
    82. 24/06/17 07:33:50 INFO SparkEnv: Registering MapOutputTracker
    83. 24/06/17 07:33:50 INFO SparkEnv: Registering BlockManagerMaster
    84. 24/06/17 07:33:50 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
    85. 24/06/17 07:33:50 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
    86. 24/06/17 07:33:50 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
    87. 24/06/17 07:33:50 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-533c82c5-bfd4-40d1-b8f1-0a348b15a1cf
    88. 24/06/17 07:33:50 INFO MemoryStore: MemoryStore started with capacity 434.4 MiB
    89. 24/06/17 07:33:50 INFO SparkEnv: Registering OutputCommitCoordinator
    90. 24/06/17 07:33:50 INFO JettyUtils: Start Jetty 0.0.0.0:0 for SparkUI
    91. 24/06/17 07:33:50 INFO Utils: Successfully started service 'SparkUI' on port 42553.
    92. 24/06/17 07:33:50 INFO SparkContext: Added JAR file:/opt/apache-kyuubi-1.9.0-bin/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.9.0.jar at spark://10.42.46.241:38583/jars/kyuubi-spark-sql-engine_2.12-1.9.0.jar with timestamp 1718609629708
    93. 24/06/17 07:33:50 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
    94. 24/06/17 07:33:51 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 0, sharedSlotFromPendingPods: 2147483647.
    95. 24/06/17 07:33:52 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh,log4j2.properties
    96. 24/06/17 07:33:52 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
    97. 24/06/17 07:33:52 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh,log4j2.properties
    98. 24/06/17 07:33:52 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39580.
    99. 24/06/17 07:33:52 INFO NettyBlockTransferService: Server created on 10.42.46.241:39580
    100. 24/06/17 07:33:52 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
    101. 24/06/17 07:33:52 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.42.46.241, 39580, None)
    102. 24/06/17 07:33:52 INFO BlockManagerMasterEndpoint: Registering block manager 10.42.46.241:39580 with 434.4 MiB RAM, BlockManagerId(driver, 10.42.46.241, 39580, None)
    103. 24/06/17 07:33:52 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.42.46.241, 39580, None)
    104. 24/06/17 07:33:52 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.42.46.241, 39580, None)
    105. 24/06/17 07:33:52 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh,log4j2.properties
    106. 24/06/17 07:33:52 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
    107. 24/06/17 07:33:57 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: No executor found for 10.42.235.247:52602
    108. 24/06/17 07:33:57 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.42.235.247:52608) with ID 1, ResourceProfileId 0
    109. 24/06/17 07:33:57 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: No executor found for 10.42.46.243:51040
    110. 24/06/17 07:33:57 INFO BlockManagerMasterEndpoint: Registering block manager 10.42.235.247:46359 with 413.9 MiB RAM, BlockManagerId(1, 10.42.235.247, 46359, None)
    111. 24/06/17 07:33:58 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.42.46.243:51042) with ID 2, ResourceProfileId 0
    112. 24/06/17 07:33:58 INFO KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
    113. 24/06/17 07:33:58 INFO BlockManagerMasterEndpoint: Registering block manager 10.42.46.243:34050 with 413.9 MiB RAM, BlockManagerId(2, 10.42.46.243, 34050, None)
    114. 24/06/17 07:33:58 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir.
    115. 24/06/17 07:33:58 INFO SharedState: Warehouse path is 'file:/opt/apache-kyuubi-1.9.0-bin/work/anonymous/spark-warehouse'.
    116. 24/06/17 07:34:01 INFO CatalogUtil: Loading custom FileIO implementation: org.apache.iceberg.aws.s3.S3FileIO
    117. 24/06/17 07:34:02 INFO CodeGenerator: Code generated in 293.979616 ms
    118. 24/06/17 07:34:02 INFO CodeGenerator: Code generated in 10.25986 ms
    119. 24/06/17 07:34:03 INFO CodeGenerator: Code generated in 12.699915 ms
    120. 24/06/17 07:34:03 INFO SparkContext: Starting job: isEmpty at KyuubiSparkUtil.scala:49
    121. 24/06/17 07:34:03 INFO DAGScheduler: Got job 0 (isEmpty at KyuubiSparkUtil.scala:49) with 1 output partitions
    122. 24/06/17 07:34:03 INFO DAGScheduler: Final stage: ResultStage 0 (isEmpty at KyuubiSparkUtil.scala:49)
    123. 24/06/17 07:34:03 INFO DAGScheduler: Parents of final stage: List()
    124. 24/06/17 07:34:03 INFO DAGScheduler: Missing parents: List()
    125. 24/06/17 07:34:03 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at isEmpty at KyuubiSparkUtil.scala:49), which has no missing parents
    126. 24/06/17 07:34:03 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 7.0 KiB, free 434.4 MiB)
    127. 24/06/17 07:34:03 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 3.7 KiB, free 434.4 MiB)
    128. 24/06/17 07:34:03 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.42.46.241:39580 (size: 3.7 KiB, free: 434.4 MiB)
    129. 24/06/17 07:34:03 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1580
    130. 24/06/17 07:34:03 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at isEmpty at KyuubiSparkUtil.scala:49) (first 15 tasks are for partitions Vector(0))
    131. 24/06/17 07:34:03 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
    132. 24/06/17 07:34:03 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (10.42.46.243, executor 2, partition 0, PROCESS_LOCAL, 10848 bytes)
    133. 24/06/17 07:34:03 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.42.46.243:34050 (size: 3.7 KiB, free: 413.9 MiB)
    134. 24/06/17 07:34:04 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1115 ms on 10.42.46.243 (executor 2) (1/1)
    135. 24/06/17 07:34:04 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
    136. 24/06/17 07:34:04 INFO DAGScheduler: ResultStage 0 (isEmpty at KyuubiSparkUtil.scala:49) finished in 1.414 s
    137. 24/06/17 07:34:04 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
    138. 24/06/17 07:34:04 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
    139. 24/06/17 07:34:04 INFO DAGScheduler: Job 0 finished: isEmpty at KyuubiSparkUtil.scala:49, took 1.522348 s
    140. 24/06/17 07:34:04 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.42.46.241:39580 in memory (size: 3.7 KiB, free: 434.4 MiB)
    141. 24/06/17 07:34:04 INFO Utils: Loading Kyuubi properties from /opt/kyuubi/conf/kyuubi-defaults.conf
    142. 24/06/17 07:34:04 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.42.46.243:34050 in memory (size: 3.7 KiB, free: 413.9 MiB)
    143. 24/06/17 07:34:04 INFO ThreadUtils: SparkSQLSessionManager-exec-pool: pool size: 100, wait queue size: 100, thread keepalive time: 60000 ms
    144. 24/06/17 07:34:04 INFO SparkSQLOperationManager: Service[SparkSQLOperationManager] is initialized.
    145. 24/06/17 07:34:04 INFO SparkSQLSessionManager: Service[SparkSQLSessionManager] is initialized.
    146. 24/06/17 07:34:04 INFO SparkSQLBackendService: Service[SparkSQLBackendService] is initialized.
    147. 24/06/17 07:34:05 INFO SparkTBinaryFrontendService: Initializing SparkTBinaryFrontend on kyuubi-59b89b87b7-bjckz:42178 with [9, 999] worker threads
    148. 24/06/17 07:34:05 INFO CuratorFrameworkImpl: Starting
    149. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
    150. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:host.name=kyuubi-59b89b87b7-bjckz
    151. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.version=11.0.13
    152. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.vendor=Azul Systems, Inc.
    153. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.home=/opt/java/zulu11.52.13-ca-jdk11.0.13-linux_x64
    154. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.class.path=/opt/spark/conf/:/opt/spark/jars/HikariCP-2.5.1.jar:/opt/spark/jars/JLargeArrays-1.5.jar:/opt/spark/jars/JTransforms-3.1.jar:/opt/spark/jars/RoaringBitmap-0.9.45.jar:/opt/spark/jars/ST4-4.0.4.jar:/opt/spark/jars/activation-1.1.1.jar:/opt/spark/jars/aircompressor-0.25.jar:/opt/spark/jars/algebra_2.12-2.0.1.jar:/opt/spark/jars/annotations-17.0.0.jar:/opt/spark/jars/antlr-runtime-3.5.2.jar:/opt/spark/jars/antlr4-runtime-4.9.3.jar:/opt/spark/jars/aopalliance-repackaged-2.6.1.jar:/opt/spark/jars/arpack-3.0.3.jar:/opt/spark/jars/arpack_combined_all-0.1.jar:/opt/spark/jars/arrow-format-12.0.1.jar:/opt/spark/jars/arrow-memory-core-12.0.1.jar:/opt/spark/jars/arrow-memory-netty-12.0.1.jar:/opt/spark/jars/arrow-vector-12.0.1.jar:/opt/spark/jars/audience-annotations-0.5.0.jar:/opt/spark/jars/avro-1.11.2.jar:/opt/spark/jars/avro-ipc-1.11.2.jar:/opt/spark/jars/avro-mapred-1.11.2.jar:/opt/spark/jars/blas-3.0.3.jar:/opt/spark/jars/bonecp-0.8.0.RELEASE.jar:/opt/spark/jars/breeze-macros_2.12-2.1.0.jar:/opt/spark/jars/breeze_2.12-2.1.0.jar:/opt/spark/jars/cats-kernel_2.12-2.1.1.jar:/opt/spark/jars/chill-java-0.10.0.jar:/opt/spark/jars/chill_2.12-0.10.0.jar:/opt/spark/jars/commons-cli-1.5.0.jar:/opt/spark/jars/commons-codec-1.16.0.jar:/opt/spark/jars/commons-collections-3.2.2.jar:/opt/spark/jars/commons-collections4-4.4.jar:/opt/spark/jars/commons-compiler-3.1.9.jar:/opt/spark/jars/commons-compress-1.23.0.jar:/opt/spark/jars/commons-crypto-1.1.0.jar:/opt/spark/jars/commons-dbcp-1.4.jar:/opt/spark/jars/commons-io-2.13.0.jar:/opt/spark/jars/commons-lang-2.6.jar:/opt/spark/jars/commons-lang3-3.12.0.jar:/opt/spark/jars/commons-logging-1.1.3.jar:/opt/spark/jars/commons-math3-3.6.1.jar:/opt/spark/jars/commons-pool-1.5.4.jar:/opt/spark/jars/commons-text-1.10.0.jar:/opt/spark/jars/compress-lzf-1.1.2.jar:/opt/spark/jars/curator-client-2.13.0.jar:/opt/spark/jars/curator-framework-2.13.0.jar:/opt/spark/jars/curator-recipes-2.13.0.jar:/opt/spark/jars/datanucleus-api-jdo-4.2.4.jar:/opt/spark/jars/datanucleus-core-4.1.17.jar:/opt/spark/jars/datanucleus-rdbms-4.1.19.jar:/opt/spark/jars/datasketches-java-3.3.0.jar:/opt/spark/jars/datasketches-memory-2.1.0.jar:/opt/spark/jars/derby-10.14.2.0.jar:/opt/spark/jars/dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:/opt/spark/jars/flatbuffers-java-1.12.0.jar:/opt/spark/jars/gson-2.2.4.jar:/opt/spark/jars/guava-14.0.1.jar:/opt/spark/jars/hadoop-client-api-3.3.4.jar:/opt/spark/jars/hadoop-client-runtime-3.3.4.jar:/opt/spark/jars/hadoop-shaded-guava-1.1.1.jar:/opt/spark/jars/hadoop-yarn-server-web-proxy-3.3.4.jar:/opt/spark/jars/hive-beeline-2.3.9.jar:/opt/spark/jars/hive-cli-2.3.9.jar:/opt/spark/jars/hive-common-2.3.9.jar:/opt/spark/jars/hive-exec-2.3.9-core.jar:/opt/spark/jars/hive-jdbc-2.3.9.jar:/opt/spark/jars/hive-llap-common-2.3.9.jar:/opt/spark/jars/hive-metastore-2.3.9.jar:/opt/spark/jars/hive-serde-2.3.9.jar:/opt/spark/jars/hive-service-rpc-3.1.3.jar:/opt/spark/jars/hive-shims-0.23-2.3.9.jar:/opt/spark/jars/hive-shims-2.3.9.jar:/opt/spark/jars/hive-shims-common-2.3.9.jar:/opt/spark/jars/hive-shims-scheduler-2.3.9.jar:/opt/spark/jars/hive-storage-api-2.8.1.jar:/opt/spark/jars/hk2-api-2.6.1.jar:/opt/spark/jars/hk2-locator-2.6.1.jar:/opt/spark/jars/hk2-utils-2.6.1.jar:/opt/spark/jars/httpclient-4.5.14.jar:/opt/spark/jars/httpcore-4.4.16.jar:/opt/spark/jars/istack-commons-runtime-3.0.8.jar:/opt/spark/jars/ivy-2.5.1.jar:/opt/spark/jars/jackson-annotations-2.15.2.jar:/opt/spark/jars/jackson-core-2.15.
2.jar:/opt/spark/jars/jackson-core-asl-1.9.13.jar:/opt/spark/jars/jackson-databind-2.15.2.jar:/opt/spark/jars/jackson-dataformat-yaml-2.15.2.jar:/opt/spark/jars/jackson-datatype-jsr310-2.15.2.jar:/opt/spark/jars/jackson-mapper-asl-1.9.13.jar:/opt/spark/jars/jackson-module-scala_2.12-2.15.2.jar:/opt/spark/jars/jakarta.annotation-api-1.3.5.jar:/opt/spark/jars/jakarta.inject-2.6.1.jar:/opt/spark/jars/jakarta.servlet-api-4.0.3.jar:/opt/spark/jars/jakarta.validation-api-2.0.2.jar:/opt/spark/jars/jakarta.ws.rs-api-2.1.6.jar:/opt/spark/jars/jakarta.xml.bind-api-2.3.2.jar:/opt/spark/jars/janino-3.1.9.jar:/opt/spark/jars/javassist-3.29.2-GA.jar:/opt/spark/jars/jpam-1.1.jar:/opt/spark/jars/javax.jdo-3.2.0-m3.jar:/opt/spark/jars/javolution-5.5.1.jar:/opt/spark/jars/jaxb-runtime-2.3.2.jar:/opt/spark/jars/jcl-over-slf4j-2.0.7.jar:/opt/spark/jars/jdo-api-3.0.1.jar:/opt/spark/jars/jersey-client-2.40.jar:/opt/spark/jars/jersey-common-2.40.jar:/opt/spark/jars/jersey-container-servlet-2.40.jar:/opt/spark/jars/jersey-container-servlet-core-2.40.jar:/opt/spark/jars/jersey-hk2-2.40.jar:/opt/spark/jars/jersey-server-2.40.jar:/opt/spark/jars/jline-2.14.6.jar:/opt/spark/jars/joda-time-2.12.5.jar:/opt/spark/jars/jodd-core-3.5.2.jar:/opt/spark/jars/json-1.8.jar:/opt/spark/jars/json4s-ast_2.12-3.7.0-M11.jar:/opt/spark/jars/json4s-core_2.12-3.7.0-M11.jar:/opt/spark/jars/json4s-jackson_2.12-3.7.0-M11.jar:/opt/spark/jars/json4s-scalap_2.12-3.7.0-M11.jar:/opt/spark/jars/jsr305-3.0.0.jar:/opt/spark/jars/jta-1.1.jar:/opt/spark/jars/jul-to-slf4j-2.0.7.jar:/opt/spark/jars/kryo-shaded-4.0.2.jar:/opt/spark/jars/kubernetes-client-6.7.2.jar:/opt/spark/jars/kubernetes-client-api-6.7.2.jar:/opt/spark/jars/kubernetes-httpclient-okhttp-6.7.2.jar:/opt/spark/jars/kubernetes-model-admissionregistration-6.7.2.jar:/opt/spark/jars/kubernetes-model-apiextensions-6.7.2.jar:/opt/spark/jars/kubernetes-model-apps-6.7.2.jar:/opt/spark/jars/kubernetes-model-autoscaling-6.7.2.jar:/opt/spark/jars/kubernetes-model-batch-6.7.2.jar:/opt/spark/jars/kubernetes-model-certificates-6.7.2.jar:/opt/spark/jars/kubernetes-model-common-6.7.2.jar:/opt/spark/jars/kubernetes-model-coordination-6.7.2.jar:/opt/spark/jars/kubernetes-model-core-6.7.2.jar:/opt/spark/jars/kubernetes-model-discovery-6.7.2.jar:/opt/spark/jars/kubernetes-model-events-6.7.2.jar:/opt/spark/jars/kubernetes-model-extensions-6.7.2.jar:/opt/spark/jars/kubernetes-model-flowcontrol-6.7.2.jar:/opt/spark/jars/kubernetes-model-gatewayapi-6.7.2.jar:/opt/spark/jars/kubernetes-model-metrics-6.7.2.jar:/opt/spark/jars/kubernetes-model-networking-6.7.2.jar:/opt/spark/jars/kubernetes-model-node-6.7.2.jar:/opt/spark/jars/kubernetes-model-policy-6.7.2.jar:/opt/spark/jars/kubernetes-model-rbac-6.7.2.jar:/opt/spark/jars/kubernetes-model-resource-6.7.2.jar:/opt/spark/jars/kubernetes-model-scheduling-6.7.2.jar:/opt/spark/jars/kubernetes-model-storageclass-6.7.2.jar:/opt/spark/jars/lapack-3.0.3.jar:/opt/spark/jars/leveldbjni-all-1.8.jar:/opt/spark/jars/libfb303-0.9.3.jar:/opt/spark/jars/libthrift-0.12.0.jar:/opt/spark/jars/log4j-1.2-api-2.20.0.jar:/opt/spark/jars/log4j-api-2.20.0.jar:/opt/spark/jars/log4j-core-2.20.0.jar:/opt/spark/jars/log4j-slf4j2-impl-2.20.0.jar:/opt/spark/jars/logging-interceptor-3.12.12.jar:/opt/spark/jars/lz4-java-1.8.0.jar:/opt/spark/jars/mesos-1.4.3-shaded-protobuf.jar:/opt/spark/jars/metrics-core-4.2.19.jar:/opt/spark/jars/metrics-graphite-4.2.19.jar:/opt/spark/jars/metrics-jmx-4.2.19.jar:/opt/spark/jars/metrics-json-4.2.19.jar:/opt/spark/jars/metrics-jvm-4.2.19.jar:/opt/spark/jars/minlo
g-1.3.0.jar:/opt/spark/jars/netty-all-4.1.96.Final.jar:/opt/spark/jars/netty-buffer-4.1.96.Final.jar:/opt/spark/jars/netty-codec-4.1.96.Final.jar:/opt/spark/jars/netty-codec-http-4.1.96.Final.jar:/opt/spark/jars/netty-codec-http2-4.1.96.Final.jar:/opt/spark/jars/netty-codec-socks-4.1.96.Final.jar:/opt/spark/jars/netty-common-4.1.96.Final.jar:/opt/spark/jars/netty-handler-4.1.96.Final.jar:/opt/spark/jars/netty-handler-proxy-4.1.96.Final.jar:/opt/spark/jars/netty-resolver-4.1.96.Final.jar:/opt/spark/jars/netty-transport-4.1.96.Final.jar:/opt/spark/jars/netty-transport-classes-epoll-4.1.96.Final.jar:/opt/spark/jars/netty-transport-classes-kqueue-4.1.96.Final.jar:/opt/spark/jars/netty-transport-native-epoll-4.1.96.Final-linux-aarch_64.jar:/opt/spark/jars/netty-transport-native-epoll-4.1.96.Final-linux-x86_64.jar:/opt/spark/jars/netty-transport-native-kqueue-4.1.96.Final-osx-aarch_64.jar:/opt/spark/jars/netty-transport-native-kqueue-4.1.96.Final-osx-x86_64.jar:/opt/spark/jars/netty-transport-native-unix-common-4.1.96.Final.jar:/opt/spark/jars/objenesis-3.3.jar:/opt/spark/jars/okhttp-3.12.12.jar:/opt/spark/jars/okio-1.15.0.jar:/opt/spark/jars/opencsv-2.3.jar:/opt/spark/jars/orc-core-1.9.1-shaded-protobuf.jar:/opt/spark/jars/orc-shims-1.9.1.jar:/opt/spark/jars/orc-mapreduce-1.9.1-shaded-protobuf.jar:/opt/spark/jars/oro-2.0.8.jar:/opt/spark/jars/osgi-resource-locator-1.0.3.jar:/opt/spark/jars/paranamer-2.8.jar:/opt/spark/jars/parquet-column-1.13.1.jar:/opt/spark/jars/parquet-common-1.13.1.jar:/opt/spark/jars/parquet-encoding-1.13.1.jar:/opt/spark/jars/parquet-format-structures-1.13.1.jar:/opt/spark/jars/parquet-hadoop-1.13.1.jar:/opt/spark/jars/parquet-jackson-1.13.1.jar:/opt/spark/jars/pickle-1.3.jar:/opt/spark/jars/py4j-0.10.9.7.jar:/opt/spark/jars/rocksdbjni-8.3.2.jar:/opt/spark/jars/scala-collection-compat_2.12-2.7.0.jar:/opt/spark/jars/scala-compiler-2.12.18.jar:/opt/spark/jars/scala-library-2.12.18.jar:/opt/spark/jars/scala-parser-combinators_2.12-2.3.0.jar:/opt/spark/jars/scala-reflect-2.12.18.jar:/opt/spark/jars/scala-xml_2.12-2.1.0.jar:/opt/spark/jars/shims-0.9.45.jar:/opt/spark/jars/slf4j-api-2.0.7.jar:/opt/spark/jars/snakeyaml-2.0.jar:/opt/spark/jars/snakeyaml-engine-2.6.jar:/opt/spark/jars/snappy-java-1.1.10.3.jar:/opt/spark/jars/spark-catalyst_2.12-3.5.0.jar:/opt/spark/jars/spark-common-utils_2.12-3.5.0.jar:/opt/spark/jars/spark-core_2.12-3.5.0.jar:/opt/spark/jars/spark-graphx_2.12-3.5.0.jar:/opt/spark/jars/spark-hive-thriftserver_2.12-3.5.0.jar:/opt/spark/jars/spark-hive_2.12-3.5.0.jar:/opt/spark/jars/spark-kubernetes_2.12-3.5.0.jar:/opt/spark/jars/spark-kvstore_2.12-3.5.0.jar:/opt/spark/jars/spark-launcher_2.12-3.5.0.jar:/opt/spark/jars/spark-mesos_2.12-3.5.0.jar:/opt/spark/jars/spark-mllib-local_2.12-3.5.0.jar:/opt/spark/jars/spark-mllib_2.12-3.5.0.jar:/opt/spark/jars/spark-network-common_2.12-3.5.0.jar:/opt/spark/jars/spark-network-shuffle_2.12-3.5.0.jar:/opt/spark/jars/spark-repl_2.12-3.5.0.jar:/opt/spark/jars/spark-sketch_2.12-3.5.0.jar:/opt/spark/jars/spark-sql-api_2.12-3.5.0.jar:/opt/spark/jars/spark-sql_2.12-3.5.0.jar:/opt/spark/jars/spark-streaming_2.12-3.5.0.jar:/opt/spark/jars/spark-tags_2.12-3.5.0.jar:/opt/spark/jars/spark-unsafe_2.12-3.5.0.jar:/opt/spark/jars/spark-yarn_2.12-3.5.0.jar:/opt/spark/jars/spire-macros_2.12-0.17.0.jar:/opt/spark/jars/spire-platform_2.12-0.17.0.jar:/opt/spark/jars/spire-util_2.12-0.17.0.jar:/opt/spark/jars/spire_2.12-0.17.0.jar:/opt/spark/jars/stax-api-1.0.1.jar:/opt/spark/jars/stream-2.9.6.jar:/opt/spark/jars/super-csv-2.2.0.jar:/opt/spark/jars
/threeten-extra-1.7.1.jar:/opt/spark/jars/tink-1.9.0.jar:/opt/spark/jars/transaction-api-1.1.jar:/opt/spark/jars/univocity-parsers-2.9.1.jar:/opt/spark/jars/xbean-asm9-shaded-4.23.jar:/opt/spark/jars/xz-1.9.jar:/opt/spark/jars/zjsonpatch-0.3.0.jar:/opt/spark/jars/zookeeper-3.6.3.jar:/opt/spark/jars/zookeeper-jute-3.6.3.jar:/opt/spark/jars/zstd-jni-1.5.5-4.jar:/opt/spark/jars/apache-client-2.25.65.jar:/opt/spark/jars/auth-2.25.65.jar:/opt/spark/jars/aws-core-2.25.65.jar:/opt/spark/jars/aws-java-sdk-bundle-1.12.262.jar:/opt/spark/jars/aws-json-protocol-2.25.65.jar:/opt/spark/jars/aws-query-protocol-2.25.65.jar:/opt/spark/jars/aws-xml-protocol-2.25.65.jar:/opt/spark/jars/checksums-2.25.65.jar:/opt/spark/jars/checksums-spi-2.25.65.jar:/opt/spark/jars/dynamodb-2.25.65.jar:/opt/spark/jars/endpoints-spi-2.25.65.jar:/opt/spark/jars/glue-2.25.65.jar:/opt/spark/jars/hadoop-aws-3.3.4.jar:/opt/spark/jars/http-auth-2.25.65.jar:/opt/spark/jars/http-auth-aws-2.25.65.jar:/opt/spark/jars/http-auth-spi-2.25.65.jar:/opt/spark/jars/http-client-spi-2.25.65.jar:/opt/spark/jars/iceberg-spark-runtime-3.5_2.12-1.5.0.jar:/opt/spark/jars/identity-spi-2.25.65.jar:/opt/spark/jars/json-utils-2.25.65.jar:/opt/spark/jars/kms-2.25.65.jar:/opt/spark/jars/metrics-spi-2.25.65.jar:/opt/spark/jars/profiles-2.25.65.jar:/opt/spark/jars/protocol-core-2.25.65.jar:/opt/spark/jars/reactive-streams-1.0.4.jar:/opt/spark/jars/regions-2.25.65.jar:/opt/spark/jars/s3-2.25.65.jar:/opt/spark/jars/sdk-core-2.25.65.jar:/opt/spark/jars/sts-2.25.65.jar:/opt/spark/jars/third-party-jackson-core-2.25.65.jar:/opt/spark/jars/utils-2.25.65.jar
    155. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
    156. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.io.tmpdir=/tmp
    157. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.compiler=
    158. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:os.name=Linux
    159. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:os.arch=amd64
    160. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:os.version=3.10.0-1160.114.2.el7.x86_64
    161. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:user.name=root
    162. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:user.home=/root
    163. 24/06/17 07:34:05 INFO ZooKeeper: Client environment:user.dir=/opt/apache-kyuubi-1.9.0-bin/work/anonymous
    164. 24/06/17 07:34:05 INFO ZooKeeper: Initiating client connection, connectString=10.42.46.241:2181 sessionTimeout=60000 watcher=org.apache.kyuubi.shaded.curator.ConnectionState@13482361
    165. 24/06/17 07:34:05 INFO EngineServiceDiscovery: Service[EngineServiceDiscovery] is initialized.
    166. 24/06/17 07:34:05 INFO SparkTBinaryFrontendService: Service[SparkTBinaryFrontend] is initialized.
    167. 24/06/17 07:34:05 INFO SparkSQLEngine: Service[SparkSQLEngine] is initialized.
    168. 24/06/17 07:34:05 INFO ClientCnxn: Opening socket connection to server kyuubi-59b89b87b7-bjckz/10.42.46.241:2181. Will not attempt to authenticate using SASL (unknown error)
    169. 24/06/17 07:34:05 INFO ClientCnxn: Socket connection established to kyuubi-59b89b87b7-bjckz/10.42.46.241:2181, initiating session
    170. 24/06/17 07:34:05 INFO SparkSQLOperationManager: Service[SparkSQLOperationManager] is started.
    171. 24/06/17 07:34:05 INFO SparkSQLSessionManager: Service[SparkSQLSessionManager] is started.
    172. 24/06/17 07:34:05 INFO SparkSQLBackendService: Service[SparkSQLBackendService] is started.
    173. 24/06/17 07:34:05 INFO ClientCnxn: Session establishment complete on server kyuubi-59b89b87b7-bjckz/10.42.46.241:2181, sessionid = 0x101113150210002, negotiated timeout = 60000
    174. 24/06/17 07:34:05 INFO ConnectionStateManager: State change: CONNECTED
    175. 24/06/17 07:34:05 INFO ZookeeperDiscoveryClient: Zookeeper client connection state changed to: CONNECTED
    176. 24/06/17 07:34:05 INFO ZookeeperDiscoveryClient: Created a /kyuubi_1.9.0_USER_SPARK_SQL/anonymous/default/serverUri=10.42.46.241:42178;version=1.9.0;spark.driver.memory=;spark.executor.memory=;kyuubi.engine.id=spark-a13ae55c985c44168bd3fa6476c1ac1f;kyuubi.engine.url=10.42.46.241:42553;refId=50850fdf-6a15-4c47-bc5f-d8816d255ef8;sequence=0000000000 on ZooKeeper for KyuubiServer uri: 10.42.46.241:42178
    177. 24/06/17 07:34:05 INFO EngineServiceDiscovery: Registered EngineServiceDiscovery in namespace /kyuubi_1.9.0_USER_SPARK_SQL/anonymous/default.
    178. 24/06/17 07:34:05 INFO EngineServiceDiscovery: Service[EngineServiceDiscovery] is started.
    179. 24/06/17 07:34:05 INFO SparkTBinaryFrontendService: Service[SparkTBinaryFrontend] is started.
    180. 24/06/17 07:34:05 INFO SparkSQLEngine: Service[SparkSQLEngine] is started.
    181. 24/06/17 07:34:05 INFO SparkSQLEngine:
    182. Spark application name: kyuubi_USER_SPARK_SQL_anonymous_default_50850fdf-6a15-4c47-bc5f-d8816d255ef8
    183. application ID: spark-a13ae55c985c44168bd3fa6476c1ac1f
    184. application tags:
    185. application web UI: http://10.42.46.241:42553
    186. master: k8s://https://10.38.199.201:443/k8s/clusters/c-m-l7gflsx7
    187. version: 3.5.0
    188. driver: [cpu: 1, mem: 1g]
    189. executor: [cpu: 2, mem: 1g, maxNum: 2]
    190. Start time: Mon Jun 17 07:33:49 UTC 2024
    191. User: anonymous (shared mode: USER)
    192. State: STARTED
    193. 2024-06-17 07:34:05.979 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient: Get service instance:10.42.46.241:42178 engine id:spark-a13ae55c985c44168bd3fa6476c1ac1f and version:1.9.0 under /kyuubi_1.9.0_USER_SPARK_SQL/anonymous/default
    194. 24/06/17 07:34:06 INFO SparkTBinaryFrontendService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V10
    195. 24/06/17 07:34:06 INFO SparkSQLSessionManager: Opening session for anonymous@10.42.46.241
    196. 24/06/17 07:34:06 INFO CatalogUtil: Loading custom FileIO implementation: org.apache.iceberg.aws.s3.S3FileIO
    197. 24/06/17 07:34:06 INFO HiveUtils: Initializing HiveMetastoreConnection version 2.3.9 using Spark classes.
    198. 24/06/17 07:34:06 INFO HiveClientImpl: Warehouse location for Hive client (version 2.3.9) is file:/opt/apache-kyuubi-1.9.0-bin/work/anonymous/spark-warehouse
    199. 24/06/17 07:34:06 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
    200. 24/06/17 07:34:06 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
    201. 24/06/17 07:34:06 INFO HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
    202. 24/06/17 07:34:06 INFO ObjectStore: ObjectStore, initialize called
    203. 24/06/17 07:34:07 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
    204. 24/06/17 07:34:07 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
    205. 24/06/17 07:34:12 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
    206. 24/06/17 07:34:18 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
    207. 24/06/17 07:34:18 INFO ObjectStore: Initialized ObjectStore
    208. 24/06/17 07:34:19 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
    209. 24/06/17 07:34:19 WARN ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore UNKNOWN@10.42.46.241
    210. 24/06/17 07:34:19 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
    211. 24/06/17 07:34:19 INFO HiveMetaStore: Added admin role in metastore
    212. 24/06/17 07:34:19 INFO HiveMetaStore: Added public role in metastore
    213. 24/06/17 07:34:20 INFO HiveMetaStore: No user is added in admin role, since config is empty
    214. 24/06/17 07:34:20 INFO HiveMetaStore: 0: get_database: default
    215. 24/06/17 07:34:20 INFO audit: ugi=anonymous ip=unknown-ip-addr cmd=get_database: default
    216. 24/06/17 07:34:20 INFO HiveMetaStore: 0: get_database: global_temp
    217. 24/06/17 07:34:20 INFO audit: ugi=anonymous ip=unknown-ip-addr cmd=get_database: global_temp
    218. 24/06/17 07:34:20 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
    219. 24/06/17 07:34:20 INFO HiveMetaStore: 0: get_database: default
    220. 24/06/17 07:34:20 INFO audit: ugi=anonymous ip=unknown-ip-addr cmd=get_database: default
    221. 2024-06-17 07:34:21.585 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.session.KyuubiSessionImpl: [anonymous:10.38.199.201] SessionHandle [50850fdf-6a15-4c47-bc5f-d8816d255ef8] - Connected to engine [10.42.46.241:42178]/[spark-a13ae55c985c44168bd3fa6476c1ac1f] with SessionHandle [50850fdf-6a15-4c47-bc5f-d8816d255ef8]]
    222. 2024-06-17 07:34:21.588 INFO Curator-Framework-0 org.apache.kyuubi.shaded.curator.framework.imps.CuratorFrameworkImpl: backgroundOperationsLoop exiting
    223. 2024-06-17 07:34:21.607 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.shaded.zookeeper.ZooKeeper: Session: 0x101113150210001 closed
    224. 2024-06-17 07:34:21.607 INFO KyuubiSessionManager-exec-pool: Thread-54-EventThread org.apache.kyuubi.shaded.zookeeper.ClientCnxn: EventThread shut down for session: 0x101113150210001
    225. 2024-06-17 07:34:21.613 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.operation.LaunchEngine: Processing anonymous's query[c612b8d0-8bc7-4efa-acc8-e6c477fbda20]: RUNNING_STATE -> FINISHED_STATE, time taken: 35.942 seconds
    226. 24/06/17 07:34:21 INFO SparkSQLSessionManager: anonymous's SparkSessionImpl with SessionHandle [50850fdf-6a15-4c47-bc5f-d8816d255ef8] is opened, current opening sessions 1
    227. Connected to: Spark SQL (version 3.5.0)
    228. Driver: Kyuubi Project Hive JDBC Client (version 1.9.0)
    229. Beeline version 1.9.0 by Apache Kyuubi
    230. 0: jdbc:hive2://10.38.199.201:30009>

    Log of executing a SQL statement

    1. 0: jdbc:hive2://10.38.199.201:30009> select * from p530_cimarronbp.attr_vals limit 10;
    2. 2024-06-17 06:31:19.099 INFO KyuubiSessionManager-exec-pool: Thread-63 org.apache.kyuubi.operation.ExecuteStatement: Processing anonymous's query[65d78d0c-0e89-4f2f-a41c-7a584520bc81]: PENDING_STATE -> RUNNING_STATE, statement:
    3. select * from p530_cimarronbp.attr_vals limit 10
    4. 24/06/17 06:31:19 INFO ExecuteStatement: Processing anonymous's query[65d78d0c-0e89-4f2f-a41c-7a584520bc81]: PENDING_STATE -> RUNNING_STATE, statement:
    5. select * from p530_cimarronbp.attr_vals limit 10
    6. 24/06/17 06:31:19 INFO ExecuteStatement:
    7. Spark application name: kyuubi_USER_SPARK_SQL_anonymous_default_6474780a-72b2-439a-82f0-79131d583346
    8. application ID: spark-6b4ff4ba67df455d94af062c51825801
    9. application web UI: http://10.42.235.246:45076
    10. master: k8s://https://10.38.199.201:443/k8s/clusters/c-m-l7gflsx7
    11. deploy mode: client
    12. version: 3.5.0
    13. Start time: 2024-06-17T06:30:04.865
    14. User: anonymous
    15. 24/06/17 06:31:19 INFO ExecuteStatement: Execute in full collect mode
    16. 24/06/17 06:31:19 INFO V2ScanRelationPushDown:
    17. Output: serial_num#96, trans_seq#97, attr_name#98, pre_attr_value#99, post_attr_value#100, in_run_file#101, event_date#102, family#103, operation#104
    18. 24/06/17 06:31:19 INFO SnapshotScan: Scanning table spark_catalog.p530_cimarronbp.attr_vals snapshot 4278818967903043345 created at 2024-06-15T05:35:09.281+00:00 with filter true
    19. 24/06/17 06:31:19 INFO BaseDistributedDataScan: Planning file tasks locally for table spark_catalog.p530_cimarronbp.attr_vals
    20. 24/06/17 06:31:23 INFO LoggingMetricsReporter: Received metrics report: ScanReport{tableName=spark_catalog.p530_cimarronbp.attr_vals, snapshotId=4278818967903043345, filter=true, schemaId=0, projectedFieldIds=[1, 2, 3, 4, 5, 6, 7, 8, 9], projectedFieldNames=[serial_num, trans_seq, attr_name, pre_attr_value, post_attr_value, in_run_file, event_date, family, operation], scanMetrics=ScanMetricsResult{totalPlanningDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT4.375705985S, count=1}, resultDataFiles=CounterResult{unit=COUNT, value=53367}, resultDeleteFiles=CounterResult{unit=COUNT, value=0}, totalDataManifests=CounterResult{unit=COUNT, value=97}, totalDeleteManifests=CounterResult{unit=COUNT, value=0}, scannedDataManifests=CounterResult{unit=COUNT, value=97}, skippedDataManifests=CounterResult{unit=COUNT, value=0}, totalFileSizeInBytes=CounterResult{unit=BYTES, value=2867530435}, totalDeleteFileSizeInBytes=CounterResult{unit=BYTES, value=0}, skippedDataFiles=CounterResult{unit=COUNT, value=0}, skippedDeleteFiles=CounterResult{unit=COUNT, value=0}, scannedDeleteManifests=CounterResult{unit=COUNT, value=0}, skippedDeleteManifests=CounterResult{unit=COUNT, value=0}, indexedDeleteFiles=CounterResult{unit=COUNT, value=0}, equalityDeleteFiles=CounterResult{unit=COUNT, value=0}, positionalDeleteFiles=CounterResult{unit=COUNT, value=0}}, metadata={engine-version=3.5.0, iceberg-version=Apache Iceberg 1.5.0 (commit 2519ab43d654927802cc02e19c917ce90e8e0265), app-id=spark-6b4ff4ba67df455d94af062c51825801, engine-name=spark}}
    21. 24/06/17 06:31:23 INFO SparkPartitioningAwareScan: Reporting UnknownPartitioning with 13356 partition(s) for table spark_catalog.p530_cimarronbp.attr_vals
    22. 24/06/17 06:31:23 INFO MemoryStore: Block broadcast_7 stored as values in memory (estimated size 32.0 KiB, free 434.4 MiB)
    23. 24/06/17 06:31:23 INFO MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 3.8 KiB, free 434.4 MiB)
    24. 24/06/17 06:31:23 INFO SparkContext: Created broadcast 7 from broadcast at SparkBatch.java:79
    25. 24/06/17 06:31:23 INFO MemoryStore: Block broadcast_8 stored as values in memory (estimated size 32.0 KiB, free 434.3 MiB)
    26. 24/06/17 06:31:23 INFO MemoryStore: Block broadcast_8_piece0 stored as bytes in memory (estimated size 3.8 KiB, free 434.3 MiB)
    27. 24/06/17 06:31:23 INFO SparkContext: Created broadcast 8 from broadcast at SparkBatch.java:79
    28. 24/06/17 06:31:23 INFO SparkContext: Starting job: collect at ExecuteStatement.scala:85
    29. 24/06/17 06:31:23 INFO SQLOperationListener: Query [65d78d0c-0e89-4f2f-a41c-7a584520bc81]: Job 3 started with 1 stages, 1 active jobs running
    30. 24/06/17 06:31:23 INFO SQLOperationListener: Query [65d78d0c-0e89-4f2f-a41c-7a584520bc81]: Stage 3.0 started with 1 tasks, 1 active stages running
    31. 2024-06-17 06:31:24.103 INFO KyuubiSessionManager-exec-pool: Thread-63 org.apache.kyuubi.operation.ExecuteStatement: Query[65d78d0c-0e89-4f2f-a41c-7a584520bc81] in RUNNING_STATE
    32. 24/06/17 06:31:24 INFO SQLOperationListener: Finished stage: Stage(3, 0); Name: 'collect at ExecuteStatement.scala:85'; Status: succeeded; numTasks: 1; Took: 343 msec
    33. 24/06/17 06:31:24 INFO DAGScheduler: Job 3 finished: collect at ExecuteStatement.scala:85, took 0.362979 s
    34. 24/06/17 06:31:24 INFO StatsReportListener: task runtime:(count: 1, mean: 326.000000, stdev: 0.000000, max: 326.000000, min: 326.000000)
    35. 24/06/17 06:31:24 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
    36. 24/06/17 06:31:24 INFO StatsReportListener: 326.0 ms 326.0 ms 326.0 ms 326.0 ms 326.0 ms 326.0 ms 326.0 ms 326.0 ms 326.0 ms
    37. 24/06/17 06:31:24 INFO StatsReportListener: shuffle bytes written:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
    38. 24/06/17 06:31:24 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
    39. 24/06/17 06:31:24 INFO StatsReportListener: 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B
    40. 24/06/17 06:31:24 INFO StatsReportListener: fetch wait time:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
    41. 24/06/17 06:31:24 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
    42. 24/06/17 06:31:24 INFO StatsReportListener: 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms
    43. 24/06/17 06:31:24 INFO StatsReportListener: remote bytes read:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
    44. 24/06/17 06:31:24 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
    45. 24/06/17 06:31:24 INFO StatsReportListener: 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B
    46. 24/06/17 06:31:24 INFO StatsReportListener: task result size:(count: 1, mean: 5057.000000, stdev: 0.000000, max: 5057.000000, min: 5057.000000)
    47. 24/06/17 06:31:24 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
    48. 24/06/17 06:31:24 INFO StatsReportListener: 4.9 KiB 4.9 KiB 4.9 KiB 4.9 KiB 4.9 KiB 4.9 KiB 4.9 KiB 4.9 KiB 4.9 KiB
    49. 24/06/17 06:31:24 INFO StatsReportListener: executor (non-fetch) time pct: (count: 1, mean: 84.969325, stdev: 0.000000, max: 84.969325, min: 84.969325)
    50. 24/06/17 06:31:24 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
    51. 24/06/17 06:31:24 INFO StatsReportListener: 85 % 85 % 85 % 85 % 85 % 85 % 85 % 85 % 85 %
    52. 24/06/17 06:31:24 INFO StatsReportListener: fetch wait time pct: (count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
    53. 24/06/17 06:31:24 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
    54. 24/06/17 06:31:24 INFO StatsReportListener: 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 %
    55. 24/06/17 06:31:24 INFO StatsReportListener: other time pct: (count: 1, mean: 15.030675, stdev: 0.000000, max: 15.030675, min: 15.030675)
    56. 24/06/17 06:31:24 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
    57. 24/06/17 06:31:24 INFO StatsReportListener: 15 % 15 % 15 % 15 % 15 % 15 % 15 % 15 % 15 %
    58. 24/06/17 06:31:24 INFO SQLOperationListener: Query [65d78d0c-0e89-4f2f-a41c-7a584520bc81]: Job 3 succeeded, 0 active jobs running
    59. 24/06/17 06:31:24 INFO ExecuteStatement: Processing anonymous's query[65d78d0c-0e89-4f2f-a41c-7a584520bc81]: RUNNING_STATE -> FINISHED_STATE, time taken: 5.241 seconds
    60. 24/06/17 06:31:24 INFO ExecuteStatement: statementId=65d78d0c-0e89-4f2f-a41c-7a584520bc81, operationRunTime=0.3 s, operationCpuTime=77 ms
    61. 2024-06-17 06:31:24.344 INFO KyuubiSessionManager-exec-pool: Thread-63 org.apache.kyuubi.operation.ExecuteStatement: Query[65d78d0c-0e89-4f2f-a41c-7a584520bc81] in FINISHED_STATE
    62. 2024-06-17 06:31:24.344 INFO KyuubiSessionManager-exec-pool: Thread-63 org.apache.kyuubi.operation.ExecuteStatement: Processing anonymous's query[65d78d0c-0e89-4f2f-a41c-7a584520bc81]: RUNNING_STATE -> FINISHED_STATE, time taken: 5.244 seconds
    63. +-------------+------------+----------------+-----------------------+-----------------------+--------------+-------------+---------+------------+
    64. | serial_num | trans_seq | attr_name | pre_attr_value | post_attr_value | in_run_file | event_date | family | operation |
    65. +-------------+------------+----------------+-----------------------+-----------------------+--------------+-------------+---------+------------+
    66. | WRQ2NM0H | 19 | STATE_NAME | END | END | NULL | 20240427 | 2TJ | CAL2 |
    67. | WRQ2NM0H | 19 | TEST_DATE | 04/27/2024 09:55:40 | 04/27/2024 09:55:40 | NULL | 20240427 | 2TJ | CAL2 |
    68. | WRQ2NM0H | 19 | FILE_TYPE | NTR | NTR | NULL | 20240427 | 2TJ | CAL2 |
    69. | WRQ2NM0H | 19 | CUM_TEST_TIME | 162301.49 | 212097.50 | 1 | 20240427 | 2TJ | CAL2 |
    70. | WRQ2NM0H | 19 | PCBA_COMP_ID5 | 70437 | 70437 | 1 | 20240427 | 2TJ | CAL2 |
    71. | WRQ2NM0H | 19 | CCVTEST | NONE | NONE | 1 | 20240427 | 2TJ | CAL2 |
    72. | WRQ2NM0H | 19 | PCBA_COMP_ID3 | 78810 | 78810 | 1 | 20240427 | 2TJ | CAL2 |
    73. | WRQ2NM0H | 19 | PCBA_COMP_ID2 | 57706 | 57706 | 1 | 20240427 | 2TJ | CAL2 |
    74. | WRQ2NM0H | 19 | PCBA_COMP_ID1 | 15229 | 15229 | 1 | 20240427 | 2TJ | CAL2 |
    75. | WRQ2NM0H | 19 | FTFC_APC_DATE | "0001-01-0100:00:00" | "0001-01-0100:00:00" | 1 | 20240427 | 2TJ | CAL2 |
    76. +-------------+------------+----------------+-----------------------+-----------------------+--------------+-------------+---------+------------+
    77. 10 rows selected (5.287 seconds)

    Two Spark executor pods are created in k8s.
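    One way to confirm this (a sketch; Spark on Kubernetes labels executor pods with spark-role=executor):

    # List the executor pods spawned for the Kyuubi-launched Spark engine
    kubectl get pods -n default -l spark-role=executor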

  • Original article: https://blog.csdn.net/w8998036/article/details/139744081