• Notes on a Hive error: an INSERT on the Spark engine failed because YARN was not started


    【Background】

    I had just configured the Spark execution engine in Hive, and my first Hive on Spark test failed.

    The error output was as follows:

    [atguigu@hadoop102 conf]$ hive
    which: no hbase in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/module/jdk1.8.0_212/bin:/opt/module/hadoop-3.3.4/bin:/opt/module/hadoop-3.3.4/sbin:/opt/module/hive-3.1.3/bin:/opt/module/kafka/bin:/opt/module/efak/bin:/home/atguigu/.local/bin:/home/atguigu/bin:/opt/module/jdk1.8.0_212/bin:/opt/module/hadoop-3.3.4/bin:/opt/module/hadoop-3.3.4/sbin:/opt/module/hive-3.1.3/bin:/opt/module/kafka/bin:/opt/module/efak/bin:/opt/module/spark/bin)
    Hive Session ID = 4b43a439-6dee-4295-a467-7182adb64f04
    Logging initialized using configuration in file:/opt/module/hive-3.1.3/conf/hive-log4j2.properties Async: true
    Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
    Hive Session ID = 6dbba42a-f926-4cee-8368-646383608b57
    hive (default)> create table student(id int, name string);
    OK
    Time taken: 0.948 seconds
    hive (default)> insert into table student values(1,'abc');
    Query ID = atguigu_20240420093653_68ffa538-97fa-4864-9d92-18dfc9def1c6
    Total jobs = 1
    Launching Job 1 out of 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 885f9da9-d447-4d55-a411-aca9c832703b)'
    FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 885f9da9-d447-4d55-a411-aca9c832703b

    The key error lines are:

    Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 885f9da9-d447-4d55-a411-aca9c832703b)'
    FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 885f9da9-d447-4d55-a411-aca9c832703b

    【Cause】

    A web search suggested that this error means Hive could not create a Spark client for the Spark session, usually because of a configuration problem. The common advice is to check the Spark-related settings in the Hive configuration files, in particular the execution-engine configuration.
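    For reference, the Spark-related section of hive-site.xml looks roughly like this. This is a sketch, not my exact file: the HDFS jars path and the timeout value follow the common Hive on Spark setup and will differ on other clusters.

    ```xml
    <!-- Sketch of Spark-related properties in $HIVE_HOME/conf/hive-site.xml.
         The hdfs:// path below is an example location for the uploaded
         Spark jars; adjust it to your own cluster. -->
    <property>
        <name>hive.execution.engine</name>
        <value>spark</value>
    </property>
    <property>
        <name>spark.yarn.jars</name>
        <value>hdfs://hadoop102:8020/spark-jars/*</value>
    </property>
    <property>
        <!-- How long Hive waits for the Spark client to connect;
             a too-short timeout can also surface as return code 30041. -->
        <name>hive.spark.client.connect.timeout</name>
        <value>10000ms</value>
    </property>
    ```

    If these settings look right and the error persists, the problem is usually in the runtime environment (as it was here) rather than in the configuration itself.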

    【Solution】

    In my case, creating the Spark client failed because YARN was not running — Hive on Spark submits its jobs to YARN for resource scheduling. So, start YARN: start-yarn.sh

    Then run the statement again: hive (default)> insert into table student values(1,'abc');
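    To catch this earlier next time, it helps to check that the YARN daemons actually show up in `jps` before submitting a Hive on Spark job. The helper below is a small sketch of my own (not part of Hive or Hadoop): it scans a `jps` listing for the standard YARN process names, ResourceManager and NodeManager.

    ```shell
    #!/bin/sh
    # Sketch: given the output of `jps`, report which YARN daemons
    # are missing. ResourceManager and NodeManager are the process
    # names that `jps` prints for a running YARN cluster node.
    check_yarn_daemons() {
        jps_output="$1"
        missing=""
        for d in ResourceManager NodeManager; do
            echo "$jps_output" | grep -q "$d" || missing="$missing $d"
        done
        if [ -n "$missing" ]; then
            echo "missing:$missing"
        else
            echo "yarn-ok"
        fi
    }

    # Typical use on a cluster node:
    #   check_yarn_daemons "$(jps)"
    # If it reports missing daemons, run start-yarn.sh and re-check.
    ```

    On a single-node setup both daemons should appear on the same host; on a real cluster the ResourceManager may live on a different node than the one you run Hive from, so check the node that hosts it.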

  • Original post: https://blog.csdn.net/yuanlaishidahuaa/article/details/137958329