• HDFS disks full: Hive write failure


    The primary error message is: Error committing write to Hive, i.e., the write to Hive failed.

    java.sql.SQLException: Query failed (#20220803_210812_96922_u5y5y): Error committing write to Hive
        at io.prestosql.jdbc.AbstractPrestoResultSet.resultsException(AbstractPrestoResultSet.java:1730)
        at io.prestosql.jdbc.PrestoResultSet$ResultsPageIterator.computeNext(PrestoResultSet.java:225)
        at io.prestosql.jdbc.PrestoResultSet$ResultsPageIterator.computeNext(PrestoResultSet.java:185)
        at io.prestosql.jdbc.$internal.guava.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:141)
        at io.prestosql.jdbc.$internal.guava.collect.AbstractIterator.hasNext(AbstractIterator.java:136)
        at java.util.Spliterators$IteratorSpliterator.tryAdvance(Spliterators.java:1811)
        at java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:294)
        at java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:206)
        at java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
        at java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
        at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
        at io.prestosql.jdbc.PrestoResultSet$AsyncIterator.lambda$new$0(PrestoResultSet.java:131)
        at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: io.prestosql.spi.PrestoException: Error committing write to Hive
        at io.prestosql.plugin.hive.orc.OrcFileWriter.commit(OrcFileWriter.java:189)
        at io.prestosql.plugin.hive.HiveWriter.commit(HiveWriter.java:86)
        at io.prestosql.plugin.hive.HivePageSink.doFinish(HivePageSink.java:190)
        at io.prestosql.plugin.hive.authentication.NoHdfsAuthentication.doAs(NoHdfsAuthentication.java:23)
        at io.prestosql.plugin.hive.HdfsEnvironment.doAs(HdfsEnvironment.java:96)
        at io.prestosql.plugin.hive.HivePageSink.finish(HivePageSink.java:181)
        at io.prestosql.plugin.base.classloader.ClassLoaderSafeConnectorPageSink.finish(ClassLoaderSafeConnectorPageSink.java:77)
        at io.prestosql.operator.TableWriterOperator.finish(TableWriterOperator.java:208)
        at io.prestosql.operator.Driver.processInternal(Driver.java:397)
        at io.prestosql.operator.Driver.lambda$processFor$8(Driver.java:283)
        at io.prestosql.operator.Driver.tryWithLock(Driver.java:675)
        at io.prestosql.operator.Driver.processFor(Driver.java:276)
        at io.prestosql.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1076)
        at io.prestosql.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:163)
        at io.prestosql.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:484)
        at io.prestosql.$gen.Presto_345____20220802_064725_2.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)

    Reading further, the log also contains: could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation. In other words, although three DataNodes were running, all three were excluded from block placement, so no node was left to accept the write.

    Caused by: org.apache.hadoop.ipc.RemoteException: File /tmp/presto-root/3f51a7c5-085f-4d60-bcac-544a96572dbb/dp=1/20220803_210812_96922_u5y5y_51e03901-e2a7-4c7f-88cc-454635aa8f46 could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2278)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2808)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:905)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:577)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1029)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:957)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2957)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1511)
        at org.apache.hadoop.ipc.Client.call(Client.java:1457)
        at org.apache.hadoop.ipc.Client.call(Client.java:1367)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy333.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:513)
        at jdk.internal.reflect.GeneratedMethodAccessor811.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy334.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1081)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1865)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1668)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
    04-08-2022 05:08:48 CST sql ERROR - Job run failed!
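
    When every DataNode is "excluded", the usual cause is that none has free space left for a new block. On a real cluster this can be confirmed with `hdfs dfsadmin -report`; the sketch below parses a hypothetical sample of that report (the heredoc contents, node names, and the 95% threshold are illustrative assumptions, not from the original log) to list nearly full DataNodes:

    ```shell
    # Hypothetical sample of `hdfs dfsadmin -report` output; on a real cluster,
    # replace the heredoc with:  report=$(hdfs dfsadmin -report)
    report=$(cat <<'EOF'
    Name: 10.0.0.11:9866 (dn1)
    DFS Used%: 99.87%
    Name: 10.0.0.12:9866 (dn2)
    DFS Used%: 99.91%
    Name: 10.0.0.13:9866 (dn3)
    DFS Used%: 99.80%
    EOF
    )

    # List DataNodes whose DFS usage exceeds 95% -- these are the nodes the
    # NameNode excludes when choosing targets for a new block.
    over=$(echo "$report" | awk '
      /^Name:/      { name = $2 }
      /^DFS Used%:/ { sub(/%$/, "", $3); if ($3 + 0 > 95) print name, $3 "% used" }
    ')
    echo "$over"
    ```

    With all three sample nodes above 95%, all three lines are printed, matching the "3 node(s) are excluded" message in the trace.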

    In the end we confirmed that the HDFS disks were full, so the issue was resolved by expanding the cluster's storage capacity.
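
    The expansion decision can be reduced to a simple usage check. The sketch below uses hypothetical capacity figures (the byte counts and the 90% alert threshold are assumptions; on a real cluster take "Present Capacity" and "DFS Used" from `hdfs dfsadmin -report`, or run `hdfs dfs -df -h /`):

    ```shell
    # Hypothetical figures standing in for `hdfs dfsadmin -report` output.
    capacity_bytes=32000000000000   # 32 TB total DFS capacity (assumed)
    used_bytes=31650000000000       # ~31.65 TB used (assumed)

    # Compute usage percentage with one decimal place.
    pct=$(awk -v u="$used_bytes" -v c="$capacity_bytes" 'BEGIN { printf "%.1f", u / c * 100 }')
    echo "DFS used: ${pct}%"

    # The write path also needs headroom for staging files such as the
    # /tmp/presto-root/... path in the stack trace, so alert well before 100%.
    if awk -v p="$pct" 'BEGIN { exit !(p > 90) }'; then
      echo "ACTION: add DataNodes/disks or purge stale data (e.g. old staging dirs)"
    fi
    ```

    Keeping an alert at a threshold like this catches the problem before writes start failing, instead of discovering it through a broken job at 5 a.m.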

  • Original article: https://blog.csdn.net/cainiao1412/article/details/126157129