• [Summary] "Connection refused" error when submitting a Spark job via Livy


    Problem description

    Submitting a Spark job via Livy fails: the client cannot reach the YARN ResourceManager and keeps failing over between rm1 and rm2.

    22/06/27 15:14:50 INFO RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm2 after 15 fail over attempts. Trying to fail over after sleeping for 15400ms.
    java.net.ConnectException: Call From dev-d-01.hz/10.192.168.62 to dev-d-02.hz:8032 failed on connection exception: java.net.ConnectException: Connection refused;

    Cause

    The Livy proxy user did not exist (or was not taking effect). After removing the Livy proxy user, jobs could be submitted normally.
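    As a rough illustration of the workaround (the Livy host, file path, and class name below are placeholders, not taken from the original post), a Livy batch submission carries the proxy user in the `proxyUser` field of the JSON body; dropping that field before posting to `/batches` matches the fix described above:

    ```python
    import json

    # Hypothetical Livy batch payload; the file path and class name are placeholders.
    payload = {
        "file": "hdfs:///jobs/example.jar",   # application jar (placeholder path)
        "className": "com.example.Main",      # placeholder main class
        "proxyUser": "hive",                  # the proxy user suspected of causing the failure
    }

    # Workaround from the post: remove the proxy user before submitting.
    payload.pop("proxyUser", None)

    body = json.dumps(payload)
    # The actual submission would then be, e.g.:
    #   POST http://<livy-host>:8998/batches  (Content-Type: application/json)
    print(body)
    ```

    If the proxy user is actually needed, the alternative is to make it take effect: the user must exist on the cluster and be allowed to impersonate (Hadoop `hadoop.proxyuser.*` settings in core-site.xml), rather than being removed.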

    Full error log

    22/06/27 15:14:50 INFO ConfiguredRMFailoverProxyProvider: Failing over to rm1
    22/06/27 15:14:50 INFO RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm1 after 14 fail over attempts. Trying to fail over immediately.
    22/06/27 15:14:50 INFO ConfiguredRMFailoverProxyProvider: Failing over to rm2
    22/06/27 15:14:50 INFO RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm2 after 15 fail over attempts. Trying to fail over after sleeping for 15400ms.
    java.net.ConnectException: Call From dev-d-01.hz/10.192.168.62 to dev-d-02.hz:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1479)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy30.getClusterMetrics(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:206)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy31.getClusterMetrics(Unknown Source)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:487)
    at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:165)
    at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:165)
    at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
    at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:60)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:164)
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1135)
    at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1530)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
    at org.apache.spark.deploy.SparkSubmit$$anon$3.run(SparkSubmit.scala:146)
    at org.apache.spark.deploy.SparkSubmit$$anon$3.run(SparkSubmit.scala:144)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:144)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 31 more

  • Original article: https://blog.csdn.net/li396864285/article/details/125484920