y鹏程万里g 2022-10-31 12:43

Error running a DataX job with an HA configuration

Looking for a solution to the following.
hadoop01 and hadoop02 are the NameNodes; MySQL also runs on hadoop01; DataX runs on hadoop03; 03, 04, and 05 are DataNodes.
The job reads data from MySQL and writes it to HDFS.

INFO: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over hadoop102/172.16.89.53:8082 after 1 fail over attempts. Trying to fail over after sleeping for 1217ms.
java.net.ConnectException: Call From hadoop103/172.16.89.56 to hadoop102:8082 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1407)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
    at com.alibaba.datax.plugin.writer.hdfswriter.HdfsHelper.isPathexists(HdfsHelper.java:156)
    at com.alibaba.datax.plugin.writer.hdfswriter.HdfsWriter$Job.prepare(HdfsWriter.java:151)
    at com.alibaba.datax.core.job.JobContainer.prepareJobWriter(JobContainer.java:724)
    at com.alibaba.datax.core.job.JobContainer.prepare(JobContainer.java:309)
    at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:115)
    at com.alibaba.datax.core.Engine.start(Engine.java:92)
    at com.alibaba.datax.core.Engine.entry(Engine.java:171)
    at com.alibaba.datax.core.Engine.main(Engine.java:204)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1446)
    ... 25 more

Everything that needs to be configured has already been configured:

"defaultFS": "hdfs://mycluster",
"hadoopConfig":{
    "dfs.nameservices": "mycluster",
    "dfs.ha.namenodes.mycluster": "nn1,nn2",
    "dfs.namenode.rpc-address.mycluster.nn1": "hadoop101:8082",
    "dfs.namenode.rpc-address.mycluster.nn2": "hadoop102:8082",
    "dfs.client.failover.proxy.provider.mycluster": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
}
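
For context, the snippet above covers only the defaultFS and hadoopConfig keys of the hdfswriter. Below is a minimal sketch of how those keys sit inside a complete mysqlreader-to-hdfswriter job file; the database, table, column, path, and credential values are placeholders for illustration, not the values from the original job.

{
  "job": {
    "setting": { "speed": { "channel": 1 } },
    "content": [
      {
        "reader": {
          "name": "mysqlreader",
          "parameter": {
            "username": "root",
            "password": "******",
            "column": ["id", "name"],
            "connection": [
              {
                "table": ["example_table"],
                "jdbcUrl": ["jdbc:mysql://hadoop01:3306/example_db"]
              }
            ]
          }
        },
        "writer": {
          "name": "hdfswriter",
          "parameter": {
            "defaultFS": "hdfs://mycluster",
            "hadoopConfig": {
              "dfs.nameservices": "mycluster",
              "dfs.ha.namenodes.mycluster": "nn1,nn2",
              "dfs.namenode.rpc-address.mycluster.nn1": "hadoop101:8082",
              "dfs.namenode.rpc-address.mycluster.nn2": "hadoop102:8082",
              "dfs.client.failover.proxy.provider.mycluster": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
            },
            "fileType": "text",
            "path": "/example/output",
            "fileName": "example_file",
            "column": [
              { "name": "id", "type": "BIGINT" },
              { "name": "name", "type": "STRING" }
            ],
            "writeMode": "append",
            "fieldDelimiter": "\t"
          }
        }
      }
    ]
  }
}

Note that the hadoopConfig values must match the ports the NameNodes actually listen on, which is what the failover attempts in the log above are checking against.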