The upload does not throw any error, but the resulting file on HDFS has a size of 0.
When downloading, I get:
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-127181180-172.17.0.2-1526283881280:blk_1073741825_1001 file=/wing/LICENSE.txt
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:946)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:604)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:844)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:896)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2030)
at hadoop.hdfs.HdfsTest.testDownLoad(HdfsTest.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Could anyone help me take a look? The environment is deployed on Docker, with 1 master and 2 slaves. jps shows that both the DataNode and NameNode processes are alive.
The web UI on port 50070 is also reachable, and the nodes listed there show as alive too.
Running hdfs dfs -put / -get directly on the server works fine.
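Since the CLI works on the server but the remote Java client fails, it may help to compare what the NameNode reports. These are standard HDFS diagnostic commands (run on the server; the file path is taken from the stack trace above):

```shell
# List the DataNodes the NameNode knows about, including the addresses
# it hands out to clients. In a Docker setup these are often
# container-internal IPs that a client outside the containers cannot reach.
hdfs dfsadmin -report

# Check the file from the failing download: shows its blocks and which
# DataNodes hold each replica. A size-0 file with no blocks would mean
# the upload never actually wrote any data.
hdfs fsck /wing/LICENSE.txt -files -blocks -locations
```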
The Java code is as follows:
private FileSystem createClient() throws IOException {
    //run the client as the root user
    System.setProperty("HADOOP_USER_NAME", "root");
    //point the client at the NameNode
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://Master:9000");
    //create the HDFS client
    return FileSystem.get(conf);
}
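One possible factor in a Docker deployment (an assumption about the setup, not something visible in the post): the NameNode returns DataNode addresses as it knows them, which from outside the containers are often unreachable internal IPs, and that produces exactly this pattern of metadata operations succeeding while block reads/writes fail. HDFS has a client-side property, dfs.client.use.datanode.hostname, that makes the client connect to DataNodes by hostname instead. A sketch of the same createClient with it set:

```java
// Sketch only: identical to the client above, plus the datanode-hostname
// switch. This assumes the DataNode hostnames resolve on the client
// machine (e.g. via /etc/hosts entries pointing at the Docker host).
private FileSystem createClient() throws IOException {
    System.setProperty("HADOOP_USER_NAME", "root");
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://Master:9000");
    // Connect to DataNodes by hostname rather than their reported IP.
    conf.set("dfs.client.use.datanode.hostname", "true");
    return FileSystem.get(conf);
}
```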
@Test
public void testMkdir() throws IOException {
    FileSystem client = createClient();
    //create the directory
    client.mkdirs(new Path(PATH));
    //close the client
    client.close();
}
@Test
public void testUpload() throws Exception {
    FileSystem client = createClient();
    //build the input stream from the local file
    InputStream in = new FileInputStream("d:\\bcprov-jdk16-1.46.jar");
    //build the output stream on HDFS
    OutputStream out = client.create(new Path(PATH + "/bcprov.jar"));
    //upload; the 'true' flag closes both streams so buffered data is flushed
    IOUtils.copyBytes(in, out, 1024, true);
    //close the client
    client.close();
}
@Test
public void testDownLoad() throws Exception {
    FileSystem client = createClient();
    //build the input stream
    FSDataInputStream in = client.open(new Path("hdfs://Master:9000/wing/LICENSE.txt"));
    //build the output stream to the local file
    OutputStream out = new FileOutputStream("d:\\hello.txt");
    //download into the local file
    IOUtils.copyBytes(in, out, 1024, false);
    IOUtils.closeStream(in);
    IOUtils.closeStream(out);
    //close the client
    client.close();
}
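As an aside, the stack trace shows the failure coming from FileSystem.copyToLocalFile (HdfsTest.java:76) rather than the open/copyBytes pair above. For reference, that built-in helper does the whole download in one call; a minimal sketch using the same paths:

```java
// Sketch: download via the helper visible in the stack trace.
// Requires a reachable HDFS cluster; paths are taken from the post.
FileSystem client = createClient();
client.copyToLocalFile(new Path("/wing/LICENSE.txt"),
                       new Path("d:\\hello.txt"));
client.close();
```

Either variant will hit the same BlockMissingException as long as the client cannot reach the DataNodes themselves.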
Other API calls work fine, for example creating a directory.