[Urgent!] Java access to HBase fails with no obvious exception and cannot read the target table (the requirement is simply to read data from one table in HBase)

This is fairly urgent; I'd appreciate any help.

Local hosts configuration (the hosts files on all of the machines involved are identical):


127.0.0.1 localhost
192.168.0.25 Master.Hadoop
192.168.0.26 Slave1.Hadoop
192.168.0.27 Slave2.Hadoop
192.168.0.28 Slave3.Hadoop

Basic HBase information:


http://master.hadoop:60010/master-status
HBase Root Directory: hdfs://Master.Hadoop:9000/hbase

Code:


Configuration configuration = HBaseConfiguration.create();
configuration.set("hbase.zookeeper.property.clientPort", "2181");
// "Slave1.Hadoop" appeared twice in the original quorum list; Slave3.Hadoop was presumably intended.
configuration.set("hbase.zookeeper.quorum", "Master.Hadoop,Slave1.Hadoop,Slave2.Hadoop,Slave3.Hadoop");
// 60010 is the master's web UI port; on this cluster the master RPC port is 60000.
// (The 1.x client resolves the master via ZooKeeper anyway, so this setting is optional.)
configuration.set("hbase.master", "Master.Hadoop:60000");
// Backslashes must be escaped inside a Java string literal.
System.setProperty("hadoop.home.dir", "C:\\Program Files\\hadoop\\hadoop-common-2.2.0-bin-master");

Connection connection = ConnectionFactory.createConnection(configuration);
System.out.println( connection );
Admin admin = connection.getAdmin();
System.out.println( admin );

TableName tableName1 = TableName.valueOf("hbase:meta");
System.out.println( tableName1 );
System.out.println( admin.tableExists(tableName1) ); // prints "true" (see log), so ZooKeeper is reachable

Table table = connection.getTable(tableName1);
HTableDescriptor a = table.getTableDescriptor(); // this is the call that hangs retrying the master RPC
System.out.println( a );

Log output:


"C:\Program Files\Java\jdk1.8.0_181\bin\java.exe" "-javaagent:C:\Program Files\JetBrains\IntelliJ IDEA 2018.2.3\lib\idea_rt.jar=57269:C:\Program Files\JetBrains\IntelliJ IDEA 2018.2.3\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.8.0_181\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\resources.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\rt.jar;C:\Users\Administrator\IdeaProjects\TestHbase\out\production\TestHbase;D:\lib\xz-1.0.jar;D:\lib\asm-3.1.jar;D:\lib\avro-1.7.4.jar;D:\lib\common-1.0.jar;D:\lib\domain-1.0.jar;D:\lib\jfinal-3.1.jar;D:\lib\joni-2.1.2.jar;D:\lib\noggit-0.6.jar;D:\lib\jsch-0.1.42.jar;D:\lib\service-1.0.jar;D:\lib\xmlenc-0.52.jar;D:\lib\druid-1.0.31.jar;D:\lib\guava-12.0.1.jar;D:\lib\jcifs-1.3.17.jar;D:\lib\jetty-6.1.26.jar;D:\lib\jsr305-1.3.9.jar;D:\lib\log4j-1.2.16.jar;D:\lib\cos-26Dec2008.jar;D:\lib\paranamer-2.3.jar;D:\lib\activation-1.1.jar;D:\lib\commons-el-1.0.jar;D:\lib\commons-io-2.4.jar;D:\lib\httpcore-4.4.4.jar;D:\lib\httpmime-4.4.1.jar;D:\lib\jaxb-api-2.2.2.jar;D:\lib\jcodings-1.0.8.jar;D:\lib\jsp-2.1-6.1.14.jar;D:\lib\stax-api-1.0-2.jar;D:\lib\cglib-nodep-3.1.jar;D:\lib\commons-cli-1.2.jar;D:\lib\commons-net-3.1.jar;D:\lib\disruptor-3.3.0.jar;D:\lib\fastjson-1.2.37.jar;D:\lib\jersey-core-1.9.jar;D:\lib\servlet-api-2.4.jar;D:\lib\slf4j-api-1.6.6.jar;D:\lib\stax2-api-3.1.4.jar;D:\lib\zookeeper-3.4.6.jar;D:\lib\commons-lang-2.6.jar;D:\lib\commons-math-2.2.jar;D:\lib\httpclient-4.5.2.jar;D:\lib\solr-solrj-6.1.0.jar;D:\lib\freemarker-2.3.23.jar;D:\lib\hadoop-auth-2.5.1.jar;D:\lib\hadoop-hdfs-2.7.4.jar;D:\lib\jersey-server-1.9.jar;D:\lib\jetty-util-6.1.26.jar;D:\lib\netty-3.6.2.Final.jar;D:\lib\api-util-1.0.0-M20.jar;D:\lib\commons-codec-1.11.jar;D:\lib\hbase-client-1.2.3.jar;D:\lib\hbase-common-1.2.3.jar;D:\lib\hbase-server-1.4.0.jar;D:\lib\jsp-api-2.1-6.1.14.jar;D:\lib\leveldbjni-all-1.8.jar;D:\lib\metrics-core-2.2.0.jar;D:\lib\metrics-core-3.1.2.jar;D:\lib\commons-logging-1.2.jar;D:\lib\commons-math3-3.1.1.jar;D:\lib\hadoop-client-2.7.4.jar;D:\lib\hadoop-common-2.5.1.jar;D:\lib\hbase-metrics-1.4.0.jar;D:\lib\jamon-runtime-2.4.1.jar;D:\lib\protobuf-java-2.5.0.jar;D:\lib\slf4j-log4j12-1.6.6.jar;D:\lib\snappy-java-1.0.4.1.jar;D:\lib\commons-digester-1.8.jar;D:\lib\hbase-protocol-1.2.3.jar;D:\lib\jackson-jaxrs-1.9.13.jar;D:\lib\jcl-over-slf4j-1.7.7.jar;D:\lib\commons-daemon-1.0.13.jar;D:\lib\
hadoop-yarn-api-2.7.4.jar;D:\lib\hbase-procedure-1.4.0.jar;D:\lib\jasper-runtime-5.5.23.jar;D:\lib\api-asn1-api-1.0.0-M20.jar;D:\lib\commons-compress-1.4.1.jar;D:\lib\commons-httpclient-3.1.jar;D:\lib\jasper-compiler-5.5.23.jar;D:\lib\jetty-sslengine-6.1.26.jar;D:\lib\netty-all-4.0.23.Final.jar;D:\lib\servlet-api-2.5-6.1.14.jar;D:\lib\apacheds-i18n-2.0.0-M15.jar;D:\lib\commons-beanutils-1.7.0.jar;D:\lib\hbase-annotations-1.2.3.jar;D:\lib\hbase-metrics-api-1.4.0.jar;D:\lib\hbase-prefix-tree-1.4.0.jar;D:\lib\jackson-core-asl-1.9.13.jar;D:\lib\woodstox-core-asl-4.4.1.jar;D:\lib\hadoop-annotations-2.5.1.jar;D:\lib\hadoop-yarn-client-2.7.4.jar;D:\lib\hadoop-yarn-common-2.5.1.jar;D:\lib\hbase-common-1.4.0-tests.jar;D:\lib\commons-collections-3.2.2.jar;D:\lib\commons-configuration-1.6.jar;D:\lib\hbase-hadoop-compat-1.4.0.jar;D:\lib\jackson-mapper-asl-1.9.13.jar;D:\lib\hbase-hadoop2-compat-1.4.0.jar;D:\lib\mysql-connector-java-5.1.38.jar;D:\lib\commons-beanutils-core-1.8.0.jar;D:\lib\findbugs-annotations-1.3.9-1.jar;D:\lib\htrace-core-3.1.0-incubating.jar;D:\lib\hadoop-yarn-server-common-2.7.4.jar;D:\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\lib\hadoop-mapreduce-client-app-2.7.4.jar;D:\lib\hadoop-mapreduce-client-core-2.5.1.jar;D:\lib\hadoop-mapreduce-client-common-2.7.4.jar;D:\lib\hadoop-mapreduce-client-shuffle-2.7.4.jar;D:\lib\hadoop-mapreduce-client-jobclient-2.7.4.jar" TestHbase

[DEBUG] [10:23:53] org.apache.hadoop.security.Groups - Creating new Groups object
[DEBUG] [10:23:53] org.apache.hadoop.util.NativeCodeLoader - Trying to load the custom-built native-hadoop library...
[DEBUG] [10:23:53] org.apache.hadoop.util.NativeCodeLoader - Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
[DEBUG] [10:23:53] org.apache.hadoop.util.NativeCodeLoader - java.library.path=C:\Program Files\Java\jdk1.8.0_181\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\ProgramData\Oracle\Java\javapath;C:\Program Files\Java\jdk1.7.0_51\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;.
[WARN ] [10:23:53] org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[DEBUG] [10:23:53] org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback - Falling back to shell based
[DEBUG] [10:23:53] org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback - Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
[DEBUG] [10:23:53] org.apache.hadoop.security.Groups - Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
[DEBUG] [10:23:53] org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, value=[Rate of successful kerberos logins and latency (milliseconds)], valueName=Time)
[DEBUG] [10:23:53] org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, value=[Rate of failed kerberos logins and latency (milliseconds)], valueName=Time)
[DEBUG] [10:23:53] org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, value=[GetGroups], valueName=Time)
[DEBUG] [10:23:53] org.apache.hadoop.metrics2.impl.MetricsSystemImpl - UgiMetrics, User and group related metrics
[DEBUG] [10:23:54] org.apache.hadoop.security.authentication.util.KerberosName - Kerberos krb5 configuration not found, setting default realm to empty

[DEBUG] [10:23:54] org.apache.hadoop.security.UserGroupInformation - hadoop login
[DEBUG] [10:23:54] org.apache.hadoop.security.UserGroupInformation - hadoop login commit
[DEBUG] [10:23:54] org.apache.hadoop.security.UserGroupInformation - using local user:NTUserPrincipal: admin
[DEBUG] [10:23:54] org.apache.hadoop.security.UserGroupInformation - UGI loginUser:admin (auth:SIMPLE)
[INFO ] [10:23:55] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0x17d0685f connecting to ZooKeeper ensemble=192.168.0.25:2181

[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:host.name=WIN-SSJFMH6ELVT
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_181
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:java.home=C:\Program Files\Java\jdk1.8.0_181\jre
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=C:\Program Files\Java\jdk1.8.0_181\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\resources.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\rt.jar;C:\Users\Administrator\IdeaProjects\TestHbase\out\production\TestHbase;D:\lib\xz-1.0.jar;D:\lib\asm-3.1.jar;D:\lib\avro-1.7.4.jar;D:\lib\common-1.0.jar;D:\lib\domain-1.0.jar;D:\lib\jfinal-3.1.jar;D:\lib\joni-2.1.2.jar;D:\lib\noggit-0.6.jar;D:\lib\jsch-0.1.42.jar;D:\lib\service-1.0.jar;D:\lib\xmlenc-0.52.jar;D:\lib\druid-1.0.31.jar;D:\lib\guava-12.0.1.jar;D:\lib\jcifs-1.3.17.jar;D:\lib\jetty-6.1.26.jar;D:\lib\jsr305-1.3.9.jar;D:\lib\log4j-1.2.16.jar;D:\lib\cos-26Dec2008.jar;D:\lib\paranamer-2.3.jar;D:\lib\activation-1.1.jar;D:\lib\commons-el-1.0.jar;D:\lib\commons-io-2.4.jar;D:\lib\httpcore-4.4.4.jar;D:\lib\httpmime-4.4.1.jar;D:\lib\jaxb-api-2.2.2.jar;D:\lib\jcodings-1.0.8.jar;D:\lib\jsp-2.1-6.1.14.jar;D:\lib\stax-api-1.0-2.jar;D:\lib\cglib-nodep-3.1.jar;D:\lib\commons-cli-1.2.jar;D:\lib\commons-net-3.1.jar;D:\lib\disruptor-3.3.0.jar;D:\lib\fastjson-1.2.37.jar;D:\lib\jersey-core-1.9.jar;D:\lib\servlet-api-2.4.jar;D:\lib\slf4j-api-1.6.6.jar;D:\lib\stax2-api-3.1.4.jar;D:\lib\zookeeper-3.4.6.jar;D:\lib\commons-lang-2.6.jar;D:\lib\commons-math-2.2.jar;D:\lib\httpclient-4.5.2.jar;D:\lib\solr-solrj-6.1.0.jar;D:\lib\freemarker-2.3.23.jar;D:\lib\hadoop-auth-2.5.1.jar;D:\lib\hadoop-hdfs-2.7.4.jar;D:\lib\jersey-server-1.9.jar;D:\lib\jetty-util-6.1.26.jar;D:\lib\netty-3.6.2.Final.jar;D:\lib\api-util-1.0.0-M20.jar;D:\lib\commons-codec-1.11.jar;D:\lib\hbase-client-1.2.3.jar;D:\lib\hbase-common-1.2.3.jar;D:\lib\hbase-server-1.4.0.jar;D:\lib\jsp-api-2.1-6.1.14.jar;D:\lib\leveldbjni-all-1.8.jar;D:\lib\metrics-core-2.2.0.jar;D:\lib\metrics-core-3.1.2.jar;D:\lib\commons-logging-1.2.jar;D:\lib\commons-math3-3.1.1.jar;D:\lib\hadoop-client-2.7.4.jar;D:\lib\hadoop-common-2.5.1.jar;D:\lib\hbase-metrics-1.4.0.jar;D:\lib\jamon-runtime-2.4.1.jar;D:\lib\protobuf-java-2.5.0.jar;D:\lib\slf4j-log4j12-1.6.6.jar;D:\lib\snappy-java-1.0.4.1.jar;D:\lib\commons-digester-1.8.jar;D:\lib\hbase-protocol-1.2.3.jar;D:\lib\jackson-jaxrs-1.9.13.jar;D:\lib\jcl-over-slf4j-1.7.7.jar;D:\lib\commons-daemon-1.0.13.jar;D:\lib\hadoop-yarn-api-2.7.4.jar;D:\lib\hbase-procedure-1.4.0.jar;D:\lib\jasper-runtime-5.5.23.jar;D:\lib\api-asn1-api-1.0.0-M20.jar;D:\lib\com
mons-compress-1.4.1.jar;D:\lib\commons-httpclient-3.1.jar;D:\lib\jasper-compiler-5.5.23.jar;D:\lib\jetty-sslengine-6.1.26.jar;D:\lib\netty-all-4.0.23.Final.jar;D:\lib\servlet-api-2.5-6.1.14.jar;D:\lib\apacheds-i18n-2.0.0-M15.jar;D:\lib\commons-beanutils-1.7.0.jar;D:\lib\hbase-annotations-1.2.3.jar;D:\lib\hbase-metrics-api-1.4.0.jar;D:\lib\hbase-prefix-tree-1.4.0.jar;D:\lib\jackson-core-asl-1.9.13.jar;D:\lib\woodstox-core-asl-4.4.1.jar;D:\lib\hadoop-annotations-2.5.1.jar;D:\lib\hadoop-yarn-client-2.7.4.jar;D:\lib\hadoop-yarn-common-2.5.1.jar;D:\lib\hbase-common-1.4.0-tests.jar;D:\lib\commons-collections-3.2.2.jar;D:\lib\commons-configuration-1.6.jar;D:\lib\hbase-hadoop-compat-1.4.0.jar;D:\lib\jackson-mapper-asl-1.9.13.jar;D:\lib\hbase-hadoop2-compat-1.4.0.jar;D:\lib\mysql-connector-java-5.1.38.jar;D:\lib\commons-beanutils-core-1.8.0.jar;D:\lib\findbugs-annotations-1.3.9-1.jar;D:\lib\htrace-core-3.1.0-incubating.jar;D:\lib\hadoop-yarn-server-common-2.7.4.jar;D:\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\lib\hadoop-mapreduce-client-app-2.7.4.jar;D:\lib\hadoop-mapreduce-client-core-2.5.1.jar;D:\lib\hadoop-mapreduce-client-common-2.7.4.jar;D:\lib\hadoop-mapreduce-client-shuffle-2.7.4.jar;D:\lib\hadoop-mapreduce-client-jobclient-2.7.4.jar;C:\Program Files\JetBrains\IntelliJ IDEA 2018.2.3\lib\idea_rt.jar
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=C:\Program Files\Java\jdk1.8.0_181\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\ProgramData\Oracle\Java\javapath;C:\Program Files\Java\jdk1.7.0_51\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;.
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\1\
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:os.name=Windows Server 2008 R2
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:os.version=6.1
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:user.name=admin
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:user.home=C:\Users\Administrator
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Client environment:user.dir=C:\Users\Administrator\IdeaProjects\TestHbase
[INFO ] [10:23:55] org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=192.168.0.25:2181 sessionTimeout=90000 watcher=hconnection-0x17d0685f0x0, quorum=192.168.0.25:2181, baseZNode=/hbase

[DEBUG] [10:23:55] org.apache.zookeeper.ClientCnxn - zookeeper.disableAutoWatchReset is false
[INFO ] [10:23:55] org.apache.zookeeper.ClientCnxn - Opening socket connection to server 192.168.0.25/192.168.0.25:2181. Will not attempt to authenticate using SASL (unknown error)
[INFO ] [10:23:55] org.apache.zookeeper.ClientCnxn - Socket connection established to 192.168.0.25/192.168.0.25:2181, initiating session
[DEBUG] [10:23:55] org.apache.zookeeper.ClientCnxn - Session establishment request sent on 192.168.0.25/192.168.0.25:2181
[INFO ] [10:23:56] org.apache.zookeeper.ClientCnxn - Session establishment complete on server 192.168.0.25/192.168.0.25:2181, sessionid = 0x165a4d303830022, negotiated timeout = 40000
[DEBUG] [10:23:56] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher - hconnection-0x17d0685f0x0, quorum=192.168.0.25:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
[DEBUG] [10:23:56] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher - hconnection-0x17d0685f-0x165a4d303830022 connected

[DEBUG] [10:23:56] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 1,3 replyHeader:: 1,184683596486,0 request:: '/hbase/hbaseid,F response:: s{4294967310,184683593733,1421399758722,1536068918058,938,0,0,0,60,0,4294967310}
[DEBUG] [10:23:56] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 2,4 replyHeader:: 2,184683596486,0 request:: '/hbase/hbaseid,F response:: #ffffffff000133135313538404d61737465722e4861646f6f7066626162613563302d313737332d343731342d613630622d643233626232623865373831,s{4294967310,184683593733,1421399758722,1536068918058,938,0,0,0,60,0,4294967310}
[DEBUG] [10:23:56] org.apache.hadoop.hbase.ipc.AbstractRpcClient - Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c1e2a9e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
hconnection-0x17d0685f
org.apache.hadoop.hbase.client.HBaseAdmin@272ed83b
hbase:meta
true
[DEBUG] [10:23:56] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 3,3 replyHeader:: 3,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:23:56] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 4,4 replyHeader:: 4,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:23:56] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:23:56] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[DEBUG] [10:23:57] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 5,3 replyHeader:: 5,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:23:57] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 6,4 replyHeader:: 6,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:23:57] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:23:57] org.apache.hadoop.hbase.ipc.RpcClientImpl - Not trying to connect to Master.Hadoop/192.168.0.25:60000 this server is in the failed servers list
[DEBUG] [10:23:57] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 7,3 replyHeader:: 7,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:23:57] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 8,4 replyHeader:: 8,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:23:57] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:23:57] org.apache.hadoop.hbase.ipc.RpcClientImpl - Not trying to connect to Master.Hadoop/192.168.0.25:60000 this server is in the failed servers list
[DEBUG] [10:23:57] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 9,3 replyHeader:: 9,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:23:57] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 10,4 replyHeader:: 10,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:23:57] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:23:57] org.apache.hadoop.hbase.ipc.RpcClientImpl - Not trying to connect to Master.Hadoop/192.168.0.25:60000 this server is in the failed servers list
[DEBUG] [10:23:58] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 11,3 replyHeader:: 11,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:23:58] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 12,4 replyHeader:: 12,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:23:58] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:23:58] org.apache.hadoop.hbase.ipc.RpcClientImpl - Not trying to connect to Master.Hadoop/192.168.0.25:60000 this server is in the failed servers list
[DEBUG] [10:23:59] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 13,3 replyHeader:: 13,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:23:59] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 14,4 replyHeader:: 14,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:23:59] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:23:59] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[DEBUG] [10:24:01] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 15,3 replyHeader:: 15,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:24:01] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 16,4 replyHeader:: 16,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:24:01] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:24:01] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[DEBUG] [10:24:05] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 17,3 replyHeader:: 17,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:24:05] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 18,4 replyHeader:: 18,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:24:05] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:24:05] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[DEBUG] [10:24:15] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 19,3 replyHeader:: 19,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:24:15] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 11ms
[DEBUG] [10:24:15] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 20,4 replyHeader:: 20,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:24:15] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:24:15] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[DEBUG] [10:24:25] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 21,3 replyHeader:: 21,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:24:25] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 3ms
[DEBUG] [10:24:25] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 22,4 replyHeader:: 22,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:24:25] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:24:25] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[DEBUG] [10:24:35] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 23,3 replyHeader:: 23,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:24:35] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 1ms
[DEBUG] [10:24:35] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 24,4 replyHeader:: 24,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:24:35] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:24:35] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[INFO ] [10:24:35] org.apache.hadoop.hbase.client.RpcRetryingCaller - Call exception, tries=10, retries=35, started=38894 ms ago, cancelled=false, msg=
[DEBUG] [10:24:45] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 25,3 replyHeader:: 25,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:24:45] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 1ms
[DEBUG] [10:24:45] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 26,4 replyHeader:: 26,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:24:45] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:24:45] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[INFO ] [10:24:45] org.apache.hadoop.hbase.client.RpcRetryingCaller - Call exception, tries=11, retries=35, started=48956 ms ago, cancelled=false, msg=
[DEBUG] [10:24:58] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 1ms
[DEBUG] [10:25:05] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 27,3 replyHeader:: 27,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:25:05] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 28,4 replyHeader:: 28,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:25:05] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:25:05] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[INFO ] [10:25:05] org.apache.hadoop.hbase.client.RpcRetryingCaller - Call exception, tries=12, retries=35, started=69113 ms ago, cancelled=false, msg=
[DEBUG] [10:25:19] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 1ms
[DEBUG] [10:25:25] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 29,3 replyHeader:: 29,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:25:25] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 30,4 replyHeader:: 30,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:25:25] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:25:25] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[INFO ] [10:25:25] org.apache.hadoop.hbase.client.RpcRetryingCaller - Call exception, tries=13, retries=35, started=89176 ms ago, cancelled=false, msg=
[DEBUG] [10:25:39] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 0ms
[DEBUG] [10:25:45] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 31,3 replyHeader:: 31,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:25:45] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 32,4 replyHeader:: 32,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:25:45] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:25:45] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[INFO ] [10:25:45] org.apache.hadoop.hbase.client.RpcRetryingCaller - Call exception, tries=14, retries=35, started=109299 ms ago, cancelled=false, msg=
[DEBUG] [10:25:59] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 0ms
[DEBUG] [10:26:05] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 33,3 replyHeader:: 33,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:26:05] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 34,4 replyHeader:: 34,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:26:05] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:26:05] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[INFO ] [10:26:05] org.apache.hadoop.hbase.client.RpcRetryingCaller - Call exception, tries=15, retries=35, started=129364 ms ago, cancelled=false, msg=
[DEBUG] [10:26:19] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 0ms
[DEBUG] [10:26:26] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 35,3 replyHeader:: 35,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:26:26] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 36,4 replyHeader:: 36,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:26:26] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:26:26] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[INFO ] [10:26:26] org.apache.hadoop.hbase.client.RpcRetryingCaller - Call exception, tries=16, retries=35, started=149567 ms ago, cancelled=false, msg=
[DEBUG] [10:26:39] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 0ms
[DEBUG] [10:26:46] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 37,3 replyHeader:: 37,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:26:46] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 38,4 replyHeader:: 38,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:26:46] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:26:46] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[INFO ] [10:26:46] org.apache.hadoop.hbase.client.RpcRetryingCaller - Call exception, tries=17, retries=35, started=169742 ms ago, cancelled=false, msg=
[DEBUG] [10:26:59] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 1ms
[DEBUG] [10:27:06] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 39,3 replyHeader:: 39,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:27:06] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 40,4 replyHeader:: 40,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:27:06] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:27:06] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[INFO ] [10:27:06] org.apache.hadoop.hbase.client.RpcRetryingCaller - Call exception, tries=18, retries=35, started=189961 ms ago, cancelled=false, msg=
[DEBUG] [10:27:19] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 0ms
[DEBUG] [10:27:26] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 41,3 replyHeader:: 41,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:27:26] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 42,4 replyHeader:: 42,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:27:26] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:27:26] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[INFO ] [10:27:26] org.apache.hadoop.hbase.client.RpcRetryingCaller - Call exception, tries=19, retries=35, started=210070 ms ago, cancelled=false, msg=
[DEBUG] [10:27:39] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 2ms
[DEBUG] [10:27:46] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 43,3 replyHeader:: 43,184683596486,0 request:: '/hbase,F response:: s{4294967298,4294967298,1421399756338,1421399756338,0,1956,0,0,0,12,184683593750}
[DEBUG] [10:27:46] org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x165a4d303830022, packet:: clientPath:null serverPath:null finished:false header:: 44,4 replyHeader:: 44,184683596486,0 request:: '/hbase/master,F response:: #ffffffff000133135313538404d61737465722e4861646f6f70004d61737465722e4861646f6f702c36303030302c31353336303638393135363739,s{184683593731,184683593731,1536068917156,1536068917156,0,0,0,316840575388352512,59,0,184683593731}
[DEBUG] [10:27:46] org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
[DEBUG] [10:27:46] org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to Master.Hadoop/192.168.0.25:60000
[INFO ] [10:27:46] org.apache.hadoop.hbase.client.RpcRetryingCaller - Call exception, tries=20, retries=35, started=230074 ms ago, cancelled=false, msg=
[DEBUG] [10:28:00] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x165a4d303830022 after 1ms
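Reading the log: the ZooKeeper session is established, /hbase/hbaseid and /hbase/master are read successfully, and admin.tableExists prints true, yet every attempt to open the master RPC connection to Master.Hadoop/192.168.0.25:60000 fails and is retried (the address even lands on the failed-servers list), which is why RpcRetryingCaller keeps logging empty call exceptions. That pattern points at reachability of port 60000 from the Windows client (a firewall rule, or a master bound to a different interface) rather than at the client code itself. A minimal diagnostic sketch, not a fix, probing that port directly with plain Java (host and port copied from the log above):

```
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) {
        // Master RPC endpoint as reported in the log; adjust if your cluster differs.
        String host = "192.168.0.25";
        int port = 60000;
        try (Socket socket = new Socket()) {
            // Fail fast instead of retrying for minutes like the HBase client does.
            socket.connect(new InetSocketAddress(host, port), 3000);
            System.out.println("TCP connect to " + host + ":" + port + " succeeded");
        } catch (Exception e) {
            // A timeout or "connection refused" here suggests a firewall rule,
            // a master bound to a different interface, or a wrong port.
            System.out.println("TCP connect failed: " + e);
        }
    }
}
```

If this probe fails too, the problem is network-level; if it succeeds, the next suspect is the mixed hbase-client 1.2.3 / hbase-server 1.4.0 jars visible on the classpath above.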

Other related questions
Reading HBase from multiple Java threads: share one singleton client across all threads, or create a client per thread?

Which is better when reading HBase from multiple Java threads: a single shared client instance used by all threads, or a separate client created by each thread? I want to understand the pros and cons of a singleton client versus creating multiple clients, and when each is appropriate.
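Not an authoritative answer, but the pattern the HBase 1.x client API is designed around is one shared Connection per process (heavyweight, thread-safe) with a short-lived Table per thread or per operation (lightweight, not thread-safe). A minimal sketch; the table name "mytable" and row key "row1" are placeholders:

```
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

public class SharedConnectionDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // One Connection per process: it owns the ZooKeeper session and RPC pools and is thread-safe.
        final Connection connection = ConnectionFactory.createConnection(conf);

        Runnable worker = () -> {
            // Table is NOT thread-safe and is cheap to create: one per thread, closed after use.
            try (Table table = connection.getTable(TableName.valueOf("mytable"))) {
                Result r = table.get(new Get("row1".getBytes()));
                System.out.println(Thread.currentThread().getName() + ": " + r);
            } catch (IOException e) {
                e.printStackTrace();
            }
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        connection.close();
    }
}
```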

Slow HBase access through the Java API

Has anyone seen this? Connecting to HBase from Java succeeds, but every connection takes about 20 seconds, whether the HBase instance is on Linux or on my own Windows machine. All the time is spent establishing the connection, and the Java code reports no errors at all. HBase is a standalone install (version 1.2.6.1) using its bundled ZooKeeper; Hadoop was not installed at first, and installing it later made no difference. The hosts files are configured correctly. Right now even a trivial CRUD operation takes 20 seconds. This has been driving me crazy for days; any help is appreciated.
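A sketch that may help narrow down where the 20 seconds go: shrink the client's retry and timeout settings so the first underlying failure (often a hostname or reverse-DNS lookup on a standalone install) surfaces quickly instead of being retried silently. Property names are the standard hbase-client 1.x ones; the quorum value is a placeholder:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class FailFastConfig {
    public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");     // placeholder: your server here
        // Fewer retries and shorter pauses: errors surface in seconds, not minutes.
        conf.set("hbase.client.retries.number", "3");
        conf.set("hbase.client.pause", "500");               // ms between retries
        conf.set("hbase.client.operation.timeout", "10000"); // ms budget for a whole operation
        conf.set("hbase.rpc.timeout", "5000");               // ms budget for a single RPC
        conf.set("zookeeper.recovery.retry", "1");
        return conf;
    }
}
```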

Connection test from Java to HBase fails with a timeout

I installed standalone HBase 1.2.6 on Linux, using HBase's bundled ZooKeeper, and opened ports 16010 and 2181 in the Linux firewall. hbase shell works normally and the web UI is reachable from a browser, but a Java connection test always times out. As soon as I disable the Linux firewall, Java can connect and access HBase normally. What is going on? Does the firewall need additional ports opened?
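Likely relevant: besides 16010 (web UI) and 2181 (ZooKeeper), a 1.2.x Java client also talks directly to the master RPC port (16000 by default) and the regionserver RPC port (16020 by default), which would explain why disabling the firewall helps. A quick sketch to confirm the defaults compiled into your client jars (the cluster may override them in hbase-site.xml):

```
import org.apache.hadoop.hbase.HConstants;

public class DefaultPorts {
    public static void main(String[] args) {
        // Compile-time defaults of the client library, not necessarily the cluster's live config.
        System.out.println("master RPC port: " + HConstants.DEFAULT_MASTER_PORT);
        System.out.println("regionserver RPC port: " + HConstants.DEFAULT_REGIONSERVER_PORT);
    }
}
```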

Connecting to an HBase cluster from Java

Connecting to HBase from Java, the code hangs at HBaseAdmin admin1 = new HBaseAdmin(conf1);.
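For what it's worth, new HBaseAdmin(conf) is deprecated in the 1.x API; going through ConnectionFactory splits the work into two steps, which makes it easier to see whether the hang happens while connecting or while calling the master. A minimal sketch:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class AdminProbe {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Step 1: connection setup (ZooKeeper quorum); a hang here usually means quorum/hosts issues.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // Step 2: the first admin call goes to the master RPC; a hang here means the master is unreachable.
            System.out.println("tables: " + admin.listTableNames().length);
        }
    }
}
```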

Java connection to HBase waits forever

The only informative log line is: Session establishment complete on server tb86-es01/172.18.1.86:2181, sessionid = 0x15b79d49b158594, negotiated timeout = 40000. The code is as follows:

public class extract_hbase_job {
    private static Configuration hconf = null;
    static {
        Configuration conf = new Configuration();
        conf.set("hbase.zookeeper.quorum", "tb86-es01,tb87-es02,tb88-es03");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        System.setProperty("hadoop.home.dir", "D:\\work\\apache\\hadoop\\hadoop-2.6.0");
        hconf = HBaseConfiguration.create(conf);
    }

    public static void getRecord(String tableName, String rowKey) throws Exception {
        HTable table = new HTable(hconf, tableName.getBytes());
        Get get = new Get(rowKey.getBytes());
        Result rs = table.get(get);
        for (KeyValue kv : rs.raw()) {
            System.out.println(new String(kv.getQualifier()) + " ");
        }
    }

    @Test
    public void test() throws Exception {
        extract_hbase_job.getRecord("default:relation_graph", "00:global:cp:10878983");
    }
}

HBase itself also logs an error:

java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1867)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1881)
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:444)
    at sun.nio.ch.Net.bind(Net.java:436)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.apache.hadoop.hbase.ipc.RpcServer.bind(RpcServer.java:2366)
    at org.apache.hadoop.hbase.ipc.RpcServer$Listener.<init>(RpcServer.java:524)
    at org.apache.hadoop.hbase.ipc.RpcServer.<init>(RpcServer.java:1896)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:788)
    at org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:197)
    at org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:401)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:487)
    at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:271)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1862)
    ... 5 more

Java project times out when connecting to HBase

I'm a beginner. My project is a Java application that connects to HBase: the Java project runs on Windows and HBase runs on a Linux VM. Once HBase is up, the connection attempt reports a timeout (the hosts can reach each other). Help, please.

log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/E:/apache-tomcat-7.0.85-windows-x64/apache-tomcat-7.0.85/webapps/car_hbase/WEB-INF/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/E:/apache-tomcat-7.0.85-windows-x64/apache-tomcat-7.0.85/webapps/car_hbase/WEB-INF/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
e785dc9437424bf8a7714f460293896c
HBase table creation failed!
java.io.IOException: Failed to get result within timeout, timeout=60000ms
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:232)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:277)
    at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:438)
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:312)
    at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:604)
    at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
    at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:410)
    at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:420)
    at util.HBaseUtil.createTable(HBaseUtil.java:45)
    at util.HbaseDemo.createTable(HbaseDemo.java:55)
    at util.StartupListener.contextInitialized(StartupListener.java:31)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5118)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5641)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:145)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:1015)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:991)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:652)
    at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1296)
    at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:2038)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Granting permissions to HBase users through the Java API (HBase ACL interface)

How do I grant permissions to users in HBase, and on specific tables, through the Java API, i.e. write code against HBase's ACL interface?
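If the cluster runs the AccessController coprocessor, grants can be issued from Java through org.apache.hadoop.hbase.security.access.AccessControlClient. A minimal sketch; the user "alice" and table "demo" are placeholders:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.Permission;

public class GrantDemo {
    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
            // Grant READ and WRITE on table "demo" to user "alice" (both placeholders).
            // Passing null for family and qualifier grants on the whole table.
            AccessControlClient.grant(connection, TableName.valueOf("demo"),
                    "alice", null, null,
                    Permission.Action.READ, Permission.Action.WRITE);
        }
    }
}
```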

Operating on data in a standalone HBase from Java

16/03/04 17:09:56 INFO support.ClassPathXmlApplicationContext: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@1f6ae4d: startup date [Fri Mar 04 17:09:56 CST 2016]; root of context hierarchy
16/03/04 17:09:56 INFO xml.XmlBeanDefinitionReader: Loading XML bean definitions from class path resource [applicationContest.xml]
16/03/04 17:09:57 INFO support.DefaultListableBeanFactory: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1d8fe20: defining beans [hbaseConfiguration,htemplate]; root of factory hierarchy
16/03/04 17:09:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/04 17:09:57 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xc4dc7c connecting to ZooKeeper ensemble=192.168.1.202:2181
16/03/04 17:09:57 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-cdh5.2.0--1, built on 10/11/2014 20:49 GMT
16/03/04 17:09:57 INFO zookeeper.ZooKeeper: Client environment:host.name=xiaoming
16/03/04 17:09:57 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_17
16/03/04 17:09:57 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
16/03/04 17:09:57 INFO zookeeper.ZooKeeper: Client environment:java.home=C:\Program Files (x86)\Java\jdk1.7.0_17\jre
16/03/04 17:09:57 INFO zookeeper.ZooKeeper: Client environment:java.class.path=F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\classes;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\commons-codec-1.7.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\commons-collections-3.2.1.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\commons-configuration-1.6.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\commons-lang-2.6.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\commons-logging-1.1.1.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\guava-12.0.1.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\hadoop-auth-2.5.0-cdh5.2.0.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\hadoop-common-2.5.0-cdh5.2.0.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\hadoop-core-2.5.0-mr1-cdh5.2.0.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\hbase-client-0.98.6-cdh5.2.0.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\hbase-common-0.98.6-cdh5.2.0.jar;F:\mess\12Hadoop\workspace.......
16/03/04 17:09:57 INFO zookeeper.ClientCnxn: Opening socket connection to server 192.168.1.202/192.168.1.202:2181. Will not attempt to authenticate using SASL (unknown error)
16/03/04 17:09:57 INFO zookeeper.ClientCnxn: Socket connection established to 192.168.1.202/192.168.1.202:2181, initiating session
16/03/04 17:09:57 INFO zookeeper.ClientCnxn: Session establishment complete on server 192.168.1.202/192.168.1.202:2181, sessionid = 0x15341c9328d0012, negotiated timeout = 40000
16/03/04 17:17:50 WARN client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch hbase:meta table:
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=31, exceptions:
.......Fri Mar 04 17:17:50 CST 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@b7b17a, java.net.UnknownHostException: unknown host: hbase
......at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:129)
    at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:714)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:144)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1140)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1204)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1092)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1049)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:890)
    at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:72)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:113)
    at org.apache.hadoop.hbase.client.HTable.get(HTable.java:780)
    at org.springframework.data.hadoop.hbase.HbaseTemplate$2.doInTable(HbaseTemplate.java:182)
    at org.springframework.data.hadoop.hbase.HbaseTemplate.execute(HbaseTemplate.java:58)
    at org.springframework.data.hadoop.hbase.HbaseTemplate.get(HbaseTemplate.java:168)
    at org.springframework.data.hadoop.hbase.HbaseTemplate.get(HbaseTemplate.java:158)
    at com.bw.test.Mytest.get(Mytest.java:46)
    at com.bw.test.Mytest.main(Mytest.java:40)
Caused by: java.net.UnknownHostException: unknown host: hbase
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.<init>(RpcClient.java:385)
    at org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:351)
    at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1530)
    at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1442)
    at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
    at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:29966)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1562)
    at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:710)
    at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:708)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
    ... 16 more

java连接hbase Kerberos 24小时报错问题

Connecting to HBase from Java with Kerberos: the login expires after about 24 hours. Any pointers appreciated.

```java
try {
    String krbStr = Thread.currentThread().getContextClassLoader()
            .getResource("krb5.ini").getFile();
    String keyStr = Thread.currentThread().getContextClassLoader()
            .getResource(keytab).getFile();
    System.setProperty("java.security.krb5.conf", krbStr);
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation.loginUserFromKeytab(principal, keyStr);
} catch (IOException e) {
    log.error(e);
}
try {
    HBaseAdmin admin = new HBaseAdmin(conf);
    if (!admin.tableExists(tableName)) {
        HTableDescriptor tableDescripter = new HTableDescriptor(tableName.getBytes());
        tableDescripter.addFamily(new HColumnDescriptor("data"));
        admin.createTable(tableDescripter);
    }
} catch (Exception e) {
    log.error(e);
}
```

The code above works fine right after startup, but after roughly 24 hours the `admin.tableExists(tableName)` call starts failing with the error below:

```
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Sat Jul 13 14:30:29 CST 2019, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68090: row 'p_rsdisk,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=xx-xx-xx-xx-xx.indata.com,16020,1560266314048, seqNum=0
	at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:223)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
	at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
	at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
	at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
	at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
	at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:811)
	at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
	at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
	at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:303)
	at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:313)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
	at org.apache.catalina.filters.CorsFilter.handleNonCORS(CorsFilter.java:436)
	at org.apache.catalina.filters.CorsFilter.doFilter(CorsFilter.java:177)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
	at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:956)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:423)
	at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1079)
	at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
	at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68090: row 'p_rsdisk,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=xx-xx-xx-xx-xx.indata.com,16020,1560266314048, seqNum=0
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
	at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
Caused by: java.io.IOException: Could not set up IO Streams to xx-xx-xx-xx-xx.indata.com/xx.xx.xx.xx:16020
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:777)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:885)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:854)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1180)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
	at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
	... 4 more
Caused by: java.lang.RuntimeException: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:677)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:635)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:743)
	... 17 more
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
	at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:609)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:154)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:735)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:732)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:732)
	... 17 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
	at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
	at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
	at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
	at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
	... 26 more
```

What is causing this? Is it because the Kerberos ticket expired?
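Most likely yes: "Failed to find any Kerberos tgt" appearing after roughly a day matches the common 24-hour default ticket lifetime, and a long-running process has to re-login from its keytab because the initial TGT is never renewed on its own. Below is a minimal sketch of a periodic relogin, assuming the process logged in via `UserGroupInformation.loginUserFromKeytab` as in the code above; the hourly interval is arbitrary.

```java
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.security.UserGroupInformation;

// Schedule a periodic keytab relogin so the TGT is refreshed before it expires.
ScheduledExecutorService renewer = Executors.newSingleThreadScheduledExecutor();
renewer.scheduleAtFixedRate(() -> {
    try {
        // No-op while the ticket is still fresh; re-logs in from the keytab otherwise.
        UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
    } catch (IOException e) {
        e.printStackTrace(); // or log.error(e), as in the question's code
    }
}, 1, 1, TimeUnit.HOURS);
```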

How can Java access an HBase cluster built with Docker containers on a remote Linux server?

I set up a pseudo-distributed HBase cluster with Docker on a remote Linux server. How do I access and operate it from Java on my local Windows machine?

```java
conf = HBaseConfiguration.create();
// I think this should be my server's IP, since the container's IP is not mapped to Windows
conf.set("hbase.zookeeper.quorum", "192.168.10.168");
// 2281 is the host-side mapping of the master container's ZooKeeper port 2181
conf.set("hbase.zookeeper.property.clientPort", "2281");
conn = ConnectionFactory.createConnection(conf);
admin = conn.getAdmin();
System.out.println("connection success");
Table mk = conn.getTable(TableName.valueOf("user"));
Get g = new Get(Bytes.toBytes("1234"));
Result result = mk.get(g);
```

With this setup, execution reaches the `Result` line and then fails with an error about the master host being missing.
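The usual culprit here is name resolution: after the initial ZooKeeper lookup, the client is handed the hostnames the HBase master and RegionServers registered under (the container hostnames) and then connects to those names directly, so the Windows client must be able to resolve them to the server's IP and reach the published ports. A sketch of the client-side hosts entry, assuming the container's hostname is `hbase-master` (a placeholder; check the real name with `docker exec <container> hostname` or in the HBase web UI):

```
# C:\Windows\System32\drivers\etc\hosts on the Windows client
# Map the hostname the containers registered in ZooKeeper to the host server's IP.
192.168.10.168 hbase-master
```

The master (16000) and RegionServer (16020) ports also need to be published from the container with unchanged port numbers, because the client learns those port numbers from ZooKeeper as well.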

Frameworks for developing against HBase in Java

I've just started with HBase and need to write a Java client. So far I only know the HBase client API, at the level of get/put calls. When developing against relational databases there were always frameworks to lean on. How do experienced developers build HBase-based applications quickly? For instance, is there something that lets you query with SQL? Thanks.
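One common answer is Apache Phoenix, which layers a SQL engine over HBase and exposes it through a standard JDBC driver. A minimal sketch, assuming Phoenix is deployed on the cluster and the ZooKeeper quorum is reachable at `Master.Hadoop:2181`; the `users` table and its columns are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Query an HBase-backed table with plain SQL via the Phoenix JDBC driver.
public class PhoenixExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:Master.Hadoop:2181");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT id, name FROM users WHERE name LIKE ?")) {
            ps.setString(1, "zhang%"); // SQL-level fuzzy matching
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}
```

For batch-style SQL there is also Hive with its HBase storage handler; for plain object mapping without SQL, libraries such as simplehbase cover the get/put layer.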

Serving image content stored in HBase over HTTP

I have image data stored in an HBase table and can access the images over HTTP, but the responses come back as XML or JSON. The images are meant to be displayed directly in a web system (served the way a plain HTTP image URL would be). Can anyone explain how to do this, or point me at a method? Thanks. Below is the response I currently get from HBase: ![screenshot of the current HBase response](https://img-ask.csdn.net/upload/201805/31/1527751511_609190.png)
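Two approaches come to mind. If the XML/JSON responses are coming from the HBase REST gateway, a single-cell read can return raw bytes when the request sets `Accept: application/octet-stream`, though the browser still won't receive an image content type that way. The more flexible route is a thin servlet that reads the cell and writes the bytes with an image MIME type, so an `<img src=...>` tag can point straight at it. A sketch, with the table, family, and qualifier names (`pictures`, `data`, `img`) as placeholders:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Serves a cell's bytes directly as an image, so the browser can display it inline.
public class ImageServlet extends HttpServlet {
    private Connection connection; // created once; connections are heavyweight

    @Override
    public void init() {
        try {
            connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String rowKey = req.getParameter("row");
        try (Table table = connection.getTable(TableName.valueOf("pictures"))) {
            Result result = table.get(new Get(Bytes.toBytes(rowKey)));
            byte[] img = result.getValue(Bytes.toBytes("data"), Bytes.toBytes("img"));
            resp.setContentType("image/jpeg"); // adjust to the stored format
            resp.getOutputStream().write(img);
        }
    }
}
```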

How to implement fuzzy row-key queries against HBase in Java

```java
// How should this code be changed into a fuzzy query on the row key?
List<UFile> list = new ArrayList<UFile>();
Table table = conn.getTable(TableName.valueOf("yunpro"));
Get get = new Get(Bytes.toBytes("lrd"));
get.addColumn(Bytes.toBytes("picture"), Bytes.toBytes("alert")); // semicolon was missing here
Scan scan = new Scan(get);
Result res = table.get(get);
```
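A `Get` always targets exactly one row key, so fuzzy matching has to go through a `Scan` with a row filter. A sketch using `RowFilter` with `SubstringComparator` to match anywhere in the key, reusing the question's `conn`; for prefix-only matching, `PrefixFilter` or a start/stop-row range is cheaper:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.filter.SubstringComparator;
import org.apache.hadoop.hbase.util.Bytes;

// Fuzzy row-key matching: scan with a RowFilter instead of a single-row Get.
Table table = conn.getTable(TableName.valueOf("yunpro"));
Scan scan = new Scan();
// Keep rows whose key contains "lrd" anywhere in the key.
scan.setFilter(new RowFilter(CompareFilter.CompareOp.EQUAL, new SubstringComparator("lrd")));
scan.addColumn(Bytes.toBytes("picture"), Bytes.toBytes("alert"));
try (ResultScanner scanner = table.getScanner(scan)) {
    for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow())); // each matched row key
    }
}
```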

java.net.SocketTimeoutException when calling HBase from Java

I'm new to HBase and need this for work; please help. Connecting to HBase from Java produces the following error:

```
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Thu Apr 07 09:43:39 CST 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=80156: row 'gpsinfo,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=zpu-storage3,16020,1443175301611, seqNum=0
	at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:207)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
	at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
	at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:401)
	at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
	at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:604)
	at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
	at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:392)
	at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:402)
	at Test.main(Test.java:19)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=80156: row 'gpsinfo,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=zpu-storage3,16020,1443175301611, seqNum=0
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
	at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.lang.Thread.run(Unknown Source)
Caused by: java.net.UnknownHostException: unknown host: zpu-storage3
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.<init>(RpcClientImpl.java:303)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl.createConnection(RpcClientImpl.java:133)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl.getConnection(RpcClientImpl.java:1283)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1191)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:222)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:323)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32831)
	at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:373)
	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:200)
	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
	... 4 more
```
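The root cause is the last `Caused by`: `unknown host: zpu-storage3`. The client fetched the region location from `hbase:meta` and was handed the RegionServer's hostname, which the local machine cannot resolve. A hosts-file sketch for the client machine; the IP below is a placeholder for the server's real address:

```
# Client hosts file (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux).
# Map every HBase node's hostname to its IP so the client can reach the RegionServers.
192.168.x.x zpu-storage3
```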

hbase shell won't start, though the web UI is accessible

Installed HBase 1.3.1, Hadoop 2.8.2, JDK 9.0.1. HBase itself starts fine, but starting the hbase shell fails like this:

```
root@Master bin]# ./hbase shell
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.java.invokers.RubyToJavaInvoker (file:/usr/local/hadoop/hbase-1.2.6/lib/jruby-complete-1.6.8.jar) to method java.lang.Object.registerNatives()
WARNING: Please consider reporting this to the maintainers of org.jruby.java.invokers.RubyToJavaInvoker
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
ArgumentError: wrong number of arguments (0 for 1)
  method_added at file:/usr/local/hadoop/hbase-1.2.6/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:10
  method_added at file:/usr/local/hadoop/hbase-1.2.6/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:129
       Pattern at file:/usr/local/hadoop/hbase-1.2.6/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:2
        (root) at file:/usr/local/hadoop/hbase-1.2.6/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:1
       require at org/jruby/RubyKernel.java:1062
        (root) at file:/usr/local/hadoop/hbase-1.2.6/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:42
        (root) at /usr/local/hadoop/hbase-1.2.6/bin/hirb.rb:38
```

Line 38 of hirb.rb is `include JAVA`, and my environment variables are all configured correctly; I don't know what else could be wrong.
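The `ArgumentError` comes from the bundled jruby-complete-1.6.8, which predates JDK 9 and breaks on it; the CMS-deprecation and illegal-reflective-access warnings above it are further hints that the shell is running on JDK 9. HBase 1.x is built and tested against JDK 7/8, so the usual fix is to run HBase on JDK 8. A sketch, with the JDK path as a placeholder for wherever JDK 8 is installed:

```
# conf/hbase-env.sh: pin HBase (and thus the shell's JRuby) to a JDK 8 install.
# The path below is a placeholder.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
```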

Querying HBase data through Solr from Java: how to map results to entity classes automatically?

Current situation: simplehbase can map query results to entities automatically, but its filtering is not as comprehensive as Solr's. Can simplehbase and Solr be combined? If so, how? If not, is there another way to achieve the following goals?
1) fuzzy queries;
2) query results automatically mapped to Java entity classes.
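A common pattern is to treat Solr as a secondary index whose documents store the HBase row key: run the fuzzy query in Solr, collect the matching keys, then batch-Get the full rows from HBase and map them to entities (by hand or by handing the keys back to simplehbase). A sketch using SolrJ; the Solr URL, core name, `rowkey` field, and `mytable` are all placeholders, and the exact SolrJ client construction may differ by version:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.common.SolrDocument;

public class SolrIndexedLookup {
    // Step 1: ask Solr (the secondary index) for matching row keys.
    // Step 2: batch-Get the full rows from HBase and map them to entities.
    static Result[] fuzzyLookup(Connection hbase, SolrClient solr) throws Exception {
        SolrQuery query = new SolrQuery("name:*zhang*"); // wildcard matching happens in Solr
        List<Get> gets = new ArrayList<>();
        for (SolrDocument doc : solr.query(query).getResults()) {
            gets.add(new Get(Bytes.toBytes((String) doc.getFieldValue("rowkey"))));
        }
        try (Table table = hbase.getTable(TableName.valueOf("mytable"))) {
            return table.get(gets); // map each Result to your entity class here
        }
    }
}
```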

How to modify HBase table data from Java

Note: this is about modifying existing data in an HBase table, not inserting, deleting, or querying it. Modifying.
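HBase has no in-place UPDATE operation: writing a `Put` to an existing row key, family, and qualifier stores a new cell version, and reads return the newest version by default, so "modify" is simply a `Put` with the same coordinates as the old value. A sketch, where `connection` is an existing `org.apache.hadoop.hbase.client.Connection` and the table and column names are placeholders:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Overwrite the value of an existing cell: same row key, family, and qualifier.
try (Table table = connection.getTable(TableName.valueOf("mytable"))) {
    Put put = new Put(Bytes.toBytes("row1"));
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("newValue"));
    table.put(put); // the visible value of row1/cf:col is now "newValue"
}
```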

How to quickly turn HBase reads into a JSON array for the front end

```java
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "hadoop1,hadoop2,hadoop3");
HTable table = new HTable(conf, "DataCollection1");
System.out.println("scan1");
Scan scan1 = new Scan();
new PrefixFilter(Bytes.toBytes("row")); // note: this filter is created but never used
Filter filter3 = new PrefixFilter(Bytes.toBytes("2017-01-01"));
scan1.setFilter(filter3);
scan1.setMaxVersions();
ResultScanner scanner1 = table.getScanner(scan1);
System.out.println("scan2");
int count = 0;
JSONArray array = new JSONArray();
for (Result r : scanner1) {
    System.out.println("sssss");
    JSONObject mapOfColValues = new JSONObject(); // one JSON object per row, e.g. {name: wp}
    for (KeyValue kv : r.raw()) {
        System.out.println(String.format(
                "row:%s, family:%s, qualifier:%s, qualifiervalue:%s, timestamp:%s.",
                Bytes.toString(kv.getRow()), Bytes.toString(kv.getFamily()),
                Bytes.toString(kv.getQualifier()), Bytes.toString(kv.getValue()),
                kv.getTimestamp()));
        mapOfColValues.put(Bytes.toString(kv.getQualifier()), Bytes.toString(kv.getValue()));
    }
    array.add(mapOfColValues);
    count++;
    System.out.println(count);
}
scanner1.close();
table.close();
System.out.println(count);
System.out.println("-------------finished----------------");
```

Above is my main code. The DataCollection1 table has over 70 million rows; when I query one day's data (roughly 100,000 rows), execution gets stuck as soon as the loop over scanner1 starts, and it is very slow. The row key is designed as "time|ID". I really don't know what to do; any advice would be appreciated.
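Two things stand out. First, the initial `PrefixFilter` is created and immediately discarded, so it does nothing. Second, a bare `PrefixFilter` still makes the RegionServers walk rows from the start of the table; with a "time|ID" row key, bounding the scan with start/stop rows lets HBase jump straight to that day's data, and raising the scanner caching cuts the number of RPCs per `next()` call, which is usually what makes the loop feel stuck. A sketch of the tuned scan, using the same date strings as the question:

```java
// Bounded range scan: skip directly to the day's rows instead of filtering 70M rows.
Scan scan1 = new Scan();
scan1.setStartRow(Bytes.toBytes("2017-01-01")); // inclusive lower bound
scan1.setStopRow(Bytes.toBytes("2017-01-02"));  // exclusive upper bound: the next day's prefix
scan1.setCaching(1000);                         // rows fetched per RPC; the default is small
ResultScanner scanner1 = table.getScanner(scan1);
```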

Hadoop + HBase error: java.net.UnknownHostException

Problem: `java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "master":9000; java.net.UnknownHostException;`

Error log:

```
2017-07-13 21:26:45,915 FATAL [master:16000.activeMasterManager] master.HMaster: Failed to become active master
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "master":9000; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:744)
	at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:409)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1518)
	at org.apache.hadoop.ipc.Client.call(Client.java:1451)
	at org.apache.hadoop.ipc.Client.call(Client.java:1412)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy18.setSafeMode(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:666)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy19.setSafeMode(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy20.setSafeMode(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2596)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1223)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1207)
	at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
	at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)
	at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429)
	at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
	at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693)
	at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189)
	at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException
	... 32 more
2017-07-13 21:26:45,924 FATAL [master:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "master":9000; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
	[identical stack trace repeated]
2017-07-13 21:26:45,925 INFO [master:16000.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown.
```
Additional details:
1. All firewalls are disabled; running with root privileges.
2. Hadoop starts normally (confirmed with jps); ports 50070 and 8088 are reachable from a browser without problems.
3. ZooKeeper starts normally (confirmed with jps).
4. All Hadoop-related jars under hbase/lib were removed, the Hadoop jars from hadoop/share were copied into hbase/lib, and aws-java-sdk-core-1.11.158.jar and aws-java-sdk-s3-1.11.155.jar were added.

Versions:
1. hadoop 2.7.2
2. hbase 1.2.6
3. zookeeper 3.4.2
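`Invalid host name: local host is: (unknown); destination host is: "master":9000` means the HBase master cannot resolve the name `master` (taken from the HDFS root directory URI) and cannot even determine its own hostname. Per the Hadoop wiki page referenced in the trace, this is almost always a name-resolution problem rather than a jar problem. A sketch of the expected hosts entries; the IP is a placeholder:

```
# /etc/hosts on every node. The entry for "master" must point at its real IP,
# not 127.0.0.1, and each machine's `hostname` should match its entry here.
192.168.1.100 master
# ...plus one line per slave node.
```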
