HBase fails to start on Windows

D:\TSBrowserDownloads\db\hbase-2.1.0-bin\hbase-2.1.0\bin>start-hbase.cmd
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory
at org.apache.hadoop.conf.Configuration.(Configuration.java:178)
at org.apache.hadoop.hbase.util.HBaseConfTool.main(HBaseConfTool.java:39)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 2 more
ERROR: Could not determine the startup mode.


1 answer

Run zkServer.sh start on node1 and node2 to start ZooKeeper, then run hbase shell on the master node again.
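
A minimal sketch of the sequence this answer describes, assuming ZooKeeper is installed on node1 and node2 and its bin directory is on the PATH:

```
# On node1 and node2: start the ZooKeeper quorum members
zkServer.sh start
zkServer.sh status    # should report one "leader" and the rest "follower"

# Back on the master node, once ZooKeeper is up (start HBase first if it is not running)
start-hbase.sh
hbase shell
```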

https://blog.csdn.net/lieyanhaipo/article/details/77800517

u013523089
@神武舞 I'm running this on Windows; there is no ZooKeeper or Hadoop here.
Replied more than a year ago
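
For the Windows case in the original question, the ClassNotFoundException points at a missing commons-logging jar on the classpath built by the .cmd scripts, not at ZooKeeper. A hedged check, with the directory layout assumed from a stock hbase-2.1.0 binary tarball (run from Git Bash or adapt to cmd):

```
cd /d/TSBrowserDownloads/db/hbase-2.1.0-bin/hbase-2.1.0   # adjust to your install path

# In HBase 2.x the jar may live in a subfolder rather than directly under lib/
ls lib | grep -i commons-logging
ls lib/client-facing-thirdparty 2>/dev/null | grep -i commons-logging
```

If the jar only exists under the subfolder, the Windows .cmd launchers (which are not actively maintained in HBase 2.x) may simply not add it to the classpath; copying it into lib/ or appending it to HBASE_CLASSPATH is a common workaround, but treat this as an assumption to verify against your own classpath output.
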
Other related questions
Hive on HBase error: creating a mapped table in Hive linked to HBase

Hive on HBase: I create a mapped table in Hive that is linked to HBase, and the HBase row_key has already been declared in Hive, but it still fails with: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=xlwang5, access=EXECUTE, inode="/user/hive/warehouse":hive:hive:drwxrwx--T
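
The AccessControlException points at HDFS permissions on /user/hive/warehouse rather than at the Hive/HBase mapping itself. A hedged diagnostic sketch (user and group names are taken from the error message; whether to widen permissions or switch users depends on your security policy):

```
# Inspect current ownership and mode of the warehouse directory
hdfs dfs -ls -d /user/hive/warehouse

# Either run the DDL as the hive user, add xlwang5 to the hive group,
# or (more bluntly, only if policy allows) widen the mode:
sudo -u hdfs hdfs dfs -chmod -R 775 /user/hive/warehouse
```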

Help: Kettle fails to connect to HBase on cdh510

I've recently been moving data between HBase and Oracle. I used to do this with Sqoop and have switched to Kettle. Everything tests fine on Windows and OS X, both running the job directly from the client and running the generated ktr script. But when I take the generated .ktr file to Ubuntu and run it there, it fails with org.pentaho.di.core.exception.KettleException: Unable to obtain a connection to HBase: null, as shown below. ![screenshot](https://img-ask.csdn.net/upload/201707/12/1499870023_502959.png)

HBase fails to start after upgrading HDFS to HA

016-05-18 13:16:26,751 INFO [master:master2:60000-EventThread] zookeeper.ClientCnxn: EventThread shut down 2016-05-18 13:16:26,752 INFO [master:master2:60000-SendThread(master1:2181)] zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session 0x354c2044e44000d has expired, closing socket connection 2016-05-18 13:16:27,154 INFO [main-SendThread(master3:2181)] zookeeper.ClientCnxn: Opening socket connection to server master3/10.219.83.213:2181. Will not attempt to authenticate using SASL (java.lang.SecurityException: Unable to locate a login configuration) 2016-05-18 13:16:27,155 INFO [main-SendThread(master3:2181)] zookeeper.ClientCnxn: Socket connection established to master3/10.219.83.213:2181, initiating session 2016-05-18 13:16:27,158 INFO [main-SendThread(master3:2181)] zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session 0x254c2058c80001d has expired, closing socket connection 2016-05-18 13:16:27,159 FATAL [main-EventThread] master.HMaster: Master server abort: loaded coprocessors are: [] 2016-05-18 13:16:27,159 INFO [main-EventThread] master.HMaster: Primary Master trying to recover from ZooKeeper session expiry. 2016-05-18 13:16:27,160 INFO [main-EventThread] zookeeper.RecoverableZooKeeper: Closing dead ZooKeeper connection, session was: 0x254c2058c80001d 2016-05-18 13:16:27,160 INFO [main-EventThread] zookeeper.ZooKeeper: Initiating client connection, connectString=DBmaster1:2181,master2:2181,master3:2181,master1:2181,master4:2181 sessionTimeout=30000 watcher=master:60000-0x254c2058c80001d, quorum=DBmaster1:2181,master2:2181,master3:2181,master1:2181,master4:2181, baseZNode=/hbase-unsecure 2016-05-18 13:16:27,163 INFO [main-EventThread] zookeeper.RecoverableZooKeeper: Recreated a ZooKeeper, session is: 0x0 WARN [master:master2:60000-EventThread] client.ConnectionManager$HConnectionImplementation: This client just lost it's session with ZooKeeper, closing it. It will be recreated next time someone needs it org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:448) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:366) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498) 2016-05-18 13:16:27,190 INFO [master:master2:60000-EventThread] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x454c2044d0d0017 2016-05-18 13:16:27,190 INFO [master:master2:60000-EventThread] zookeeper.ClientCnxn: EventThread shut down 2016-05-18 13:16:27,203 ERROR [main-EventThread] master.HMaster: Primary master encountered unexpected exception while trying to recover from ZooKeeper session expiry. Proceeding with server abort. java.util.concurrent.ExecutionException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87) at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1719) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1352)
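
The "Operation category READ is not supported in state standby" part of this log usually means HBase is still pointed at one concrete NameNode that became standby after the HA upgrade, instead of at the HA nameservice. A hedged check (nn1/nn2 and the nameservice name are placeholders for whatever your hdfs-site.xml defines):

```
# Which NameNode is active right now?
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# hbase.rootdir should reference the nameservice (e.g. hdfs://mycluster/hbase),
# not a single host:port such as hdfs://master1:8020/hbase
grep -A1 hbase.rootdir $HBASE_HOME/conf/hbase-site.xml
```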

HBase startup error, looking for help

When the HBase cluster is started over ssh it throws the error below:

java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2706)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2721)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2704)
... 5 more
Caused by: java.io.IOException: No FileSystem for scheme: hdfs
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2799)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1003)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:579)

The error appears when starting via a custom script or bin/start-hbase.sh, but starting each daemon separately on each machine with $ bin/hbase-daemon.sh start master and $ bin/hbase-daemon.sh start regionserver works without any error. Please help!
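
"No FileSystem for scheme: hdfs" commonly means the hadoop-hdfs jar (and its FileSystem service declarations) is not on the classpath HBase was started with. Since the same binaries work when hbase-daemon.sh is run locally on each node but fail when launched over ssh, the environment of the non-interactive ssh session is the likely difference. A hedged comparison, assuming passwordless ssh to a node named node1:

```
# Environment seen by a non-interactive ssh command (this is what start-hbase.sh gets)
ssh node1 'which hadoop; echo $HADOOP_HOME'

# Environment seen by a login shell on the same node
ssh node1 "bash -lc 'which hadoop; echo \$JAVA_HOME; echo \$HADOOP_HOME'"

# If the first comes back empty, export HADOOP_HOME and the Hadoop jars via
# HBASE_CLASSPATH in conf/hbase-env.sh so every node gets them regardless of shell type
```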

Sqoop import into HBase fails

Sqoop import into HBase fails. My versions are Hadoop 3.2.1, HBase 2.2.3 and Sqoop 1.4.7. I know it is a version problem; how do I solve it? I copied all the jars from HBase's lib directory into Sqoop, but that did not fix it.
```
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.HBaseAdmin.<init>(Lorg/apache/hadoop/conf/Configuration;)V
    at org.apache.sqoop.mapreduce.HBaseImportJob.jobSetup(HBaseImportJob.java:163)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:268)
    at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
    at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:127)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:520)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
```
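
The NoSuchMethodError on HBaseAdmin.<init>(Configuration) is consistent with the version mismatch the poster suspects: that constructor was removed in the HBase 2.x client, while Sqoop 1.4.7 still calls it, so copying HBase 2.2.3 jars into Sqoop cannot make the call succeed. A hedged way to confirm which client jars Sqoop actually loads (SQOOP_HOME is an assumption):

```
# What HBase client jars end up on Sqoop's classpath, and what does the cluster run?
ls "$SQOOP_HOME"/lib | grep -i hbase
hbase version
```

Typical workarounds are importing into HDFS/Hive with Sqoop and bulk-loading into HBase separately, or pairing Sqoop with an HBase 1.x client whose API it still matches; which one applies here depends on constraints not stated in the question.
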

Hadoop + HBase error: java.net.UnknownHostException

问题:java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "master":9000; java.net.UnknownHostException; log错误日志 2017-07-13 21:26:45,915 FATAL [master:16000.activeMasterManager] master.HMaster: Failed to become active master java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "master":9000; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:744) at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:409) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1518) at org.apache.hadoop.ipc.Client.call(Client.java:1451) at org.apache.hadoop.ipc.Client.call(Client.java:1412) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy18.setSafeMode(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:666) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy19.setSafeMode(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279) at com.sun.proxy.$Proxy20.setSafeMode(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2596) at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1223) at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1207) at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525) at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971) at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429) at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153) at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693) at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189) at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803) at java.lang.Thread.run(Thread.java:745) Caused by: java.net.UnknownHostException ... 32 more 2017-07-13 21:26:45,924 FATAL [master:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown. 
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "master":9000; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:744) at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:409) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1518) at org.apache.hadoop.ipc.Client.call(Client.java:1451) at org.apache.hadoop.ipc.Client.call(Client.java:1412) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy18.setSafeMode(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:666) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy19.setSafeMode(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279) at com.sun.proxy.$Proxy20.setSafeMode(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2596) at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1223) at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1207) at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525) at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971) at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429) at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153) at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693) at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189) at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803) at java.lang.Thread.run(Thread.java:745) Caused by: java.net.UnknownHostException ... 32 more 2017-07-13 21:26:45,925 INFO [master:16000.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown. 
Additional details:
1. Firewalls are disabled and I am running as root with full privileges.
2. Hadoop starts normally; jps shows the daemons and the web UIs on ports 50070 and 8088 are reachable.
3. ZooKeeper starts normally and shows up in jps.
4. I deleted every Hadoop-related jar under hbase/lib, copied all Hadoop jars from hadoop/share into hbase/lib, and added aws-java-sdk-core-1.11.158.jar and aws-java-sdk-s3-1.11.155.jar.
Versions: 1. hadoop 2.7.2 2. hbase 1.2.6 3. zookeeper 3.4.2
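
"Invalid host name: local host is: (unknown); destination host is: "master":9000" normally means the OS cannot resolve either the local hostname or the "master" name used in fs.defaultFS, so name resolution is worth checking before touching any jars. A hedged sketch to run on the HMaster machine:

```
hostname -f            # does the local hostname resolve at all?
getent hosts master    # does "master" resolve via /etc/hosts or DNS?
cat /etc/hosts         # every node should map master/slave names to real IPs,
                       # and the machine's own hostname must not map only to 127.0.0.1
```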

HBase error when running the list command?

2016-09-21 00:11:28,533 ERROR [main] client.ConnectionManager$HConnectionImplementation: Can't get connection to ZooKeeper: KeeperErrorCode = OperationTimeout
ERROR: KeeperErrorCode = OperationTimeout
Here is some help for this command:
List all tables in hbase. Optional regular expression parameter could be used to filter the output.
Examples:
hbase> list
hbase> list 'abc.*'
hbase> list 'ns:abc.*'
hbase> list 'ns:.*'
hbase(main):002:0>
[root@cluster12 bin]#
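
OperationTimeout from the shell's ZooKeeper connection means the client never obtained a usable session, so checking the quorum directly is a reasonable first step. A hedged sketch (the host name cluster12 is taken from the prompt above and may not be a ZooKeeper server in this cluster):

```
# Is each ZooKeeper server up and answering?
zkServer.sh status
echo ruok | nc cluster12 2181     # expect "imok"; the four-letter words may be disabled on newer ZK

# Can a plain ZooKeeper client see the HBase znode?
zkCli.sh -server cluster12:2181 ls /hbase
```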

HBase starts, but hbase shell reports errors

请大神帮帮忙 hbase启动 OK hbase shell有下问题 # ./bin/hbase shell 2016-04-05 08:53:06,328 ERROR [main] zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 4 attempts 2016-04-05 08:53:06,331 WARN [main] zookeeper.ZKUtil: hconnection-0x1f6917fb0x0, quorum=salve1:2181,master:2181,salve2:2181, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid) org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221) at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:482) at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86) at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833) at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:623) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:422) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450) at org.jruby.javasupport.JavaMethod.invokeStaticDirect(JavaMethod.java:362) at org.jruby.java.invokers.StaticMethodInvoker.call(StaticMethodInvoker.java:58) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169) at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57) at org.jruby.ast.InstAsgnNode.interpret(InstAsgnNode.java:95) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:191) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302) at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:148) at org.jruby.RubyClass.newInstance(RubyClass.java:822) at org.jruby.RubyClass$i$newInstance.call(RubyClass$i$newInstance.gen:65535) at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrNBlock.call(JavaMethod.java:249) 
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135) at usr.local.hadoop.hbase_minus_1_dot_0_dot_3.bin.hirb.__file__(/usr/local/hadoop/hbase-1.0.3/bin/hirb.rb:118) at usr.local.hadoop.hbase_minus_1_dot_0_dot_3.bin.hirb.load(/usr/local/hadoop/hbase-1.0.3/bin/hirb.rb) at org.jruby.Ruby.runScript(Ruby.java:697) at org.jruby.Ruby.runScript(Ruby.java:690) at org.jruby.Ruby.runNormally(Ruby.java:597) at org.jruby.Ruby.runFromMain(Ruby.java:446) at org.jruby.Main.doRunFromMain(Main.java:369) at org.jruby.Main.internalRun(Main.java:258) at org.jruby.Main.run(Main.java:224) at org.jruby.Main.run(Main.java:208) at org.jruby.Main.main(Main.java:188) 2016-04-05 08:53:06,338 ERROR [main] zookeeper.ZooKeeperWatcher: hconnection-0x1f6917fb0x0, quorum=salve1:2181,master:2181,salve2:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221) at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:482) at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86) at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833) at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:623) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:422) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450) at org.jruby.javasupport.JavaMethod.invokeStaticDirect(JavaMethod.java:362) at org.jruby.java.invokers.StaticMethodInvoker.call(StaticMethodInvoker.java:58) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169) at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57) at org.jruby.ast.InstAsgnNode.interpret(InstAsgnNode.java:95) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) at 
org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:191) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302) at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:148) at org.jruby.RubyClass.newInstance(RubyClass.java:822) at org.jruby.RubyClass$i$newInstance.call(RubyClass$i$newInstance.gen:65535) at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrNBlock.call(JavaMethod.java:249) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135) at usr.local.hadoop.hbase_minus_1_dot_0_dot_3.bin.hirb.__file__(/usr/local/hadoop/hbase-1.0.3/bin/hirb.rb:118) at usr.local.hadoop.hbase_minus_1_dot_0_dot_3.bin.hirb.load(/usr/local/hadoop/hbase-1.0.3/bin/hirb.rb) at org.jruby.Ruby.runScript(Ruby.java:697) at org.jruby.Ruby.runScript(Ruby.java:690) at org.jruby.Ruby.runNormally(Ruby.java:597) at org.jruby.Ruby.runFromMain(Ruby.java:446) at org.jruby.Main.doRunFromMain(Main.java:369) at org.jruby.Main.internalRun(Main.java:258) at org.jruby.Main.run(Main.java:224) at org.jruby.Main.run(Main.java:208) at org.jruby.Main.main(Main.java:188) HBase Shell; enter 'help<RETURN>' for list of supported commands. Type "exit<RETURN>" to leave the HBase Shell Version 1.0.3, rf1e1312f9790a7c40f6a4b5a1bab2ea1dd559890, Tue Jan 19 19:26:53 PST 2016

HBase fails to start in a pseudo-distributed installation

Environment: Ubuntu 18.04, hadoop-3.2.1, hbase-2.2.4. While setting up a big-data platform in pseudo-distributed mode, starting HBase reports an error, yet jps looks fine. Hadoop itself has been working without problems, so could this be a version incompatibility between Hadoop and HBase, or a configuration issue? Please take a look. ![screenshot](https://img-ask.csdn.net/upload/202004/04/1585981993_858884.png)

Phoenix reports an error when starting and connecting to HBase

Phoenix keeps failing on startup and I don't know why. The DataNode and NameNode are both alive, and the DataNode log shows the same error. I also tried deleting the data directory and reformatting, but it still fails. Any advice is appreciated.
```
Error: java.util.concurrent.ExecutionException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/data/default/SYSTEM.LOG/1837b9ac241e98a8107767d061aed9cd/.regioninfo could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1547)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:724)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
(state=08000,code=101)
```
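
"could only be replicated to 0 nodes ... There are 1 datanode(s) running" usually means the single DataNode cannot actually accept blocks: its disk is full, its data dir is unusable, or it registered with a stale clusterID after the NameNode was reformatted (which matches the "deleted data and reformatted" attempt above). A hedged HDFS health check; the VERSION file paths below are guesses, so substitute your dfs.namenode.name.dir and dfs.datanode.data.dir:

```
hdfs dfsadmin -report     # remaining capacity per DataNode; 0 B free would explain the error
df -h                     # is the disk actually full?

# If the NameNode was reformatted without wiping the DataNode's data dir,
# the clusterID in the two VERSION files will disagree:
cat /tmp/hadoop-$USER/dfs/name/current/VERSION
cat /tmp/hadoop-$USER/dfs/data/current/VERSION
```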

Connecting to HBase fails when running under Tomcat!!!

As shown in the screenshots, the unit-test code itself runs fine, but after starting Tomcat and calling the same method the connection to HBase cannot be created. Please advise. ![screenshot](https://img-ask.csdn.net/upload/201607/13/1468419918_757967.jpg)![screenshot](https://img-ask.csdn.net/upload/201607/13/1468420024_773845.jpg)![screenshot](https://img-ask.csdn.net/upload/201607/13/1468420029_629464.jpg)

HBase HMaster does not start?

My Hadoop is 3.2.1, ZooKeeper 3.5.6, HBase 2.1.8 and JDK 1.8. ![screenshot](https://img-ask.csdn.net/upload/202002/01/1580544515_862311.png) After installation, starting HBase reports an error: ![screenshot](https://img-ask.csdn.net/upload/202002/01/1580544375_956804.png) jps shows HRegionServer running, but HMaster is not started. ![screenshot](https://img-ask.csdn.net/upload/202002/01/1580544458_915078.png) Does anyone know why? Thanks. The configuration files are below.

hbase-env.sh:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_231
export HBASE_CLASSPATH=/opt/hadoop/hadoop-3.2.1/etc/hadoop
export HBASE_MANAGES_ZK=false

hbase-site.xml:
<property> <name>hbase.rootdir</name> <value>hdfs://node01:9000/hbase</value> </property>
<property> <name>hbase.cluster.distributed</name> <value>true</value> </property>
<property> <name>hbase.zookeeper.quorum</name> <value>node01,node02,node03</value> </property>
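
jps only shows whether a JVM is alive; when HMaster exits right after start-hbase.sh, the reason is in the master log rather than on the console. A hedged way to pull the actual error (log names follow HBase's default pattern; adjust for your install):

```
# Tail the master log on node01 right after starting
tail -n 200 $HBASE_HOME/logs/hbase-*-master-*.log | grep -E -A 5 'ERROR|FATAL'

# Also confirm the rootdir host:port matches fs.defaultFS in the Hadoop config
grep -A1 fs.defaultFS $HADOOP_HOME/etc/hadoop/core-site.xml
grep -A1 hbase.rootdir $HBASE_HOME/conf/hbase-site.xml
```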

Starting HBase on Windows complains that the JDK is not configured properly?

![图片说明](https://img-ask.csdn.net/upload/202003/19/1584581223_467032.png)

Why does my HBase never start successfully?

While setting up HBase, it always fails at the last step when I start it. I have searched Baidu and this site and still haven't solved it. The configuration is as follows.

hbase-env.sh (changed parts only):
# Where log files are stored. $HBASE_HOME/logs by default.
export HBASE_LOG_DIR=/data/logs/hbase
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
# Tell HBase whether it should manage it's own instance of Zookeeper or not
export HBASE_MANAGES_ZK=false
# The java implementation to use. Java 1.6 required.
export JAVA_HOME=/jdk
# Extra Java CLASSPATH elements. Optional.
export HBASE_CLASSPATH=/hadoop/etc/hadoop

hbase-site.xml:
<property> <name>hbase.rootdir</name> <value>hdfs://master:8020/hbase</value> </property>
<property> <name>hbase.cluster.distributed</name> <value>true</value> </property>
<property> <name>base.zookeeper.quorum</name> <value>master,slave1,slave2</value> </property>

I also copied Hadoop's hdfs-site.xml and core-site.xml into the hbase/conf folder, and changed the regionservers file from localhost to slave1 and slave2. The error is:

2020-03-23 07:54:06,860 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2785)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:184)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2799)
Caused by: java.lang.RuntimeException: Unexpected version format: 13.0.1
at org.apache.hadoop.hbase.util.ClassSize.<clinit>(ClassSize.java:118)
at org.apache.hadoop.hbase.ipc.IPCUtil.<init>(IPCUtil.java:68)
at org.apache.hadoop.hbase.ipc.RpcServer.<init>(RpcServer.java:2039)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:437)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:481)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2780)
... 5 more
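
The root cause is visible in the trace itself: "Unexpected version format: 13.0.1" comes from HBase's ClassSize parsing java.version, and this HBase build does not recognize the JDK 13 version string, so the master aborts before any of the site configuration matters. A hedged check and fix sketch (the JDK 8 path is a placeholder):

```
java -version        # currently resolving to JDK 13.0.1
echo $JAVA_HOME      # here /jdk apparently points at JDK 13

# Point HBase at a Java 8 JDK in conf/hbase-env.sh, for example:
# export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
```

Note also that the posted hbase-site.xml spells the quorum property as base.zookeeper.quorum; whether that is a transcription slip or the real file contents is worth verifying once the JVM issue is out of the way.
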

Error when running the status command after HBase starts normally

client.HConnectionManager$HConnectionImplementation: The node /hbase is not in Zookeeper ![error when running status](https://img-ask.csdn.net/upload/201505/13/1431504319_531298.png) The exception is shown in the screenshot. I have already configured hbase-site.xml as shown here: ![hbase-site.xml settings](https://img-ask.csdn.net/upload/201505/13/1431504468_206666.png) My HBase is configured in standalone mode. Please help.

HBase errors out on startup, can anyone help?

Hbase启动时报错,有没有大佬解决下 Sat Apr 18 00:33:38 PDT 2020 Starting master on hadoop01 core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 3804 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 3804 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited 2020-04-18 00:33:40,217 INFO [main] util.VersionInfo: HBase 1.2.1 2020-04-18 00:33:40,218 INFO [main] util.VersionInfo: Source code repository git://asf-dev/home/busbey/projects/hbase revision=8d8a7107dc4ccbf36a92f64675dc60392f85c015 2020-04-18 00:33:40,218 INFO [main] util.VersionInfo: Compiled by busbey on Wed Mar 30 11:19:21 CDT 2016 2020-04-18 00:33:40,218 INFO [main] util.VersionInfo: From source with checksum f4bb4a14bb4e0b72b46f729dae98a772 2020-04-18 00:33:41,174 INFO [main] util.ServerCommandLine: env:HBASE_LOGFILE=hbase-root-master-hadoop01.log 2020-04-18 00:33:41,213 INFO [main] util.ServerCommandLine: env:PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/export/servers/jdk/bin:/export/servers/hadoop-2.9.2/bin:/export/servers/hadoop-2.9.2/sbin:/export/servers/apache-hive-1.2.1-bin/bin:/export/servers/jdk/bin:/export/servers/hadoop-2.9.2/bin:/export/servers/hadoop-2.9.2/sbin:/export/servers/zookeeper-3.4.10/bin:HBASE_CLASSPATH/bin:/root/bin:/export/servers/jdk/bin:/export/servers/hadoop-2.9.2/bin:/export/servers/hadoop-2.9.2/sbin:/export/servers/apache-hive-1.2.1-bin/bin:/export/servers/jdk/bin:/export/servers/hadoop-2.9.2/bin:/export/servers/hadoop-2.9.2/sbin:/export/servers/zookeeper-3.4.10/bin:HBASE_HIVE/bin:/export/servers/jdk/bin:/export/servers/hadoop-2.9.2/bin:/export/servers/hadoop-2.9.2/sbin:/export/servers/apache-hive-1.2.1-bin/bin:/export/servers/jdk/bin:/export/servers/hadoop-2.9.2/bin:/export/servers/hadoop-2.9.2/sbin:/export/servers/zookeeper-3.4.10/bin:HBASE_HIVE/bin 2020-04-18 00:33:41,213 INFO [main] util.ServerCommandLine: env:HISTCONTROL=ignoredups 2020-04-18 00:33:41,213 INFO [main] util.ServerCommandLine: env:HISTSIZE=1000 2020-04-18 00:33:41,213 INFO [main] util.ServerCommandLine: env:HBASE_REGIONSERVER_OPTS= -XX:PermSize=128m -XX:MaxPermSize=128m 2020-04-18 00:33:41,213 INFO [main] util.ServerCommandLine: env:JAVA_HOME=/export/servers/jdk 2020-04-18 00:33:41,214 INFO [main] util.ServerCommandLine: env:TERM=vt100 2020-04-18 00:33:41,214 INFO [main] util.ServerCommandLine: env:LANG=en_US.UTF-8 2020-04-18 00:33:41,214 INFO [main] util.ServerCommandLine: env:G_BROKEN_FILENAMES=1 2020-04-18 00:33:41,214 INFO [main] util.ServerCommandLine: env:SELINUX_LEVEL_REQUESTED= 2020-04-18 00:33:41,214 INFO [main] util.ServerCommandLine: env:SELINUX_ROLE_REQUESTED= 2020-04-18 00:33:41,214 INFO [main] util.ServerCommandLine: env:MAIL=/var/spool/mail/root 2020-04-18 00:33:41,214 INFO [main] util.ServerCommandLine: env:LD_LIBRARY_PATH=:/export/servers/hadoop-2.9.2/lib/native 2020-04-18 00:33:41,220 INFO [main] util.ServerCommandLine: env:LOGNAME=root 2020-04-18 00:33:41,220 INFO [main] util.ServerCommandLine: env:HBASE_REST_OPTS= 2020-04-18 00:33:41,221 INFO [main] util.ServerCommandLine: env:PWD=/export/servers/hbase-1.2.1/bin 2020-04-18 00:33:41,221 INFO [main] util.ServerCommandLine: env:HBASE_ROOT_LOGGER=INFO,RFA 2020-04-18 00:33:41,221 INFO [main] util.ServerCommandLine: 
env:LESSOPEN=||/usr/bin/lesspipe.sh %s 2020-04-18 00:33:41,221 INFO [main] util.ServerCommandLine: env:SHELL=/bin/bash 2020-04-18 00:33:41,221 INFO [main] util.ServerCommandLine: env:ZK_HOME=/export/servers/zookeeper-3.4.10 2020-04-18 00:33:41,221 INFO [main] util.ServerCommandLine: env:SELINUX_USE_CURRENT_RANGE= 2020-04-18 00:33:41,221 INFO [main] util.ServerCommandLine: env:HBASE_ENV_INIT=true 2020-04-18 00:33:41,221 INFO [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=root 2020-04-18 00:33:41,221 INFO [main] util.ServerCommandLine: env:HBASE_ZNODE_FILE=/tmp/hbase-root-master.znode 2020-04-18 00:33:41,221 INFO [main] util.ServerCommandLine: env:SSH_TTY=/dev/pts/1 2020-04-18 00:33:41,222 INFO [main] util.ServerCommandLine: env:SSH_CLIENT=192.168.121.1 50359 22 2020-04-18 00:33:41,222 INFO [main] util.ServerCommandLine: env:HIVE_HOME=/export/servers/apache-hive-1.2.1-bin 2020-04-18 00:33:41,245 INFO [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-root-master-hadoop01 2020-04-18 00:33:41,246 INFO [main] util.ServerCommandLine: env:HBASE_LOG_DIR=/export/servers/hbase-1.2.1/logs 2020-04-18 00:33:41,247 INFO [main] util.ServerCommandLine: env:USER=root 2020-04-18 00:33:41,248 INFO [main] util.ServerCommandLine: env:CLASSPATH=/export/servers/hbase-1.2.1/conf:/export/servers/jdk/lib/tools.jar:/export/servers/hbase-1.2.1:/export/servers/hbase-1.2.1/lib/activation-1.1.jar:/export/servers/hbase-1.2.1/lib/asm-3.1.jar:/export/servers/hbase-1.2.1/lib/avro-1.7.4.jar:/export/servers/hbase-1.2.1/lib/commons-beanutils-1.7.0.jar:/export/servers/hbase-1.2.1/lib/commons-beanutils-core-1.7.0.jar:/export/servers/hbase-1.2.1/lib/commons-cli-1.2.jar:/export/servers/hbase-1.2.1/lib/commons-codec-1.9.jar:/export/servers/hbase-1.2.1/lib/commons-collections-3.2.2.jar:/export/servers/hbase-1.2.1/lib/commons-configuration-1.6.jar:/export/servers/hbase-1.2.1/lib/commons-digester-1.8.jar:/export/servers/hbase-1.2.1/lib/commons-el-1.0.jar:/export/servers/hbase-1.2.1/lib/commons-httpclient-3.1.jar:/export/servers/hbase-1.2.1/lib/commons-io-2.4.jar:/export/servers/hbase-1.2.1/lib/commons-lang-2.6.jar:/export/servers/hbase-1.2.1/lib/commons-logging-1.2.jar:/export/servers/hbase-1.2.1/lib/commons-math3-3.1.1.jar:/export/servers/hbase-1.2.1/lib/commons-net-3.1.jar:/export/servers/hbase-1.2.1/lib/findbugs-annotations-1.3.9-1.jar:/export/servers/hbase-1.2.1/lib/guava-12.0.1.jar:/export/servers/hbase-1.2.1/lib/hadoop-annotations-2.5.1.jar:/export/servers/hbase-1.2.1/lib/hadoop-common-2.5.1.jar:/export/servers/hbase-1.2.1/lib/hbase-annotations-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-annotations-1.2.1-tests.jar:/export/servers/hbase-1.2.1/lib/hbase-client-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-common-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-common-1.2.1-tests.jar:/export/servers/hbase-1.2.1/lib/hbase-examples-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-external-blockcache-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-hadoop2-compat-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-hadoop-compat-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-it-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-it-1.2.1-tests.jar:/export/servers/hbase-1.2.1/lib/hbase-prefix-tree-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-procedure-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-protocol-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-rest-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-server-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-server-1.2.1-tests.jar:/export/servers/hbase-1.2.1/lib/hbase-she
ll-1.2.1.jar:/export/servers/hbase-1.2.1/lib/hbase-thrift-1.2.1.jar:/export/servers/hbase-1.2.1/lib/htrace-core-3.1.0-incubating.jar:/export/servers/hbase-1.2.1/lib/httpclient-4.2.5.jar:/export/servers/hbase-1.2.1/lib/httpcore-4.4.1.jar:/export/servers/hbase-1.2.1/lib/jackson-core-asl-1.9.13.jar:/export/servers/hbase-1.2.1/lib/jackson-jaxrs-1.9.13.jar:/export/servers/hbase-1.2.1/lib/jackson-mapper-asl-1.9.13.jar:/export/servers/hbase-1.2.1/lib/jackson-xc-1.9.13.jar:/export/servers/hbase-1.2.1/lib/jasper-compiler-5.5.23.jar:/export/servers/hbase-1.2.1/lib/jasper-runtime-5.5.23.jar:/export/servers/hbase-1.2.1/lib/java-xmlbuilder-0.4.jar:/export/servers/hbase-1.2.1/lib/jaxb-api-2.2.2.jar:/export/servers/hbase-1.2.1/lib/jaxb-impl-2.2.3-1.jar:/export/servers/hbase-1.2.1/lib/jersey-core-1.9.jar:/export/servers/hbase-1.2.1/lib/jersey-json-1.9.jar:/export/servers/hbase-1.2.1/lib/jersey-server-1.9.jar:/export/servers/hbase-1.2.1/lib/jets3t-0.9.0.jar:/export/servers/hbase-1.2.1/lib/jettison-1.3.3.jar:/export/servers/hbase-1.2.1/lib/jetty-6.1.26.jar:/export/servers/hbase-1.2.1/lib/jetty-util-6.1.26.jar:/export/servers/hbase-1.2.1/lib/jsr305-1.3.9.jar:/export/servers/hbase-1.2.1/lib/junit-4.12.jar:/export/servers/hbase-1.2.1/lib/log4j-1.2.17.jar:/export/servers/hbase-1.2.1/lib/paranamer-2.3.jar:/export/servers/hbase-1.2.1/lib/protobuf-java-2.5.0.jar:/export/servers/hbase-1.2.1/lib/slf4j-api-1.7.7.jar:/export/servers/hbase-1.2.1/lib/slf4j-log4j12-1.7.5.jar:/export/servers/hbase-1.2.1/lib/snappy-java-1.0.4.1.jar:/export/servers/hbase-1.2.1/lib/xmlenc-0.52.jar:/export/servers/hadoop-2.9.2/etc/hadoop:/export/servers/hadoop-2.9.2/share/hadoop/common/lib/*:/export/servers/hadoop-2.9.2/share/hadoop/common/*:/export/servers/hadoop-2.9.2/share/hadoop/hdfs:/export/servers/hadoop-2.9.2/share/hadoop/hdfs/lib/*:/export/servers/hadoop-2.9.2/share/hadoop/hdfs/*:/export/servers/hadoop-2.9.2/share/hadoop/yarn:/export/servers/hadoop-2.9.2/share/hadoop/yarn/lib/*:/export/servers/hadoop-2.9.2/share/hadoop/yarn/*:/export/servers/hadoop-2.9.2/share/hadoop/mapreduce/lib/*:/export/servers/hadoop-2.9.2/share/hadoop/mapreduce/*:/export/servers/hadoop-2.9.2/contrib/capacity-scheduler/*.jar 2020-04-18 00:33:41,248 INFO [main] util.ServerCommandLine: env:HBASE_MASTER_OPTS= -XX:PermSize=128m -XX:MaxPermSize=128m 2020-04-18 00:33:41,249 INFO [main] util.ServerCommandLine: env:HBASE_MANAGES_ZK=false 2020-04-18 00:33:41,249 INFO [main] util.ServerCommandLine: env:SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass 2020-04-18 00:33:41,249 INFO [main] util.ServerCommandLine: env:SSH_CONNECTION=192.168.121.1 50359 192.168.121.134 22 2020-04-18 00:33:41,249 INFO [main] util.ServerCommandLine: env:HOSTNAME=hadoop01 2020-04-18 00:33:41,249 INFO [main] util.ServerCommandLine: env:HADOOP_HOME=/export/servers/hadoop-2.9.2 2020-04-18 00:33:41,249 INFO [main] util.ServerCommandLine: env:HBASE_NICENESS=0 2020-04-18 00:33:41,249 INFO [main] util.ServerCommandLine: env:HBASE_OPTS=-XX:+UseConcMarkSweepGC -XX:PermSize=128m -XX:MaxPermSize=128m -Dhbase.log.dir=/export/servers/hbase-1.2.1/logs -Dhbase.log.file=hbase-root-master-hadoop01.log -Dhbase.home.dir=/export/servers/hbase-1.2.1 -Dhbase.id.str=root -Dhbase.root.logger=INFO,RFA -Djava.library.path=/export/servers/hadoop-2.9.2/lib/native -Dhbase.security.logger=INFO,RFAS 2020-04-18 00:33:41,249 INFO [main] util.ServerCommandLine: env:HBASE_START_FILE=/tmp/hbase-root-master.autorestart 2020-04-18 00:33:41,249 INFO [main] util.ServerCommandLine: env:HBASE_SECURITY_LOGGER=INFO,RFAS 2020-04-18 
00:33:41,250 INFO [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS= 2020-04-18 00:33:41,250 INFO [main] util.ServerCommandLine: env:HBASE_HOME=/export/servers/hbase-1.2.1 2020-04-18 00:33:41,250 INFO [main] util.ServerCommandLine: env:LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36: 2020-04-18 00:33:41,250 INFO [main] util.ServerCommandLine: env:HOME=/root 2020-04-18 00:33:41,250 INFO [main] util.ServerCommandLine: env:SHLVL=4 2020-04-18 00:33:41,250 INFO [main] util.ServerCommandLine: env:MALLOC_ARENA_MAX=4 2020-04-18 00:33:41,254 INFO [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.161-b12 2020-04-18 00:33:41,590 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, -XX:+UseConcMarkSweepGC, -XX:PermSize=128m, -XX:MaxPermSize=128m, -Dhbase.log.dir=/export/servers/hbase-1.2.1/logs, -Dhbase.log.file=hbase-root-master-hadoop01.log, -Dhbase.home.dir=/export/servers/hbase-1.2.1, -Dhbase.id.str=root, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=/export/servers/hadoop-2.9.2/lib/native, -Dhbase.security.logger=INFO,RFAS] 2020-04-18 00:33:42,227 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2020-04-18 00:33:45,999 INFO [main] regionserver.RSRpcServices: master/hadoop01/192.168.121.134:16000 server-side HConnection retries=350 2020-04-18 00:33:46,311 INFO [main] ipc.SimpleRpcScheduler: Using deadline as user call queue, count=3 2020-04-18 00:33:46,371 INFO [main] ipc.RpcServer: master/hadoop01/192.168.121.134:16000: started 10 reader(s) listening on port=16000 2020-04-18 00:33:46,473 INFO [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties 2020-04-18 00:33:46,674 INFO [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 2020-04-18 00:33:46,674 INFO [main] impl.MetricsSystemImpl: HBase metrics system started 2020-04-18 00:33:46,746 ERROR [main] master.HMasterCommandLine: Master exiting java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster. 
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2401) at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:232) at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:138) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2411) Caused by: java.lang.NoClassDefFoundError: com/yammer/metrics/stats/Sample at org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry.newTimeHistogram(DynamicMetricsRegistry.java:305) at org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceImpl.<init>(MetricsHBaseServerSourceImpl.java:99) at org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceFactoryImpl.getSource(MetricsHBaseServerSourceFactoryImpl.java:48) at org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceFactoryImpl.create(MetricsHBaseServerSourceFactoryImpl.java:38) at org.apache.hadoop.hbase.ipc.MetricsHBaseServer.<init>(MetricsHBaseServer.java:39) at org.apache.hadoop.hbase.ipc.RpcServer.<init>(RpcServer.java:2032) at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:923) at org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:230) at org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:517) at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:535) at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:364) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2394) ... 5 more Caused by: java.lang.ClassNotFoundException: com.yammer.metrics.stats.Sample at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 21 more
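
The final cause in this trace is ClassNotFoundException: com.yammer.metrics.stats.Sample, i.e. the Yammer metrics-core jar is not on the classpath that start-hbase.sh assembled; notably, the CLASSPATH dump above lists no metrics-core jar from $HBASE_HOME/lib. A hedged check, assuming a stock hbase-1.2.1 layout:

```
# The jar normally ships with HBase 1.2.x; is it present and does the classpath pick it up?
ls $HBASE_HOME/lib | grep -i metrics
$HBASE_HOME/bin/hbase classpath | tr ':' '\n' | grep -i metrics
```

If it is genuinely missing, restoring metrics-core-2.2.0.jar into $HBASE_HOME/lib (from the original tarball or Maven Central) is the usual remedy; treat the exact version as an assumption to match against your distribution.
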

HBase connection error: RpcRetryingCaller

16/12/22 10:13:42 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, retryTime=46200ms, msg=row 'factoryBatteryData,123456789,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,57557,1482371106831, seqNum=0
This error appears when inserting data into HBase.

HBase starts, but the log file shows an error. Why?

ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2115) at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:152) at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76) at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2129) Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for / hbase at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)

Help: where is HBase's built-in ZooKeeper started and configured?

I'm trying to set up HBase in standalone mode, so I'm using the bundled ZooKeeper. Hadoop is configured, but starting start-hbase.cmd always fails. The error is:

2017-12-06 16:38:11,275 INFO [M:0;WIN-52F19LCA43O:55968] client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
2017-12-06 16:38:18,010 INFO [SessionTracker] server.ZooKeeperServer: Expiring session 0x1602af5c9830003, timeout of 10000ms exceeded
2017-12-06 16:38:18,010 INFO [SessionTracker] server.ZooKeeperServer: Expiring session 0x1602af5c9830001, timeout of 10000ms exceeded
2017-12-06 16:38:18,010 INFO [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor: Processed session termination for sessionid: 0x1602af5c9830003
2017-12-06 16:38:18,010 INFO [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor: Processed session termination for sessionid: 0x1602af5c9830001
2017-12-06 16:38:40,606 FATAL [WIN-52F19LCA43O:55968.activeMasterManager] master.HMaster: Failed to become active master
java.io.IOException: Mkdirs failed to create file:/localhost:9000/.tmp (exists=false, cwd=file:/F:/360Downloads/hbase-1.2.3/bin)

The processes before starting were:
F:\360Downloads\hbase-1.2.3\bin>jps
11232 NameNode
12592 Jps
9412 DataNode
F:\360Downloads\hbase-1.2.3\bin>start-hbase.cmd

It looks like this ZooKeeper never gets connected and stays empty. Please take a look.
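
The line "Mkdirs failed to create file:/localhost:9000/.tmp" suggests hbase.rootdir was set to something like localhost:9000/... without a scheme, so HBase interpreted it as a path on the local file system. A hedged check; paths follow the post, and the commands assume a shell with grep (Git Bash on Windows), otherwise open the file and read the property directly:

```
# What is hbase.rootdir actually set to?
grep -A1 hbase.rootdir /f/360Downloads/hbase-1.2.3/conf/hbase-site.xml
# Expected forms: hdfs://localhost:9000/hbase when backed by the running HDFS,
# or a file:/// URI pointing at a local folder for a purely standalone setup.
```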
