Win10, HBase 2.0, standalone ZooKeeper, single-node deployment

On Windows 10, I deployed a standalone ZooKeeper, disabled HBase's use of its bundled ZooKeeper,
and attempted a single-node deployment:

            Step 1: start only start-dfs.cmd, which starts successfully;
            Step 2: start ZooKeeper, success;
            Step 3: start HBase, which reports "暂未实现,请期待" ("not yet implemented, stay tuned");

            Has anyone else run into this situation?
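
For reference, telling HBase to use an external ZooKeeper instead of the managed one normally comes down to HBASE_MANAGES_ZK=false plus the quorum settings in hbase-site.xml, regardless of OS. The sketch below is not the poster's actual configuration; it assumes ZooKeeper on localhost:2181, HDFS on localhost:9000, and a single-node, pseudo-distributed layout (the usual way to combine one HBase process with an external ZooKeeper):

```
:: conf/hbase-env.cmd -- stop HBase from starting its own ZooKeeper
set HBASE_MANAGES_ZK=false
```

```
<!-- conf/hbase-site.xml -- minimal single-node sketch; paths and ports are assumptions -->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value> <!-- pseudo-distributed, so HBase does not embed ZooKeeper -->
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```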

1 answer

Yes. This kind of standalone deployment is not currently supported on Windows; try another system such as Linux or macOS, where it works.

Other related questions
Help: where is HBase's bundled ZooKeeper started and configured?

I'm trying to set up HBase in standalone mode, so I'm using the bundled ZooKeeper. Hadoop is configured, but running start-hbase.cmd always fails. The error message is: 2017-12-06 16:38:11,275 INFO [M:0;WIN-52F19LCA43O:55968] client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null 2017-12-06 16:38:18,010 INFO [SessionTracker] server.ZooKeeperServer: Expiring session 0x1602af5c9830003, timeout of 10000ms exceeded 2017-12-06 16:38:18,010 INFO [SessionTracker] server.ZooKeeperServer: Expiring session 0x1602af5c9830001, timeout of 10000ms exceeded 2017-12-06 16:38:18,010 INFO [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor : Processed session termination for sessionid: 0x1602af5c9830003 2017-12-06 16:38:18,010 INFO [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor: Processed session termination for sessionid: 0x1602af5c9830001 2017-12-06 16:38:40,606 FATAL [WIN-52F19LCA43O:55968.activeMasterManager] master.HMaster:Failed to become active master java.io.IOException: Mkdirs failed to create file:/localhost:9000/.tmp (exists=false, cwd=file:/F:/360Downloads/hbase-1.2.3/bin) The processes before starting were: F:\360Downloads\hbase-1.2.3\bin>jps 11232 NameNode 12592 Jps 9412 DataNode F:\360Downloads\hbase-1.2.3\bin>start-hbase.cmd It looks like this ZooKeeper has never actually connected; the cluster ID is always null. Could someone please take a look?
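
The FATAL line above ("Mkdirs failed to create file:/localhost:9000/.tmp") suggests hbase.rootdir is being resolved against the local filesystem, which usually means the hdfs:// scheme is missing from its value (or the property is not set at all). A sketch of the setting that would normally be expected here, assuming the NameNode really is listening on localhost:9000:

```
<!-- conf/hbase-site.xml -- sketch only; the address must match fs.defaultFS in core-site.xml -->
<property>
  <name>hbase.rootdir</name>
  <!-- note the hdfs:// scheme; without it the path falls back to file:/ as in the log -->
  <value>hdfs://localhost:9000/hbase</value>
</property>
```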

Connecting to a standalone HBase from Java to work with data

16/03/04 17:09:56 INFO support.ClassPathXmlApplicationContext: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@1f6ae4d: startup date [Fri Mar 04 17:09:56 CST 2016]; root of context hierarchy 16/03/04 17:09:56 INFO xml.XmlBeanDefinitionReader: Loading XML bean definitions from class path resource [applicationContest.xml] 16/03/04 17:09:57 INFO support.DefaultListableBeanFactory: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1d8fe20: defining beans [hbaseConfiguration,htemplate]; root of factory hierarchy 16/03/04 17:09:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 16/03/04 17:09:57 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xc4dc7c connecting to ZooKeeper ensemble=192.168.1.202:2181 16/03/04 17:09:57 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-cdh5.2.0--1, built on 10/11/2014 20:49 GMT 16/03/04 17:09:57 INFO zookeeper.ZooKeeper: Client environment:host.name=xiaoming 16/03/04 17:09:57 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_17 16/03/04 17:09:57 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation 16/03/04 17:09:57 INFO zookeeper.ZooKeeper: Client environment:java.home=C:\Program Files (x86)\Java\jdk1.7.0_17\jre 16/03/04 17:09:57 INFO zookeeper.ZooKeeper: Client environment:java.class.path=F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\classes;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\commons-codec-1.7.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\commons-collections-3.2.1.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\commons-configuration-1.6.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\commons-lang-2.6.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\commons-logging-1.1.1.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\guava-12.0.1.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\hadoop-auth-2.5.0-cdh5.2.0.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\hadoop-common-2.5.0-cdh5.2.0.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\hadoop-core-2.5.0-mr1-cdh5.2.0.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\hbase-client-0.98.6-cdh5.2.0.jar;F:\mess\12Hadoop\workspace\myhbasetest\WebRoot\WEB-INF\lib\hbase-common-0.98.6-cdh5.2.0.jar;F:\mess\12Hadoop\workspace....... 16/03/04 17:09:57 INFO zookeeper.ClientCnxn: Opening socket connection to server 192.168.1.202/192.168.1.202:2181. 
Will not attempt to authenticate using SASL (unknown error) 16/03/04 17:09:57 INFO zookeeper.ClientCnxn: Socket connection established to 192.168.1.202/192.168.1.202:2181, initiating session 16/03/04 17:09:57 INFO zookeeper.ClientCnxn: Session establishment complete on server 192.168.1.202/192.168.1.202:2181, sessionid = 0x15341c9328d0012, negotiated timeout = 40000 16/03/04 17:17:50 WARN client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch hbase:meta table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=31, exceptions: .......Fri Mar 04 17:17:50 CST 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@b7b17a, java.net.UnknownHostException: unknown host: hbase ......at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:129) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:714) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:144) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1140) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1204) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1092) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1049) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:890) at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:72) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:113) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:780) at org.springframework.data.hadoop.hbase.HbaseTemplate$2.doInTable(HbaseTemplate.java:182) at org.springframework.data.hadoop.hbase.HbaseTemplate.execute(HbaseTemplate.java:58) at org.springframework.data.hadoop.hbase.HbaseTemplate.get(HbaseTemplate.java:168) at org.springframework.data.hadoop.hbase.HbaseTemplate.get(HbaseTemplate.java:158) at com.bw.test.Mytest.get(Mytest.java:46) at com.bw.test.Mytest.main(Mytest.java:40) Caused by: java.net.UnknownHostException: unknown host: hbase at org.apache.hadoop.hbase.ipc.RpcClient$Connection.<init>(RpcClient.java:385) at org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:351) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1530) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1442) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:29966) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1562) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:710) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:708) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114) ... 16 more
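
The decisive line in this trace is java.net.UnknownHostException: unknown host: hbase; the client learns the RegionServer's registered hostname ("hbase") from ZooKeeper/hbase:meta and then cannot resolve it, so the usual fix is a hosts entry (or DNS) on the client machine rather than a code change. A minimal client sketch for context (table and row key are hypothetical; the quorum IP is taken from the log; ConnectionFactory is the hbase-client 1.0+ API, while the 0.98 client in the post would use HConnectionManager.createConnection instead):

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

public class HBaseGetExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.1.202");       // from the log above
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        // The client must also resolve the hostname the RegionServer registered with
        // (apparently "hbase" here), e.g. via C:\Windows\System32\drivers\etc\hosts:
        //     192.168.1.202  hbase
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("test"))) { // hypothetical table
            Result result = table.get(new Get("row1".getBytes()));           // hypothetical row key
            System.out.println(result);
        }
    }
}
```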

Problems integrating Hadoop, HBase, and ZooKeeper with Kerberos

Recently I ran into problems integrating the components above with Kerberos, and I really can't find a solution online. I currently have a hadoop-2.5.0 cluster on 3 virtual machines and have successfully integrated HDFS with Kerberos; I then integrated zookeeper-3.4.6 with Kerberos as well. I'm not sure whether that succeeded, but "successfully logged in" appears in the out file. The problem now is that, following some online articles and tutorials, I added the Kerberos-related configuration items to hbase-1.0.3, but it keeps failing and cannot connect to ZooKeeper. After start-hbase.sh, the HMaster starts for a moment, but the RegionServers never come up. I hope someone who understands this, or who has done this integration, can help me figure out how to solve it. Thanks.
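
For reference, the HBase side of a Kerberos setup usually comes down to the properties sketched below on every node (the realm and keytab paths are placeholders, not taken from the post); RegionServers dying while the HMaster briefly starts is often a sign that these properties, the keytabs, or the ZooKeeper client JAAS configuration are missing on the slave nodes:

```
<!-- conf/hbase-site.xml -- sketch with placeholder principal/keytab values -->
<property><name>hbase.security.authentication</name><value>kerberos</value></property>
<property><name>hbase.security.authorization</name><value>true</value></property>
<property><name>hbase.master.kerberos.principal</name><value>hbase/_HOST@EXAMPLE.COM</value></property>
<property><name>hbase.master.keytab.file</name><value>/etc/security/keytabs/hbase.service.keytab</value></property>
<property><name>hbase.regionserver.kerberos.principal</name><value>hbase/_HOST@EXAMPLE.COM</value></property>
<property><name>hbase.regionserver.keytab.file</name><value>/etc/security/keytabs/hbase.service.keytab</value></property>
```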

HBase/ZooKeeper connection seems to be failing, please help

hbase(main):002:0> create 'test','info' 2017-11-26 11:07:32,602 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2017-11-26 11:07:49,822 ERROR [main] zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 4 attempts 2017-11-26 11:07:49,825 WARN [main] zookeeper.ZKUtil: hconnection-0x42b6d0cc, quorum=liubaoxing-precision-workstation-t7500:2182,liubaoxing-slave:2182, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid) org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:220) at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:479) at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:839) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:642) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:411) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:390) at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:271) at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:195) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.jruby.javasupport.JavaConstructor.newInstanceDirect(JavaConstructor.java:275) at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:91) at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:178) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322) at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:182) at org.jruby.java.proxies.ConcreteJavaProxy$2.call(ConcreteJavaProxy.java:48) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322) at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:182) at org.jruby.RubyClass.newInstance(RubyClass.java:829) at org.jruby.RubyClass$i$newInstance.call(RubyClass$i$newInstance.gen:65535) at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneOrNBlock.call(JavaMethod.java:266) at 
org.jruby.java.proxies.ConcreteJavaProxy$3.call(ConcreteJavaProxy.java:144) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169) at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57) at org.jruby.ast.InstAsgnNode.interpret(InstAsgnNode.java:95) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:255) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:223) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:342) at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:212) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:216) at org.jruby.RubyClass.newInstance(RubyClass.java:836) at org.jruby.RubyClass$i$newInstance.call(RubyClass$i$newInstance.gen:65535) at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneOrTwoOrNBlock.call(JavaMethod.java:283) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:203) at org.jruby.ast.CallTwoArgNode.interpret(CallTwoArgNode.java:59) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:190) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:199) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169) at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57) at org.jruby.ast.InstAsgnNode.interpret(InstAsgnNode.java:95) at org.jruby.ast.OpAsgnOrNode.interpret(OpAsgnOrNode.java:100) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:147) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:183) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135) at org.jruby.ast.CallNoArgNode.interpret(CallNoArgNode.java:63) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:147) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:183) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135) at org.jruby.ast.VCallNode.interpret(VCallNode.java:86) at org.jruby.ast.CallSpecialArgNode.interpret(CallSpecialArgNode.java:57) at org.jruby.ast.DAsgnNode.interpret(DAsgnNode.java:110) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:111) at org.jruby.runtime.InterpretedBlock.evalBlockBody(InterpretedBlock.java:374) at 
org.jruby.runtime.InterpretedBlock.yield(InterpretedBlock.java:295) at org.jruby.runtime.InterpretedBlock.yieldSpecific(InterpretedBlock.java:229) at org.jruby.runtime.Block.yieldSpecific(Block.java:99) at org.jruby.ast.ZYieldNode.interpret(ZYieldNode.java:25) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:191) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302) at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144) at org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:153) at org.jruby.ast.FCallNoArgBlockNode.interpret(FCallNoArgBlockNode.java:32) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:255) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:223) at org.jruby.RubyClass.finvoke(RubyClass.java:611) at org.jruby.RubyBasicObject.send(RubyBasicObject.java:2787) at org.jruby.RubyKernel.send(RubyKernel.java:2113) at org.jruby.RubyKernel$s$send.call(RubyKernel$s$send.gen:65535) at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneOrTwoOrThreeOrNBlock.call(JavaMethod.java:300) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:352) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:237) at org.jruby.ast.FCallSpecialArgNode.interpret(FCallSpecialArgNode.java:43) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:111) at org.jruby.runtime.InterpretedBlock.evalBlockBody(InterpretedBlock.java:374) at org.jruby.runtime.InterpretedBlock.yield(InterpretedBlock.java:295) at org.jruby.runtime.InterpretedBlock.yieldSpecific(InterpretedBlock.java:229) at org.jruby.runtime.Block.yieldSpecific(Block.java:99) at org.jruby.ast.ZYieldNode.interpret(ZYieldNode.java:25) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
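
One detail worth checking in this log: the quorum string uses port 2182 rather than the default 2181, so the client-side setting and the ZooKeeper servers must agree on that port. A sketch of the two places that have to match, assuming 2182 really is intended:

```
# conf/zoo.cfg on each ZooKeeper host
clientPort=2182
```

```
<!-- conf/hbase-site.xml on the HBase/client side -->
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2182</value>
</property>
```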

None of the HBase tables can be accessed, and the master reports an error

org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/master already exists and this is not a retry
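
"Node /hbase/master already exists and this is not a retry" usually points at a stale active-master znode left over from an unclean shutdown (or a second master still holding it). With all HBase daemons stopped, one common (and destructive) way to inspect and clear it is via the ZooKeeper CLI; the quorum address below is a placeholder:

```
# bin/zkCli.sh -server localhost:2181
ls /hbase           # inspect the HBase znodes
get /hbase/master   # see which server currently claims to be active master
rmr /hbase/master   # remove the stale znode (ZooKeeper 3.4 syntax; use 'deleteall' on 3.5+)
```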

Connecting to an HBase cluster from Java

Connecting to HBase from Java, the code hangs at HBaseAdmin admin1 = new HBaseAdmin(conf1);
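
new HBaseAdmin(conf) appearing to hang almost always means the client is silently retrying against an unreachable ZooKeeper quorum or master. A sketch (the quorum host is a placeholder; HBaseAdmin is deprecated in 1.x and removed in 2.x, where Connection.getAdmin() is used instead) that lowers the retry settings so the underlying exception surfaces quickly instead of hanging:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class AdminConnectTest {
    public static void main(String[] args) throws Exception {
        Configuration conf1 = HBaseConfiguration.create();
        conf1.set("hbase.zookeeper.quorum", "192.168.1.100");      // placeholder quorum host
        conf1.set("hbase.zookeeper.property.clientPort", "2181");

        // Fail fast instead of retrying for many minutes, so the real error becomes visible.
        conf1.setInt("hbase.client.retries.number", 3);
        conf1.setInt("zookeeper.recovery.retry", 1);
        conf1.setInt("hbase.rpc.timeout", 10000);

        HBaseAdmin admin1 = new HBaseAdmin(conf1);                 // same call as in the question
        System.out.println("master running: " + admin1.isMasterRunning());
        admin1.close();
    }
}
```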

Integrating Hive with HBase: ZooKeeper connections are not released

I integrated Hive with HBase by mapping HBase tables into Hive. Querying those tables in Hive creates ZooKeeper connections, but Hive never releases them, so once the connections are used up everything blocks. Looking for a solution. The Hive version is apache-hive-1.2.1-bin.

HMaster and HRegionServer shut down by themselves a few dozen seconds after HBase starts

## **/etc/hostname on master:**
```
master
```
## **/etc/sysconfig/network on master:**
```
# Created by anaconda
NETWORKING=yes
HOSTNAME=master
```
## **/etc/hosts:**
```
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.241.235 master
192.168.241.236 slave1
192.168.241.237 slave2
192.168.241.238 slave3
```
## **Lines added to /etc/profile:**
```
export JAVA_HOME=/usr/java/jdk1.8.0_112
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

export HADOOP_HOME=/root/Hadoop/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export HBASE_HOME=/root/Hbase/hbase-1.2.4
export PATH=$PATH:$HBASE_HOME/bin
```
## **regionservers file:**
```
master
slave1
slave2
slave3
```
## **hbase-env.sh:**
```
export JAVA_HOME=/usr/java/jdk1.8.0_112
export HBASE_CLASSPATH=/root/Hadoop/hadoop-2.7.3/etc/hadoop
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_MANAGES_ZK=true
```
## **hbase-site.xml:**
```
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
    <description>HDFS address of the Hadoop cluster</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>whether to run in distributed (cluster) mode</description>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/root/Hbase/hbase-1.2.4/tmp</value>
  </property>
  <property>
    <name>hbase.master</name> # specifies the HBase cluster's master node
    <value>master:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>slave1,slave2,slave3</value>
    <description>list of ZooKeeper quorum hostnames</description>
  </property>
  <property>
    <name>hbase.master.maxclockskew</name>
    <value>180000</value>
    <description>Time difference of regionserver from master</description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/root/Hbase/hbase-1.2.4/zookeeper_data</value>
  </property>
</configuration>
```
## **Excerpt from logs/hbase-root-master-master:**
```
2017-01-06 14:35:21,969 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/root/Hadoop/hadoop-2.7.3/lib/native 2017-01-06 14:35:21,969 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp 2017-01-06 14:35:21,969 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA> 2017-01-06 14:35:21,969 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux 2017-01-06 14:35:21,969 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64 2017-01-06 14:35:21,969 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64 2017-01-06 14:35:21,969 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=root 2017-01-06 14:35:21,969 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/root 2017-01-06 14:35:21,969 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/root/Hbase/hbase-1.2.4/logs 2017-01-06 14:35:21,971 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=slave1:2181,slave2:2181,slave3:2181 sessionTimeout=90000 watcher=master:160000x0, quorum=slave1:2181,slave2:2181,slave3:2181, baseZNode=/hbase 2017-01-06 14:35:22,048 INFO [main-SendThread(slave2:2181)] zookeeper.ClientCnxn: Opening socket connection to server slave2/192.168.241.237:2181.
Will not attempt to authenticate using SASL (unknown error) 2017-01-06 14:35:22,080 WARN [main-SendThread(slave2:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.NoRouteToHostException: No route to host at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) 2017-01-06 14:35:22,230 INFO [main-SendThread(slave1:2181)] zookeeper.ClientCnxn: Opening socket connection to server slave1/192.168.241.236:2181. Will not attempt to authenticate using SASL (unknown error) 2017-01-06 14:35:23,239 WARN [main-SendThread(slave1:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.NoRouteToHostException: No route to host at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) 2017-01-06 14:35:23,341 INFO [main-SendThread(slave3:2181)] zookeeper.ClientCnxn: Opening socket connection to server slave3/192.168.241.238:2181. Will not attempt to authenticate using SASL (unknown error) 2017-01-06 14:35:23,342 WARN [main-SendThread(slave3:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.NoRouteToHostException: No route to host at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ......此处省略N次重复 Caused by: org.apache.hadoop.hbase.ZooKeeperConnectionException: master:160000x0, quorum=slave1:2181,slave2:2181,slave3:2181, baseZNode=/hbase Unexpected KeeperException creating base node at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:206) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:187) at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:585) at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:381) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2419) ... 
5 more Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createNonSequential(RecoverableZooKeeper.java:565) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.create(RecoverableZooKeeper.java:544) at org.apache.hadoop.hbase.zookeeper.ZKUtil.createWithParents(ZKUtil.java:1204) at org.apache.hadoop.hbase.zookeeper.ZKUtil.createWithParents(ZKUtil.java:1182) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:194) ... 13 more 2017-01-06 14:35:39,329 WARN [main-SendThread(slave1:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.NoRouteToHostException: No route to host at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ```
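
java.net.NoRouteToHostException against every quorum host, even though the hostnames resolve to the right IPs, is almost always a firewall on the ZooKeeper nodes rather than an HBase setting (the log shows CentOS 7, where firewalld blocks 2181 by default). A sketch of the usual check and fix on each of slave1/slave2/slave3, assuming firewalld:

```
systemctl status firewalld                        # is the firewall running?
firewall-cmd --permanent --add-port=2181/tcp      # open the ZooKeeper client port
firewall-cmd --reload
# or, just for a quick test: systemctl stop firewalld
```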

ZooKeeper cannot be reached on port 2181, even though the firewall is already off

2016-04-18 22:03:05,165 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=slave1:2181,master:2181 sessionTimeout=90000 watcher=master:160200x0, quorum=slave1:2181,master:2181, baseZNode=/hbase 2016-04-18 22:03:05,175 INFO [main-SendThread(slave1:2181)] zookeeper.ClientCnxn: Opening socket connection to server slave1/192.168.1.11:2181. Will not attempt to authenticate using SASL (unknown error) 2016-04-18 22:03:05,178 WARN [main-SendThread(slave1:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: 拒绝连接 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ....... 2016-04-18 22:03:22,107 WARN [main] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=slave1:2181,master:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase 2016-04-18 22:03:22,107 INFO [main-SendThread(master:2181)] zookeeper.ClientCnxn: Opening socket connection to server master/192.168.1.10:2181. Will not attempt to authenticate using SASL (unknown error) 2016-04-18 22:03:22,107 ERROR [main] zookeeper.RecoverableZooKeeper: ZooKeeper create failed after 4 attempts 2016-04-18 22:03:22,107 ERROR [main] master.HMasterCommandLine: Master exiting java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2002) at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:203) at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2016) Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createNonSequential(RecoverableZooKeeper.java:575) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.create(RecoverableZooKeeper.java:554) at org.apache.hadoop.hbase.zookeeper.ZKUtil.createWithParents(ZKUtil.java:1258) at org.apache.hadoop.hbase.zookeeper.ZKUtil.createWithParents(ZKUtil.java:1236) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:179) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:172) at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:531) at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:333) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at 
org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1997) ... 5 more 2016-04-18 22:03:22,107 WARN [main-SendThread(master:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: 拒绝连接 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
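
Unlike the "no route to host" case above, "Connection refused" (拒绝连接) means the host was reachable but nothing was listening on port 2181, so the thing to verify is that a ZooKeeper process is actually running on every quorum member (master and slave1) before starting HBase. A quick check, sketched with the standard four-letter-word probe:

```
jps                            # HQuorumPeer (HBase-managed ZK) or QuorumPeerMain should appear
echo ruok | nc master 2181     # a healthy ZooKeeper answers "imok"
echo ruok | nc slave1 2181
```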

HBase was just configured and won't start, please help

It won't start and there is no process. 127.0.0.1 is already configured in hosts. ZooKeeper starts and its status shows leader/follower, Hadoop starts, and Sqoop can import data. The HBase log is in the answer below.

Phoenix 5.0 + HBase 2.0.1: after creating a secondary index, inserting data fails

1. Created a test table. 2. Added some data. ![screenshot](https://img-ask.csdn.net/upload/201811/10/1541845481_780198.jpg) 3. Added a global secondary index. 4. Adding or deleting data now reports an error!! Is it an index problem, or does something need to be configured? I haven't found any material on this so far. After dropping the global secondary index, adds and deletes succeed! The error screenshot is below: ![screenshot](https://img-ask.csdn.net/upload/201811/10/1541845463_544281.jpg) Environment details: Phoenix 5.0, HBase 2.0.1. The error text is as follows: Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: Failed to build index for unexpected reason! at org.apache.phoenix.hbase.index.util.IndexManagementUtil.rethrowIndexingException(IndexManagementUtil.java:206) at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:351) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:1010) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:1007) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1007) at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.prepareMiniBatchOperations(HRegion.java:3466) at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3875) at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3833) at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3764) at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027) at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959) at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922) at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Caused by: java.lang.VerifyError: org/apache/phoenix/hbase/index/covered/data/IndexMemStore$1 at org.apache.phoenix.hbase.index.covered.data.IndexMemStore.<init>(IndexMemStore.java:82) at org.apache.phoenix.hbase.index.covered.LocalTableState.<init>(LocalTableState.java:57) at org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.getIndexUpdate(NonTxIndexBuilder.java:52) at org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:90) at org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:503) at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:348) ... 18 more : 1 time, servers with issues: hbase,16020,1542002334452 (state=,code=0)

Help: errors when building Hadoop + HBase + ZooKeeper on 3 virtual machines in a simulated environment

The nginx-1 VM and the rabbit-3 VM are configured identically, but HBase on rabbit-3 fails. nginx-1 screenshots: ![screenshot](https://img-ask.csdn.net/upload/201507/14/1436845765_938415.png) ![screenshot](https://img-ask.csdn.net/upload/201507/14/1436845781_938145.png) rabbit-3 screenshots: ![screenshot](https://img-ask.csdn.net/upload/201507/14/1436845841_287438.png) ![screenshot](https://img-ask.csdn.net/upload/201507/14/1436845845_43867.png) ![screenshot](https://img-ask.csdn.net/upload/201507/14/1436845852_513624.png) I have done this three times and hit the same problem every time; I'm at my wits' end. Please help.

The hbase:acl table is missing from HBase: how can it be recovered or regenerated automatically?

My HBase no longer has the hbase:acl table, and it is also gone from under the table directory in ZooKeeper. How can I get the hbase:acl table back? I hope someone who knows can give me an answer, thanks!!!

Testing an HBase connection from Java: cannot connect, it times out

I installed standalone HBase 1.2.6 on Linux, using HBase's own ZooKeeper. The Linux firewall has ports 16010 and 2181 open. hbase shell works fine and the web UI is reachable from a browser, but when I test the connection from Java it never connects. When I turn the Linux firewall off, Java can connect to HBase normally. What is going on? Does the Linux firewall need other ports opened as well?
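
Opening only 16010 (the master web UI) and 2181 is indeed not enough for a Java client: after the ZooKeeper lookup, the client talks RPC directly to the master and the RegionServer, which default to 16000 and 16020 in HBase 1.x. A sketch of the additional rules this usually needs, assuming firewalld and default ports:

```
firewall-cmd --permanent --add-port=16000/tcp   # HMaster RPC
firewall-cmd --permanent --add-port=16020/tcp   # RegionServer RPC
firewall-cmd --reload
```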

HBase HMaster shuts itself down shortly after starting

I've been learning HBase recently. After configuring it according to a tutorial, I found that after starting with hbase-daemon.sh start master, the HMaster process exits by itself after a while. The log is as follows: 2015-10-25 04:38:32,322 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase 2015-10-25 04:38:32,322 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries 2015-10-25 04:38:32,322 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2115) at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:152) at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76) at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2129) Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1069) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:220) at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:1111) at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:1101) at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:1085) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:164) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:157) at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:348) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2110) ... 5 more Could an expert take a look at the cause? I've tried many solutions found online but still cannot resolve it.

HBase starts with errors: hbase shell

Please help: HBase itself starts OK, but hbase shell has the following problem: # ./bin/hbase shell 2016-04-05 08:53:06,328 ERROR [main] zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 4 attempts 2016-04-05 08:53:06,331 WARN [main] zookeeper.ZKUtil: hconnection-0x1f6917fb0x0, quorum=salve1:2181,master:2181,salve2:2181, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid) org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221) at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:482) at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86) at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833) at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:623) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:422) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450) at org.jruby.javasupport.JavaMethod.invokeStaticDirect(JavaMethod.java:362) at org.jruby.java.invokers.StaticMethodInvoker.call(StaticMethodInvoker.java:58) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169) at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57) at org.jruby.ast.InstAsgnNode.interpret(InstAsgnNode.java:95) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:191) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302) at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:148) at org.jruby.RubyClass.newInstance(RubyClass.java:822) at org.jruby.RubyClass$i$newInstance.call(RubyClass$i$newInstance.gen:65535) at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrNBlock.call(JavaMethod.java:249)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135) at usr.local.hadoop.hbase_minus_1_dot_0_dot_3.bin.hirb.__file__(/usr/local/hadoop/hbase-1.0.3/bin/hirb.rb:118) at usr.local.hadoop.hbase_minus_1_dot_0_dot_3.bin.hirb.load(/usr/local/hadoop/hbase-1.0.3/bin/hirb.rb) at org.jruby.Ruby.runScript(Ruby.java:697) at org.jruby.Ruby.runScript(Ruby.java:690) at org.jruby.Ruby.runNormally(Ruby.java:597) at org.jruby.Ruby.runFromMain(Ruby.java:446) at org.jruby.Main.doRunFromMain(Main.java:369) at org.jruby.Main.internalRun(Main.java:258) at org.jruby.Main.run(Main.java:224) at org.jruby.Main.run(Main.java:208) at org.jruby.Main.main(Main.java:188) 2016-04-05 08:53:06,338 ERROR [main] zookeeper.ZooKeeperWatcher: hconnection-0x1f6917fb0x0, quorum=salve1:2181,master:2181,salve2:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221) at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:482) at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86) at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833) at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:623) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:422) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450) at org.jruby.javasupport.JavaMethod.invokeStaticDirect(JavaMethod.java:362) at org.jruby.java.invokers.StaticMethodInvoker.call(StaticMethodInvoker.java:58) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169) at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57) at org.jruby.ast.InstAsgnNode.interpret(InstAsgnNode.java:95) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) at 
org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:191) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302) at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:148) at org.jruby.RubyClass.newInstance(RubyClass.java:822) at org.jruby.RubyClass$i$newInstance.call(RubyClass$i$newInstance.gen:65535) at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrNBlock.call(JavaMethod.java:249) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135) at usr.local.hadoop.hbase_minus_1_dot_0_dot_3.bin.hirb.__file__(/usr/local/hadoop/hbase-1.0.3/bin/hirb.rb:118) at usr.local.hadoop.hbase_minus_1_dot_0_dot_3.bin.hirb.load(/usr/local/hadoop/hbase-1.0.3/bin/hirb.rb) at org.jruby.Ruby.runScript(Ruby.java:697) at org.jruby.Ruby.runScript(Ruby.java:690) at org.jruby.Ruby.runNormally(Ruby.java:597) at org.jruby.Ruby.runFromMain(Ruby.java:446) at org.jruby.Main.doRunFromMain(Main.java:369) at org.jruby.Main.internalRun(Main.java:258) at org.jruby.Main.run(Main.java:224) at org.jruby.Main.run(Main.java:208) at org.jruby.Main.main(Main.java:188) HBase Shell; enter 'help<RETURN>' for list of supported commands. Type "exit<RETURN>" to leave the HBase Shell Version 1.0.3, rf1e1312f9790a7c40f6a4b5a1bab2ea1dd559890, Tue Jan 19 19:26:53 PST 2016

Questions about data import with Sqoop2

Scenario: importing data from Oracle into HDFS via Sqoop2. Question 1) When an Oracle table has a BLOB column, type conversion fails ("Integer cannot be cast to BigDecimal"); the odd thing is that this also happens for tables without NUMBER columns. Question 2) Does Sqoop2 support importing data into HBase? There is no HBase connector template among the connectors. Please share explanations of these two problems or links to relevant material. Note: Sqoop2 material only, please do not post Sqoop1 material.

HBase exits automatically after starting

日志 如下 2017-11-22 15:59:03,632 INFO [main] util.ServerCommandLine: env:CVS_RSH=ssh 2017-11-22 15:59:03,632 INFO [main] util.ServerCommandLine: env:HBASE_MANAGES_ZK=false 2017-11-22 15:59:03,632 INFO [main] util.ServerCommandLine: env:G_BROKEN_FILENAMES=1 2017-11-22 15:59:03,632 INFO [main] util.ServerCommandLine: env:HBASE_NICENESS=0 2017-11-22 15:59:03,632 INFO [main] util.ServerCommandLine: env:HBASE_REST_OPTS= 2017-11-22 15:59:03,632 INFO [main] util.ServerCommandLine: env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279) at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1977) at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118) at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114) at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400) at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:436) at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153) at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693) at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189) at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803) at java.lang.Thread.run(Thread.java:745) 2017-11-30 15:30:29,684 INFO [master1:16000.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown. 
2017-11-30 15:30:32,555 INFO [master/master1/172.17.153.117:16000] ipc.RpcServer: Stopping server on 16000 2017-11-30 15:30:32,555 INFO [RpcServer.listener,port=16000] ipc.RpcServer: RpcServer.listener,port=16000: stopping 2017-11-30 15:30:32,556 INFO [master/master1/172.17.153.117:16000] regionserver.HRegionServer: Stopping infoServer 2017-11-30 15:30:32,557 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped 2017-11-30 15:30:32,557 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping 2017-11-30 15:30:32,565 INFO [master/master1/172.17.153.117:16000] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:6123 2017-11-30 15:30:32,565 INFO [master/master1/172.17.153.117:16000] regionserver.HRegionServer: stopping server master1,16000,1512027026091 2017-11-30 15:30:32,565 INFO [master/master1/172.17.153.117:16000] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x15fe8b9c7df000a 2017-11-30 15:30:32,570 INFO [master/master1/172.17.153.117:16000-EventThread] zookeeper.ClientCnxn: EventThread shut down 2017-11-30 15:30:32,570 INFO [master/master1/172.17.153.117:16000] zookeeper.ZooKeeper: Session: 0x15fe8b9c7df000a closed 2017-11-30 15:30:32,572 INFO [master/master1/172.17.153.117:16000] regionserver.HRegionServer: stopping server master1,16000,1512027026091; all regions closed. 2017-11-30 15:30:32,573 INFO [master/master1/172.17.153.117:16000] hbase.ChoreService: Chore service for: master1,16000,1512027026091 had [] on shutdown 2017-11-30 15:30:32,578 INFO [master/master1/172.17.153.117:16000] ipc.RpcServer: Stopping server on 16000 2017-11-30 15:30:32,587 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down 2017-11-30 15:30:32,587 INFO [master/master1/172.17.153.117:16000] zookeeper.ZooKeeper: Session: 0x45fe8b9c7260005 closed 2017-11-30 15:30:32,587 INFO [master/master1/172.17.153.117:16000] regionserver.HRegionServer: stopping server master1,16000,1512027026091; zookeeper connection closed. 配置文件 <configuration> <property> <name>hbase.rootdir</name> <value>hdfs://172.17.153.117:8020/hbase</value> </property> <property> <name>hbase.tmp.dir</name> <value>file:/root/app/hbase-1.0.0/tmp</value> </property> <property> <name>hbase.cluster.distributed</name> <value>true</value> </property> <property> <name>hbase.zookeeper.quorum</name> <value>172.17.153.117,master2,slave1,slave2,slave3</value> </property> <property> <name>hbase.zookeeper.property.clientPort</name> <value>2181</value> </property> <property> <name>hbase.zookeeper.property.dataDir</name> <value>/root/app/zookeeper-3.4.5/var/data</value> </property> <property> <name>hbase.master.info.port</name> <value>6123</value> <description>The port the HBase Master should bind to.</description> </property> </configuration>
