Where is spring-hadoop.xsd?

Urgent help needed: where can I find spring-hadoop.xsd?

2 Answers

(The first answer was posted as an image.)

fascinatingGirl (replying to FantasticGirlisMe): Thanks!
(replied nearly 4 years ago)

The .xsd files are usually packaged inside the jar itself.
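A quick way to verify this for Spring for Apache Hadoop: the schema ships inside the spring-data-hadoop jar, and the jar's META-INF/spring.schemas file maps the public schema URL to that bundled copy, so XML parsing works offline. A minimal sketch, assuming a Maven-style jar name (adjust the version to whatever your build pulls in):

```sh
# List the jar and look for the bundled schema plus the spring.schemas mapping.
# The jar name/version below is an example, not a required one.
jar tf spring-data-hadoop-2.0.0.RELEASE.jar | grep -iE 'xsd|spring\.schemas'

# Print the URL-to-jar-path mapping Spring uses to resolve the XSD offline:
unzip -p spring-data-hadoop-2.0.0.RELEASE.jar META-INF/spring.schemas
```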

Other related questions
Help wanted: upgrading hadoop-2.2.0 to hadoop-2.6.0
I need to upgrade Hadoop from hadoop-2.2.0 to hadoop-2.6.0. Following the steps at http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#dfsadmin_-rollingUpgrade, the very first step, ./bin/hdfs dfsadmin -rollingUPgrade prepare, already fails with: PREPARE rolling upgrade ... rollingUpgrade: Unknown method rollingUpgrade called on org.apache.hadoop.hdfs.protocol.ClientProtocol protocol.
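For what it's worth, the likely cause is that the rolling-upgrade RPC was only added to HDFS in 2.4.0, so a 2.2.0 NameNode simply has no rollingUpgrade method to call, which is exactly the "Unknown method" error above. From 2.2.0 the usual route is a classic (non-rolling) upgrade instead; a rough sketch, assuming a tarball install where you swap binaries in place:

```sh
# Classic HDFS upgrade path (rolling upgrade is not supported below 2.4.0).
stop-dfs.sh                      # stop the 2.2.0 cluster
# ... unpack hadoop-2.6.0 and point it at the same dfs.namenode.name.dir ...
start-dfs.sh -upgrade            # 2.6.0 starts and upgrades the on-disk layout
hdfs dfsadmin -finalizeUpgrade   # finalize once satisfied; no rollback afterwards
```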
Version conflict with HBase-0.96.2-hadoop2 on Hadoop 2.4.0
My Hadoop environment is Hadoop 2.4.0 and my HBase is HBase-0.96.2-hadoop2. Today I wrote a program using the HBase API, and running it throws the following error:
2014-09-01 18:16:00,247 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2014-09-01 18:16:00,283 ERROR [main] util.Shell (Shell.java:getWinUtilsPath(303)) - Failed to locate the winutils binary in the hadoop binary path java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries. at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278) at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300) at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293) at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76) at org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1514) at org.apache.hadoop.hbase.zookeeper.ZKConfig.makeZKProps(ZKConfig.java:113) at org.apache.hadoop.hbase.zookeeper.ZKConfig.getZKQuorumServersString(ZKConfig.java:265) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:134) at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1710) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:82) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:806) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:633) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:387) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:366) at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:247) at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:183) at cn.haha.HBase.HBaseApp1.main(HBaseApp1.java:26) 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:host.name=Admin-PC 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.version=1.7.0_65 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.vendor=Oracle Corporation 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.home=C:\workDir\jdk7u65\jre 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client
environment:java.class.path=E:\workDir\workspace_eclipse\HBase-0.96\bin;E:\workDir\workspace_eclipse\HBase-0.96\lib\activation-1.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\aopalliance-1.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\asm-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\avro-1.7.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-beanutils-1.7.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-beanutils-core-1.8.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-cli-1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-codec-1.7.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-collections-3.2.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-compress-1.4.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-configuration-1.6.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-daemon-1.0.13.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-digester-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-el-1.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-httpclient-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-io-2.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-lang-2.6.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-logging-1.1.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-math-2.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-net-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\findbugs-annotations-1.3.9-1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\gmbal-api-only-3.0.0-b023.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-framework-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-http-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-http-server-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-http-servlet-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-rcm-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\guava-12.0.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\guice-3.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\guice-servlet-3.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-annotations-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-auth-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-client-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-hdfs-2.2.0-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-hdfs-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-app-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-core-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-jobclient-2.2.0-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-jobclient-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-shuffle-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-api-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-client-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-server-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-server-nodemanager-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hamcrest-core-1.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-client-0.96.2-hadoop2.jar
;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-common-0.96.2-hadoop2-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-common-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-examples-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-hadoop-compat-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-hadoop2-compat-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-it-0.96.2-hadoop2-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-it-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-prefix-tree-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-protocol-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-server-0.96.2-hadoop2-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-server-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-shell-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-testing-util-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-thrift-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\htrace-core-2.04.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\httpclient-4.1.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\httpcore-4.1.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-core-asl-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-jaxrs-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-mapper-asl-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-xc-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jamon-runtime-2.3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jasper-compiler-5.5.23.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jasper-runtime-5.5.23.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\javax.inject-1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\javax.servlet-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\javax.servlet-api-3.0.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jaxb-api-2.2.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jaxb-impl-2.2.3-1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-client-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-core-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-grizzly2-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-guice-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-json-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-server-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-test-framework-core-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-test-framework-grizzly2-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jets3t-0.6.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jettison-1.3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jetty-6.1.26.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jetty-sslengine-6.1.26.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jetty-util-6.1.26.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jruby-complete-1.6.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsch-0.1.42.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsp-2.1-6.1.14.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsp-api-2.1-6.1.14.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsr305-1.3.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\junit-4.11.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\libthrift-0.9.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\log4j-1.2.17.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\management-api-3.
0.0-b012.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\metrics-core-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\netty-3.6.6.Final.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\paranamer-2.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\protobuf-java-2.5.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\servlet-api-2.5-6.1.14.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\slf4j-api-1.6.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\slf4j-log4j12-1.6.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\snappy-java-1.0.4.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\xmlenc-0.52.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\xz-1.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\zookeeper-3.4.5.jar 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.library.path=C:\workDir\jdk7u65\bin;C:\windows\Sun\Java\bin;C:\windows\system32;C:\windows;C:/workDir/jdk7u65/bin/../jre/bin/client;C:/workDir/jdk7u65/bin/../jre/bin;C:/workDir/jdk7u65/bin/../jre/lib/i386;C:\Program Files (x86)\Common Files\NetSarang;C:\workDir\jdk7u65\bin;E:\workDir\apache-tomcat-7.0.55;E:\workDir\apache-tomcat-7.0.55;%CATALINA_HOME%\common\lib\common\lib\bin;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS Client\;C:\windows\system32;C:\windows;C:\windows\System32\Wbem;C:\windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Program Files\Lenovo\Fingerprint Manager Pro\;C:\Program Files (x86)\Intel\OpenCL SDK\3.0\bin\x86;C:\Program Files (x86)\Intel\OpenCL SDK\3.0\bin\x64;C:\Program Files (x86)\IDM Computer Solutions\UltraEdit\;E:\workDir\eclipse-indigo-3.7.2;;. 
2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\ 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.compiler=<NA> 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.name=Windows 7 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.arch=x86 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.version=6.1 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.name=Administrator 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.home=C:\Users\Administrator 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.dir=E:\workDir\workspace_eclipse\HBase-0.96 2014-09-01 18:16:00,299 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 sessionTimeout=90000 watcher=hconnection-0xde1f90, quorum=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181, baseZNode=/hbase 2014-09-01 18:16:00,326 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0xde1f90 connecting to ZooKeeper ensemble=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 2014-09-01 18:16:00,328 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server hadoop2.slave01/192.168.100.51:2181. Will not attempt to authenticate using SASL (unknown error) 2014-09-01 18:16:00,329 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to hadoop2.slave01/192.168.100.51:2181, initiating session 2014-09-01 18:16:00,335 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server hadoop2.slave01/192.168.100.51:2181, sessionid = 0x1482f4c45e2001c, negotiated timeout = 40000 2014-09-01 18:16:00,472 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0xde1f90, quorum=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181, baseZNode=/hbase 2014-09-01 18:16:00,473 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xde1f90 connecting to ZooKeeper ensemble=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 2014-09-01 18:16:00,474 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server hadoop2.slave01/192.168.100.51:2181. 
Will not attempt to authenticate using SASL (unknown error) 2014-09-01 18:16:00,474 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to hadoop2.slave01/192.168.100.51:2181, initiating session 2014-09-01 18:16:00,478 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server hadoop2.slave01/192.168.100.51:2181, sessionid = 0x1482f4c45e2001d, negotiated timeout = 40000 2014-09-01 18:16:00,499 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available 2014-09-01 18:16:00,817 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1482f4c45e2001d closed 2014-09-01 18:16:00,817 INFO [main-EventThread] zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down 2014-09-01 18:16:01,288 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0xde1f90, quorum=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181, baseZNode=/hbase 2014-09-01 18:16:01,290 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xde1f90 connecting to ZooKeeper ensemble=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 2014-09-01 18:16:01,290 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server hadoop2.slave01/192.168.100.51:2181. Will not attempt to authenticate using SASL (unknown error) 2014-09-01 18:16:01,291 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to hadoop2.slave01/192.168.100.51:2181, initiating session 2014-09-01 18:16:01,294 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server hadoop2.slave01/192.168.100.51:2181, sessionid = 0x1482f4c45e2001e, negotiated timeout = 40000 2014-09-01 18:16:01,304 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1482f4c45e2001e closed 2014-09-01 18:16:01,304 INFO [main-EventThread] zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
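The first ERROR in that log is the classic symptom of running Hadoop client code on Windows: Shell looks for %HADOOP_HOME%\bin\winutils.exe, HADOOP_HOME is unset, and the path degenerates to "null\bin\winutils.exe". A sketch of the usual workaround, with example paths only (Git Bash syntax, cmd equivalents in comments):

```sh
# Point HADOOP_HOME at a directory that contains bin/winutils.exe matching your
# Hadoop version, then restart Eclipse / the JVM. Paths here are examples.
export HADOOP_HOME=/c/workDir/hadoop-2.4.0   # cmd: setx HADOOP_HOME C:\workDir\hadoop-2.4.0
export PATH="$HADOOP_HOME/bin:$PATH"
# Or set it programmatically before any FileSystem call:
#   System.setProperty("hadoop.home.dir", "C:\\workDir\\hadoop-2.4.0");
```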
hadoop-2.6.0 test cases fail on Ubuntu
Results : Failed tests: TestTableMapping.testClearingCachedMappings:144 expected:</[rack1]> but was:</[default-rack]> TestTableMapping.testTableCaching:79 expected:</[rack1]> but was:</[default-rack]> TestTableMapping.testResolve:56 expected:</[rack1]> but was:</[default-rack]> TestDecayRpcScheduler.testAccumulate:136 expected:<3> but was:<2> TestDecayRpcScheduler.testPriority:203 expected:<2> but was:<1> Tests run: 2723, Failures: 5, Errors: 0, Skipped: 91 [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop Main ................................ SUCCESS [4.300s] [INFO] Apache Hadoop Project POM ......................... SUCCESS [2.250s] [INFO] Apache Hadoop Annotations ......................... SUCCESS [7.805s] [INFO] Apache Hadoop Assemblies .......................... SUCCESS [1.006s] [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [8.227s] [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [9.390s] [INFO] Apache Hadoop MiniKDC ............................. SUCCESS [22.836s] [INFO] Apache Hadoop Auth ................................ SUCCESS [40.704s] [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [4.181s] [INFO] Apache Hadoop Common .............................. FAILURE [27:26.889s] [INFO] Apache Hadoop NFS ................................. SKIPPED [INFO] Apache Hadoop KMS ................................. SKIPPED [INFO] Apache Hadoop Common Project ...................... SKIPPED [INFO] Apache Hadoop HDFS ................................ SKIPPED [INFO] Apache Hadoop HttpFS .............................. SKIPPED [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED [INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED [INFO] Apache Hadoop HDFS Project ........................ SKIPPED [INFO] hadoop-yarn ....................................... SKIPPED [INFO] hadoop-yarn-api ................................... SKIPPED [INFO] hadoop-yarn-common ................................ SKIPPED [INFO] hadoop-yarn-server ................................ SKIPPED [INFO] hadoop-yarn-server-common ......................... SKIPPED [INFO] hadoop-yarn-server-nodemanager .................... SKIPPED [INFO] hadoop-yarn-server-web-proxy ...................... SKIPPED [INFO] hadoop-yarn-server-applicationhistoryservice ...... SKIPPED [INFO] hadoop-yarn-server-resourcemanager ................ SKIPPED [INFO] hadoop-yarn-server-tests .......................... SKIPPED [INFO] hadoop-yarn-client ................................ SKIPPED [INFO] hadoop-yarn-applications .......................... SKIPPED [INFO] hadoop-yarn-applications-distributedshell ......... SKIPPED [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SKIPPED [INFO] hadoop-yarn-site .................................. SKIPPED [INFO] hadoop-yarn-registry .............................. SKIPPED [INFO] hadoop-yarn-project ............................... SKIPPED [INFO] hadoop-mapreduce-client ........................... SKIPPED [INFO] hadoop-mapreduce-client-core ...................... SKIPPED [INFO] hadoop-mapreduce-client-common .................... SKIPPED [INFO] hadoop-mapreduce-client-shuffle ................... SKIPPED [INFO] hadoop-mapreduce-client-app ....................... SKIPPED [INFO] hadoop-mapreduce-client-hs ........................ SKIPPED [INFO] hadoop-mapreduce-client-jobclient ................. SKIPPED [INFO] hadoop-mapreduce-client-hs-plugins ................ 
SKIPPED [INFO] Apache Hadoop MapReduce Examples .................. SKIPPED [INFO] hadoop-mapreduce .................................. SKIPPED [INFO] Apache Hadoop MapReduce Streaming ................. SKIPPED [INFO] Apache Hadoop Distributed Copy .................... SKIPPED [INFO] Apache Hadoop Archives ............................ SKIPPED [INFO] Apache Hadoop Rumen ............................... SKIPPED [INFO] Apache Hadoop Gridmix ............................. SKIPPED [INFO] Apache Hadoop Data Join ........................... SKIPPED [INFO] Apache Hadoop Ant Tasks ........................... SKIPPED [INFO] Apache Hadoop Extras .............................. SKIPPED [INFO] Apache Hadoop Pipes ............................... SKIPPED [INFO] Apache Hadoop OpenStack support ................... SKIPPED [INFO] Apache Hadoop Amazon Web Services support ......... SKIPPED [INFO] Apache Hadoop Client .............................. SKIPPED [INFO] Apache Hadoop Mini-Cluster ........................ SKIPPED [INFO] Apache Hadoop Scheduler Load Simulator ............ SKIPPED [INFO] Apache Hadoop Tools Dist .......................... SKIPPED [INFO] Apache Hadoop Tools ............................... SKIPPED [INFO] Apache Hadoop Distribution ........................ SKIPPED [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 29:15.906s [INFO] Finished at: Tue Jun 09 01:10:59 CST 2015 [INFO] Final Memory: 65M/202M [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-common: There are test failures. [ERROR] [ERROR] Please refer to /home/cj/workspace/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/target/surefire-reports for the individual test results. [ERROR] -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException [ERROR] [ERROR] After correcting the problems, you can resume the build with the command [ERROR] mvn <goals> -rf :hadoop-common
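Those five failures appear to be environment- and timing-sensitive tests (TestTableMapping expects /rack1 but the machine resolves to /default-rack; TestDecayRpcScheduler asserts on call counts within a time window), and they don't necessarily mean the build itself is broken. If the goal is a usable distribution rather than a green test run, skipping tests is the usual move; a sketch using the standard flags documented in BUILDING.txt:

```sh
# Build the 2.6.0 distribution tarball without running the unit tests.
cd /home/cj/workspace/hadoop-2.6.0-src
mvn clean package -Pdist -DskipTests -Dtar
# Or resume the aborted reactor from where it failed, still skipping tests:
mvn package -DskipTests -rf :hadoop-common
```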
Setting up hadoop-common-2.8.2, error: org.apache.htrace.core.Tracer$Builder.<init>(Ljava/lang/String;)V
Error: Exception in thread "main" java.lang.NoSuchMethodError: org.apache.htrace.core.Tracer$Builder.<init>(Ljava/lang/String;)V at org.apache.hadoop.fs.FsTracer.get(FsTracer.java:42) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2806) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:181) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373) at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(FileInputFormat.java:526) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPaths(FileInputFormat.java:491) at com.knn.c.SortMR.main(SortMR.java:65)
I have tried several versions of the htrace jar (htrace-core-3.0.4, htrace-core-3.1.0-incubating, htrace-core-4.0.0-incubating, and htrace-core-2.00), but the error persists. This problem is driving me crazy; I am completely stuck here.
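A pointer that may help: in Hadoop 2.8.x the tracing code moved to HTrace 4, where Tracer lives in the org.apache.htrace.core package and the Maven artifact is htrace-core4, not htrace-core. None of the four jars listed above contain that class, which is why swapping them changes nothing. Hadoop 2.8.2 should ship the right jar itself; a sketch for checking, with paths assumed:

```sh
# Hadoop 2.8.x needs htrace-core4 (org.apache.htrace.core.Tracer lives only there).
ls "$HADOOP_HOME/share/hadoop/common/lib/" | grep -i htrace
# Expect something like htrace-core4-4.x.x-incubating.jar; make sure that jar,
# and not an old htrace-core-3.x, is on the classpath of the job you launch.
```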
Installing Impala on Apache hadoop-2.5.0
I have hit a problem installing Impala. My system is CentOS 6.5, running a hadoop-2.5.0 cluster (plain Apache, no CDH, no CM), with hive-1.2.1 and MySQL deployed. I don't know how to deploy Impala on top of Apache Hadoop, and switching to CDH Hadoop is not an option. Does anyone have documentation or a method for installing a compatible Impala version on hadoop-2.5.0?
start-dfs.sh: command not found when starting Hadoop
[root@sparkproject1 sbin]# start-dfs.sh
-bash: start-dfs.sh: command not found
JAVA_HOME is already configured in hadoop-env.sh, and hadoop version prints the version number.
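"command not found" only means the shell cannot find the script on PATH; since the prompt shows you were already in sbin, the immediate fix is calling it with an explicit path, and the durable fix is adding Hadoop's bin/sbin to PATH. A sketch, with the install prefix as an example:

```sh
# Immediate: run the script via an explicit path from the sbin directory.
./start-dfs.sh

# Durable: put Hadoop's bin and sbin on PATH (install location is an example).
echo 'export HADOOP_HOME=/usr/local/hadoop' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin' >> /etc/profile
source /etc/profile
```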
New to Hadoop: start-all.sh problem
Hadoop version: hadoop-2.6.5
Environment: ![screenshot](https://img-ask.csdn.net/upload/201711/08/1510104898_307684.png)
HADOOP_HOME also differs per machine; each install sits under its own user's home directory, e.g.:
/home/yann/hadoop
/home/ubuntu01/hadoop
/home/ubuntu02/hadoop
When I start the cluster, start-all.sh gives me the following:
yann@yann-laptop:~/develop/tool/hadoop-2.6.5/sbin$ ./start-all.sh This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh Starting namenodes on [yann-laptop] yann-laptop: namenode running as process 7041. Stop it first. yann@ubuntu01-virtual-machine's password: The authenticity of host 'ubuntu02-virtual-machine (192.168.2.182)' can't be established. ECDSA key fingerprint is 57:ee:92:a8:85:85:ef:16:26:a3:b7:1d:54:77:19:18. Are you sure you want to continue connecting (yes/no)?
hadoop/etc/hadoop/slaves:
ubuntu01-virtual-machine
ubuntu02-virtual-machine
The SSH public keys are already set up:
ssh-rsa XXXXX......XXXXXX yann@yann-laptop
ssh-rsa XXXXX......XXXXXX ubuntu01@ubuntu01-virtual-machine
ssh-rsa XXXXX......XXXXXX ubuntu02@ubuntu02-virtual-machine
Judging from the prompt, it used the user yann to connect to ubuntu01-virtual-machine. How should this be configured when master and slaves have different usernames?
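The start scripts just ssh to every host listed in slaves as the current user, which is why yann was used for ubuntu01-virtual-machine. You can either write user@host entries in slaves or, more cleanly, pin a per-host login user in ~/.ssh/config on the master; a sketch:

```sh
# On the master (yann-laptop): tell ssh which user to use for each slave.
cat >> ~/.ssh/config <<'EOF'
Host ubuntu01-virtual-machine
    User ubuntu01
Host ubuntu02-virtual-machine
    User ubuntu02
EOF
chmod 600 ~/.ssh/config
# Accept each host key once so start-all.sh is not blocked by the yes/no prompt:
ssh ubuntu01-virtual-machine true && ssh ubuntu02-virtual-machine true
```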
Building hadoop-2.5.0-rc1 from source: package com.sun.javadoc does not exist
Building and installing hadoop-2.5.0-rc1 fails with the error below; any pointers appreciated: [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) on project hadoop-annotations: Compilation failure: Compilation failure: [ERROR] /usr/local/src/release-2.5.0-rc1/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsJDiffDoclet.java:[20,22] error: package com.sun.javadoc does not exist
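The com.sun.javadoc package ships in the JDK's tools.jar, so this error usually means the build is running on a JRE, or on a JAVA_HOME without tools.jar. Checking that first is cheap; a sketch with an example JDK path:

```sh
# Verify JAVA_HOME points at a full JDK, not a JRE.
echo "$JAVA_HOME"
ls "$JAVA_HOME/lib/tools.jar"   # must exist; it contains com.sun.javadoc

# If missing, point JAVA_HOME at a real JDK (path is an example) and rebuild:
export JAVA_HOME=/usr/local/jdk1.7.0_79
mvn clean package -DskipTests
```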
hadoop start-all.sh problem
Hadoop version: hadoop-2.6.5
Environment: ![screenshot](https://img-ask.csdn.net/upload/201711/08/1510127732_32215.png)
HADOOP_HOME also differs per machine; each install sits under its own user's home directory, e.g. /home/yann/hadoop, /home/ubuntu01/hadoop, /home/ubuntu02/hadoop.
When I run start-all.sh it reports the following:
yann@yann-laptop:~/develop/tool/hadoop-2.6.5/sbin$ ./start-all.sh This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh Starting namenodes on [yann-laptop] yann-laptop: namenode running as process 7041. Stop it first. yann@ubuntu01-virtual-machine's password: The authenticity of host 'ubuntu02-virtual-machine (192.168.2.182)' can't be established. ECDSA key fingerprint is 57:ee:92:a8:85:85:ef:16:26:a3:b7:1d:54:77:19:18. Are you sure you want to continue connecting (yes/no)?
hadoop/etc/hadoop/slaves: ubuntu01-virtual-machine ubuntu02-virtual-machine
SSH public keys are set up:
ssh-rsa XXXXX......XXXXXX yann@yann-laptop
ssh-rsa XXXXX......XXXXXX ubuntu01@ubuntu01-virtual-machine
ssh-rsa XXXXX......XXXXXX ubuntu02@ubuntu02-virtual-machine
Judging from the prompt, it used yann to connect to ubuntu01-virtual-machine. How should this be configured, or is such a setup discouraged? See the sketch after this question.
(1) Different usernames on master and slaves. (When I changed the slaves file to ubuntu01@ubuntu01-virtual-machine, the problem above went away, but then it went looking for /home/yann/hadoop on the ubuntu01 machine, which of course does not exist, so it reported the directory missing; that leads to point 2 below.)
(2) Different Hadoop install directories. (It seems many people install Hadoop under /usr on both master and slaves.)
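On point (2): the daemon scripts ssh to each slave and execute commands built from the master's own paths, so the layout does need to resolve identically on every node. You don't have to install under /usr, but the same absolute path must exist everywhere; a real directory or a symlink both work. A sketch with this cluster's paths:

```sh
# On each slave, make the master's HADOOP_HOME path resolve to the local install:
sudo mkdir -p /home/yann
sudo ln -s /home/ubuntu01/hadoop /home/yann/hadoop   # run on ubuntu01's machine
# (same idea on ubuntu02). Simpler long-term: use one identical install path,
# e.g. /usr/local/hadoop, on every node.
```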
Error building hadoop-2.5.2-src on OS X
[exec] /pein/hadoop/hadoop-2.5.2-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/vecsum.c:61:9: warning: implicit declaration of function 'clock_gettime' is invalid in C99 [-Wimplicit-function-declaration] [exec] if (clock_gettime(CLOCK_MONOTONIC, &watch->start)) { [exec] ^ [exec] /pein/hadoop/hadoop-2.5.2-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/vecsum.c:61:23: error: use of undeclared identifier 'CLOCK_MONOTONIC' [exec] if (clock_gettime(CLOCK_MONOTONIC, &watch->start)) { [exec] ^ [exec] /pein/hadoop/hadoop-2.5.2-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/vecsum.c:79:23: error: use of undeclared identifier 'CLOCK_MONOTONIC' [exec] if (clock_gettime(CLOCK_MONOTONIC, &watch->stop)) { [exec] ^ [exec] 1 warning and 2 errors generated. [exec] make[2]: *** [CMakeFiles/test_libhdfs_vecsum.dir/main/native/libhdfs/test/vecsum.c.o] Error 1 [exec] make[1]: *** [CMakeFiles/test_libhdfs_vecsum.dir/all] Error 2 [exec] make: *** [all] Error 2
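clock_gettime() and CLOCK_MONOTONIC were not available on OS X until macOS 10.12, so the libhdfs vecsum test cannot compile there as written. Since only the native test code breaks, the usual workaround is to build without the native profile (or patch vecsum.c to use mach_absolute_time); a sketch:

```sh
# Build the distribution without native code; vecsum.c only compiles under -Pnative.
cd /pein/hadoop/hadoop-2.5.2-src
mvn clean package -Pdist -DskipTests -Dtar
```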
How do I upgrade CDH Hadoop 2.6.0-cdh5.4.5 to 2.6.0-cdh5.13.0?
Our production cluster currently runs Hadoop 2.6.0-cdh5.4.5 and we want to upgrade to the higher 2.6.0-cdh5.13.0. The cluster is not managed by CM; it was installed entirely from binary tarballs. Only HDFS and Hive run on it, nothing else. How should we carry out the upgrade?
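cdh5.4.5 is based on Hadoop 2.6, which already supports HDFS rolling upgrades, so a tarball install can in principle be walked to cdh5.13.0 with the standard rollingUpgrade flow (back up the NameNode metadata first, and read the CDH release notes between those versions). A compressed sketch of the HDFS side only:

```sh
hdfs dfsadmin -rollingUpgrade prepare    # creates the rollback image
hdfs dfsadmin -rollingUpgrade query      # wait for "Proceed with rolling upgrade"
# Per node: stop the daemon, swap in the cdh5.13.0 binaries, restart it
# (restart NameNodes with: hdfs namenode -rollingUpgrade started), then:
hdfs dfsadmin -rollingUpgrade finalize
```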
Hadoop version conflict when Spark reads .lzo files from HDFS
Could some kind soul provide an lzo-hadoop.jar that supports hadoop-2.6, or a workaround? I want to use Spark to read *.lzo compressed files from HDFS, but my current lzo-hadoop.jar only supports hadoop-1.2.1. Very urgent, waiting online! Email: island_lonely@163.com
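There is no official per-Hadoop-release lzo jar; the usual approach on Hadoop 2.x is to build hadoop-lzo yourself against your Hadoop version. The Twitter fork builds with Maven and needs the native lzo library installed first. A sketch, where the version property name is my reading of that project's pom and worth double-checking against its README:

```sh
# Prerequisite: lzo headers (e.g. `yum install lzo-devel` or `apt install liblzo2-dev`).
git clone https://github.com/twitter/hadoop-lzo.git
cd hadoop-lzo
mvn clean package -DskipTests -Dhadoop.current.version=2.6.0
# The jar lands in target/; add it plus the target/native libs to Spark's classpath.
```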
Warning on HDFS startup: WARN util.NativeCodeLoader
Starting HDFS prints the following warning:
[hadoop@hadoop tmp]$ start-dfs.sh 15/02/02 20:39:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Starting namenodes on [hadoop] hadoop: starting namenode, logging to /usr/hadoop-2.5.2/logs/hadoop-hadoop-namenode-hadoop.out localhost: starting datanode, logging to /usr/hadoop-2.5.2/logs/hadoop-hadoop-datanode-hadoop.out Starting secondary namenodes [0.0.0.0] 0.0.0.0: starting secondarynamenode, logging to /usr/hadoop-2.5.2/logs/hadoop-hadoop-secondarynamenode-hadoop.out 15/02/02 20:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Posts online say it is a Linux version problem.
[hadoop@hadoop ~]$ uname -a Linux hadoop 2.6.32-431.el6.i686 #1 SMP Fri Nov 22 00:26:36 UTC 2013 i686 i686 i386 GNU/Linux
JDK version:
[hadoop@hadoop ~]$ java -version java version "1.6.0_24" Java(TM) SE Runtime Environment (build 1.6.0_24-b07) Java HotSpot(TM) Client VM (build 19.1-b02, mixed mode, sharing)
Hadoop version:
[hadoop@hadoop ~]$ hadoop version Hadoop 2.5.2 Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc72e9b000545b86b75a61f4835eb86d57bfafc0 Compiled by jenkins on 2014-11-14T23:45Z Compiled with protoc 2.5.0 From source with checksum df7537a4faa4658983d397abf4514320 This command was run using /usr/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2.jar
Could an expert please take a look?
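This warning is generally harmless: it means the bundled libhadoop.so could not be loaded and Hadoop fell back to pure-Java implementations. Here the uname output shows a 32-bit (i686) system, and the precompiled native library in the Apache tarball most likely does not match that platform, so it can never load; either rebuild the native libs for your platform or live with the warning. To see exactly what is loadable:

```sh
# Report which native libraries this Hadoop build can actually load:
hadoop checknative -a
# If you build matching natives yourself, make sure the loader can find them:
export HADOOP_COMMON_LIB_NATIVE_DIR="$HADOOP_HOME/lib/native"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
```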
Building hadoop-2.6.0 on 64-bit Ubuntu fails; I have searched for ages without a fix, please help
[exec] CMake Error at /usr/local/share/cmake-2.6/Modules/FindPackageHandleStandardArgs.cmake:52 (MESSAGE): [exec] Could NOT find ZLIB [exec] Call Stack (most recent call first): [exec] /usr/local/share/cmake-2.6/Modules/FindZLIB.cmake:22 (FIND_PACKAGE_HANDLE_STANDARD_ARGS) [exec] CMakeLists.txt:107 (find_package) [ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project hadoop-common: An Ant BuildException has occured: exec returned: 1 [ERROR] around Ant part ...<exec dir="/home/cj/workspace/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/target/native" executable="cmake" failonerror="true">... @ 4:139 in /home/cj/workspace/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/target/antrun/build-main.xml [ERROR] -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException [ERROR] [ERROR] After correcting the problems, you can resume the build with the command [ERROR] mvn <goals> -rf :hadoop-common
It seems to be caused by "Could NOT find ZLIB".
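That reading is right: the native build's CMake check cannot find zlib's development headers. Installing them (plus the other common native-build dependencies) before rerunning the build is the usual fix; package names below are for Ubuntu/Debian and may vary by release:

```sh
# Native-profile prerequisites on Ubuntu:
sudo apt-get install zlib1g-dev libssl-dev build-essential cmake
# Then resume the build from the failed module:
mvn package -Pdist,native -DskipTests -Dtar -rf :hadoop-common
```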
Running hadoop-2.6 MapReduce locally from Eclipse fails, please help
The error message is: 2016-02-26 11:24:07,722 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1174)) - session.id is deprecated. Instead, use dfs.metrics.session-id 2016-02-26 11:24:07,727 INFO [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId= 2016-02-26 11:24:08,081 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(171)) - No job jar file set. User classes may not be found. See Job or Job#setJar(String). 2016-02-26 11:24:08,091 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(252)) - Cleaning up the staging area file:/tmp/hadoop-fire/mapred/staging/fire1322517587/.staging/job_local1322517587_0001 2016-02-26 11:24:08,095 WARN [main] security.UserGroupInformation (UserGroupInformation.java:doAs(1674)) - PriviledgedActionException as:fire (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/fire/dedup_in Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/fire/dedup_in at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387) at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:304) at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:321) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:199) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325) at com.hebut.mr.Dedup.main(Dedup.java:135)
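The telling detail is the scheme in the path: file:/user/fire/dedup_in. Run from Eclipse without the cluster's core-site.xml on the classpath, the job uses the local filesystem, so it looks for the input directory on the local disk rather than in HDFS. Two common fixes, sketched (the NameNode URI and source path are examples):

```sh
# Option 1: stay in local mode and create the input directory locally.
sudo mkdir -p /user/fire/dedup_in && sudo chown -R "$USER" /user/fire
cp ~/dedup_input/* /user/fire/dedup_in/   # source path is an example

# Option 2: address HDFS explicitly from the driver code, e.g.:
#   FileInputFormat.addInputPath(job, new Path("hdfs://namenode:9000/user/fire/dedup_in"));
# or put the cluster's core-site.xml/hdfs-site.xml on the Eclipse classpath.
```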
start-yarn.sh fails on my Hadoop cluster, probably a JDK issue; advice please
The JDK used to be installed at /usr/local/jdk1.7.0_80; later I created a jdk folder and moved it inside, so it is now at /usr/local/jdk/jdk1.7.0_80. JAVA_HOME in /etc/profile has been updated and the file re-sourced. ![screenshot](https://img-ask.csdn.net/upload/201902/21/1550740813_416088.png) which java is also correct: ![screenshot](https://img-ask.csdn.net/upload/201902/21/1550740890_610489.png) JAVA_HOME in hadoop-env.sh has been updated as well: ![screenshot](https://img-ask.csdn.net/upload/201902/21/1550741528_856329.png) Running start-dfs.sh first works fine: ![screenshot](https://img-ask.csdn.net/upload/201902/21/1550741211_41850.jpg) but start-yarn.sh then fails: ![screenshot](https://img-ask.csdn.net/upload/201902/21/1550741335_349468.png) Please advise.
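Note that start-dfs.sh and start-yarn.sh pick up JAVA_HOME from different files (hadoop-env.sh vs yarn-env.sh), so after moving the JDK it is common for the old path to survive in yarn-env.sh, or in the config on another node. Grepping everything for the old location usually finds the culprit; a sketch (the pattern matches only the old path, since the new path contains the same directory name):

```sh
# Find leftovers of the old JDK path in Hadoop config and shell profiles:
grep -rn "usr/local/jdk1.7.0_80" "$HADOOP_HOME/etc/hadoop/" /etc/profile ~/.bashrc
# start-yarn.sh reads yarn-env.sh, so check its JAVA_HOME explicitly:
grep -n JAVA_HOME "$HADOOP_HOME/etc/hadoop/yarn-env.sh"
# Fix, then copy the corrected files to every node in the cluster.
```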
Problems integrating Azkaban with Hadoop 2.5.1
Using Hadoop from /usr/local/hadoop-suite/hadoop Using Hive from /usr/local/hadoop-suite/hive bin/.. /usr/local/jdk/lib/tools.jar:/usr/local/jdk/lib/dt.jar:bin/../lib/azkaban-common-2.6.4.jar:bin/../lib/azkaban-webserver-2.6.4.jar:bin/../lib/commons-codec-1.9.jar:bin/../lib/commons-collections-3.2.1.jar:bin/../lib/commons-configuration-1.8.jar:bin/../lib/commons-dbcp-1.4.jar:bin/../lib/commons-dbutils-1.5.jar:bin/../lib/commons-email-1.2.jar:bin/../lib/commons-fileupload-1.2.1.jar:bin/../lib/commons-io-2.4.jar:bin/../lib/commons-jexl-2.1.1.jar:bin/../lib/commons-lang-2.6.jar:bin/../lib/commons-logging-1.1.1.jar:bin/../lib/commons-pool-1.6.jar:bin/../lib/data-1.15.7.jar:bin/../lib/gradle-plugins-1.15.7.jar:bin/../lib/guava-13.0.1.jar:bin/../lib/h2-1.3.170.jar:bin/../lib/httpclient-4.2.1.jar:bin/../lib/httpcore-4.2.1.jar:bin/../lib/jackson-core-2.3.2.jar:bin/../lib/jackson-core-asl-1.9.5.jar:bin/../lib/jackson-mapper-asl-1.9.5.jar:bin/../lib/jetty-6.1.26.jar:bin/../lib/jetty-util-6.1.26.jar:bin/../lib/joda-time-2.0.jar:bin/../lib/jopt-simple-4.3.jar:bin/../lib/li-jersey-uri-1.15.7.jar:bin/../lib/log4j-1.2.16.jar:bin/../lib/mail-1.4.5.jar:bin/../lib/mysql-connector-java-5.1.28.jar:bin/../lib/parseq-1.3.7.jar:bin/../lib/pegasus-common-1.15.7.jar:bin/../lib/r2-1.15.7.jar:bin/../lib/restli-common-1.15.7.jar:bin/../lib/restli-server-1.15.7.jar:bin/../lib/servlet-api-2.5.jar:bin/../lib/slf4j-api-1.6.1.jar:bin/../lib/velocity-1.7.jar:bin/../lib/velocity-tools-2.0.jar:bin/../extlib/azkaban-common-2.6.4.jar:bin/../extlib/azkaban-execserver-2.6.4.jar:bin/../extlib/azkaban-webserver-2.6.4.jar:bin/../extlib/commons-cli-1.2.jar:bin/../extlib/hadoop-auth-2.5.1.jar:bin/../extlib/hadoop-common-2.5.1.jar:bin/../extlib/hadoop-hdfs-2.5.1.jar:bin/../extlib/hive-cli-0.13.1.jar:bin/../extlib/hive-common-0.13.1.jar:bin/../extlib/hive-exec-0.13.1.jar:bin/../extlib/jackson-core-asl-1.9.5.jar:bin/../extlib/jackson-mapper-asl-1.9.5.jar:bin/../extlib/log4j-1.2.16.jar:bin/../extlib/protobuf-java-2.5.0.jar:bin/../extlib/servlet-api-2.5.jar:bin/../extlib/slf4j-api-1.6.1.jar:bin/../extlib/slf4j-log4j12-1.6.4.jar:bin/../extlib/velocity-1.7.jar:bin/../extlib/velocity-tools-2.0.jar:bin/../plugins/*/*.jar:/usr/local/hadoop-suite/hadoop/conf:/usr/local/hadoop-suite/hadoop/*:/usr/local/hadoop-suite/hive/conf:/usr/local/hadoop-suite/hive/lib/* 2015/01/21 16:02:33.518 +0800 ERROR [AzkabanWebServer] [Azkaban] Starting Jetty Azkaban Executor... 2015/01/21 16:02:33.937 +0800 ERROR [AzkabanWebServer] [Azkaban] Plugin class azkaban.viewer.hdfs.HdfsBrowserServlet 2015/01/21 16:02:33.941 +0800 INFO [AzkabanWebServer] [Azkaban] Source jar /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/hdfs/lib/azkaban-hdfs-viewer-2.6.4.jar 2015/01/21 16:02:33.945 +0800 ERROR [AzkabanWebServer] [Azkaban] Plugin class azkaban.viewer.javaviewer.JavaViewerServlet 2015/01/21 16:02:33.946 +0800 INFO [AzkabanWebServer] [Azkaban] Source jar /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/javaviewer/lib/azkaban-javaviewer-2.6.3.jar 2015/01/21 16:02:33.947 +0800 ERROR [AzkabanWebServer] [Azkaban] Plugin class azkaban.viewer.reportal.ReportalServlet 2015/01/21 16:02:33.947 +0800 ERROR [AzkabanWebServer] [Azkaban] External library path /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/reportal/extlib not found. 
2015/01/21 16:02:33.950 +0800 INFO [AzkabanWebServer] [Azkaban] Source jar /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/reportal/lib/azkaban-reportal-$%7Bgit.tag%7D.jar Reportal web resources: /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/reportal/web 2015/01/21 16:02:33.953 +0800 ERROR [AzkabanWebServer] [Azkaban] Plugin class azkaban.viewer.jobsummary.JobSummaryServlet 2015/01/21 16:02:33.953 +0800 ERROR [AzkabanWebServer] [Azkaban] External library path /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/jobsummary/extlib/* not found.
hadoop 2.7.2 distributed setup: after formatting, the NameNode fails to start
Step 1: ran hadoop namenode -formate
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z STARTUP_MSG: java = 1.7.0_76 ************************************************************/ 16/08/02 04:26:16 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 16/08/02 04:26:16 INFO namenode.NameNode: createNameNode [-formate] Usage: java NameNode [-backup] | [-checkpoint] | [-format [-clusterid cid ] [-force] [-nonInteractive] ] | [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | [-rollback] | [-rollingUpgrade <rollback|downgrade|started> ] | [-finalize] | [-importCheckpoint] | [-initializeSharedEdits] | [-bootstrapStandby] | [-recover [ -force] ] | [-metadataVersion ] ] 16/08/02 04:26:16 INFO namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at master/192.168.234.100
Step 2: ran start-all.sh; the result is:
[root@master sbin]# sh start-all.sh This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh 16/08/02 05:45:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Starting namenodes on [master] master: starting namenode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-master.out slave2: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave2.out slave3: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave3.out slave1: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave1.out Starting secondary namenodes [master] master: starting secondarynamenode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-secondarynamenode-master.out 16/08/02 05:46:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable starting yarn daemons starting resourcemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-master.out slave2: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave2.out slave3: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave3.out slave1: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave1.out [root@master sbin]# jps 2613 ResourceManager 2467 SecondaryNameNode 2684 Jps
NameNode log:
2016-08-02 05:49:49,910 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554) 2016-08-02 05:49:49,928 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070 2016-08-02 05:49:49,928 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking. 2016-08-02 05:49:49,930 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false 2016-08-02 05:49:49,934 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system... 2016-08-02 05:49:49,935 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped. 2016-08-02 05:49:49,935 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 2016-08-02 05:49:49,935 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. java.io.IOException: NameNode is not formatted. at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554) 2016-08-02 05:49:49,949 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2016-08-02 05:49:49,961 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at master/192.168.234.100
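One detail worth a second look in the step-1 log: createNameNode [-formate] followed by the usage text. The flag was misspelled, so the NameNode printed usage and exited without formatting anything, which is exactly why it later dies with "NameNode is not formatted". The option is -format; a sketch (this wipes HDFS metadata, so only do it on a fresh cluster):

```sh
# Format with the correct flag, then start the daemons again:
hdfs namenode -format        # (`hadoop namenode -format` also works but is deprecated)
start-dfs.sh && start-yarn.sh
jps                          # NameNode should now be listed on the master
```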
Hive starts fine, but show databases; throws an error; could an expert explain?
2018-02-01T09:46:28,400 WARN [9a4cc1b4-8396-471b-8df0-b1eb3ca1fd82 main] ql.Driver: Caught exception attempting to write metadata call information org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:236) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:388) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:332) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:312) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:354) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:350) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.Driver.dumpMetaCallTimingWithoutEx(Driver.java:683) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:621) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) ~[hive-cli-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184) ~[hive-cli-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) ~[hive-cli-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821) ~[hive-cli-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) ~[hive-cli-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686) ~[hive-cli-2.3.2.jar:2.3.2] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_151] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_151] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_151] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_151] at org.apache.hadoop.util.RunJar.run(RunJar.java:239) ~[hadoop-common-2.9.0.jar:?] at org.apache.hadoop.util.RunJar.main(RunJar.java:153) ~[hadoop-common-2.9.0.jar:?] 
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1701) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3600) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3652) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3632) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3894) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:248) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:231) ~[hive-exec-2.3.2.jar:2.3.2] ... 23 more Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_151] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_151] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_151] at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_151] at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1699) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3600) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3652) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3632) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3894) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:248) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:231) ~[hive-exec-2.3.2.jar:2.3.2] ... 23 more Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Version information not found in metastore. 
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:83) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:92) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6893) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:164) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70) ~[hive-exec-2.3.2.jar:2.3.2] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_151] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_151] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_151] at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_151] at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1699) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3600) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3652) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3632) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3894) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:248) ~[hive-exec-2.3.2.jar:2.3.2] at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:231) ~[hive-exec-2.3.2.jar:2.3.2] ... 23 more 2018-02-01T09:46:28,400 INFO [9a4cc1b4-8396-471b-8df0-b1eb3ca1fd82 main] ql.Driver: Completed compiling command(queryId=root_20180201094627_6a378f28-ae24-4c00-8d15-a8df87e7020e); Time taken: 0.474 seconds
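The operative line is at the bottom of the trace: "Version information not found in metastore", i.e. the metastore schema was never initialized. On Hive 2.x that is done once with schematool; the dbType below is an assumption (mysql, matching a typical setup; use derby for the embedded metastore). Setting hive.metastore.schema.verification=false also hides the error, but initializing the schema is the proper fix:

```sh
# One-time metastore schema initialization for Hive 2.3.x:
schematool -dbType mysql -initSchema
```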