Spark fails to start on a Hadoop cluster
 Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/09/29 09:24:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
  at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
  at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
  at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
  at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:938)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:938)
  at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
  at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
  at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
  at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
  at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
  at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:938)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:97)
  ... 47 elided
Caused by: org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: org.apache.hadoop.fs.ParentNotDirectoryException: /tmp (is not a directory)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:530)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:522)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:497)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1603)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1621)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:542)
        at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:51)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2970)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1078)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
;
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
  at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
  at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
  at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
  at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
  at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
  at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
  at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
  at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
  at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
  ... 61 more
Caused by: java.lang.RuntimeException: org.apache.hadoop.fs.ParentNotDirectoryException: /tmp (is not a directory)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:530)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:522)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:497)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1603)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1621)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:542)
        at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:51)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2970)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1078)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)

  at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
  at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:191)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
  at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
  at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
  at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
  at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
  ... 70 more
Caused by: org.apache.hadoop.fs.ParentNotDirectoryException: /tmp (is not a directory)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:530)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:522)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:497)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1603)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1621)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:542)
        at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:51)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2970)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1078)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)

  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
  at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
  at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
  at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
  at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
  at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
  at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
  at org.apache.hadoop.hive.ql.exec.Utilities.createDirsWithPermission(Utilities.java:3679)
  at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:597)
  at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
  at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
  ... 84 more
Caused by: org.apache.hadoop.ipc.RemoteException: /tmp (is not a directory)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:530)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:522)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:497)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1603)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1621)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:542)
        at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:51)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2970)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1078)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)

  at org.apache.hadoop.ipc.Client.call(Client.java:1475)
  at org.apache.hadoop.ipc.Client.call(Client.java:1412)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
  at com.sun.proxy.$Proxy22.mkdirs(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
  at com.sun.proxy.$Proxy23.mkdirs(Unknown Source)
  at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
  ... 94 more
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_144)
Type in expressions to have them evaluated.
Type :help for more information.

scala>

2 answers

Hive is not installed on the cluster, yet the /tmp path does exist.
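
Even without a Hive installation, a Spark build compiled with Hive support will instantiate HiveSessionStateBuilder on startup, which is why the stack trace goes through HiveExternalCatalog. If Hive is genuinely not needed, one workaround (a sketch using the spark.sql.catalogImplementation setting; verify it against your Spark 2.2 build) is to start the shell with the in-memory catalog instead:

    # Skip the Hive metastore/session entirely and use Spark's in-memory catalog
    spark-shell --conf spark.sql.catalogImplementation=in-memory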

Can any expert help me solve this?
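
The actual failure is at the bottom of the stack trace: ParentNotDirectoryException: /tmp (is not a directory). When spark-shell builds the Hive session state, SessionState.createRootHDFSDir (visible in the trace) tries to mkdir the Hive scratch directory (typically /tmp/hive) on HDFS, and the NameNode refuses because /tmp already exists on HDFS as a regular file rather than a directory. A minimal check-and-fix sketch, assuming nothing important is stored in that /tmp file:

    # A '-' at the start of the permissions column means /tmp is a plain file
    hdfs dfs -ls /

    # If /tmp is a file, remove it (confirm first that its contents are disposable)
    hdfs dfs -rm /tmp

    # Recreate it as a world-writable directory with the sticky bit, as Hive expects
    hdfs dfs -mkdir -p /tmp/hive
    hdfs dfs -chmod -R 1777 /tmp

After /tmp is a proper directory, restarting spark-shell should get past the HiveSessionStateBuilder error and define the spark session normally.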
