husaijunjin
2015-10-15 12:40

Beginner-level WordCount test on a Spark cluster throws an exception — could someone please help me figure it out?

  • exception
  • spark
  • source code
  • testing
  • cluster
/** Created by jyq on 10/14/15. */ This is the entire source:

import org.apache.spark.{SparkConf, SparkContext, SparkFiles}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount").setMaster("spark://master:7077")

    val sc = new SparkContext(conf)
    sc.addFile("file:///home/jyq/Desktop/1.txt")

    val textRDD = sc.textFile(SparkFiles.get("file:///home/jyq/Desktop/1.txt"))
    val result = textRDD.flatMap(line => line.split("\\s+")).map(word => (word, 1)).reduceByKey(_ + _)
    result.saveAsTextFile("/home/jyq/Desktop/2.txt")
    println("hello world")
  }
}

Log output when compiled and run from IDEA:

Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected scheme-specific part at index 5: file:
at org.apache.hadoop.fs.Path.initialize(Path.java:206)
at org.apache.hadoop.fs.Path.&lt;init&gt;(Path.java:172)
at org.apache.hadoop.fs.Path.&lt;init&gt;(Path.java:94)
at org.apache.hadoop.fs.Globber.glob(Globber.java:211)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:257)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:290)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:290)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:289)
at WordCount$.main(WordCount.scala:16)
at WordCount.main(WordCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: java.net.URISyntaxException: Expected scheme-specific part at index 5: file:
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.failExpecting(URI.java:2854)
at java.net.URI$Parser.parse(URI.java:3057)
at java.net.URI.&lt;init&gt;(URI.java:746)
at org.apache.hadoop.fs.Path.initialize(Path.java:203)
... 41 more
15/10/15 20:08:36 INFO SparkContext: Invoking stop() from shutdown hook
15/10/15 20:08:36 INFO SparkUI: Stopped Spark web UI at http://192.168.179.111:4040
15/10/15 20:08:36 INFO DAGScheduler: Stopping DAGScheduler
15/10/15 20:08:36 INFO SparkDeploySchedulerBackend: Shutting down all executors
15/10/15 20:08:36 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
15/10/15 20:08:36 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/10/15 20:08:36 INFO MemoryStore: MemoryStore cleared
15/10/15 20:08:36 INFO BlockManager: BlockManager stopped
15/10/15 20:08:36 INFO BlockManagerMaster: BlockManagerMaster stopped
15/10/15 20:08:36 INFO SparkContext: Successfully stopped SparkContext
15/10/15 20:08:36 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/10/15 20:08:36 INFO ShutdownHookManager: Shutdown hook called
15/10/15 20:08:36 INFO ShutdownHookManager: Deleting directory /tmp/spark-d7ca48d5-4e31-4a07-9264-8d7f5e8e1032
15/10/15 20:08:36 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.

Process finished with exit code 1
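For what it's worth, the `URISyntaxException: Expected scheme-specific part at index 5: file:` usually means `SparkFiles.get` was handed a full `file:` URI. `SparkFiles.get` expects only the bare name of a file previously registered with `sc.addFile` (here, `1.txt`) and returns the absolute local path where Spark downloaded it; wrapping a URI in it produces a malformed path. A minimal sketch of the likely fix, keeping the same file names and cluster master as the original code (untested here, since it needs a live Spark cluster):

    import org.apache.spark.{SparkConf, SparkContext, SparkFiles}

    object WordCount {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("WordCount").setMaster("spark://master:7077")
        val sc = new SparkContext(conf)

        // Register the file; it is distributed under its bare name "1.txt".
        sc.addFile("file:///home/jyq/Desktop/1.txt")

        // Look it up by that bare name -- SparkFiles.get takes a file name,
        // not a URI, and returns an absolute local path.
        val textRDD = sc.textFile(SparkFiles.get("1.txt"))

        val result = textRDD
          .flatMap(_.split("\\s+"))
          .map((_, 1))
          .reduceByKey(_ + _)
        result.saveAsTextFile("/home/jyq/Desktop/2.txt")
        sc.stop()
      }
    }

Alternatively, if the file lives at the same path on every node, skip `addFile`/`SparkFiles` entirely and call `sc.textFile("file:///home/jyq/Desktop/1.txt")` directly. Note also that `saveAsTextFile` creates a directory of part files, so the output path should not be an existing file.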


2 answers