What is the difference between compiling a Spark program with scalac and running it with scala, versus packaging it into a jar and submitting it with spark-submit?

Approach 1: compile with the scalac command, then run with the scala command.

Approach 2: package with sbt, then submit the jar to Spark with spark-submit.

What is the difference between these two approaches, and what are the strengths and weaknesses of each? Thanks in advance, everyone!
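For concreteness, a minimal sketch of the two workflows, assuming a single-file WordCount.scala and a standalone master at spark://master:7077 (the file, class, and jar names are hypothetical). The scalac/scala route runs everything in one local JVM with you managing the Spark classpath by hand, which is fine for local experiments; spark-submit assembles the classpath, selects the master, and can ship the jar to a cluster, which is the usual path for packaged applications on standalone or YARN deployments.

```
# Route 1: compile with scalac, run with the scala launcher (one local JVM).
# The Spark jars must be placed on the classpath manually.
scalac -classpath "$SPARK_HOME/jars/*" WordCount.scala
scala  -classpath "$SPARK_HOME/jars/*:." WordCount input.txt

# Route 2: build a jar with sbt, then hand it to Spark's launcher.
# spark-submit wires up the Spark classpath and can distribute the jar
# to executors on a cluster.
sbt package
spark-submit \
  --master spark://master:7077 \
  --class WordCount \
  target/scala-2.11/wordcount_2.11-1.0.jar input.txt
```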

Other related questions
Running a Scala jar with Spark

![screenshot](https://img-ask.csdn.net/upload/202002/05/1580887545_330719.png) ![screenshot](https://img-ask.csdn.net/upload/202002/05/1580887568_992291.png) ![screenshot](https://img-ask.csdn.net/upload/202002/05/1580887616_449280.png) Has anyone run into a similar problem? What I have tried: when the Worker process on the Master node is not running, the job fails; when it is running, it sometimes does not fail outright but complains about insufficient memory. I don't think memory is the real issue, and I do get some output, just not the expected result. The command used: bin/spark-submit --master spark://node1:7077 --class cn.itcast.WordCount_Online --executor-memory 1g --total-executor-cores 1 ~/data/spark_chapter02-1.0-SNAPSHOT.jar /spark/test/words.txt /spark/test/out The jar was built in IDEA from Scala code; it is a word-count job. Scala code:
```
package cn.itcast

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object WordCount_Online {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("WordCount_Online")
    val sparkContext = new SparkContext(sparkConf)
    val data: RDD[String] = sparkContext.textFile(args(0))
    val words: RDD[String] = data.flatMap(_.split(" "))
    val wordAndOne: RDD[(String, Int)] = words.map(x => (x, 1))
    val result: RDD[(String, Int)] = wordAndOne.reduceByKey(_ + _)
    result.saveAsTextFile(args(1))
    sparkContext.stop()
  }
}
```
I have tried many things; I hope someone who understands this can discuss it with me.

spark-submit jar throws a NullPointerException, but java -jar runs fine

A local[*]-mode Spark program runs fine in IDEA, and the Maven-built jar also runs fine with java -jar XX.jar from the Windows cmd. When the jar is put on the cluster and run with spark-submit, it prints its console output but finally throws a NullPointerException, and the result is not saved to MySQL. On the same cluster, java -jar xxx.jar runs without error. What is the cause, and how can I get spark-submit to succeed? I want spark-submit because it lets me pick a specific main class, which java -jar cannot do. Please help.
```
Exception in thread "main" java.lang.NullPointerException
	at java.util.Properties$LineReader.readLine(Properties.java:434)
	at java.util.Properties.load0(Properties.java:353)
	at java.util.Properties.load(Properties.java:341)
	at com.bigdata.bcht.spes.SparkEsLawUndel$.main(SparkEsLawUndel.scala:90)
	at com.bigdata.bcht.spes.SparkEsLawUndel.main(SparkEsLawUndel.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:745)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
```
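One hedged reading of the stack trace (SparkEsLawUndel.scala:90 calls Properties.load with what turns out to be a null stream): under spark-submit the working directory and classloader differ from java -jar, so a properties file resolved by relative path or through the wrong loader can come back null. A sketch of a lookup that resolves through the classloader and fails loudly instead; the resource name config.properties is hypothetical:

```
import java.util.Properties

object ConfigLoader {
  // Resolve the file via the classloader rather than the process working
  // directory; spark-submit's CWD is not the jar's directory, unlike
  // running `java -jar` next to the file.
  def load(name: String): Properties = {
    val in = getClass.getClassLoader.getResourceAsStream(name)
    require(in != null, s"$name not found on the classpath")
    val props = new Properties()
    try props.load(in) finally in.close()
    props
  }
}
```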

Jar run via spark-shell reports an error: main class not found

A Scala IntelliJ project, packaged with sbt, fails after submission with "main class not found". It uses two Chinese word-segmentation jars (ansj_seg-2.0.8.jar, nlp-lang-0.3.jar), which are already added under External Libraries; packaging succeeds, running fails. ![screenshot](https://img-ask.csdn.net/upload/201601/26/1453780626_723163.jpg) ![screenshot](https://img-ask.csdn.net/upload/201601/26/1453780648_659305.jpg) Submit command: [gaohui@hadoop-1-2 test]$ spark-submit --master yarn --driver-memory 5G --num-executors 20 --executor-cores 16 --executor-memory 10G --conf spark.serializer=org.apache.spark.serializer.KryoSerializer --class NLP_V6.Nlp_test --jars /home/gaohui/test/NLP_v6_test.jar /home/gaohui/test/NLP_v6_test.jar Error screenshot: ![screenshot](https://img-ask.csdn.net/upload/201601/26/1453780776_603750.jpg)
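One thing that stands out in the command: the application jar is passed twice (once to --jars and once as the positional jar), while the two segmenter jars are never passed at all; marking them in External Libraries only affects the IDE, not the sbt-built jar. A hedged corrected command, with the segmenter jar paths being assumptions:

```
# Pass dependency jars via --jars (comma-separated) and the application
# jar exactly once, as the last positional argument.
spark-submit --master yarn \
  --class NLP_V6.Nlp_test \
  --jars /home/gaohui/test/ansj_seg-2.0.8.jar,/home/gaohui/test/nlp-lang-0.3.jar \
  /home/gaohui/test/NLP_v6_test.jar
```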

Building Spark 1.3 against Scala 2.11: hive-thrift compile error about jline

```
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Spark Project Hive Thrift Server 1.3.0
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ spark-hive-thriftserver_2.11 ---
[INFO] Deleting /usr/local/spark-1.3.0/sql/hive-thriftserver/target
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-versions) @ spark-hive-thriftserver_2.11 ---
[INFO]
[INFO] --- scala-maven-plugin:3.2.0:add-source (eclipse-add-source) @ spark-hive-thriftserver_2.11 ---
[INFO] Add Source directory: /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/scala
[INFO] Add Test Source directory: /usr/local/spark-1.3.0/sql/hive-thriftserver/src/test/scala
[INFO]
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-scala-sources) @ spark-hive-thriftserver_2.11 ---
[INFO] Source directory: /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/scala added.
[INFO]
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-default-sources) @ spark-hive-thriftserver_2.11 ---
[INFO] Source directory: /usr/local/spark-1.3.0/sql/hive-thriftserver/v0.13.1/src/main/scala added.
[INFO]
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ spark-hive-thriftserver_2.11 ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ spark-hive-thriftserver_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @ spark-hive-thriftserver_2.11 ---
[WARNING] Zinc server is not available at port 3030 - reverting to normal incremental compile
[INFO] Using incremental compilation
[INFO] compiler plugin: BasicArtifact(org.scalamacros,paradise_2.11.2,2.0.1,null)
[INFO] Compiling 9 Scala sources to /usr/local/spark-1.3.0/sql/hive-thriftserver/target/scala-2.11/classes...
[ERROR] /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala:25: object ConsoleReader is not a member of package jline
[ERROR] import jline.{ConsoleReader, History}
[ERROR]        ^
[WARNING] Class jline.Completor not found - continuing with a stub.
[WARNING] Class jline.ConsoleReader not found - continuing with a stub.
[ERROR] /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala:165: not found: type ConsoleReader
[ERROR]     val reader = new ConsoleReader()
[ERROR]                      ^
[ERROR] Class jline.Completor not found - continuing with a stub.
[WARNING] Class com.google.protobuf.Parser not found - continuing with a stub.
[WARNING] Class com.google.protobuf.Parser not found - continuing with a stub.
[WARNING] Class com.google.protobuf.Parser not found - continuing with a stub.
[WARNING] Class com.google.protobuf.Parser not found - continuing with a stub.
[WARNING] 6 warnings found
[ERROR] three errors found
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM ........................... SUCCESS [01:20 min]
[INFO] Spark Project Networking ........................... SUCCESS [01:31 min]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 47.808 s]
[INFO] Spark Project Core ................................. SUCCESS [34:00 min]
[INFO] Spark Project Bagel ................................ SUCCESS [03:21 min]
[INFO] Spark Project GraphX ............................... SUCCESS [09:22 min]
[INFO] Spark Project Streaming ............................ SUCCESS [15:07 min]
[INFO] Spark Project Catalyst ............................. SUCCESS [14:35 min]
[INFO] Spark Project SQL .................................. SUCCESS [16:31 min]
[INFO] Spark Project ML Library ........................... SUCCESS [18:15 min]
[INFO] Spark Project Tools ................................ SUCCESS [01:50 min]
[INFO] Spark Project Hive ................................. SUCCESS [13:58 min]
[INFO] Spark Project REPL ................................. SUCCESS [06:13 min]
[INFO] Spark Project YARN ................................. SUCCESS [07:05 min]
[INFO] Spark Project Hive Thrift Server ................... FAILURE [01:39 min]
[INFO] Spark Project Assembly ............................. SKIPPED
[INFO] Spark Project External Twitter ..................... SKIPPED
[INFO] Spark Project External Flume Sink .................. SKIPPED
[INFO] Spark Project External Flume ....................... SKIPPED
[INFO] Spark Project External MQTT ........................ SKIPPED
[INFO] Spark Project External ZeroMQ ...................... SKIPPED
[INFO] Spark Project Examples ............................. SKIPPED
[INFO] Spark Project YARN Shuffle Service ................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:25 h
[INFO] Finished at: 2015-04-16T14:11:24+08:00
[INFO] Final Memory: 62M/362M
[INFO] ------------------------------------------------------------------------
[WARNING] The requested profile "hadoop-2.5" could not be activated because it does not exist.
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile (scala-compile-first) on project spark-hive-thriftserver_2.11: Execution scala-compile-first of goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile failed. CompileFailed -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :spark-hive-thriftserver_2.11
```
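For what it's worth, the Spark 1.x build documentation noted that a few components were not yet supported under Scala 2.11 because of dependencies that were not 2.11-ready; the JDBC/Hive Thrift Server module (whose SparkSQLCLIDriver imports the old jline API) was one of them. The documented approach, sketched here with the profile flags hedged against your Hadoop version, was to switch the POMs to 2.11 and build without that module:

```
# Switch the POMs to Scala 2.11, then build. The hive-thriftserver module
# did not compile against Scala 2.11 in the Spark 1.3 era, so it is left
# out of the profile selection rather than forced.
dev/change-version-to-2.11.sh
mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package
```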

Spark Scala run error: .test$$anonfun$1

The same logic fails when executed from Scala, yet runs normally from Java. The error output:
```
helloOffSLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/Users/zsts/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.5/log4j-slf4j-impl-2.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/Users/zsts/.m2/repository/org/slf4j/slf4j-log4j12/1.7.10/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
19:25:11.875 [main] ERROR org.apache.hadoop.util.Shell - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
	at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:355) ~[hadoop-common-2.6.4.jar:?]
	at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:370) [hadoop-common-2.6.4.jar:?]
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:363) [hadoop-common-2.6.4.jar:?]
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79) [hadoop-common-2.6.4.jar:?]
	at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:116) [hadoop-common-2.6.4.jar:?]
	at org.apache.hadoop.security.Groups.<init>(Groups.java:93) [hadoop-common-2.6.4.jar:?]
	at org.apache.hadoop.security.Groups.<init>(Groups.java:73) [hadoop-common-2.6.4.jar:?]
	at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:293) [hadoop-common-2.6.4.jar:?]
	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283) [hadoop-common-2.6.4.jar:?]
	at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260) [hadoop-common-2.6.4.jar:?]
	at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:789) [hadoop-common-2.6.4.jar:?]
	at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774) [hadoop-common-2.6.4.jar:?]
	at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647) [hadoop-common-2.6.4.jar:?]
	at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2198) [spark-core_2.10-1.6.2.jar:1.6.2]
	at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2198) [spark-core_2.10-1.6.2.jar:1.6.2]
	at scala.Option.getOrElse(Option.scala:120) [?:?]
	at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2198) [spark-core_2.10-1.6.2.jar:1.6.2]
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:322) [spark-core_2.10-1.6.2.jar:1.6.2]
	at tarot.test$.main(test.scala:19) [bin/:?]
	at tarot.test.main(test.scala) [bin/:?]
Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://tarot1:9000/sparkTest/hello
	at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
	at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
	at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
	at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1307)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
	at org.apache.spark.rdd.RDD.take(RDD.scala:1302)
	at tarot.test$.main(test.scala:26)
	at tarot.test.main(test.scala)
```
Scala code:
```
package tarot

import scala.tools.nsc.doc.model.Val
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.api.java.JavaSparkContext

object test {
  def main(hello: Array[String]) {
    print("helloOff")
    val conf = new SparkConf()
    conf.setMaster("spark://tarot1:7077")
      .setAppName("hello_off")
      .set("spark.executor.memory", "4g")
      .set("spark.executor.cores", "4")
      .set("spark.cores.max", "4")
      .set("spark.sql.crossJoin.enabled", "true")
    val sc = new SparkContext(conf)
    sc.setLogLevel("ERROR")
    val file = sc.textFile("hdfs://tarot1:9000/sparkTest/hello")
    val filterRDD = file.filter { (ss: String) => ss.contains("hello") }
    val f = filterRDD.cache()
    // println(f)
    // filterRDD.count()
    for (x <- f.take(100)) {
      println(x)
    }
  }

  def helloingTest(jsc: SparkContext) {
    val sc = jsc
    val file = sc.textFile("hdfs://tarot1:9000/sparkTest/hello")
    val filterRDD = file.filter((ss: String) => ss.contains("hello"))
    val f = filterRDD.cache()
    println(f)
    val i = filterRDD.count()
    println(i)
  }

  // val seehello =
  def helloingTest(jsc: JavaSparkContext) {
    val sc = jsc
    val file = sc.textFile("hdfs://tarot1:9000/sparkTest/hello")
    val filterRDD = file.filter((ss: String) => ss.contains("hello"))
    val f = filterRDD.cache()
    println(f)
    val i = filterRDD.count()
    println(i)
  }
}
```
Java code:
```
package com.tarot.sparkToHdfsTest;

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.rdd.RDD;
//import scalaModule.hello;

public class App {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf();
        sparkConf.setMaster("spark://tarot1:7077")
                .setAppName("hello off")
                .set("spark.executor.memory", "4g")
                .set("spark.executor.cores", "4")
                .set("spark.cores.max", "4")
                .set("spark.sql.crossJoin.enabled", "true");
        JavaSparkContext jsc = new JavaSparkContext(sparkConf);
        jsc.setLogLevel("ERROR");
        text(jsc);
        // test.helloingTest(jsc);
    }

    /**
     * test
     * @param jsc
     */
    private static void text(JavaSparkContext jsc) {
        // jsc.textFile("hdfs://tarot1:9000/sparkTest/hello");
        JavaRDD<String> jr = jsc.textFile("hdfs://tarot1:9000/sparkTest/hello", 1);
        jr.cache();
        // test t = new test();
        jr.filter(f);
        for (String string : jr.take(100)) {
            System.out.println(string);
        }
        System.out.println("hello off");
    }

    public static Function<String, Boolean> f = new Function<String, Boolean>() {
        public Boolean call(String s) {
            return s.contains("hello");
        }
    };
}
```
I've been stuck on this for a long time. The answers online either tell me to run it in the shell or to upload a jar, but the Java version doesn't need that, and both run on the JVM. Please help!

Submitting a Spark job fails when the jar is on HDFS

(1) When the jar is not on HDFS, the job submits successfully with: spark-submit --master spark://192.168.244.130:7077 --class cn.com.cnpc.klmy.common.WordCount2 --executor-memory 1G --total-executor-cores 2 /root/modelcall-2.0.jar
(2) When the jar is on HDFS, the submission fails with a ClassNotFoundException: spark-submit --master spark://192.168.244.130:7077 --class cn.com.cnpc.klmy.common.WordCount2 --executor-memory 1G --total-executor-cores 2 hdfs://192.168.244.130:9000/mdjar/modelcall-2.0.jar
What exactly is causing this? Any advice would be much appreciated! Note: HDFS itself is reachable, and the job writes its results to HDFS (in case 1 everything runs and the results are visible there).
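A hedged explanation: in the default client deploy mode the driver runs on the submitting machine and loads the primary jar from the local filesystem, and older standalone builds did not download an hdfs:// application jar for the driver, hence the ClassNotFoundException for the main class. Two common workarounds, sketched below:

```
# Option 1: fetch the jar locally and submit in client mode as before.
hdfs dfs -get hdfs://192.168.244.130:9000/mdjar/modelcall-2.0.jar /root/
spark-submit --master spark://192.168.244.130:7077 \
  --class cn.com.cnpc.klmy.common.WordCount2 /root/modelcall-2.0.jar

# Option 2: run the driver on a worker (cluster deploy mode), where a
# globally visible hdfs:// jar URL is the expected form.
spark-submit --master spark://192.168.244.130:7077 --deploy-mode cluster \
  --class cn.com.cnpc.klmy.common.WordCount2 \
  hdfs://192.168.244.130:9000/mdjar/modelcall-2.0.jar
```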

Error running Hello World in Scala

```
Error:scalac: Error: org.jetbrains.jps.incremental.scala.remote.ServerException
Error compiling sbt component 'compiler-interface-2.10.0-52.0'
	at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1$$anonfun$apply$2.apply(AnalyzingCompiler.scala:145)
	at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1$$anonfun$apply$2.apply(AnalyzingCompiler.scala:142)
	at sbt.IO$.withTemporaryDirectory(IO.scala:291)
	at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1.apply(AnalyzingCompiler.scala:142)
	at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1.apply(AnalyzingCompiler.scala:139)
	at sbt.IO$.withTemporaryDirectory(IO.scala:291)
	at sbt.compiler.AnalyzingCompiler$.compileSources(AnalyzingCompiler.scala:139)
	at sbt.compiler.IC$.compileInterfaceJar(IncrementalCompiler.scala:52)
	at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl$.getOrCompileInterfaceJar(CompilerFactoryImpl.scala:96)
	at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl$$anonfun$getScalac$1.apply(CompilerFactoryImpl.scala:50)
	at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl$$anonfun$getScalac$1.apply(CompilerFactoryImpl.scala:49)
	at scala.Option.map(Option.scala:146)
	at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl.getScalac(CompilerFactoryImpl.scala:49)
	at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl.createCompiler(CompilerFactoryImpl.scala:22)
	at org.jetbrains.jps.incremental.scala.local.CachingFactory$$anonfun$createCompiler$1.apply(CachingFactory.scala:24)
	at org.jetbrains.jps.incremental.scala.local.CachingFactory$$anonfun$createCompiler$1.apply(CachingFactory.scala:24)
	at org.jetbrains.jps.incremental.scala.local.Cache$$anonfun$getOrUpdate$2.apply(Cache.scala:20)
	at scala.Option.getOrElse(Option.scala:121)
	at org.jetbrains.jps.incremental.scala.local.Cache.getOrUpdate(Cache.scala:19)
	at org.jetbrains.jps.incremental.scala.local.CachingFactory.createCompiler(CachingFactory.scala:23)
	at org.jetbrains.jps.incremental.scala.local.LocalServer.compile(LocalServer.scala:22)
	at org.jetbrains.jps.incremental.scala.remote.Main$.make(Main.scala:68)
	at org.jetbrains.jps.incremental.scala.remote.Main$.nailMain(Main.scala:25)
	at org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.martiansoftware.nailgun.NGSession.run(NGSession.java:319)
```

Dynamically loading a jar in Scala and calling a method: an anonymous class throws ClassNotFoundException

Test environment: Scala 2.11.8 and Spark 2.1.0. Code of TextProcess.scala in the standalone jar (imports added for completeness; note also that String.split takes a regex, so a literal dot must be escaped):
```
import org.apache.spark.sql.{Dataset, Row, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

object TextProcess {
  def process(col: String, df: Dataset[Row]) = {
    val rdd = df.select(col).rdd.map { x =>
      // split takes a regex: "\\." splits on a literal dot, whereas "."
      // would match every character and yield an empty array.
      val word = x.getAs[String](0).split("\\.")(0).toString()
      (word, 1)
    }
    val spark = SparkSession.builder().getOrCreate()
    val words = Array("one", "two", "two", "three", "three", "three")
    val wordPairsRDD = spark.sparkContext.parallelize(words).map(word => (word, 1))
    val wordCountsWithReduce = wordPairsRDD.reduceByKey(_ + _)
    wordCountsWithReduce.foreach { x =>
      println(x._1, x._2)
    }
    val rd = rdd.reduceByKey(_ + _).map { x =>
      Row.fromSeq(Array(x._1, x._2))
    }
    val schema = StructType(Array(
      StructField("word", StringType, false),
      StructField("num", IntegerType, false)))
    spark.createDataFrame(rd, schema)
  }
}
```
Test driver code:
```
import java.io.File
import java.net.URLClassLoader

import org.apache.spark.sql.{Dataset, Row, SparkSession}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

object ProcessTest {
  def main(args: Array[String]) = {
    val spark = SparkSession.builder().master("local").getOrCreate()
    val data = Array("a.", "b.", "b.", "c.", "b.", "d.", "b.", "d.", "b.aa")
    val rdd = spark.sparkContext.parallelize(data, 1).map { x =>
      Row.fromSeq(Seq(x))
    }
    val schema = StructType(Array(StructField("text", StringType, false)))
    val df = spark.createDataFrame(rdd, schema)
    df.show()
    invoke(df)
  }

  def invoke(df: Dataset[Row]): Unit = {
    val f = new File("F:\\360CloudUI\\textprocess.jar")
    val loader = new URLClassLoader(Array(f.toURI().toURL()), getClass.getClassLoader)
    val clazz = loader.loadClass("cc.eabour.spark.TextProcess")
    val process = clazz.getDeclaredMethod("process", classOf[String], classOf[Dataset[_]])
    process.invoke(null, "text", df)
  }
}
```
Error stack:
```
7/09/24 18:43:08 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, PROCESS_LOCAL, 5981 bytes)
17/09/24 18:43:08 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
17/09/24 18:43:08 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
java.lang.ClassNotFoundException: cc.eabour.spark.TextProcess$$anonfun$4
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1593)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1750)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1888)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1888)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1888)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:85)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at java.lang.Thread.run(Thread.java:722)
17/09/24 18:43:08 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): java.lang.ClassNotFoundException: cc.eabour.spark.TextProcess$$anonfun$4
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1593)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1750)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1888)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1888)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1888)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:85)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at java.lang.Thread.run(Thread.java:722)
17/09/24 18:43:08 ERROR TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
17/09/24 18:43:08 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/09/24 18:43:08 INFO TaskSchedulerImpl: Cancelling stage 1
```
I don't know whether this is a problem with how I'm using Spark or with the Scala reflection call. Any guidance appreciated.
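A hedged reading of the stack trace: the driver loads TextProcess through the custom URLClassLoader, but the serialized map closure (TextProcess$$anonfun$4) is deserialized by the task's classloader, which knows nothing about F:\360CloudUI\textprocess.jar. Registering the jar with the running SparkContext usually makes the closure classes resolvable on the task side:

```
// Make the dynamically loaded jar visible to task classloaders too;
// without this, only the driver-side URLClassLoader can resolve the
// anonymous function classes generated inside TextProcess.
spark.sparkContext.addJar("F:\\360CloudUI\\textprocess.jar")
```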

A few questions about the Spark jars

Everyone says Spark is written in Scala. Does that mean these jars consist of classes compiled from Scala?
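Essentially yes: Scala compiles to ordinary JVM .class files, so the Spark jars contain bytecode produced mostly by scalac (with some Java sources mixed in). One quick way to see the Scala ancestry, sketched with a hypothetical jar name, is the $-mangled class names the Scala compiler generates:

```
# Scala objects and closures leave telltale names like WordCount$ and
# RDD$$anonfun$map$1 in the jar listing.
unzip -l spark-core_2.11-2.1.0.jar | grep '\$' | head
```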

Building and packaging a Scala project with Gradle fails:

1. Versions: JDK 1.7.0_04, Gradle 2.1, Scala 2.11.2
2. Code layout: src/main/scala/HelloWorld.scala
```
package main.scala

object HelloWorld {
  def main(args: Array[String]): Unit = {
    println("hello world")
  }
}
```
3. build.gradle:
```
apply plugin: 'scala'

repositories {
    maven { url "http://10.177.60.141:8888/nexus/conte..." }
}

dependencies {
    compile 'org.scala-lang:scala-library:2.11.2'
    compile 'org.scala-lang:scala-compiler:2.11.2'
}
```
4. Result of running gradle build:
```
D:\Workfiles\Scala\gradleproject>gradle build -stacktrace
:compileJava UP-TO-DATE
:compileScala FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':compileScala'.
> scala/runtime/Nothing$

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':compileScala'.
	at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69)
	...........................................................
	at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:61)
	... 44 more
Caused by: java.lang.ClassNotFoundException: scala.runtime.Nothing$
	... 76 more

BUILD FAILED

Total time: 12.063 secs
```

Packaging with Spark's bundled sbt build tool fails

I'm new to Spark, running Ubuntu 12.04 with scala-2.9.1.final.tgz installed. After setting the environment variables and fetching the Spark source with git clone git://github.com/aparch/spark.git, running sbt/sbt update compile prints the following:
```
NOTE: The sbt/sbt script has been relocated to build/sbt.
      Please update references to point to the new location.
Invoking 'build/sbt update compile' now ...
/root/spark/build/sbt-launch-lib.bash: line 84: java: command not found
```
I've searched for a long time without finding the cause. Could someone please take a look?
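The line `java: command not found` means the build script cannot find a JVM on the PATH at all; this is independent of sbt or Spark. A sketch for Ubuntu 12.04 (package name and install path are the usual OpenJDK 7 defaults, hedged):

```
# Install a JDK and expose it on the PATH before re-running build/sbt.
sudo apt-get install openjdk-7-jdk
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
java -version          # should now print a version, not "command not found"
build/sbt update compile
```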

Building mixed Scala and Java code with Maven in Scala IDE

I created a Scala project in Scala IDE and converted it to a Maven project. Running clean install on the Scala classes alone works fine, but after adding a Java class that references a Scala class, the build fails with "cannot find symbol". Is there a way to fix this? The build section of pom.xml is below (a sketch of a possible fix follows the question):
```
<build>
  <sourceDirectory>src</sourceDirectory>
  <plugins>
    <plugin>
      <groupId>org.scala-tools</groupId>
      <artifactId>maven-scala-plugin</artifactId>
      <version>2.15.2</version>
      <executions>
        <execution>
          <id>scala-compile-first</id>
          <goals>
            <goal>compile</goal>
            <goal>add-source</goal>
          </goals>
          <!--
          <configuration>
            <includes>
              <include>**/*.scala</include>
            </includes>
          </configuration>
          -->
        </execution>
        <execution>
          <id>scala-test-compile</id>
          <goals>
            <goal>testCompile</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <scalaVersion>2.10.5</scalaVersion>
      </configuration>
    </plugin>
    <plugin>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.3</version>
      <configuration>
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>
  </plugins>
</build>
```
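A common cause: javac runs before the Scala classes exist, so Java code referencing them fails with "cannot find symbol". The usual arrangement is to bind the Scala compilation to an earlier lifecycle phase, so that scalac (which understands both languages) runs before maven-compiler-plugin. A hedged sketch of the execution block, adapted to the plugin used above:

```
<execution>
  <id>scala-compile-first</id>
  <!-- Bind scalac to a phase before javac (process-resources precedes
       compile), so the Scala classes exist when Java sources compile. -->
  <phase>process-resources</phase>
  <goals>
    <goal>add-source</goal>
    <goal>compile</goal>
  </goals>
</execution>
```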

A Scala callback prints nothing when run from IDEA

I'm new to Scala and tried an example. I know it ends up producing an empty collection, but the println statements inside the callbacks show nothing when run from IDEA, while running in console (REPL) mode does print. Could someone explain?
```
import scala.concurrent.Future
import scala.util.{Failure, Success, Try}
import scala.concurrent.ExecutionContext.Implicits.global

object FutureDemo1 {
  def main(args: Array[String]): Unit = {
    val doComplete: PartialFunction[Try[String], Unit] = {
      case s @ Success(_) => println(s)
      case f @ Failure(_) => println(f)
    }
    val future = (0 to 9) map {
      case i if i % 2 == 0 => Future.successful(i.toString)
      case i               => Future.failed(ThatsOdd(i))
    }
    future map (_ onComplete doComplete)
  }

  // Note: the message must be passed to the RuntimeException constructor;
  // as a bare expression in the class body it is computed and discarded.
  case class ThatsOdd(i: Int) extends RuntimeException(s"odd $i received!")
}
```
IDEA environment: ![screenshot](https://img-ask.csdn.net/upload/202004/04/1585965109_276986.png) Console environment: ![screenshot](https://img-ask.csdn.net/upload/202004/04/1585965175_258436.png)
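A hedged explanation: scala.concurrent.ExecutionContext.Implicits.global runs callbacks on daemon threads, so when main returns the JVM exits before onComplete ever fires; the REPL process stays alive, which is why the console shows output. Keeping the JVM alive briefly makes the difference visible (a cleaner fix would be to Await the futures):

```
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

object FutureDemo2 {
  def main(args: Array[String]): Unit = {
    val futures = (0 to 9).map {
      case i if i % 2 == 0 => Future.successful(i.toString)
      case i               => Future.failed(new RuntimeException(s"odd $i"))
    }
    futures.foreach(_.onComplete {
      case Success(s) => println(s)
      case Failure(f) => println(f)
    })
    // The global ExecutionContext uses daemon threads: without this pause,
    // main returns, the JVM exits, and no callback gets a chance to print.
    Thread.sleep(1000)
  }
}
```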

A Spark job submitted with spark-submit fails on the cluster

The error is as follows:
```
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::

19/03/15 11:28:57 INFO demo.DemoApplication: Starting DemoApplication on spark01 with PID 15249 (/home/demo.jar started by root in /usr/sbin)
19/03/15 11:28:57 INFO demo.DemoApplication: No active profile set, falling back to default profiles: default
19/03/15 11:28:57 INFO context.AnnotationConfigServletWebServerApplicationContext: Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@3b79fd76: startup date [Fri Mar 15 11:28:57 CST 2019]; root of context hierarchy
19/03/15 11:28:59 INFO annotation.AutowiredAnnotationBeanPostProcessor: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
19/03/15 11:28:59 WARN context.AnnotationConfigServletWebServerApplicationContext: Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.context.ApplicationContextException: Unable to start ServletWebServerApplicationContext due to missing ServletWebServerFactory bean.
19/03/15 11:28:59 ERROR boot.SpringApplication: Application run failed
org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.context.ApplicationContextException: Unable to start ServletWebServerApplicationContext due to missing ServletWebServerFactory bean.
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:155)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140)
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:752)
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:388)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:327)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1246)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1234)
	at com.spark.demo.DemoApplication.main(DemoApplication.java:12)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.springframework.context.ApplicationContextException: Unable to start ServletWebServerApplicationContext due to missing ServletWebServerFactory bean.
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.getWebServerFactory(ServletWebServerApplicationContext.java:204)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.createWebServer(ServletWebServerApplicationContext.java:178)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:152)
	... 17 more
```
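A hedged interpretation of "missing ServletWebServerFactory bean": either the fat jar's embedded Tomcat is not visible on the classpath spark-submit builds, or the job does not actually need a web server on the cluster. If Spring is only used for dependency wiring, disabling the web environment sidesteps the failure; the sketch below uses the standard Spring Boot 2 builder API, with the class body assumed:

```
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        // Start a plain (non-web) Spring context: no ServletWebServerFactory
        // is required, so the context can start under spark-submit.
        new SpringApplicationBuilder(DemoApplication.class)
                .web(WebApplicationType.NONE)
                .run(args);
    }
}
```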

IDEA Maven build: some jars are not compiled

As shown in the screenshot, the directory structure of the 4.0.3 jar shows up, but the 2.1.7 jar never does. Does anyone know why, or how to get it to compile? I tried switching the build to the Eclipse compiler, with no effect. ![screenshot](https://img-ask.csdn.net/upload/201909/10/1568071835_755705.png)

IDEA 2018.2, Gradle build of a Java + Scala project: compilation fails, cannot run

**The compilation failure log:**
```
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':common:compileScala'.
	at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:103)
	at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:73)
	at org.gradle.api.internal.tasks.execution.OutputDirectoryCreatingTaskExecuter.execute(OutputDirectoryCreatingTaskExecuter.java:51)
	at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:59)
	at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54)
	at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:59)
	at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:101)
	at org.gradle.api.internal.tasks.execution.FinalizeInputFilePropertiesTaskExecuter.execute(FinalizeInputFilePropertiesTaskExecuter.java:44)
	at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:91)
	at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:62)
	at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:59)
	at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54)
	at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
	at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34)
	at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:256)
	at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
	at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
	at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
	at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
	at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:249)
	at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:238)
	at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:123)
	at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:79)
	at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:104)
	at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:98)
	at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:663)
	at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:597)
	at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:98)
	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.gradle.api.internal.tasks.compile.CompilationFailedException: javac returned nonzero exit code
	at org.gradle.api.internal.tasks.scala.ZincScalaCompiler$Compiler.execute(ZincScalaCompiler.java:83)
	at org.gradle.api.internal.tasks.scala.ZincScalaCompiler.execute(ZincScalaCompiler.java:52)
	at org.gradle.api.internal.tasks.scala.ZincScalaCompiler.execute(ZincScalaCompiler.java:38)
	at org.gradle.api.internal.tasks.compile.daemon.AbstractDaemonCompiler$CompilerRunnable.run(AbstractDaemonCompiler.java:87)
	at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:36)
	at org.gradle.workers.internal.WorkerDaemonServer.execute(WorkerDaemonServer.java:46)
	at org.gradle.workers.internal.WorkerDaemonServer.execute(WorkerDaemonServer.java:30)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.gradle.process.internal.worker.request.WorkerAction.run(WorkerAction.java:100)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:146)
	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:128)
	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:404)
	... 6 more
Caused by: javac returned nonzero exit code
	at sbt.compiler.JavaCompiler$JavaTool0.compile(JavaCompiler.scala:97)
	at sbt.compiler.JavaTool$class.apply(JavaCompiler.scala:54)
	at sbt.compiler.JavaCompiler$JavaTool0.apply(JavaCompiler.scala:84)
	at sbt.compiler.JavaCompiler$class.compile(JavaCompiler.scala:35)
	at sbt.compiler.JavaCompiler$JavaTool0.compile(JavaCompiler.scala:84)
	at sbt.compiler.JavaCompiler$class.compileWithReporter(JavaCompiler.scala:40)
	at sbt.compiler.JavaCompiler$JavaTool0.compileWithReporter(JavaCompiler.scala:84)
	at sbt.compiler.AggressiveCompile$$anonfun$3$$anonfun$compileJava$1$1.apply$mcV$sp(AggressiveCompile.scala:121)
	at sbt.compiler.AggressiveCompile$$anonfun$3$$anonfun$compileJava$1$1.apply(AggressiveCompile.scala:121)
	at sbt.compiler.AggressiveCompile$$anonfun$3$$anonfun$compileJava$1$1.apply(AggressiveCompile.scala:121)
	at sbt.compiler.AggressiveCompile.sbt$compiler$AggressiveCompile$$timed(AggressiveCompile.scala:168)
	at sbt.compiler.AggressiveCompile$$anonfun$3.compileJava$1(AggressiveCompile.scala:120)
	at sbt.compiler.AggressiveCompile$$anonfun$3.apply(AggressiveCompile.scala:142)
	at sbt.compiler.AggressiveCompile$$anonfun$3.apply(AggressiveCompile.scala:84)
	at sbt.inc.IncrementalCompile$$anonfun$doCompile$1.apply(Compile.scala:66)
	at sbt.inc.IncrementalCompile$$anonfun$doCompile$1.apply(Compile.scala:64)
	at sbt.inc.IncrementalCommon.cycle(IncrementalCommon.scala:32)
	at sbt.inc.Incremental$$anonfun$1.apply(Incremental.scala:72)
	at sbt.inc.Incremental$$anonfun$1.apply(Incremental.scala:71)
	at sbt.inc.Incremental$.manageClassfiles(Incremental.scala:99)
	at sbt.inc.Incremental$.compile(Incremental.scala:71)
	at sbt.inc.IncrementalCompile$.apply(Compile.scala:54)
	at sbt.compiler.AggressiveCompile.compile2(AggressiveCompile.scala:159)
	at sbt.compiler.AggressiveCompile.compile1(AggressiveCompile.scala:68)
	at com.typesafe.zinc.Compiler.compile(Compiler.scala:207)
	at com.typesafe.zinc.Compiler.compile(Compiler.scala:189)
	at com.typesafe.zinc.Compiler.compile(Compiler.scala:180)
	at com.typesafe.zinc.Compiler.compile(Compiler.scala:171)
	at org.gradle.api.internal.tasks.scala.ZincScalaCompiler$Compiler.execute(ZincScalaCompiler.java:81)
	... 26 more
```
IDEA Scala plugin version: 2018.2.11. Gradle version: 5.1. JDK version: 1.8.0_45. Scala dependency version: 2.11.8.

IDEA reports an error after running a Scala .class file

IDEA reports an error after running a Scala .class file: ![screenshot](https://img-ask.csdn.net/upload/202005/04/1588569501_448365.png)

A question about running WordCount with Spark on YARN

**When running the wordcount program, the following keeps being printed and yarnAppState never changes to RUNNING:**
```
14/05/13 15:05:25 INFO yarn.Client: Application report from ASM:
	application identifier: application_1399949387820_0008
	appId: 8
	clientToAMToken: null
	appDiagnostics:
	appMasterHost: N/A
	appQueue: default
	appMasterRpcPort: 0
	appStartTime: 1399964104011
	yarnAppState: ACCEPTED
	distributedFinalState: UNDEFINED
	appTrackingUrl: master:8088/proxy/application_1399949387820_0008/
	appUser: hadoop
```
**I have 3 VMs: the master has 1 GB of RAM and each slave 512 MB. My launch script:**
```
export YARN_CONF_DIR=/home/hadoop/hadoop-2.2.0/etc/hadoop
SPARK_JAR=/home/hadoop/spark-0.9.0-incubating-bin-hadoop2/assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar \
./spark-class org.apache.spark.deploy.yarn.Client \
  --jar spark-wordcount-in-scala.jar \
  --class WordCount \
  --args yarn-standalone \
  --args hdfs://master:6000/input \
  --args hdfs://master:6000/output \
  --num-workers 1 \
  --master-memory 512m \
  --worker-memory 512m \
  --worker-cores 1
```
Please help!
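An application stuck in ACCEPTED usually means YARN cannot find a container for the ApplicationMaster, which is plausible on 512 MB slaves: the 512m request plus JVM overhead exceeds what a NodeManager can offer. A hedged yarn-site.xml sketch for each NodeManager; the values are illustrative and must fit the actual RAM of the slaves:

```
<!-- Declare how much memory the NodeManager may hand out, and lower the
     scheduler's minimum allocation so small containers can be granted. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>
</property>
```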

Example: reading MongoDB data with Scala Spark and writing it to HDFS

Looking for an example of reading MongoDB data with Spark in Scala (querying with Spark SQL) and writing the data to HDFS.
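A hedged sketch using the MongoDB Spark connector (artifact org.mongodb.spark:mongo-spark-connector_2.11; the URIs, database, collection, column names, and output path are all placeholders): load the collection as a DataFrame, query it with Spark SQL, and write the result to HDFS as Parquet.

```
import com.mongodb.spark.MongoSpark
import org.apache.spark.sql.SparkSession

object MongoToHdfs {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("MongoToHdfs")
      .config("spark.mongodb.input.uri",
              "mongodb://mongo-host:27017/mydb.mycollection")
      .getOrCreate()

    // Load the MongoDB collection as a DataFrame and query it via Spark SQL.
    val df = MongoSpark.load(spark)
    df.createOrReplaceTempView("docs")
    val result = spark.sql("SELECT name, value FROM docs WHERE value > 10")

    // Persist the query result to HDFS; Parquet preserves the schema.
    result.write.parquet("hdfs://namenode:9000/output/mongo_export")
    spark.stop()
  }
}
```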
