IntelliJ: no Scala option when creating a Scala project

Why is it that, when I create a new Scala project and select Scala on the left side of the New Project dialog, there is no Scala option on the right side, only the three options SBT, Activator, and IDEA?
[screenshot]

8 answers

Selecting IDEA creates a plain Scala project. (Note: depending on the IntelliJ IDEA version, this option may be labeled Scala instead of IDEA, but that makes no practical difference.)
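Once a project has been created this way, a quick sanity check that the Scala SDK is actually wired up is to run a minimal object. This is just a sketch; the file name and location (e.g. src/main/scala/Hello.scala) are illustrative:

```scala
// Minimal check that the project compiles and runs against the configured Scala SDK
object Hello {
  def main(args: Array[String]): Unit = {
    // scala.util.Properties.versionString reports the Scala library version in use
    println(s"Hello, Scala! (${scala.util.Properties.versionString})")
  }
}
```

If this compiles and prints a version string, the plugin and SDK are set up correctly.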

Same problem here. OP, did you ever solve it?

qq_31851531
@2551: I originally downloaded the Community edition; after switching to the Ultimate edition the option appeared.
Replied over 2 years ago

[screenshot]

Has anyone solved this? I've wasted a whole day on it; this thing really is a trap...


Solved it myself. I had originally downloaded the Community edition; after switching to the Ultimate edition the option appeared.

Hello, I've hit the same problem. I switched to the Ultimate edition and it still doesn't work. Does anyone know how to fix this?

qq_31851531
@2551: You're probably on the latest version. The latest one doesn't seem to have it, and I don't know how to fix that either. I simply use version 16.03; the two versions don't differ much, and companies generally don't develop on the newest release anyway.
Replied about 2 years ago

I'm running into the same situation here and haven't solved it yet.

qq_41983010
@johnnyAndCode: You need to install the Scala plugin; there are tutorials online. Download and set it up, then restart IDEA and it works.
Replied about a year ago

Make sure the Scala plugin is installed, then in IDEA's global settings choose the scala-sdk under Global Libraries and you're done.

BGH12ET
What is this? Completely useless. Where is this "Global libraries" supposed to be?
Replied 7 months ago

Figured out the problem: it's an edition issue. With the Ultimate edition choose IDEA, with the Community edition choose Scala, and then it works.

a98709474
@Hilter_man: That's not right. I tried it: the 2017 versions have no Scala option at all, but the 2016 version does.
Replied almost 2 years ago
Other related recommendations
Problems when building a Scala project with IDEA + sbt
Using IDEA + sbt, I created a Scala project, and IDEA's console immediately reported an error, as shown here: ![图片说明](https://img-ask.csdn.net/upload/201711/25/1511581280_348398.png) The log file IDEA points to contains the following:
```
Error during sbt execution: java.lang.RuntimeException: Expected one of local, maven-local, maven-central, scala-tools-releases, scala-tools-snapshots, sonatype-oss-releases, sonatype-oss-snapshots, jcenter, got 'local '.
java.lang.RuntimeException: Expected one of local, maven-local, maven-central, scala-tools-releases, scala-tools-snapshots, sonatype-oss-releases, sonatype-oss-snapshots, jcenter, got 'local '.
    at xsbti.Predefined.toValue(Predefined.java:28)
    at xsbt.boot.Repository$Predefined$.apply(LaunchConfiguration.scala:114)
    at xsbt.boot.ConfigurationParser$$anonfun$getRepositories$1.apply(ConfigurationParser.scala:197)
    at xsbt.boot.ConfigurationParser$$anonfun$getRepositories$1.apply(ConfigurationParser.scala:196)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.ivy.core.RelativeUrlResolver.map(RelativeUrlResolver.java:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    at xsbt.boot.ConfigurationParser.getRepositories(ConfigurationParser.scala:196)
    at xsbt.boot.ConfigurationParser$$anonfun$4.apply(ConfigurationParser.scala:71)
    at xsbt.boot.ConfigurationParser$$anonfun$processSection$1.apply(ConfigurationParser.scala:109)
    at xsbt.boot.ConfigurationParser.process(ConfigurationParser.scala:110)
    at xsbt.boot.ConfigurationParser.processSection(ConfigurationParser.scala:109)
    at xsbt.boot.ConfigurationParser.xsbt$boot$ConfigurationParser$$apply(ConfigurationParser.scala:49)
    at xsbt.boot.ConfigurationParser$$anonfun$apply$3.apply(ConfigurationParser.scala:47)
    at xsbt.boot.Using$.withResource(Using.scala:10)
    at xsbt.boot.Using$.apply(Using.scala:9)
    at xsbt.boot.Configuration$$anonfun$parse$1.apply(Configuration.scala:21)
    at xsbt.boot.Using$.withResource(Using.scala:10)
    at xsbt.boot.Using$.apply(Using.scala:9)
    at xsbt.boot.Configuration$.parse$fcb646c(Configuration.scala:21)
    at xsbt.boot.Launch$.apply(Launch.scala:18)
    at xsbt.boot.Boot$.runImpl(Boot.scala:41)
    at xsbt.boot.Boot$.main(Boot.scala:17)
    at xsbt.boot.Boot.main(Boot.scala)
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
```
May I ask what this problem is? The error appears every time I create an sbt project.
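The message "got 'local '" (note the trailing space after local) suggests the sbt launcher's repositories configuration contains an entry with trailing whitespace that the parser rejects. As a hedged sketch, a cleanly formatted ~/.sbt/repositories file (path and entries illustrative) has no trailing spaces after each entry:

```
[repositories]
  local
  maven-central
```

Checking the [repositories] section of whatever launcher configuration the IDE generated for stray whitespace is worth trying.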
IntelliJ IDEA reports an error when rerunning after modifying Scala code
Error:scalac: Error: Could not find an output directory for D:\IdeaProjects\scala\src\main\scala\Scala.scala in List((d:\IdeaProjects\scala\target\scala-2.11\resource_managed\test,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\resources,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\target\scala-2.11\resource_managed\main,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\resources,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\target\scala-2.11\src_managed\test,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\scala-2.11,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\scala,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\java,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\target\scala-2.11\src_managed\main,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\scala-2.11,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\scala,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\java,d:\IdeaProjects\scala\target\scala-2.11\classes)) scala.reflect.internal.FatalError: Could not find an output directory for D:\IdeaProjects\scala\src\main\scala\Scala.scala in List((d:\IdeaProjects\scala\target\scala-2.11\resource_managed\test,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\resources,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\target\scala-2.11\resource_managed\main,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\resources,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\target\scala-2.11\src_managed\test,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\scala-2.11,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\scala,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\java,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\target\scala-2.11\src_managed\main,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\scala-2.11,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\scala,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\java,d:\IdeaProjects\scala\target\scala-2.11\classes)) at scala.tools.nsc.settings.MutableSettings$OutputDirs.outputDirFor(MutableSettings.scala:311) at scala.tools.nsc.backend.jvm.BytecodeWriters$class.outputDirectory(BytecodeWriters.scala:26) at scala.tools.nsc.backend.jvm.GenASM.outputDirectory(GenASM.scala:23) at scala.tools.nsc.backend.jvm.BytecodeWriters$class.getFile(BytecodeWriters.scala:41) at scala.tools.nsc.backend.jvm.GenASM.getFile(GenASM.scala:23) at scala.tools.nsc.backend.jvm.GenASM$JBuilder.writeIfNotTooBig(GenASM.scala:531) at scala.tools.nsc.backend.jvm.GenASM$JMirrorBuilder.genMirrorClass(GenASM.scala:2835) at scala.tools.nsc.backend.jvm.GenASM$AsmPhase.emitFor$1(GenASM.scala:193) at scala.tools.nsc.backend.jvm.GenASM$AsmPhase.run(GenASM.scala:203) at scala.tools.nsc.Global$Run.compileUnitsInternal(Global.scala:1500) at scala.tools.nsc.Global$Run.compileUnits(Global.scala:1487) at scala.tools.nsc.Global$Run.compileSources(Global.scala:1482) at scala.tools.nsc.Global$Run.compile(Global.scala:1580) at 
xsbt.CachedCompiler0.run(CompilerInterface.scala:126) at xsbt.CachedCompiler0.run(CompilerInterface.scala:102) at xsbt.CompilerInterface.run(CompilerInterface.scala:27) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at sbt.compiler.AnalyzingCompiler.call(AnalyzingCompiler.scala:102) at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:48) at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:41) at org.jetbrains.jps.incremental.scala.local.IdeaIncrementalCompiler.compile(IdeaIncrementalCompiler.scala:29) at org.jetbrains.jps.incremental.scala.local.LocalServer.compile(LocalServer.scala:26) at org.jetbrains.jps.incremental.scala.remote.Main$.make(Main.scala:62) at org.jetbrains.jps.incremental.scala.remote.Main$.nailMain(Main.scala:20) at org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.martiansoftware.nailgun.NGSession.run(NGSession.java:319) Information:2015/4/24 10:39 - Compilation completed with 1 error and 0 warnings in 600ms
Intellij: sbt error when importing a git project
![图片说明](https://img-ask.csdn.net/upload/201708/21/1503299221_866858.jpg) As shown, typesafe-related packages are referenced in build.sbt, and importing the sbt project fails with the error above. Many thanks.
IDEA fails to import the sbt dependency libraries! I've tried for three days without success; please help!!!
"dump project structure from sbt" never succeeds. The sbt shell shows: [info] Loading settings for project global-plugins from idea.sbt. Scala 2.11.8, sbt 1.3.4. How do I solve this?
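A dependency dump that hangs or fails at this stage is often a download problem rather than a project problem. Two hedged things to check, with illustrative contents: first, that project/build.properties pins the sbt version the project expects,

```
sbt.version=1.3.4
```

and second, that a reachable resolver mirror is configured if access to Maven Central is slow or blocked, e.g. in ~/.sbt/repositories (the Aliyun mirror below is one commonly suggested option, not an official recommendation):

```
[repositories]
  local
  aliyun: https://maven.aliyun.com/repository/public/
  maven-central
```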
What is the difference between compiling a Spark program with scalac and running it with scala, versus packaging a jar and submitting it with spark-submit?
Way 1: compile with the scalac command, then run with the scala command. Way 2: package with sbt, then submit to Spark with spark-submit. What is the difference between these two approaches, and what are the pros and cons of each? Thanks in advance!
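For concreteness, here is a hedged sketch of the two workflows (file, class, and jar names are illustrative). The first runs everything in a single local JVM and you must assemble the Spark classpath yourself; the second hands the packaged jar to spark-submit, which sets up the classpath and can distribute the job across a cluster:

```bash
# Way 1: single JVM on the local machine; no cluster scheduling.
# Build an explicit classpath from the Spark jars (assumes no spaces in paths).
SPARK_CP="$(echo "$SPARK_HOME"/jars/*.jar | tr ' ' ':')"
scalac -classpath "$SPARK_CP" WordCount.scala
scala -classpath "$SPARK_CP:." WordCount

# Way 2: package, then let spark-submit manage classpath, master, and resources.
sbt package
spark-submit --class WordCount --master yarn \
  target/scala-2.11/wordcount_2.11-1.0.jar
```

In practice Way 1 is only convenient for local experiments; spark-submit is what supports --master, executor resources, and the deploy modes.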
In IDEA, some breakpoints are never hit when developing in Scala!
When I debug Scala with breakpoints in IDEA, I find that some breakpoints inside a Scala foreach are never hit even though the code clearly executes. ![图片说明](https://img-ask.csdn.net/upload/201708/30/1504060501_549112.png) As shown, only the one circled breakpoint gets a check mark and actually suspends the program; none of the others stop. Could someone knowledgeable point me in the right direction? The IDEA and Scala versions are shown below. ![图片说明](https://img-ask.csdn.net/upload/201708/30/1504060660_510886.png) ![图片说明](https://img-ask.csdn.net/upload/201708/30/1504060672_136190.png)
Help! Kafka Maven dependency conflict
I'm working on my graduation project, which involves this bit of Kafka Java API code:
```java
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public void sendMessage(AppLogEntity e) {
    // create the configuration object
    Properties props = new Properties();
    props.put("metadata.broker.list", "192.168.72.182:9092");
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("request.required.acks", "1");
    // create the producer
    Producer<Integer, String> producer = new Producer<Integer, String>(new ProducerConfig(props));
    sendSingleLog(producer, Constants.TOPIC_APP_STARTUP, e.getAppStartupLogs());
    sendSingleLog(producer, Constants.TOPIC_APP_ERRROR, e.getAppErrorLogs());
    sendSingleLog(producer, Constants.TOPIC_APP_EVENT, e.getAppEventLogs());
    sendSingleLog(producer, Constants.TOPIC_APP_PAGE, e.getAppPageLogs());
    sendSingleLog(producer, Constants.TOPIC_APP_USAGE, e.getAppUsageLogs());
    // send the messages
    producer.close();
}
```
The framework is SSM. On startup it reports:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'collectLogController': Failed to introspect bean class [com.automan.applogs.collect.web.controller.CollectLogController] for lookup method metadata: could not find class that it depends on; nested exception is java.lang.NoClassDefFoundError: kafka/javaapi/producer/Producer

Further down:

Caused by: java.lang.ClassNotFoundException: kafka.javaapi.producer.Producer

My first reaction was to check the pom. The declared dependencies look fine, but expanding the detailed dependency tree shows that the Scala versions pulled in by the Kafka dependencies conflict: ![图片说明](https://img-ask.csdn.net/upload/201911/26/1574744977_566670.png) I've tried every fix I could find without success, and I don't even know whether this conflict is what causes the error above. Any help is greatly appreciated!
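The kafka.javaapi.* classes belong to the old Scala-based Kafka client, which ships in the 0.8.x-era kafka_2.1x artifacts; if another dependency drags in a different Kafka or Scala variant, the class can disappear at runtime even though the pom compiles. As a hedged sketch (artifact and version chosen for illustration, not a definitive fix), pinning one old-client artifact explicitly looks like:

```xml
<!-- Hypothetical pom.xml fragment: pin one kafka artifact that actually
     contains kafka.javaapi.producer.Producer -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.11</artifactId>
  <version>0.8.2.2</version>
</dependency>
```

Running mvn dependency:tree and adding exclusions on whichever dependency pulls the conflicting kafka/scala artifacts is the usual way to make such a pin stick.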
Problems running SparkPi in IntelliJ
jdk1.8.0_40, scala-2.10.4, hadoop-2.6.0, spark-1.1.1-bin-hadoop2.4. Some key lines from the log:
15/03/31 19:31:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/03/31 19:32:08 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, slave3): java.lang.ClassNotFoundException: SparkPi$$anonfun$1
15/03/31 19:32:08 ERROR TaskSetManager: Task 1 in stage 0.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 6, slave3): java.lang.ClassNotFoundException: SparkPi$$anonfun$1
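The ClassNotFoundException for SparkPi$$anonfun$1 on the worker (slave3) means the executors never received the compiled classes: running from the IDE against a standalone master only puts them on the driver's classpath. A hedged sketch of the usual fix, pointing SparkConf at the built artifact (the jar path is illustrative):

```scala
import org.apache.spark.SparkConf

// Ship the application jar to the executors so anonymous-function classes resolve
val conf = new SparkConf()
  .setAppName("SparkPi")
  .setMaster("spark://master:7077")
  .setJars(Seq("target/scala-2.10/sparkpi_2.10-1.0.jar"))
```

Building the jar first (e.g. with sbt package) and rerunning usually clears this class of error; submitting with spark-submit avoids it entirely.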
Running a jar on spark-shell reports an error: main class not found
A Scala IntelliJ project, packaged with sbt; running it on spark-shell reports that the main class cannot be found. It uses two Chinese word-segmentation packages (ansj_seg-2.0.8.jar, nlp-lang-0.3.jar), which have been added under External Libraries. Packaging succeeds; running fails. ![图片说明](https://img-ask.csdn.net/upload/201601/26/1453780626_723163.jpg) ![图片说明](https://img-ask.csdn.net/upload/201601/26/1453780648_659305.jpg) The spark-shell submit command: [gaohui@hadoop-1-2 test]$ spark-submit --master yarn --driver-memory 5G --num-executors 20 --executor-cores 16 --executor-memory 10G --conf spark.serializer=org.apache.spark.serializer.KryoSerializer --class NLP_V6.Nlp_test --jars /home/gaohui/test/NLP_v6_test.jar /home/gaohui/test/NLP_v6_test.jar Error screenshot: ![图片说明](https://img-ask.csdn.net/upload/201601/26/1453780776_603750.jpg)
sbt fails to start; looking for a solution.
I installed sbt, and the following error appears when I run it. Does anyone know how to fix this? [root@Spark ~]# sbt sbt-version java.lang.NoClassDefFoundError: scala/reflect/internal/Trees at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:763) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at java.net.URLClassLoader.access$100(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:368) at java.net.URLClassLoader$1.run(URLClassLoader.java:362) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:361) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:763) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at java.net.URLClassLoader.access$100(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:368) at java.net.URLClassLoader$1.run(URLClassLoader.java:362) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:361) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at xsbt.boot.Pre$.xsbt$boot$Pre$$classMissing$1(Pre.scala:66) at xsbt.boot.Pre$$anonfun$getMissing$1.apply(Pre.scala:67) at scala.collection.TraversableLike$$anonfun$filter$1.apply(TraversableLike.scala:264) at scala.collection.immutable.List.foreach(List.scala:318) at org.apache.ivy.core.RelativeUrlResolver.filter(RelativeUrlResolver.java:263) at scala.collection.AbstractTraversable.filter(Traversable.scala:105) at xsbt.boot.Pre$.getMissing$d83f809$3a8a6f87(Pre.scala:67) at xsbt.boot.Launch.checkLoader$2accd70c(Launch.scala:185) at xsbt.boot.Launch.xsbt$boot$Launch$$provider$1(Launch.scala:249) at xsbt.boot.Launch$$anonfun$xsbt$boot$Launch$$getScalaProvider0$2.apply(Launch.scala:252) at xsbt.boot.Launch$$anonfun$xsbt$boot$Launch$$getScalaProvider0$2.apply(Launch.scala:251) at scala.Option.flatMap(Option.scala:170) at xsbt.boot.Launch.xsbt$boot$Launch$$getScalaProvider0(Launch.scala:251) at xsbt.boot.Launch$$anon$3.call(Launch.scala:240) at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:45) at xsbt.boot.Locks$.apply0(Locks.scala:31) at xsbt.boot.Locks$.apply(Locks.scala:28) at xsbt.boot.Launch.locked(Launch.scala:238) at xsbt.boot.Launch.getScalaProvider(Launch.scala:240) at xsbt.boot.Launch$$anonfun$1.apply(Launch.scala:141) at xsbt.boot.Cache.newEntry(Cache.scala:16) at xsbt.boot.Cache.apply(Cache.scala:11) at xsbt.boot.Launch.getScala(Launch.scala:144) at xsbt.boot.Launch.getScala(Launch.scala:143) at xsbt.boot.Launch.xsbt$boot$Launch$$getAppProvider0(Launch.scala:219) at xsbt.boot.Launch$$anon$2.call(Launch.scala:196) at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:93) at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:78) at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:97) at xsbt.boot.Using$.withResource(Using.scala:10) at xsbt.boot.Using$.apply(Using.scala:9) at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:58) at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:48) at
xsbt.boot.Locks$.apply0(Locks.scala:31) at xsbt.boot.Locks$.apply(Locks.scala:28) at xsbt.boot.Launch.locked(Launch.scala:238) at xsbt.boot.Launch.app(Launch.scala:147) at xsbt.boot.Launch.app(Launch.scala:145) at xsbt.boot.Launch$.run(Launch.scala:102) at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35) at xsbt.boot.Launch$.launch(Launch.scala:117) at xsbt.boot.Launch$.apply(Launch.scala:18) at xsbt.boot.Boot$.runImpl(Boot.scala:41) at xsbt.boot.Boot$.main(Boot.scala:17) at xsbt.boot.Boot.main(Boot.scala) Caused by: java.lang.ClassNotFoundException: scala.reflect.internal.Trees at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 69 more Error during sbt execution: java.lang.NoClassDefFoundError: scala/reflect/internal/Trees
Running Spark code in client mode on a YARN cluster
When submitting a Spark wordcount to run on the YARN cluster, the following error appears. Does any expert know how to fix it?
```
[hadoop00@hadoop02 ~]$ ./spark-submit-wordcount-yarn-client.sh // execution output follows:
19/07/31 17:12:36 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.2.102:4040
19/07/31 17:12:36 INFO spark.SparkContext: Added JAR file:/home/hadoop00/spark-core-1.0-SNAPSHOT-jar-with-dependencies.jar at spark://192.168.2.102:43723/jars/spark-core-1.0-SNAPSHOT-jar-with-dependencies.jar with timestamp 1564564356841
19/07/31 17:12:40 INFO yarn.Client: Requesting a new application from cluster with 0 NodeManagers
19/07/31 17:12:41 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
19/07/31 17:12:41 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
19/07/31 17:12:41 INFO yarn.Client: Setting up container launch context for our AM
19/07/31 17:12:41 INFO yarn.Client: Setting up the launch environment for our AM container
19/07/31 17:12:41 INFO yarn.Client: Preparing resources for our AM container
19/07/31 17:12:45 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
19/07/31 17:12:53 INFO yarn.Client: Uploading resource file:/tmp/spark-59635080-0711-4817-9e3b-b25f528cbbbe/__spark_libs__5797595590401639249.zip -> hdfs://myha01/user/hadoop00/.sparkStaging/application_1564523762236_0001/__spark_libs__5797595590401639249.zip
19/07/31 17:13:07 INFO yarn.Client: Uploading resource file:/tmp/spark-59635080-0711-4817-9e3b-b25f528cbbbe/__spark_conf__627970737981952935.zip -> hdfs://myha01/user/hadoop00/.sparkStaging/application_1564523762236_0001/__spark_conf__.zip
19/07/31 17:13:07 INFO spark.SecurityManager: Changing view acls to: hadoop00
19/07/31 17:13:07 INFO spark.SecurityManager: Changing modify acls to: hadoop00
19/07/31 17:13:07 INFO spark.SecurityManager: Changing view acls groups to:
19/07/31 17:13:07 INFO spark.SecurityManager: Changing modify acls groups to:
19/07/31 17:13:07 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop00); groups with view permissions: Set(); users with modify permissions: Set(hadoop00); groups with modify permissions: Set()
19/07/31 17:13:07 INFO yarn.Client: Submitting application application_1564523762236_0001 to ResourceManager
19/07/31 17:13:08 INFO impl.YarnClientImpl: Submitted application application_1564523762236_0001
19/07/31 17:13:08 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1564523762236_0001 and attemptId None
19/07/31 17:13:09 INFO yarn.Client: Application report for application_1564523762236_0001 (state: ACCEPTED)
19/07/31 17:13:09 INFO yarn.Client: client token: N/A diagnostics: N/A ApplicationMaster host: N/A ApplicationMaster RPC port: -1 queue: default start time: 1564523805324 final status: UNDEFINED tracking URL: http://hadoop03:8088/proxy/application_1564523762236_0001/ user: hadoop00
19/07/31 17:13:10 INFO yarn.Client: Application report for application_1564523762236_0001 (state: FAILED)
19/07/31 17:13:10 INFO yarn.Client: client token: N/A diagnostics: Application application_1564523762236_0001 failed 2 times due to Error launching appattempt_1564523762236_0001_000002. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container. This token is expired.
current time is 1564564389887 found 1564524406596 Note: System times on machines may be out of sync. Check system time and time zones. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:422) at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168) at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106) at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:123) at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:250) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) . Failing the application. ApplicationMaster host: N/A ApplicationMaster RPC port: -1 queue: default start time: 1564523805324 final status: FAILED tracking URL: http://hadoop03:8088/cluster/app/application_1564523762236_0001 user: hadoop00 19/07/31 17:13:10 INFO yarn.Client: Deleted staging directory hdfs://myha01/user/hadoop00/.sparkStaging/application_1564523762236_0001 19/07/31 17:13:10 ERROR spark.SparkContext: Error initializing SparkContext. org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master. at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85) at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62) at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173) at org.apache.spark.SparkContext.<init>(SparkContext.scala:509) at p2._01ScalaWordCountRemoteOps$.main(_01ScalaWordCountRemoteOps.scala:21) at p2._01ScalaWordCountRemoteOps.main(_01ScalaWordCountRemoteOps.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775) at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 19/07/31 17:13:10 INFO server.AbstractConnector: Stopped Spark@6f2bafef{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 19/07/31 17:13:10 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.2.102:4040 19/07/31 17:13:10 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered! 
19/07/31 17:13:10 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors 19/07/31 17:13:10 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down 19/07/31 17:13:10 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices (serviceOption=None, services=List(), started=false) 19/07/31 17:13:10 INFO cluster.YarnClientSchedulerBackend: Stopped 19/07/31 17:13:10 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 19/07/31 17:13:10 INFO memory.MemoryStore: MemoryStore cleared 19/07/31 17:13:10 INFO storage.BlockManager: BlockManager stopped 19/07/31 17:13:10 INFO storage.BlockManagerMaster: BlockManagerMaster stopped 19/07/31 17:13:10 WARN metrics.MetricsSystem: Stopping a MetricsSystem that is not running 19/07/31 17:13:10 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 19/07/31 17:13:10 INFO spark.SparkContext: Successfully stopped SparkContext Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master. at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85) at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62) at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173) at org.apache.spark.SparkContext.<init>(SparkContext.scala:509) at p2._01ScalaWordCountRemoteOps$.main(_01ScalaWordCountRemoteOps.scala:21) at p2._01ScalaWordCountRemoteOps.main(_01ScalaWordCountRemoteOps.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775) at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 19/07/31 17:13:10 INFO util.ShutdownHookManager: Shutdown hook called 19/07/31 17:13:10 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-59635080-0711-4817-9e3b-b25f528cbbbe ```
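The decisive line in the diagnostics is "This token is expired ... System times on machines may be out of sync": YARN rejects the container launch because the clocks of the submitting machine and the cluster nodes disagree. A hedged first step (the NTP server hostname here is just an example) is to sync every node's clock and resubmit:

```bash
# Run on each cluster node and on the client; requires the ntpdate package
sudo ntpdate ntp.aliyun.com
```

Installing an ntpd or chrony service so the clocks stay in sync is the durable fix.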
Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState'
Using spark-sql in IDEA reports an error. Up front: I have already copied the three configuration files core-site.xml, hdfs-site.xml, and hive-site.xml into resources, and I can connect to the metastore. I have seen many fixes online and applied the changes, but none of them took effect. What I have already done: ![图片说明](https://img-ask.csdn.net/upload/201908/09/1565356414_188554.png) ![图片说明](https://img-ask.csdn.net/upload/201908/09/1565356355_466558.png) ![图片说明](https://img-ask.csdn.net/upload/201908/09/1565356390_666077.png) ![图片说明](https://img-ask.csdn.net/upload/201908/09/1565356428_729364.png) ![图片说明](https://img-ask.csdn.net/upload/201908/09/1565356441_976555.png) The error: ![图片说明](https://img-ask.csdn.net/upload/201908/09/1565356461_588231.png)
Asking: what does this "cannot resolve symbol println" mean?
It appeared while typing a Scala demo in IDEA. ![图片说明](https://img-ask.csdn.net/upload/201707/09/1499613675_366221.jpg)
An entry-level wordcount test on a Spark cluster fails; please help, experts
/* Created by jyq on 10/14/15. */ This is all the source code there is:

import org.apache.spark.{SparkConf, SparkContext, SparkFiles}
object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount").setMaster("spark://master:7077")
    val sc = new SparkContext(conf)
    sc.addFile("file:///home/jyq/Desktop/1.txt")
    val textRDD = sc.textFile(SparkFiles.get("file:///home/jyq/Desktop/1.txt"))
    val result = textRDD.flatMap(line => line.split("\\s+")).map(word => (word, 1)).reduceByKey(_ + _)
    result.saveAsTextFile("/home/jyq/Desktop/2.txt")
    println("hello world")
  }
}

The log output from compiling and running in IDEA: Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected scheme-specific part at index 5: file: at org.apache.hadoop.fs.Path.initialize(Path.java:206) at org.apache.hadoop.fs.Path.<init>(Path.java:172) at org.apache.hadoop.fs.Path.<init>(Path.java:94) at org.apache.hadoop.fs.Globber.glob(Globber.java:211) at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644) at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:257) at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228) at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313) at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:290) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:290) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:289) at WordCount$.main(WordCount.scala:16) at WordCount.main(WordCount.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140) Caused by:
java.net.URISyntaxException: Expected scheme-specific part at index 5: file: at java.net.URI$Parser.fail(URI.java:2848) at java.net.URI$Parser.failExpecting(URI.java:2854) at java.net.URI$Parser.parse(URI.java:3057) at java.net.URI.<init>(URI.java:746) at org.apache.hadoop.fs.Path.initialize(Path.java:203) ... 41 more 15/10/15 20:08:36 INFO SparkContext: Invoking stop() from shutdown hook 15/10/15 20:08:36 INFO SparkUI: Stopped Spark web UI at http://192.168.179.111:4040 15/10/15 20:08:36 INFO DAGScheduler: Stopping DAGScheduler 15/10/15 20:08:36 INFO SparkDeploySchedulerBackend: Shutting down all executors 15/10/15 20:08:36 INFO SparkDeploySchedulerBackend: Asking each executor to shut down 15/10/15 20:08:36 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 15/10/15 20:08:36 INFO MemoryStore: MemoryStore cleared 15/10/15 20:08:36 INFO BlockManager: BlockManager stopped 15/10/15 20:08:36 INFO BlockManagerMaster: BlockManagerMaster stopped 15/10/15 20:08:36 INFO SparkContext: Successfully stopped SparkContext 15/10/15 20:08:36 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 15/10/15 20:08:36 INFO ShutdownHookManager: Shutdown hook called 15/10/15 20:08:36 INFO ShutdownHookManager: Deleting directory /tmp/spark-d7ca48d5-4e31-4a07-9264-8d7f5e8e1032 15/10/15 20:08:36 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. Process finished with exit code 1
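The exception comes from the SparkFiles.get call: it expects the bare name of a file previously registered with addFile (here "1.txt"), not a file:// URI, so the constructed path ends up malformed. A hedged corrected sketch of the same program:

```scala
import org.apache.spark.{SparkConf, SparkContext, SparkFiles}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount").setMaster("spark://master:7077")
    val sc = new SparkContext(conf)
    sc.addFile("file:///home/jyq/Desktop/1.txt")
    // SparkFiles.get takes the file name registered via addFile, not a URI
    val textRDD = sc.textFile(SparkFiles.get("1.txt"))
    val result = textRDD
      .flatMap(_.split("\\s+"))
      .map((_, 1))
      .reduceByKey(_ + _)
    // saveAsTextFile creates a directory; it must not already exist
    result.saveAsTextFile("/home/jyq/Desktop/2.txt")
    println("hello world")
    sc.stop()
  }
}
```

Note that with a spark:// master the input must also be reachable from the executors; reading from HDFS instead of a local desktop path sidesteps that.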
SparkSql: reading a Hive table whose name contains a "." fails
val hiveDeptDF = sqlContext.read.table("emp_test.emp") is meant to read the emp table in the Hive database emp_test, but it errors out saying the name must not contain ".": Exception in thread "main" org.apache.spark.sql.AnalysisException: Specifying database name or other qualifiers are not allowed for temporary tables. If the table name has dots (.) in it, please quote the table name with backticks (`).; at org.apache.spark.sql.catalyst.analysis.Catalog$class.getTableName(Catalog.scala:70) at org.apache.spark.sql.catalyst.analysis.SimpleCatalog.getTableName(Catalog.scala:82) at org.apache.spark.sql.catalyst.analysis.SimpleCatalog.lookupRelation(Catalog.scala:104) at org.apache.spark.sql.DataFrameReader.table(DataFrameReader.scala:338) at Hive2Rdbms$.main(Hive2Rdbms.scala:16) at Hive2Rdbms.main(Hive2Rdbms.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140) After I add backticks, it then reports that the table cannot be found. The Hive database itself is fine.
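The stack trace goes through SimpleCatalog, the plain SQLContext catalog that only knows temporary tables, so neither the dotted name nor the backticked one can reach Hive. A hedged sketch for Spark 1.x (variable names illustrative): build a HiveContext, whose catalog understands database.table qualifiers:

```scala
import org.apache.spark.sql.hive.HiveContext

// A HiveContext (not a plain SQLContext) resolves qualified Hive table names
val hiveContext = new HiveContext(sc)
val hiveDeptDF = hiveContext.read.table("emp_test.emp")
```

This assumes hive-site.xml is on the classpath so the metastore can be reached.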
Oozie task executes successfully, but the workflow shows killed
I configured a task in Oozie. The task itself executes successfully, but the workflow shows killed. The logs report the following errors: ![图片说明](https://img-ask.csdn.net/upload/201908/08/1565255997_632507.png) ![图片说明](https://img-ask.csdn.net/upload/201908/08/1565256009_163062.png) I saw suggestions online to swap the MySQL driver jar; I have tried that and found it is not the cause. I am using Hue, and the HDP version is 2.6.
Submitting a Spark job with Cassandra errors: Guava version below 16.0.1, yet the jar on the classpath is 19.0; Spark local mode runs fine
![图片说明](https://img-ask.csdn.net/upload/201903/27/1553694354_203466.png)

Caused by: com.datastax.driver.core.exceptions.DriverInternalError: Detected incompatible version of Guava in the classpath. You need 16.0.1 or higher.

The Cassandra version is 3.x, and the Guava package inside it is 19.0, yet this error still occurs. Could an expert please take a look? The error output:
```
19/03/27 21:21:23 INFO scheduler.DAGScheduler: ResultStage 2 (foreachPartition at DmpOfflineRecive.scala:45) failed in 0.368 s due to Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost, executor driver): java.lang.ExceptionInInitializerError
    at com.datastax.driver.core.PoolingOptions.<clinit>(PoolingOptions.java:137)
    at com.apus.dmp.client.scylladb.client.AbsScylladbClient.init(AbsScylladbClient.java:62)
    at com.apus.dmp.client.scylladb.client.ScylladbOffLineClient.<init>(ScylladbOffLineClient.java:29)
    at com.apus.dmp.client.scylladb.client.ScylladbOffLineClient.getInstance(ScylladbOffLineClient.java:18)
    at com.apus.woody.imp.DmpOfflineRecive$$anonfun$main$1.apply(DmpOfflineRecive.scala:47)
    at com.apus.woody.imp.DmpOfflineRecive$$anonfun$main$1.apply(DmpOfflineRecive.scala:45)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.DriverInternalError: Detected incompatible version of Guava in the classpath. You need 16.0.1 or higher.
    at com.datastax.driver.core.GuavaCompatibility.selectImplementation(GuavaCompatibility.java:191)
    at com.datastax.driver.core.GuavaCompatibility.<clinit>(GuavaCompatibility.java:59)
    ... 16 more
```
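The usual explanation for "works in local mode, fails on the cluster" is that Spark's own, older Guava wins on the executor classpath ahead of the 19.0 jar the application bundles. One hedged workaround is to shade (relocate) Guava inside the application jar so the Cassandra driver always sees its own copy regardless of classpath order; with sbt-assembly the fragment looks roughly like this (hypothetical build.sbt fragment, sbt-assembly plugin assumed):

```scala
// Relocate Guava classes inside the assembly so Spark's older Guava
// no longer shadows the version the Cassandra driver needs
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.google.common.**" -> "shaded.com.google.common.@1").inAll
)
```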
Spark + ansj word segmentation throws ArrayIndexOutOfBoundsException
```
val lines = sc.textFile("file:///D:/data/solr.txt")
val hashingTF = new mllib.feature.HashingTF()
val sentences = lines.collect().map { sents =>
  val data = sents.split(",")
  val lable = "1"
  val sentence = sents.replaceAll("\t", "")
  println(sentence)
  val temp = ToAnalysis.parse(sentence) // the line that throws
  val stopwords: java.util.List[String] = sc.textFile("hdfs:/svm/stopword.dic").collect().toSeq
  FilterModifWord.insertStopWords(stopwords)
  // (3) remove stopwords by part of speech; "w" is punctuation
  FilterModifWord.insertStopNatures("w", null)
  val filter = FilterModifWord.modifResult(temp)
  val sent = for (i <- Range(0, filter.size())) yield filter.get(i).getName
  val message = sent.toArray
  message.map { word =>
    termMap.put(hashingTF.indexOf(word), word)
  }
  RawDataRecord(lable, message)
}
```
```
16/12/17 17:30:45 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 63 ms on localhost (1/1)
16/12/17 17:30:45 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/12/17 17:30:45 INFO DAGScheduler: ResultStage 0 (collect at seg_local.scala:33) finished in 0.102 s
16/12/17 17:30:45 INFO DAGScheduler: Job 0 finished: collect at seg_local.scala:33, took 0.146047 s
目前的分词器大部分都是单机服务器进行分词,或者使用hadoop mapreduce对存储在hdfs中大量的数据文本进行分词。由于mapreduce的速度较慢,相对spark来说代码书写较繁琐。
16/12/17 17:30:45 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 172.16.110.10:49409 in memory (size: 1850.0 B, free: 1992.9 MB)
16/12/17 17:30:46 INFO DICLOG: init user userLibrary ok path is : D:\Intellij\tsf_lda\library\default.dic
16/12/17 17:30:46 INFO DICLOG: init ambiguityLibrary ok!
16/12/17 17:30:46 INFO DICLOG: init core library ok use time :304
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 3
    at org.ansj.splitWord.Analysis.analysisStr(Analysis.java:115)
    at org.ansj.splitWord.Analysis.parseStr(Analysis.java:222)
    at org.ansj.splitWord.analysis.ToAnalysis.parse(ToAnalysis.java:103)
    at tsf_lda.seg_local$$anonfun$1.apply(seg_local.scala:38)
    at tsf_lda.seg_local$$anonfun$1.apply(seg_local.scala:33)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
    at tsf_lda.seg_local$.main(seg_local.scala:33)
    at tsf_lda.seg_local.main(seg_local.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
```
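Two things stand out in the posted code, independent of the ansj internals: the stopword file is re-read from HDFS once per sentence, and empty strings can reach ToAnalysis.parse after the tab-stripping. A hedged restructuring (same APIs as the question; RawDataRecord, termMap, lines, and hashingTF as defined there):

```scala
import scala.collection.JavaConverters._

// Load and register the stopword list once, on the driver
val stopwords: java.util.List[String] =
  sc.textFile("hdfs:/svm/stopword.dic").collect().toList.asJava
FilterModifWord.insertStopWords(stopwords)
FilterModifWord.insertStopNatures("w", null) // drop punctuation by part of speech

val sentences = lines.collect()
  .map(_.replaceAll("\t", ""))
  .filter(_.nonEmpty) // do not hand empty strings to the tokenizer
  .map { sentence =>
    val filter = FilterModifWord.modifResult(ToAnalysis.parse(sentence))
    val message = (0 until filter.size()).map(i => filter.get(i).getName).toArray
    message.foreach(word => termMap.put(hashingTF.indexOf(word), word))
    RawDataRecord("1", message)
  }
```

If the exception persists with non-empty input, checking that the ansj jar version matches the dictionary files it loads (the DICLOG lines) would be the next step.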
Asking: what does "cannot resolve symbol println" mean?
It appeared while writing Scala code in IDEA. ![图片说明](https://img-ask.csdn.net/upload/201707/09/1499613409_925907.jpg)