IDEA reports an error when running a Scala .class file

(The error message was posted only as a screenshot; the image is not available.)

Other related questions
sbt error: scala.runtime in compiler mirror not found

When compiling the kafka-manager project with sbt, it fails with the following output:

[info] Loading project definition from E:\workspace\idea\kafka-manager-master\project error: error while loading <root>, error in opening zip file scala.reflect.internal.MissingRequirementError: object scala.runtime in compiler mirror not found. at scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:16) at scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:17) at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:48) at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:40) at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:61) at scala.reflect.internal.Mirrors$RootsBase.getPackage(Mirrors.scala:172) at scala.reflect.internal.Mirrors$RootsBase.getRequiredPackage(Mirrors.scala:175) at scala.reflect.internal.Definitions$DefinitionsClass.RuntimePackage$lzycompute(Definitions.scala:183) at scala.reflect.internal.Definitions$DefinitionsClass.RuntimePackage(Definitions.scala:183) at scala.reflect.internal.Definitions$DefinitionsClass.RuntimePackageClass$lzycompute(Definitions.scala:184) at scala.reflect.internal.Definitions$DefinitionsClass.RuntimePackageClass(Definitions.scala:184) at scala.reflect.internal.Definitions$DefinitionsClass.AnnotationDefaultAttr$lzycompute(Definitions.scala:1024) at scala.reflect.internal.Definitions$DefinitionsClass.AnnotationDefaultAttr(Definitions.scala:1023) at scala.reflect.internal.Definitions$DefinitionsClass.syntheticCoreClasses$lzycompute(Definitions.scala:1153) at scala.reflect.internal.Definitions$DefinitionsClass.syntheticCoreClasses(Definitions.scala:1152) at scala.reflect.internal.Definitions$DefinitionsClass.symbolsNotPresentInBytecode$lzycompute(Definitions.scala:1196) at scala.reflect.internal.Definitions$DefinitionsClass.symbolsNotPresentInBytecode(Definitions.scala:1196) at scala.reflect.internal.Definitions$DefinitionsClass.init(Definitions.scala:1261) at scala.tools.nsc.Global$Run.<init>(Global.scala:1290) at sbt.compiler.Eval$$anon$1.<init>(Eval.scala:141) at sbt.compiler.Eval.run$lzycompute$1(Eval.scala:141) at sbt.compiler.Eval.run$1(Eval.scala:141) at sbt.compiler.Eval.unlinkAll$1(Eval.scala:144) at sbt.compiler.Eval.evalCommon(Eval.scala:153) at sbt.compiler.Eval.evalDefinitions(Eval.scala:122) at sbt.EvaluateConfigurations$.evaluateDefinitions(EvaluateConfigurations.scala:271) at sbt.EvaluateConfigurations$.evaluateSbtFile(EvaluateConfigurations.scala:109) at sbt.Load$.sbt$Load$$loadSettingsFile$1(Load.scala:775) at sbt.Load$$anonfun$sbt$Load$$memoLoadSettingsFile$1$1.apply(Load.scala:781) at sbt.Load$$anonfun$sbt$Load$$memoLoadSettingsFile$1$1.apply(Load.scala:780) at scala.collection.MapLike$class.getOrElse(MapLike.scala:128) at scala.collection.AbstractMap.getOrElse(Map.scala:58) at sbt.Load$.sbt$Load$$memoLoadSettingsFile$1(Load.scala:780) at sbt.Load$$anonfun$loadFiles$1$2.apply(Load.scala:788) at sbt.Load$$anonfun$loadFiles$1$2.apply(Load.scala:788) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at scala.collection.TraversableLike$class.map(TraversableLike.scala:244) at scala.collection.AbstractTraversable.map(Traversable.scala:105) at sbt.Load$.loadFiles$1(Load.scala:788) at 
sbt.Load$.discoverProjects(Load.scala:799) at sbt.Load$.discover$1(Load.scala:585) at sbt.Load$.sbt$Load$$loadTransitive(Load.scala:633) at sbt.Load$$anonfun$loadUnit$1.sbt$Load$$anonfun$$loadProjects$1(Load.scala:482) at sbt.Load$$anonfun$loadUnit$1$$anonfun$40.apply(Load.scala:485) at sbt.Load$$anonfun$loadUnit$1$$anonfun$40.apply(Load.scala:485) at sbt.Load$.timed(Load.scala:1025) at sbt.Load$$anonfun$loadUnit$1.apply(Load.scala:485) at sbt.Load$$anonfun$loadUnit$1.apply(Load.scala:459) at sbt.Load$.timed(Load.scala:1025) at sbt.Load$.loadUnit(Load.scala:459) at sbt.Load$$anonfun$25$$anonfun$apply$14.apply(Load.scala:311) at sbt.Load$$anonfun$25$$anonfun$apply$14.apply(Load.scala:310) at sbt.BuildLoader$$anonfun$componentLoader$1$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6.apply(BuildLoader.scala:91) at sbt.BuildLoader$$anonfun$componentLoader$1$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6.apply(BuildLoader.scala:90) at sbt.BuildLoader.apply(BuildLoader.scala:140) at sbt.Load$.loadAll(Load.scala:365) at sbt.Load$.loadURI(Load.scala:320) at sbt.Load$.load(Load.scala:316) at sbt.Load$.load(Load.scala:305) at sbt.Load$$anonfun$4.apply(Load.scala:146) at sbt.Load$$anonfun$4.apply(Load.scala:146) at sbt.Load$.timed(Load.scala:1025) at sbt.Load$.apply(Load.scala:146) at sbt.Load$.defaultLoad(Load.scala:39) at sbt.BuiltinCommands$.liftedTree1$1(Main.scala:496) at sbt.BuiltinCommands$.doLoadProject(Main.scala:496) at sbt.BuiltinCommands$$anonfun$loadProjectImpl$2.apply(Main.scala:488) at sbt.BuiltinCommands$$anonfun$loadProjectImpl$2.apply(Main.scala:488) at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:59) at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:59) at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:61) at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:61) at sbt.Command$.process(Command.scala:93) at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:96) at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:96) at sbt.State$$anon$1.process(State.scala:184) at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:96) at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:96) at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17) at sbt.MainLoop$.next(MainLoop.scala:96) at sbt.MainLoop$.run(MainLoop.scala:89) at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:68) at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:63) at sbt.Using.apply(Using.scala:24) at sbt.MainLoop$.runWithNewLog(MainLoop.scala:63) at sbt.MainLoop$.runAndClearLast(MainLoop.scala:46) at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:30) at sbt.MainLoop$.runLogged(MainLoop.scala:22) at sbt.StandardMain$.runManaged(Main.scala:57) at sbt.xMain.run(Main.scala:29) at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:109) at xsbt.boot.Launch$.withContextLoader(Launch.scala:128) at xsbt.boot.Launch$.run(Launch.scala:109) at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35) at xsbt.boot.Launch$.launch(Launch.scala:117) at xsbt.boot.Launch$.apply(Launch.scala:18) at xsbt.boot.Boot$.runImpl(Boot.scala:41) at xsbt.boot.Boot$.main(Boot.scala:17) at xsbt.boot.Boot.main(Boot.scala) [error] scala.reflect.internal.MissingRequirementError: object scala.runtime in compiler mirror not found. [error] Use 'last' for the full log. Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768M; support was removed in 8.0
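A hedged reading of the first lines: "error while loading <root>, error in opening zip file" means the compiler could not even open the scala-library jar it was handed, which usually points at a corrupted download in the local caches rather than at kafka-manager itself. A sketch of the usual remedy, assuming default cache locations:

```
# remove possibly corrupted cached jars so sbt re-downloads them on the next run
rm -rf ~/.sbt/boot
rm -rf ~/.ivy2/cache/org.scala-lang
```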

Error running a Scala hello world

Error:scalac: Error: org.jetbrains.jps.incremental.scala.remote.ServerException Error compiling sbt component 'compiler-interface-2.10.0-52.0' at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1$$anonfun$apply$2.apply(AnalyzingCompiler.scala:145) at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1$$anonfun$apply$2.apply(AnalyzingCompiler.scala:142) at sbt.IO$.withTemporaryDirectory(IO.scala:291) at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1.apply(AnalyzingCompiler.scala:142) at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1.apply(AnalyzingCompiler.scala:139) at sbt.IO$.withTemporaryDirectory(IO.scala:291) at sbt.compiler.AnalyzingCompiler$.compileSources(AnalyzingCompiler.scala:139) at sbt.compiler.IC$.compileInterfaceJar(IncrementalCompiler.scala:52) at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl$.getOrCompileInterfaceJar(CompilerFactoryImpl.scala:96) at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl$$anonfun$getScalac$1.apply(CompilerFactoryImpl.scala:50) at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl$$anonfun$getScalac$1.apply(CompilerFactoryImpl.scala:49) at scala.Option.map(Option.scala:146) at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl.getScalac(CompilerFactoryImpl.scala:49) at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl.createCompiler(CompilerFactoryImpl.scala:22) at org.jetbrains.jps.incremental.scala.local.CachingFactory$$anonfun$createCompiler$1.apply(CachingFactory.scala:24) at org.jetbrains.jps.incremental.scala.local.CachingFactory$$anonfun$createCompiler$1.apply(CachingFactory.scala:24) at org.jetbrains.jps.incremental.scala.local.Cache$$anonfun$getOrUpdate$2.apply(Cache.scala:20) at scala.Option.getOrElse(Option.scala:121) at org.jetbrains.jps.incremental.scala.local.Cache.getOrUpdate(Cache.scala:19) at org.jetbrains.jps.incremental.scala.local.CachingFactory.createCompiler(CachingFactory.scala:23) at org.jetbrains.jps.incremental.scala.local.LocalServer.compile(LocalServer.scala:22) at org.jetbrains.jps.incremental.scala.remote.Main$.make(Main.scala:68) at org.jetbrains.jps.incremental.scala.remote.Main$.nailMain(Main.scala:25) at org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.martiansoftware.nailgun.NGSession.run(NGSession.java:319)

IntelliJ IDEA: after modifying Scala code, running again reports an error

Error:scalac: Error: Could not find an output directory for D:\IdeaProjects\scala\src\main\scala\Scala.scala in List((d:\IdeaProjects\scala\target\scala-2.11\resource_managed\test,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\resources,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\target\scala-2.11\resource_managed\main,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\resources,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\target\scala-2.11\src_managed\test,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\scala-2.11,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\scala,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\java,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\target\scala-2.11\src_managed\main,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\scala-2.11,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\scala,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\java,d:\IdeaProjects\scala\target\scala-2.11\classes)) scala.reflect.internal.FatalError: Could not find an output directory for D:\IdeaProjects\scala\src\main\scala\Scala.scala in List((d:\IdeaProjects\scala\target\scala-2.11\resource_managed\test,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\resources,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\target\scala-2.11\resource_managed\main,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\resources,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\target\scala-2.11\src_managed\test,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\scala-2.11,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\scala,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\test\java,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\target\scala-2.11\src_managed\main,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\scala-2.11,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\scala,d:\IdeaProjects\scala\target\scala-2.11\classes), (d:\IdeaProjects\scala\src\main\java,d:\IdeaProjects\scala\target\scala-2.11\classes)) at scala.tools.nsc.settings.MutableSettings$OutputDirs.outputDirFor(MutableSettings.scala:311) at scala.tools.nsc.backend.jvm.BytecodeWriters$class.outputDirectory(BytecodeWriters.scala:26) at scala.tools.nsc.backend.jvm.GenASM.outputDirectory(GenASM.scala:23) at scala.tools.nsc.backend.jvm.BytecodeWriters$class.getFile(BytecodeWriters.scala:41) at scala.tools.nsc.backend.jvm.GenASM.getFile(GenASM.scala:23) at scala.tools.nsc.backend.jvm.GenASM$JBuilder.writeIfNotTooBig(GenASM.scala:531) at scala.tools.nsc.backend.jvm.GenASM$JMirrorBuilder.genMirrorClass(GenASM.scala:2835) at scala.tools.nsc.backend.jvm.GenASM$AsmPhase.emitFor$1(GenASM.scala:193) at scala.tools.nsc.backend.jvm.GenASM$AsmPhase.run(GenASM.scala:203) at scala.tools.nsc.Global$Run.compileUnitsInternal(Global.scala:1500) at scala.tools.nsc.Global$Run.compileUnits(Global.scala:1487) at scala.tools.nsc.Global$Run.compileSources(Global.scala:1482) at scala.tools.nsc.Global$Run.compile(Global.scala:1580) at 
xsbt.CachedCompiler0.run(CompilerInterface.scala:126) at xsbt.CachedCompiler0.run(CompilerInterface.scala:102) at xsbt.CompilerInterface.run(CompilerInterface.scala:27) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at sbt.compiler.AnalyzingCompiler.call(AnalyzingCompiler.scala:102) at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:48) at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:41) at org.jetbrains.jps.incremental.scala.local.IdeaIncrementalCompiler.compile(IdeaIncrementalCompiler.scala:29) at org.jetbrains.jps.incremental.scala.local.LocalServer.compile(LocalServer.scala:26) at org.jetbrains.jps.incremental.scala.remote.Main$.make(Main.scala:62) at org.jetbrains.jps.incremental.scala.remote.Main$.nailMain(Main.scala:20) at org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.martiansoftware.nailgun.NGSession.run(NGSession.java:319) Information:2015/4/24 10:39 - Compilation completed with 1 error and 0 warnings in 600ms

Error when compiling and packaging a Scala project with Gradle:

1. Versions used: jdk1.7.004, Gradle 2.1, Scala 2.11.2

2. Code layout: src/main/scala/HelloWorld.scala

```
package main.scala

object HelloWorld {
  def main(args: Array[String]): Unit = {
    println("hello world")
  }
}
```

3. The build.gradle file:

```
apply plugin: 'scala'

repositories {
    maven { url "http://10.177.60.141:8888/nexus/conte..." }
}

dependencies {
    compile 'org.scala-lang:scala-library:2.11.2'
    compile 'org.scala-lang:scala-compiler:2.11.2'
}
```

4. Running `gradle build` gives:

```
D:\Workfiles\Scala\gradleproject>gradle build -stacktrace
:compileJava UP-TO-DATE
:compileScala FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':compileScala'.
> scala/runtime/Nothing$

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':compileScala'.
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69)
        ...........................................................
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:61)
        ... 44 more
Caused by: java.lang.ClassNotFoundException: scala.runtime.Nothing$
        ... 76 more

BUILD FAILED

Total time: 12.063 secs
```
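A hedged isolation test: scala.runtime.Nothing$ going missing during :compileScala means the scala-library the compiler runs against is absent or truncated, and the only repository configured above is an internal Nexus mirror (URL truncated), which could be serving incomplete artifacts. Temporarily resolving from Maven Central would confirm or rule that out; this is a comparison sketch, not a diagnosis:

```
apply plugin: 'scala'

repositories {
    mavenCentral()   // bypass the internal mirror just for this test
}

dependencies {
    compile 'org.scala-lang:scala-library:2.11.2'
    compile 'org.scala-lang:scala-compiler:2.11.2'
}
```

If the build passes against Maven Central, the mirrored scala jars are the suspects and re-syncing or purging them from the Nexus proxy cache would be the next step.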

Problem when building a Scala project with IDEA + sbt

Using IDEA + sbt, I created a Scala project and IDEA's console immediately reported an error, as shown below: ![screenshot](https://img-ask.csdn.net/upload/201711/25/1511581280_348398.png) The log file IDEA points to contains:

```
Error during sbt execution: java.lang.RuntimeException: Expected one of local, maven-local, maven-central, scala-tools-releases, scala-tools-snapshots, sonatype-oss-releases, sonatype-oss-snapshots, jcenter, got 'local '.
java.lang.RuntimeException: Expected one of local, maven-local, maven-central, scala-tools-releases, scala-tools-snapshots, sonatype-oss-releases, sonatype-oss-snapshots, jcenter, got 'local '.
 at xsbti.Predefined.toValue(Predefined.java:28) at xsbt.boot.Repository$Predefined$.apply(LaunchConfiguration.scala:114) at xsbt.boot.ConfigurationParser$$anonfun$getRepositories$1.apply(ConfigurationParser.scala:197) at xsbt.boot.ConfigurationParser$$anonfun$getRepositories$1.apply(ConfigurationParser.scala:196) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) at scala.collection.immutable.List.foreach(List.scala:318) at org.apache.ivy.core.RelativeUrlResolver.map(RelativeUrlResolver.java:244) at scala.collection.AbstractTraversable.map(Traversable.scala:105) at xsbt.boot.ConfigurationParser.getRepositories(ConfigurationParser.scala:196) at xsbt.boot.ConfigurationParser$$anonfun$4.apply(ConfigurationParser.scala:71) at xsbt.boot.ConfigurationParser$$anonfun$processSection$1.apply(ConfigurationParser.scala:109) at xsbt.boot.ConfigurationParser.process(ConfigurationParser.scala:110) at xsbt.boot.ConfigurationParser.processSection(ConfigurationParser.scala:109) at xsbt.boot.ConfigurationParser.xsbt$boot$ConfigurationParser$$apply(ConfigurationParser.scala:49) at xsbt.boot.ConfigurationParser$$anonfun$apply$3.apply(ConfigurationParser.scala:47) at xsbt.boot.Using$.withResource(Using.scala:10) at xsbt.boot.Using$.apply(Using.scala:9) at xsbt.boot.Configuration$$anonfun$parse$1.apply(Configuration.scala:21) at xsbt.boot.Using$.withResource(Using.scala:10) at xsbt.boot.Using$.apply(Using.scala:9) at xsbt.boot.Configuration$.parse$fcb646c(Configuration.scala:21) at xsbt.boot.Launch$.apply(Launch.scala:18) at xsbt.boot.Boot$.runImpl(Boot.scala:41) at xsbt.boot.Boot$.main(Boot.scala:17) at xsbt.boot.Boot.main(Boot.scala)
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
```

What is causing this? The error appears every time I create an sbt project.
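The launcher is choking on a predefined repository name with a trailing space: it expected `local` but read `'local '`. A likely fix (assuming the repository list was customized at some point, e.g. for a mirror) is to delete the trailing whitespace wherever the list is defined, typically `~/.sbt/repositories` or the launcher configuration IDEA passes via VM parameters. A clean minimal sketch of that file:

```
[repositories]
  local
  maven-central
```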

Hadoop: running a jar with yarn throws java.lang.ClassNotFoundException (the missing class is not the main class)

1. I wrote a data-analysis program and built it into a jar with IDEA; the dependency jars were all packed into it. ![screenshot](https://img-ask.csdn.net/upload/201911/03/1572779664_439750.png) I have already set job.setJarByClass(CountDurationRunner.class);
2. Started the Hadoop, ZooKeeper and HBase clusters.
3. Ran the jar on YARN: $ /opt/module/hadoop-2.7.2/bin/yarn jar ct_analysis.jar runner.CountDurationRunner
Error screenshot: ![screenshot](https://img-ask.csdn.net/upload/201911/03/1572779908_781957.png)
The CountDurationRunner class:
```
package runner;

import kv.key.ComDimension;  // this is the first class that is not found
import kv.value.CountDurationValue;
import mapper.CountDurationMapper;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import outputformat.MysqlOutputFormat;
import reducer.CountDurationReducer;

import java.io.IOException;

public class CountDurationRunner implements Tool {
    private Configuration conf = null;

    @Override
    public void setConf(Configuration conf) {
        this.conf = HBaseConfiguration.create(conf);
    }

    @Override
    public Configuration getConf() {
        return this.conf;
    }

    @Override
    public int run(String[] args) throws Exception {
        // get the conf
        Configuration conf = this.getConf();
        // instantiate the job
        Job job = Job.getInstance(conf);
        job.setJarByClass(CountDurationRunner.class);
        // wire up the Mapper / InputFormat
        initHbaseInputConfig(job);
        // wire up the Reducer / OutputFormat
        initHbaseOutputConfig(job);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    private void initHbaseOutputConfig(Job job) {
        Connection connection = null;
        Admin admin = null;
        String tableName = "ns_ct:calllog";
        try {
            connection = ConnectionFactory.createConnection(job.getConfiguration());
            admin = connection.getAdmin();
            if (!admin.tableExists(TableName.valueOf(tableName)))
                throw new RuntimeException("没有找到目标表");  // "target table not found"
            Scan scan = new Scan();
            // initialize the Mapper
            TableMapReduceUtil.initTableMapperJob(
                    tableName,
                    scan,
                    CountDurationMapper.class,
                    ComDimension.class,
                    Text.class,
                    job,
                    true);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (admin != null) admin.close();
                if (connection != null) connection.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private void initHbaseInputConfig(Job job) {
        job.setReducerClass(CountDurationReducer.class);
        job.setOutputKeyClass(ComDimension.class);
        job.setOutputValueClass(CountDurationValue.class);
        job.setOutputFormatClass(MysqlOutputFormat.class);
    }

    public static void main(String[] args) {
        try {
            int status = ToolRunner.run(new CountDurationRunner(), args);
            System.exit(status);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
This has been bothering me for a long time. Some say the classpath is wrong, but I don't know what to change. Please help!
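One hedged possibility, given the IDEA artifact screenshot: if the dependency jars were packed *inside* ct_analysis.jar as nested jars, the JVM cannot load classes from a jar-within-a-jar, so kv.key.ComDimension stays invisible at runtime even though it is "in" the archive. Two common workarounds: have IDEA extract dependencies into the artifact instead of nesting them, or ship them separately. Since the driver goes through ToolRunner, the generic -libjars option is available; paths and jar names below are hypothetical:

```
# make dependencies visible to the client JVM (hypothetical path)
export HADOOP_CLASSPATH=/opt/module/ct_libs/*

# ship them to the cluster alongside the job; ToolRunner strips -libjars before run()
/opt/module/hadoop-2.7.2/bin/yarn jar ct_analysis.jar runner.CountDurationRunner \
    -libjars /opt/module/ct_libs/ct-commons.jar,/opt/module/ct_libs/hbase-client.jar
```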

spark ClassNotFoundException

A Maven project written in Scala; the AnalysisSimulation module depends on the commons module. After packaging, running it throws ClassNotFoundException: analysis.DangerLevelTop10, yet unpacking the jar shows that the analysis.DangerLevelTop10 class is present. Please help; this has been bothering me for days. ![screenshot](https://img-ask.csdn.net/upload/201808/02/1533200745_57212.png) ![screenshot](https://img-ask.csdn.net/upload/201808/02/1533200756_565809.png)
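Without the screenshots the exact command is unknown, but two things are worth ruling out (everything below is a hypothetical sketch, including the jar path): that the jar actually submitted is the same one you unpacked, and that the name given to --class matches the jar entry byte for byte. Submitting the commons module's jar by mistake, for instance, would produce exactly this symptom:

```
# sanity-check the jar actually being submitted
jar tf AnalysisSimulation/target/AnalysisSimulation-1.0.jar | grep DangerLevelTop10

# submit that same jar with the fully qualified main class
spark-submit --master yarn --class analysis.DangerLevelTop10 \
    AnalysisSimulation/target/AnalysisSimulation-1.0.jar
```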

Spark: running a jar built from Scala

![screenshot](https://img-ask.csdn.net/upload/202002/05/1580887545_330719.png) ![screenshot](https://img-ask.csdn.net/upload/202002/05/1580887568_992291.png) ![screenshot](https://img-ask.csdn.net/upload/202002/05/1580887616_449280.png)

Has anyone run into something similar? What I have tried: when the Worker process on the Master node is not running, the job fails; when that Worker is running it sometimes does not fail, but complains about insufficient memory. I don't think memory is the real issue, since I still get some output, just not the expected result.

Command used: bin/spark-submit --master spark://node1:7077 --class cn.itcast.WordCount_Online --executor-memory 1g --total-executor-cores 1 ~/data/spark_chapter02-1.0-SNAPSHOT.jar /spark/test/words.txt /spark/test/out

The jar was built in IDEA from Scala code; it does word-frequency counting. The Scala code:
```
package cn.itcast

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object WordCount_Online {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("WordCount_Online")
    val sparkContext = new SparkContext(sparkConf)
    val data: RDD[String] = sparkContext.textFile(args(0))
    val words: RDD[String] = data.flatMap(_.split(" "))
    val wordAndOne: RDD[(String, Int)] = words.map(x => (x, 1))
    val result: RDD[(String, Int)] = wordAndOne.reduceByKey(_ + _)
    result.saveAsTextFile(args(1))
    sparkContext.stop()
  }
}
```
I have tried many things; hoping someone who knows this can discuss it with me.
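Two hedged checks that fit these symptoms: in standalone mode a job cannot get executors while no Worker is alive to grant the requested memory/cores (which matches the difference you see when the Master node's Worker is up), and saveAsTextFile fails outright if /spark/test/out is left over from a previous run. A sketch of clearing the output path first, using the Hadoop FileSystem API:

```
// delete a pre-existing output directory before saveAsTextFile (sketch)
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(sparkContext.hadoopConfiguration)
fs.delete(new Path(args(1)), true)   // true = recursive; returns false if the path is absent
result.saveAsTextFile(args(1))
```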

sbt fails on startup; looking for a fix

I installed sbt, and the following error appears when running it. Does anyone know how to fix this?

[root@Spark ~]# sbt sbt-version
java.lang.NoClassDefFoundError: scala/reflect/internal/Trees at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:763) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at java.net.URLClassLoader.access$100(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:368) at java.net.URLClassLoader$1.run(URLClassLoader.java:362) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:361) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:763) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at java.net.URLClassLoader.access$100(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:368) at java.net.URLClassLoader$1.run(URLClassLoader.java:362) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:361) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at xsbt.boot.Pre$.xsbt$boot$Pre$$classMissing$1(Pre.scala:66) at xsbt.boot.Pre$$anonfun$getMissing$1.apply(Pre.scala:67) at scala.collection.TraversableLike$$anonfun$filter$1.apply(TraversableLike.scala:264) at scala.collection.immutable.List.foreach(List.scala:318) at org.apache.ivy.core.RelativeUrlResolver.filter(RelativeUrlResolver.java:263) at scala.collection.AbstractTraversable.filter(Traversable.scala:105) at xsbt.boot.Pre$.getMissing$d83f809$3a8a6f87(Pre.scala:67) at xsbt.boot.Launch.checkLoader$2accd70c(Launch.scala:185) at xsbt.boot.Launch.xsbt$boot$Launch$$provider$1(Launch.scala:249) at xsbt.boot.Launch$$anonfun$xsbt$boot$Launch$$getScalaProvider0$2.apply(Launch.scala:252) at xsbt.boot.Launch$$anonfun$xsbt$boot$Launch$$getScalaProvider0$2.apply(Launch.scala:251) at scala.Option.flatMap(Option.scala:170) at xsbt.boot.Launch.xsbt$boot$Launch$$getScalaProvider0(Launch.scala:251) at xsbt.boot.Launch$$anon$3.call(Launch.scala:240) at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:45) at xsbt.boot.Locks$.apply0(Locks.scala:31) at xsbt.boot.Locks$.apply(Locks.scala:28) at xsbt.boot.Launch.locked(Launch.scala:238) at xsbt.boot.Launch.getScalaProvider(Launch.scala:240) at xsbt.boot.Launch$$anonfun$1.apply(Launch.scala:141) at xsbt.boot.Cache.newEntry(Cache.scala:16) at xsbt.boot.Cache.apply(Cache.scala:11) at xsbt.boot.Launch.getScala(Launch.scala:144) at xsbt.boot.Launch.getScala(Launch.scala:143) at xsbt.boot.Launch.xsbt$boot$Launch$$getAppProvider0(Launch.scala:219) at xsbt.boot.Launch$$anon$2.call(Launch.scala:196) at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:93) at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:78) at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:97) at xsbt.boot.Using$.withResource(Using.scala:10) at xsbt.boot.Using$.apply(Using.scala:9) at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:58) at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:48) at 
xsbt.boot.Locks$.apply0(Locks.scala:31) at xsbt.boot.Locks$.apply(Locks.scala:28) at xsbt.boot.Launch.locked(Launch.scala:238) at xsbt.boot.Launch.app(Launch.scala:147) at xsbt.boot.Launch.app(Launch.scala:145) at xsbt.boot.Launch$.run(Launch.scala:102) at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35) at xsbt.boot.Launch$.launch(Launch.scala:117) at xsbt.boot.Launch$.apply(Launch.scala:18) at xsbt.boot.Boot$.runImpl(Boot.scala:41) at xsbt.boot.Boot$.main(Boot.scala:17) at xsbt.boot.Boot.main(Boot.scala) Caused by: java.lang.ClassNotFoundException: scala.reflect.internal.Trees at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 69 more Error during sbt execution: java.lang.NoClassDefFoundError: scala/reflect/internal/Trees
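NoClassDefFoundError for scala/reflect/internal/Trees while the launcher is assembling its own Scala provider (the xsbt.boot.Launch...getScalaProvider0 frames) typically means the scala jars in the launcher's boot cache are incomplete, for example after an interrupted first download. A sketch of the usual remedy, assuming the default cache location:

```
# clear the launcher's cached scala/sbt jars and let them re-download
rm -rf ~/.sbt/boot
sbt sbt-version
```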

Error running Spark's bundled SparkPi example

![screenshot](https://img-ask.csdn.net/upload/201701/18/1484675647_678532.png)

```
"C:\Program Files\Java\jdk1.8.0_111\bin\java" -Didea.launcher.port=7532 "-Didea.launcher.bin.path=C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 2016.3.2\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.8.0_111\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\resources.jar;C:\Program Files\Java\jdk1.8.0_111\jre\lib\rt.jar;C:\Users\yyy\IdeaProjects\ywordcount\out\production\ywordcount;D:\peizhi\scala-2.10.6\lib\scala-actors-migration.jar;D:\peizhi\scala-2.10.6\lib\scala-actors.jar;D:\peizhi\scala-2.10.6\lib\scala-library.jar;D:\peizhi\scala-2.10.6\lib\scala-reflect.jar;D:\peizhi\scala-2.10.6\lib\scala-swing.jar;D:\peizhi\spark-1.6.2-bin-hadoop2.6\lib\spark-assembly-1.6.2-hadoop2.6.0.jar;C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 2016.3.2\lib\idea_rt.jar" com.intellij.rt.execution.application.AppMain SparkPi local
Exception in thread "main" java.lang.ClassNotFoundException: SparkPi
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:123)
```
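The run configuration launches an unqualified main class `SparkPi`, and nothing on that classpath (the project output plus the Spark assembly) provides a top-level SparkPi; in the Spark sources the example lives at org.apache.spark.examples.SparkPi. Two hedged options: make the run configuration's main class match the package of your local copy of the source, or run the stock example from the distribution, e.g. (jar name per the 1.6.2 binary release's lib/ directory):

```
bin/spark-submit --class org.apache.spark.examples.SparkPi --master local[2] \
    lib/spark-examples-1.6.2-hadoop2.6.0.jar 100
```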

SparkSQL GROUP BY statement fails

Begging the experts here. The code is as follows:

```
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import spark.implicits._
val testRDD = spark.sparkContext.textFile("hdfs://ip-172-31-26-254:9000/eth-data/done-eth-trx-5125092-5491171.csv").
  filter(line=>line.split(",")(25)=="0xa74476443119a942de498590fe1f2454d7d4ac0d")
val rdd = testRDD.map(line=>(line.split(",")(25),line.split(",")(15),line.split(",")(18).substring(0,10)))
case class Row(fromadd: String, amount:Int, date:String)
val rowRDD = rdd.map(p => Row(p._1,p._2.toInt,p._3))
val testDF=rowRDD.toDF()
testDF.registerTempTable("test")
```

The contents of `test` look like this:

```
|             fromadd|amount|      date|
+--------------------+------+----------+
|0xa74476443119a94...| 28553|2018-02-20|
|0xa74476443119a94...| 30764|2018-02-20|
|0xa74476443119a94...| 32775|2018-02-20|
|0xa74476443119a94...| 29439|2018-02-20|
|0xa74476443119a94...| 35810|2018-02-20|
|0xa74476443119a94...| 35810|2018-02-20|
|0xa74476443119a94...| 35810|2018-02-20|
|0xa74476443119a94...| 28926|2018-02-20|
|0xa74476443119a94...| 36229|2018-02-20|
|0xa74476443119a94...| 33235|2018-02-20|
|0xa74476443119a94...| 34104|2018-02-20|
|0xa74476443119a94...| 29425|2018-02-20|
|0xa74476443119a94...| 29568|2018-02-20|
|0xa74476443119a94...| 33473|2018-02-20|
|0xa74476443119a94...| 31344|2018-02-20|
|0xa74476443119a94...| 34399|2018-02-20|
|0xa74476443119a94...| 34080|2018-02-20|
|0xa74476443119a94...| 34080|2018-02-20|
|0xa74476443119a94...| 27165|2018-02-20|
|0xa74476443119a94...| 33512|2018-02-20|
+--------------------+------+----------+
```

Running this SQL works fine:

```
val data=sqlContext.sql("select * from test where amount>27000").show()
```

But running:

```
val res=sqlContext.sql("select count(amount) from test where group by date").show()
```

fails with:

```
org.apache.spark.SparkException: Job aborted due to stage failure: Task 55 in stage 5.0 failed 1 times, most recent failure: Lost task 55.0 in stage 5.0 (TID 82, localhost, executor driver): java.lang.ArrayIndexOutOfBoundsException: 25 at $anonfun$1.apply(<console>:27) at $anonfun$1.apply(<console>:27) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:463) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395) at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:234) at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228) at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827) at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at org.apache.spark.scheduler.Task.run(Task.scala:108) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Driver stacktrace: 
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1517) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1505) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1504) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1504) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:814) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1732) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1687) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1676) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2029) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2050) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2069) at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:336) at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38) at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2861) at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2150) at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2150) at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2842) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2841) at org.apache.spark.sql.Dataset.head(Dataset.scala:2150) at org.apache.spark.sql.Dataset.take(Dataset.scala:2363) at org.apache.spark.sql.Dataset.showString(Dataset.scala:241) at org.apache.spark.sql.Dataset.show(Dataset.scala:637) at org.apache.spark.sql.Dataset.show(Dataset.scala:596) at org.apache.spark.sql.Dataset.show(Dataset.scala:605) ... 
50 elided Caused by: java.lang.ArrayIndexOutOfBoundsException: 25 at $anonfun$1.apply(<console>:27) at $anonfun$1.apply(<console>:27) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:463) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395) at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:234) at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228) at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827) at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at org.apache.spark.scheduler.Task.run(Task.scala:108) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748)
```

Thanks a lot!
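Two things stand out in the log itself. The task dies with ArrayIndexOutOfBoundsException: 25 inside the filter lambda, i.e. some CSV rows have fewer than 26 comma-separated fields, so line.split(",")(25) throws; and the failing statement has a stray `where` before `group by`. A sketch assuming the CSV may contain short or malformed rows:

```
// Guard the split before indexing column 25 (and note f(18).substring(0, 10)
// still assumes that field is at least 10 characters long):
val fields = spark.sparkContext
  .textFile("hdfs://ip-172-31-26-254:9000/eth-data/done-eth-trx-5125092-5491171.csv")
  .map(_.split(",", -1))                       // -1 keeps trailing empty columns
  .filter(f => f.length > 25 && f(25) == "0xa74476443119a942de498590fe1f2454d7d4ac0d")
val rdd = fields.map(f => (f(25), f(15), f(18).substring(0, 10)))

// Separately, drop the stray `where`; the grouped query should read:
val res = sqlContext.sql("select date, count(amount) from test group by date").show()
```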

Running a jar on spark-shell fails: main class not found

A Scala IntelliJ project, packaged with sbt; running it on the cluster fails with "main class not found". It uses two Chinese word-segmentation libraries (ansj_seg-2.0.8.jar, nlp-lang-0.3.jar), which have already been added under External Libraries. Packaging succeeds; only running fails. ![screenshot](https://img-ask.csdn.net/upload/201601/26/1453780626_723163.jpg) ![screenshot](https://img-ask.csdn.net/upload/201601/26/1453780648_659305.jpg)

Submit command (via spark-submit):

```
[gaohui@hadoop-1-2 test]$ spark-submit --master yarn --driver-memory 5G --num-executors 20 --executor-cores 16 --executor-memory 10G --conf spark.serializer=org.apache.spark.serializer.KryoSerializer --class NLP_V6.Nlp_test --jars /home/gaohui/test/NLP_v6_test.jar /home/gaohui/test/NLP_v6_test.jar
```

Error screenshot: ![screenshot](https://img-ask.csdn.net/upload/201601/26/1453780776_603750.jpg)
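One thing visible in the command itself: --jars is given the application jar a second time rather than the two segmentation jars. "External Libraries" in IntelliJ only affects compilation; at run time the extra jars must either be merged into an assembly jar or passed explicitly to --jars. A hedged sketch, with the jar locations assumed:

```
spark-submit --master yarn \
  --class NLP_V6.Nlp_test \
  --jars /home/gaohui/test/ansj_seg-2.0.8.jar,/home/gaohui/test/nlp-lang-0.3.jar \
  /home/gaohui/test/NLP_v6_test.jar
```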

How to fix the following error when integrating Kafka with Flink

Running a Kafka-Flink integration project from IDEA fails.

```
public class KafkaFlinkDemo1 {
    public static void main(String[] args) throws Exception {
        // get the execution environment
        StreamExecutionEnvironment sEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        // create a Table Environment
        StreamTableEnvironment sTableEnv = StreamTableEnvironment.create(sEnv);
        sTableEnv.connect(new Kafka()
                .version("0.10")
                .topic("topic1")
                .startFromLatest()
                .property("group.id", "group1")
                .property("bootstrap.servers", "172.168.30.105:21005")
        ).withFormat(
                new Json().failOnMissingField(false).deriveSchema()
        ).withSchema(
                new Schema().field("userId", Types.LONG())
                        .field("day", Types.STRING())
                        .field("begintime", Types.LONG())
                        .field("endtime", Types.LONG())
                        .field("data", ObjectArrayTypeInfo.getInfoFor(
                                Row[].class,
                                Types.ROW(new String[]{"package", "activetime"},
                                        new TypeInformation[]{Types.STRING(), Types.LONG()}
                                )
                        ))
        ).inAppendMode().registerTableSource("userlog");
        Table result = sTableEnv.sqlQuery("select userId from userlog");
        DataStream<Row> rowDataStream = sTableEnv.toAppendStream(result, Row.class);
        rowDataStream.print();
        sEnv.execute("KafkaFlinkDemo1");
    }
}
```

The error is:

```
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/E:/develop/apache-maven-3.6.0-bin/repository/ch/qos/logback/logback-classic/1.1.3/logback-classic-1.1.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/E:/develop/apache-maven-3.6.0-bin/repository/org/slf4j/slf4j-log4j12/1.7.7/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
Exception in thread "main" java.lang.AbstractMethodError: org.apache.flink.table.descriptors.ConnectorDescriptor.toConnectorProperties()Ljava/util/Map;
	at org.apache.flink.table.descriptors.ConnectorDescriptor.toProperties(ConnectorDescriptor.java:58)
	at org.apache.flink.table.descriptors.ConnectTableDescriptor.toProperties(ConnectTableDescriptor.scala:107)
	at org.apache.flink.table.descriptors.StreamTableDescriptor.toProperties(StreamTableDescriptor.scala:95)
	at org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:39)
	at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
	at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSourceAndSink(ConnectTableDescriptor.scala:68)
	at com.huawei.bigdata.KafkaFlinkDemo1.main(KafkaFlinkDemo1.java:41)

Process finished with exit code 1
```
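An AbstractMethodError on ConnectorDescriptor.toConnectorProperties() is a binary-compatibility failure: the Kafka connector descriptor on the classpath was compiled against a different flink-table version than the one actually loaded, i.e. mixed Flink versions among the Maven dependencies. A hedged sketch of pinning every Flink artifact to one version through a shared property (the version and artifact list are illustrative, not taken from the actual pom):

```
<properties>
    <flink.version>1.7.2</flink.version>
</properties>
<dependencies>
    <!-- every org.apache.flink artifact must use the same ${flink.version} -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka-0.10_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
</dependencies>
```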

Error when Spark reads a local file

In a Spark program written in Scala I used sc.textFile("file:///home/hadoop/2.txt"), and surprisingly it throws java.io.FileNotFoundException: File file:/home/hadoop/2.txt does not exist. Testing in spark-shell afterwards gives the same error:
```
scala> val rdd = sc.textFile("file:///home/hadoop/2.txt")
rdd: org.apache.spark.rdd.RDD[String] = file:///home/hadoop/2.txt MapPartitionsRDD[5] at textFile at <console>:24

scala> rdd.take(1)
17/08/29 20:27:28 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 1.0 (TID 4, slaves3, executor 2): java.io.FileNotFoundException: File file:/home/hadoop/2.txt does not exist
```
Yet cat-ing the file produces output:
```
[hadoop@master ~]$ cat /home/hadoop/2.txt
chen 001 {"phone":"187***","sex":"m","card":"123"}
zhou 002 {"phone":"187***","sex":"f","educetion":"1"}
qian 003 {"phone":"187***","sex":"f","book":"2"}
li 004 {"phone":"187***","sex":"f"}
wu 005 {"phone":"187***","sex":"f"}
zhang 006 {"phone":"187***","sex":"f"}
xia 007 {"phone":"187***","sex":"f"}
wang 008 {"phone":"187***","sex":"f"}
lv 009 {"phone":"187***","sex":"m"}
```
When I put the file on HDFS instead, it reads fine. What is going on?
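Note that the task fails on slaves3, not on the master where you ran cat: with file:// URIs each executor opens the path on its own local filesystem, so the file must exist at /home/hadoop/2.txt on every worker node. That is also why the HDFS copy works. A sketch of the two usual remedies (host and namenode URI are hypothetical):

```
// Option 1: copy the file to the same path on every worker first (outside Spark):
//   scp /home/hadoop/2.txt hadoop@slaves3:/home/hadoop/   (repeat per node)
val rdd1 = sc.textFile("file:///home/hadoop/2.txt")

// Option 2: keep it on HDFS, which every executor can reach:
val rdd2 = sc.textFile("hdfs://master:9000/user/hadoop/2.txt")
rdd2.take(1)
```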

Spark 2.0 error, please help! Thanks!

The code (excerpted from the web) is as follows:

```
package com.gree.test;

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import com.google.common.base.Optional;

import scala.Tuple2;

public class OnlineWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("wordcount").setMaster("local[2]");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));
        jssc.checkpoint("hdfs://spark001:9000/wordcount_checkpoint");

        JavaReceiverInputDStream<String> lines = jssc.socketTextStream("spark001", 9999);

        JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            private static final long serialVersionUID = 1L;

            @Override
            public Iterator<String> call(String line) throws Exception {
                return Arrays.asList(line.split(" ")).iterator();
            }
        });

        JavaPairDStream<String, Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
            private static final long serialVersionUID = 1L;

            @Override
            public Tuple2<String, Integer> call(String word) throws Exception {
                return new Tuple2<String, Integer>(word, 1);
            }
        });

        JavaPairDStream<String, Integer> wordcounts = pairs.updateStateByKey(
                new Function2<List<Integer>, Optional<Integer>, Optional<Integer>>() {
                    private static final long serialVersionUID = 1L;

                    @Override
                    public Optional<Integer> call(List<Integer> values, Optional<Integer> state) throws Exception {
                        Integer newValue = 0;
                        if (state.isPresent()) {
                            newValue = state.get();
                        }
                        for (Integer value : values) {
                            newValue += value;
                        }
                        return Optional.of(newValue);
                    }
                });

        wordcounts.print();

        jssc.start();
        try {
            jssc.awaitTermination();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        jssc.close();
    }
}
```

The error is at the updateStateByKey call:

```
The method updateStateByKey(Function2<List<Integer>,Optional<S>,Optional<S>>) in the type JavaPairDStream<String,Integer> is not applicable for the arguments (new Function2<List<Integer>,Optional<Integer>,Optional<Integer>>(){})
```

Begging for a solution... thanks.
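In Spark 2.x the Java streaming API switched from Guava's Optional to Spark's own: JavaPairDStream.updateStateByKey expects org.apache.spark.api.java.Optional, while this code imports com.google.common.base.Optional, which is why the compiler reports the overload as "not applicable". A minimal fix sketch:

```
// Replace the Guava import:
//   import com.google.common.base.Optional;
// with Spark's Optional, the type updateStateByKey's signature uses in Spark 2.x:
import org.apache.spark.api.java.Optional;

// The Function2 body then compiles unchanged: org.apache.spark.api.java.Optional
// also provides isPresent(), get() and Optional.of(...).
```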

Spark SQL syntax question from a beginner

When querying MySQL from Spark I hit an error. After some checking it turned out to be the SQL statement; the same SQL runs fine in MySQL but fails when executed through Spark. The statements:
```
# Top 5 merchant ids (oid) by total amount where the pay channel is alipay?
select pay_channel,oid,sum(money) from pay where pay_channel = 'alipay' group by oid order by sum(money) desc limit 5 ;
select pay_channel,oid,sum(money) from pay where pay_channel = 'alipay' group by oid,pay_channel order by sum(money) desc limit 5 ;
```
The working code is below, using the second statement; the first statement triggers the error:
```
import java.util.Properties
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, Row, SparkSession}

object Test23 {
  def main(args: Array[String]): Unit = {
    // SparkSession.builder replaces SQLContext
    val sqlContext = SparkSession.builder.
      master("local[*]")
      .appName("TestMysql")
      .getOrCreate()
    val url = "jdbc:mysql://hadoop01:3306/spark?characterEncoding=UTF-8"
    val table = "pay"
    val properties = new Properties()
    properties.setProperty("user", "root")
    properties.setProperty("password", "123456")
    // pass in the MySQL URL, table name and properties (db user name and password)
    val df = sqlContext.read.jdbc(url, table, properties)
    df.createOrReplaceTempView("pay")
    val frame: DataFrame = sqlContext.sql("select pay_channel,oid,sum(money) from pay where pay_channel = 'alipay' group by oid,pay_channel order by sum(money) desc limit 5 ")
    val rdd = frame.rdd
    rdd.foreach(println(_))
  }
}
```
Both statements return results in MySQL; the only difference is that the second one additionally groups by pay_channel. But I already pinned pay_channel to a single value in the WHERE clause, so why does it still have to appear in the GROUP BY? Without it, the error is:
```
Exception in thread "main" org.apache.spark.sql.AnalysisException: expression 'pay.`pay_channel`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;; GlobalLimit 5 +- LocalLimit 5 +- Project [pay_channel#3, oid#0, sum(money)#23] +- Sort [sum(money)#23 DESC NULLS LAST], true +- Aggregate [oid#0], [pay_channel#3, oid#0, sum(money#6) AS sum(money)#23] +- Filter (pay_channel#3 = alipay) +- SubqueryAlias pay +- Relation[oid#0,pos_name#1,order_num#2,pay_channel#3,pay_method#4,posId#5,money#6,pay_time#7,ord_status#8,rec_state#9] JDBCRelation(pay) [numPartitions=1] at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:39) at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:91) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.org$apache$spark$sql$catalyst$analysis$CheckAnalysis$class$$anonfun$$checkValidAggregateExpression$1(CheckAnalysis.scala:247) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$9.apply(CheckAnalysis.scala:280) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$9.apply(CheckAnalysis.scala:280) at scala.collection.immutable.List.foreach(List.scala:381) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:280) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:78) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126) at scala.collection.immutable.List.foreach(List.scala:381) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:126) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126) at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126) at scala.collection.immutable.List.foreach(List.scala:381) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:126) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126) at scala.collection.immutable.List.foreach(List.scala:381) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:126) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126) at scala.collection.immutable.List.foreach(List.scala:381) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:126) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:78) at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:91) at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:52) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:66) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623) at com.czxy.exercise05.Test23$.main(Test23.scala:31) at com.czxy.exercise05.Test23.main(Test23.scala)
```
Is it just a syntax difference? Could someone explain the reason?
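To the "why": standard SQL requires every selected non-aggregate column to appear in GROUP BY (or inside an aggregate), and Spark's analyzer does not use the WHERE predicate to infer that pay_channel is constant, whereas MySQL is historically lax about this rule. The error message itself offers the alternative to grouping: wrap the column in first(). A sketch:

```
// Equivalent to the first statement, without adding pay_channel to GROUP BY:
val frame: DataFrame = sqlContext.sql(
  """select first(pay_channel) as pay_channel, oid, sum(money) as total
    |from pay
    |where pay_channel = 'alipay'
    |group by oid
    |order by total desc
    |limit 5""".stripMargin)
```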

sbt IDEA build project error: invalid sha1: expected=<html> [warn] <head> ?

Error while importing sbt project:

[info] Loading settings for project global-plugins from idea.sbt ...
[info] Loading global plugins from D:\SBT\sbt\.sbt\plugins
[info] Updating ProjectRef(uri("file:/D:/SBT/sbt/.sbt/plugins/"), "global-plugins")...
[info] Done updating.
[info] Loading project definition from D:\IDEA\sbttest\project
[info] Updating ProjectRef(uri("file:/D:/IDEA/sbttest/project/"), "sbttest-build")...
[info] Done updating.
[info] Loading settings for project root from build.sbt ...
[info] Set current project to sbttest (in build file:/D:/IDEA/sbttest/)
[info] sbt server started at local:sbt-server-92123181769dae5e81c2
sbt:sbttest>
[info] Defining Global / sbtStructureOptions, Global / sbtStructureOutputFile, shellPrompt
[info] The new values will be used by no settings or tasks.
[info] Reapplying settings...
[info] Set current project to sbttest (in build file:/D:/IDEA/sbttest/)
[info] Applying State transformations org.jetbrains.sbt.CreateTasks from D:/IDEA/IAD/.IntelliJIdea/config/plugins/Scala/launcher/sbt-structure-1.0.jar
[info] Reapplying settings...
[info] Set current project to sbttest (in build file:/D:/IDEA/sbttest/)
[info] Updating ...
[warn] problem while downloading module descriptor: http://mirrors.ibiblio.org/maven2/org/apache/flink/flink-streaming-scala/1.7.0/flink-streaming-scala-1.7.0.pom: invalid sha1: expected=<html>
[warn] <head> computed=47abf2c9ebd0dedcc526c0800426f709d8179231 (160357ms)
[warn] module not found: org.apache.flink#flink-streaming-scala;1.7.0
[warn] ==== local: tried
[warn] D:\SBT\sbt\.ivy2\local\org.apache.flink\flink-streaming-scala\1.7.0\ivys\ivy.xml
[warn] ==== comp-maven: tried
[warn] http://mvnrepository.com/artifact/org/apache/flink/flink-streaming-scala/1.7.0/flink-streaming-scala-1.7.0.pom
[warn] ==== sbt-releases-repo: tried
[warn] http://repo.typesafe.com/typesafe/ivy-releases/org.apache.flink/flink-streaming-scala/1.7.0/ivys/ivy.xml
[warn] ==== sbt-plugins-repo: tried
[warn] http://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/org.apache.flink/flink-streaming-scala/1.7.0/ivys/ivy.xml
[warn] ==== ali1: tried
[warn] http://maven.aliyun.com/nexus/content/groups/public/org/apache/flink/flink-streaming-scala/1.7.0/flink-streaming-scala-1.7.0.pom
[warn] ==== ali2: tried
[warn] https://oss.sonatype.org/content/repositories/snapshots/org/apache/flink/flink-streaming-scala/1.7.0/flink-streaming-scala-1.7.0.pom
[warn] ==== store_0: tried
[warn] https://repository.apache.org/content/repositories/snapshots/org/apache/flink/flink-streaming-scala/1.7.0/flink-streaming-scala-1.7.0.pom
[warn] ==== store_1: tried
[warn] http://mirrors.ibiblio.org/maven2/org/apache/flink/flink-streaming-scala/1.7.0/flink-streaming-scala-1.7.0.pom
[warn] ==== store_2: tried
[warn] http://repo2.maven.org/maven2/org/apache/flink/flink-streaming-scala/1.7.0/flink-streaming-scala-1.7.0.pom
[warn] ==== maven-central: tried
[warn] http://repo1.maven.org/maven2/org/apache/flink/flink-streaming-scala/1.7.0/flink-streaming-scala-1.7.0.pom
[warn] ==== Apach : tried
[warn] https://repository.apache.org/content/repositories/snapshots/org/apache/flink/flink-streaming-scala/1.7.0/flink-streaming-scala-1.7.0.pom
[warn] ==== velvia maven: tried
[warn] http://dl.bintray.com/velvia/maven/org/apache/flink/flink-streaming-scala/1.7.0/flink-streaming-scala-1.7.0.pom
[warn] ==== ali: tried
[warn] https://maven.aliyun.com/repository/public/org/apache/flink/flink-streaming-scala/1.7.0/flink-streaming-scala-1.7.0.pom
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: org.apache.flink#flink-streaming-scala;1.7.0: not found
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn] Note: Unresolved dependencies path:
[warn] org.apache.flink:flink-streaming-scala:1.7.0 (D:\IDEA\sbttest\build.sbt#L35)
[warn] +- sbttest:sbttest_2.11:0.1
[error] sbt.librarymanagement.ResolveException: unresolved dependency: org.apache.flink#flink-streaming-scala;1.7.0: not found
[error] at sbt.internal.librarymanagement.IvyActions$.resolveAndRetrieve(IvyActions.scala:332)
[error] at sbt.internal.librarymanagement.IvyActions$.$anonfun$updateEither$1(IvyActions.scala:208)
[error] at sbt.internal.librarymanagement.IvySbt$Module.$anonfun$withModule$1(Ivy.scala:239)
[error] at sbt.internal.librarymanagement.IvySbt.$anonfun$withIvy$1(Ivy.scala:204)
[error] at sbt.internal.librarymanagement.IvySbt.sbt$internal$librarymanagement$IvySbt$$action$1(Ivy.scala:70)
[error] at sbt.internal.librarymanagement.IvySbt$$anon$3.call(Ivy.scala:77)
[error] at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:95)
[error] at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:80)
[error] at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:99)
[error] at xsbt.boot.Using$.withResource(Using.scala:10)
[error] at xsbt.boot.Using$.apply(Using.scala:9)
[error] at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:60)
[error] at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:50)
[error] at xsbt.boot.Locks$.apply0(Locks.scala:31)
[error] at xsbt.boot.Locks$.apply(Locks.scala:28)
[error] at sbt.internal.librarymanagement.IvySbt.withDefaultLogger(Ivy.scala:77)
[error] at sbt.internal.librarymanagement.IvySbt.withIvy(Ivy.scala:199)
[error] at sbt.internal.librarymanagement.IvySbt.withIvy(Ivy.scala:196)
[error] at sbt.internal.librarymanagement.IvySbt$Module.withModule(Ivy.scala:238)
[error] at sbt.internal.librarymanagement.IvyActions$.updateEither(IvyActions.scala:193)
[error] at sbt.librarymanagement.ivy.IvyDependencyResolution.update(IvyDependencyResolution.scala:20)
[error] at sbt.librarymanagement.DependencyResolution.update(DependencyResolution.scala:56)
[error] at sbt.internal.LibraryManagement$.resolve$1(LibraryManagement.scala:45)
[error] at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$12(LibraryManagement.scala:93)
[error] at sbt.util.Tracked$.$anonfun$lastOutput$1(Tracked.scala:68)
[error] at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$19(LibraryManagement.scala:106)
[error] at scala.util.control.Exception$Catch.apply(Exception.scala:224)
[error] at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$11(LibraryManagement.scala:106)
[error] at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$11$adapted(LibraryManagement.scala:89)
[error] at sbt.util.Tracked$.$anonfun$inputChanged$1(Tracked.scala:149)
[error] at sbt.internal.LibraryManagement$.cachedUpdate(LibraryManagement.scala:120)
[error] at sbt.Classpaths$.$anonfun$updateTask$5(Defaults.scala:2556)
[error] at scala.Function1.$anonfun$compose$1(Function1.scala:44)
[error] at sbt.internal.util.$tilde$greater.$anonfun$$u2219$1(TypeFunctions.scala:40)
[error] at sbt.std.Transform$$anon$4.work(System.scala:67)
[error] at sbt.Execute.$anonfun$submit$2(Execute.scala:269)
[error] at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:16)
[error] at sbt.Execute.work(Execute.scala:278)
[error] at sbt.Execute.$anonfun$submit$1(Execute.scala:269)
[error] at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:178)
[error] at sbt.CompletionService$$anon$2.call(CompletionService.scala:37)
[error] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[error] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] at java.lang.Thread.run(Thread.java:748)
[error] sbt.librarymanagement.ResolveException: unresolved dependency: org.apache.flink#flink-streaming-scala;1.7.0: not found
[error] at sbt.internal.librarymanagement.IvyActions$.resolveAndRetrieve(IvyActions.scala:332)
[error] at sbt.internal.librarymanagement.IvyActions$.$anonfun$updateEither$1(IvyActions.scala:208)
[error] at sbt.internal.librarymanagement.IvySbt$Module.$anonfun$withModule$1(Ivy.scala:239)
[error] at sbt.internal.librarymanagement.IvySbt.$anonfun$withIvy$1(Ivy.scala:204)
[error] at sbt.internal.librarymanagement.IvySbt.sbt$internal$librarymanagement$IvySbt$$action$1(Ivy.scala:70)
[error] at sbt.internal.librarymanagement.IvySbt$$anon$3.call(Ivy.scala:77)
[error] at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:95)
[error] at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:80)
[error] at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:99)
[error] at xsbt.boot.Using$.withResource(Using.scala:10)
[error] at xsbt.boot.Using$.apply(Using.scala:9)
[error] at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:60)
[error] at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:50)
[error] at xsbt.boot.Locks$.apply0(Locks.scala:31)
[error] at xsbt.boot.Locks$.apply(Locks.scala:28)
[error] at sbt.internal.librarymanagement.IvySbt.withDefaultLogger(Ivy.scala:77)
[error] at sbt.internal.librarymanagement.IvySbt.withIvy(Ivy.scala:199)
[error] at sbt.internal.librarymanagement.IvySbt.withIvy(Ivy.scala:196)
[error] at sbt.internal.librarymanagement.IvySbt$Module.withModule(Ivy.scala:238)
[error] at sbt.internal.librarymanagement.IvyActions$.updateEither(IvyActions.scala:193)
[error] at sbt.librarymanagement.ivy.IvyDependencyResolution.update(IvyDependencyResolution.scala:20)
[error] at sbt.librarymanagement.DependencyResolution.update(DependencyResolution.scala:56)
[error] at sbt.internal.LibraryManagement$.resolve$1(LibraryManagement.scala:45)
[error] at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$12(LibraryManagement.scala:93)
[error] at sbt.util.Tracked$.$anonfun$lastOutput$1(Tracked.scala:68)
[error] at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$19(LibraryManagement.scala:106)
[error] at scala.util.control.Exception$Catch.apply(Exception.scala:224)
[error] at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$11(LibraryManagement.scala:106)
[error] at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$11$adapted(LibraryManagement.scala:89)
[error] at sbt.util.Tracked$.$anonfun$inputChanged$1(Tracked.scala:149)
[error] at sbt.internal.LibraryManagement$.cachedUpdate(LibraryManagement.scala:120)
[error] at sbt.Classpaths$.$anonfun$updateTask$5(Defaults.scala:2556)
[error] at scala.Function1.$anonfun$compose$1(Function1.scala:44)
[error] at sbt.internal.util.$tilde$greater.$anonfun$$u2219$1(TypeFunctions.scala:40)
[error] at sbt.std.Transform$$anon$4.work(System.scala:67)
[error] at sbt.Execute.$anonfun$submit$2(Execute.scala:269)
[error] at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:16)
[error] at sbt.Execute.work(Execute.scala:278)
[error] at sbt.Execute.$anonfun$submit$1(Execute.scala:269)
[error] at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:178)
[error] at sbt.CompletionService$$anon$2.call(CompletionService.scala:37)
[error] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[error] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] at java.lang.Thread.run(Thread.java:748)
[error] (update) sbt.librarymanagement.ResolveException: unresolved dependency: org.apache.flink#flink-streaming-scala;1.7.0: not found
[error] (ssExtractDependencies) sbt.librarymanagement.ResolveException: unresolved dependency: org.apache.flink#flink-streaming-scala;1.7.0: not found
[error] Total time: 178 s, completed 2019-6-19 8:35:29
[info] shutting down server
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
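
The cause is visible in the repository URLs above: every resolver was asked for org/apache/flink/flink-streaming-scala/1.7.0 with no Scala-version suffix, yet the project itself builds as sbttest_2.11. Flink publishes its Scala-facing modules only with a suffix (flink-streaming-scala_2.11, flink-streaming-scala_2.12), so the unsuffixed coordinate exists in no repository. A minimal build.sbt sketch of the likely fix follows; the Flink and Scala versions are taken from the log above, and the resolver line is an assumption:

```
// build.sbt (minimal sketch; versions inferred from the log above)
scalaVersion := "2.11.12"

// %% appends the Scala binary suffix, so this resolves
// org.apache.flink:flink-streaming-scala_2.11:1.7.0, which does exist
libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" % "1.7.0"

// Assumption: prefer an HTTPS resolver; the log shows an HTTP mirror answering
// with an HTML page instead of a POM ("invalid sha1: expected=<html>")
resolvers += "aliyun-public" at "https://maven.aliyun.com/repository/public"
```

Equivalently, the suffix can be written out with a single %: "org.apache.flink" % "flink-streaming-scala_2.11" % "1.7.0".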

Using hsdis to generate assembly output fails with "Error: Could not find or load main class Test"

Error: Could not find or load main class Test
[Loaded java.lang.Shutdown from /usr/local/src/jdk/jdk1.8/jre/lib/rt.jar]
[Loaded java.lang.Shutdown$Lock from /usr/local/src/jdk/jdk1.8/jre/lib/rt.jar]
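
The [Loaded java.lang.Shutdown ...] lines (from class-loading tracing) show that only the VM's shutdown classes were ever loaded, so the JVM exited before hsdis or -XX:+PrintAssembly was even consulted. "Could not find or load main class Test" is an ordinary classpath failure: java was most likely launched from a directory that does not contain Test.class (hsdis experiments often happen inside the JDK directories). A minimal stand-in sketch to verify the setup; the question's actual Test class is not shown, so everything below is hypothetical, written here in Scala:

```
// Test.scala: hypothetical stand-in for the question's Test class
object Test {
  def main(args: Array[String]): Unit = {
    var sum = 0L
    var i = 0
    // a hot loop, so the JIT emits compiled code worth disassembling
    while (i < 10000000) { sum += i; i += 1 }
    println(sum)
  }
}
// Assumed invocation, run from the directory containing the compiled classes:
//   scalac Test.scala
//   scala -J-XX:+UnlockDiagnosticVMOptions -J-XX:+PrintAssembly Test
// hsdis itself only needs to sit on the JVM's library search path, e.g.
// /usr/local/src/jdk/jdk1.8/jre/lib/amd64/hsdis-amd64.so for the JDK in the log.
```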

Error when connecting to Spark remotely from Java

I set up a Docker cluster with sequenceiq/spark. It runs fine on the machine itself, but connecting to it remotely from Java fails. The code is:

```
SparkConf sparkConf = new SparkConf().setAppName("JavaTopGroup").setMaster("spark://10.73.21.221:7077");
JavaSparkContext ctx = new JavaSparkContext(sparkConf);
```

The error is:

```
17/12/07 19:17:47 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
17/12/07 19:17:47 WARN StandaloneSchedulerBackend: Application ID is not initialized yet.
17/12/07 19:17:47 INFO SparkUI: Stopped Spark web UI at http://10.73.7.25:4040
17/12/07 19:17:47 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 8163.
17/12/07 19:17:47 INFO StandaloneSchedulerBackend: Shutting down all executors
17/12/07 19:17:47 INFO NettyBlockTransferService: Server created on 10.73.7.25:8163
17/12/07 19:17:47 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/12/07 19:17:47 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
17/12/07 19:17:47 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.73.7.25, 8163, None)
17/12/07 19:17:47 INFO BlockManagerMasterEndpoint: Registering block manager 10.73.7.25:8163 with 900.6 MB RAM, BlockManagerId(driver, 10.73.7.25, 8163, None)
17/12/07 19:17:47 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.73.7.25, 8163, None)
17/12/07 19:17:47 WARN StandaloneAppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
17/12/07 19:17:47 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.73.7.25, 8163, None)
17/12/07 19:17:47 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/12/07 19:17:47 INFO MemoryStore: MemoryStore cleared
17/12/07 19:17:47 INFO BlockManager: BlockManager stopped
17/12/07 19:17:47 INFO BlockManagerMaster: BlockManagerMaster stopped
17/12/07 19:17:47 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/12/07 19:17:47 ERROR TransportResponseHandler: Still have 3 requests outstanding when connection from /10.73.21.21:7077 is closed
17/12/07 19:17:47 INFO SparkContext: Successfully stopped SparkContext
17/12/07 19:17:47 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at org.com.will.sparkl.App.main(App.java:24)
17/12/07 19:17:48 INFO SparkContext: SparkContext already stopped.
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at org.com.will.sparkl.App.main(App.java:24)
17/12/07 19:17:48 INFO ShutdownHookManager: Shutdown hook called
17/12/07 19:17:48 INFO ShutdownHookManager: Deleting directory C:\Users\will\AppData\Local\Temp\spark-c60f05a8-5476-469b-8c43-d8476796a1dd
```
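
The decisive line is "All masters are unresponsive! Giving up.": the driver (10.73.7.25) reached the master, but the master dropped the connection before the application could register; the later "Can only call getServletHandlers on a running MetricsSystem" exception is only the secondary failure of the half-constructed SparkContext. With a sequenceiq/spark container the two usual culprits are a client/cluster version mismatch (a standalone master silently closes connections from an incompatible Spark or Scala version) and Docker networking (both the master's spark:// URL and the driver's callback ports must be reachable in both directions). A sketch of the connectivity settings, in Scala for consistency with the rest of this page (SparkConf has the same .set calls in Java); the master URL and driver IP come from the question, while the fixed port numbers are assumptions to adapt:

```
import org.apache.spark.SparkConf
import org.apache.spark.api.java.JavaSparkContext

// Sketch only: verify each value against your own cluster.
val conf = new SparkConf()
  .setAppName("JavaTopGroup")
  .setMaster("spark://10.73.21.221:7077") // must match the spark:// URL on the master's web UI exactly
  .set("spark.driver.host", "10.73.7.25") // an address the containers can reach back on
  .set("spark.driver.port", "51000")       // pin the callback ports so they can be
  .set("spark.blockManager.port", "51001") // opened in the firewall / published by Docker

val ctx = new JavaSparkContext(conf)
```

Also make sure the client's spark-core dependency matches the cluster's Spark version (including the Scala suffix) exactly; a mismatch can produce exactly this "masters are unresponsive" pattern even when the network is fine.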
