Scala: list is still empty after appending to it in a loop

(screenshot of the code was attached here)

1 answer

If anyone knows what's going on, please let me know. It's urgent. Thanks!
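The screenshot isn't reproduced here, but the most common cause of this symptom is appending to an immutable List inside a loop: `:+` returns a new list and leaves the original untouched. A minimal sketch of the pitfall and two fixes, assuming that is what the code in the screenshot does:

```
// Pitfall: `:+` on an immutable List returns a NEW list; the original
// is never modified, so it is still empty after the loop.
val list = List[Int]()
for (i <- 1 to 5) list :+ i // the new list is discarded each iteration
println(list)               // List()

// Fix 1: use a mutable buffer and append in place.
import scala.collection.mutable.ListBuffer
val buf = ListBuffer[Int]()
for (i <- 1 to 5) buf += i
println(buf.toList)         // List(1, 2, 3, 4, 5)

// Fix 2 (more idiomatic): build the list without a loop at all.
val built = (1 to 5).toList
println(built)              // List(1, 2, 3, 4, 5)
```

A `var list = List[Int]()` with `list = list :+ i` in the loop body would also work, since the reassignment keeps each appended element.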

Other related questions
Compile error when writing a WordCount while learning Flink
Error:scalac: Error: scala.collection.mutable.Set$.apply(Lscala/collection/Seq;)Lscala/collection/GenTraversable; java.lang.NoSuchMethodError: scala.collection.mutable.Set$.apply(Lscala/collection/Seq;)Lscala/collection/GenTraversable; at org.apache.flink.api.scala.codegen.TypeAnalyzer.$init$(TypeAnalyzer.scala:37) at org.apache.flink.api.scala.codegen.MacroContextHolder$$anon$1.<init>(MacroContextHolder.scala:30) at org.apache.flink.api.scala.codegen.MacroContextHolder$.newMacroHelper(MacroContextHolder.scala:30) at org.apache.flink.api.scala.typeutils.TypeUtils$.createTypeInfo(TypeUtils.scala:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at scala.reflect.macros.runtime.JavaReflectionRuntimes$JavaReflectionResolvers.$anonfun$resolveJavaReflectionRuntime$6(JavaReflectionRuntimes.scala:51) at scala.tools.nsc.typechecker.Macros.macroExpandWithRuntime(Macros.scala:758) at scala.tools.nsc.typechecker.Macros.macroExpandWithRuntime$(Macros.scala:734) at scala.tools.nsc.Global$$anon$5.macroExpandWithRuntime(Global.scala:483) at scala.tools.nsc.typechecker.Macros$MacroExpander.$anonfun$expand$1(Macros.scala:564) at scala.tools.nsc.Global.withInfoLevel(Global.scala:226) at scala.tools.nsc.typechecker.Macros$MacroExpander.expand(Macros.scala:557) at scala.tools.nsc.typechecker.Macros$MacroExpander.apply(Macros.scala:544) at scala.tools.nsc.typechecker.Macros.standardMacroExpand(Macros.scala:719) at scala.tools.nsc.typechecker.Macros.standardMacroExpand$(Macros.scala:717) at scala.tools.nsc.Global$$anon$5.standardMacroExpand(Global.scala:483) at scala.tools.nsc.typechecker.AnalyzerPlugins$$anon$10.default(AnalyzerPlugins.scala:456) at scala.tools.nsc.typechecker.AnalyzerPlugins$$anon$10.default(AnalyzerPlugins.scala:453) at scala.tools.nsc.typechecker.AnalyzerPlugins.invoke(AnalyzerPlugins.scala:410) at scala.tools.nsc.typechecker.AnalyzerPlugins.pluginsMacroExpand(AnalyzerPlugins.scala:453) at scala.tools.nsc.typechecker.AnalyzerPlugins.pluginsMacroExpand$(AnalyzerPlugins.scala:453) at scala.tools.nsc.Global$$anon$5.pluginsMacroExpand(Global.scala:483) at scala.tools.nsc.typechecker.Macros.macroExpand(Macros.scala:708) at scala.tools.nsc.typechecker.Macros.macroExpand$(Macros.scala:701) at scala.tools.nsc.Global$$anon$5.macroExpand(Global.scala:483) at scala.tools.nsc.typechecker.Macros$$anon$4.transform(Macros.scala:898) at scala.tools.nsc.typechecker.Macros.macroExpandAll(Macros.scala:906) at scala.tools.nsc.typechecker.Macros.macroExpandAll$(Macros.scala:887) at scala.tools.nsc.Global$$anon$5.macroExpandAll(Global.scala:483) at scala.tools.nsc.typechecker.Macros.macroExpandWithRuntime(Macros.scala:743) at scala.tools.nsc.typechecker.Macros.macroExpandWithRuntime$(Macros.scala:734) at scala.tools.nsc.Global$$anon$5.macroExpandWithRuntime(Global.scala:483) at scala.tools.nsc.typechecker.Macros$MacroExpander.$anonfun$expand$1(Macros.scala:564) at scala.tools.nsc.Global.withInfoLevel(Global.scala:226) at scala.tools.nsc.typechecker.Macros$MacroExpander.expand(Macros.scala:557) at scala.tools.nsc.typechecker.Macros$MacroExpander.apply(Macros.scala:544) at scala.tools.nsc.typechecker.Macros.standardMacroExpand(Macros.scala:719) at scala.tools.nsc.typechecker.Macros.standardMacroExpand$(Macros.scala:717) at 
scala.tools.nsc.Global$$anon$5.standardMacroExpand(Global.scala:483) at scala.tools.nsc.typechecker.AnalyzerPlugins$$anon$10.default(AnalyzerPlugins.scala:456) at scala.tools.nsc.typechecker.AnalyzerPlugins$$anon$10.default(AnalyzerPlugins.scala:453) at scala.tools.nsc.typechecker.AnalyzerPlugins.invoke(AnalyzerPlugins.scala:410) at scala.tools.nsc.typechecker.AnalyzerPlugins.pluginsMacroExpand(AnalyzerPlugins.scala:453) at scala.tools.nsc.typechecker.AnalyzerPlugins.pluginsMacroExpand$(AnalyzerPlugins.scala:453) at scala.tools.nsc.Global$$anon$5.pluginsMacroExpand(Global.scala:483) at scala.tools.nsc.typechecker.Macros.macroExpand(Macros.scala:708) at scala.tools.nsc.typechecker.Macros.macroExpand$(Macros.scala:701) at scala.tools.nsc.Global$$anon$5.macroExpand(Global.scala:483) at scala.tools.nsc.typechecker.Macros$DefMacroExpander.onDelayed(Macros.scala:691) at scala.tools.nsc.typechecker.Macros$MacroExpander.$anonfun$expand$1(Macros.scala:578) at scala.tools.nsc.Global.withInfoLevel(Global.scala:226) at scala.tools.nsc.typechecker.Macros$MacroExpander.expand(Macros.scala:557) at scala.tools.nsc.typechecker.Macros$MacroExpander.apply(Macros.scala:544) at scala.tools.nsc.typechecker.Macros.standardMacroExpand(Macros.scala:719) at scala.tools.nsc.typechecker.Macros.standardMacroExpand$(Macros.scala:717) at scala.tools.nsc.Global$$anon$5.standardMacroExpand(Global.scala:483) at scala.tools.nsc.typechecker.AnalyzerPlugins$$anon$10.default(AnalyzerPlugins.scala:456) at scala.tools.nsc.typechecker.AnalyzerPlugins$$anon$10.default(AnalyzerPlugins.scala:453) at scala.tools.nsc.typechecker.AnalyzerPlugins.invoke(AnalyzerPlugins.scala:410) at scala.tools.nsc.typechecker.AnalyzerPlugins.pluginsMacroExpand(AnalyzerPlugins.scala:453) at scala.tools.nsc.typechecker.AnalyzerPlugins.pluginsMacroExpand$(AnalyzerPlugins.scala:453) at scala.tools.nsc.Global$$anon$5.pluginsMacroExpand(Global.scala:483) at scala.tools.nsc.typechecker.Macros.macroExpand(Macros.scala:708) at scala.tools.nsc.typechecker.Macros.macroExpand$(Macros.scala:701) at scala.tools.nsc.Global$$anon$5.macroExpand(Global.scala:483) at scala.tools.nsc.typechecker.Typers$Typer.vanillaAdapt$1(Typers.scala:1212) at scala.tools.nsc.typechecker.Typers$Typer.adapt(Typers.scala:1277) at scala.tools.nsc.typechecker.Typers$Typer.adapt(Typers.scala:1250) at scala.tools.nsc.typechecker.Typers$Typer.adapt(Typers.scala:1270) at scala.tools.nsc.typechecker.Implicits$ImplicitSearch.typedImplicit1(Implicits.scala:866) at scala.tools.nsc.typechecker.Implicits$ImplicitSearch.typedImplicit0(Implicits.scala:803) at scala.tools.nsc.typechecker.Implicits$ImplicitSearch.scala$tools$nsc$typechecker$Implicits$ImplicitSearch$$typedImplicit(Implicits.scala:622) at scala.tools.nsc.typechecker.Implicits$ImplicitSearch$ImplicitComputation.rankImplicits(Implicits.scala:1213) at scala.tools.nsc.typechecker.Implicits$ImplicitSearch$ImplicitComputation.findBest(Implicits.scala:1248) at scala.tools.nsc.typechecker.Implicits$ImplicitSearch.searchImplicit(Implicits.scala:1305) at scala.tools.nsc.typechecker.Implicits$ImplicitSearch.bestImplicit(Implicits.scala:1704) at scala.tools.nsc.typechecker.Implicits.inferImplicit1(Implicits.scala:112) at scala.tools.nsc.typechecker.Implicits.inferImplicit(Implicits.scala:91) at scala.tools.nsc.typechecker.Implicits.inferImplicit$(Implicits.scala:88) at scala.tools.nsc.Global$$anon$5.inferImplicit(Global.scala:483) at scala.tools.nsc.typechecker.Implicits.inferImplicitFor(Implicits.scala:46) at 
scala.tools.nsc.typechecker.Implicits.inferImplicitFor$(Implicits.scala:45) at scala.tools.nsc.Global$$anon$5.inferImplicitFor(Global.scala:483) at scala.tools.nsc.typechecker.Typers$Typer.applyImplicitArgs(Typers.scala:270) at scala.tools.nsc.typechecker.Typers$Typer.$anonfun$adapt$1(Typers.scala:879) at scala.tools.nsc.typechecker.Typers$Typer.adaptToImplicitMethod$1(Typers.scala:490) at scala.tools.nsc.typechecker.Typers$Typer.adapt(Typers.scala:1273) at scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5900) at scala.tools.nsc.typechecker.Typers$Typer.computeType(Typers.scala:5961) at scala.tools.nsc.typechecker.Namers$Namer.assignTypeToTree(Namers.scala:1120) at scala.tools.nsc.typechecker.Namers$Namer.valDefSig(Namers.scala:1716) at scala.tools.nsc.typechecker.Namers$Namer.memberSig(Namers.scala:1891) at scala.tools.nsc.typechecker.Namers$Namer.typeSig(Namers.scala:1855) at scala.tools.nsc.typechecker.Namers$Namer$MonoTypeCompleter.completeImpl(Namers.scala:867) at scala.tools.nsc.typechecker.Namers$LockingTypeCompleter.complete(Namers.scala:2040) at scala.tools.nsc.typechecker.Namers$LockingTypeCompleter.complete$(Namers.scala:2038) at scala.tools.nsc.typechecker.Namers$TypeCompleterBase.complete(Namers.scala:2033) at scala.reflect.internal.Symbols$Symbol.completeInfo(Symbols.scala:1544) at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1517) at scala.reflect.internal.Symbols$Symbol.initialize(Symbols.scala:1691) at scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5485) at scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5886) at scala.tools.nsc.typechecker.Typers$Typer.typedStat$1(Typers.scala:5950) at scala.tools.nsc.typechecker.Typers$Typer.$anonfun$typedStats$10(Typers.scala:3394) at scala.tools.nsc.typechecker.Typers$Typer.typedStats(Typers.scala:3394) at scala.tools.nsc.typechecker.Typers$Typer.typedBlock(Typers.scala:2536) at scala.tools.nsc.typechecker.Typers$Typer.typedOutsidePatternMode$1(Typers.scala:5815) at scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5850) at scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5886) at scala.tools.nsc.typechecker.Typers$Typer.typedDefDef(Typers.scala:6141) at scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5793) at scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5886) at scala.tools.nsc.typechecker.Typers$Typer.typedStat$1(Typers.scala:5950) at scala.tools.nsc.typechecker.Typers$Typer.$anonfun$typedStats$10(Typers.scala:3394) at scala.tools.nsc.typechecker.Typers$Typer.typedStats(Typers.scala:3394) at scala.tools.nsc.typechecker.Typers$Typer.typedTemplate(Typers.scala:2049) at scala.tools.nsc.typechecker.Typers$Typer.typedModuleDef(Typers.scala:1924) at scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5795) at scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5886) at scala.tools.nsc.typechecker.Typers$Typer.typedStat$1(Typers.scala:5950) at scala.tools.nsc.typechecker.Typers$Typer.$anonfun$typedStats$10(Typers.scala:3394) at scala.tools.nsc.typechecker.Typers$Typer.typedStats(Typers.scala:3394) at scala.tools.nsc.typechecker.Typers$Typer.typedPackageDef$1(Typers.scala:5494) at scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5797) at scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5886) at scala.tools.nsc.typechecker.Analyzer$typerFactory$TyperPhase.apply(Analyzer.scala:115) at scala.tools.nsc.Global$GlobalPhase.applyPhase(Global.scala:452) at 
scala.tools.nsc.typechecker.Analyzer$typerFactory$TyperPhase.run(Analyzer.scala:104) at scala.tools.nsc.Global$Run.compileUnitsInternal(Global.scala:1506) at scala.tools.nsc.Global$Run.compileUnits(Global.scala:1490) at scala.tools.nsc.Global$Run.compileSources(Global.scala:1482) at scala.tools.nsc.Global$Run.compile(Global.scala:1614) at xsbt.CachedCompiler0.run(CompilerInterface.scala:130) at xsbt.CachedCompiler0.run(CompilerInterface.scala:105) at xsbt.CompilerInterface.run(CompilerInterface.scala:31) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at sbt.internal.inc.AnalyzingCompiler.call(AnalyzingCompiler.scala:237) at sbt.internal.inc.AnalyzingCompiler.compile(AnalyzingCompiler.scala:111) at sbt.internal.inc.AnalyzingCompiler.compile(AnalyzingCompiler.scala:90) at org.jetbrains.jps.incremental.scala.local.IdeaIncrementalCompiler.compile(IdeaIncrementalCompiler.scala:40) at org.jetbrains.jps.incremental.scala.local.LocalServer.compile(LocalServer.scala:35) at org.jetbrains.jps.incremental.scala.remote.Main$.make(Main.scala:88) at org.jetbrains.jps.incremental.scala.remote.Main$.nailMain(Main.scala:36) at org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.martiansoftware.nailgun.NGSession.run(NGSession.java:319)
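This NoSuchMethodError on scala.collection.mutable.Set$.apply usually means the Scala version the project compiles with does not match the Scala binary version the Flink artifacts were built for (GenTraversable, in the failing signature, no longer exists in Scala 2.13). A minimal build.sbt sketch of keeping the two aligned; the version numbers are illustrative placeholders, not taken from the question:

```
// Keep scalaVersion and every Flink artifact on the same Scala binary
// version; %% appends the matching _2.12 suffix automatically.
// (Versions below are placeholders.)
ThisBuild / scalaVersion := "2.12.12"

val flinkVersion = "1.10.0"

libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-scala"           % flinkVersion,
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion
)
```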
Running a Scala jar on Spark
![图片说明](https://img-ask.csdn.net/upload/202002/05/1580887545_330719.png) ![图片说明](https://img-ask.csdn.net/upload/202002/05/1580887568_992291.png) ![图片说明](https://img-ask.csdn.net/upload/202002/05/1580887616_449280.png) Has anyone run into a similar problem? What I have tried: when the Worker process on the Master node is not running, the job fails; when the Worker on the Master node is started it sometimes doesn't fail, but complains about insufficient memory. I don't think memory is the real problem, because I do get some output, just not the expected result. Command executed: bin/spark-submit --master spark://node1:7077 --class cn.itcast.WordCount_Online --executor-memory 1g --total-executor-cores 1 ~/data/spark_chapter02-1.0-SNAPSHOT.jar /spark/test/words.txt /spark/test/out. The jar was built in IDEA from Scala code; it does a word-frequency count. Scala code:
```
package cn.itcast

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object WordCount_Online {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("WordCount_Online")
    val sparkContext = new SparkContext(sparkConf)
    val data: RDD[String] = sparkContext.textFile(args(0))
    val words: RDD[String] = data.flatMap(_.split(" "))
    val wordAndOne: RDD[(String, Int)] = words.map(x => (x, 1))
    val result: RDD[(String, Int)] = wordAndOne.reduceByKey(_ + _)
    result.saveAsTextFile(args(1))
    sparkContext.stop()
  }
}
```
I have tried many things already; I'd appreciate a discussion with anyone who understands this.
Spark fails reading Avro-serialized Parquet: Illegal Parquet type: FIXED_LEN_BYTE_ARRAY
The Avro schema is defined as in this image: ![图片说明](https://img-ask.csdn.net/upload/202002/14/1581611055_583617.png). Spark then fails to read the generated Parquet with: Illegal Parquet type: FIXED_LEN_BYTE_ARRAY. How can this Parquet be read (it doesn't have to be with Spark)? Full error: org.apache.spark.sql.AnalysisException: Illegal Parquet type: FIXED_LEN_BYTE_ARRAY; at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.illegalType$1(ParquetSchemaConverter.scala:107) at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convertPrimitiveField(ParquetSchemaConverter.scala:175) at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convertField(ParquetSchemaConverter.scala:89) at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.$anonfun$convert$1(ParquetSchemaConverter.scala:71) at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at scala.collection.Iterator.foreach(Iterator.scala:941) at scala.collection.Iterator.foreach$(Iterator.scala:941) at scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at scala.collection.IterableLike.foreach(IterableLike.scala:74) at scala.collection.IterableLike.foreach$(IterableLike.scala:73) at scala.collection.AbstractIterable.foreach(Iterable.scala:56) at scala.collection.TraversableLike.map(TraversableLike.scala:237) at scala.collection.TraversableLike.map$(TraversableLike.scala:230) at scala.collection.AbstractTraversable.map(Traversable.scala:108) at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convert(ParquetSchemaConverter.scala:65) at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convert(ParquetSchemaConverter.scala:62) at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.$anonfun$readSchemaFromFooter$2(ParquetFileFormat.scala:664) at scala.Option.getOrElse(Option.scala:138) at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.readSchemaFromFooter(ParquetFileFormat.scala:664) at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.$anonfun$mergeSchemasInParallel$2(ParquetFileFormat.scala:621) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:801) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:801) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
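Spark's Parquet schema converter in some versions cannot map FIXED_LEN_BYTE_ARRAY columns (which Avro fixed/decimal fields produce) to a Catalyst type. One possible workaround, sketched under the assumption that the org.apache.parquet:parquet-avro and Hadoop client dependencies are available, is to bypass Spark's converter and read the file back with parquet-avro itself; the file path is a placeholder:

```
// Hedged sketch: read the Avro-written Parquet with parquet-avro directly,
// which understands Avro fixed fields. The path below is a placeholder.
import org.apache.avro.generic.GenericRecord
import org.apache.hadoop.fs.Path
import org.apache.parquet.avro.AvroParquetReader

val reader = AvroParquetReader.builder[GenericRecord](new Path("/path/to/file.parquet")).build()
try {
  var record = reader.read()  // returns null once the file is exhausted
  while (record != null) {
    println(record)           // each row comes back as an Avro GenericRecord
    record = reader.read()
  }
} finally {
  reader.close()
}
```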
Could an expert take a look at this Scala error?
Exception in thread "main" java.lang.VerifyError: class scala.collection.mutable.WrappedArray overrides final method toBuffer.()Lscala/collection/mutable/Buffer; at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:763) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at java.net.URLClassLoader.access$100(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:368) at java.net.URLClassLoader$1.run(URLClassLoader.java:362) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:361) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at org.apache.spark.SparkConf.loadFromSystemProperties(SparkConf.scala:73) at org.apache.spark.SparkConf.<init>(SparkConf.scala:68) at org.apache.spark.SparkConf.<init>(SparkConf.scala:55) at SessionStat$.main(SessionStat.scala:21) at SessionStat.main(SessionStat.scala)
```
object SessionStat {

  def main(args: Array[String]): Unit = {
    // Get the filter conditions
    val jsonStr = ConfigurationManager.config.getString(Constants.TASK_PARAMS)
    // Get the JSONObject corresponding to the filter conditions
    val taskParam = JSONObject.fromObject(jsonStr)
    // Create a globally unique primary key
    val taskUUID = UUID.randomUUID().toString
    // Create the SparkConf
    val sparkConf = new SparkConf().setAppName("session").setMaster("local[*]")
    // Create the SparkSession (contains the SparkContext)
    val sparkSession = SparkSession.builder().config(sparkConf).enableHiveSupport().getOrCreate()
    // Get the raw action-table data
    // actionRDD: RDD[UserVisit]
    val actionRDD = getOriActionRDD(sparkSession, taskParam)
    actionRDD.foreach(println(_))
  }

  def getOriActionRDD(sparkSession: SparkSession, taskParam: JSONObject) = {
    // Start date of the query window
    val startDate = ParamUtils.getParam(taskParam, Constants.PARAM_START_DATE)
    // End date of the query window
    val endDate = ParamUtils.getParam(taskParam, Constants.PARAM_END_DATE)
    // Query the data
    val sql = "select * from user_visit_action where date >='" + startDate + "' and date <='" + endDate + "'"
    import sparkSession.implicits._
    sparkSession.sql(sql).as[UserVisitAction].rdd
  }
}
```
Can Scala be integrated into a Java project?
I need to build an analytics system that analyzes the company's business data. Is it feasible to keep the SSM framework's controller, service, and DAO layers, have Scala call the Java interfaces, and integrate Scala with Spark SQL to implement the front-end/back-end interaction?
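In principle yes: Scala compiles to ordinary JVM bytecode, so a Java service layer can call Scala classes directly (and vice versa) inside one Maven/Gradle module. A minimal sketch with hypothetical names, returning a java.util.Map so the Java side never sees Scala collection types:

```
// Hypothetical example: a Scala class that a Java @Service can
// instantiate and call like any other Java class.
package com.example.analysis

import scala.collection.JavaConverters._

class WordFrequency {
  def count(text: String): java.util.Map[String, Integer] =
    text.split("\\s+")
      .groupBy(identity)
      .map { case (w, ws) => w -> Integer.valueOf(ws.length) }
      .asJava
}
```

The Java side would then call `new WordFrequency().count(...)` exactly as it would any Java dependency, provided the build compiles the Scala sources before (or jointly with) the Java ones.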
Error running Hello World in Scala
Error:scalac: Error: org.jetbrains.jps.incremental.scala.remote.ServerException Error compiling sbt component 'compiler-interface-2.10.0-52.0' at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1$$anonfun$apply$2.apply(AnalyzingCompiler.scala:145) at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1$$anonfun$apply$2.apply(AnalyzingCompiler.scala:142) at sbt.IO$.withTemporaryDirectory(IO.scala:291) at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1.apply(AnalyzingCompiler.scala:142) at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1.apply(AnalyzingCompiler.scala:139) at sbt.IO$.withTemporaryDirectory(IO.scala:291) at sbt.compiler.AnalyzingCompiler$.compileSources(AnalyzingCompiler.scala:139) at sbt.compiler.IC$.compileInterfaceJar(IncrementalCompiler.scala:52) at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl$.getOrCompileInterfaceJar(CompilerFactoryImpl.scala:96) at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl$$anonfun$getScalac$1.apply(CompilerFactoryImpl.scala:50) at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl$$anonfun$getScalac$1.apply(CompilerFactoryImpl.scala:49) at scala.Option.map(Option.scala:146) at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl.getScalac(CompilerFactoryImpl.scala:49) at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl.createCompiler(CompilerFactoryImpl.scala:22) at org.jetbrains.jps.incremental.scala.local.CachingFactory$$anonfun$createCompiler$1.apply(CachingFactory.scala:24) at org.jetbrains.jps.incremental.scala.local.CachingFactory$$anonfun$createCompiler$1.apply(CachingFactory.scala:24) at org.jetbrains.jps.incremental.scala.local.Cache$$anonfun$getOrUpdate$2.apply(Cache.scala:20) at scala.Option.getOrElse(Option.scala:121) at org.jetbrains.jps.incremental.scala.local.Cache.getOrUpdate(Cache.scala:19) at org.jetbrains.jps.incremental.scala.local.CachingFactory.createCompiler(CachingFactory.scala:23) at org.jetbrains.jps.incremental.scala.local.LocalServer.compile(LocalServer.scala:22) at org.jetbrains.jps.incremental.scala.remote.Main$.make(Main.scala:68) at org.jetbrains.jps.incremental.scala.remote.Main$.nailMain(Main.scala:25) at org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.martiansoftware.nailgun.NGSession.run(NGSession.java:319)
No Scala option when creating a Scala project in IntelliJ
Why is it that when I create a new project and select Scala on the left side of the New Project dialog, there is no plain Scala option on the right, only SBT, Activator, and IDEA? ![图片说明](https://img-ask.csdn.net/upload/201709/25/1506321594_567550.jpg)
(Help wanted) After installing Scala 2.12.3 on Windows, typing scala at the cmd prompt reports the following error:
Exception in thread "main" java.lang.NullPointerException at java.util.Arrays.sort(Arrays.java:1438) at scala.tools.nsc.classpath.JFileDirectoryLookup.listChildren(DirectoryClassPath.scala:113) at scala.tools.nsc.classpath.JFileDirectoryLookup.listChildren$(DirectoryClassPath.scala:97) at scala.tools.nsc.classpath.DirectoryClassPath.listChildren(DirectoryClassPath.scala:202) at scala.tools.nsc.classpath.DirectoryClassPath.listChildren(DirectoryClassPath.scala:202) at scala.tools.nsc.classpath.DirectoryLookup.list(DirectoryClassPath.scala:73) at scala.tools.nsc.classpath.DirectoryLookup.list$(DirectoryClassPath.scala:69) at scala.tools.nsc.classpath.DirectoryClassPath.list(DirectoryClassPath.scala:202) at scala.tools.nsc.classpath.AggregateClassPath.$anonfun$list$1(AggregateClassPath.scala:76) at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234) at scala.collection.Iterator.foreach(Iterator.scala:929) at scala.collection.Iterator.foreach$(Iterator.scala:929) at scala.collection.AbstractIterator.foreach(Iterator.scala:1417) at scala.collection.IterableLike.foreach(IterableLike.scala:71) at scala.collection.IterableLike.foreach$(IterableLike.scala:70) at scala.collection.AbstractIterable.foreach(Iterable.scala:54) at scala.collection.TraversableLike.map(TraversableLike.scala:234) at scala.collection.TraversableLike.map$(TraversableLike.scala:227) at scala.collection.AbstractTraversable.map(Traversable.scala:104) at scala.tools.nsc.classpath.AggregateClassPath.list(AggregateClassPath.scala:74) at scala.tools.nsc.symtab.SymbolLoaders$PackageLoader.doComplete(SymbolLoaders.scala:269) at scala.tools.nsc.symtab.SymbolLoaders$SymbolLoader.complete(SymbolLoaders.scala:218) at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1531) at scala.reflect.internal.Mirrors$RootsBase.init(Mirrors.scala:225) at scala.tools.nsc.Global.rootMirror$lzycompute(Global.scala:65) at scala.tools.nsc.Global.rootMirror(Global.scala:63) at scala.tools.nsc.Global.rootMirror(Global.scala:36) at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass$lzycompute(Definitions.scala:267) at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass(Definitions.scala:267) at scala.reflect.internal.Definitions$DefinitionsClass.init(Definitions.scala:1448) at scala.tools.nsc.Global$Run.<init>(Global.scala:1154) at scala.tools.nsc.interpreter.IMain._initialize(IMain.scala:125) at scala.tools.nsc.interpreter.IMain.initializeSynchronous(IMain.scala:147) at scala.tools.nsc.interpreter.ILoop.$anonfun$process$12(ILoop.scala:1050) at scala.tools.nsc.interpreter.ILoop.startup$1(ILoop.scala:1017) at scala.tools.nsc.interpreter.ILoop.$anonfun$process$1(ILoop.scala:1067) at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:949) at scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:82) at scala.tools.nsc.MainGenericRunner.run$1(MainGenericRunner.scala:85) at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:96) at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:101) at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)
How do I fix the following error when integrating Kafka with Flink?
The project integrating Kafka with Flink fails when run from IDEA.
```
public class KafkaFlinkDemo1 {
    public static void main(String[] args) throws Exception {
        // Get the execution environment
        StreamExecutionEnvironment sEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        // Create a Table Environment
        StreamTableEnvironment sTableEnv = StreamTableEnvironment.create(sEnv);
        sTableEnv.connect(new Kafka()
                .version("0.10")
                .topic("topic1")
                .startFromLatest()
                .property("group.id", "group1")
                .property("bootstrap.servers", "172.168.30.105:21005")
        ).withFormat(
                new Json().failOnMissingField(false).deriveSchema()
        ).withSchema(
                new Schema().field("userId", Types.LONG())
                        .field("day", Types.STRING())
                        .field("begintime", Types.LONG())
                        .field("endtime", Types.LONG())
                        .field("data", ObjectArrayTypeInfo.getInfoFor(
                                Row[].class,
                                Types.ROW(new String[]{"package", "activetime"},
                                        new TypeInformation[]{Types.STRING(), Types.LONG()})))
        ).inAppendMode().registerTableSource("userlog");
        Table result = sTableEnv.sqlQuery("select userId from userlog");
        DataStream<Row> rowDataStream = sTableEnv.toAppendStream(result, Row.class);
        rowDataStream.print();
        sEnv.execute("KafkaFlinkDemo1");
    }
}
```
The error output is: SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/E:/develop/apache-maven-3.6.0-bin/repository/ch/qos/logback/logback-classic/1.1.3/logback-classic-1.1.3.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/E:/develop/apache-maven-3.6.0-bin/repository/org/slf4j/slf4j-log4j12/1.7.7/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder] Exception in thread "main" java.lang.AbstractMethodError: org.apache.flink.table.descriptors.ConnectorDescriptor.toConnectorProperties()Ljava/util/Map; at org.apache.flink.table.descriptors.ConnectorDescriptor.toProperties(ConnectorDescriptor.java:58) at org.apache.flink.table.descriptors.ConnectTableDescriptor.toProperties(ConnectTableDescriptor.scala:107) at org.apache.flink.table.descriptors.StreamTableDescriptor.toProperties(StreamTableDescriptor.scala:95) at org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:39) at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46) at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSourceAndSink(ConnectTableDescriptor.scala:68) at com.huawei.bigdata.KafkaFlinkDemo1.main(KafkaFlinkDemo1.java:41) Process finished with exit code 1
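An AbstractMethodError inside the flink-table descriptor classes usually indicates mixed Flink versions on the classpath (for example, a flink-table jar from one release combined with a connector from another). A hedged pom.xml sketch of pinning every Flink artifact to one version; the artifact names and version are illustrative for a Flink 1.8-era setup, not taken from the question:

```
<!-- Keep all Flink modules on one version via a shared property. -->
<properties>
    <flink.version>1.8.0</flink.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-planner_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka-0.10_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
</dependencies>
```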
Hadoop word count fails: Job job_1581768459583_0001 failed
Three nodes: hadoop01, hadoop02, and hadoop03. hadoop01 is the master; all three nodes act as slaves. The cluster is already set up: jps on each of the three nodes looks normal and the web UI displays correctly. But running the wordcount from the bundled hadoop-mapreduce-examples-2.7.4.jar fails as below. Could someone knowledgeable take a look? I can't make sense of it: ```[root@hadoop01 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.4.jar wordcount /wordcount/input /wordcount/output 20/02/15 20:14:25 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.233.132:8032 20/02/15 20:14:27 INFO input.FileInputFormat: Total input paths to process : 1 20/02/15 20:14:27 INFO mapreduce.JobSubmitter: number of splits:1 20/02/15 20:14:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1581768459583_0001 20/02/15 20:14:28 INFO impl.YarnClientImpl: Submitted application application_1581768459583_0001 20/02/15 20:14:28 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1581768459583_0001/ 20/02/15 20:14:28 INFO mapreduce.Job: Running job: job_1581768459583_0001 20/02/15 20:15:38 INFO mapreduce.Job: Job job_1581768459583_0001 running in uber mode : false 20/02/15 20:15:38 INFO mapreduce.Job: map 0% reduce 0% 20/02/15 20:15:38 INFO mapreduce.Job: Job job_1581768459583_0001 failed with state FAILED due to: Application application_1581768459583_0001 failed 2 times due to Error launching appattempt_1581768459583_0001_000002. Got exception: java.io.IOException: Failed on local exception: java.io.IOException: java.io.IOException: Connection reset by peer; Host Details : local host is: "hadoop01.com/79.124.78.101"; destination host is: "79.124.78.101":43276; at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) at org.apache.hadoop.ipc.Client.call(Client.java:1480) at org.apache.hadoop.ipc.Client.call(Client.java:1413) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy83.startContainers(Unknown Source) at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy84.startContainers(Unknown Source) at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119) at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:250) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.io.IOException: java.io.IOException: Connection reset by peer at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:688) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746) at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:651) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738) at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376) at 
org.apache.hadoop.ipc.Client.getConnection(Client.java:1529) at org.apache.hadoop.ipc.Client.call(Client.java:1452) ... 16 more Caused by: java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:197) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.FilterInputStream.read(FilterInputStream.java:133) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read(BufferedInputStream.java:265) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:367) at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:561) at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:376) at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:730) at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:726) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:726) ... 19 more . Failing the application. 20/02/15 20:15:38 INFO mapreduce.Job: Counters: 0 ```
Pulling data from MySQL for real-time reporting?
The company's business keeps growing. We started on MySQL, and by now there are roughly 20+ systems' worth of databases. Direct queries (and join operations) against the MySQL instances are no longer allowed; we can only pull binlog files from the replicas, and at the same time we want real-time processing. The current stack is MySQL + canal + kafka + flink + a sink database (TiDB/ClickHouse/MySQL/ES). The big problem is that DDL statements on the upstream MySQL (adding or dropping columns, etc.) break canal, or leave the downstream databases out of sync with the schema.
Java exception when running Scala on Linux
Trying to run Scala on Fedora with the environment variables fully set, but the following error appears: Exception in thread "main" java.lang.NoClassDefFoundError: javax/script/Compilable at scala.tools.nsc.interpreter.ILoop.createInterpreter(ILoop.scala:126) at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:908) at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:906) at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:906) at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97) at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:906) at scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:74) at scala.tools.nsc.MainGenericRunner.run$1(MainGenericRunner.scala:87) at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:98) at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:103) at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)
Why does Scala have so many versions, and what do the following three mean?
scala-2.11.0-M5 scala-2.11.0-RC5 scala-2.11.0-Final
"Disk full while accessing" dialog appears when saving after running a program in a VM
A "Disk full while accessing" dialog pops up when I save after running a program in the virtual machine. As in the title: it's an English-language VM (vm), with only a C: drive, and the drive is not actually full. How can I fix this?
A problem when building a Scala project with IDEA + sbt
Using IDEA + sbt, I create a Scala project and the IDEA console immediately reports an error, as shown: ![图片说明](https://img-ask.csdn.net/upload/201711/25/1511581280_348398.png) The log file IDEA points to contains: ``` Error during sbt execution: java.lang.RuntimeException: Expected one of local, maven-local, maven-central, scala-tools-releases, scala-tools-snapshots, sonatype-oss-releases, sonatype-oss-snapshots, jcenter, got 'local '. java.lang.RuntimeException: Expected one of local, maven-local, maven-central, scala-tools-releases, scala-tools-snapshots, sonatype-oss-releases, sonatype-oss-snapshots, jcenter, got 'local '. at xsbti.Predefined.toValue(Predefined.java:28) at xsbt.boot.Repository$Predefined$.apply(LaunchConfiguration.scala:114) at xsbt.boot.ConfigurationParser$$anonfun$getRepositories$1.apply(ConfigurationParser.scala:197) at xsbt.boot.ConfigurationParser$$anonfun$getRepositories$1.apply(ConfigurationParser.scala:196) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) at scala.collection.immutable.List.foreach(List.scala:318) at org.apache.ivy.core.RelativeUrlResolver.map(RelativeUrlResolver.java:244) at scala.collection.AbstractTraversable.map(Traversable.scala:105) at xsbt.boot.ConfigurationParser.getRepositories(ConfigurationParser.scala:196) at xsbt.boot.ConfigurationParser$$anonfun$4.apply(ConfigurationParser.scala:71) at xsbt.boot.ConfigurationParser$$anonfun$processSection$1.apply(ConfigurationParser.scala:109) at xsbt.boot.ConfigurationParser.process(ConfigurationParser.scala:110) at xsbt.boot.ConfigurationParser.processSection(ConfigurationParser.scala:109) at xsbt.boot.ConfigurationParser.xsbt$boot$ConfigurationParser$$apply(ConfigurationParser.scala:49) at xsbt.boot.ConfigurationParser$$anonfun$apply$3.apply(ConfigurationParser.scala:47) at xsbt.boot.Using$.withResource(Using.scala:10) at xsbt.boot.Using$.apply(Using.scala:9) at xsbt.boot.Configuration$$anonfun$parse$1.apply(Configuration.scala:21) at xsbt.boot.Using$.withResource(Using.scala:10) at xsbt.boot.Using$.apply(Using.scala:9) at xsbt.boot.Configuration$.parse$fcb646c(Configuration.scala:21) at xsbt.boot.Launch$.apply(Launch.scala:18) at xsbt.boot.Boot$.runImpl(Boot.scala:41) at xsbt.boot.Boot$.main(Boot.scala:17) at xsbt.boot.Boot.main(Boot.scala) Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0 ``` What is this problem? It happens every time I create an sbt project.
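The key part of the message is got 'local ': the repository name was parsed with a trailing space, which usually means stray whitespace in the sbt launcher's repositories configuration (the file IDEA generates, or ~/.sbt/repositories). A sketch of a clean file, assuming the default repository set:

```
# ~/.sbt/repositories — each entry name must match exactly;
# "local " with a trailing space produces exactly this error.
[repositories]
  local
  maven-central
```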
sbt error: object scala.runtime in compiler mirror not found
When compiling the kafka manager project with sbt, the following error is reported: [info] Loading project definition from E:\workspace\idea\kafka-manager-master\project error: error while loading <root>, error in opening zip file scala.reflect.internal.MissingRequirementError: object scala.runtime in compiler mirror not found. at scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:16) at scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:17) at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:48) at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:40) at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:61) at scala.reflect.internal.Mirrors$RootsBase.getPackage(Mirrors.scala:172) at scala.reflect.internal.Mirrors$RootsBase.getRequiredPackage(Mirrors.scala:175) at scala.reflect.internal.Definitions$DefinitionsClass.RuntimePackage$lzycompute(Definitions.scala:183) at scala.reflect.internal.Definitions$DefinitionsClass.RuntimePackage(Definitions.scala:183) at scala.reflect.internal.Definitions$DefinitionsClass.RuntimePackageClass$lzycompute(Definitions.scala:184) at scala.reflect.internal.Definitions$DefinitionsClass.RuntimePackageClass(Definitions.scala:184) at scala.reflect.internal.Definitions$DefinitionsClass.AnnotationDefaultAttr$lzycompute(Definitions.scala:1024) at scala.reflect.internal.Definitions$DefinitionsClass.AnnotationDefaultAttr(Definitions.scala:1023) at scala.reflect.internal.Definitions$DefinitionsClass.syntheticCoreClasses$lzycompute(Definitions.scala:1153) at scala.reflect.internal.Definitions$DefinitionsClass.syntheticCoreClasses(Definitions.scala:1152) at scala.reflect.internal.Definitions$DefinitionsClass.symbolsNotPresentInBytecode$lzycompute(Definitions.scala:1196) at scala.reflect.internal.Definitions$DefinitionsClass.symbolsNotPresentInBytecode(Definitions.scala:1196) at scala.reflect.internal.Definitions$DefinitionsClass.init(Definitions.scala:1261) at scala.tools.nsc.Global$Run.<init>(Global.scala:1290) at sbt.compiler.Eval$$anon$1.<init>(Eval.scala:141) at sbt.compiler.Eval.run$lzycompute$1(Eval.scala:141) at sbt.compiler.Eval.run$1(Eval.scala:141) at sbt.compiler.Eval.unlinkAll$1(Eval.scala:144) at sbt.compiler.Eval.evalCommon(Eval.scala:153) at sbt.compiler.Eval.evalDefinitions(Eval.scala:122) at sbt.EvaluateConfigurations$.evaluateDefinitions(EvaluateConfigurations.scala:271) at sbt.EvaluateConfigurations$.evaluateSbtFile(EvaluateConfigurations.scala:109) at sbt.Load$.sbt$Load$$loadSettingsFile$1(Load.scala:775) at sbt.Load$$anonfun$sbt$Load$$memoLoadSettingsFile$1$1.apply(Load.scala:781) at sbt.Load$$anonfun$sbt$Load$$memoLoadSettingsFile$1$1.apply(Load.scala:780) at scala.collection.MapLike$class.getOrElse(MapLike.scala:128) at scala.collection.AbstractMap.getOrElse(Map.scala:58) at sbt.Load$.sbt$Load$$memoLoadSettingsFile$1(Load.scala:780) at sbt.Load$$anonfun$loadFiles$1$2.apply(Load.scala:788) at sbt.Load$$anonfun$loadFiles$1$2.apply(Load.scala:788) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at scala.collection.TraversableLike$class.map(TraversableLike.scala:244) at scala.collection.AbstractTraversable.map(Traversable.scala:105) at sbt.Load$.loadFiles$1(Load.scala:788) at 
sbt.Load$.discoverProjects(Load.scala:799) at sbt.Load$.discover$1(Load.scala:585) at sbt.Load$.sbt$Load$$loadTransitive(Load.scala:633) at sbt.Load$$anonfun$loadUnit$1.sbt$Load$$anonfun$$loadProjects$1(Load.scala:482) at sbt.Load$$anonfun$loadUnit$1$$anonfun$40.apply(Load.scala:485) at sbt.Load$$anonfun$loadUnit$1$$anonfun$40.apply(Load.scala:485) at sbt.Load$.timed(Load.scala:1025) at sbt.Load$$anonfun$loadUnit$1.apply(Load.scala:485) at sbt.Load$$anonfun$loadUnit$1.apply(Load.scala:459) at sbt.Load$.timed(Load.scala:1025) at sbt.Load$.loadUnit(Load.scala:459) at sbt.Load$$anonfun$25$$anonfun$apply$14.apply(Load.scala:311) at sbt.Load$$anonfun$25$$anonfun$apply$14.apply(Load.scala:310) at sbt.BuildLoader$$anonfun$componentLoader$1$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6.apply(BuildLoader.scala:91) at sbt.BuildLoader$$anonfun$componentLoader$1$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6.apply(BuildLoader.scala:90) at sbt.BuildLoader.apply(BuildLoader.scala:140) at sbt.Load$.loadAll(Load.scala:365) at sbt.Load$.loadURI(Load.scala:320) at sbt.Load$.load(Load.scala:316) at sbt.Load$.load(Load.scala:305) at sbt.Load$$anonfun$4.apply(Load.scala:146) at sbt.Load$$anonfun$4.apply(Load.scala:146) at sbt.Load$.timed(Load.scala:1025) at sbt.Load$.apply(Load.scala:146) at sbt.Load$.defaultLoad(Load.scala:39) at sbt.BuiltinCommands$.liftedTree1$1(Main.scala:496) at sbt.BuiltinCommands$.doLoadProject(Main.scala:496) at sbt.BuiltinCommands$$anonfun$loadProjectImpl$2.apply(Main.scala:488) at sbt.BuiltinCommands$$anonfun$loadProjectImpl$2.apply(Main.scala:488) at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:59) at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:59) at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:61) at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:61) at sbt.Command$.process(Command.scala:93) at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:96) at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:96) at sbt.State$$anon$1.process(State.scala:184) at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:96) at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:96) at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17) at sbt.MainLoop$.next(MainLoop.scala:96) at sbt.MainLoop$.run(MainLoop.scala:89) at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:68) at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:63) at sbt.Using.apply(Using.scala:24) at sbt.MainLoop$.runWithNewLog(MainLoop.scala:63) at sbt.MainLoop$.runAndClearLast(MainLoop.scala:46) at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:30) at sbt.MainLoop$.runLogged(MainLoop.scala:22) at sbt.StandardMain$.runManaged(Main.scala:57) at sbt.xMain.run(Main.scala:29) at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:109) at xsbt.boot.Launch$.withContextLoader(Launch.scala:128) at xsbt.boot.Launch$.run(Launch.scala:109) at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35) at xsbt.boot.Launch$.launch(Launch.scala:117) at xsbt.boot.Launch$.apply(Launch.scala:18) at xsbt.boot.Boot$.runImpl(Boot.scala:41) at xsbt.boot.Boot$.main(Boot.scala:17) at xsbt.boot.Boot.main(Boot.scala) [error] scala.reflect.internal.MissingRequirementError: object scala.runtime in compiler mirror not found. [error] Use 'last' for the full log. Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768M; support was removed in 8.0
A toMap gotcha in Scala that I can't understand
![图片说明](https://img-ask.csdn.net/upload/201908/25/1566728593_502212.png) Converting the list to a map drops the first dog entry. I can't understand why. Urgently asking for help; it's driving my OCD crazy.
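The screenshot isn't visible here, but the described behavior matches how toMap works: Map keys are unique, so when several pairs share a key, each later pair overwrites the earlier one, and only the last dog survives. A small sketch:

```
// toMap keeps one value per key; later duplicates overwrite earlier ones.
val pairs = List(("dog", 1), ("cat", 2), ("dog", 3))
println(pairs.toMap) // Map(dog -> 3, cat -> 2): the ("dog", 1) pair is gone

// To keep every value, group by key instead of calling toMap:
val all = pairs.groupBy(_._1).map { case (k, vs) => k -> vs.map(_._2) }
println(all)         // Map(dog -> List(1, 3), cat -> List(2))
```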
How do I solve this problem with a Kafka cluster built on Docker?
First, some context for this error: I didn't build this on my own VM. Huawei provided a server, so I set up Docker on it directly, created three containers running Kafka, and built the cluster with docker-compose. (screenshot of the setup) The port mappings are as shown. (screenshot of the port mappings) The problem: when IDEA connects to the Kafka cluster, it first connects to IP:5000,5002,5004, then follows the returned host.name = kafka1,kafka2,kafka3, and finally keeps connecting to advertised.host.name = kafka1,kafka2,kafka3. On an ordinary server that would be fine; I'd just add host/IP mappings to my local hosts file. But with these containers that doesn't work: the container IPs are assigned on the internal network, so my local machine cannot reach them at all. 20/01/16 22:11:04 INFO AppInfoParser: Kafka version: 2.4.0 20/01/16 22:11:04 INFO AppInfoParser: Kafka commitId: 77a89fcf8d7fa018 20/01/16 22:11:04 INFO AppInfoParser: Kafka startTimeMs: 1579183864167 20/01/16 22:11:04 INFO KafkaConsumer: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Subscribed to topic(s): test, topicongbo 20/01/16 22:11:04 INFO Metadata: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Cluster ID: Kkwgy0gkSkmGAlsC_5cz9A 20/01/16 22:11:04 INFO AbstractCoordinator: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Discovered group coordinator kafka3:9092 (id: 2147483644 rack: null) 20/01/16 22:11:06 WARN NetworkClient: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Error connecting to node kafka3:9092 (id: 2147483644 rack: null) java.net.UnknownHostException: kafka3 at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) at java.net.InetAddress.getAllByName0(InetAddress.java:1277) at java.net.InetAddress.getAllByName(InetAddress.java:1193) at java.net.InetAddress.getAllByName(InetAddress.java:1127) at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104) at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403) at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363) at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151) at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:955) at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:289) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:572) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:757) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:737) at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204) at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167) at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:599) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:409) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:294) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:230) at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:444) at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267) at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1235) at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1168) at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.paranoidPoll(DirectKafkaInputDStream.scala:172) at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:260) at org.apache.spark.streaming.DStreamGraph.$anonfun$start$7(DStreamGraph.scala:54) at org.apache.spark.streaming.DStreamGraph.$anonfun$start$7$adapted(DStreamGraph.scala:54) at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach(ParArray.scala:145) at scala.collection.parallel.ParIterableLike$Foreach.leaf(ParIterableLike.scala:974) at scala.collection.parallel.Task.$anonfun$tryLeaf$1(Tasks.scala:53) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at scala.util.control.Breaks$$anon$1.catchBreak(Breaks.scala:67) at scala.collection.parallel.Task.tryLeaf(Tasks.scala:56) at scala.collection.parallel.Task.tryLeaf$(Tasks.scala:50) at scala.collection.parallel.ParIterableLike$Foreach.tryLeaf(ParIterableLike.scala:971) at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute(Tasks.scala:153) at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute$(Tasks.scala:149) at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:440) at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) So how can this error be solved? Moreover, I have no permission to modify the Huawei security group; only ports 5000-5010 are open to the outside.
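The usual fix is to make each broker advertise an address the external client can actually reach, so the client never has to resolve the container hostnames kafka1/kafka2/kafka3. A hedged server.properties sketch for the first broker, with the public IP as a placeholder; the same values can typically be supplied via the KAFKA_LISTENERS / KAFKA_ADVERTISED_LISTENERS environment variables in common Kafka Docker images:

```
# Broker 1 (sketch): bind inside the container, but advertise the
# externally reachable address (public server IP + published port 5000).
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<public-server-ip>:5000
```

With that in place, the metadata the broker returns points at `<public-server-ip>:5000` instead of `kafka3:9092`, which fits the constraint that only ports 5000-5010 are reachable from outside.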
Building mixed Scala and Java code with Maven in Scala IDE
I created a Scala project in Scala IDE and then converted it to a Maven project. Running clean install on the Scala classes alone works fine, but after I add a Java class that references a Scala class, the build fails with "cannot find symbol" errors. Does anyone have a way to solve this? Below is the build configuration from the pom.xml:
```
<build>
  <sourceDirectory>src</sourceDirectory>
  <plugins>
    <plugin>
      <groupId>org.scala-tools</groupId>
      <artifactId>maven-scala-plugin</artifactId>
      <version>2.15.2</version>
      <executions>
        <execution>
          <id>scala-compile-first</id>
          <goals>
            <goal>compile</goal>
            <goal>add-source</goal>
          </goals>
          <!--
          <configuration>
            <includes>
              <include>**/*.scala</include>
            </includes>
          </configuration>
          -->
        </execution>
        <execution>
          <id>scala-test-compile</id>
          <goals>
            <goal>testCompile</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <scalaVersion>2.10.5</scalaVersion>
      </configuration>
    </plugin>
    <plugin>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.3</version>
      <configuration>
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>
  </plugins>
</build>
```
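"Cannot find symbol" from javac for Scala classes typically means the Scala compiler has not run yet when maven-compiler-plugin compiles the Java sources. A common remedy (a sketch, not verified against this exact plugin version) is to bind the Scala compilation to the earlier process-resources phase so scalac runs before javac:

```
<!-- Sketch: run scalac before javac by binding it to process-resources. -->
<execution>
  <id>scala-compile-first</id>
  <phase>process-resources</phase>
  <goals>
    <goal>add-source</goal>
    <goal>compile</goal>
  </goals>
</execution>
```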