  • 3 answers
  • 20 views

Reading Excel data with Spark added spaces into the bank card numbers. I used trim in Spark SQL to remove the spaces, but it has no effect: the spaces are still there and cannot be removed. Has anyone run into this? How do I fix it? Please help!
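One likely cause, offered as a guess: the "spaces" Excel inserts into card numbers are often not the ASCII space that trim removes, but non-breaking (U+00A0) or full-width (U+3000) spaces, and trim in any case only strips leading and trailing characters, never interior ones. A regexp_replace over all whitespace classes usually works; the column name card_no and the sample values below are hypothetical:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.regexp_replace

    val spark = SparkSession.builder().appName("clean-card-no").master("local[*]").getOrCreate()
    import spark.implicits._

    // Sample rows imitating Excel output; \u00A0 is a non-breaking space.
    val df = Seq("6222\u00A00210\u00A00001", "6222 0210 0002").toDF("card_no")

    // trim only strips ASCII spaces at the ends; regexp_replace with \s plus
    // the Unicode separator category \p{Z} removes every kind of space anywhere.
    val cleaned = df.withColumn("card_no", regexp_replace($"card_no", "[\\s\\p{Z}]+", ""))
    cleaned.show(false)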

  • 3 answers
  • 17 views

I want to build a stack based on Hadoop 3.0.3. Which versions of Hive, Spark, HBase, Scala, Flume, Kafka, Sqoop and ZooKeeper should I use? I searched a lot online and can't tell which answer is correct. I hope someone can clarify.

  • 1 answer
  • 9 views

A Spark question: is there a way to get the worker's information inside map, and then have only that one worker output logs?
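Not a full answer, but one workable trick: inside a task you can read the executor id from SparkEnv (executors are what actually run map tasks; a single worker may host several executors). A minimal sketch, where the target id "1" is an assumption you would take from a first exploratory run:

    import org.apache.spark.{SparkConf, SparkContext, SparkEnv}

    // master is supplied by spark-submit on a real cluster
    val sc = new SparkContext(new SparkConf().setAppName("one-executor-log"))
    val rdd = sc.parallelize(1 to 100, 8)

    val doubled = rdd.mapPartitions { iter =>
      val execId = SparkEnv.get.executorId // "driver" in local mode; "0", "1", ... on a cluster
      iter.map { x =>
        if (execId == "1") println(s"[executor $execId] processing $x") // only this executor logs
        x * 2
      }
    }
    doubled.count()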

Answered by 运动码农 · acceptance rate 38.5% · 24 days ago
  • 3 answers
  • 40 views

It looks like a data backlog problem when Spark Streaming consumes from Kafka. The error is as follows: Part of the code: I have tried configuring the offsets manually, but it still throws the error:

    val offsets = collection.Map[TopicPartition, Long](
      new TopicPartition("recommender", 0) -> 21L
    )

Can anyone solve this? I've been stuck for a long time!
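For reference, a minimal sketch of wiring manual offsets into spark-streaming-kafka-0-10; the broker address, group id, batch interval, and rate limit are placeholders, and the backpressure settings are the usual first defense against a backlog. The topic name and starting offset 21 are carried over from the question:

    import org.apache.kafka.common.TopicPartition
    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010._

    val conf = new SparkConf()
      .setAppName("recommender-stream")
      .setMaster("local[2]") // local testing; streaming needs at least 2 cores
      // throttle consumption so a backlog cannot overwhelm each batch
      .set("spark.streaming.backpressure.enabled", "true")
      .set("spark.streaming.kafka.maxRatePerPartition", "1000")
    val ssc = new StreamingContext(conf, Seconds(5))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "recommender-group",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    // starting offsets are passed directly to the consumer strategy
    val offsets = Map(new TopicPartition("recommender", 0) -> 21L)

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("recommender"), kafkaParams, offsets)
    )
    stream.map(_.value()).print()

    ssc.start()
    ssc.awaitTermination()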

  • 0 answers
  • 5 views

I have two DataFrames of ten rows each and compute their difference with Spark SQL's except. The two datasets have the same column names, the same column order, and identical data, so the correct difference should be empty, but the result I get is the same 10 rows as the input.
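For what it's worth, except on truly identical DataFrames does return an empty result, so when all 10 rows come back the rows are almost certainly not byte-identical: typical culprits are trailing or invisible whitespace, or columns whose types differ (e.g. Int vs Long, or Double values that display the same). A quick way to check, with hypothetical column names:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("except-check").master("local[*]").getOrCreate()
    import spark.implicits._

    val df1 = Seq((1, "a"), (2, "b")).toDF("id", "name")
    val df2 = Seq((1, "a"), (2, "b")).toDF("id", "name")

    df1.except(df2).show() // empty when the rows really are identical

    // When the real data comes back non-empty, compare the two schemas first
    // (type mismatches break equality even when show() looks the same) ...
    df1.printSchema(); df2.printSchema()
    // ... then print without truncation so stray whitespace becomes visible:
    df1.show(false)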

  • 0 answers
  • 7 views

Most of the marketing SMS messages we send get blocked. Besides changing the message content, is there any other way around this? Please help!

  • 3 answers
  • 17 views

val rst: Map[String, Any] is initialized from a Map(...), and then rst = m.-("A") + ("A" -> prev_a). Why doesn't this replace the value for key "A"? It still holds the original value!
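Two things are going on here, assuming the names from the question. First, if rst is a val it cannot be reassigned at all (that line will not compile). Second, immutable Map operations never modify the map in place: - and + return a new map that must be bound somewhere. A minimal sketch:

    // m and prev_a stand in for the question's values.
    val m: Map[String, Any] = Map("A" -> 1, "B" -> 2)
    val prev_a: Any = 99

    // Either make the binding a var and reassign the NEW map that + returns...
    var rst: Map[String, Any] = m
    rst = rst - "A" + ("A" -> prev_a)

    // ...or bind the result to a fresh val; + on an existing key replaces it,
    // so removing "A" first is not even necessary:
    val rst2 = m + ("A" -> prev_a)
    println(rst2("A")) // 99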

  • 0 answers
  • 4 views

For example, if I want to know what path the log is configured to write to, is there a way to get it dynamically?
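A sketch of one way, assuming log4j 1.x (the framework Spark 2.x bundles); logback or log4j2 would need their own APIs. The root logger's appenders know the file they write to:

    import org.apache.log4j.{Appender, FileAppender, Logger}

    // Walk the root logger's appenders and report any file paths found.
    val appenders = Logger.getRootLogger.getAllAppenders
    while (appenders.hasMoreElements) {
      appenders.nextElement() match {
        case f: FileAppender => println(s"log file path: ${f.getFile}")
        case a: Appender     => println(s"non-file appender: ${a.getName}")
      }
    }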

  • 3 answers
  • 59 views

21/05/07 12:59:00 WARN Utils: Your hostname, cr-ThinkStation-P720 resolves to a loopback address: 127.0.1.1; using 192.168.31.101 instead (on interface enp4s0)
21/05/07 12:59:00 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
21/05/07 12:59:00 INFO SparkContext: Running Spark version 2.4.3
21/05/07 12:59:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/05/07 12:59:02 INFO SparkContext: Submitted application: SHLCfg2_round4_implicit
21/05/07 12:59:02 INFO SecurityManager: Changing view acls to: cr
21/05/07 12:59:02 INFO SecurityManager: Changing modify acls to: cr
21/05/07 12:59:02 INFO SecurityManager: Changing view acls groups to:
21/05/07 12:59:02 INFO SecurityManager: Changing modify acls groups to:
21/05/07 12:59:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(cr); groups with view permissions: Set(); users with modify permissions: Set(cr); groups with modify permissions: Set()
21/05/07 12:59:03 INFO Utils: Successfully started service 'sparkDriver' on port 41251.
21/05/07 12:59:03 INFO SparkEnv: Registering MapOutputTracker
21/05/07 12:59:03 INFO SparkEnv: Registering BlockManagerMaster
21/05/07 12:59:03 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
21/05/07 12:59:03 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
21/05/07 12:59:03 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-4648f2d0-896a-435b-a6f5-3815d846f9d3
21/05/07 12:59:03 INFO MemoryStore: MemoryStore started with capacity 15.8 GB
21/05/07 12:59:03 INFO SparkEnv: Registering OutputCommitCoordinator
21/05/07 12:59:03 INFO Utils: Successfully started service 'SparkUI' on port 4040.
21/05/07 12:59:03 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.31.101:4040
21/05/07 12:59:03 INFO Executor: Starting executor ID driver on host localhost
21/05/07 12:59:04 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43409.
21/05/07 12:59:04 INFO NettyBlockTransferService: Server created on 192.168.31.101:43409
21/05/07 12:59:04 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
21/05/07 12:59:04 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.31.101, 43409, None)
21/05/07 12:59:04 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.31.101:43409 with 15.8 GB RAM, BlockManagerId(driver, 192.168.31.101, 43409, None)
21/05/07 12:59:04 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.31.101, 43409, None)
21/05/07 12:59:04 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.31.101, 43409, None)
21/05/07 14:49:44 ERROR Executor: Exception in task 31.0 in stage 645120.0 (TID 2767934)
java.io.FileNotFoundException: /home/cr/Data/prosOfScala/projects/prosOfIDEA0507/CPT/checkPointSHLCfg2_round4_implicit/626571e5-2c43-4a41-9de7-6431ffcfa8b7/rdd-424305/..part-00031-attempt-0.crc (No space left on device)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:222)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
    at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:402)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:854)
    at org.apache.spark.rdd.ReliableCheckpointRDD$.writePartitionToCheckpointFile(ReliableCheckpointRDD.scala:182)
    at org.apache.spark.rdd.ReliableCheckpointRDD$$anonfun$writeRDDToCheckpointDirectory$1.apply(ReliableCheckpointRDD.scala:141)
    at org.apache.spark.rdd.ReliableCheckpointRDD$$anonfun$writeRDDToCheckpointDirectory$1.apply(ReliableCheckpointRDD.scala:141)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
21/05/07 14:49:44 ERROR Executor: Exception in task 27.0 in stage 645120.0 (TID 2767930)
java.io.FileNotFoundException: /home/cr/Data/prosOfScala/projects/prosOfIDEA0507/CPT/checkPointSHLCfg2_round4_implicit/626571e5-2c43-4a41-9de7-6431ffcfa8b7/rdd-424305/..part-00027-attempt-0.crc (No space left on device)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
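The telling part of the log is "No space left on device" while ReliableCheckpointRDD writes to the checkpoint directory: the job has been running for almost two hours and has reached stage 645120, so an iterative job has likely filled the disk with accumulated checkpoint and shuffle/spill files. Freeing space (or deleting stale checkpoint subdirectories) is the real fix; as a sketch, the heavy writers can also be redirected to a larger disk, with the /data/... paths below as placeholders:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("SHLCfg2_round4_implicit")
      .set("spark.local.dir", "/data/spark-tmp")  // shuffle and spill files (default /tmp)
    val sc = new SparkContext(conf)
    sc.setCheckpointDir("/data/spark-checkpoints") // where ReliableCheckpointRDD writes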

  • 2 answers
  • 19 views

    abstract class Teacher {
      def teach: Unit
    }
    trait ChinsesTeacher extends Teacher {
      def teach: Unit = println("teach Chinese")
    }
    trait EnglishTeacher extends Teacher {
      def teach: Unit = println("teach English")
    }
    def main(args: Array[String]): Unit = {
      val teacher = new Teacher with ChinsesTeacher with EnglishTeacher
      teacher.teach
    }
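As written this does not compile: ChinsesTeacher and EnglishTeacher each provide a concrete teach, so mixing both into one instance is a "conflicting members" error. Marking the trait methods with override lets Scala's trait linearization resolve the conflict, and the right-most trait wins. A sketch of the fixed version:

    abstract class Teacher {
      def teach: Unit
    }
    trait ChinsesTeacher extends Teacher {
      override def teach: Unit = println("teach Chinese")
    }
    trait EnglishTeacher extends Teacher {
      override def teach: Unit = println("teach English")
    }

    object Main {
      def main(args: Array[String]): Unit = {
        // Linearization: Teacher -> ChinsesTeacher -> EnglishTeacher,
        // so the right-most trait's implementation is the one called.
        val teacher = new Teacher with ChinsesTeacher with EnglishTeacher
        teacher.teach // prints "teach English"
      }
    }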

  • 3 answers
  • 11 views

In Spark, how do I print a nested tuple such as (('北京',12.4,34.5),('上海',10.4,30.5),('济南',15.4,20.5)) in this format: 北京:12.4,34.5 上海:10.4,30.5 济南:15.4,20.5
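A minimal sketch using the question's data: each tuple is destructured with a pattern match, and collect() brings the results to the driver so the println output stays in order:

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("tuple-format").setMaster("local[*]"))
    val data = Seq(("北京", 12.4, 34.5), ("上海", 10.4, 30.5), ("济南", 15.4, 20.5))

    sc.parallelize(data)
      .map { case (city, v1, v2) => s"$city:$v1,$v2" } // "北京:12.4,34.5" etc.
      .collect()
      .foreach(println)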