Java connects to Spark but produces no computation result

The code in IDEA is as follows:
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

public final class JavaSparkPi {

    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession
                .builder()
                .master("spark://192.168.115.128:7077")
                .appName("JavaSparkPi")
                .getOrCreate();

        JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());

        int slices = (args.length == 1) ? Integer.parseInt(args[0]) : 2;
        int n = 100000 * slices;
        List<Integer> l = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            l.add(i);
        }

        JavaRDD<Integer> dataSet = jsc.parallelize(l, slices);

        // Monte Carlo estimate: count how many random points land inside the unit circle
        int count = dataSet.map(integer -> {
            double x = Math.random() * 2 - 1;
            double y = Math.random() * 2 - 1;
            return (x * x + y * y <= 1) ? 1 : 0;
        }).reduce((integer, integer2) -> integer + integer2);

        System.out.println("Pi is roughly " + 4.0 * count / n);

        spark.stop();
    }
}

The IDEA console output is as follows:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
18/01/03 10:35:41 INFO SparkContext: Running Spark version 2.2.1
18/01/03 10:35:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/03 10:35:43 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:378)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:393)
at org.apache.hadoop.util.Shell.(Shell.java:386)
at org.apache.hadoop.util.StringUtils.(StringUtils.java:79)
at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:116)
at org.apache.hadoop.security.Groups.(Groups.java:93)
at org.apache.hadoop.security.Groups.(Groups.java:73)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:293)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:789)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2424)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2424)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2424)
at org.apache.spark.SparkContext.(SparkContext.scala:295)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:918)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:910)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:910)
at JavaSparkPi.main(JavaSparkPi.java:39)
18/01/03 10:35:43 INFO SparkContext: Submitted application: JavaSparkPi
18/01/03 10:35:44 INFO SecurityManager: Changing view acls to: wmx
18/01/03 10:35:44 INFO SecurityManager: Changing modify acls to: wmx
18/01/03 10:35:44 INFO SecurityManager: Changing view acls groups to:
18/01/03 10:35:44 INFO SecurityManager: Changing modify acls groups to:
18/01/03 10:35:44 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(wmx); groups with view permissions: Set(); users with modify permissions: Set(wmx); groups with modify permissions: Set()
18/01/03 10:35:45 INFO Utils: Successfully started service 'sparkDriver' on port 62919.
18/01/03 10:35:45 INFO SparkEnv: Registering MapOutputTracker
18/01/03 10:35:45 INFO SparkEnv: Registering BlockManagerMaster
18/01/03 10:35:45 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/01/03 10:35:45 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/01/03 10:35:45 INFO DiskBlockManager: Created local directory at C:\Users\wmx\AppData\Local\Temp\blockmgr-37c3cc47-e21d-498b-b0ec-e987996a39cd
18/01/03 10:35:45 INFO MemoryStore: MemoryStore started with capacity 899.7 MB
18/01/03 10:35:45 INFO SparkEnv: Registering OutputCommitCoordinator
18/01/03 10:35:46 INFO Utils: Successfully started service 'SparkUI' on port 4040.
18/01/03 10:35:46 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://172.21.96.1:4040
18/01/03 10:35:47 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://192.168.115.128:7077...
18/01/03 10:35:47 INFO TransportClientFactory: Successfully created connection to /192.168.115.128:7077 after 105 ms (0 ms spent in bootstraps)
18/01/03 10:35:48 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20180102183557-0004
18/01/03 10:35:48 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20180102183557-0004/0 on worker-20180101224135-192.168.115.128-37401 (192.168.115.128:37401) with 1 cores
18/01/03 10:35:48 INFO StandaloneSchedulerBackend: Granted executor ID app-20180102183557-0004/0 on hostPort 192.168.115.128:37401 with 1 cores, 1024.0 MB RAM
18/01/03 10:35:48 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20180102183557-0004/0 is now RUNNING
18/01/03 10:35:48 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 62942.
18/01/03 10:35:48 INFO NettyBlockTransferService: Server created on 172.21.96.1:62942
18/01/03 10:35:48 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/01/03 10:35:48 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 172.21.96.1, 62942, None)
18/01/03 10:35:48 INFO BlockManagerMasterEndpoint: Registering block manager 172.21.96.1:62942 with 899.7 MB RAM, BlockManagerId(driver, 172.21.96.1, 62942, None)
18/01/03 10:35:48 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 172.21.96.1, 62942, None)
18/01/03 10:35:48 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 172.21.96.1, 62942, None)
18/01/03 10:35:50 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
18/01/03 10:35:51 INFO SparkContext: Starting job: reduce at JavaSparkPi.java:56
18/01/03 10:35:51 INFO DAGScheduler: Got job 0 (reduce at JavaSparkPi.java:56) with 2 output partitions
18/01/03 10:35:51 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at JavaSparkPi.java:56)
18/01/03 10:35:51 INFO DAGScheduler: Parents of final stage: List()
18/01/03 10:35:51 INFO DAGScheduler: Missing parents: List()
18/01/03 10:35:51 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at JavaSparkPi.java:52), which has no missing parents
18/01/03 10:35:52 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.0 KB, free 899.7 MB)

6 answers

(screenshot)

The Spark cluster log is as follows:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/spark-2.2.1-bin-hadoop2.7/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/01/02 18:12:28 INFO executor.CoarseGrainedExecutorBackend: Started daemon with process name: 54033@ubuntu
18/01/02 18:12:29 INFO util.SignalUtils: Registered signal handler for TERM
18/01/02 18:12:29 INFO util.SignalUtils: Registered signal handler for HUP
18/01/02 18:12:29 INFO util.SignalUtils: Registered signal handler for INT
18/01/02 18:12:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/02 18:12:29 INFO spark.SecurityManager: Changing view acls to: root,wmx
18/01/02 18:12:29 INFO spark.SecurityManager: Changing modify acls to: root,wmx
18/01/02 18:12:29 INFO spark.SecurityManager: Changing view acls groups to:
18/01/02 18:12:29 INFO spark.SecurityManager: Changing modify acls groups to:
18/01/02 18:12:29 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, wmx); groups with view permissions: Set(); users with modify permissions: Set(root, wmx); groups with modify permissions: Set()
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1713)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:66)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:284)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:100)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:202)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:67)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:66)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
... 4 more
Caused by: java.io.IOException: Failed to connect to /172.21.96.1:61475
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:232)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:182)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.AbstractChannel$AnnotatedSocketException: Network is unreachable: /172.21.96.1:61475
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:205)
at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1226)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:540)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:525)
at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:540)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:525)
at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:540)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:525)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:507)
at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:970)
at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:215)
at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:166)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:446)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more

(screenshot)

java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
This error is already clear: winutils is missing. You are evidently running this on Windows, and winutils.exe is what Hadoop uses to work on Windows.
Your data does not come from Hadoop, so check whether you have a Hadoop configuration file on the classpath and remove it first.
Alternatively, find a copy of winutils and put it in place.
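A minimal sketch of that second option, assuming winutils.exe for a matching Hadoop 2.7.x build has been downloaded and placed under a hypothetical C:\hadoop\bin:

```java
// Sketch only: point Hadoop's Shell at a directory containing bin\winutils.exe.
// The C:\hadoop path is an assumption; setting the HADOOP_HOME environment variable works as well.
// This must run before the SparkSession/SparkContext is created.
System.setProperty("hadoop.home.dir", "C:\\hadoop");

SparkSession spark = SparkSession
        .builder()
        .master("spark://192.168.115.128:7077")
        .appName("JavaSparkPi")
        .getOrCreate();
```

Note that this only silences the winutils lookup; it does not by itself fix the executor connection failure shown further down in the cluster log.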

bettermouse1
bettermouse1 replying to zhenghailong888: Yes, that works. First, though, I suggest installing a Linux system in a virtual machine and building the Spark cluster there rather than on Windows: no company runs it that way, and there are so many problems that fixing them all is a hassle. The way you describe running it is possible, as long as the machines can reach each other over the network. You also have to package a jar in IDEA first and then point Spark at the jar's location (a sketch of that follows below this thread); you can search for the details.
Replied over 2 years ago
zhenghailong888
zhenghailong888 Can I run a jar program that specifies the cluster address with .master("spark://192.168.115.128:7077") and does the computation there? Without submitting the jar through spark-submit, just running it with java -jar and reaching the cluster over the network via "spark://192.168.115.128:7077" — can that make real-time use of Spark's compute power to execute the Java methods I have written?
Replied over 2 years ago
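A hedged sketch of the jar setup mentioned above, assuming the artifact built in IDEA/Maven sits at a hypothetical target/java-spark-pi.jar. The point is that when the driver is started with java -jar (or from the IDE) instead of spark-submit, the standalone executors still need the compiled application classes, including the map/reduce lambdas:

```java
// Sketch only: spark.jars distributes the application jar to the executors so they can
// deserialize the closures used in map()/reduce(). The jar path is an assumption.
SparkSession spark = SparkSession
        .builder()
        .master("spark://192.168.115.128:7077")
        .appName("JavaSparkPi")
        .config("spark.jars", "target/java-spark-pi.jar")
        .getOrCreate();
```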

"Failed to connect to /172.21.96.1:61475" — the cluster-side error is also obvious: the cluster probably isn't set up properly. Turn the firewall off, and once it is set up, test it with spark-shell. (A sketch of the driver-address settings this points to follows after the comment below.)

zhenghailong888
zhenghailong888 /172.21.96.1:61475 is my Windows 10 development machine. Can I use .master("spark://192.168.115.128:7077") to submit the job in my code to the Spark cluster and then get the result back?
Replied over 2 years ago
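For context on that address: the worker in the VM is trying to connect back to the driver on 172.21.96.1:61475, a virtual-adapter address of the Windows machine that the cluster cannot route to, which is why the executor dies and no result ever comes back. A sketch of the driver-side settings that usually address this, assuming purely as an example that 192.168.115.1 is an address of the Windows host on the same network as the worker at 192.168.115.128 — check ipconfig for the real one, and make sure the Windows firewall allows the inbound connection:

```java
// Sketch under the assumption that 192.168.115.1 is a driver address the worker can actually reach.
SparkSession spark = SparkSession
        .builder()
        .master("spark://192.168.115.128:7077")
        .appName("JavaSparkPi")
        .config("spark.driver.host", "192.168.115.1")  // advertise a routable driver address
        .config("spark.driver.port", "51000")          // optional: pin the port so it can be opened in the firewall
        .getOrCreate();
```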

This is clearly an error from trying to run Spark on Windows. Spark depends on Hadoop, Hadoop has to be built for Windows, and a Windows build of Hadoop produces the winutils.exe file; running Spark code locally on Windows requires the Hadoop environment variables to be configured. Suggestions:
1. Find a Hadoop build compiled for Windows and configure the environment variables.
2. Download the matching Spark release and configure its environment variables.
3. Then you can happily run the code on Windows — but the master has to be set to .master("local[*]").
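A minimal sketch of the local-mode variant suggested in point 3, which runs the whole Pi example inside the local JVM and needs neither the standalone cluster nor a reachable driver address:

```java
// Local mode: all executors run as threads in this JVM, so none of the networking issues apply.
SparkSession spark = SparkSession
        .builder()
        .master("local[*]")
        .appName("JavaSparkPi")
        .getOrCreate();
```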
