Spark configuration problem: the cluster won't start

Running ./start-all.sh under sbin produces the following error:

```
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/spark-1.6.1-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.master.Master-1-ubuntu.out
master: ssh: Could not resolve hostname master: Name or service not known
```

Could someone please explain what is going wrong?

1 answer

The hostname cannot be resolved. Check whether the hostname is spelled correctly, whether it has an entry in /etc/hosts (or DNS), and whether the IP address is correct.
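If the cause is a missing hosts entry, here is a minimal sketch of the usual fix; the IP 192.168.1.10 below is a placeholder, use your master's real address:

```bash
# On every node, map the master's name to its IP (hypothetical IP shown):
echo "192.168.1.10  master" | sudo tee -a /etc/hosts

# Verify that the name now resolves and is reachable over SSH:
ping -c 1 master
ssh master hostname

# Alternatively, point Spark at an explicit host in conf/spark-env.sh:
# export SPARK_MASTER_IP=192.168.1.10     # Spark 1.x, as used in this question
# export SPARK_MASTER_HOST=192.168.1.10   # Spark 2.x
```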

Related questions
Spark cluster won't start; the command isn't even recognized

Running start-master.sh in the SPARK_HOME/sbin directory gives:

```
-bash: start-master.sh: command not found
```

jps shows:

```
15585 DataNode
15432 NameNode
15945 Jps
15822 SecondaryNameNode
```

The Master process is missing, so the Spark cluster will not start. Thanks!
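The shell only searches directories on PATH, so a script sitting in the current directory has to be invoked with an explicit path. A minimal sketch of the usual workaround, assuming SPARK_HOME points at the Spark install:

```bash
# Run the script with an explicit path instead of relying on PATH lookup:
cd $SPARK_HOME/sbin
./start-master.sh

# or, from anywhere:
$SPARK_HOME/sbin/start-master.sh

# Then confirm the Master process is up:
jps | grep Master
```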

Does the Spark master have to be on the same node as the Hadoop master? Does it depend on ZooKeeper?

I am using hadoop-2.5-cdh5.3. The cluster NameNode is on host 01, and when configuring Spark I pointed the Spark master at node 03. After finishing the configuration I started the Hadoop cluster and YARN in turn, then started Spark, but the example job always fails to run properly. Could someone with experience clarify: does the Spark master have to be on the same node as the Hadoop master node? Does ZooKeeper also have to be running? The screenshots show my example run; the web UI shows the job ended up in a killed state. ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445391119_876804.png) ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445391055_88906.png) ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445391145_543234.png) ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445391080_514411.png) Worker deploy log: ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445398054_94598.png)
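For reference, the Spark standalone master does not have to sit on the Hadoop NameNode host, and ZooKeeper is only needed if you enable high-availability master recovery. A hedged sketch of a spark-env.sh with the master on a separate node; the host names are placeholders:

```bash
# conf/spark-env.sh on every node (node03 as master is only an example)
export SPARK_MASTER_IP=node03            # a standalone master can live on any host
export HADOOP_CONF_DIR=/etc/hadoop/conf  # lets Spark find the HDFS/YARN client configs

# ZooKeeper is only required if you opt into HA master recovery:
# export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181"
```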

How do I configure Spark on Windows?

On Windows 10, I installed and configured Spark following a tutorial. Why does typing spark-shell at the cmd prompt give "The system cannot find the path specified"?

Spark configured successfully on Windows, but the HDFS NameNode won't start

```
16/08/22 09:44:14 ERROR namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
    at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:322)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
```

Please help. This is a single-machine setup; Spark itself runs fine, but HDFS does not.

Error when starting Spark, even though the configuration looks fine

![screenshot](https://img-ask.csdn.net/upload/201901/22/1548127186_393883.jpg)![screenshot](https://img-ask.csdn.net/upload/201901/22/1548127191_311104.jpg)

spark-sql --master yarn-client fails to start; asking for help

This is an HA cluster and Hive is fully configured. Running ./bin/spark-sql, spark-sql --master local, or spark-sql --master spark://172.16.4.169:7077 all start spark-sql normally and I can query tables. But ./bin/spark-sql --master yarn-client never finishes starting: it reports no error, it just hangs at the startup screen as shown in the screenshot. Where could the problem be? Any advice is appreciated. ![screenshot](https://img-ask.csdn.net/upload/201610/18/1476801025_876930.jpg)
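When a yarn-client session hangs silently, a frequent cause is that YARN cannot allocate the requested ApplicationMaster or executor containers, so the client simply waits. A hedged way to check, and to retry with a smaller footprint; the resource numbers are placeholders:

```bash
# See whether the application is stuck in ACCEPTED and how much capacity is free:
yarn application -list
yarn node -list

# Retry with modest, explicit resource requests so the containers actually fit:
./bin/spark-sql --master yarn-client \
  --num-executors 2 \
  --executor-memory 1g \
  --executor-cores 1
```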

Spark: SparkContext initialization fails

环境 Ubuntu 16.04 hadoop 2.7.3 scala 2.11.8 spark 2.1.0 已经安装好了hadoop scala,之后配置了下 spark 运行 spark-shell 就爆出来下面的错误 ``` 18/05/22 15:43:30 ERROR spark.SparkContext: Error initializing SparkContext. java.lang.IllegalArgumentException: For input string: "true #是否记录Spark事件,用于应用程序在完成后重构webUI" at scala.collection.immutable.StringLike$class.parseBoolean(StringLike.scala:290) at scala.collection.immutable.StringLike$class.toBoolean(StringLike.scala:260) at scala.collection.immutable.StringOps.toBoolean(StringOps.scala:29) at org.apache.spark.SparkConf$$anonfun$getBoolean$2.apply(SparkConf.scala:407) at org.apache.spark.SparkConf$$anonfun$getBoolean$2.apply(SparkConf.scala:407) at scala.Option.map(Option.scala:146) at org.apache.spark.SparkConf.getBoolean(SparkConf.scala:407) at org.apache.spark.SparkContext.isEventLogEnabled(SparkContext.scala:238) at org.apache.spark.SparkContext.<init>(SparkContext.scala:407) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313) at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868) at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860) at org.apache.spark.repl.Main$.createSparkSession(Main.scala:95) at $line3.$read$$iw$$iw.<init>(<console>:15) at $line3.$read$$iw.<init>(<console>:42) at $line3.$read.<init>(<console>:44) at $line3.$read$.<init>(<console>:48) at $line3.$read$.<clinit>(<console>) at $line3.$eval$.$print$lzycompute(<console>:7) at $line3.$eval$.$print(<console>:6) at $line3.$eval.$print(<console>) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786) at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047) at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638) at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637) at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31) at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19) at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637) at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569) at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565) at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807) at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681) at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395) at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:38) at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37) at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37) at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214) at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:37) at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:105) at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920) at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909) at 
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909) at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97) at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909) at org.apache.spark.repl.Main$.doMain(Main.scala:68) at org.apache.spark.repl.Main$.main(Main.scala:51) at org.apache.spark.repl.Main.main(Main.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738) at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) java.lang.IllegalArgumentException: For input string: "true #是否记录Spark事件,用于应用程序在完成后重构webUI" at scala.collection.immutable.StringLike$class.parseBoolean(StringLike.scala:290) at scala.collection.immutable.StringLike$class.toBoolean(StringLike.scala:260) at scala.collection.immutable.StringOps.toBoolean(StringOps.scala:29) at org.apache.spark.SparkConf$$anonfun$getBoolean$2.apply(SparkConf.scala:407) at org.apache.spark.SparkConf$$anonfun$getBoolean$2.apply(SparkConf.scala:407) at scala.Option.map(Option.scala:146) at org.apache.spark.SparkConf.getBoolean(SparkConf.scala:407) at org.apache.spark.SparkContext.isEventLogEnabled(SparkContext.scala:238) at org.apache.spark.SparkContext.<init>(SparkContext.scala:407) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313) at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868) at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860) at org.apache.spark.repl.Main$.createSparkSession(Main.scala:95) ... 47 elided <console>:14: error: not found: value spark import spark.implicits._ ^ <console>:14: error: not found: value spark import spark.sql ```
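The exception shows that a whole line from spark-defaults.conf, trailing comment included, was read as the value of a boolean setting: in that file everything after the key is taken as the value, so an inline `#` comment on the same line is not stripped. A minimal sketch of the fix; the property name spark.eventLog.enabled is inferred from the isEventLogEnabled frame in the trace:

```bash
# conf/spark-defaults.conf: keep comments on their own lines, never after a value
# Whether to log Spark events (used to rebuild the web UI after the application finishes)
spark.eventLog.enabled   true
```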

Running Spark locally: JNI error, NoClassDefFoundError

The exception is below. I am running the Spark wordcount demo and all the referenced jars are declared as dependencies. I hit a similar error when deploying Spark and fixed it by pointing an environment variable at Hadoop's native (JNI) libraries, but now I don't know what to do when running from the local IDE.

```
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/api/java/function/FlatMapFunction
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
Disconnected from the target VM, address: '127.0.0.1:58564', transport: 'socket'
    at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
    at java.lang.Class.getMethod0(Class.java:3018)
    at java.lang.Class.getMethod(Class.java:1784)
    at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
    at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.api.java.function.FlatMapFunction
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 7 more
```
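A NoClassDefFoundError for a Spark API class at JVM startup usually means the Spark jars are not on the runtime classpath at all, which happens when the dependency is marked provided and the program is launched directly from the IDE. A hedged way to confirm is to package the job and let spark-submit supply the Spark classes; the paths and class name below are placeholders:

```bash
# Build the application jar first (mvn package / sbt package), then run it through
# spark-submit so the Spark runtime jars are placed on the classpath for you:
spark-submit \
  --class com.example.WordCount \
  --master "local[*]" \
  target/wordcount-demo.jar input.txt
```

Alternatively, keep running from the IDE but change the Spark dependency scope from provided to compile so the classes end up on the run configuration's classpath.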

How can spark-sql display the current database name?

Starting spark-sql: ![screenshot](https://img-ask.csdn.net/upload/201903/08/1552009814_366438.jpg) After startup: ![screenshot](https://img-ask.csdn.net/upload/201903/08/1552009871_167191.jpg) I would like the prompt to show the current database, the same way the hive CLI does: ![screenshot](https://img-ask.csdn.net/upload/201903/08/1552009941_168959.jpg) hive configuration file: ![screenshot](https://img-ask.csdn.net/upload/201903/08/1552010009_255879.jpg) spark-env configuration: ![screenshot](https://img-ask.csdn.net/upload/201903/08/1552010079_625482.jpg) Hoping someone can help. Thanks.
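In the hive CLI this prompt behaviour comes from hive.cli.print.current.db, and spark-sql accepts Hive CLI settings too, so the same switch is worth trying; treat this as a suggestion to verify on your version rather than a guarantee:

```bash
# Per invocation:
spark-sql --hiveconf hive.cli.print.current.db=true

# Or persistently, in hive-site.xml on the machine that runs spark-sql:
#   <property>
#     <name>hive.cli.print.current.db</name>
#     <value>true</value>
#   </property>
```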

Setting up a Spark environment: question about master and worker servers

I'm a beginner and have spent three days trying to set up the environment without success. I have two remote servers; can my own laptop act as the master? In other words, can all three machines do the work, or is the laptop only good for logging in and running commands remotely, with only the two remote servers actually working?

Spark on YARN resource scheduling question

When Spark runs on YARN, why does the resource usage look like the picture below, with one node barely used? ![screenshot](https://img-ask.csdn.net/upload/201912/17/1576545833_408284.png) My cluster has six machines in total: one runs the driver and five run executors, each with 8 GB of memory and 8 cores. Spark is started as follows:

```
pyspark --master yarn --num-executors 4 --executor-memory 6g --executor-cores 6 --conf spark.default.parallelism=50 --deploy-mode client
```

Also, I set --num-executors to 4, so why are there 5 containers? No matter what --num-executors is set to, the number of containers is always one more.
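The extra container is expected: on YARN every application gets one container for the ApplicationMaster in addition to the executor containers, even in client deploy mode. A hedged way to see this and to bound the AM's footprint; the property names are standard Spark settings, the sizes are placeholders:

```bash
# List the containers of the running application; one of them is the AM:
yarn application -list
yarn container -list <applicationAttemptId>

# If the AM container should stay small in client mode, cap it explicitly:
pyspark --master yarn --deploy-mode client \
  --num-executors 4 --executor-memory 6g --executor-cores 6 \
  --conf spark.yarn.am.memory=1g \
  --conf spark.yarn.am.cores=1
```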

Spark Master WebUI cannot be accessed

#spark正常启动,无法访问Master,网上说的端口问题,在修改为8089后依旧无法访问,下面是日志和一些配置 #logs: 20/03/18 20:07:56 INFO master.Master: Started daemon with process name: 1920@hadoopnode01 20/03/18 20:07:56 INFO util.SignalUtils: Registered signal handler for TERM 20/03/18 20:07:56 INFO util.SignalUtils: Registered signal handler for HUP 20/03/18 20:07:56 INFO util.SignalUtils: Registered signal handler for INT 20/03/18 20:07:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 20/03/18 20:07:56 INFO spark.SecurityManager: Changing view acls to: hadoopnode01 20/03/18 20:07:56 INFO spark.SecurityManager: Changing modify acls to: hadoopnode01 20/03/18 20:07:56 INFO spark.SecurityManager: Changing view acls groups to: 20/03/18 20:07:56 INFO spark.SecurityManager: Changing modify acls groups to: 20/03/18 20:07:56 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoopnode01); groups with view permissions: Set(); users with modify permissions: Set(hadoopnode01); groups with modify permissions: Set() 20/03/18 20:07:57 INFO util.Utils: Successfully started service 'sparkMaster' on port 7077. 20/03/18 20:07:57 INFO master.Master: Starting Spark master at spark://hadoopnode01:7077 20/03/18 20:07:57 INFO master.Master: Running Spark version 2.4.5 20/03/18 20:07:57 INFO util.log: Logging initialized @1410ms 20/03/18 20:07:57 INFO server.Server: jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown 20/03/18 20:07:57 INFO server.Server: Started @1484ms 20/03/18 20:07:57 INFO server.AbstractConnector: Started ServerConnector@7976972b{HTTP/1.1,[http/1.1]}{0.0.0.0:8089} 20/03/18 20:07:57 INFO util.Utils: Successfully started service 'MasterUI' on port 8089. 20/03/18 20:07:57 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@422e316{/app,null,AVAILABLE,@Spark} 20/03/18 20:07:57 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@15d02cf6{/app/json,null,AVAILABLE,@Spark} 20/03/18 20:07:57 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@74464f69{/,null,AVAILABLE,@Spark} 20/03/18 20:07:57 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@13cad671{/json,null,AVAILABLE,@Spark} 20/03/18 20:07:57 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7e9ff62{/static,null,AVAILABLE,@Spark} 20/03/18 20:07:57 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1fb5f36c{/app/kill,null,AVAILABLE,@Spark} 20/03/18 20:07:57 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@708b72b5{/driver/kill,null,AVAILABLE,@Spark} 20/03/18 20:07:57 INFO ui.MasterWebUI: Bound MasterWebUI to 0.0.0.0, and started at http://hadoopnode01:8089 20/03/18 20:07:57 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@b0f1d0c{/metrics/master/json,null,AVAILABLE,@Spark} 20/03/18 20:07:57 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5891212e{/metrics/applications/json,null,AVAILABLE,@Spark} 20/03/18 20:07:57 INFO master.Master: I have been elected leader! 
New state: ALIVE #etc/profile \#set java enviroment JAVA_HOME=/usr/lib/jvm/java PATH=$PATH:$JAVA_HOME/bin CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar export JAVA_HOME CLASSPATH PATH \#Scala env export SCALA_HOME=/usr/scala/scala-2.13.1 export PATH=$PATH:$SCALA_HOME/bin #ip 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:50:56:3b:85:db brd ff:ff:ff:ff:ff:ff inet 192.168.35.130/24 brd 192.168.35.255 scope global noprefixroute dynamic ens33 valid_lft 1139sec preferred_lft 1139sec inet 192.168.35.10/24 brd 192.168.35.255 scope global secondary noprefixroute ens33 valid_lft forever preferred_lft forever inet6 fe80::505b:3101:5284:850c/64 scope link noprefixroute valid_lft forever preferred_lft forever #spark-env.sh export SPARK_HOME=/home/hadoopnode01/apps/spark export PATH=$PATH:$SPARK_HOME/bin export JAVA_HOME=/usr/lib/jvm/java export SCALA_HOME=/usr/scala/scala-2.13.1 export HADOOP_HOME=/home/hadoopnode01/apps/hadoop-2.9.2 export HADOOP_CONF_DIR=/home/hadoopnode01/apps/hadoop-2.9.2/etc/hadoop export SPARK_LOCAL_IP=192.168.35.10 export SPARK_MASTER_HOST=master.lab.hadoop.com export SPARK_MASTER_PORT=7077 尝试了很多方法都没用,spark正常启动,shell也能用,就是webui都进不去
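Since the log shows MasterWebUI bound to 0.0.0.0:8089, the daemon side looks fine; when the page still cannot be reached from a browser on another machine, the usual suspects are a host firewall or browsing to the wrong IP or port. A hedged checklist; the commands assume a firewalld-based distro, adapt as needed:

```bash
# On the master host: confirm the UI port is actually listening and serving.
ss -lntp | grep 8089
curl -I http://localhost:8089        # should return an HTTP response locally

# Check whether a firewall is blocking it, and open the port if so:
systemctl status firewalld
firewall-cmd --permanent --add-port=8089/tcp && firewall-cmd --reload

# From your own machine, browse to the address the master is actually using,
# e.g. http://192.168.35.10:8089 (the SPARK_LOCAL_IP configured above).
```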

Spark on YARN: the 8088 UI shows only one application in RUNNING state; the rest stay ACCEPTED

Please advise: only one AppId ever shows as RUNNING on the 8088 UI; all the others stay ACCEPTED. I have tried modifying spark-env, yarn-site.xml, spark-defaults.conf, and capacity-scheduler.xml, with no effect.

1. External shuffle service.
   1.1 Edited yarn-site.xml and copied it to the other node:
   ```
   scp -r /usr/local/hadoop-2.7.1/etc/hadoop/yarn-site.xml root@xiuba112:/usr/local/hadoop-2.7.1/etc/hadoop/
   ```
   ```
   <property>
     <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
     <value>org.apache.spark.network.yarn.YarnShuffleService</value>
   </property>
   <property>
     <name>spark.shuffle.service.port</name>
     <value>7337</value>
   </property>
   <property>
     <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>
   ```
   1.2 Added the required jar:
   ```
   cp /usr/local/spark-2.2.1-bin-hadoop2.7/yarn/spark-2.2.1-yarn-shuffle.jar /usr/local/hadoop-2.7.1/share/hadoop/yarn/lib/
   ```
   i.e. copy "${SPARK_HOME}/lib/spark-1.3.0-yarn-shuffle.jar" into "${HADOOP_HOME}/share/hadoop/yarn/lib/". Note: newer versions have no lib directory, only jars; for example spark-2.0.2-yarn-shuffle.jar lives under ${SPARK_HOME}/yarn and should be copied into ${HADOOP_HOME}/share/hadoop/yarn/lib.
   1.3 Restarted the NodeManager processes.
2. Copied spark-defaults.conf to the other node:
   ```
   scp -r /usr/local/spark-2.2.1-bin-hadoop2.7/conf/spark-defaults.conf root@xiuba112:/usr/local/spark-2.2.1-bin-hadoop2.7/conf/
   ```
   and added the required entries to spark-defaults.conf:
   ```
   spark.shuffle.service.enabled=true
   spark.shuffle.service.port=7337
   ```
   Steps 1 and 2 did not solve the problem.
3. Edited /usr/local/spark-2.2.1-bin-hadoop2.7/conf/spark-env.sh and also added this line to /etc/profile on the nodes:
   ```
   export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
   ```
   Step 3 did not solve the problem either.
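When later applications stay in ACCEPTED even though the NodeManagers have free memory and cores, the bottleneck is often the CapacityScheduler's limit on how much of a queue may be used for ApplicationMasters (10% by default), which can allow only one AM to start. A hedged snippet for capacity-scheduler.xml, followed by a scheduler refresh; the 0.5 value is only an illustration:

```
<!-- capacity-scheduler.xml: allow more of the queue to be spent on ApplicationMasters -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>
```

```bash
# Apply without restarting the ResourceManager:
yarn rmadmin -refreshQueues
```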

Spark History Server WebUI shows wrong times

The times shown in the Spark History Server WebUI are wrong. ![screenshot](https://img-ask.csdn.net/upload/201610/20/1476945388_738871.png) How exactly should this time be configured?
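If the timestamps are off by a fixed number of hours, the JVM running the History Server is usually picking up the wrong time zone. A hedged sketch of how to check and force it; Asia/Shanghai is a placeholder zone:

```bash
# Check what the OS itself thinks the time zone and time are:
date
timedatectl              # on systemd-based systems

# Force the History Server JVM to a specific zone via spark-env.sh, then restart it:
echo 'export SPARK_HISTORY_OPTS="$SPARK_HISTORY_OPTS -Duser.timezone=Asia/Shanghai"' >> $SPARK_HOME/conf/spark-env.sh
$SPARK_HOME/sbin/stop-history-server.sh && $SPARK_HOME/sbin/start-history-server.sh
```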

Building the Spark source in Eclipse: ant compile fails with "Class not found: javac1.8"

![This is the problem I hit while compiling; Eclipse's built-in Ant is version 1.8 and the Java I installed is also 1.8; how do I fix it?](https://img-ask.csdn.net/upload/201502/05/1423100900_933588.png)

Zeppelin configured to run each user's code under their own account reports an error

Running sh paragraphs works, but running spark paragraphs fails:

```
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
    at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
    at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
    at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
    at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:90)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.init(RemoteInterpreter.java:209)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:375)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormType(LazyOpenInterpreter.java:105)
    at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:365)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
    at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:329)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
```
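Connection refused from ClientFactory means the Zeppelin server could not reach the interpreter process it tried to launch; with per-user impersonation that usually means the interpreter failed to start under the target user. A hedged checklist; the sudo rule and settings below are assumptions to adapt to your setup:

```bash
# Look at the interpreter launch log for the real startup error:
tail -n 100 $ZEPPELIN_HOME/logs/zeppelin-interpreter-spark-*.log

# Impersonation needs the zeppelin service user to become the target user without a
# password, e.g. passwordless ssh to localhost as that user, or a sudoers rule such as:
#   zeppelin ALL=(ALL) NOPASSWD: ALL        (tighten to your security policy)

# zeppelin-env.sh can override how the user switch is performed; ZEPPELIN_IMPERSONATE_CMD
# is Zeppelin's hook for this, and the command shown is only one possibility:
export ZEPPELIN_IMPERSONATE_CMD='sudo -H -u ${ZEPPELIN_IMPERSONATE_USER} bash -c '
```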

How to serve Spark through akka-http?

The code is roughly as follows:

```
object SparkServer extends App with SparkServices {
  implicit val system = ActorSystem("spark-akka-http")
  implicit val materializer = ActorMaterializer()
  implicit val executionContext = system.dispatcher
  val bindingFuture = Http().bindAndHandle(route, "192.168.xx.xx", 8001)
}
```

The core of SparkServices looks like this:

```
val sparkConf = new SparkConf().setMaster("local[8]")
val sparkSession = SparkSession.builder
  .appName("SpanMsgServer")
  .config(sparkConf)
  .enableHiveSupport()
  .getOrCreate()
```

When submitting with spark-submit it fails with: com.typesafe.config.ConfigException$Missing: No configuration setting found for key akka.stream. After googling, I added the following to the pom:

```
<transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
  <resource>reference.conf</resource>
</transformer>
```

The error remained. Having no other option, I put the akka.stream configuration from http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/stream-configuration.html into the resources directory, and then got a new error: com.typesafe.config.ConfigException$Missing: No configuration setting found for key debug-logging. How can this be solved? Please advise.

Spark SQL integrated with Hive: error when creating an external table (please help)

Creating an external table with Spark SQL integrated with Hive fails. The DDL is:

```
create external table if not exists bdm.itcast_bdm_order_goods(
user_id string,    -- user ID
order_id string,   -- order ID
order_no string,   -- order number
sku_id bigint,     -- SKU id
sku_name string,   -- SKU name
goods_id bigint    -- goods id
)
partitioned by (dt string)
row format delimited fields terminated by ','
lines terminated by '\n'
location '/business/itcast_bdm_order_goods';
```

It reports the following error:

```
Moved: 'hdfs://hann/business/itcast_bdm_order_goods' to trash at: hdfs://hann/user/root/.Trash/Current
Error in query: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: java.net.UnknownHostException: nhann);
```

spark-sql is started with:

```
spark-sql --master spark://node01:7077 --driver-class-path /export/servers/hive-1.1.0-cdh5.14.0/lib/mysql-connector-java-5.1.38.jar --conf spark.sql.warehouse.dir=hdfs://hann/user/hive/warehouse
```

The hive-site.xml configuration file is:

```
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://node03.hadoop.com:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
  <!--
  <property>
    <name>hive.cli.print.current.db</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>node03.hadoop.com</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://node03.hadoop.com:9083</value>
  </property>
  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>3600</value>
  </property>
  -->
</configuration>
```
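The exception complains about host "nhann" while every URI in the question uses "hann", which suggests a one-character typo (an extra n) in some configuration file that Spark or the metastore reads, most likely fs.defaultFS or a nameservice entry. A hedged way to hunt it down; the directory paths are typical install locations, adjust to yours:

```bash
# Search every config Spark/Hive/Hadoop might read for the misspelled host:
grep -Rn "nhann" \
  $HADOOP_HOME/etc/hadoop \
  $SPARK_HOME/conf \
  /export/servers/hive-1.1.0-cdh5.14.0/conf 2>/dev/null

# Also check which default filesystem the client actually resolves:
hdfs getconf -confKey fs.defaultFS
```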

Spark Streaming and Flume integration problem [urgent, waiting online!!!]

各个版本信息: spark2.0.2 flume1.7 sbt部分依赖 libraryDependencies += "org.apache.spark" % "spark-streaming-flume_2.11" % "2.0.2" _拉模式代码和简单的输出语句_ val flumeStream = FlumeUtils.createPollingStream(ssc,host,port,StorageLevel.MEMORY_ONLY_SER_2) flumeStream.count().map(cnt => "Received " + cnt + " flume events." ).print() 已经在各个节点添加依赖 flume简单配置 # 指定Agent的组件名称 a1.sources = r1 a1.sinks = k1 a1.channels = c1 # 指定Flume source(要监听的路径) a1.sources.r1.type = spooldir a1.sources.r1.spoolDir = /home/hadoop/weixf_kafka/testflume # 指定Flume sink a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink a1.sinks.k1.channel =c1 a1.sinks.k1.hostname=172.28.41.196 a1.sinks.k1.port = 19999 # 指定Flume channel a1.channels.c1.type = memory a1.channels.c1.capacity = 100000 a1.channels.c1.transactionCapacity = 100000 # 绑定source和sink到channel上 a1.sources.r1.channels = c1 a1.sinks.k1.channel = c1 启动flume,再启动SparkStreaming程序发现如下信息(部分) 17/09/15 17:44:53 INFO scheduler.DAGScheduler: Submitting ResultStage 2 (Receiver 0 ParallelCollectionRDD[3] at makeRDD at ReceiverTracker.scala:610), which has no missing parents 17/09/15 17:44:53 INFO scheduler.ReceiverTracker: Receiver 0 started 17/09/15 17:44:53 INFO memory.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 70.6 KB, free 413.8 MB) 17/09/15 17:44:53 INFO memory.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 25.1 KB, free 413.8 MB) 17/09/15 17:44:53 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 172.28.41.193:41571 (size: 25.1 KB, free: 413.9 MB) 17/09/15 17:44:53 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1012 17/09/15 17:44:53 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (Receiver 0 ParallelCollectionRDD[3] at makeRDD at ReceiverTracker.scala:610) 17/09/15 17:44:53 INFO scheduler.TaskSchedulerImpl: Adding task set 2.0 with 1 tasks 17/09/15 17:44:54 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 2.0 (TID 70, 172.28.41.196, partition 0, PROCESS_LOCAL, 6736 bytes) 17/09/15 17:44:54 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 70 on executor id: 0 hostname: 172.28.41.196. 
17/09/15 17:44:54 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 172.28.41.196:33364 (size: 25.1 KB, free: 413.9 MB) 17/09/15 17:44:54 INFO util.RecurringTimer: Started timer for JobGenerator at time 1505468700000 17/09/15 17:44:54 INFO scheduler.JobGenerator: Started JobGenerator at 1505468700000 ms 17/09/15 17:44:54 INFO scheduler.JobScheduler: Started JobScheduler 17/09/15 17:44:54 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@534e58b6{/streaming,null,AVAILABLE} 17/09/15 17:44:54 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1b495d4{/streaming/json,null,AVAILABLE} 17/09/15 17:44:54 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@12fe1f28{/streaming/batch,null,AVAILABLE} 17/09/15 17:44:54 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@26fb4d06{/streaming/batch/json,null,AVAILABLE} 17/09/15 17:44:54 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2d38edfd{/static/streaming,null,AVAILABLE} 17/09/15 17:44:54 INFO streaming.StreamingContext: StreamingContext started 17/09/15 17:44:55 INFO scheduler.ReceiverTracker: Registered receiver for stream 0 from 172.28.41.196:45983 17/09/15 17:45:01 INFO scheduler.JobScheduler: Added jobs for time 1505468700000 ms 17/09/15 17:45:01 INFO scheduler.JobScheduler: Starting job streaming job 1505468700000 ms.0 from job set of time 1505468700000 ms 17/09/15 17:45:01 INFO spark.SparkContext: Starting job: print at FlumeLogPull.scala:44 17/09/15 17:45:01 INFO storage.BlockManagerInfo: Removed broadcast_1_piece0 on 172.28.41.196:33364 in memory (size: 1969.0 B, free: 413.9 MB) 17/09/15 17:45:01 INFO scheduler.DAGScheduler: Registering RDD 7 (union at DStream.scala:605) 17/09/15 17:45:01 INFO scheduler.DAGScheduler: Got job 2 (print at FlumeLogPull.scala:44) with 1 output partitions 17/09/15 17:45:01 INFO scheduler.DAGScheduler: Final stage: ResultStage 4 (print at FlumeLogPull.scala:44) 17/09/15 17:45:01 INFO scheduler.DAGScheduler: Parents of final stage: List(ShuffleMapStage 3) 17/09/15 17:45:01 INFO scheduler.DAGScheduler: Missing parents: List(ShuffleMapStage 3) 17/09/15 17:45:01 INFO scheduler.DAGScheduler: Submitting ShuffleMapStage 3 (UnionRDD[7] at union at DStream.scala:605), which has no missing parents 17/09/15 17:45:01 INFO storage.BlockManagerInfo: Removed broadcast_1_piece0 on 172.28.41.193:41571 in memory (size: 1969.0 B, free: 413.9 MB) 17/09/15 17:45:02 INFO memory.MemoryStore: Block broadcast_3 stored as values in memory (estimated size 3.3 KB, free 413.8 MB) 17/09/15 17:45:02 INFO memory.MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 2.0 KB, free 413.8 MB) 17/09/15 17:45:02 INFO storage.BlockManagerInfo: Added broadcast_3_piece0 in memory on 172.28.41.193:41571 (size: 2.0 KB, free: 413.9 MB) 17/09/15 17:45:02 INFO spark.SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:1012 17/09/15 17:45:02 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 3 (UnionRDD[7] at union at DStream.scala:605) 17/09/15 17:45:02 INFO scheduler.TaskSchedulerImpl: Adding task set 3.0 with 1 tasks 17/09/15 17:45:30 INFO scheduler.JobScheduler: Added jobs for time 1505468730000 ms 17/09/15 17:46:00 INFO scheduler.JobScheduler: Added jobs for time 1505468760000 ms 17/09/15 17:46:30 INFO scheduler.JobScheduler: Added jobs for time 1505468790000 ms 17/09/15 17:47:00 INFO scheduler.JobScheduler: Added jobs for time 1505468820000 ms 17/09/15 17:47:30 INFO 
scheduler.JobScheduler: Added jobs for time 1505468850000 ms 17/09/15 17:48:00 INFO scheduler.JobScheduler: Added jobs for time 1505468880000 ms 17/09/15 17:48:30 INFO scheduler.JobScheduler: Added jobs for time 1505468910000 ms 17/09/15 17:49:00 INFO scheduler.JobScheduler: Added jobs for time 1505468940000 ms 17/09/15 17:49:30 INFO scheduler.JobScheduler: Added jobs for time 1505468970000 ms 17/09/15 17:50:00 INFO scheduler.JobScheduler: Added jobs for time 1505469000000 ms 17/09/15 17:50:30 INFO scheduler.JobScheduler: Added jobs for time 1505469030000 ms 17/09/15 17:51:00 INFO scheduler.JobScheduler: Added jobs for time 1505469060000 ms 17/09/15 17:51:30 INFO scheduler.JobScheduler: Added jobs for time 1505469090000 ms 17/09/15 17:52:00 INFO scheduler.JobScheduler: Added jobs for time 1505469120000 ms 17/09/15 17:52:30 INFO scheduler.JobScheduler: Added jobs for time 1505469150000 ms 17/09/15 17:53:00 INFO scheduler.JobScheduler: Added jobs for time 1505469180000 ms 17/09/15 17:53:30 INFO scheduler.JobScheduler: Added jobs for time 1505469210000 ms 17/09/15 17:54:00 INFO scheduler.JobScheduler: Added jobs for time 1505469240000 ms 17/09/15 17:54:30 INFO scheduler.JobScheduler: Added jobs for time 1505469270000 ms 17/09/15 17:55:00 INFO scheduler.JobScheduler: Added jobs for time 1505469300000 ms 17/09/15 17:55:30 INFO scheduler.JobScheduler: Added jobs for time 1505469330000 ms 17/09/15 17:56:00 INFO scheduler.JobScheduler: Added jobs for time 1505469360000 ms 17/09/15 17:56:30 INFO scheduler.JobScheduler: Added jobs for time 1505469390000 ms 17/09/15 17:57:00 INFO scheduler.JobScheduler: Added jobs for time 1505469420000 ms 17/09/15 17:57:30 INFO scheduler.JobScheduler: Added jobs for time 1505469450000 ms 17/09/15 17:58:00 INFO scheduler.JobScheduler: Added jobs for time 1505469480000 ms 17/09/15 17:58:30 INFO scheduler.JobScheduler: Added jobs for time 1505469510000 ms 17/09/15 17:59:00 INFO scheduler.JobScheduler: Added jobs for time 1505469540000 ms 17/09/15 17:59:30 INFO scheduler.JobScheduler: Added jobs for time 1505469570000 ms 17/09/15 18:00:00 INFO scheduler.JobScheduler: Added jobs for time 1505469600000 ms 17/09/15 18:00:30 INFO scheduler.JobScheduler: Added jobs for time 1505469630000 ms 17/09/15 18:00:59 INFO storage.BlockManagerInfo: Added input-0-1505469659600 in memory on 172.28.41.196:33364 (size: 15.7 KB, free: 413.9 MB) 17/09/15 18:01:00 INFO scheduler.JobScheduler: Added jobs for time 1505469660000 ms 17/09/15 18:01:00 INFO storage.BlockManagerInfo: Added input-0-1505469659800 in memory on 172.28.41.196:33364 (size: 15.3 KB, free: 413.9 MB) 17/09/15 18:01:03 INFO storage.BlockManagerInfo: Added input-0-1505469662800 in memory on 172.28.41.196:33364 (size: 7.3 KB, free: 413.9 MB) 17/09/15 18:01:25 INFO storage.BlockManagerInfo: Added input-0-1505469684800 in memory on 172.28.41.196:33364 (size: 15.7 KB, free: 413.8 MB) 17/09/15 18:01:25 INFO storage.BlockManagerInfo: Added input-0-1505469685000 in memory on 172.28.41.196:33364 (size: 15.3 KB, free: 413.8 MB) 其中没有我想要的输出信息而是一直有类似 17/09/15 17:45:30 INFO scheduler.JobScheduler: Added jobs for time 1505468730000 ms 这样的信息,如果向监控的文件夹下copy文件得到这样的输出信息 17/09/15 18:00:59 INFO storage.BlockManagerInfo: Added input-0-1505469659600 in memory on 172.28.41.196:33364 (size: 15.7 KB, free: 413.9 MB) 想要的效果是输出类似这样的正常结果 ------------------------------------------- Time: 1505468700000 ms ------------------------------------------- Received .. flume events. 实在是找不出来什么原因,求大神解惑,不胜感激
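One detail worth checking with the polling receiver: a receiver permanently occupies one task slot, so the application needs strictly more total cores than receivers, otherwise batches are queued ("Added jobs for time ...") but never processed, which matches the log above. A hedged sketch; the class and jar names are placeholders taken from the FlumeLogPull.scala file name in the log:

```bash
# Local testing: give the driver at least 2 threads (1 for the receiver, 1 for processing):
spark-submit --class FlumeLogPull --master "local[2]" streaming-app.jar

# On a cluster: make sure total executor cores exceed the number of receivers, e.g.
spark-submit --class FlumeLogPull --master spark://172.28.41.193:7077 \
  --total-executor-cores 4 --executor-memory 1g streaming-app.jar
```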
