The Hive option does not show up when connecting with Oracle SQL Developer (bounty: 5C)

I want to use Oracle SQL Developer to connect to Hive (if Navicat can do it, even better).
I followed the method described in the post I was following, but the Hive option never showed up.
These are the jars I added, following that post:
[screenshot]
The one at the bottom is a jar I went and downloaded separately.

[screenshot]
This is where Hive fails to appear for me.

Navicat for Hive would be even better, but I could not get that to work from the articles either.
Thanks.

2 answers

weixin_40187983: My MySQL is installed on Linux and I don't know how to configure this. If it's convenient, please add my QQ 1435874017! Thanks.
Replied almost 2 years ago
xcgh: The article explains it clearly; which step exactly are you unable to configure?
Replied almost 2 years ago
weixin_40187983: That is exactly the part I can't follow.
Replied almost 2 years ago

[screenshot]

The database connection name will not be there right after you configure the driver and restart; it only appears once you fill in the connection details and click the Save button. One thing to note: for the database type, select "Hive" (the spot marked with the red box).
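
If the Hive type still does not appear, it can help to verify the driver jars and HiveServer2 outside of SQL Developer first. Below is a minimal smoke test, assuming HiveServer2 listens on the default port 10000; `hive-host`, the `default` database, and the `hive` user are placeholders for your own values:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal connectivity check, independent of SQL Developer.
public class HiveSmokeTest {
    public static void main(String[] args) throws Exception {
        // Driver class shipped in hive-jdbc*.jar
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hive-host:10000/default", "hive", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

If this runs but SQL Developer still offers no Hive type, the likely culprit is the driver registration under Tools > Preferences > Database > Third Party JDBC Drivers: every jar, including the Hive JDBC driver's dependencies, has to be added there before restarting.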

Other related questions
SQL Developer does not react after the Hive driver is added

I am trying to connect to Hive from SQL Developer on a Mac. After loading the Hive jars and restarting, the New Connection dialog still shows no Hive option. Any ideas? PS: macOS, JDK 1.8, Xcode installed.

Importing data from Oracle into Hive with sqoop

Hoping someone can help, many thanks! 1. The error message: ![screenshot](https://img-ask.csdn.net/upload/201903/28/1553754743_757251.jpg) 2. My DDL, test data, and sqoop command: ![screenshot](https://img-ask.csdn.net/upload/201903/28/1553754849_51874.jpg)

JDBC connection to Hive times out

hiveserver2 is started and its logs look normal, but connecting from Kettle, or from my own Java code over JDBC, fails with the error below:

```
java.sql.SQLException: Could not open connection to jdbc:hive2://192.168.162.129:10000/hivedb: java.net.ConnectException: Connection timed out: connect
    at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:206)
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:178)
    at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
    at java.sql.DriverManager.getConnection(DriverManager.java:582)
    at java.sql.DriverManager.getConnection(DriverManager.java:185)
    at com.ljq.hive.HiveJdbcClient.run(HiveJdbcClient.java:21)
    at com.ljq.hive.HiveJdbcClient.main(HiveJdbcClient.java:46)
Caused by: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection timed out: connect
    at org.apache.thrift.transport.TSocket.open(TSocket.java:185)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:248)
    at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
    at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:203)
    ... 6 more
Caused by: java.net.ConnectException: Connection timed out: connect
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
    at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
    at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
    at java.net.Socket.connect(Socket.java:529)
    at org.apache.thrift.transport.TSocket.open(TSocket.java:180)
    ... 9 more
error
```

I really have no idea what to try next.
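
A timeout at this level means the TCP connection itself never reaches HiveServer2 (typically a firewall, or HiveServer2 bound only to 127.0.0.1 via hive.server2.thrift.bind.host) rather than anything Hive-specific. A small sketch to separate the two cases, reusing the address from the error above:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Raw TCP reachability check for the HiveServer2 port, with a 5s timeout
// instead of the long OS default that makes JDBC appear to hang.
public class PortCheck {
    public static void main(String[] args) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("192.168.162.129", 10000), 5000);
            System.out.println("Port reachable; the problem is above TCP.");
        } catch (IOException e) {
            System.out.println("Port unreachable: " + e.getMessage());
        }
    }
}
```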

Garbled Chinese when importing an Oracle table into Hive with sqoop

A question for the experts: after importing an Oracle table into Hive, the Chinese text is garbled. The Oracle database character set is US7ASCII. Has anyone hit this kind of problem, or can suggest a fix? Thanks. Note: I have already tried convert(nsrdzdah,'utf8','US7ASCII'), but the result is still garbled; modifying the Hive JDBC jar was also suggested, but that felt unreliable, so I did not try it.

Hive errors on `show databases;` (solved)

```
hive> show databases;
FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
hive>
```
Everything I could find on Baidu suggests the fix below:
```
# edit hive-site.xml:
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
</property>

# then drop the old metastore database in MySQL:
drop database hive_metastore;

# and re-initialize the metastore:
schematool -dbType mysql -initSchema
```
After doing all of the above, the same error still appeared. How can this be solved?

Solved: after a lot of struggling and trying many of the fixes found online without success, repeated changes and tests finally led me to the real problem. It was simply a Java version mismatch with Hive: I had installed the newest JDK (jdk-10.0.2), assuming a newer version would be backward compatible, and that assumption turned out to be wrong. After switching to JDK 8 it worked immediately. I should have read the official documentation properly; this cost me two days.

Scheduled sqoop imports into Hive from SQL Server

I need to import data from SQL Server into Hive with sqoop on a schedule: define a sqoop job and drive it from crontab. Does anyone have a good working example?
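
Not a tested recipe, just a sketch of the usual pattern: save the import as a named sqoop job once, then let cron execute it. The connection string, table, paths, and schedule below are all placeholders:

```sh
# Create the saved job once; quoting matters for the SQL Server JDBC URL.
sqoop job --create orders_to_hive -- import \
  --connect "jdbc:sqlserver://sqlserver-host:1433;databaseName=sales" \
  --username loader --password-file /user/loader/.sqoop.pw \
  --table ORDERS -m 1 \
  --hive-import --hive-table default.orders --hive-overwrite

# crontab entry: run the saved job every night at 02:00.
0 2 * * * /usr/local/sqoop/bin/sqoop job --exec orders_to_hive >> /var/log/sqoop_orders.log 2>&1
```

For incremental loads, a saved sqoop job can also carry `--incremental append` with a `--check-column`, so the last imported value is remembered between cron runs.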

Problems connecting Zeppelin to Hive and Spark

1. When connecting to Hive: Zeppelin goes through hiveserver2, and because there is a lot of metadata, it feels like Zeppelin walks all of it on every run; every statement executes with a delay of over an hour. 2. Connecting to Spark SQL fails with: `java.lang.NoSuchFieldError: HIVE_STATS_JDBC_TIMEOUT at org.apache.spark.sql.hive.HiveUtils$.hiveClientConfig`

How do I write multi-line comments in Hive SQL?

My query is long overall and I need to document the logic in the middle of it. MySQL-style block comments (/* comment */) are not supported by Hive. How do people write multi-line comments when running Hive SQL? Many thanks!
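
For what it's worth, classic Hive only defines the `--` single-line comment, so a multi-line explanation is normally written as a stack of `--` lines; block comments are accepted by some newer clients but are not portable. A sketch with made-up table names:

```sql
-- The next join enriches orders with the customer dimension.
-- Keep the dt partition filter in the WHERE clause, otherwise
-- the query scans every partition of the orders table.
SELECT o.ord_id, c.cust_name
FROM orders o
JOIN customers c ON o.cust_id = c.cust_id
WHERE o.dt = '20180101';
```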

Is there a code checker for Hive SQL?

Is there a tool that can batch-check Hive SQL scripts for execution efficiency and offer suggestions?
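
I am not aware of a widely used batch efficiency linter for Hive SQL; the closest built-in aid is `EXPLAIN`, which at least exposes the plan (join order, scanned partitions) for each statement so a script can be screened for obvious problems. For example, against a hypothetical partitioned table:

```sql
-- EXTENDED adds partition and path details to the plan output.
EXPLAIN EXTENDED
SELECT cust_id, count(*) AS orders
FROM t_order
WHERE dt BETWEEN '20180101' AND '20180131'
GROUP BY cust_id;
```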

Connecting to Hive from a Java web application

I created a Java project and call a JDBC-based Hive query method directly from main(); that works, and queries run fine. But calling the same query method from a servlet throws java.lang.ClassNotFoundException: org.apache.hive.jdbc.HiveDriver. All the Hive jars are there and added to the project through the build path. I really don't get it: the code lives in one project, and the method works from main() but fails from the servlet. Has anyone seen something similar? What is the essential difference between calling through a servlet and calling from main()?
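
One plausible explanation (not a confirmed diagnosis): running main() uses the IDE build path, but a deployed webapp only sees jars in its own WEB-INF/lib or the container's lib directory, so the Hive JDBC jar has to be packaged into the war. A sketch of the servlet side; host, database, and credentials are placeholders:

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Works only if hive-jdbc*.jar and its dependencies sit in WEB-INF/lib
// of the deployed war; the IDE build path is invisible to the webapp
// classloader at runtime.
public class HiveQueryServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://hive-host:10000/default", "hive", "")) {
                resp.getWriter().println("connected: " + !conn.isClosed());
            }
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }
}
```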

sqoop errors when importing data from Oracle into Hive

Importing a table into Hive fails with the error below. Please help:

```
[root@amorsay3 bin]# ./sqoop import --hive-import --connect jdbc:oracle:thin:@192.168.13.168:1521:orcl --username HADOOPLEARN --password zhao --table EMP -m 1 --hive-table emp1
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
Warning: $HADOOP_HOME is deprecated.
15/08/11 23:17:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
15/08/11 23:17:02 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/08/11 23:17:02 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
15/08/11 23:17:02 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
15/08/11 23:17:02 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
15/08/11 23:17:02 INFO manager.SqlManager: Using default fetchSize of 1000
15/08/11 23:17:02 INFO tool.CodeGenTool: Beginning code generation
15/08/11 23:17:03 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:03 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM EMP t WHERE 1=0
15/08/11 23:17:03 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoophive/hadoop-1.2.1
Note: /tmp/sqoop-root/compile/efda22b79cedc05841de35698062fbbc/EMP.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/08/11 23:17:04 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/efda22b79cedc05841de35698062fbbc/EMP.jar
15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:04 INFO mapreduce.ImportJobBase: Beginning import of EMP
15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:06 INFO db.DBInputFormat: Using read commited transaction isolation
15/08/11 23:17:06 INFO mapred.JobClient: Cleaning up the staging area hdfs://192.168.14.168:9000/hadoop/mapred/staging/root/.staging/job_201508111912_0003
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class org.apache.hadoop.mapreduce.JobContext, but interface was expected
    at org.apache.sqoop.config.ConfigurationHelper.getJobNumMaps(ConfigurationHelper.java:65)
    at com.cloudera.sqoop.config.ConfigurationHelper.getJobNumMaps(ConfigurationHelper.java:36)
    at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:125)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
    at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196)
    at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266)
    at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
    at org.apache.sqoop.manager.OracleManager.importTable(OracleManager.java:444)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
```

How can a Spring MVC project connect to Hive for paged queries?

I just started with Hive and don't know how to connect to it from a project and page through a table's rows. Detailed code or a small example would be ideal. Urgent, thanks!
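
On the query side, older Hive releases have no OFFSET, so pagination is usually emulated with a window function; the page is then fetched over plain JDBC like any other query. A hedged sketch; the table and sort column are placeholders:

```sql
-- Page 3 with a page size of 20: rows 41 to 60, newest first.
SELECT ord_id, ord_amount, ord_time
FROM (
  SELECT ord_id, ord_amount, ord_time,
         row_number() OVER (ORDER BY ord_time DESC) AS rn
  FROM t_order
) t
WHERE t.rn > 40 AND t.rn <= 60;
```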

Kettle connects to Hive, but queries fail

The Hive SQL statement is `select * from shuju`. Versions: Kettle 6.1 with Hive 2.0.0 or Hive 1.2.1; neither works. The error message:

```
org.pentaho.di.core.exception.KettleDatabaseException:
An error occurred executing SQL:
select * from shuju
Error determining value metadata from SQL resultset metadata
Method not supported

    at org.pentaho.di.core.database.Database.openQuery(Database.java:1718)
    at org.pentaho.di.core.database.Database.getRows(Database.java:3398)
    at org.pentaho.di.core.database.Database.getRows(Database.java:3376)
    at org.pentaho.di.core.database.Database.getRows(Database.java:3361)
    at org.pentaho.di.ui.core.database.dialog.SQLEditor.exec(SQLEditor.java:372)
    at org.pentaho.di.ui.core.database.dialog.SQLEditor.access$200(SQLEditor.java:81)
    at org.pentaho.di.ui.core.database.dialog.SQLEditor$7.handleEvent(SQLEditor.java:242)
    at org.eclipse.swt.widgets.EventTable.sendEvent(Unknown Source)
    at org.eclipse.swt.widgets.Widget.sendEvent(Unknown Source)
    at org.eclipse.swt.widgets.Display.runDeferredEvents(Unknown Source)
    at org.eclipse.swt.widgets.Display.readAndDispatch(Unknown Source)
    at org.pentaho.di.ui.spoon.Spoon.readAndDispatch(Spoon.java:1347)
    at org.pentaho.di.ui.spoon.Spoon.waitForDispose(Spoon.java:7989)
    at org.pentaho.di.ui.spoon.Spoon.start(Spoon.java:9269)
    at org.pentaho.di.ui.spoon.Spoon.main(Spoon.java:662)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.pentaho.commons.launcher.Launcher.main(Launcher.java:92)
Caused by: org.pentaho.di.core.exception.KettleDatabaseException:
Error determining value metadata from SQL resultset metadata
Method not supported

    at org.pentaho.di.core.row.value.ValueMetaBase.getValueFromSQLType(ValueMetaBase.java:4588)
    at org.pentaho.di.core.database.Database.getValueFromSQLType(Database.java:2267)
    at org.pentaho.di.core.database.Database.getRowInfo(Database.java:2229)
    at org.pentaho.di.core.database.Database.openQuery(Database.java:1714)
    ... 19 more
Caused by: java.sql.SQLException: Method not supported
    at org.apache.hive.jdbc.HiveResultSetMetaData.isSigned(HiveResultSetMetaData.java:143)
    at org.pentaho.di.core.row.value.ValueMetaBase.getValueFromSQLType(ValueMetaBase.java:4355)
    ... 22 more
```

Kettle connects to hive2, but fetching table information fails

```
java.lang.reflect.InvocationTargetException: Problem encountered getting information from the database:
org.pentaho.di.core.exception.KettleDatabaseException:
Unable to retrieve database information because of an error
Unable to get list of procedures from database meta-data: Unable to get list of rows from ResultSet : Error determining value metadata from SQL resultset metadata
Method not supported

    at org.pentaho.di.ui.core.database.dialog.GetDatabaseInfoProgressDialog$1.run(GetDatabaseInfoProgressDialog.java:67)
    at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:113)
Caused by: org.pentaho.di.core.exception.KettleDatabaseException:
Unable to retrieve database information because of an error
Unable to get list of procedures from database meta-data: Unable to get list of rows from ResultSet : Error determining value metadata from SQL resultset metadata
Method not supported
```

Versions: Kettle 5.4, Hive 1.2.1, Hadoop 2.7.1.

How do I get the two timestamps with Hive SQL? (see description)

A sample of the data: ![screenshot](https://img-ask.csdn.net/upload/201902/26/1551184017_321906.jpg) I need the interval marked by the red boxes: ![screenshot](https://img-ask.csdn.net/upload/201902/26/1551184567_971001.png)

```
id   time                 status
102  2019-02-24 17:18:18  1
102  2019-02-24 17:23:19  1
102  2019-02-24 17:28:19  1
102  2019-02-24 17:33:20  1
102  2019-02-24 17:38:20  1
102  2019-02-24 17:43:21  0
102  2019-02-24 17:48:21  0
102  2019-02-24 17:53:22  0
102  2019-02-24 17:58:22  1
102  2019-02-24 18:03:23  1
102  2019-02-24 18:08:23  1
102  2019-02-24 18:13:24  1
102  2019-02-24 18:18:24  0
102  2019-02-24 18:23:24  0
102  2019-02-24 18:28:25  0
102  2019-02-24 18:33:25  0
102  2019-02-24 18:38:26  0
102  2019-02-24 18:43:26  1
102  2019-02-24 18:48:27  1
```
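
Without seeing the screenshots, the usual Hive approach to this shape of problem is a window function: lag() exposes the previous row's status and timestamp per id, which marks every row where the status flips, and pairing consecutive flips then bounds each run. A sketch, assuming the table is called status_log with the columns shown above:

```sql
-- Keep only the rows where the status changes; each such row carries the
-- timestamp of the previous state, so consecutive flip rows bound a run.
SELECT id, `time`, status, prev_time, prev_status
FROM (
  SELECT id, `time`, status,
         lag(status) OVER (PARTITION BY id ORDER BY `time`) AS prev_status,
         lag(`time`)  OVER (PARTITION BY id ORDER BY `time`) AS prev_time
  FROM status_log
) t
WHERE status <> prev_status;
```

The interval itself would then be `unix_timestamp(end) - unix_timestamp(start)` between one flip row and the next.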

[Hive] SQL question: how to optimize a 4-table join? The statement is too long

My database is Hive, but in practice everything is done with SQL, so I want to know how to optimize the SQL, because it really is too long and my lead rejected it. The job needs three data tables and one dictionary table. The SQL is about 60 lines, so posting it would probably not help; I mainly want ideas. It is a reporting feature over four tables A, B, C, D: A, B, C hold data and D is a dictionary. My current order is: join A with B, join the result (AB) with C, then join (ABC) with D. Can this be done without temporary tables? The joins also convert formats and add fixed columns along the way. Much appreciated. (My lead also forbids WITH ... AS, which I don't understand.)
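
For what it's worth, the same shape can be written as nested subqueries, which needs neither temporary tables nor WITH ... AS; Hive plans it much the same way. A skeleton with invented column names:

```sql
-- Join A with B first, then the result with C, then the dictionary D.
SELECT ab.k, ab.v1, c.v2, d.label
FROM (
  SELECT a.k, a.v1, b.v0
  FROM a
  JOIN b ON a.k = b.k
) ab
JOIN c ON ab.k = c.k
JOIN d ON c.dict_code = d.code;
```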

Errors using sqoop to move data from Oracle into Hive

![screenshot](https://img-ask.csdn.net/upload/201504/16/1429180711_592161.png)

```
bash-4.1$ sqoop import --connect jdbc:oracle:thin:@192.168.1.169:1521:orcl --username HADOOP --password hadoop2015 --table CALC_UPAY_DATE_HADOOP_HDFS --split-by UPAYID --hive-import
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
find: paths must precede expression: ant-eclipse-1.0-jvm1.2.jar
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
15/04/16 03:28:13 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.2
15/04/16 03:28:13 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/04/16 03:28:13 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
15/04/16 03:28:13 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
15/04/16 03:28:13 INFO manager.SqlManager: Using default fetchSize of 1000
15/04/16 03:28:13 INFO tool.CodeGenTool: Beginning code generation
15/04/16 03:28:13 INFO manager.OracleManager: Time zone has been set to GMT
15/04/16 03:28:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CALC_UPAY_DATE_HADOOP_HDFS t WHERE 1=0
15/04/16 03:28:14 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-hdfs/compile/e9286bf0e7d796ba396d3155210012b0/CALC_UPAY_DATE_HADOOP_HDFS.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/04/16 03:28:15 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/e9286bf0e7d796ba396d3155210012b0/CALC_UPAY_DATE_HADOOP_HDFS.jar
15/04/16 03:28:15 INFO mapreduce.ImportJobBase: Beginning import of CALC_UPAY_DATE_HADOOP_HDFS
15/04/16 03:28:15 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/04/16 03:28:15 INFO manager.OracleManager: Time zone has been set to GMT
15/04/16 03:28:16 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/04/16 03:28:16 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.1.201:8032
15/04/16 03:28:18 INFO db.DBInputFormat: Using read commited transaction isolation
15/04/16 03:28:18 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(UPAYID), MAX(UPAYID) FROM CALC_UPAY_DATE_HADOOP_HDFS
15/04/16 03:28:19 INFO mapreduce.JobSubmitter: number of splits:4
15/04/16 03:28:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1429145594985_0020
15/04/16 03:28:20 INFO impl.YarnClientImpl: Submitted application application_1429145594985_0020
15/04/16 03:28:20 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1429145594985_0020/
15/04/16 03:28:20 INFO mapreduce.Job: Running job: job_1429145594985_0020
15/04/16 03:28:31 INFO mapreduce.Job: Job job_1429145594985_0020 running in uber mode : false
15/04/16 03:28:31 INFO mapreduce.Job: map 0% reduce 0%
15/04/16 03:28:59 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000000_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
15/04/16 03:29:00 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000002_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
15/04/16 03:29:01 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000001_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
```

Exporting data from Hive back into Oracle with sqoop works fine for me.

Accessing Hive from Spark 2 in a Kerberos environment fails

```
2019-05-13 21:27:07,394 [main] WARN org.apache.hadoop.hive.metastore.MetaStoreDirectSql - Self-test query [select "DB_ID" from "DBS"] failed; direct SQL is disabled
javax.jdo.JDODataStoreException: Error executing SQL query "select "DB_ID" from "DBS"".
    at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
    at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:388)
    at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:213)
    at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.runTestQuery(MetaStoreDirectSql.java:243)
    at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:146)
    at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:406)
    at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:338)
    at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:299)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:612)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:578)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:572)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:639)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:416)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6869)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:248)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1700)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3581)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3633)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3613)
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3867)
    at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:247)
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:230)
    at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:387)
    at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:331)
    at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:311)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:287)
    at org.apache.hadoop.hive.ql.session.SessionState.setAuthorizerV2Config(SessionState.java:895)
    at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:859)
    at org.apache.hadoop.hive.ql.session.SessionState.getAuthenticator(SessionState.java:1521)
    at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:204)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:268)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:360)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:264)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:68)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:67)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:197)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:197)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:197)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
    at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:196)
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:106)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:94)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:290)
    at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1059)
    at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:137)
    at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:136)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:136)
    at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:133)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
    at com.bigdata_example.oozie.SparkDemo.main(SparkDemo.java:23)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:181)
    at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:93)
    at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:101)
    at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:60)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.oozie.action.hadoop.LauncherAM.runActionMain(LauncherAM.java:410)
    at org.apache.oozie.action.hadoop.LauncherAM.access$300(LauncherAM.java:55)
    at org.apache.oozie.action.hadoop.LauncherAM$2.run(LauncherAM.java:223)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
    at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:217)
    at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
    at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141)
NestedThrowablesStackTrace:
java.sql.SQLSyntaxErrorException: Table/View 'DBS' does not exist.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
    at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
    at com.jolbox.bonecp.ConnectionHandle.prepareStatement(ConnectionHandle.java:1193)
    at org.datanucleus.store.rdbms.SQLController.getStatementForQuery(SQLController.java:345)
    at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getPreparedStatementForQuery(RDBMSQueryUtils.java:211)
    at org.datanucleus.store.rdbms.query.SQLQuery.performExecute(SQLQuery.java:633)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1844)
    at org.datanucleus.store.rdbms.query.SQLQuery.executeWithArray(SQLQuery.java:807)
    at org.datanucleus.store.query.Query.execute(Query.java:1715)
    at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:371)
    at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:213)
    at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.runTestQuery(MetaStoreDirectSql.java:243)
    at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:146)
    at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:406)
    at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:338)
    at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:299)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:612)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:578)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:572)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:639)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:416)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6869)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:248)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1700)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3581)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3633)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3613)
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3867)
    at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:247)
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:230)
    at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:387)
    at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:331)
    at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:311)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:287)
    at org.apache.hadoop.hive.ql.session.SessionState.setAuthorizerV2Config(SessionState.java:895)
    at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:859)
    at org.apache.hadoop.hive.ql.session.SessionState.getAuthenticator(SessionState.java:1521)
    at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:204)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:268)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:360)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:264)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:68)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:67)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:197)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:197)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:197)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
    at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:196)
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:106)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:94)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:290)
    at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1059)
    at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:137)
    at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:136)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:136)
    at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:133)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
    at com.bigdata_example.oozie.SparkDemo.main(SparkDemo.java:23)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:181)
    at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:93)
    at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:101)
    at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:60)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.oozie.action.hadoop.LauncherAM.runActionMain(LauncherAM.java:410)
    at org.apache.oozie.action.hadoop.LauncherAM.access$300(LauncherAM.java:55)
    at org.apache.oozie.action.hadoop.LauncherAM$2.run(LauncherAM.java:223)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
    at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:217)
    at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
    at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141)
Caused by: ERROR 42X05: Table/View 'DBS' does not exist.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at org.apache.derby.impl.sql.compile.FromBaseTable.bindTableDescriptor(Unknown Source)
    at org.apache.derby.impl.sql.compile.FromBaseTable.bindNonVTITables(Unknown Source)
    at org.apache.derby.impl.sql.compile.FromList.bindTables(Unknown Source)
    at org.apache.derby.impl.sql.compile.SelectNode.bindNonVTITables(Unknown Source)
    at org.apache.derby.impl.sql.compile.DMLStatementNode.bindTables(Unknown Source)
    at org.apache.derby.impl.sql.compile.DMLStatementNode.bind(Unknown Source)
    at org.apache.derby.impl.sql.compile.CursorNode.bindStatement(Unknown Source)
    at org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source)
    at org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source)
    at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(Unknown Source)
    ... 113 more
```

Without Kerberos authentication it reported that the database could not be found; I guessed the permissions were insufficient, so I added Kerberos, and now it reports java.lang.reflect.InvocationTargetException with Caused by: java.lang.NullPointerException instead.

Hive SQL statistics; please explain in detail, I need to understand this, thanks

Using Hive SQL, write statements for the statistics tasks below.

```sql
CREATE TABLE `t_order`(
  `ord_id` bigint,      -- order id
  `ord_amount` bigint,  -- order amount
  `cust_id` bigint,     -- customer id
  `ord_time` string)    -- order time, e.g. 2018-01-01 00:00:00
PARTITIONED BY (
  `dt` string);         -- date partition, e.g. 20180101
```

1. Monthly repurchase rate, defined as: among users with at least one order last month, the share that also have an order this month.
2. Assuming the table holds a transaction amount for every day of January through March 2018, compute, for each user and each day, the month-to-date cumulative transaction amount.
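
A hedged sketch of both tasks, using window functions and a self-join; month arithmetic is simplified by comparing yyyymm prefixes of the dt partition, and the concrete month pair would need to be parameterized:

```sql
-- Task 1: repurchase rate for one month pair, e.g. 201802 relative to
-- 201801: users active in 201801 who are also active in 201802, divided
-- by all users active in 201801.
SELECT count(DISTINCT cur.cust_id) / count(DISTINCT prev.cust_id) AS repurchase_rate
FROM (SELECT DISTINCT cust_id FROM t_order WHERE substr(dt, 1, 6) = '201801') prev
LEFT JOIN (SELECT DISTINCT cust_id FROM t_order WHERE substr(dt, 1, 6) = '201802') cur
  ON prev.cust_id = cur.cust_id;

-- Task 2: per user and per day, the month-to-date cumulative amount.
SELECT cust_id,
       dt,
       sum(day_amount) OVER (
         PARTITION BY cust_id, substr(dt, 1, 6)  -- one window per user-month
         ORDER BY dt
         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
       ) AS mtd_amount
FROM (
  SELECT cust_id, dt, sum(ord_amount) AS day_amount
  FROM t_order
  WHERE dt BETWEEN '20180101' AND '20180331'
  GROUP BY cust_id, dt
) d;
```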

