Connecting to a non-default Hive database over JDBC

Connection con = DriverManager.getConnection("jdbc:hive2://10.0.31.89:10000/wsx", "root", "");

Specifying the database in the URL has no effect; I can only work in the default database. How do I switch databases?

1 answer

First check whether the database you specified actually exists.

qq_35963185: It does exist; I can work with it from the command line.
Replied over 2 years ago
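A minimal sketch of the two usual ways to end up in the right database, reusing the host and credentials from the question (the table name is a placeholder): issue an explicit USE after connecting, or qualify table names as db.table. If the database segment of the URL itself seems to be ignored, it is worth checking the hive-jdbc/HiveServer2 version, since some older releases dropped the URL's database and always opened sessions in default.

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveDbSwitch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection con = DriverManager.getConnection(
                     "jdbc:hive2://10.0.31.89:10000/wsx", "root", "");
             Statement stmt = con.createStatement()) {
            stmt.execute("use wsx");                                   // explicit switch, independent of the URL
            // qualifying the table name works even without USE; some_table is a placeholder
            try (ResultSet rs = stmt.executeQuery("select * from wsx.some_table")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```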
Other related questions
No data in the ResultSet when querying Hive via JDBC; row is null
The code is as follows:
```
public class ExtractJob {
    public static void main(String[] args) {
        String driverName = "org.apache.hive.jdbc.HiveDriver";
        String url = "jdbc:hive2://***.***.***.***:10000/default";
        Connection conn = null;
        Statement state = null;
        ResultSet rs = null;
        try {
            Class.forName(driverName);
            conn = DriverManager.getConnection(url, "hive", "hive");
            state = conn.createStatement();
            state.execute("use test");
            rs = state.executeQuery("select * from test1");
            int columnCount = rs.getMetaData().getColumnCount();
            String str = "";
            while (rs.next()) {
                for (int i = 0; i < columnCount; i++) {
                    str += rs.getString(i);
                }
                System.out.println(str);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } finally {
            try {
                rs.close();
                state.close();
                conn.close();
            } catch (SQLException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            } finally {
                rs = null;
                state = null;
                conn = null;
            }
        }
    }
}
```
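A hedged sketch of the usual culprit in the read loop (host and credentials kept from the question, which masks the address): JDBC result-set columns are 1-based, so `rs.getString(0)` fails as soon as a row does come back. Counting the rows first also separates "the query returns nothing" from "reading the row fails".

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExtractJobCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://***.***.***.***:10000/default", "hive", "hive");
             Statement state = conn.createStatement()) {
            state.execute("use test");
            try (ResultSet count = state.executeQuery("select count(*) from test1")) {
                if (count.next()) {
                    // 0 here means the table really is empty, not that reading failed
                    System.out.println("rows in test.test1: " + count.getLong(1));
                }
            }
            try (ResultSet rs = state.executeQuery("select * from test1")) {
                int columnCount = rs.getMetaData().getColumnCount();
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= columnCount; i++) {           // columns start at 1, not 0
                        row.append(rs.getString(i)).append('\t');
                    }
                    System.out.println(row);
                }
            }
        }
    }
}
```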
Hive JDBC connection exception
Code:
```
String driverName = "org.apache.hive.jdbc.HiveDriver";
String url = "jdbc:hive://192.168.1.108:10000/default";
String user = "";
String password = "";
String sql = "";
ResultSet res = null;
Class.forName(driverName);
Connection con = DriverManager.getConnection(url, user, password);
```
Error:
```
Exception in thread "main" java.sql.SQLException: No suitable driver found for jdbc:hive://192.168.1.108:10000/default
    at java.sql.DriverManager.getConnection(Unknown Source)
    at java.sql.DriverManager.getConnection(Unknown Source)
```
All the required JARs are on the classpath, yet it still throws this exception... please help.
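A hedged sketch of the likely fix: org.apache.hive.jdbc.HiveDriver is the HiveServer2 driver and only registers the jdbc:hive2:// scheme, so DriverManager finds no driver for a jdbc:hive:// URL; the old jdbc:hive:// scheme belongs to the HiveServer1 driver org.apache.hadoop.hive.jdbc.HiveDriver. With a HiveServer2 listening on port 10000, changing the URL scheme should clear the error.

```
String driverName = "org.apache.hive.jdbc.HiveDriver";
String url = "jdbc:hive2://192.168.1.108:10000/default";   // hive2, not hive

Class.forName(driverName);
Connection con = DriverManager.getConnection(url, "", "");
```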
Spark reading a Hive table via JDBC throws an error (running in Zeppelin)
## Code:
```
import org.apache.spark.sql.hive.HiveContext
val pro = new java.util.Properties()
pro.setProperty("user", "****")
pro.setProperty("password", "*****")
val driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";
Class.forName(driverName);
val hiveContext = new HiveContext(sc)
val hivetable = hiveContext.read.jdbc("jdbc:hive://*****/default", "*****", pro);
```
## Error:
```
import org.apache.spark.sql.hive.HiveContext
pro: java.util.Properties = {}
res15: Object = null
res16: Object = null
driverName: String = org.apache.hadoop.hive.jdbc.HiveDriver
res17: Class[_] = class org.apache.hadoop.hive.jdbc.HiveDriver
warning: there was one deprecation warning; re-run with -deprecation for details
hiveContext: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@14f9cc13
java.sql.SQLException: Method not supported
  at org.apache.hadoop.hive.jdbc.HiveResultSetMetaData.isSigned(Unknown Source)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.getSchema(JdbcUtils.scala:232)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:64)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:113)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:45)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
  at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:166)
  ... 46 elided
```
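A hedged sketch of a workaround (Spark 2.x with Hive support assumed; the table name is a placeholder): the "Method not supported" comes from Spark's JDBC data source calling ResultSetMetaData.isSigned(), which the old HiveServer1 JDBC driver does not implement. Reading the table through the Hive catalog instead of through JDBC avoids that driver entirely; in Zeppelin's spark interpreter the same call works on the built-in spark/sqlContext.

```
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ReadHiveTable {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("read-hive-table")
                .enableHiveSupport()                        // needs a Spark build with Hive support
                .getOrCreate();
        Dataset<Row> df = spark.table("default.my_table");  // my_table is a placeholder
        df.show();
        spark.stop();
    }
}
```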
Hive Beeline connection: User: root is not allowed to impersonate root
Connecting to Hive with Beeline throws a permissions error. I cannot connect, and none of the posts I have read solve the problem.
```
beeline> !connect jdbc:hive2://devcrm:10000/default
Connecting to jdbc:hive2://devcrm:10000/default
Enter username for jdbc:hive2://devcrm:10000/default: root
Enter password for jdbc:hive2://devcrm:10000/default: ****
19/04/22 17:25:31 [main]: WARN jdbc.HiveConnection: Failed to connect to devcrm:10000
Error: Could not open client transport with JDBC Uri: jdbc:hive2://devcrm:10000/default: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate root (state=08S01,code=0)
```
Hive's hive-site.xml configuration file:
```
<property>
  <name>hive.server2.authentication</name>
  <value>NONE</value>
  <description>
    Expects one of [nosasl, none, ldap, kerberos, pam, custom].
    Client authentication types.
      NONE: no authentication check
      LDAP: LDAP/AD based authentication
      KERBEROS: Kerberos/GSSAPI authentication
      CUSTOM: Custom authentication provider
              (Use with property hive.server2.custom.authentication.class)
      PAM: Pluggable authentication module
      NOSASL: Raw transport
  </description>
</property>
<property>
  <name>hive.server2.thrift.client.user</name>
  <value>root</value>
  <description>Username to use against thrift client</description>
</property>
<property>
  <name>hive.server2.thrift.client.password</name>
  <value>root</value>
  <description>Password to use against thrift client</description>
</property>
```
Hadoop's core-site.xml configuration:
```
<configuration>
  <!-- namenode address -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.11.207:9000</value>
  </property>
  <!-- directory for files generated while using hadoop -->
  <property>
    <name>hadoop.tmp.dir</name>
    <!--<value>file:/usr/local/kafka/hadoop-2.7.6/tmp</value>-->
    <value>file:/home/hadoop/temp</value>
  </property>
  <!-- maximum interval for checkpoint backup logs -->
  <!--
  <name>fs.checkpoint.period</name>
  <value>3600</value>
  -->
  <!-- hadoop proxy-user settings -->
  <property>
    <!-- groups the proxy user may impersonate -->
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
  <property>
    <!-- hosts from which the proxy user may access the hdfs cluster -->
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
</configuration>
```
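A hedged note on the usual fixes: the proxy-user entries above only take effect once the HDFS/YARN daemons (and HiveServer2) are restarted with the updated core-site.xml. If impersonation is not needed at all, another common workaround is to have HiveServer2 run queries as the server user instead of the connecting user:

```
<!-- hive-site.xml: stop HiveServer2 from impersonating the Beeline user -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
</property>
```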
Error when starting Hive on Linux
$beeline -u jdbc:hive2://192.168.141.142:10000 Connecting to jdbc:hive2://192.168.141.142:10000 17/07/11 23:37:38 INFO jdbc.Utils: Supplied authorities: 192.168.141.142:10000 17/07/11 23:37:38 INFO jdbc.Utils: Resolved authority: 192.168.141.142:10000 17/07/11 23:37:38 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://192.168.141.142:10000 17/07/11 23:37:39 ERROR jdbc.HiveConnection: Error opening session org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default}) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71) at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:156) at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:143) at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:583) at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:192) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) at java.sql.DriverManager.getConnection(DriverManager.java:571) at java.sql.DriverManager.getConnection(DriverManager.java:187) at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:142) at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:207) at org.apache.hive.beeline.Commands.connect(Commands.java:1149) at org.apache.hive.beeline.Commands.connect(Commands.java:1070) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52) at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:970) at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:707) at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:757) at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:484) at org.apache.hive.beeline.BeeLine.main(BeeLine.java:467) Error: Could not establish connection to jdbc:hive2://192.168.141.142:10000: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default}) (state=08S01,code=0) Beeline version 1.6.2 by Apache Hive 0: jdbc:hive2://192.168.141.142:10000 (closed)>
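A hedged note on this one: "Required field 'client_protocol' is unset" is the classic symptom of the JDBC/Beeline client speaking a newer Thrift protocol than the HiveServer2 it connects to; the banner shows Beeline 1.6.2, i.e. the copy bundled with Spark. Using the beeline that ships with the Hive installation (or a hive-jdbc JAR matching the server version) keeps the protocol versions in step; the path below is an assumption.

```
# use the Hive distribution's own beeline rather than Spark's copy
$HIVE_HOME/bin/beeline -u jdbc:hive2://192.168.141.142:10000
```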
Hive 0.13.1 metastore database won't switch to MySQL
I upgraded hive-0.12 to 0.13.1. First I ran source upgrade-0.12.0-to-0.13.0.mysql.sql in MySQL, which succeeded. Then I created a hivenew user in MySQL (the 0.12 install used hive), granted it privileges, and changed hive-site.xml as follows:
```
<property>
  <name>hive.stats.dbclass</name>
  <value>jdbc:mysql</value>
  <description>The default database that stores temporary hive statistics.</description>
</property>
<property>
  <name>hive.stats.jdbcdriver</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>The JDBC driver for the database that stores temporary hive statistics.</description>
</property>
<property>
  <name>hive.stats.dbconnectionstring</name>
  <value>jdbc:mysql://localhost:3306/hivenew</value>
  <description>The default connection string for the database that stores temporary hive statistics.</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hivenew?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hivenew</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivenew</value>
  <description>password to use against metastore database</description>
</property>
```
After saving, I copied the MySQL JDBC driver into lib and started Hive. The hive> prompt and show tables; work normally, but there is no hivenew database in MySQL at all (when I installed 0.12 the hive database was created automatically), and show tables does not list the tables I created under hivenew either. I also keep getting errors like: Caused by: ERROR XSDB6: Another instance of Derby may have already booted the database /opt/apache-hive-0.13.1-bin/metastore_db. So the metastore is evidently still Derby. Even stranger, after I delete hive-site.xml Hive keeps working fine, still showing hive> and show tables. Does the hive-site.xml configuration not matter at all? Please help me, this is urgent.
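A hedged check that fits the symptoms: if deleting hive-site.xml changes nothing and Derby errors keep appearing, Hive is most likely not reading the file being edited, so the metastore silently falls back to the embedded Derby database. Confirming which conf directory Hive actually loads, and which connection URL ends up in effect, narrows it down; the commands assume a standard install layout.

```
echo $HIVE_CONF_DIR                      # if set, hive-site.xml must live here
ls -l $HIVE_HOME/conf/hive-site.xml      # otherwise it must live in $HIVE_HOME/conf
# run with console logging and check which javax.jdo.option.ConnectionURL is used
hive --hiveconf hive.root.logger=INFO,console -e "show databases;"
```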
Error encountered while learning Hive
I'm new to Hive. After configuring MySQL as the metastore database, I created a table test (id int, name string):
```
hive> show tables;
OK
test
Time taken: 1.759 seconds
hive> drop table test;
FAILED: Error in metadata: MetaException(message:javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
NestedThrowables:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
```
Running show tables again afterwards reports the same error. Please advise!
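A hedged sketch of the usual cause: the 'OPTION SQL_SELECT_LIMIT=DEFAULT' syntax error typically means an old mysql-connector-java is talking to a newer MySQL server that no longer accepts that statement. Swapping the connector JAR under $HIVE_HOME/lib for a newer 5.1.x build normally clears it; the file names below are assumptions.

```
# remove the old connector, drop in a newer 5.1.x release, then restart Hive
rm $HIVE_HOME/lib/mysql-connector-java-5.1.17.jar      # whichever old version is present
cp mysql-connector-java-5.1.47.jar $HIVE_HOME/lib/
```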
Source data is deleted when loading data into Hive
I created two tables, an internal table javabloger1 and an external table javabloger1. Whether I load data into the external table or the internal one, the data on HDFS (the files under /my/in) gets removed. What is going on? Please help. The full code is below:
```
public static void loadData() throws ClassNotFoundException, SQLException {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    // String hsql = "create table javabloger1 (key String,value string)";
    // String hsql = "create external table javabloger1 (key String,value string)";
    String hsql = "load data inpath '/my/in/' into table javabloger1 ";
    // String hsql = "select * from javabloger";
    Connection con = DriverManager.getConnection("jdbc:hive2://XXXXX:10000/default", "", "");
    Statement stmt = con.createStatement();
    stmt.executeUpdate(hsql);
    // ResultSet rs = stmt.executeQuery(hsql);
    // while(rs.next()){
    //     System.out.println(rs.getString(1)+"+++++"+rs.getString(2));
    // }
    // rs.close();
    stmt.close();
    con.close();
}
```
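A hedged explanation with a sketch (host and paths reused from the question; the field delimiter is an assumption): LOAD DATA INPATH moves the HDFS files into the table's directory rather than copying them, for managed and external tables alike, which is why /my/in ends up empty. If the files should stay where they are, point an external table's LOCATION at the directory instead of loading it.

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateExternalOnLocation {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection("jdbc:hive2://XXXXX:10000/default", "", "");
        try (Statement stmt = con.createStatement()) {
            // the table reads the files in place; nothing is moved out of /my/in
            stmt.execute("create external table if not exists javabloger1 (key string, value string) "
                    + "row format delimited fields terminated by '\\t' "   // delimiter is an assumption
                    + "location '/my/in'");
        } finally {
            con.close();
        }
    }
}
```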
Beginner asking for help: Hive fails to start
**A newcomer to IT asking for help**
```
[root@h1 ~]# hive
Logging initialized using configuration in jar:file:/usr/local/apache-hive-0.13.0-bin/lib/hive-common-0.13.0.jar!/hive-log4j.properties
Java HotSpot(TM) Client VM warning: You have loaded library /usr/local/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:344)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2444)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2456)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:338)
        ... 8 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
        ... 13 more
Caused by: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
NestedThrowables:
java.lang.reflect.InvocationTargetException
...
```
This is my configuration file hive-env.sh:
```
[root@h1 conf]# cat hive-env.sh
# Set Hive and Hadoop environment variables here. These variables can be used
# to control the execution of Hive. It should be used by admins to configure
# the Hive installation (so that users do not have to set environment variables
# or set command line parameters to get correct behavior).
#
# The hive service being invoked (CLI/HWI etc.) is available via the environment
# variable SERVICE
# Hive Client memory usage can be an issue if a large number of clients
# are running at the same time. The flags below have been useful in
# reducing memory usage:
#
# if [ "$SERVICE" = "cli" ]; then
#   if [ -z "$DEBUG" ]; then
#     export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseParNewGC -XX:-UseGCOverheadLimit"
#   else
#     export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit"
#   fi
# fi
# The heap size of the jvm stared by hive shell script can be controlled via:
# export HADOOP_HEAPSIZE=1024
#
# Larger heap size may be required when running queries over large number of files or partitions.
# By default hive shell scripts use a heap size of 256 (MB). Larger heap size would also be
# appropriate for hive server (hwi etc).

# Set HADOOP_HOME to point to a specific hadoop install directory
# HADOOP_HOME=${bin}/../../hadoop
HADOOP_HOME=${HADOOP_HOME}

# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=${HIVE_CONF_DIR}

# Folder containing extra ibraries required for hive compilation/execution can be controlled by:
# export HIVE_AUX_JARS_PATH=
```
My hive-site.xml configuration:
```
[root@h1 conf]# cat hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- WARNING!!! This file is provided for documentation purposes ONLY!     -->
  <!-- WARNING!!! Any changes you make to this file will be ignored by Hive. -->
  <!-- WARNING!!! You must make your changes in hive-site.xml instead.       -->
  <!-- Hive Execution Parameters -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.1.103:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>
```
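A hedged sketch of the first things to check (paths and version numbers are assumptions): with a MySQL javax.jdo.option.ConnectionURL, "Error creating transactional connection factory" most often means the MySQL JDBC driver is missing from Hive's classpath, or the metastore user cannot reach or create the hive database with the credentials in hive-site.xml.

```
# put the MySQL connector on Hive's classpath
cp mysql-connector-java-5.1.47.jar /usr/local/apache-hive-0.13.0-bin/lib/

# verify the credentials in hive-site.xml actually work against the metastore host
mysql -h192.168.1.103 -uroot -proot -e "create database if not exists hive; show grants;"
```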
Error using Sqoop to export data from Oracle to Hive
![screenshot](https://img-ask.csdn.net/upload/201504/16/1429180711_592161.png)
```
bash-4.1$ sqoop import --connect jdbc:oracle:thin:@192.168.1.169:1521:orcl --username HADOOP --password hadoop2015 --table CALC_UPAY_DATE_HADOOP_HDFS --split-by UPAYID --hive-import
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
find: paths must precede expression: ant-eclipse-1.0-jvm1.2.jar
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
15/04/16 03:28:13 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.2
15/04/16 03:28:13 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/04/16 03:28:13 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
15/04/16 03:28:13 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
15/04/16 03:28:13 INFO manager.SqlManager: Using default fetchSize of 1000
15/04/16 03:28:13 INFO tool.CodeGenTool: Beginning code generation
15/04/16 03:28:13 INFO manager.OracleManager: Time zone has been set to GMT
15/04/16 03:28:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CALC_UPAY_DATE_HADOOP_HDFS t WHERE 1=0
15/04/16 03:28:14 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-hdfs/compile/e9286bf0e7d796ba396d3155210012b0/CALC_UPAY_DATE_HADOOP_HDFS.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/04/16 03:28:15 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/e9286bf0e7d796ba396d3155210012b0/CALC_UPAY_DATE_HADOOP_HDFS.jar
15/04/16 03:28:15 INFO mapreduce.ImportJobBase: Beginning import of CALC_UPAY_DATE_HADOOP_HDFS
15/04/16 03:28:15 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/04/16 03:28:15 INFO manager.OracleManager: Time zone has been set to GMT
15/04/16 03:28:16 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/04/16 03:28:16 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.1.201:8032
15/04/16 03:28:18 INFO db.DBInputFormat: Using read commited transaction isolation
15/04/16 03:28:18 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(UPAYID), MAX(UPAYID) FROM CALC_UPAY_DATE_HADOOP_HDFS
15/04/16 03:28:19 INFO mapreduce.JobSubmitter: number of splits:4
15/04/16 03:28:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1429145594985_0020
15/04/16 03:28:20 INFO impl.YarnClientImpl: Submitted application application_1429145594985_0020
15/04/16 03:28:20 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1429145594985_0020/
15/04/16 03:28:20 INFO mapreduce.Job: Running job: job_1429145594985_0020
15/04/16 03:28:31 INFO mapreduce.Job: Job job_1429145594985_0020 running in uber mode : false
15/04/16 03:28:31 INFO mapreduce.Job: map 0% reduce 0%
15/04/16 03:28:59 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000000_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
15/04/16 03:29:00 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000002_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
15/04/16 03:29:01 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000001_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
```
Exporting data from Hive to Oracle with Sqoop works fine.
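A hedged sketch of a common fix (file names and paths are assumptions): "oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z" usually points to an Oracle JDBC driver that predates JDBC 4 (for example ojdbc14) and does not implement PreparedStatement.isClosed(). Replacing it with a JDK 6+ driver such as ojdbc6.jar in Sqoop's lib directory typically resolves the failed map tasks.

```
# swap the old Oracle driver for a JDBC4-capable one, then re-run the sqoop import
rm /usr/lib/sqoop/lib/ojdbc14.jar        # whichever old driver jar is present
cp ojdbc6.jar /usr/lib/sqoop/lib/
```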
Sqoop error when importing data from Oracle into Hive
Importing a table into Hive reports the following error; please help.
```
[root@amorsay3 bin]# ./sqoop import --hive-import --connect jdbc:oracle:thin:@192.168.13.168:1521:orcl --username HADOOPLEARN --password zhao --table EMP -m 1 --hive-table emp1
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
Warning: $HADOOP_HOME is deprecated.
15/08/11 23:17:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
15/08/11 23:17:02 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/08/11 23:17:02 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
15/08/11 23:17:02 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
15/08/11 23:17:02 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
15/08/11 23:17:02 INFO manager.SqlManager: Using default fetchSize of 1000
15/08/11 23:17:02 INFO tool.CodeGenTool: Beginning code generation
15/08/11 23:17:03 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:03 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM EMP t WHERE 1=0
15/08/11 23:17:03 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoophive/hadoop-1.2.1
Note: /tmp/sqoop-root/compile/efda22b79cedc05841de35698062fbbc/EMP.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/08/11 23:17:04 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/efda22b79cedc05841de35698062fbbc/EMP.jar
15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:04 INFO mapreduce.ImportJobBase: Beginning import of EMP
15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:06 INFO db.DBInputFormat: Using read commited transaction isolation
15/08/11 23:17:06 INFO mapred.JobClient: Cleaning up the staging area hdfs://192.168.14.168:9000/hadoop/mapred/staging/root/.staging/job_201508111912_0003
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class org.apache.hadoop.mapreduce.JobContext, but interface was expected
        at org.apache.sqoop.config.ConfigurationHelper.getJobNumMaps(ConfigurationHelper.java:65)
        at com.cloudera.sqoop.config.ConfigurationHelper.getJobNumMaps(ConfigurationHelper.java:36)
        at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:125)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
        at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
        at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196)
        at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169)
        at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266)
        at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
        at org.apache.sqoop.manager.OracleManager.importTable(OracleManager.java:444)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
```
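A hedged note on the likely mismatch: "Found class org.apache.hadoop.mapreduce.JobContext, but interface was expected" is the classic symptom of running a Sqoop build made for Hadoop 2 (here sqoop-1.4.6.bin__hadoop-0.23) against a Hadoop 1.x cluster (HADOOP_MAPRED_HOME points at hadoop-1.2.1), where JobContext is still a class. Using the Sqoop binary built for Hadoop 1, or moving the cluster to Hadoop 2, usually resolves it; the archive name and path below are assumptions.

```
# unpack the Hadoop-1 build of Sqoop next to the existing install and use it instead
tar -xzf sqoop-1.4.6.bin__hadoop-1.0.0.tar.gz -C /usr/local/hadoophive/
```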
Sqoop reports an error when exporting data from Hive into MySQL; please take a look
This is the command that was run:
liuyanbing@ubuntu:/opt/sqoop$ bin/sqoop export --connect jdbc:mysql://localhost:3306/dbtaobao --username root --password root --table user_log --export-dir '/user/hive/warehouse/dbtaobao.db/inner_user_log' --fields-terminated-by ',';
Error output:
Warning: /opt/sqoop/../hcatalog does not exist! HCatalog jobs will fail. Please set $HCAT_HOME to the root of your HCatalog installation. Warning: /opt/sqoop/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation. Warning: /opt/sqoop/../zookeeper does not exist! Accumulo imports will fail. Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation. 2019-06-11 16:05:04,541 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6 2019-06-11 16:05:04,573 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. 2019-06-11 16:05:04,678 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset. 2019-06-11 16:05:04,678 INFO tool.CodeGenTool: Beginning code generation Tue Jun 11 16:05:04 CST 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. 2019-06-11 16:05:05,241 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `user_log` AS t LIMIT 1 2019-06-11 16:05:05,379 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `user_log` AS t LIMIT 1 2019-06-11 16:05:05,392 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /bigdata/hadoop-3.1.1 Note: /tmp/sqoop-liuyanbing/compile/990c7e516f6811ff0f7c264686938932/user_log.java uses or overrides a deprecated API. Note: Recompile with -Xlint:deprecation for details. 2019-06-11 16:05:09,951 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-liuyanbing/compile/990c7e516f6811ff0f7c264686938932/user_log.jar 2019-06-11 16:05:09,960 INFO mapreduce.ExportJobBase: Beginning export of user_log 2019-06-11 16:05:09,960 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address 2019-06-11 16:05:10,093 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2019-06-11 16:05:10,131 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar 2019-06-11 16:05:11,220 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative 2019-06-11 16:05:11,224 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative 2019-06-11 16:05:11,225 INFO Configuration.deprecation: mapred.map.tasks is deprecated. 
Instead, use mapreduce.job.maps 2019-06-11 16:05:11,399 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032 2019-06-11 16:05:12,478 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/liuyanbing/.staging/job_1560238973821_0003 2019-06-11 16:05:15,272 WARN hdfs.DataStreamer: Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1252) at java.lang.Thread.join(Thread.java:1326) at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:986) at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:640) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:810) 2019-06-11 16:05:18,771 INFO input.FileInputFormat: Total input files to process : 1 2019-06-11 16:05:18,780 INFO input.FileInputFormat: Total input files to process : 1 2019-06-11 16:05:19,285 INFO mapreduce.JobSubmitter: number of splits:4 2019-06-11 16:05:19,352 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative 2019-06-11 16:05:19,353 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled 2019-06-11 16:05:19,472 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1560238973821_0003 2019-06-11 16:05:19,473 INFO mapreduce.JobSubmitter: Executing with tokens: [] 2019-06-11 16:05:19,959 INFO conf.Configuration: resource-types.xml not found 2019-06-11 16:05:19,959 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'. 2019-06-11 16:05:20,049 INFO impl.YarnClientImpl: Submitted application application_1560238973821_0003 2019-06-11 16:05:20,105 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1560238973821_0003/ 2019-06-11 16:05:20,106 INFO mapreduce.Job: Running job: job_1560238973821_0003 2019-06-11 16:05:29,273 INFO mapreduce.Job: Job job_1560238973821_0003 running in uber mode : false 2019-06-11 16:05:29,286 INFO mapreduce.Job: map 0% reduce 0% 2019-06-11 16:05:42,450 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_0, Status : FAILED [2019-06-11 16:05:39.558]Container [pid=22666,containerID=container_1560238973821_0003_01_000004] is running 318323200B beyond the 'VIRTUAL' memory limit. Current usage: 125.2 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000004 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22910 22666 22666 22666 (java) 302 45 2558558208 31405 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_0 4 |- 22666 22656 22666 22666 (bash) 0 0 14622720 634 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_0 4 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004/stderr [2019-06-11 16:05:40.618]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.619]Container exited with a non-zero exit code 143. 2019-06-11 16:05:42,479 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_0, Status : FAILED [2019-06-11 16:05:39.558]Container [pid=22651,containerID=container_1560238973821_0003_01_000003] is running 320690688B beyond the 'VIRTUAL' memory limit. Current usage: 127.5 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000003 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22955 22651 22651 22651 (java) 296 49 2560925696 32025 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_0 3 |- 22651 22649 22651 22651 (bash) 0 0 14622720 627 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_0 3 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003/stderr [2019-06-11 16:05:40.618]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.621]Container exited with a non-zero exit code 143. 2019-06-11 16:05:42,480 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_0, Status : FAILED [2019-06-11 16:05:38.617]Container [pid=22749,containerID=container_1560238973821_0003_01_000005] is running 320125440B beyond the 'VIRTUAL' memory limit. Current usage: 126.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000005 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22987 22749 22749 22749 (java) 324 37 2560360448 31709 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_0 5 |- 22749 22720 22749 22749 (bash) 0 1 14622720 640 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_0 5 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005/stderr [2019-06-11 16:05:40.620]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.622]Container exited with a non-zero exit code 143. 2019-06-11 16:05:42,482 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_0, Status : FAILED [2019-06-11 16:05:39.558]Container [pid=22675,containerID=container_1560238973821_0003_01_000002] is running 319543808B beyond the 'VIRTUAL' memory limit. Current usage: 125.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000002 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22937 22675 22675 22675 (java) 316 38 2559778816 31497 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_0 2 |- 22675 22670 22675 22675 (bash) 0 0 14622720 612 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_0 2 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002/stderr [2019-06-11 16:05:40.619]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.622]Container exited with a non-zero exit code 143. 2019-06-11 16:05:52,546 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_1, Status : FAILED [2019-06-11 16:05:50.910]Container [pid=23116,containerID=container_1560238973821_0003_01_000006] is running 282286592B beyond the 'VIRTUAL' memory limit. Current usage: 68.6 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000006 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23194 23116 23116 23116 (java) 85 29 2522521600 16852 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_1 6 |- 23116 23115 23116 23116 (bash) 0 1 14622720 710 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_1 6 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006/stderr [2019-06-11 16:05:50.970]Container killed on request. Exit code is 143 [2019-06-11 16:05:51.012]Container exited with a non-zero exit code 143. 2019-06-11 16:05:55,561 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_1, Status : FAILED [2019-06-11 16:05:54.193]Container [pid=23396,containerID=container_1560238973821_0003_01_000009] is running 313866752B beyond the 'VIRTUAL' memory limit. Current usage: 111.1 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000009 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23396 23394 23396 23396 (bash) 0 1 14622720 710 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000009/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_1 9 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009/stderr |- 23473 23396 23396 23396 (java) 245 40 2554101760 27743 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000009/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_1 9 [2019-06-11 16:05:54.228]Container killed on request. Exit code is 143 [2019-06-11 16:05:54.263]Container exited with a non-zero exit code 143. 2019-06-11 16:05:55,563 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_1, Status : FAILED [2019-06-11 16:05:54.332]Container [pid=23304,containerID=container_1560238973821_0003_01_000008] is running 314042880B beyond the 'VIRTUAL' memory limit. Current usage: 113.8 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000008 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23381 23304 23304 23304 (java) 265 51 2554277888 28423 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_1 8 |- 23304 23302 23304 23304 (bash) 0 1 14622720 720 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_1 8 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008/stderr [2019-06-11 16:05:54.353]Container killed on request. Exit code is 143 [2019-06-11 16:05:54.381]Container exited with a non-zero exit code 143. 2019-06-11 16:05:55,565 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_1, Status : FAILED [2019-06-11 16:05:54.408]Container [pid=23200,containerID=container_1560238973821_0003_01_000007] is running 314497536B beyond the 'VIRTUAL' memory limit. Current usage: 115.5 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000007 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23200 23198 23200 23200 (bash) 0 1 14622720 711 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_1 7 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007/stderr |- 23277 23200 23200 23200 (java) 257 60 2554732544 28852 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_1 7 [2019-06-11 16:05:54.463]Container killed on request. Exit code is 143 [2019-06-11 16:05:54.482]Container exited with a non-zero exit code 143. 2019-06-11 16:06:01,619 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_2, Status : FAILED [2019-06-11 16:06:00.584]Container [pid=23515,containerID=container_1560238973821_0003_01_000011] is running 337451520B beyond the 'VIRTUAL' memory limit. Current usage: 203.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000011 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23515 23513 23515 23515 (bash) 0 1 14622720 712 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_2 11 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011/stderr |- 23592 23515 23515 23515 (java) 456 89 2577686528 51352 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_2 11 [2019-06-11 16:06:00.602]Container killed on request. Exit code is 143 [2019-06-11 16:06:00.659]Container exited with a non-zero exit code 143. 2019-06-11 16:06:05,651 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_2, Status : FAILED [2019-06-11 16:06:03.816]Container [pid=23651,containerID=container_1560238973821_0003_01_000012] is running 331475456B beyond the 'VIRTUAL' memory limit. Current usage: 173.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000012 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23728 23651 23651 23651 (java) 418 39 2571710464 43768 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000012/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_2 12 |- 23651 23649 23651 23651 (bash) 0 1 14622720 707 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000012/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_2 12 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012/stderr [2019-06-11 16:06:03.981]Container killed on request. Exit code is 143 [2019-06-11 16:06:03.986]Container exited with a non-zero exit code 143. 2019-06-11 16:06:08,677 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_2, Status : FAILED [2019-06-11 16:06:07.127]Container [pid=23848,containerID=container_1560238973821_0003_01_000014] is running 335940096B beyond the 'VIRTUAL' memory limit. Current usage: 198.2 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000014 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23848 23847 23848 23848 (bash) 0 1 14622720 714 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000014/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_2 14 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014/stderr |- 23926 23848 23848 23848 (java) 408 59 2576175104 50032 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000014/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_2 14 [2019-06-11 16:06:07.186]Container killed on request. Exit code is 143 [2019-06-11 16:06:07.201]Container exited with a non-zero exit code 143. 2019-06-11 16:06:08,678 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_2, Status : FAILED [2019-06-11 16:06:07.227]Container [pid=23751,containerID=container_1560238973821_0003_01_000013] is running 337357312B beyond the 'VIRTUAL' memory limit. Current usage: 192.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000013 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23829 23751 23751 23751 (java) 463 52 2577592320 48632 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000013/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_2 13
|- 23751 23749 23751 23751 (bash) 0 1 14622720 706 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000013/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_2 13 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013/stderr
[2019-06-11 16:06:07.280]Container killed on request. Exit code is 143
[2019-06-11 16:06:07.360]Container exited with a non-zero exit code 143.
2019-06-11 16:06:12,703 INFO mapreduce.Job: map 100% reduce 0%
2019-06-11 16:06:12,711 INFO mapreduce.Job: Job job_1560238973821_0003 failed with state FAILED due to: Task failed task_1560238973821_0003_m_000002
Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0
2019-06-11 16:06:12,979 INFO mapreduce.Job: Counters: 13
	Job Counters
		Failed map tasks=13
		Killed map tasks=3
		Launched map tasks=16
		Other local map tasks=12
		Data-local map tasks=4
		Total time spent by all maps in occupied slots (ms)=124936
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=124936
		Total vcore-milliseconds taken by all map tasks=124936
		Total megabyte-milliseconds taken by all map tasks=127934464
	Map-Reduce Framework
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
2019-06-11 16:06:12,986 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2019-06-11 16:06:12,990 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 61.7517 seconds (0 bytes/sec)
2019-06-11 16:06:12,999 INFO mapreduce.ExportJobBase: Exported 0 records.
2019-06-11 16:06:12,999 ERROR tool.ExportTool: Error during export: Export job failed!
I'm new to this and can't find the mistake; could someone please take a look?
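The dumps above all show the same pattern: each map container stays within its 1 GB of physical memory but exceeds the 2.1 GB virtual-memory allowance (1 GB times the default yarn.nodemanager.vmem-pmem-ratio of 2.1), so the NodeManager kills it with exit code 143. A common direction, not confirmed for this cluster, is either to give each map task a larger container or to relax the virtual-memory check. A minimal sketch of the properties usually involved follows; the values are illustrative assumptions, and the per-job keys can equally be passed to the sqoop CLI as -D options.

```
import org.apache.hadoop.conf.Configuration;

// A minimal sketch, not the poster's code: the knobs usually involved when a container
// is killed for exceeding the VIRTUAL memory limit. The values are illustrative only.
public class MapMemoryTuning {
    public static Configuration tuned() {
        Configuration conf = new Configuration();
        conf.set("mapreduce.map.memory.mb", "2048");       // larger container per map task (assumed value)
        conf.set("mapreduce.map.java.opts", "-Xmx1638m");  // keep the JVM heap below the container size
        // Cluster-side alternatives (these belong in yarn-site.xml on the NodeManagers,
        // not in a per-job configuration):
        //   yarn.nodemanager.vmem-pmem-ratio        (default 2.1; raise it), or
        //   yarn.nodemanager.vmem-check-enabled=false   (disable the virtual-memory check)
        return conf;
    }
}
```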
A Kerberos "GSS initiate failed" problem; I'm not sure my analysis is right, please take a look
I have a Spring Boot project deployed on a server. After it has been running for about a day (almost exactly 24 hours each time I have observed it), it starts reporting the following error:
```
javax.security.sasl.SaslException: GSS initiate failed
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
	at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
	at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
	at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:196)
	at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:167)
	at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
	at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
	at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
	at org.apache.commons.pool.impl.GenericObjectPool.addObject(GenericObjectPool.java:1691)
	at org.apache.commons.pool.impl.GenericObjectPool.ensureMinIdle(GenericObjectPool.java:1648)
	at org.apache.commons.pool.impl.GenericObjectPool.access$700(GenericObjectPool.java:192)
	at org.apache.commons.pool.impl.GenericObjectPool$Evictor.run(GenericObjectPool.java:1784)
	at java.util.TimerThread.mainLoop(Timer.java:555)
	at java.util.TimerThread.run(Timer.java:505)
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
	at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
	at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
	at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
	at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
	... 20 common frames omitted
```
Since I don't know Kerberos very well, I read some material and then went to look at my krb5.tab file, where I found the following:
```
default_realm = ---------
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
---.COM = {
  kdc = --------.com
  admin_server = --------.com
}
```
Is the problem that renewable = true is missing? Compared with a configuration I found online, mine lacks that one line. Is that what prevents the ticket from being renewed?
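One observation worth noting (a hypothesis, not a confirmed diagnosis): ticket_lifetime in the pasted configuration is 86400 seconds, i.e. exactly 24 hours, which matches the observed failure time, and renewable/renew_lifetime only help if something actively renews the ticket. When the service authenticates through Hadoop's UserGroupInformation with a keytab, a common approach is to re-login from the keytab before long-lived connections are (re)opened, rather than relying on krb5.conf renewal settings. A minimal sketch, assuming a keytab-based UGI login is used somewhere in the service; the principal and paths are placeholders.

```
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Minimal sketch (assumption: the Spring Boot service logs in from a keytab via UGI).
// checkTGTAndReloginFromKeytab() is a no-op while the ticket is still fresh and
// re-authenticates from the keytab once it is close to expiring, so the 24 h
// ticket_lifetime stops mattering.
public class KerberosRelogin {
    public static void loginOnce(Configuration conf,
                                 String principal, String keytab) throws IOException {
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(principal, keytab); // e.g. at startup
    }

    // Call this before borrowing or creating a Hive connection (or from a scheduled task).
    public static void ensureFreshTicket() throws IOException {
        UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
    }
}
```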
Incremental import via a saved sqoop job fails with "output directory exists"
![图片说明](https://img-ask.csdn.net/upload/201803/30/1522408714_369894.png)
After creating the task with the sqoop job command, the first run succeeds, but the second run fails with the error shown in the screenshot above. The job was created with:
sqoop job --create tong_count_incre -- import --connect jdbc:mysql://192.0.4.114:3306/hadoop --username root --password root --table tong_count_copy --hive-table default.tong_count --incremental lastmodified --check-column tong_time --last-value "2018-01-23 12:37:18" -m 1
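A likely direction (hedged, since the exact message is only visible in the screenshot): with --incremental lastmodified, Sqoop refuses to reuse an existing output directory unless it is told how to combine the newly imported rows with the previous ones, so runs after the first usually need an extra --append or --merge-key <primary-key-column> in the job definition. The same idea is sketched below, driven from Java via Sqoop.runTool and trimmed to the arguments relevant to the incremental error; the id merge key is an assumption about the table.

```
import org.apache.hadoop.conf.Configuration;
import org.apache.sqoop.Sqoop;

// Sketch only: the equivalent incremental import run programmatically. The decisive
// part is the extra --append flag (or --merge-key <pk-column>); for the CLI saved job
// the equivalent fix is adding the same flag to the "sqoop job --create ... -- import"
// definition. Connection details are copied from the question; other options trimmed.
public class IncrementalImportSketch {
    public static void main(String[] args) {
        String[] importArgs = new String[] {
            "import",
            "--connect", "jdbc:mysql://192.0.4.114:3306/hadoop",
            "--username", "root", "--password", "root",
            "--table", "tong_count_copy",
            "--incremental", "lastmodified",
            "--check-column", "tong_time",
            "--last-value", "2018-01-23 12:37:18",
            "--append",              // or: "--merge-key", "id"  (id is an assumed PK column)
            "-m", "1"
        };
        int ret = Sqoop.runTool(importArgs, new Configuration());
        System.exit(ret);
    }
}
```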
Sqoop import from DB2 fails with "Connection timed out"; looking for advice
When using Sqoop to import data from DB2 into HDFS, it fails with a connection timeout. Connecting with the list-tables command works fine and returns correct results; I have tested the remote DB2 connection without problems, and telnet to the port also works. DB2 is v9.7, using the JDBC driver that ships with the installation package. The error message is below.
[biadmin@Hadoop01 sqoop]$ ./bin/sqoop import --connect jdbc:db2://9.112.30.177:50000/content --username db2admin --P --table DB2ADMIN.PERSON --as-textfile -m 1 --target-dir /user/test
Warning: /opt/ibm/biginsights/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
/opt/ibm/biginsights/sqoop/bin/configure-sqoop: line 181: /opt/ibm/biginsights/hive/hcatalog/bin/hcat: Permission denied
16/03/02 08:27:38 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
Enter password:
16/03/02 08:27:46 INFO manager.SqlManager: Using default fetchSize of 1000
16/03/02 08:27:46 INFO tool.CodeGenTool: Beginning code generation
16/03/02 08:28:49 ERROR manager.SqlManager: Error executing statement: com.ibm.db2.jcc.am.DisconnectNonTransientConnectionException: [jcc][t4][2043][11550][4.14.113] Exception java.net.ConnectException: Error opening socket to server /9.112.30.177 on port 50,000 with message: Connection timed out. ERRORCODE=-4499, SQLSTATE=08001
com.ibm.db2.jcc.am.DisconnectNonTransientConnectionException: [jcc][t4][2043][11550][4.14.113] Exception java.net.ConnectException: Error opening socket to server /9.112.30.177 on port 50,000 with message: Connection timed out. ERRORCODE=-4499, SQLSTATE=08001
	at com.ibm.db2.jcc.am.ed.a(ed.java:320)
	at com.ibm.db2.jcc.am.ed.a(ed.java:338)
	at com.ibm.db2.jcc.t4.vb.a(vb.java:434)
	at com.ibm.db2.jcc.t4.vb.<init>(vb.java:93)
	at com.ibm.db2.jcc.t4.a.b(a.java:354)
	at com.ibm.db2.jcc.t4.b.newAgent_(b.java:2030)
	at com.ibm.db2.jcc.am.Connection.initConnection(Connection.java:731)
	at com.ibm.db2.jcc.am.Connection.<init>(Connection.java:680)
	at com.ibm.db2.jcc.t4.b.<init>(b.java:334)
	at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(DB2SimpleDataSource.java:232)
	at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(DB2SimpleDataSource.java:198)
	at com.ibm.db2.jcc.DB2Driver.connect(DB2Driver.java:475)
	at com.ibm.db2.jcc.DB2Driver.connect(DB2Driver.java:116)
	at java.sql.DriverManager.getConnection(DriverManager.java:582)
	at java.sql.DriverManager.getConnection(DriverManager.java:226)
	at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:885)
	at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
	at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:744)
	at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:767)
	at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:270)
	at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:241)
	at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:227)
	at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:295)
	at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1833)
	at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1645)
	at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Caused by: java.net.ConnectException: Connection timed out
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:369)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:230)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:212)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
	at java.net.Socket.connect(Socket.java:642)
	at com.ibm.db2.jcc.t4.v.run(v.java:49)
	at java.security.AccessController.doPrivileged(AccessController.java:330)
	at com.ibm.db2.jcc.t4.vb.a(vb.java:420)
	... 31 more
16/03/02 08:28:49 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: No columns to generate for ClassWriter
	at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1651)
	at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
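Note that in this trace the timeout happens during code generation (SqlManager.getColumnTypes called from ClassWriter), i.e. on the same client machine where list-tables succeeds and before any MapReduce task starts, which points at an intermittent network or firewall issue rather than at Sqoop itself. A small diagnostic (not a fix) is to probe the DB2 port with a bounded timeout at the moment the import is launched; host and port below are taken from the command in the question.

```
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Plain connectivity probe, diagnostic only: run it on the machine where the sqoop
// import is launched (and on the worker nodes if the job ever gets that far) to see
// whether DB2's port is reachable right when the import runs. A repeatable
// "connect timed out" here indicates a network or firewall problem.
public class Db2PortProbe {
    public static void main(String[] args) {
        String host = "9.112.30.177";   // DB2 host from the question
        int port = 50000;               // DB2 port from the question
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 10_000); // 10 s timeout
            System.out.println("TCP connect to " + host + ":" + port + " OK");
        } catch (IOException e) {
            System.out.println("TCP connect failed: " + e);
        }
    }
}
```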
Cloudera Manager offline install: the agent times out downloading resources from the master node
错误日志: [19/Nov/2018 16:16:04 +0000] 2789 MainThread stacks_collection_manager INFO Using max_uncompressed_file_size_bytes: 5242880 [19/Nov/2018 16:16:04 +0000] 2789 MainThread __init__ INFO Importing metric schema from file /opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/monitor/schema.json [19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Supervised processes will add the following to their environment (in addition to the supervisor's env): {'CDH_PARQUET_HOME': '/usr/lib/parquet', 'JSVC_HOME': '/usr/libexec/bigtop-utils', 'CMF_PACKAGE_DIR': '/opt/cloudera-manager/cm-5.10.2/lib64/cmf/service', 'CDH_HADOOP_BIN': '/usr/bin/hadoop', 'MGMT_HOME': '/opt/cloudera-manager/cm-5.10.2/share/cmf', 'CDH_IMPALA_HOME': '/usr/lib/impala', 'CDH_YARN_HOME': '/usr/lib/hadoop-yarn', 'CDH_HDFS_HOME': '/usr/lib/hadoop-hdfs', 'PATH': '/sbin:/usr/sbin:/bin:/usr/bin', 'CDH_HUE_PLUGINS_HOME': '/usr/lib/hadoop', 'CM_STATUS_CODES': u'STATUS_NONE HDFS_DFS_DIR_NOT_EMPTY HBASE_TABLE_DISABLED HBASE_TABLE_ENABLED JOBTRACKER_IN_STANDBY_MODE YARN_RM_IN_STANDBY_MODE', 'KEYTRUSTEE_KP_HOME': '/usr/share/keytrustee-keyprovider', 'CLOUDERA_ORACLE_CONNECTOR_JAR': '/usr/share/java/oracle-connector-java.jar', 'CDH_SQOOP2_HOME': '/usr/lib/sqoop2', 'KEYTRUSTEE_SERVER_HOME': '/usr/lib/keytrustee-server', 'CDH_MR2_HOME': '/usr/lib/hadoop-mapreduce', 'HIVE_DEFAULT_XML': '/etc/hive/conf.dist/hive-default.xml', 'CLOUDERA_POSTGRESQL_JDBC_JAR': '/opt/cloudera-manager/cm-5.10.2/share/cmf/lib/postgresql-9.0-801.jdbc4.jar', 'CDH_KMS_HOME': '/usr/lib/hadoop-kms', 'CDH_HBASE_HOME': '/usr/lib/hbase', 'CDH_SQOOP_HOME': '/usr/lib/sqoop', 'WEBHCAT_DEFAULT_XML': '/etc/hive-webhcat/conf.dist/webhcat-default.xml', 'CDH_OOZIE_HOME': '/usr/lib/oozie', 'CDH_ZOOKEEPER_HOME': '/usr/lib/zookeeper', 'CDH_HUE_HOME': '/usr/lib/hue', 'CLOUDERA_MYSQL_CONNECTOR_JAR': '/usr/share/java/mysql-connector-java.jar', 'CDH_HBASE_INDEXER_HOME': '/usr/lib/hbase-solr', 'CDH_MR1_HOME': '/usr/lib/hadoop-0.20-mapreduce', 'CDH_SOLR_HOME': '/usr/lib/solr', 'CDH_PIG_HOME': '/usr/lib/pig', 'CDH_SENTRY_HOME': '/usr/lib/sentry', 'CDH_CRUNCH_HOME': '/usr/lib/crunch', 'CDH_LLAMA_HOME': '/usr/lib/llama/', 'CDH_HTTPFS_HOME': '/usr/lib/hadoop-httpfs', 'ROOT': '/opt/cloudera-manager/cm-5.10.2/lib64/cmf', 'CDH_HADOOP_HOME': '/usr/lib/hadoop', 'CDH_HIVE_HOME': '/usr/lib/hive', 'ORACLE_HOME': '/usr/share/oracle/instantclient', 'CDH_HCAT_HOME': '/usr/lib/hive-hcatalog', 'CDH_KAFKA_HOME': '/usr/lib/kafka', 'CDH_SPARK_HOME': '/usr/lib/spark', 'TOMCAT_HOME': '/usr/lib/bigtop-tomcat', 'CDH_FLUME_HOME': '/usr/lib/flume-ng'} [19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO To override these variables, use /etc/cloudera-scm-agent/config.ini. Environment variables for CDH locations are not used when CDH is installed from parcels. 
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Created /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/process [19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Chmod'ing /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/process to 0751 [19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Created /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/supervisor [19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Chmod'ing /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/supervisor to 0751 [19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Created /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/flood [19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Chowning /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/flood to cloudera-scm (498) cloudera-scm (498) [19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Chmod'ing /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/flood to 0751 [19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Created /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/supervisor/include [19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Chmod'ing /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/supervisor/include to 0751 [19/Nov/2018 16:16:04 +0000] 2789 MainThread agent ERROR Failed to connect to previous supervisor. Traceback (most recent call last): File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/agent.py", line 2073, in find_or_start_supervisor self.configure_supervisor_clients() File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/agent.py", line 2254, in configure_supervisor_clients supervisor_options.realize(args=["-c", os.path.join(self.supervisor_dir, "supervisord.conf")]) File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/supervisor-3.0-py2.6.egg/supervisor/options.py", line 1599, in realize Options.realize(self, *arg, **kw) File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/supervisor-3.0-py2.6.egg/supervisor/options.py", line 333, in realize self.process_config() File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/supervisor-3.0-py2.6.egg/supervisor/options.py", line 341, in process_config self.process_config_file(do_usage) File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/supervisor-3.0-py2.6.egg/supervisor/options.py", line 376, in process_config_file self.usage(str(msg)) File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/supervisor-3.0-py2.6.egg/supervisor/options.py", line 164, in usage self.exit(2) SystemExit: 2 [19/Nov/2018 16:16:04 +0000] 2789 MainThread tmpfs INFO Successfully mounted tmpfs at /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/process [19/Nov/2018 16:16:05 +0000] 2789 MainThread agent INFO Trying to connect to newly launched supervisor (Attempt 1) [19/Nov/2018 16:16:05 +0000] 2789 MainThread agent INFO Supervisor version: 3.0, pid: 2821 [19/Nov/2018 16:16:05 +0000] 2789 MainThread agent INFO Successfully connected to supervisor [19/Nov/2018 16:16:05 +0000] 2789 MainThread status_server INFO Using maximum impala profile bundle size of 1073741824 bytes. [19/Nov/2018 16:16:05 +0000] 2789 MainThread status_server INFO Using maximum stacks log bundle size of 1073741824 bytes. 
[19/Nov/2018 16:16:05 +0000] 2789 MainThread _cplogging INFO [19/Nov/2018:16:16:05] ENGINE Bus STARTING [19/Nov/2018 16:16:05 +0000] 2789 MainThread _cplogging INFO [19/Nov/2018:16:16:05] ENGINE Started monitor thread '_TimeoutMonitor'. [19/Nov/2018 16:16:06 +0000] 2789 MainThread _cplogging INFO [19/Nov/2018:16:16:06] ENGINE Serving on yingzhi01.com:9000 [19/Nov/2018 16:16:06 +0000] 2789 MainThread _cplogging INFO [19/Nov/2018:16:16:06] ENGINE Bus STARTED [19/Nov/2018 16:16:06 +0000] 2789 MainThread __init__ INFO New monitor: (<cmf.monitor.host.HostMonitor object at 0x2990c50>,) [19/Nov/2018 16:16:06 +0000] 2789 MonitorDaemon-Scheduler __init__ INFO Monitor ready to report: ('HostMonitor',) [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Setting default socket timeout to 30 [19/Nov/2018 16:16:06 +0000] 2789 Monitor-HostMonitor network_interfaces INFO NIC iface eth0 doesn't support ETHTOOL (95) [19/Nov/2018 16:16:06 +0000] 2789 Monitor-HostMonitor throttling_logger ERROR Error getting directory attributes for /opt/cloudera-manager/cm-5.10.2/log/cloudera-scm-agent Traceback (most recent call last): File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/monitor/dir_monitor.py", line 90, in _get_directory_attributes name = pwd.getpwuid(uid)[0] KeyError: 'getpwuid(): uid not found: 1106' [19/Nov/2018 16:16:06 +0000] 2789 MainThread heartbeat_tracker INFO HB stats (seconds): num:1 LIFE_MIN:0.22 min:0.22 mean:0.22 max:0.22 LIFE_MAX:0.22 [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO CM server guid: dceeafae-a884-42f1-ba7b-4ee187ef3bef [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Using parcels directory from server provided value: /opt/cloudera/parcels [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent WARNING Expected user root for /opt/cloudera/parcels but was cloudera-scm [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent WARNING Expected group root for /opt/cloudera/parcels but was cloudera-scm [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Created /opt/cloudera/parcel-cache [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Chowning /opt/cloudera/parcel-cache to root (0) root (0) [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Chmod'ing /opt/cloudera/parcel-cache to 0755 [19/Nov/2018 16:16:06 +0000] 2789 MainThread parcel INFO Agent does create users/groups and apply file permissions [19/Nov/2018 16:16:06 +0000] 2789 MainThread downloader INFO Downloader path: /opt/cloudera/parcel-cache [19/Nov/2018 16:16:06 +0000] 2789 MainThread parcel_cache INFO Using /opt/cloudera/parcel-cache for parcel cache [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Flood daemon (re)start attempt [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Created /opt/cloudera/parcels/.flood [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Chowning /opt/cloudera/parcels/.flood to cloudera-scm (498) cloudera-scm (498) [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Chmod'ing /opt/cloudera/parcels/.flood to 0755 [19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Triggering supervisord update. [19/Nov/2018 16:16:36 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out [19/Nov/2018 16:16:36 +0000] 2789 MainThread agent INFO Active parcel list updated; recalculating component info. [19/Nov/2018 16:16:36 +0000] 2789 MainThread throttling_logger WARNING CMF_AGENT_JAVA_HOME environment variable host override will be deprecated in future. 
JAVA_HOME setting configured from CM server takes precedence over host agent override. Configure JAVA_HOME setting from CM server. [19/Nov/2018 16:16:36 +0000] 2789 MainThread throttling_logger INFO Identified java component java8 with full version JAVA_HOME=/opt/modules/jdk1.8.0_144 java version "1.8.0_144" Java(TM) SE Runtime Environment (build 1.8.0_144-b01) Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode) for requested version . [19/Nov/2018 16:16:36 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.6659779549 [19/Nov/2018 16:16:36 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last [19/Nov/2018 16:16:44 +0000] 2789 Monitor-HostMonitor throttling_logger ERROR Timeout with args ['ntpdc', '-np'] None [19/Nov/2018 16:16:44 +0000] 2789 Monitor-HostMonitor throttling_logger ERROR Failed to collect NTP metrics Traceback (most recent call last): File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/monitor/host/ntp_monitor.py", line 48, in collect self.collect_ntpd() File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/monitor/host/ntp_monitor.py", line 66, in collect_ntpd result, stdout, stderr = self._subprocess_with_timeout(args, self._timeout) File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/monitor/host/ntp_monitor.py", line 38, in _subprocess_with_timeout return subprocess_with_timeout(args, timeout) File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/subprocess_timeout.py", line 94, in subprocess_with_timeout raise Exception("timeout with args %s" % args) Exception: timeout with args ['ntpdc', '-np'] [19/Nov/2018 16:17:06 +0000] 2789 DnsResolutionMonitor throttling_logger INFO Using java location: '/opt/modules/jdk1.8.0_144/bin/java'. 
[19/Nov/2018 16:17:06 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out [19/Nov/2018 16:17:06 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1082139015 [19/Nov/2018 16:17:06 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last [19/Nov/2018 16:17:36 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out [19/Nov/2018 16:17:36 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1235852242 [19/Nov/2018 16:17:36 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last [19/Nov/2018 16:18:07 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out [19/Nov/2018 16:18:07 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1040799618 [19/Nov/2018 16:18:07 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last [19/Nov/2018 16:18:37 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out [19/Nov/2018 16:18:37 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1849529743 [19/Nov/2018 16:18:37 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last [19/Nov/2018 16:19:07 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out [19/Nov/2018 16:19:07 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1211960316 [19/Nov/2018 16:19:07 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last [19/Nov/2018 16:19:37 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out [19/Nov/2018 16:19:37 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1215620041 [19/Nov/2018 16:19:37 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last [19/Nov/2018 16:20:01 +0000] 2789 CP Server Thread-4 _cplogging INFO 192.168.164.35 - - [19/Nov/2018:16:20:01] "GET /heartbeat HTTP/1.1" 200 2 "" "NING/1.0" [19/Nov/2018 16:20:04 +0000] 2789 CP Server Thread-5 _cplogging INFO 192.168.164.35 - - [19/Nov/2018:16:20:04] "GET /heartbeat HTTP/1.1" 200 2 "" "NING/1.0" [19/Nov/2018 16:20:07 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out [19/Nov/2018 16:20:07 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1212861538 [19/Nov/2018 16:20:07 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last [19/Nov/2018 16:20:37 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out [19/Nov/2018 16:20:37 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1753029823 [19/Nov/2018 16:20:37 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last [19/Nov/2018 16:20:37 +0000] 2789 Thread-13 downloader INFO Fetching torrent: http://yingzhi01.com:7180/cmf/parcel/download/CDH-5.10.2-1.cdh5.10.2.p0.5-el6.parcel.torrent [19/Nov/2018 16:20:37 +0000] 2789 Thread-13 downloader INFO Starting download of: http://yingzhi01.com:7180/cmf/parcel/download/CDH-5.10.2-1.cdh5.10.2.p0.5-el6.parcel [19/Nov/2018 16:21:07 +0000] 2789 Thread-13 downloader ERROR Unexpected exception during download Traceback (most recent call last): File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/downloader.py", line 279, in download self.client.AddTorrent(torrent_url) File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/flood/util/cmd.py", line 159, in __call__ return self.fn.__get__(self.binding)(*args, **kwargs) File 
"/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/flood/util/rpc.py", line 68, in <lambda> return lambda *pargs, **kwargs: self._invoke(*pargs, **kwargs) File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/flood/util/rpc.py", line 77, in _invoke return rpcClient.requestor.request(self.schema.name, msg) File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/flood/util/rpc.py", line 129, in requestor return avro.ipc.Requestor(self.SCHEMA, self.transceiver) File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/flood/util/rpc.py", line 125, in transceiver return avro.ipc.HTTPTransceiver(self.server.host, self.server.port) File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/avro-1.6.3-py2.6.egg/avro/ipc.py", line 469, in __init__ self.conn.connect() File "/usr/lib64/python2.6/httplib.py", line 771, in connect self.timeout) File "/usr/lib64/python2.6/socket.py", line 567, in create_connection raise error, msg timeout: timed out [19/Nov/2018 16:21:07 +0000] 2789 Thread-13 downloader INFO Finished download [ url: http://yingzhi01.com:7180/cmf/parcel/download/CDH-5.10.2-1.cdh5.10.2.p0.5-el6.parcel, state: exception, total_bytes: 0, downloaded_bytes: 0, start_time: 2018-11-19 16:20:37, download_end_time: , end_time: 2018-11-19 16:21:07, code: 600, exception_msg: timed out, path: None ] [19/Nov/2018 16:21:07 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out [19/Nov/2018 16:21:07 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1247620583 [19/Nov/2018 16:21:07 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last [19/Nov/2018 16:21:07 +0000] 2789 Thread-13 downloader INFO Fetching torrent: http://yingzhi01.com:7180/cmf/parcel/download/CDH-5.10.2-1.cdh5.10.2.p0.5-el6.parcel.torrent [19/Nov/2018 16:21:08 +0000] 2789 Thread-13 downloader INFO Starting download of: http://yingzhi01.com:7180/cmf/parcel/download/CDH-5.10.2-1.cdh5.10.2.p0.5-el6.parcel [19/Nov/2018 16:21:38 +0000] 2789 Thread-13 downloader ERROR Unexpected exception during download 然后就是不断重复超时错误求大神指点。。。
Sqoop 1 called from Java reports "Can't get Kerberos principal renewer"
The full code is below.
**Sqoop 1 via the Java API with Kerberos throws "Can't get Master Kerberos principal for use as renewer"**
```
public class SqoopTest {
    public static void main(String[] args) throws Exception {
        // =================================================================
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://101.30.188.246:9000/"); // set the HDFS service address
        String keytabFile = "/home/hcj/tab/hdfs.keytab";
        String principle = "hdfs@MSO.COM";
        String krbConf = "/home/hcj/krb5.conf";
        System.setProperty("java.security.krb5.conf", krbConf);
        conf.set("hadoop.security.authentication", "Kerberos");
        //conf.setBoolean("fs.hdfs.imHADOpl.disable.cache", true);
        conf.set("keytab.file", keytabFile);
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(principle, keytabFile);
        // =================================================================
        String[] arg = new String[] {
            // Oracle database info
            /*
             * sqoop export --connect jdbc:mysql://127.0.0.1:3306/test --username jamie --table
             * persons --export-dir /user/hive/warehouse/dw_api_server.db/persons2/
             * --input-fields-terminated-by '\t' --input-lines-terminated-by '\n'
             */
            "--connect", "jdbc:mysql://114.115.156.37:3306/test",
            "--username", "root",
            "--password", "root",
            "--table", "persons",
            "--m", "1",
            "--export-dir", "hdfs://101.30.188.246:9000/user/hive/warehouse/dw_api_server.db/persons/",
            "--input-fields-terminated-by", "\t"
            //"-columns","id,city"
        };
        String[] expandArguments = OptionsFileUtil.expandArguments(arg);
        SqoopTool tool = SqoopTool.getTool("export");
        Configuration loadPlugins = SqoopTool.loadPlugins(conf);
        Sqoop sqoop = new Sqoop((com.cloudera.sqoop.tool.SqoopTool) tool, loadPlugins);
        int res = Sqoop.runSqoop(sqoop, expandArguments);
        if (res == 0)
            System.out.println("Success");
    }
}
```
The error:
```
java.io.IOException: Can't get Master Kerberos principal for use as renewer
	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:133)
	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:166)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
	at org.apache.sqoop.mapreduce.ExportJobBase.doSubmitJob(ExportJobBase.java:322)
	at org.apache.sqoop.mapreduce.ExportJobBase.runJob(ExportJobBase.java:299)
	at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:440)
	at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
	at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
	at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
	at com.mshuoke.datagw.impl.sqoop.SqoopTest.main(SqoopTest.java:58)
```
Any ideas on how to solve this?
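A note on the likely cause (an assumption, not verified against this cluster): TokenCache raises "Can't get Master Kerberos principal for use as renewer" when the job Configuration contains no yarn.resourcemanager.principal, which is typical when only fs.default.name is set by hand instead of loading the cluster's *-site.xml files. A minimal sketch of building the Configuration from the cluster configuration files before the Kerberos login and the Sqoop call; the file paths and principal values are placeholders for the real cluster settings.

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Sketch only: load the cluster's *-site.xml files (paths are assumptions) so that
// yarn.resourcemanager.principal and the MapReduce/YARN settings are present when
// the export job is submitted; alternatively set the decisive keys explicitly.
public class KerberizedSqoopConf {
    public static Configuration build() {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));   // assumed path
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));   // assumed path
        conf.addResource(new Path("/etc/hadoop/conf/yarn-site.xml"));   // assumed path
        conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml")); // assumed path
        // Or set the properties directly (values below are placeholders):
        // conf.set("yarn.resourcemanager.principal", "yarn/_HOST@MSO.COM");
        // conf.set("dfs.namenode.kerberos.principal", "hdfs/_HOST@MSO.COM");
        return conf;
    }
}
```

The returned Configuration would then be passed to UserGroupInformation.setConfiguration and to SqoopTool.loadPlugins in place of the hand-built one in the question.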