Sqoop full import from PostgreSQL to Hive fails with "cannot resolve sql type for 1111"

I have only recently started using Sqoop and ran into a problem; could anyone tell me how to solve it?
The `extra` column of the PostgreSQL table I am importing contains JSON data, and the import fails with `cannot resolve sql type for 1111` and `no java type for sql type for column extra`. Based on two articles, https://blog.csdn.net/Post_Yuan/article/details/79799980 and https://blog.csdn.net/lookqlp/article/details/52096193, I modified the Sqoop command as follows.
The original command, before adding `--map-column-hive Extra=String` and `--map-column-java Extra=String`, was:
```
sqoop import \
--connect <jdbc-url> \
--username <username> \
--password <password> \
--table <table> \
--null-string '\N' \
--null-non-string '\N' \
--hive-overwrite \
--hcatalog-database <hive-database> \
--hcatalog-table <pre-created-hive-table> \
--hcatalog-partition-keys dt \
--hcatalog-partition-values 20180913 \
--as-parquetfile \
-m 1
```
This failed with `cannot resolve sql type for 1111` and `no java type for sql type for column extra`.
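For reference: SQL type 1111 is `java.sql.Types.OTHER`, which the PostgreSQL JDBC driver reports for `json`/`jsonb` columns, so Sqoop has no built-in Java or Hive mapping for it. Note also that the error names the column in lowercase (`extra`), so the key given to `--map-column-java`/`--map-column-hive` may need to match that exact case. If the mapping route keeps failing, a hedged alternative is to cast the column to text on the PostgreSQL side with a free-form query; the sketch below uses placeholder connection details and assumes an `id` column exists:

```bash
# A minimal sketch, not the poster's exact setup: casting extra to text in a
# free-form query means Sqoop never sees the unmappable type 1111 at all.
# <host>/<db>/<table>/<user>/id are placeholders.
sqoop import \
  --connect "jdbc:postgresql://<host>:5432/<db>" \
  --username <user> \
  --password <password> \
  --query 'SELECT id, extra::text AS extra FROM <table> WHERE $CONDITIONS' \
  --split-by id \
  --hcatalog-database <hive-database> \
  --hcatalog-table <pre-created-hive-table> \
  --hcatalog-partition-keys dt \
  --hcatalog-partition-values 20180913 \
  -m 1
```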
Adding `--map-column-hive Extra=String` and `--map-column-java Extra=String` gives:
```
sqoop import \
--connect <jdbc-url> \
--username <username> \
--password <password> \
--table <table> \
--null-string '\N' \
--null-non-string '\N' \
--map-column-hive Extra=String \
--map-column-java Extra=String \
--hive-overwrite \
--hcatalog-database <hive-database> \
--hcatalog-table <pre-created-hive-table> \
--hcatalog-partition-keys dt \
--hcatalog-partition-values 20180913 \
--as-parquetfile \
-m 1
```
This time it failed with:
The connection attempt failed.
connect timed out
Closed a connection to metastore, current connections: 0
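The second failure is no longer a type problem: the job can no longer reach the Hive Metastore (the HCatalog path needs a running metastore). A hedged first check, with a placeholder host and the default Thrift port as assumptions, is to read `hive.metastore.uris` and confirm the port is reachable from the node running Sqoop:

```bash
# Hedged connectivity checks; <metastore-host> and 9083 (the default Thrift
# port) are assumptions, read the real value out of hive-site.xml first.
grep -A1 'hive.metastore.uris' "$HIVE_HOME/conf/hive-site.xml"
nc -vz <metastore-host> 9083
```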

1 answer

Phoebe_Ma
Phoebe_Ma Hi, thanks for your answer. I had read that blog post as well; maybe I just did not understand it properly. The situation I am facing is: before extracting the data I first create the table in Hive, then run the extraction, and the Hive table has the source table's columns plus one extra date column used as the partition.
replied almost 2 years ago
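For readers following the same workflow (pre-create the partitioned Hive table, then import through HCatalog), a minimal sketch with assumed database, table, and column names, using Parquet to match `--as-parquetfile`:

```bash
# Assumed layout for illustration only: mirror the PostgreSQL columns, map the
# json column to STRING, and declare dt as the partition column.
hive -e "
CREATE TABLE <hive_db>.<hive_table> (
  id    BIGINT,
  extra STRING
)
PARTITIONED BY (dt STRING)
STORED AS PARQUET;"
```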
Other related questions
Sqoop import from MySQL to Hive: one column's data gets altered

After importing the MySQL data into HDFS, I found one field had been inexplicably changed. The MySQL query result: ![MySQL data](https://img-ask.csdn.net/upload/202002/29/1582920182_246207.png) The data in HDFS: ![screenshot](https://img-ask.csdn.net/upload/202002/29/1582920278_325177.png) `_NIC_ID` is empty in MySQL, yet after the import into HDFS it turned into this: `^@他`. The Sqoop command used:
```sqoop
sqoop import --connect jdbc:mysql:/xxx.xxx.xxxx.xxx:3306/business_data --username user --password pass --hive-import --hive-database business_data --hive-table el_company_class_test_3 --fields-terminated-by '\t' --query "select * from el_company_class where ENTID='3249449' and \$CONDITIONS" --target-dir /user/hive/warehouse/business_data.db/el_company_class_test_3 --split-by entid --delete-target-dir --null-string '' --null-non-string ''
```
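A hedged note on this one: `^@` is how a NUL byte renders, and an empty `--null-string ''` is a common trigger for it; writing NULLs as Hive's `\N` marker usually avoids the problem. The sketch below is the poster's command with only the null handling changed:

```bash
# Only the last line differs from the command in the question; everything else
# is copied verbatim from it.
sqoop import --connect jdbc:mysql:/xxx.xxx.xxxx.xxx:3306/business_data \
  --username user --password pass \
  --hive-import --hive-database business_data --hive-table el_company_class_test_3 \
  --fields-terminated-by '\t' \
  --query "select * from el_company_class where ENTID='3249449' and \$CONDITIONS" \
  --target-dir /user/hive/warehouse/business_data.db/el_company_class_test_3 \
  --split-by entid --delete-target-dir \
  --null-string '\\N' --null-non-string '\\N'
```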

Importing data from Oracle to Hive with Sqoop

Hoping someone can help, thank you very much!!! 1. The error screenshot: ![screenshot](https://img-ask.csdn.net/upload/201903/28/1553754743_757251.jpg) 2. My table DDL, test data, and Sqoop command: ![screenshot](https://img-ask.csdn.net/upload/201903/28/1553754849_51874.jpg)

When loading data into Hive through Sqoop, how does Sqoop know which Hive warehouse to use?

I created my own hive-site.xml and specified the Hive warehouse in it. The problem now is: when I import data from SQL Server into Hive through Sqoop, how do I tell Sqoop to use my own hive-site.xml, so that it uses the warehouse I configured? We do not want to use the default Hive warehouse. Please help.
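A hedged sketch of one approach: `--hive-import` shells out to the `hive` CLI, and that CLI resolves hive-site.xml from `$HIVE_CONF_DIR` (falling back to `$HIVE_HOME/conf`), so exporting the variable before running Sqoop should make the custom warehouse setting take effect. All paths and connection details below are placeholders:

```bash
# Point the hive CLI (which sqoop invokes for --hive-import) at a custom
# config directory that contains your own hive-site.xml.
export HIVE_CONF_DIR=/path/to/my/hive/conf
sqoop import \
  --connect "jdbc:sqlserver://<host>:1433;database=<db>" \
  --username <user> --password <password> \
  --table <table> \
  --hive-import
```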

Error importing data from MariaDB to Hive with Sqoop

As the title says. The command I ran:
```
sqoop import --connect jdbc:mysql://localhost:3306/test --username root --password 1 --table exit_tran --hive-import --hive-table exit_tran -m 1 --hive-overwrite
```
The import keeps failing with:
```
20/03/03 17:35:40 INFO mapreduce.Job: Task Id : attempt_1583223426401_0007_m_000000_2, Status : FAILED
Error: java.io.IOException: SQLException in nextKeyValue
    at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.sql.SQLException: HOUR_OF_DAY: 2 -> 3
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:89)
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:63)
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:73)
    at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:85)
    at com.mysql.cj.jdbc.result.ResultSetImpl.getTimestamp(ResultSetImpl.java:903)
    at org.apache.sqoop.lib.JdbcWritableBridge.readTimestamp(JdbcWritableBridge.java:111)
    at com.cloudera.sqoop.lib.JdbcWritableBridge.readTimestamp(JdbcWritableBridge.java:83)
    at exit_tran.readFields(exit_tran.java:229)
    at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:244)
    ... 12 more
Caused by: com.mysql.cj.exceptions.WrongArgumentException: HOUR_OF_DAY: 2 -> 3
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
    at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
    at com.mysql.cj.result.SqlTimestampValueFactory.localCreateFromTimestamp(SqlTimestampValueFactory.java:112)
    at com.mysql.cj.result.SqlTimestampValueFactory.localCreateFromTimestamp(SqlTimestampValueFactory.java:50)
    at com.mysql.cj.result.AbstractDateTimeValueFactory.createFromTimestamp(AbstractDateTimeValueFactory.java:87)
    at com.mysql.cj.protocol.a.MysqlTextValueDecoder.decodeTimestamp(MysqlTextValueDecoder.java:79)
    at com.mysql.cj.protocol.result.AbstractResultsetRow.decodeAndCreateReturnValue(AbstractResultsetRow.java:87)
    at com.mysql.cj.protocol.result.AbstractResultsetRow.getValueFromBytes(AbstractResultsetRow.java:241)
    at com.mysql.cj.protocol.a.result.TextBufferRow.getValue(TextBufferRow.java:132)
    ... 17 more
Caused by: java.lang.IllegalArgumentException: HOUR_OF_DAY: 2 -> 3
    at java.util.GregorianCalendar.computeTime(GregorianCalendar.java:2829)
    at java.util.Calendar.updateTime(Calendar.java:3393)
    at java.util.Calendar.getTimeInMillis(Calendar.java:1782)
    at com.mysql.cj.result.SqlTimestampValueFactory.localCreateFromTimestamp(SqlTimestampValueFactory.java:108)
    ... 23 more
Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143
```
Any guidance appreciated.
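A hedged reading of this trace: `HOUR_OF_DAY: 2 -> 3` comes out of MySQL Connector/J 8.x when a TIMESTAMP value falls into a wall-clock gap (typically a DST transition) in the JVM's default time zone. Pinning `serverTimezone` in the JDBC URL is the usual workaround; the zone below is an assumption, use the one the data was written in:

```bash
# Same command as above with serverTimezone pinned in the JDBC URL.
sqoop import \
  --connect "jdbc:mysql://localhost:3306/test?serverTimezone=Asia/Shanghai" \
  --username root --password 1 \
  --table exit_tran \
  --hive-import --hive-table exit_tran -m 1 --hive-overwrite
```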

Sqoop import from Oracle to Hive reports an error

Importing a table into Hive fails with the error below; please help.
```
[root@amorsay3 bin]# ./sqoop import --hive-import --connect jdbc:oracle:thin:@192.168.13.168:1521:orcl --username HADOOPLEARN --password zhao --table EMP -m 1 --hive-table emp1
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../hbase does not exist! HBase imports will fail. Please set $HBASE_HOME to the root of your HBase installation.
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../hcatalog does not exist! HCatalog jobs will fail. Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../zookeeper does not exist! Accumulo imports will fail. Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
Warning: $HADOOP_HOME is deprecated.
15/08/11 23:17:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
15/08/11 23:17:02 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/08/11 23:17:02 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
15/08/11 23:17:02 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
15/08/11 23:17:02 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
15/08/11 23:17:02 INFO manager.SqlManager: Using default fetchSize of 1000
15/08/11 23:17:02 INFO tool.CodeGenTool: Beginning code generation
15/08/11 23:17:03 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:03 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM EMP t WHERE 1=0
15/08/11 23:17:03 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoophive/hadoop-1.2.1
Note: /tmp/sqoop-root/compile/efda22b79cedc05841de35698062fbbc/EMP.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/08/11 23:17:04 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/efda22b79cedc05841de35698062fbbc/EMP.jar
15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:04 INFO mapreduce.ImportJobBase: Beginning import of EMP
15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:06 INFO db.DBInputFormat: Using read commited transaction isolation
15/08/11 23:17:06 INFO mapred.JobClient: Cleaning up the staging area hdfs://192.168.14.168:9000/hadoop/mapred/staging/root/.staging/job_201508111912_0003
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class org.apache.hadoop.mapreduce.JobContext, but interface was expected
    at org.apache.sqoop.config.ConfigurationHelper.getJobNumMaps(ConfigurationHelper.java:65)
    at com.cloudera.sqoop.config.ConfigurationHelper.getJobNumMaps(ConfigurationHelper.java:36)
    at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:125)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
    at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196)
    at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266)
    at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
    at org.apache.sqoop.manager.OracleManager.importTable(OracleManager.java:444)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
```

Sqoop import into Hive: scheduled task from SQL Server to Hive

Sqoop import into Hive, from SQL Server to Hive as a scheduled task: create a sqoop job and then schedule it with crontab. Does anyone have a good worked example?
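A hedged sketch of the usual pattern: save the import as a named job, then schedule `sqoop job --exec` from cron. The job name, schedule, paths, and connection details below are all placeholders:

```bash
# Create the saved job once; --password-file avoids putting the password on
# the command line (the file lives on HDFS or the local FS with tight perms).
sqoop job --create sqlserver_to_hive -- import \
  --connect "jdbc:sqlserver://<host>:1433;database=<db>" \
  --username <user> --password-file /user/<me>/sqoop.pwd \
  --table <table> --hive-import --hive-table <hive_table>
# crontab -e, run every day at 01:30:
#   30 1 * * * /opt/sqoop/bin/sqoop job --exec sqlserver_to_hive >> /var/log/sqoop_job.log 2>&1
```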

Extracting data from Oracle to HDFS with Sqoop

Extracting data from Oracle to HDFS reports errors, yet the job still finishes successfully in the end. 5.4 million rows, 3.2 GB of data. Please help me figure out how to fix this, thanks! ![screenshot](https://img-ask.csdn.net/upload/201510/30/1446214879_293181.png)

Error importing data from Oracle into Hive with Sqoop

![screenshot](https://img-ask.csdn.net/upload/201504/16/1429180711_592161.png)
```
bash-4.1$ sqoop import --connect jdbc:oracle:thin:@192.168.1.169:1521:orcl --username HADOOP --password hadoop2015 --table CALC_UPAY_DATE_HADOOP_HDFS --split-by UPAYID --hive-import
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
find: paths must precede expression: ant-eclipse-1.0-jvm1.2.jar
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
15/04/16 03:28:13 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.2
15/04/16 03:28:13 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/04/16 03:28:13 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
15/04/16 03:28:13 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
15/04/16 03:28:13 INFO manager.SqlManager: Using default fetchSize of 1000
15/04/16 03:28:13 INFO tool.CodeGenTool: Beginning code generation
15/04/16 03:28:13 INFO manager.OracleManager: Time zone has been set to GMT
15/04/16 03:28:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CALC_UPAY_DATE_HADOOP_HDFS t WHERE 1=0
15/04/16 03:28:14 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-hdfs/compile/e9286bf0e7d796ba396d3155210012b0/CALC_UPAY_DATE_HADOOP_HDFS.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/04/16 03:28:15 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/e9286bf0e7d796ba396d3155210012b0/CALC_UPAY_DATE_HADOOP_HDFS.jar
15/04/16 03:28:15 INFO mapreduce.ImportJobBase: Beginning import of CALC_UPAY_DATE_HADOOP_HDFS
15/04/16 03:28:15 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/04/16 03:28:15 INFO manager.OracleManager: Time zone has been set to GMT
15/04/16 03:28:16 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/04/16 03:28:16 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.1.201:8032
15/04/16 03:28:18 INFO db.DBInputFormat: Using read commited transaction isolation
15/04/16 03:28:18 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(UPAYID), MAX(UPAYID) FROM CALC_UPAY_DATE_HADOOP_HDFS
15/04/16 03:28:19 INFO mapreduce.JobSubmitter: number of splits:4
15/04/16 03:28:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1429145594985_0020
15/04/16 03:28:20 INFO impl.YarnClientImpl: Submitted application application_1429145594985_0020
15/04/16 03:28:20 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1429145594985_0020/
15/04/16 03:28:20 INFO mapreduce.Job: Running job: job_1429145594985_0020
15/04/16 03:28:31 INFO mapreduce.Job: Job job_1429145594985_0020 running in uber mode : false
15/04/16 03:28:31 INFO mapreduce.Job: map 0% reduce 0%
15/04/16 03:28:59 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000000_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
15/04/16 03:29:00 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000002_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
15/04/16 03:29:01 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000001_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
```
Exporting data from Hive to Oracle with Sqoop works fine for me.

Help: Sqoop export from Hive to Oracle fails when the target table has DATE columns

The Sqoop command:
```
sqoop export \
--connect jdbc:oracle:thin:@(description=(address=(protocol=tcp)(port=1521)(host=172.18.50.5))(connect_data=(service_name=rac))) \
--username dsp \
--password rac \
--table DSP.S_F_TKFTIS_ORDER_HIS \
--export-dir /user/hive2/warehouse/dml.db/dml_s_f_tkftis_order_his \
--columns L_SERIALNO,C_FLAG,C_ACCOTYPE,C_ACCO,C_TYPE,L_SERVICEID,C_MODE,D_DATE,C_ISACCO,C_FROM,C_USERCODE,D_SERVICEEND,D_SERVICESTART \
--input-fields-terminated-by '\001' \
--input-null-string '\\N' \
--input-null-non-string '\\N'
```
The target table DDL:
```
create table S_F_TKFTIS_ORDER_HIS
(
  l_serialno     VARCHAR2(40),
  c_flag         CHAR(1),
  c_accotype     CHAR(1),
  c_acco         VARCHAR2(40),
  c_type         CHAR(1),
  l_serviceid    VARCHAR2(40),
  c_mode         CHAR(1),
  d_date         VARCHAR2(40),
  c_isacco       CHAR(1),
  c_from         CHAR(1),
  c_usercode     VARCHAR2(16),
  d_serviceend   VARCHAR2(40),
  d_servicestart VARCHAR2(40)
)
tablespace DSP_DATA
  pctfree 10
  initrans 1
  maxtrans 255
  storage
  (
    initial 64K
    next 1M
    minextents 1
    maxextents unlimited
  );
```
If I change all the Oracle target columns to VARCHAR the export works; as soon as any column is DATE it fails. Could someone help me figure out why?
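A hedged workaround, given that the all-VARCHAR2 version already works: export into a VARCHAR2 staging table, then convert inside Oracle with TO_DATE. The staging table name and the date format mask below are assumptions about the data:

```bash
# Export unchanged into an assumed staging table (_STG, all VARCHAR2), then
# convert the date strings in Oracle. The TO_DATE mask must match the strings
# Hive actually produced.
sqoop export \
  --connect "jdbc:oracle:thin:@(description=(address=(protocol=tcp)(port=1521)(host=172.18.50.5))(connect_data=(service_name=rac)))" \
  --username dsp --password rac \
  --table DSP.S_F_TKFTIS_ORDER_HIS_STG \
  --export-dir /user/hive2/warehouse/dml.db/dml_s_f_tkftis_order_his \
  --input-fields-terminated-by '\001' \
  --input-null-string '\\N' --input-null-non-string '\\N'
# then, from sqlplus or similar:
#   INSERT INTO DSP.S_F_TKFTIS_ORDER_HIS
#   SELECT L_SERIALNO, C_FLAG, ..., TO_DATE(D_DATE, 'YYYY-MM-DD HH24:MI:SS'), ...
#   FROM DSP.S_F_TKFTIS_ORDER_HIS_STG;
```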

Why do the columns get scrambled when I export data from Hive to MySQL with Sqoop?

The data structure in Hive looks like this: ![screenshot](https://img-ask.csdn.net/upload/201909/28/1569660068_148900.png) But once it reaches MySQL it becomes this: ![screenshot](https://img-ask.csdn.net/upload/201909/28/1569660119_601059.png) The fields are completely scrambled.
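A hedged guess at the cause: Hive's default field delimiter is `\001` while `sqoop export` assumes `,`, so a mismatch slices the rows in the wrong places and columns land out of order. Pinning both the delimiter and the column order is the usual fix; all names below are placeholders:

```bash
# Match the delimiter of the Hive files and pin the column order explicitly.
sqoop export \
  --connect "jdbc:mysql://<host>:3306/<db>" \
  --username <user> --password <password> \
  --table <mysql_table> \
  --columns "col1,col2,col3" \
  --export-dir /user/hive/warehouse/<db>.db/<table> \
  --input-fields-terminated-by '\001'
```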

Sqoop import from MySQL to Hive fails with class not found

![screenshot](https://img-ask.csdn.net/upload/201507/20/1437381730_990023.png)

How to handle multiple partitions when importing MySQL data into Hive with Sqoop

For a single partition you can simply specify --hive-partition-key and --hive-partition-value; how do you specify multiple partitions?
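One answer, hedged: the classic `--hive-partition-key`/`--hive-partition-value` pair only supports a single static partition, but the HCatalog integration accepts comma-separated lists. A sketch with assumed names:

```bash
# Multiple static partition keys/values via HCatalog (comma-separated, in the
# same order in both lists). The target table must already be partitioned by
# these columns.
sqoop import \
  --connect "jdbc:mysql://<host>:3306/<db>" \
  --username <user> --password <password> \
  --table <table> \
  --hcatalog-database <hive_db> \
  --hcatalog-table <hive_table> \
  --hcatalog-partition-keys year,month \
  --hcatalog-partition-values 2018,09
```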

Question about Sqoop export from HDFS to MySQL

Requirement: move data from a SQL Server database into MySQL, but in practice only 1 record gets imported before the job ends (the real data is 600+ rows). Looking into the cause, it should be the record delimiter that makes it stop after one record. The commands:
1. Import from SQL Server into HDFS:
```
sqoop import \
--connect "jdbc:sqlserver://192.168.1.130:1433;database=测试库" \
--username sa \
--password 123456 \
--table=t_factfoud \
--target-dir /tmp/sqoop_data/900804ebea3d4ec79a036604ed3c93a0_2014_yw/t_factfoud9 \
--fields-terminated-by '\t' --null-string '\\N' --null-non-string '\\N' --lines-terminated-by '\001' \
--split-by billid -m 1
```
2. Export from HDFS into MySQL:
```
sqoop export \
--connect 'jdbc:mysql://192.168.1.38:3306/xiayi?useUnicode=true&characterEncoding=utf-8' \
--username root \
--password 123456 \
--table t_factfoud \
--export-dir /tmp/sqoop_data/900804ebea3d4ec79a036604ed3c93a0_2014_yw/t_factfoud9 \
-m 1 \
--fields-terminated-by '\t' \
--null-string '\\N' --null-non-string '\\N' \
--lines-terminated-by '\001'
```
Current results:
1. The t_factfoud table in SQL Server has 600 records, and they all land in HDFS correctly.
2. The export from HDFS to MySQL imports exactly one record correctly and then stops. Screenshot:
![screenshot](https://img-ask.csdn.net/upload/201805/31/1527756119_961528.jpg)
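The poster's own diagnosis looks plausible, hedged: with `--lines-terminated-by '\001'` the file in HDFS has no newlines between records, and the export side (which was given `--lines-terminated-by` rather than `--input-lines-terminated-by`) parses it as one giant record. The simplest layout is to let records end with the default newline and keep `\t` only between fields:

```bash
# Re-import without overriding the record terminator; records then end with
# '\n' and the default sqoop export settings can parse them.
sqoop import \
  --connect "jdbc:sqlserver://192.168.1.130:1433;database=测试库" \
  --username sa --password 123456 \
  --table t_factfoud \
  --target-dir /tmp/sqoop_data/900804ebea3d4ec79a036604ed3c93a0_2014_yw/t_factfoud9 \
  --fields-terminated-by '\t' --null-string '\\N' --null-non-string '\\N' \
  --split-by billid -m 1
```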

When exporting a Hive ORC table through Sqoop, can we avoid starting the Hive Metastore?

In a Kerberos environment, when exporting a Hive ORC table through sqoop + hcatalog, can we avoid starting the Hive Metastore? Testing shows that without Kerberos authentication, exporting ORC data through Sqoop does not require the Hive Metastore, but with Kerberos enabled, exporting ORC data from HDFS through Sqoop does require it. Is there a way to export ORC data without starting the Hive Metastore in a Kerberos environment?

Can Sqoop 1 incremental imports go directly into Hive or HBase?

Can Sqoop 1 incremental imports go directly into Hive or HBase?

Sqoop export from Hive to MySQL fails, please take a look

The command I ran: liuyanbing@ubuntu:/opt/sqoop$ bin/sqoop export --connect jdbc:mysql://localhost:3306/dbtaobao --username root --password root --table user_log --export-dir '/user/hive/warehouse/dbtaobao.db/inner_user_log' --fields-terminated-by ','; The error output: Warning: /opt/sqoop/../hcatalog does not exist! HCatalog jobs will fail. Please set $HCAT_HOME to the root of your HCatalog installation. Warning: /opt/sqoop/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation. Warning: /opt/sqoop/../zookeeper does not exist! Accumulo imports will fail. Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation. 2019-06-11 16:05:04,541 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6 2019-06-11 16:05:04,573 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. 2019-06-11 16:05:04,678 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset. 2019-06-11 16:05:04,678 INFO tool.CodeGenTool: Beginning code generation Tue Jun 11 16:05:04 CST 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. 2019-06-11 16:05:05,241 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `user_log` AS t LIMIT 1 2019-06-11 16:05:05,379 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `user_log` AS t LIMIT 1 2019-06-11 16:05:05,392 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /bigdata/hadoop-3.1.1 Note: /tmp/sqoop-liuyanbing/compile/990c7e516f6811ff0f7c264686938932/user_log.java uses or overrides a deprecated API. Note: Recompile with -Xlint:deprecation for details. 2019-06-11 16:05:09,951 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-liuyanbing/compile/990c7e516f6811ff0f7c264686938932/user_log.jar 2019-06-11 16:05:09,960 INFO mapreduce.ExportJobBase: Beginning export of user_log 2019-06-11 16:05:09,960 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address 2019-06-11 16:05:10,093 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2019-06-11 16:05:10,131 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar 2019-06-11 16:05:11,220 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative 2019-06-11 16:05:11,224 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative 2019-06-11 16:05:11,225 INFO Configuration.deprecation: mapred.map.tasks is deprecated. 
Instead, use mapreduce.job.maps 2019-06-11 16:05:11,399 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032 2019-06-11 16:05:12,478 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/liuyanbing/.staging/job_1560238973821_0003 2019-06-11 16:05:15,272 WARN hdfs.DataStreamer: Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1252) at java.lang.Thread.join(Thread.java:1326) at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:986) at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:640) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:810) 2019-06-11 16:05:18,771 INFO input.FileInputFormat: Total input files to process : 1 2019-06-11 16:05:18,780 INFO input.FileInputFormat: Total input files to process : 1 2019-06-11 16:05:19,285 INFO mapreduce.JobSubmitter: number of splits:4 2019-06-11 16:05:19,352 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative 2019-06-11 16:05:19,353 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled 2019-06-11 16:05:19,472 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1560238973821_0003 2019-06-11 16:05:19,473 INFO mapreduce.JobSubmitter: Executing with tokens: [] 2019-06-11 16:05:19,959 INFO conf.Configuration: resource-types.xml not found 2019-06-11 16:05:19,959 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'. 2019-06-11 16:05:20,049 INFO impl.YarnClientImpl: Submitted application application_1560238973821_0003 2019-06-11 16:05:20,105 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1560238973821_0003/ 2019-06-11 16:05:20,106 INFO mapreduce.Job: Running job: job_1560238973821_0003 2019-06-11 16:05:29,273 INFO mapreduce.Job: Job job_1560238973821_0003 running in uber mode : false 2019-06-11 16:05:29,286 INFO mapreduce.Job: map 0% reduce 0% 2019-06-11 16:05:42,450 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_0, Status : FAILED [2019-06-11 16:05:39.558]Container [pid=22666,containerID=container_1560238973821_0003_01_000004] is running 318323200B beyond the 'VIRTUAL' memory limit. Current usage: 125.2 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000004 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22910 22666 22666 22666 (java) 302 45 2558558208 31405 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_0 4 |- 22666 22656 22666 22666 (bash) 0 0 14622720 634 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_0 4 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004/stderr [2019-06-11 16:05:40.618]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.619]Container exited with a non-zero exit code 143. 2019-06-11 16:05:42,479 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_0, Status : FAILED [2019-06-11 16:05:39.558]Container [pid=22651,containerID=container_1560238973821_0003_01_000003] is running 320690688B beyond the 'VIRTUAL' memory limit. Current usage: 127.5 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000003 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22955 22651 22651 22651 (java) 296 49 2560925696 32025 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_0 3 |- 22651 22649 22651 22651 (bash) 0 0 14622720 627 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_0 3 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003/stderr [2019-06-11 16:05:40.618]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.621]Container exited with a non-zero exit code 143. 2019-06-11 16:05:42,480 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_0, Status : FAILED [2019-06-11 16:05:38.617]Container [pid=22749,containerID=container_1560238973821_0003_01_000005] is running 320125440B beyond the 'VIRTUAL' memory limit. Current usage: 126.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000005 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22987 22749 22749 22749 (java) 324 37 2560360448 31709 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_0 5 |- 22749 22720 22749 22749 (bash) 0 1 14622720 640 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_0 5 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005/stderr [2019-06-11 16:05:40.620]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.622]Container exited with a non-zero exit code 143. 2019-06-11 16:05:42,482 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_0, Status : FAILED [2019-06-11 16:05:39.558]Container [pid=22675,containerID=container_1560238973821_0003_01_000002] is running 319543808B beyond the 'VIRTUAL' memory limit. Current usage: 125.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000002 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22937 22675 22675 22675 (java) 316 38 2559778816 31497 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_0 2 |- 22675 22670 22675 22675 (bash) 0 0 14622720 612 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_0 2 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002/stderr [2019-06-11 16:05:40.619]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.622]Container exited with a non-zero exit code 143. 2019-06-11 16:05:52,546 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_1, Status : FAILED [2019-06-11 16:05:50.910]Container [pid=23116,containerID=container_1560238973821_0003_01_000006] is running 282286592B beyond the 'VIRTUAL' memory limit. Current usage: 68.6 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000006 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23194 23116 23116 23116 (java) 85 29 2522521600 16852 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_1 6 |- 23116 23115 23116 23116 (bash) 0 1 14622720 710 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_1 6 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006/stderr [2019-06-11 16:05:50.970]Container killed on request. Exit code is 143 [2019-06-11 16:05:51.012]Container exited with a non-zero exit code 143. 2019-06-11 16:05:55,561 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_1, Status : FAILED [2019-06-11 16:05:54.193]Container [pid=23396,containerID=container_1560238973821_0003_01_000009] is running 313866752B beyond the 'VIRTUAL' memory limit. Current usage: 111.1 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000009 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23396 23394 23396 23396 (bash) 0 1 14622720 710 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000009/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_1 9 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009/stderr |- 23473 23396 23396 23396 (java) 245 40 2554101760 27743 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000009/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_1 9 [2019-06-11 16:05:54.228]Container killed on request. Exit code is 143 [2019-06-11 16:05:54.263]Container exited with a non-zero exit code 143. 2019-06-11 16:05:55,563 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_1, Status : FAILED [2019-06-11 16:05:54.332]Container [pid=23304,containerID=container_1560238973821_0003_01_000008] is running 314042880B beyond the 'VIRTUAL' memory limit. Current usage: 113.8 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000008 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23381 23304 23304 23304 (java) 265 51 2554277888 28423 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_1 8 |- 23304 23302 23304 23304 (bash) 0 1 14622720 720 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_1 8 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008/stderr [2019-06-11 16:05:54.353]Container killed on request. Exit code is 143 [2019-06-11 16:05:54.381]Container exited with a non-zero exit code 143. 2019-06-11 16:05:55,565 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_1, Status : FAILED [2019-06-11 16:05:54.408]Container [pid=23200,containerID=container_1560238973821_0003_01_000007] is running 314497536B beyond the 'VIRTUAL' memory limit. Current usage: 115.5 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000007 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23200 23198 23200 23200 (bash) 0 1 14622720 711 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_1 7 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007/stderr |- 23277 23200 23200 23200 (java) 257 60 2554732544 28852 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_1 7 [2019-06-11 16:05:54.463]Container killed on request. Exit code is 143 [2019-06-11 16:05:54.482]Container exited with a non-zero exit code 143. 2019-06-11 16:06:01,619 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_2, Status : FAILED [2019-06-11 16:06:00.584]Container [pid=23515,containerID=container_1560238973821_0003_01_000011] is running 337451520B beyond the 'VIRTUAL' memory limit. Current usage: 203.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000011 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23515 23513 23515 23515 (bash) 0 1 14622720 712 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_2 11 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011/stderr |- 23592 23515 23515 23515 (java) 456 89 2577686528 51352 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_2 11 [2019-06-11 16:06:00.602]Container killed on request. Exit code is 143 [2019-06-11 16:06:00.659]Container exited with a non-zero exit code 143. 2019-06-11 16:06:05,651 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_2, Status : FAILED [2019-06-11 16:06:03.816]Container [pid=23651,containerID=container_1560238973821_0003_01_000012] is running 331475456B beyond the 'VIRTUAL' memory limit. Current usage: 173.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000012 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23728 23651 23651 23651 (java) 418 39 2571710464 43768 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000012/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_2 12 |- 23651 23649 23651 23651 (bash) 0 1 14622720 707 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000012/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_2 12 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012/stderr [2019-06-11 16:06:03.981]Container killed on request. Exit code is 143 [2019-06-11 16:06:03.986]Container exited with a non-zero exit code 143. 2019-06-11 16:06:08,677 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_2, Status : FAILED [2019-06-11 16:06:07.127]Container [pid=23848,containerID=container_1560238973821_0003_01_000014] is running 335940096B beyond the 'VIRTUAL' memory limit. Current usage: 198.2 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000014 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23848 23847 23848 23848 (bash) 0 1 14622720 714 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000014/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_2 14 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014/stderr |- 23926 23848 23848 23848 (java) 408 59 2576175104 50032 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000014/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_2 14 [2019-06-11 16:06:07.186]Container killed on request. Exit code is 143 [2019-06-11 16:06:07.201]Container exited with a non-zero exit code 143. 2019-06-11 16:06:08,678 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_2, Status : FAILED [2019-06-11 16:06:07.227]Container [pid=23751,containerID=container_1560238973821_0003_01_000013] is running 337357312B beyond the 'VIRTUAL' memory limit. Current usage: 192.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000013 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23829 23751 23751 23751 (java) 463 52 2577592320 48632 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000013/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_2 13 |- 23751 23749 23751 23751 (bash) 0 1 14622720 706 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000013/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_2 13 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013/stderr [2019-06-11 16:06:07.280]Container killed on request. Exit code is 143 [2019-06-11 16:06:07.360]Container exited with a non-zero exit code 143. 2019-06-11 16:06:12,703 INFO mapreduce.Job: map 100% reduce 0% 2019-06-11 16:06:12,711 INFO mapreduce.Job: Job job_1560238973821_0003 failed with state FAILED due to: Task failed task_1560238973821_0003_m_000002 Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0 2019-06-11 16:06:12,979 INFO mapreduce.Job: Counters: 13 Job Counters Failed map tasks=13 Killed map tasks=3 Launched map tasks=16 Other local map tasks=12 Data-local map tasks=4 Total time spent by all maps in occupied slots (ms)=124936 Total time spent by all reduces in occupied slots (ms)=0 Total time spent by all map tasks (ms)=124936 Total vcore-milliseconds taken by all map tasks=124936 Total megabyte-milliseconds taken by all map tasks=127934464 Map-Reduce Framework CPU time spent (ms)=0 Physical memory (bytes) snapshot=0 Virtual memory (bytes) snapshot=0 2019-06-11 16:06:12,986 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead 2019-06-11 16:06:12,990 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 61.7517 seconds (0 bytes/sec) 2019-06-11 16:06:12,999 INFO mapreduce.ExportJobBase: Exported 0 records. 2019-06-11 16:06:12,999 ERROR tool.ExportTool: Error during export: Export job failed! I am a beginner and cannot find the error; please help me take a look.
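A hedged reading of these failures: every container dies for exceeding the *virtual* memory limit ("is running ...B beyond the 'VIRTUAL' memory limit"), which is a YARN setting rather than anything Sqoop-specific. Two common fixes: set `yarn.nodemanager.vmem-check-enabled` to `false` in yarn-site.xml and restart YARN, or give the mappers more memory per job, for example:

```bash
# Generic Hadoop -D options must come right after the tool name, before the
# sqoop-specific arguments. The memory numbers are illustrative.
sqoop export \
  -D mapreduce.map.memory.mb=2048 \
  -D mapreduce.map.java.opts=-Xmx1638m \
  --connect jdbc:mysql://localhost:3306/dbtaobao \
  --username root --password root \
  --table user_log \
  --export-dir '/user/hive/warehouse/dbtaobao.db/inner_user_log' \
  --fields-terminated-by ','
```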

A table imported from MySQL into Hive with Sqoop displays incorrectly when queried

The original data in MySQL:
![screenshot](https://img-ask.csdn.net/upload/201903/26/1553587553_692693.png)
I imported the table into Hive with:
```
bin/sqoop import --connect jdbc:mysql://192.168.12.69:3306/userdb --username root --password 123 --table emp --fields-terminated-by '\001' --hive-import --hive-table sqooptohive.emp_hive --hive-overwrite --delete-target-dir --m 1
```
After the import succeeds, the corresponding file downloaded from HDFS looks like:
![screenshot](https://img-ask.csdn.net/upload/201903/26/1553587271_642727.png)
Querying in Hive with:
```
select * from emp_hive;
```
all the field values come back NULL, as below:
![screenshot](https://img-ask.csdn.net/upload/201903/26/1553587452_172392.png)
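A hedged first check for all-NULL query results: the table's declared field delimiter usually does not match the bytes actually in the files. Comparing the table definition against the raw file is quick; the warehouse path below is an assumption:

```bash
# Inspect the delimiter Hive thinks the table uses, then peek at the raw bytes
# (od -c renders \001 visibly as \001).
hive -e "SHOW CREATE TABLE sqooptohive.emp_hive;"
hdfs dfs -cat /user/hive/warehouse/sqooptohive.db/emp_hive/* | head -n 2 | od -c | head
```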

Garbled Chinese when importing an Oracle table into Hive with Sqoop

A question for the experts: after importing an Oracle table into Hive, the Chinese text is garbled. The Oracle database's character set is US7ASCII. Has anyone run into this kind of problem, or does anyone have a good suggestion for solving it? Thanks. Note: I have already tried convert(nsrdzdah,'utf8','US7ASCII'), but the text is still garbled; I also thought about patching the Hive JDBC jar, but that seemed unreliable so I did not try it.

Question about incremental synchronization in Sqoop

I wrote my own incremental sync statement, as follows:
```
sqoop job --create MY_SQOOP_TEST -- import --connect jdbc:oracle:thin:@xxx:orcl --username XXX --password XXX --table MY_TEST --hive-import --hive-table MY_SQOOP_TEST --incremental lastmodified --check-column sj --last-value '2016/12/20 8:09:46'
```
My understanding is: this first creates MY_SQOOP_TEST, then goes to Oracle, finds the MY_TEST table, and imports any rows with a timestamp later than 2016/12/20 8:09:46 into the my_sqoop_test table in Hive. Is that understanding correct? If so, how do I actually run this job?
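That understanding matches how `--incremental lastmodified` behaves (rows whose `sj` is newer than the saved `--last-value` are pulled, and Sqoop advances the value after a successful run). Running and inspecting the saved job uses the standard `sqoop job` subcommands:

```bash
sqoop job --list                  # list saved jobs
sqoop job --show MY_SQOOP_TEST    # show its definition, including the current last-value
sqoop job --exec MY_SQOOP_TEST    # execute it; schedule this line from crontab if needed
```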
