Sqoop on Linux fails to connect to a MySQL database on Windows

I'm just getting started with Hadoop and ran into a problem with Sqoop data migration: Sqoop on Linux cannot connect to the MySQL database on the Windows machine, and none of the many fixes found online have solved it.
The Linux system is CentOS 6.4, with Hadoop 2.4.1 and Sqoop 1.4.7; Windows is running MySQL 5.7.
Here is the error output:

[root@itcast01 bin]# ./sqoop list-tables --connect jdbc:mysql://192.168.147.100:3306/sqoopex1 --username root -password 1234

18/07/12 16:17:28 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
18/07/12 16:17:28 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/07/12 16:17:28 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/07/12 16:18:31 ERROR manager.CatalogQueryManager: Failed to list tables
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet successfully received from the server was 1,531,383,511,816 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2214)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:773)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:46)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:352)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:282)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:904)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:59)
at org.apache.sqoop.manager.CatalogQueryManager.listTables(CatalogQueryManager.java:102)
at org.apache.sqoop.tool.ListTablesTool.run(ListTablesTool.java:49)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet successfully received from the server was 1,531,383,511,809 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:341)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2137)
... 21 more
Caused by: java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at java.net.Socket.<init>(Socket.java:434)
at java.net.Socket.<init>(Socket.java:244)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:253)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:290)
... 22 more

The ZooKeeper and Hadoop services are both running, and the firewall has been turned off. Some posts found online suggest editing my.ini and adding a line under [mysqld]: wait_timeout=86400, but after making that change I still get the same error. The MySQL privileges have also been granted. The JDBC driver in use is mysql-connector-5.1.8.jar.
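Since ping only proves basic network reachability, one quick check worth doing from the CentOS host is whether TCP port 3306 itself is reachable — a rough sketch, assuming telnet or nc is available on the VM:

```
# Probe the MySQL port directly from the Linux host (telnet/nc availability is an assumption)
telnet 192.168.147.100 3306
# or
nc -vz 192.168.147.100 3306
```

If this hangs and eventually times out, the problem is a firewall or port issue rather than wait_timeout or MySQL privileges.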
The IP being connected to, 192.168.147.100, is the Windows VMnet1 address, and it responds to ping. The database just cannot be reached. Connecting with Navicat works fine.

Does anyone know where my problem is? Much appreciated!

2 answers

Problem solved: in the Windows firewall's inbound rules I allowed port 3306 for all profiles, and the connection went through. So the cause was the Windows firewall blocking the port.
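For anyone hitting the same thing, a hedged sketch of adding that inbound rule from an elevated Windows command prompt (the rule name here is arbitrary, not from the original post):

```
netsh advfirewall firewall add rule name="MySQL 3306" dir=in action=allow protocol=TCP localport=3306
```

The same rule can also be created through the Windows Firewall with Advanced Security GUI, which is what the answer above describes.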

You could try checking whether the required dependency jars have been placed under lib.
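As a sketch of what that usually means for Sqoop 1.x talking to MySQL (the jar file name and SQOOP_HOME path below are illustrative, not from this thread):

```
# Copy the MySQL JDBC driver into Sqoop's lib directory so it ends up on Sqoop's classpath
cp mysql-connector-java-5.1.x-bin.jar $SQOOP_HOME/lib/
ls $SQOOP_HOME/lib | grep -i mysql    # confirm the driver jar is present
```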

J_yl02
yuriFish: With Sqoop, the lib directory already ships its own jars — isn't it just the JDBC driver jar that needs to be added?
almost 2 years ago
Other related questions
Sqoop import into HBase fails

Sqoop import into HBase fails. My versions: Hadoop 3.2.1, HBase 2.2.3, Sqoop 1.4.7. I know it's a version problem — how do I solve it? I copied everything from HBase's lib into Sqoop and it still isn't fixed.
```
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.HBaseAdmin.<init>(Lorg/apache/hadoop/conf/Configuration;)V
    at org.apache.sqoop.mapreduce.HBaseImportJob.jobSetup(HBaseImportJob.java:163)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:268)
    at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
    at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:127)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:520)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
```

sqoop2-tool verify reports an error

Sqoop 1.99.5 installed on Hadoop 2.6.0; running the verification tool (./sqoop2-tool verify) fails. The error output is as follows:
Sqoop home directory: /home/kael/sqoop-1.99.5-bin-hadoop200
Setting SQOOP_HTTP_PORT: 12000
Setting SQOOP_ADMIN_PORT: 12001
Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dsqoop.http.port=12000 -Dsqoop.admin.port=12001
Apr 17, 2015 12:23:47 AM org.apache.catalina.startup.Tool main
SEVERE: Exception calling main() method
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.catalina.startup.Tool.main(Tool.java:225)
Caused by: java.lang.ClassNotFoundException: org.apache.catalina.startup.Catalina
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.apache.catalina.startup.Bootstrap.init(Bootstrap.java:216)
at org.apache.sqoop.tomcat.TomcatToolRunner.main(TomcatToolRunner.java:47)
... 5 more

Moving MySQL data to the local filesystem with Sqoop2's FTP connector

Is it possible for Sqoop2 to move the contents of a MySQL database to the local machine using the FTP connector?

Sqoop export from Hive into MySQL fails — please take a look

This is the command that was run:
liuyanbing@ubuntu:/opt/sqoop$ bin/sqoop export --connect jdbc:mysql://localhost:3306/dbtaobao --username root --password root --table user_log --export-dir '/user/hive/warehouse/dbtaobao.db/inner_user_log' --fields-terminated-by ',';
The error output:
Warning: /opt/sqoop/../hcatalog does not exist! HCatalog jobs will fail. Please set $HCAT_HOME to the root of your HCatalog installation. Warning: /opt/sqoop/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation. Warning: /opt/sqoop/../zookeeper does not exist! Accumulo imports will fail. Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation. 2019-06-11 16:05:04,541 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6 2019-06-11 16:05:04,573 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. 2019-06-11 16:05:04,678 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset. 2019-06-11 16:05:04,678 INFO tool.CodeGenTool: Beginning code generation Tue Jun 11 16:05:04 CST 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. 2019-06-11 16:05:05,241 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `user_log` AS t LIMIT 1 2019-06-11 16:05:05,379 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `user_log` AS t LIMIT 1 2019-06-11 16:05:05,392 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /bigdata/hadoop-3.1.1 Note: /tmp/sqoop-liuyanbing/compile/990c7e516f6811ff0f7c264686938932/user_log.java uses or overrides a deprecated API. Note: Recompile with -Xlint:deprecation for details. 2019-06-11 16:05:09,951 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-liuyanbing/compile/990c7e516f6811ff0f7c264686938932/user_log.jar 2019-06-11 16:05:09,960 INFO mapreduce.ExportJobBase: Beginning export of user_log 2019-06-11 16:05:09,960 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address 2019-06-11 16:05:10,093 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2019-06-11 16:05:10,131 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar 2019-06-11 16:05:11,220 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative 2019-06-11 16:05:11,224 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative 2019-06-11 16:05:11,225 INFO Configuration.deprecation: mapred.map.tasks is deprecated. 
Instead, use mapreduce.job.maps 2019-06-11 16:05:11,399 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032 2019-06-11 16:05:12,478 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/liuyanbing/.staging/job_1560238973821_0003 2019-06-11 16:05:15,272 WARN hdfs.DataStreamer: Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1252) at java.lang.Thread.join(Thread.java:1326) at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:986) at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:640) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:810) 2019-06-11 16:05:18,771 INFO input.FileInputFormat: Total input files to process : 1 2019-06-11 16:05:18,780 INFO input.FileInputFormat: Total input files to process : 1 2019-06-11 16:05:19,285 INFO mapreduce.JobSubmitter: number of splits:4 2019-06-11 16:05:19,352 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative 2019-06-11 16:05:19,353 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled 2019-06-11 16:05:19,472 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1560238973821_0003 2019-06-11 16:05:19,473 INFO mapreduce.JobSubmitter: Executing with tokens: [] 2019-06-11 16:05:19,959 INFO conf.Configuration: resource-types.xml not found 2019-06-11 16:05:19,959 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'. 2019-06-11 16:05:20,049 INFO impl.YarnClientImpl: Submitted application application_1560238973821_0003 2019-06-11 16:05:20,105 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1560238973821_0003/ 2019-06-11 16:05:20,106 INFO mapreduce.Job: Running job: job_1560238973821_0003 2019-06-11 16:05:29,273 INFO mapreduce.Job: Job job_1560238973821_0003 running in uber mode : false 2019-06-11 16:05:29,286 INFO mapreduce.Job: map 0% reduce 0% 2019-06-11 16:05:42,450 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_0, Status : FAILED [2019-06-11 16:05:39.558]Container [pid=22666,containerID=container_1560238973821_0003_01_000004] is running 318323200B beyond the 'VIRTUAL' memory limit. Current usage: 125.2 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000004 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22910 22666 22666 22666 (java) 302 45 2558558208 31405 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_0 4 |- 22666 22656 22666 22666 (bash) 0 0 14622720 634 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_0 4 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004/stderr [2019-06-11 16:05:40.618]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.619]Container exited with a non-zero exit code 143. 2019-06-11 16:05:42,479 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_0, Status : FAILED [2019-06-11 16:05:39.558]Container [pid=22651,containerID=container_1560238973821_0003_01_000003] is running 320690688B beyond the 'VIRTUAL' memory limit. Current usage: 127.5 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000003 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22955 22651 22651 22651 (java) 296 49 2560925696 32025 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_0 3 |- 22651 22649 22651 22651 (bash) 0 0 14622720 627 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_0 3 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003/stderr [2019-06-11 16:05:40.618]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.621]Container exited with a non-zero exit code 143. 2019-06-11 16:05:42,480 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_0, Status : FAILED [2019-06-11 16:05:38.617]Container [pid=22749,containerID=container_1560238973821_0003_01_000005] is running 320125440B beyond the 'VIRTUAL' memory limit. Current usage: 126.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000005 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22987 22749 22749 22749 (java) 324 37 2560360448 31709 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_0 5 |- 22749 22720 22749 22749 (bash) 0 1 14622720 640 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_0 5 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005/stderr [2019-06-11 16:05:40.620]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.622]Container exited with a non-zero exit code 143. 2019-06-11 16:05:42,482 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_0, Status : FAILED [2019-06-11 16:05:39.558]Container [pid=22675,containerID=container_1560238973821_0003_01_000002] is running 319543808B beyond the 'VIRTUAL' memory limit. Current usage: 125.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000002 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22937 22675 22675 22675 (java) 316 38 2559778816 31497 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_0 2 |- 22675 22670 22675 22675 (bash) 0 0 14622720 612 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_0 2 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002/stderr [2019-06-11 16:05:40.619]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.622]Container exited with a non-zero exit code 143. 2019-06-11 16:05:52,546 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_1, Status : FAILED [2019-06-11 16:05:50.910]Container [pid=23116,containerID=container_1560238973821_0003_01_000006] is running 282286592B beyond the 'VIRTUAL' memory limit. Current usage: 68.6 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000006 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23194 23116 23116 23116 (java) 85 29 2522521600 16852 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_1 6 |- 23116 23115 23116 23116 (bash) 0 1 14622720 710 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_1 6 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006/stderr [2019-06-11 16:05:50.970]Container killed on request. Exit code is 143 [2019-06-11 16:05:51.012]Container exited with a non-zero exit code 143. 2019-06-11 16:05:55,561 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_1, Status : FAILED [2019-06-11 16:05:54.193]Container [pid=23396,containerID=container_1560238973821_0003_01_000009] is running 313866752B beyond the 'VIRTUAL' memory limit. Current usage: 111.1 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000009 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23396 23394 23396 23396 (bash) 0 1 14622720 710 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000009/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_1 9 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009/stderr |- 23473 23396 23396 23396 (java) 245 40 2554101760 27743 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000009/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_1 9 [2019-06-11 16:05:54.228]Container killed on request. Exit code is 143 [2019-06-11 16:05:54.263]Container exited with a non-zero exit code 143. 2019-06-11 16:05:55,563 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_1, Status : FAILED [2019-06-11 16:05:54.332]Container [pid=23304,containerID=container_1560238973821_0003_01_000008] is running 314042880B beyond the 'VIRTUAL' memory limit. Current usage: 113.8 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000008 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23381 23304 23304 23304 (java) 265 51 2554277888 28423 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_1 8 |- 23304 23302 23304 23304 (bash) 0 1 14622720 720 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_1 8 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008/stderr [2019-06-11 16:05:54.353]Container killed on request. Exit code is 143 [2019-06-11 16:05:54.381]Container exited with a non-zero exit code 143. 2019-06-11 16:05:55,565 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_1, Status : FAILED [2019-06-11 16:05:54.408]Container [pid=23200,containerID=container_1560238973821_0003_01_000007] is running 314497536B beyond the 'VIRTUAL' memory limit. Current usage: 115.5 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000007 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23200 23198 23200 23200 (bash) 0 1 14622720 711 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_1 7 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007/stderr |- 23277 23200 23200 23200 (java) 257 60 2554732544 28852 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_1 7 [2019-06-11 16:05:54.463]Container killed on request. Exit code is 143 [2019-06-11 16:05:54.482]Container exited with a non-zero exit code 143. 2019-06-11 16:06:01,619 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_2, Status : FAILED [2019-06-11 16:06:00.584]Container [pid=23515,containerID=container_1560238973821_0003_01_000011] is running 337451520B beyond the 'VIRTUAL' memory limit. Current usage: 203.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000011 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23515 23513 23515 23515 (bash) 0 1 14622720 712 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_2 11 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011/stderr |- 23592 23515 23515 23515 (java) 456 89 2577686528 51352 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_2 11 [2019-06-11 16:06:00.602]Container killed on request. Exit code is 143 [2019-06-11 16:06:00.659]Container exited with a non-zero exit code 143. 2019-06-11 16:06:05,651 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_2, Status : FAILED [2019-06-11 16:06:03.816]Container [pid=23651,containerID=container_1560238973821_0003_01_000012] is running 331475456B beyond the 'VIRTUAL' memory limit. Current usage: 173.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000012 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23728 23651 23651 23651 (java) 418 39 2571710464 43768 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000012/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_2 12 |- 23651 23649 23651 23651 (bash) 0 1 14622720 707 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000012/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_2 12 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012/stderr [2019-06-11 16:06:03.981]Container killed on request. Exit code is 143 [2019-06-11 16:06:03.986]Container exited with a non-zero exit code 143. 2019-06-11 16:06:08,677 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_2, Status : FAILED [2019-06-11 16:06:07.127]Container [pid=23848,containerID=container_1560238973821_0003_01_000014] is running 335940096B beyond the 'VIRTUAL' memory limit. Current usage: 198.2 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000014 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23848 23847 23848 23848 (bash) 0 1 14622720 714 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000014/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_2 14 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014/stderr |- 23926 23848 23848 23848 (java) 408 59 2576175104 50032 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000014/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_2 14 [2019-06-11 16:06:07.186]Container killed on request. Exit code is 143 [2019-06-11 16:06:07.201]Container exited with a non-zero exit code 143. 2019-06-11 16:06:08,678 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_2, Status : FAILED [2019-06-11 16:06:07.227]Container [pid=23751,containerID=container_1560238973821_0003_01_000013] is running 337357312B beyond the 'VIRTUAL' memory limit. Current usage: 192.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000013 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23829 23751 23751 23751 (java) 463 52 2577592320 48632 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000013/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_2 13 |- 23751 23749 23751 23751 (bash) 0 1 14622720 706 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000013/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_2 13 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013/stderr [2019-06-11 16:06:07.280]Container killed on request. Exit code is 143 [2019-06-11 16:06:07.360]Container exited with a non-zero exit code 143. 2019-06-11 16:06:12,703 INFO mapreduce.Job: map 100% reduce 0% 2019-06-11 16:06:12,711 INFO mapreduce.Job: Job job_1560238973821_0003 failed with state FAILED due to: Task failed task_1560238973821_0003_m_000002 Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0 2019-06-11 16:06:12,979 INFO mapreduce.Job: Counters: 13 Job Counters Failed map tasks=13 Killed map tasks=3 Launched map tasks=16 Other local map tasks=12 Data-local map tasks=4 Total time spent by all maps in occupied slots (ms)=124936 Total time spent by all reduces in occupied slots (ms)=0 Total time spent by all map tasks (ms)=124936 Total vcore-milliseconds taken by all map tasks=124936 Total megabyte-milliseconds taken by all map tasks=127934464 Map-Reduce Framework CPU time spent (ms)=0 Physical memory (bytes) snapshot=0 Virtual memory (bytes) snapshot=0 2019-06-11 16:06:12,986 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead 2019-06-11 16:06:12,990 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 61.7517 seconds (0 bytes/sec) 2019-06-11 16:06:12,999 INFO mapreduce.ExportJobBase: Exported 0 records. 2019-06-11 16:06:12,999 ERROR tool.ExportTool: Error during export: Export job failed!
I'm new to this and can't find the mistake — could someone please take a look for me?

Does importing data from MySQL with Hadoop's Sqoop command put pressure on MySQL?

When importing data from MySQL with sqoop --import, will it affect the MySQL database if the import doesn't use an indexed MySQL column, or doesn't use the database's sharding column?

Sqoop / MySQL / HDFS data transfer error

Using Sqoop to transfer data from MySQL on Windows into HDFS inside a virtual machine, I get the error: "The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server." Sqoop doesn't require that Sqoop and MySQL be installed on the same system, right? Does Sqoop require that the host machine and the VM can ping each other? My network connection mode is host-only.

Sqoop2 server startup error

Hi all. After starting the server with sqoop.sh server start, logging in from the client works, and I configured the server with set server. But show version --all shows the client version fine, while the server version throws: org.apache.hadoop.security.authentication.client.AuthenticationException ![screenshot](https://img-ask.csdn.net/upload/202002/04/1580782727_23107.png)
The configuration is as follows.
sqoop.properties:
org.apache.sqoop.submission.engine.mapreduce.configuration.directory=/opt/hadoop/hadoop-3.2.1/etc/hadoop
org.apache.sqoop.security.authentication.type=SIMPLE
org.apache.sqoop.security.authentication.handler=org.apache.sqoop.security.authentication.SimpleAuthenticationHandler
org.apache.sqoop.security.authentication.anonymous=true
org.apache.sqoop.repository.jdbc.url=jdbc:derby:/root/sqoop/logs/repository/db;create=true
org.apache.sqoop.repository.sysprop.derby.stream.error.file=/root/sqoop/derbyrepo.log
The rest of sqoop.properties is left at the defaults. Hadoop's core-site.xml already has the proxyuser entries:
<property> <name>hadoop.proxyuser.root.hosts</name> <value>*</value> </property>
<property> <name>hadoop.proxyuser.root.groups</name> <value>*</value> </property>
HADOOP_COMMON_HOME, HADOOP_HDFS_HOME, HADOOP_MAPRED_HOME and HADOOP_YARN_HOME are also set, and all the jars under Hadoop/share have been copied into sqoop/server/lib.

Sqoop extraction from a MySQL database fails

I'm a complete beginner. Today I successfully pulled data from the MySQL database on another virtual machine with the command: bin/sqoop import --connect jdbc:mysql://*.*.*.* :3306/sns2 --username root -P --direct --table sns_talk --target --dir --m 1 Then I tried pulling from a remote server's database with the same command (only the IP changed): bin/sqoop import --connect jdbc:mysql://*.*.*.* :3306/sns2 --username root -P --direct --table sns_talk --target --dir --m 1 and that one failed. ![screenshot](https://img-ask.csdn.net/upload/201611/17/1479349705_993553.jpg) ![screenshot](https://img-ask.csdn.net/upload/201611/17/1479349715_343818.jpg) Any guidance would be greatly appreciated!

Incremental import with a sqoop job fails with "output directory exists"

![screenshot](https://img-ask.csdn.net/upload/201803/30/1522408714_369894.png) After creating the task with the sqoop job command, the first run succeeds, but the second run fails with the error above. The job was created with: sqoop job --create tong_count_incre -- import --connect jdbc:mysql://192.0.4.114:3306/hadoop --username root --password root --table tong_count_copy --hive-table default.tong_count --incremental lastmodified --check-column tong_time --last-value "2018-01-23 12:37:18" -m 1

Sqoop import of MySQL table data into Hive fails

ERROR manager.SqlManager: Error reading from database: java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@1e730495 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries. java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@1e730495 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.

Sqoop import from MySQL into Hive fails with class not found

![screenshot](https://img-ask.csdn.net/upload/201507/20/1437381730_990023.png)

Question about moving data from HDFS into MySQL with Sqoop

Requirement: move data from a SQL Server database into MySQL, but in practice only 1 record ends up imported before the job finishes (the actual data is 600+ rows). Looking into the cause: it should be the line terminator that makes it stop after importing a single record.
Code:
1. Sqoop script to import from SQL Server into HDFS:
sqoop import \ --connect "jdbc:sqlserver://192.168.1.130:1433;database=测试库" \ --username sa \ --password 123456 \ --table=t_factfoud \ --target-dir /tmp/sqoop_data/900804ebea3d4ec79a036604ed3c93a0_2014_yw/t_factfoud9 \ --fields-terminated-by '\t' --null-string '\\N' --null-non-string '\\N' --lines-terminated-by '\001' \ --split-by billid -m 1
2. Sqoop script to export the HDFS data into MySQL:
sqoop export \ --connect 'jdbc:mysql://192.168.1.38:3306/xiayi?useUnicode=true&characterEncoding=utf-8' \ --username root \ --password 123456 \ --table t_factfoud \ --export-dir /tmp/sqoop_data/900804ebea3d4ec79a036604ed3c93a0_2014_yw/t_factfoud9 \ -m 1 \ --fields-terminated-by '\t' \ --null-string '\\N' --null-non-string '\\N' \ --lines-terminated-by '\001'
Current results:
1. The t_factfoud table in the SQL Server database has 600 records, and they arrive in HDFS correctly.
2. The export from HDFS to MySQL imports only one record correctly and then ends.
Screenshot: ![screenshot](https://img-ask.csdn.net/upload/201805/31/1527756119_961528.jpg)

sqoop export to mysql

When exporting data from Hive to MySQL with Sqoop, the script already adds --input-null-string '\\N' --input-null-non-string '\\N'. When the first column is null, the export fails; when the first column is not null and other columns are null, the export succeeds. What is going on here, and how can it be fixed?

MySQL database connections frequently time out

MySQL database connections frequently time out — what does the MaxWaitMillis parameter mean?

[Sqoop] When importing a table from MySQL into HDFS with Sqoop, the following error is reported after the MR job finishes:

When importing a table from MySQL into HDFS with Sqoop, after the MR job finishes it reports: ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.net.ConnectException: Call From hadoop01/192.168.164.188 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused; — yet the data is actually imported correctly.

Beginner connecting to MySQL: JDBC error

package com;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
public class ConnctionTest {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        // load the driver
        Class.forName("con.mysql.jdbc.Dricer");
        Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root1", "root");
        System.out.println(conn);
    }
}
Driver jar: 5.1.25. Error message: ![screenshot](https://img-ask.csdn.net/upload/201712/07/1512619915_824442.png)

Sqoop1 driven from Java code: the packaged run reports a "cannot find symbol" error

Sqoop1 works perfectly when tested in the IDE, but after packaging and running the Spring Boot project it errors out, saying the symbol Text cannot be found — it's in the Java file generated by Sqoop. Why is that?

Sqoop 1.99.6: starting a job reports an error

Running start job -jid 1 reports an error: Exception has occurred during processing command Exception: org.apache.sqoop.common.SqoopException Message: CLIENT_0001:Server has returned exception Stack trace: at org.apache.sqoop.client.request.ResourceRequest (ResourceRequest.java:129) at org.apache.sqoop.client.request.ResourceRequest (ResourceRequest.java:179) at org.apache.sqoop.client.request.JobResourceRequest (JobResourceRequest.java:112) at org.apache.sqoop.client.request.SqoopResourceRequests (SqoopResourceRequests.java:157) at org.apache.sqoop.client.SqoopClient (SqoopClient.java:452) at org.apache.sqoop.shell.StartJobFunction (StartJobFunction.java:80) at org.apache.sqoop.shell.SqoopFunction (SqoopFunction.java:51) at org.apache.sqoop.shell.SqoopCommand (SqoopCommand.java:135) at org.apache.sqoop.shell.SqoopCommand (SqoopCommand.java:111) at org.codehaus.groovy.tools.shell.Command$execute (null:-1) at org.codehaus.groovy.runtime.callsite.CallSiteArray (CallSiteArray.java:42) at org.codehaus.groovy.tools.shell.Command$execute (null:-1) at org.codehaus.groovy.tools.shell.Shell (Shell.groovy:101) at org.codehaus.groovy.tools.shell.Groovysh (Groovysh.groovy:-1) at sun.reflect.GeneratedMethodAccessor23 (null:-1) at sun.reflect.DelegatingMethodAccessorImpl (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method (Method.java:498) at org.codehaus.groovy.reflection.CachedMethod (CachedMethod.java:90) at groovy.lang.MetaMethod (MetaMethod.java:233) at groovy.lang.MetaClassImpl (MetaClassImpl.java:1054) at org.codehaus.groovy.runtime.ScriptBytecodeAdapter (ScriptBytecodeAdapter.java:128) at org.codehaus.groovy.tools.shell.Groovysh (Groovysh.groovy:173) at sun.reflect.GeneratedMethodAccessor22 (null:-1) at sun.reflect.DelegatingMethodAccessorImpl (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method (Method.java:498) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce (PogoMetaMethodSite.java:267) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite (PogoMetaMethodSite.java:52) at org.codehaus.groovy.runtime.callsite.AbstractCallSite (AbstractCallSite.java:141) at org.codehaus.groovy.tools.shell.Groovysh (Groovysh.groovy:121) at org.codehaus.groovy.tools.shell.Shell (Shell.groovy:114) at org.codehaus.groovy.tools.shell.Shell$leftShift$0 (null:-1) at org.codehaus.groovy.tools.shell.ShellRunner (ShellRunner.groovy:88) at org.codehaus.groovy.tools.shell.InteractiveShellRunner (InteractiveShellRunner.groovy:-1) at sun.reflect.GeneratedMethodAccessor20 (null:-1) at sun.reflect.DelegatingMethodAccessorImpl (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method (Method.java:498) at org.codehaus.groovy.reflection.CachedMethod (CachedMethod.java:90) at groovy.lang.MetaMethod (MetaMethod.java:233) at groovy.lang.MetaClassImpl (MetaClassImpl.java:1054) at org.codehaus.groovy.runtime.ScriptBytecodeAdapter (ScriptBytecodeAdapter.java:128) at org.codehaus.groovy.runtime.ScriptBytecodeAdapter (ScriptBytecodeAdapter.java:148) at org.codehaus.groovy.tools.shell.InteractiveShellRunner (InteractiveShellRunner.groovy:100) at sun.reflect.GeneratedMethodAccessor19 (null:-1) at sun.reflect.DelegatingMethodAccessorImpl (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method (Method.java:498) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce (PogoMetaMethodSite.java:267) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite (PogoMetaMethodSite.java:52) at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite (AbstractCallSite.java:137)
at org.codehaus.groovy.tools.shell.ShellRunner (ShellRunner.groovy:57)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner (InteractiveShellRunner.groovy:-1)
at sun.reflect.NativeMethodAccessorImpl (NativeMethodAccessorImpl.java:-2)
at sun.reflect.NativeMethodAccessorImpl (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method (Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod (CachedMethod.java:90)
at groovy.lang.MetaMethod (MetaMethod.java:233)
at groovy.lang.MetaClassImpl (MetaClassImpl.java:1054)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter (ScriptBytecodeAdapter.java:128)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter (ScriptBytecodeAdapter.java:148)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner (InteractiveShellRunner.groovy:66)
at java_lang_Runnable$run (null:-1)
at org.codehaus.groovy.runtime.callsite.CallSiteArray (CallSiteArray.java:42)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite (AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite (AbstractCallSite.java:112)
at org.codehaus.groovy.tools.shell.Groovysh (Groovysh.groovy:463)
at org.codehaus.groovy.tools.shell.Groovysh (Groovysh.groovy:402)
at org.apache.sqoop.shell.SqoopShell (SqoopShell.java:130)
Caused by: Exception: org.apache.sqoop.common.SqoopException
Message: GENERIC_HDFS_CONNECTOR_0007:Invalid output directory - Unexpected exception
Stack trace:
at org.apache.sqoop.connector.hdfs.HdfsToInitializer (HdfsToInitializer.java:71)
at org.apache.sqoop.connector.hdfs.HdfsToInitializer (HdfsToInitializer.java:35)
at org.apache.sqoop.driver.JobManager (JobManager.java:449)
at org.apache.sqoop.driver.JobManager (JobManager.java:373)
at org.apache.sqoop.driver.JobManager (JobManager.java:276)
at org.apache.sqoop.handler.JobRequestHandler (JobRequestHandler.java:380)
at org.apache.sqoop.handler.JobRequestHandler (JobRequestHandler.java:116)
at org.apache.sqoop.server.v1.JobServlet (JobServlet.java:96)
at org.apache.sqoop.server.SqoopProtocolServlet (SqoopProtocolServlet.java:79)
at javax.servlet.http.HttpServlet (HttpServlet.java:646)
at javax.servlet.http.HttpServlet (HttpServlet.java:723)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:206)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter (AuthenticationFilter.java:644)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter (DelegationTokenAuthenticationFilter.java:304)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter (AuthenticationFilter.java:592)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve (StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve (StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve (StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve (ErrorReportValve.java:103)
at org.apache.catalina.core.StandardEngineValve (StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter (CoyoteAdapter.java:293)
at org.apache.coyote.http11.Http11Processor (Http11Processor.java:861)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler (Http11Protocol.java:606)
at org.apache.tomcat.util.net.JIoEndpoint$Worker (JIoEndpoint.java:489)
at java.lang.Thread (Thread.java:748)
Caused by: Exception: java.io.IOException
Message: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "node01/192.168.65.100"; destination host is: "node01":9870;
Stack trace:
at org.apache.hadoop.net.NetUtils (NetUtils.java:818)
at org.apache.hadoop.ipc.Client (Client.java:1549)
at org.apache.hadoop.ipc.Client (Client.java:1491)
at org.apache.hadoop.ipc.Client (Client.java:1388)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker (ProtobufRpcEngine.java:233)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker (ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy19 (null:-1)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB (ClientNamenodeProtocolTranslatorPB.java:907)
at sun.reflect.NativeMethodAccessorImpl (NativeMethodAccessorImpl.java:-2)
at sun.reflect.NativeMethodAccessorImpl (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method (Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler (RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call (RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call (RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call (RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler (RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy20 (null:-1)
at org.apache.hadoop.hdfs.DFSClient (DFSClient.java:1666)
at org.apache.hadoop.hdfs.DistributedFileSystem$29 (DistributedFileSystem.java:1576)
at org.apache.hadoop.hdfs.DistributedFileSystem$29 (DistributedFileSystem.java:1573)
at org.apache.hadoop.fs.FileSystemLinkResolver (FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem (DistributedFileSystem.java:1588)
at org.apache.hadoop.fs.FileSystem (FileSystem.java:1683)
at org.apache.sqoop.connector.hdfs.HdfsToInitializer (HdfsToInitializer.java:58)
at org.apache.sqoop.connector.hdfs.HdfsToInitializer (HdfsToInitializer.java:35)
at org.apache.sqoop.driver.JobManager (JobManager.java:449)
at org.apache.sqoop.driver.JobManager (JobManager.java:373)
at org.apache.sqoop.driver.JobManager (JobManager.java:276)
at org.apache.sqoop.handler.JobRequestHandler (JobRequestHandler.java:380)
at org.apache.sqoop.handler.JobRequestHandler (JobRequestHandler.java:116)
at org.apache.sqoop.server.v1.JobServlet (JobServlet.java:96)
at org.apache.sqoop.server.SqoopProtocolServlet (SqoopProtocolServlet.java:79)
at javax.servlet.http.HttpServlet (HttpServlet.java:646)
at javax.servlet.http.HttpServlet (HttpServlet.java:723)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:206)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter (AuthenticationFilter.java:644)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter (DelegationTokenAuthenticationFilter.java:304)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter (AuthenticationFilter.java:592)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve (StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve (StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve (StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve (ErrorReportValve.java:103)
at org.apache.catalina.core.StandardEngineValve (StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter (CoyoteAdapter.java:293)
at org.apache.coyote.http11.Http11Processor (Http11Processor.java:861)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler (Http11Protocol.java:606)
at org.apache.tomcat.util.net.JIoEndpoint$Worker (JIoEndpoint.java:489)
at java.lang.Thread (Thread.java:748)
Caused by: Exception: java.lang.Throwable
Message: RPC response exceeds maximum data length
Stack trace:
at org.apache.hadoop.ipc.Client$IpcStreams (Client.java:1864)
at org.apache.hadoop.ipc.Client$Connection (Client.java:1183)
at org.apache.hadoop.ipc.Client$Connection (Client.java:1079)

Could someone please take a look? The key part seems to be this line:

Caused by: Exception: java.io.IOException Message: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "node01/192.168.65.100"; destination host is: "node01":9870;

but I can't tell where the problem is. My link/job configuration:

From database configuration
Schema name: mysql
Table name: help_topic
Table SQL statement:
Table column names:
Partition column name:
Null value allowed for the partition column:
Boundary query:
Incremental read
Check column:
Last value:
To HDFS configuration
Override null value:
Null value:
Output format:
0 : TEXT_FILE
1 : SEQUENCE_FILE
Choose: 0
Compression format:
0 : NONE
1 : DEFAULT
2 : DEFLATE
3 : GZIP
4 : BZIP2
5 : LZO
6 : LZ4
7 : SNAPPY
8 : CUSTOM
Choose: 0
Custom compression format:
Output directory: hdfs://node01:9870/sqoop
Append mode:
Throttling resources
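
One detail worth double-checking (my own observation, not part of the original post): in Hadoop 3.x, 9870 is the NameNode's HTTP web UI port, while HDFS RPC normally listens on a different port such as the default 8020, i.e. whatever fs.defaultFS points at. Pointing the output directory at the web UI port is a common way to trigger "RPC response exceeds maximum data length", because the client receives an HTTP reply where it expects an RPC frame. A minimal sketch of the check, assuming a standard Hadoop layout and the default RPC port (both are assumptions, verify against your own core-site.xml):

```
# Look up the NameNode RPC address actually configured for this cluster
# (assumes HADOOP_HOME is set and core-site.xml is in the default location).
grep -A1 "fs.defaultFS" $HADOOP_HOME/etc/hadoop/core-site.xml
# expected output is something like: <value>hdfs://node01:8020</value>

# Then use that RPC address, not the web UI port 9870, in the job's
# "To HDFS configuration", e.g.:
#   Output directory: hdfs://node01:8020/sqoop
```

If fs.defaultFS reports a different port (9000 is also common on older setups), use that one instead.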

Error when using Sqoop to import data from MariaDB into Hive

As the title says. The command I run is:

```
sqoop import --connect jdbc:mysql://localhost:3306/test --username root --password 1 --table exit_tran --hive-import --hive-table exit_tran -m 1 --hive-overwrite
```

The import always fails with the following error:

```
20/03/03 17:35:40 INFO mapreduce.Job: Task Id : attempt_1583223426401_0007_m_000000_2, Status : FAILED
Error: java.io.IOException: SQLException in nextKeyValue
    at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.sql.SQLException: HOUR_OF_DAY: 2 -> 3
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:89)
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:63)
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:73)
    at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:85)
    at com.mysql.cj.jdbc.result.ResultSetImpl.getTimestamp(ResultSetImpl.java:903)
    at org.apache.sqoop.lib.JdbcWritableBridge.readTimestamp(JdbcWritableBridge.java:111)
    at com.cloudera.sqoop.lib.JdbcWritableBridge.readTimestamp(JdbcWritableBridge.java:83)
    at exit_tran.readFields(exit_tran.java:229)
    at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:244)
    ... 12 more
Caused by: com.mysql.cj.exceptions.WrongArgumentException: HOUR_OF_DAY: 2 -> 3
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
    at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
    at com.mysql.cj.result.SqlTimestampValueFactory.localCreateFromTimestamp(SqlTimestampValueFactory.java:112)
    at com.mysql.cj.result.SqlTimestampValueFactory.localCreateFromTimestamp(SqlTimestampValueFactory.java:50)
    at com.mysql.cj.result.AbstractDateTimeValueFactory.createFromTimestamp(AbstractDateTimeValueFactory.java:87)
    at com.mysql.cj.protocol.a.MysqlTextValueDecoder.decodeTimestamp(MysqlTextValueDecoder.java:79)
    at com.mysql.cj.protocol.result.AbstractResultsetRow.decodeAndCreateReturnValue(AbstractResultsetRow.java:87)
    at com.mysql.cj.protocol.result.AbstractResultsetRow.getValueFromBytes(AbstractResultsetRow.java:241)
    at com.mysql.cj.protocol.a.result.TextBufferRow.getValue(TextBufferRow.java:132)
    ... 17 more
Caused by: java.lang.IllegalArgumentException: HOUR_OF_DAY: 2 -> 3
    at java.util.GregorianCalendar.computeTime(GregorianCalendar.java:2829)
    at java.util.Calendar.updateTime(Calendar.java:3393)
    at java.util.Calendar.getTimeInMillis(Calendar.java:1782)
    at com.mysql.cj.result.SqlTimestampValueFactory.localCreateFromTimestamp(SqlTimestampValueFactory.java:108)
    ... 23 more
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
```

Any guidance would be appreciated~
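
A side note, not part of the original question: with Connector/J 8.x, "HOUR_OF_DAY: 2 -> 3" typically means a TIMESTAMP value is being interpreted in a time zone whose daylight-saving gap the real data never had, i.e. the driver's idea of the server time zone is wrong. One common workaround is to declare the server time zone explicitly in the JDBC URL. A minimal sketch, assuming the MariaDB server actually runs in Asia/Shanghai (adjust the zone to your server's real setting):

```
# Same import command, with an explicit serverTimezone so Connector/J 8.x
# stops guessing the zone; the Asia/Shanghai value is an assumption.
sqoop import \
  --connect "jdbc:mysql://localhost:3306/test?serverTimezone=Asia/Shanghai" \
  --username root --password 1 \
  --table exit_tran \
  --hive-import --hive-table exit_tran \
  -m 1 --hive-overwrite
```

Another workaround people report is shipping the older 5.1.x Connector/J jar in Sqoop's lib directory instead of the 8.x driver, since the older driver does not apply this strict calendar validation; whether that is acceptable depends on your environment.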
