Sqoop errors out when exporting data from Hive to MySQL — could someone take a look?

This is the command I ran:
liuyanbing@ubuntu:/opt/sqoop$ bin/sqoop export --connect jdbc:mysql://localhost:3306/dbtaobao --username root --password root --table user_log --export-dir '/user/hive/warehouse/dbtaobao.db/inner_user_log' --fields-terminated-by ',';

Error output:
Warning: /opt/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /opt/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /opt/sqoop/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
2019-06-11 16:05:04,541 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
2019-06-11 16:05:04,573 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
2019-06-11 16:05:04,678 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
2019-06-11 16:05:04,678 INFO tool.CodeGenTool: Beginning code generation
Tue Jun 11 16:05:04 CST 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
2019-06-11 16:05:05,241 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM user_log AS t LIMIT 1
2019-06-11 16:05:05,379 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM user_log AS t LIMIT 1
2019-06-11 16:05:05,392 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /bigdata/hadoop-3.1.1
Note: /tmp/sqoop-liuyanbing/compile/990c7e516f6811ff0f7c264686938932/user_log.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
2019-06-11 16:05:09,951 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-liuyanbing/compile/990c7e516f6811ff0f7c264686938932/user_log.jar
2019-06-11 16:05:09,960 INFO mapreduce.ExportJobBase: Beginning export of user_log
2019-06-11 16:05:09,960 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2019-06-11 16:05:10,093 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-06-11 16:05:10,131 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
2019-06-11 16:05:11,220 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
2019-06-11 16:05:11,224 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
2019-06-11 16:05:11,225 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
2019-06-11 16:05:11,399 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
2019-06-11 16:05:12,478 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/liuyanbing/.staging/job_1560238973821_0003
2019-06-11 16:05:15,272 WARN hdfs.DataStreamer: Caught exception
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1252)
	at java.lang.Thread.join(Thread.java:1326)
	at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:986)
	at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:640)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:810)
2019-06-11 16:05:18,771 INFO input.FileInputFormat: Total input files to process : 1
2019-06-11 16:05:18,780 INFO input.FileInputFormat: Total input files to process : 1
2019-06-11 16:05:19,285 INFO mapreduce.JobSubmitter: number of splits:4
2019-06-11 16:05:19,352 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
2019-06-11 16:05:19,353 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2019-06-11 16:05:19,472 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1560238973821_0003
2019-06-11 16:05:19,473 INFO mapreduce.JobSubmitter: Executing with tokens: []
2019-06-11 16:05:19,959 INFO conf.Configuration: resource-types.xml not found
2019-06-11 16:05:19,959 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2019-06-11 16:05:20,049 INFO impl.YarnClientImpl: Submitted application application_1560238973821_0003
2019-06-11 16:05:20,105 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1560238973821_0003/
2019-06-11 16:05:20,106 INFO mapreduce.Job: Running job: job_1560238973821_0003
2019-06-11 16:05:29,273 INFO mapreduce.Job: Job job_1560238973821_0003 running in uber mode : false
2019-06-11 16:05:29,286 INFO mapreduce.Job: map 0% reduce 0%
2019-06-11 16:05:42,450 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_0, Status : FAILED
[2019-06-11 16:05:39.558]Container [pid=22666,containerID=container_1560238973821_0003_01_000004] is running 318323200B beyond the 'VIRTUAL' memory limit. Current usage: 125.2 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560238973821_0003_01_000004 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 22910 22666 22666 22666 (java) 302 45 2558558208 31405 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_0 4
|- 22666 22656 22666 22666 (bash) 0 0 14622720 634 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_0 4 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000004/stderr

[2019-06-11 16:05:40.618]Container killed on request. Exit code is 143
[2019-06-11 16:05:40.619]Container exited with a non-zero exit code 143.

2019-06-11 16:05:42,479 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_0, Status : FAILED
[2019-06-11 16:05:39.558]Container [pid=22651,containerID=container_1560238973821_0003_01_000003] is running 320690688B beyond the 'VIRTUAL' memory limit. Current usage: 127.5 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560238973821_0003_01_000003 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 22955 22651 22651 22651 (java) 296 49 2560925696 32025 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_0 3
|- 22651 22649 22651 22651 (bash) 0 0 14622720 627 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_0 3 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000003/stderr

[2019-06-11 16:05:40.618]Container killed on request. Exit code is 143
[2019-06-11 16:05:40.621]Container exited with a non-zero exit code 143.

2019-06-11 16:05:42,480 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_0, Status : FAILED
[2019-06-11 16:05:38.617]Container [pid=22749,containerID=container_1560238973821_0003_01_000005] is running 320125440B beyond the 'VIRTUAL' memory limit. Current usage: 126.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560238973821_0003_01_000005 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 22987 22749 22749 22749 (java) 324 37 2560360448 31709 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_0 5
|- 22749 22720 22749 22749 (bash) 0 1 14622720 640 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_0 5 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005/stderr

[2019-06-11 16:05:40.620]Container killed on request. Exit code is 143
[2019-06-11 16:05:40.622]Container exited with a non-zero exit code 143.

2019-06-11 16:05:42,482 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_0, Status : FAILED
[2019-06-11 16:05:39.558]Container [pid=22675,containerID=container_1560238973821_0003_01_000002] is running 319543808B beyond the 'VIRTUAL' memory limit. Current usage: 125.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560238973821_0003_01_000002 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 22937 22675 22675 22675 (java) 316 38 2559778816 31497 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_0 2
|- 22675 22670 22675 22675 (bash) 0 0 14622720 612 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_0 2 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002/stderr

[2019-06-11 16:05:40.619]Container killed on request. Exit code is 143
[2019-06-11 16:05:40.622]Container exited with a non-zero exit code 143.

2019-06-11 16:05:52,546 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_1, Status : FAILED
[2019-06-11 16:05:50.910]Container [pid=23116,containerID=container_1560238973821_0003_01_000006] is running 282286592B beyond the 'VIRTUAL' memory limit. Current usage: 68.6 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560238973821_0003_01_000006 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23194 23116 23116 23116 (java) 85 29 2522521600 16852 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_1 6
|- 23116 23115 23116 23116 (bash) 0 1 14622720 710 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_1 6 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006/stderr

[2019-06-11 16:05:50.970]Container killed on request. Exit code is 143
[2019-06-11 16:05:51.012]Container exited with a non-zero exit code 143.

2019-06-11 16:05:55,561 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_1, Status : FAILED
[2019-06-11 16:05:54.193]Container [pid=23396,containerID=container_1560238973821_0003_01_000009] is running 313866752B beyond the 'VIRTUAL' memory limit. Current usage: 111.1 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560238973821_0003_01_000009 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23396 23394 23396 23396 (bash) 0 1 14622720 710 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000009/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_1 9 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009/stderr

|- 23473 23396 23396 23396 (java) 245 40 2554101760 27743 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000009/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_1 9

[2019-06-11 16:05:54.228]Container killed on request. Exit code is 143
[2019-06-11 16:05:54.263]Container exited with a non-zero exit code 143.

2019-06-11 16:05:55,563 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_1, Status : FAILED
[2019-06-11 16:05:54.332]Container [pid=23304,containerID=container_1560238973821_0003_01_000008] is running 314042880B beyond the 'VIRTUAL' memory limit. Current usage: 113.8 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560238973821_0003_01_000008 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23381 23304 23304 23304 (java) 265 51 2554277888 28423 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_1 8
|- 23304 23302 23304 23304 (bash) 0 1 14622720 720 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_1 8 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008/stderr

[2019-06-11 16:05:54.353]Container killed on request. Exit code is 143
[2019-06-11 16:05:54.381]Container exited with a non-zero exit code 143.

2019-06-11 16:05:55,565 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_1, Status : FAILED
[2019-06-11 16:05:54.408]Container [pid=23200,containerID=container_1560238973821_0003_01_000007] is running 314497536B beyond the 'VIRTUAL' memory limit. Current usage: 115.5 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560238973821_0003_01_000007 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23200 23198 23200 23200 (bash) 0 1 14622720 711 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_1 7 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007/stderr

|- 23277 23200 23200 23200 (java) 257 60 2554732544 28852 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_1 7

[2019-06-11 16:05:54.463]Container killed on request. Exit code is 143
[2019-06-11 16:05:54.482]Container exited with a non-zero exit code 143.

2019-06-11 16:06:01,619 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_2, Status : FAILED
[2019-06-11 16:06:00.584]Container [pid=23515,containerID=container_1560238973821_0003_01_000011] is running 337451520B beyond the 'VIRTUAL' memory limit. Current usage: 203.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560238973821_0003_01_000011 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23515 23513 23515 23515 (bash) 0 1 14622720 712 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_2 11 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011/stderr

|- 23592 23515 23515 23515 (java) 456 89 2577686528 51352 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_2 11

[2019-06-11 16:06:00.602]Container killed on request. Exit code is 143
[2019-06-11 16:06:00.659]Container exited with a non-zero exit code 143.

2019-06-11 16:06:05,651 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_2, Status : FAILED
[2019-06-11 16:06:03.816]Container [pid=23651,containerID=container_1560238973821_0003_01_000012] is running 331475456B beyond the 'VIRTUAL' memory limit. Current usage: 173.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560238973821_0003_01_000012 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23728 23651 23651 23651 (java) 418 39 2571710464 43768 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000012/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_2 12
|- 23651 23649 23651 23651 (bash) 0 1 14622720 707 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000012/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_2 12 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012/stderr

[2019-06-11 16:06:03.981]Container killed on request. Exit code is 143
[2019-06-11 16:06:03.986]Container exited with a non-zero exit code 143.

2019-06-11 16:06:08,677 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_2, Status : FAILED
[2019-06-11 16:06:07.127]Container [pid=23848,containerID=container_1560238973821_0003_01_000014] is running 335940096B beyond the 'VIRTUAL' memory limit. Current usage: 198.2 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560238973821_0003_01_000014 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23848 23847 23848 23848 (bash) 0 1 14622720 714 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000014/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_2 14 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014/stderr

|- 23926 23848 23848 23848 (java) 408 59 2576175104 50032 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000014/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_2 14

[2019-06-11 16:06:07.186]Container killed on request. Exit code is 143
[2019-06-11 16:06:07.201]Container exited with a non-zero exit code 143.

2019-06-11 16:06:08,678 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_2, Status : FAILED
[2019-06-11 16:06:07.227]Container [pid=23751,containerID=container_1560238973821_0003_01_000013] is running 337357312B beyond the 'VIRTUAL' memory limit. Current usage: 192.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560238973821_0003_01_000013 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23829 23751 23751 23751 (java) 463 52 2577592320 48632 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000013/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_2 13
|- 23751 23749 23751 23751 (bash) 0 1 14622720 706 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000013/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_2 13 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013/stderr

[2019-06-11 16:06:07.280]Container killed on request. Exit code is 143
[2019-06-11 16:06:07.360]Container exited with a non-zero exit code 143.

2019-06-11 16:06:12,703 INFO mapreduce.Job: map 100% reduce 0%
2019-06-11 16:06:12,711 INFO mapreduce.Job: Job job_1560238973821_0003 failed with state FAILED due to: Task failed task_1560238973821_0003_m_000002
Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0

2019-06-11 16:06:12,979 INFO mapreduce.Job: Counters: 13
	Job Counters
		Failed map tasks=13
		Killed map tasks=3
		Launched map tasks=16
		Other local map tasks=12
		Data-local map tasks=4
		Total time spent by all maps in occupied slots (ms)=124936
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=124936
		Total vcore-milliseconds taken by all map tasks=124936
		Total megabyte-milliseconds taken by all map tasks=127934464
	Map-Reduce Framework
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
2019-06-11 16:06:12,986 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2019-06-11 16:06:12,990 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 61.7517 seconds (0 bytes/sec)
2019-06-11 16:06:12,999 INFO mapreduce.ExportJobBase: Exported 0 records.
2019-06-11 16:06:12,999 ERROR tool.ExportTool: Error during export: Export job failed!

I'm a beginner and can't find the problem. Could an expert please help me take a look?

1 answer

[2019-06-11 16:06:00.602]Container killed on request. Exit code is 143
[2019-06-11 16:06:00.659]Container exited with a non-zero exit code 143.

2019-06-11 16:06:05,651 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_2, Status : FAILED
[2019-06-11 16:06:03.816]Container [pid=23651,containerID=container_1560238973821_0003_01_000012] is running 331475456B beyond the 'VIRTUAL' memory limit. Current usage: 173.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.

The log shows the containers are being killed for running out of memory (2.4 GB of virtual memory used against a 2.1 GB cap). You can either increase the container memory allocation, or reduce the amount of input data each map task processes.
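The virtual limit YARN enforces here is yarn.nodemanager.vmem-pmem-ratio (default 2.1) times the container's physical memory, which with the 1 GB default gives exactly the 2.1 GB cap in your log. So you can raise that ratio, or set yarn.nodemanager.vmem-check-enabled to false, in yarn-site.xml and restart the NodeManagers. Alternatively, here is a minimal sketch of a per-job fix with illustrative values, assuming your cluster allows client-side property overrides:

# Sketch only, values are illustrative. The -D generic options must come
# immediately after "export". Giving each map container 2 GB of physical
# memory also raises its virtual cap to 2.1 * 2 GB, and the JVM heap is
# sized a bit below the container limit so the container isn't killed.
bin/sqoop export \
  -Dmapreduce.map.memory.mb=2048 \
  -Dmapreduce.map.java.opts=-Xmx1638m \
  --connect jdbc:mysql://localhost:3306/dbtaobao \
  --username root --password root \
  --table user_log \
  --export-dir '/user/hive/warehouse/dbtaobao.db/inner_user_log' \
  --fields-terminated-by ','

If you change yarn-site.xml instead, remember the new ratio only takes effect after the NodeManagers restart.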

其他相关推荐
为什么我用sqoop导数据从hive到mysql会乱序
在hive里面的数据结构是这样 ![图片说明](https://img-ask.csdn.net/upload/201909/28/1569660068_148900.png) 但是到了mysql中就是这样了。 ![图片说明](https://img-ask.csdn.net/upload/201909/28/1569660119_601059.png) 字段完全乱了。
sqoop将mysql表数据导入hive报错
ERROR manager.SqlManager: Error reading from database: java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@1e730495 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries. java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@1e730495 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
sqoop将hive数据导出mysql命令是啥?各参数作用是什么
sqoop将hive数据导出mysql命令是啥?各参数作用是什么
通过sqoop, load数据到hive,sqoop如何知道hive的warehouse
我创建了自己的hive-site.xml文件,在里边指定了hive的warehouse,现在的问题是:我通过sqoop,把数据从sqlserv导入到hive的时候,我如何让sqoop知道我用的是我自己的hive-site.xml文件,从而用自己配置的warehouse。我们不希望用默认的hive warehouse. 各位大神帮帮忙啊。
sqoop从MySQL导入数据到hive报错 class not found
![图片说明](https://img-ask.csdn.net/upload/201507/20/1437381730_990023.png)
sqoop将oracle数据表导入hive中文乱码问题
请教各位大神一个问题,就是将oracle的表导入到hive后中文乱码,oracle库的编码格式为US7ASCII,各位大神有没有遇到过类型的问题,或者有没有好的解决方案建议,谢谢了。附注:现在已经试过convert(nsrdzdah,'utf8','US7ASCII'),但是还是乱码;还有就是修改hive jdbc jar包,感觉不靠谱就没有试
用sqoop将mysql数据导入hive中多分区时怎么处理
对于一个分区,可以直接指定 --hive-partition-key --hive-partition-value 多个分区如何指定
sqoop从hdfs导入数据到mysql疑问
需求:需要实现从sqlserver库中导入数据到mysql中,但实际上只导入了1条记录就结束了(实际数据600+条)。 查看了原因: 应该就是行分隔符引起了 只导入了一条就结束了 。 代码: 1、通过sqoop脚本将sqlserver导入到hdfs中: sqoop import \ --connect "jdbc:sqlserver://192.168.1.130:1433;database=测试库" \ --username sa \ --password 123456 \ --table=t_factfoud \ --target-dir /tmp/sqoop_data/900804ebea3d4ec79a036604ed3c93a0_2014_yw/t_factfoud9 \ --fields-terminated-by '\t' --null-string '\\N' --null-non-string '\\N' --lines-terminated-by '\001' \ --split-by billid -m 1 2、通过sqoop脚本将hdfs数据导出到mysql中: sqoop export \ --connect 'jdbc:mysql://192.168.1.38:3306/xiayi?useUnicode=true&characterEncoding=utf-8' \ --username root \ --password 123456 \ --table t_factfoud \ --export-dir /tmp/sqoop_data/900804ebea3d4ec79a036604ed3c93a0_2014_yw/t_factfoud9 \ -m 1 \ --fields-terminated-by '\t' \ --null-string '\\N' --null-non-string '\\N' \ --lines-terminated-by '\001' 现在执行结果: 1、sqlserver库中 表 t_factfoud 中有 600 条记录,已正确到到hdfs中 。 2、从hdfs导出到mysql,只正确导入了一条,就结束了。 效果图如下: ![图片说明](https://img-ask.csdn.net/upload/201805/31/1527756119_961528.jpg)
hadoop的sqoop指令从mysql导入数据时,是否会对mysql造成压力
当使用sqoop --import从mysql导入数据时,如果不携带mysql的索引字段或者不携带mysql库的分片字段,是否会对mysql数据库产生影响?
【Sqoop】在用sqoop从Mysql中将表导入到HDFS时,mr走完后会报如下错误:
在用sqoop从Mysql中将表导入到HDFS时,mr走完后会报如下错误:ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.net.ConnectException: Call From hadoop01/192.168.164.188 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused;,但是数据确正确导入了
利用sqoop把数据从Oracle导出到hive报错
![图片说明](https://img-ask.csdn.net/upload/201504/16/1429180711_592161.png) bash-4.1$ sqoop import --connect jdbc:oracle:thin:@192.168.1.169:1521:orcl --username HADOOP --password hadoop2015 --table CALC_UPAY_DATE_HADOOP_HDFS --split-by UPAYID --hive-import Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation. find: paths must precede expression: ant-eclipse-1.0-jvm1.2.jar Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression] 15/04/16 03:28:13 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.2 15/04/16 03:28:13 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. 15/04/16 03:28:13 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override 15/04/16 03:28:13 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc. 15/04/16 03:28:13 INFO manager.SqlManager: Using default fetchSize of 1000 15/04/16 03:28:13 INFO tool.CodeGenTool: Beginning code generation 15/04/16 03:28:13 INFO manager.OracleManager: Time zone has been set to GMT 15/04/16 03:28:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CALC_UPAY_DATE_HADOOP_HDFS t WHERE 1=0 15/04/16 03:28:14 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce Note: /tmp/sqoop-hdfs/compile/e9286bf0e7d796ba396d3155210012b0/CALC_UPAY_DATE_HADOOP_HDFS.java uses or overrides a deprecated API. Note: Recompile with -Xlint:deprecation for details. 15/04/16 03:28:15 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/e9286bf0e7d796ba396d3155210012b0/CALC_UPAY_DATE_HADOOP_HDFS.jar 15/04/16 03:28:15 INFO mapreduce.ImportJobBase: Beginning import of CALC_UPAY_DATE_HADOOP_HDFS 15/04/16 03:28:15 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar 15/04/16 03:28:15 INFO manager.OracleManager: Time zone has been set to GMT 15/04/16 03:28:16 INFO Configuration.deprecation: mapred.map.tasks is deprecated. 
Instead, use mapreduce.job.maps 15/04/16 03:28:16 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.1.201:8032 15/04/16 03:28:18 INFO db.DBInputFormat: Using read commited transaction isolation 15/04/16 03:28:18 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(UPAYID), MAX(UPAYID) FROM CALC_UPAY_DATE_HADOOP_HDFS 15/04/16 03:28:19 INFO mapreduce.JobSubmitter: number of splits:4 15/04/16 03:28:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1429145594985_0020 15/04/16 03:28:20 INFO impl.YarnClientImpl: Submitted application application_1429145594985_0020 15/04/16 03:28:20 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1429145594985_0020/ 15/04/16 03:28:20 INFO mapreduce.Job: Running job: job_1429145594985_0020 15/04/16 03:28:31 INFO mapreduce.Job: Job job_1429145594985_0020 running in uber mode : false 15/04/16 03:28:31 INFO mapreduce.Job: map 0% reduce 0% 15/04/16 03:28:59 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000000_0, Status : FAILED Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z 15/04/16 03:29:00 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000002_0, Status : FAILED Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z 15/04/16 03:29:01 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000001_0, Status : FAILED Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z 我用sqoop把数据从hive导出到oracle一切正常
sqoop,MySQL,hdfs数据传输报错
使用sqoop进行数据传输,windows-mysql传输数据到虚拟机里面的hdfs,报错The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. sqoop进行传输的时候没有要求sqoop的安装和MySQL的安装都在一个系统中吧?使用sqoop要求本机和虚拟机必须ping通吗?我的网络链接方式用的hostonly。
关于mysql中的数据导入hive的一些问题 ?
关于mysql中的数据导入hive的一些问题 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Hive exited with status 1 查了相关的一些解决办法 缺少JAR包之类的都试过 试之前下面的代码都没有问题 不知道为什么上面的代码始终都报错 求大神指导![图片说明](https://img-ask.csdn.net/upload/201909/06/1567785187_233018.png) sqoop import --connect jdbc:mysql://cloud00:3306/anli --username hive --password hive --table User_ratings1 --hive-import --hive-table User_ratings1 -m 1 --hive-overwrite ``` ```sqoop import --connect jdbc:mysql://cloud00:3306/test --username hive --password hive --table exit_tran --hive-import --hive-table exit_tran -m 1 --hive-overwrite
sqoop将oracle数据导入hbase的问题,求各位大神们指导
sqoop将oracle数据导入hbase,要求可以Java连接服务器上的sqoop,sqoop1可以直接实现但是没有Java client的API,sqoop2 有client但是不能直接实现oracle到hbase,这是我得出的结论,请教大神们,有没有好的方法?
查询用sqoop从mysql中导入到hive中的表格,显示格式有问题
mysql中的原始数据如下: ![图片说明](https://img-ask.csdn.net/upload/201903/26/1553587553_692693.png) 通过如下命令将此表格导入到hive中 ``` bin/sqoop import --connect jdbc:mysql://192.168.12.69:3306/userdb --username root --password 123 --table emp --fields-terminated-by '\001' --hive-import --hive-table sqooptohive.emp_hive --hive-overwrite --delete-target-dir --m 1 ``` 导入成功后,从hdfs系统中下载下来对应的文件内容为: ![图片说明](https://img-ask.csdn.net/upload/201903/26/1553587271_642727.png) 在hive中使用查询语句: ``` select * from emp_hive; ``` 字段的字权威null了,结果如下: ![图片说明](https://img-ask.csdn.net/upload/201903/26/1553587452_172392.png)
sqoop 从oracle导数据到hive中报错
往hive中导入表,报如下错误,请大家帮忙 [root@amorsay3 bin]# ./sqoop import --hive-import --connect jdbc:oracle:thin:@192.168.13.168:1521:orcl --username HADOOPLEARN --password zhao --table EMP -m 1 --hive-table emp1 Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../hbase does not exist! HBase imports will fail. Please set $HBASE_HOME to the root of your HBase installation. Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../hcatalog does not exist! HCatalog jobs will fail. Please set $HCAT_HOME to the root of your HCatalog installation. Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation. Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../zookeeper does not exist! Accumulo imports will fail. Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation. Warning: $HADOOP_HOME is deprecated. 15/08/11 23:17:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6 15/08/11 23:17:02 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. 15/08/11 23:17:02 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override 15/08/11 23:17:02 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc. 15/08/11 23:17:02 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled. 15/08/11 23:17:02 INFO manager.SqlManager: Using default fetchSize of 1000 15/08/11 23:17:02 INFO tool.CodeGenTool: Beginning code generation 15/08/11 23:17:03 INFO manager.OracleManager: Time zone has been set to GMT 15/08/11 23:17:03 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM EMP t WHERE 1=0 15/08/11 23:17:03 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoophive/hadoop-1.2.1 Note: /tmp/sqoop-root/compile/efda22b79cedc05841de35698062fbbc/EMP.java uses or overrides a deprecated API. Note: Recompile with -Xlint:deprecation for details. 
15/08/11 23:17:04 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/efda22b79cedc05841de35698062fbbc/EMP.jar 15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT 15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT 15/08/11 23:17:04 INFO mapreduce.ImportJobBase: Beginning import of EMP 15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT 15/08/11 23:17:06 INFO db.DBInputFormat: Using read commited transaction isolation 15/08/11 23:17:06 INFO mapred.JobClient: Cleaning up the staging area hdfs://192.168.14.168:9000/hadoop/mapred/staging/root/.staging/job_201508111912_0003 Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class org.apache.hadoop.mapreduce.JobContext, but interface was expected at org.apache.sqoop.config.ConfigurationHelper.getJobNumMaps(ConfigurationHelper.java:65) at com.cloudera.sqoop.config.ConfigurationHelper.getJobNumMaps(ConfigurationHelper.java:36) at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:125) at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054) at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071) at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936) at org.apache.hadoop.mapreduce.Job.submit(Job.java:550) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580) at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196) at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169) at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266) at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673) at org.apache.sqoop.manager.OracleManager.importTable(OracleManager.java:444) at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497) at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605) at org.apache.sqoop.Sqoop.run(Sqoop.java:143) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227) at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
linux下使用sqoop连接windows的MySQL数据库报错
刚入门学习hadoop,然后在sqoop数据迁移这里遇到了问题,linux下使用sqoop连接不上windows系统的MySQL数据库,按照网上的许多方法都没解决。 linux系统是centos6.4,然后hadoop2.4.1,sqoop1.4.7,windows下是mysql5.7 下面是报错信息: [root@itcast01 bin]# ./sqoop list-tables --connect jdbc:mysql://192.168.147.100:3306/sqoopex1 --username root -password 1234 18/07/12 16:17:28 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7 18/07/12 16:17:28 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. 18/07/12 16:17:28 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset. 18/07/12 16:18:31 ERROR manager.CatalogQueryManager: Failed to list tables com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet successfully received from the server was 1,531,383,511,816 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at com.mysql.jdbc.Util.handleNewInstance(Util.java:406) at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074) at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2214) at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:773) at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:46) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at com.mysql.jdbc.Util.handleNewInstance(Util.java:406) at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:352) at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:282) at java.sql.DriverManager.getConnection(DriverManager.java:664) at java.sql.DriverManager.getConnection(DriverManager.java:247) at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:904) at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:59) at org.apache.sqoop.manager.CatalogQueryManager.listTables(CatalogQueryManager.java:102) at org.apache.sqoop.tool.ListTablesTool.run(ListTablesTool.java:49) at org.apache.sqoop.Sqoop.run(Sqoop.java:147) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243) at org.apache.sqoop.Sqoop.main(Sqoop.java:252) Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet successfully received from the server was 1,531,383,511,809 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago. 
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at com.mysql.jdbc.Util.handleNewInstance(Util.java:406) at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074) at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:341) at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2137) ... 21 more Caused by: java.net.ConnectException: 连接超时 at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at java.net.Socket.connect(Socket.java:538) at java.net.Socket.<init>(Socket.java:434) at java.net.Socket.<init>(Socket.java:244) at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:253) at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:290) ... 22 more zookeeper和hadoop服务都开启了的,防火墙也关闭了,去度娘有人说修改my.ini文件,说在[mysqld] 那里加一行: wait_timeout=86400 。 但是我修改后还是报同样的错误。mysql权限也赋予了的。数据库连接驱动使用mysql-connector-5.1.8.jar。 ![图片说明](https://img-ask.csdn.net/upload/201807/12/1531385340_467365.png) ![图片说明](https://img-ask.csdn.net/upload/201807/12/1531384769_440982.png) 连接的ip地址192.168.147.100是windows的VMnet1的ip地址,能ping通。然后就是连接不上数据库。使用Navicat连接也能连得上。 ![图片说明](https://img-ask.csdn.net/upload/201807/12/1531384494_888105.png) ![图片说明](https://img-ask.csdn.net/upload/201807/12/1531384583_92826.png) 有没有大牛知道我问题出在哪里?感激不尽!
sqoop1增量导入可以直接导入到hive 或 hbase中吗
sqoop1增量导入可以直接导入到hive 或 hbase中吗
通过sqoop 导入hive错误
[root@orep apps]# sqoop import --connect jdbc:mysql://192.168.3.6:3306/hive --username root --password mysql123 --table VERSION --hive-table VERSION --hive-import 17/03/09 05:45:34 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.10.0 17/03/09 05:45:34 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. 17/03/09 05:45:34 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override 17/03/09 05:45:34 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc. 17/03/09 05:45:34 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset. 17/03/09 05:45:34 INFO tool.CodeGenTool: Beginning code generation 17/03/09 05:45:34 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `VERSION` AS t LIMIT 1 17/03/09 05:45:34 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `VERSION` AS t LIMIT 1 17/03/09 05:45:34 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /apps/hadoop-2.7.2 Note: /tmp/sqoop-root/compile/4b54b295fba470d9743716efe53e0d48/VERSION.java uses or overrides a deprecated API. Note: Recompile with -Xlint:deprecation for details. 17/03/09 05:45:36 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/4b54b295fba470d9743716efe53e0d48/VERSION.jar 17/03/09 05:45:36 WARN manager.MySQLManager: It looks like you are importing from mysql. 17/03/09 05:45:36 WARN manager.MySQLManager: This transfer can be faster! Use the --direct 17/03/09 05:45:36 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path. 17/03/09 05:45:36 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql) 17/03/09 05:45:36 INFO mapreduce.ImportJobBase: Beginning import of VERSION SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/apps/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/apps/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] 17/03/09 05:45:36 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar 17/03/09 05:45:37 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps 17/03/09 05:45:37 INFO client.RMProxy: Connecting to ResourceManager at /192.168.3.6:8032 17/03/09 05:45:41 INFO db.DBInputFormat: Using read commited transaction isolation 17/03/09 05:45:41 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`VER_ID`), MAX(`VER_ID`) FROM `VERSION` 17/03/09 05:45:41 INFO db.IntegerSplitter: Split size: 0; Num splits: 4 from: 1 to: 1 17/03/09 05:45:41 INFO mapreduce.JobSubmitter: number of splits:1 17/03/09 05:45:41 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1488979391372_0005 17/03/09 05:45:42 INFO impl.YarnClientImpl: Submitted application application_1488979391372_0005 17/03/09 05:45:42 INFO mapreduce.Job: The url to track the job: http://Kylin01:8088/proxy/application_1488979391372_0005/ 17/03/09 05:45:42 INFO mapreduce.Job: Running job: job_1488979391372_0005 17/03/09 05:45:50 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=FAILED. 
Redirecting to job history server
17/03/09 05:45:50 INFO mapreduce.Job: Job job_1488979391372_0005 running in uber mode : false
17/03/09 05:45:50 INFO mapreduce.Job: map 0% reduce 100%
17/03/09 05:45:50 INFO mapreduce.Job: Job job_1488979391372_0005 failed with state FAILED due to:
17/03/09 05:45:50 INFO mapreduce.ImportJobBase: The MapReduce job has already been retired. Performance
17/03/09 05:45:50 INFO mapreduce.ImportJobBase: counters are unavailable. To get this information,
17/03/09 05:45:50 INFO mapreduce.ImportJobBase: you will need to enable the completed job store on
17/03/09 05:45:50 INFO mapreduce.ImportJobBase: the jobtracker with:
17/03/09 05:45:50 INFO mapreduce.ImportJobBase: mapreduce.jobtracker.persist.jobstatus.active = true
17/03/09 05:45:50 INFO mapreduce.ImportJobBase: mapreduce.jobtracker.persist.jobstatus.hours = 1
17/03/09 05:45:50 INFO mapreduce.ImportJobBase: A jobtracker restart is required for these settings
17/03/09 05:45:50 INFO mapreduce.ImportJobBase: to take effect.
17/03/09 05:45:50 ERROR tool.ImportTool: Error during import: Import job failed!

Could some expert please help me diagnose this? Thanks!
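The client log above only reports FinalApplicationStatus=FAILED with an empty "due to:" line; the actual cause lives in the YARN container logs. A minimal sketch for retrieving them, using the application ID from the log above (the history-server URL is an assumption based on the Kylin01 tracking URL):

```bash
# Fetch the aggregated container logs for the failed application
# (requires yarn.log-aggregation-enable=true in yarn-site.xml)
yarn logs -applicationId application_1488979391372_0005 | less

# Alternatively, browse the job in the MapReduce JobHistory server web UI,
# which listens on port 19888 by default:
#   http://Kylin01:19888/jobhistory
```

The map task's stderr there typically shows the real exception (for example a missing driver jar, a Hive delimiter problem, or a permissions error), which is what needs fixing before rerunning the import.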