Importing data from Oracle into Hive with Sqoop

Hoping someone can help — many thanks in advance!

1. The error message is shown in this screenshot:

[screenshot: error message]

2. Below are my table DDL, test data, and Sqoop command:

[screenshot: DDL, test data, and Sqoop command]
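Since the actual command is only visible in the screenshot, here is a minimal sketch of a typical Oracle-to-Hive import for comparison; the host, SID, credentials, and table name are placeholders, not values from the post:

```
# Minimal Oracle -> Hive import sketch (all connection details are placeholders).
# Note that Oracle table names should usually be given in upper case,
# since Sqoop looks them up in the data dictionary as written.
sqoop import \
  --connect jdbc:oracle:thin:@db-host:1521:ORCL \
  --username SCOTT \
  -P \
  --table EMP \
  --hive-import \
  --hive-table emp \
  -m 1
```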

Related questions
Sqoop errors out when importing from Oracle into Hive

Importing a table into Hive fails with the error below; any help would be appreciated.

```
[root@amorsay3 bin]# ./sqoop import --hive-import --connect jdbc:oracle:thin:@192.168.13.168:1521:orcl --username HADOOPLEARN --password zhao --table EMP -m 1 --hive-table emp1
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/local/hadoophive/sqoop-1.4.6.bin__hadoop-0.23/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
Warning: $HADOOP_HOME is deprecated.
15/08/11 23:17:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
15/08/11 23:17:02 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/08/11 23:17:02 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
15/08/11 23:17:02 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
15/08/11 23:17:02 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
15/08/11 23:17:02 INFO manager.SqlManager: Using default fetchSize of 1000
15/08/11 23:17:02 INFO tool.CodeGenTool: Beginning code generation
15/08/11 23:17:03 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:03 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM EMP t WHERE 1=0
15/08/11 23:17:03 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoophive/hadoop-1.2.1
Note: /tmp/sqoop-root/compile/efda22b79cedc05841de35698062fbbc/EMP.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/08/11 23:17:04 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/efda22b79cedc05841de35698062fbbc/EMP.jar
15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:04 INFO mapreduce.ImportJobBase: Beginning import of EMP
15/08/11 23:17:04 INFO manager.OracleManager: Time zone has been set to GMT
15/08/11 23:17:06 INFO db.DBInputFormat: Using read commited transaction isolation
15/08/11 23:17:06 INFO mapred.JobClient: Cleaning up the staging area hdfs://192.168.14.168:9000/hadoop/mapred/staging/root/.staging/job_201508111912_0003
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class org.apache.hadoop.mapreduce.JobContext, but interface was expected
	at org.apache.sqoop.config.ConfigurationHelper.getJobNumMaps(ConfigurationHelper.java:65)
	at com.cloudera.sqoop.config.ConfigurationHelper.getJobNumMaps(ConfigurationHelper.java:36)
	at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:125)
	at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
	at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
	at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
	at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
	at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
	at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
	at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196)
	at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169)
	at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266)
	at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
	at org.apache.sqoop.manager.OracleManager.importTable(OracleManager.java:444)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
```
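The `IncompatibleClassChangeError` here looks like a classic binary mismatch rather than a Sqoop bug: `org.apache.hadoop.mapreduce.JobContext` is a class in Hadoop 1.x but an interface in Hadoop 0.23/2.x, and the log shows a Hadoop-0.23 build of Sqoop (`sqoop-1.4.6.bin__hadoop-0.23`) running against `hadoop-1.2.1`. A sketch of one way to align the two (the download URL and paths are assumptions):

```
# Swap in the Sqoop 1.4.6 build compiled against Hadoop 1.x,
# or alternatively upgrade the cluster to Hadoop 2.x.
cd /usr/local/hadoophive
wget https://archive.apache.org/dist/sqoop/1.4.6/sqoop-1.4.6.bin__hadoop-1.0.0.tar.gz
tar -xzf sqoop-1.4.6.bin__hadoop-1.0.0.tar.gz
# Re-run the same import from the matching build:
./sqoop-1.4.6.bin__hadoop-1.0.0/bin/sqoop import --hive-import \
  --connect jdbc:oracle:thin:@192.168.13.168:1521:orcl \
  --username HADOOPLEARN -P --table EMP -m 1 --hive-table emp1
```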

Moving data from Oracle into Hive with Sqoop fails in the map tasks

![error screenshot](https://img-ask.csdn.net/upload/201504/16/1429180711_592161.png)

```
bash-4.1$ sqoop import --connect jdbc:oracle:thin:@192.168.1.169:1521:orcl --username HADOOP --password hadoop2015 --table CALC_UPAY_DATE_HADOOP_HDFS --split-by UPAYID --hive-import
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
find: paths must precede expression: ant-eclipse-1.0-jvm1.2.jar
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
15/04/16 03:28:13 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.2
15/04/16 03:28:13 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/04/16 03:28:13 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
15/04/16 03:28:13 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
15/04/16 03:28:13 INFO manager.SqlManager: Using default fetchSize of 1000
15/04/16 03:28:13 INFO tool.CodeGenTool: Beginning code generation
15/04/16 03:28:13 INFO manager.OracleManager: Time zone has been set to GMT
15/04/16 03:28:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CALC_UPAY_DATE_HADOOP_HDFS t WHERE 1=0
15/04/16 03:28:14 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-hdfs/compile/e9286bf0e7d796ba396d3155210012b0/CALC_UPAY_DATE_HADOOP_HDFS.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/04/16 03:28:15 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/e9286bf0e7d796ba396d3155210012b0/CALC_UPAY_DATE_HADOOP_HDFS.jar
15/04/16 03:28:15 INFO mapreduce.ImportJobBase: Beginning import of CALC_UPAY_DATE_HADOOP_HDFS
15/04/16 03:28:15 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/04/16 03:28:15 INFO manager.OracleManager: Time zone has been set to GMT
15/04/16 03:28:16 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/04/16 03:28:16 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.1.201:8032
15/04/16 03:28:18 INFO db.DBInputFormat: Using read commited transaction isolation
15/04/16 03:28:18 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(UPAYID), MAX(UPAYID) FROM CALC_UPAY_DATE_HADOOP_HDFS
15/04/16 03:28:19 INFO mapreduce.JobSubmitter: number of splits:4
15/04/16 03:28:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1429145594985_0020
15/04/16 03:28:20 INFO impl.YarnClientImpl: Submitted application application_1429145594985_0020
15/04/16 03:28:20 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1429145594985_0020/
15/04/16 03:28:20 INFO mapreduce.Job: Running job: job_1429145594985_0020
15/04/16 03:28:31 INFO mapreduce.Job: Job job_1429145594985_0020 running in uber mode : false
15/04/16 03:28:31 INFO mapreduce.Job: map 0% reduce 0%
15/04/16 03:28:59 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000000_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
15/04/16 03:29:00 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000002_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
15/04/16 03:29:01 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000001_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
```

Exporting from Hive to Oracle with Sqoop works fine, by the way.
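The `Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z` in the map tasks usually means the Oracle JDBC driver on the task classpath predates JDBC 4 (for example ojdbc14.jar), so `PreparedStatement.isClosed()` simply does not exist in it. A sketch of the usual fix; the paths are assumptions for this CDH-style layout:

```
# See which Oracle driver Sqoop is shipping to the tasks:
ls /usr/lib/sqoop/lib | grep -i ojdbc
# Remove any pre-JDBC-4 driver and install ojdbc6.jar (JDBC 4, JDK 6+):
rm -f /usr/lib/sqoop/lib/ojdbc14.jar
cp ojdbc6.jar /usr/lib/sqoop/lib/
# Re-run the import; the driver jar travels with the job to the map tasks.
```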

Garbled Chinese text after importing an Oracle table into Hive with Sqoop

A question for the experts: after importing an Oracle table into Hive, Chinese text comes out garbled. The Oracle database character set is US7ASCII. Has anyone run into this kind of problem, or can anyone suggest a good solution? Thanks. Note: I have already tried convert(nsrdzdah,'utf8','US7ASCII'), but the text is still garbled; I also considered patching the Hive JDBC jar, but that felt unreliable so I did not try it.
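US7ASCII is a 7-bit character set, so Chinese text can only live in such a database as raw GBK/UTF-8 bytes that were stored without conversion. The JDBC thin driver always converts based on the declared database charset, so the high bytes are mangled before Sqoop ever sees them, which is why a SQL-level convert() cannot repair them afterwards. One workaround sketch, pulling the column as raw bytes server-side and decoding later; the host, table, and column names are assumptions:

```
# UTL_RAW.CAST_TO_RAW captures the stored bytes before any client-side
# charset conversion; the result lands in HDFS as hex text to decode offline.
sqoop import \
  --connect jdbc:oracle:thin:@db-host:1521:ORCL \
  --username USER -P \
  --query "SELECT id, UTL_RAW.CAST_TO_RAW(nsrdzdah) AS nsrdzdah_raw FROM my_table WHERE \$CONDITIONS" \
  --target-dir /tmp/raw_dump \
  -m 1
```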

Sqoop export from Hive to Oracle fails when the target table has DATE columns

The Sqoop statement:

```
sqoop export \
--connect jdbc:oracle:thin:@(description=(address=(protocol=tcp)(port=1521)(host=172.18.50.5))(connect_data=(service_name=rac))) \
--username dsp \
--password rac \
--table DSP.S_F_TKFTIS_ORDER_HIS \
--export-dir /user/hive2/warehouse/dml.db/dml_s_f_tkftis_order_his \
--columns L_SERIALNO,C_FLAG,C_ACCOTYPE,C_ACCO,C_TYPE,L_SERVICEID,C_MODE,D_DATE,C_ISACCO,C_FROM,C_USERCODE,D_SERVICEEND,D_SERVICESTART \
--input-fields-terminated-by '\001' \
--input-null-string '\\N' \
--input-null-non-string '\\N'
```

The target table structure:

```
create table S_F_TKFTIS_ORDER_HIS
(
  l_serialno     VARCHAR2(40),
  c_flag         CHAR(1),
  c_accotype     CHAR(1),
  c_acco         VARCHAR2(40),
  c_type         CHAR(1),
  l_serviceid    VARCHAR2(40),
  c_mode         CHAR(1),
  d_date         VARCHAR2(40),
  c_isacco       CHAR(1),
  c_from         CHAR(1),
  c_usercode     VARCHAR2(16),
  d_serviceend   VARCHAR2(40),
  d_servicestart VARCHAR2(40)
)
tablespace DSP_DATA
  pctfree 10
  initrans 1
  maxtrans 255
  storage
  (
    initial 64K
    next 1M
    minextents 1
    maxextents unlimited
  );
```

If I make all the Oracle target columns VARCHAR the export works; as soon as any column is DATE it fails. Could someone help me figure out why?
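Sqoop generally parses a string bound for an Oracle DATE column with `java.sql.Timestamp.valueOf()`, which only accepts the `yyyy-mm-dd hh:mm:ss[.fffffffff]` format; if the Hive text uses any other date format, the map task throws and the export fails, while VARCHAR2 targets accept the text untouched. A common workaround sketch: export into an all-VARCHAR2 staging table, then convert inside Oracle. The staging table name and date format are assumptions:

```
# Export into a staging table whose columns are all VARCHAR2:
sqoop export \
  --connect "jdbc:oracle:thin:@(description=(address=(protocol=tcp)(port=1521)(host=172.18.50.5))(connect_data=(service_name=rac)))" \
  --username dsp -P \
  --table DSP.S_F_TKFTIS_ORDER_HIS_STG \
  --export-dir /user/hive2/warehouse/dml.db/dml_s_f_tkftis_order_his \
  --input-fields-terminated-by '\001' \
  --input-null-string '\\N' --input-null-non-string '\\N'
# Then convert in Oracle (SQL*Plus or similar), matching your actual format:
#   INSERT INTO s_f_tkftis_order_his (d_date, ...)
#   SELECT TO_DATE(d_date, 'YYYY-MM-DD HH24:MI:SS'), ...
#   FROM s_f_tkftis_order_his_stg;
```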

Scheduling Sqoop imports from SQL Server into Hive

I want to import data from SQL Server into Hive with Sqoop on a schedule: create a Sqoop job and trigger it with crontab. Does anyone have a good working example? (A sketch follows below.)
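A minimal working pattern is a saved Sqoop job plus a crontab entry, sketched here; the SQL Server host, database, table, column, and paths are placeholders, and the Microsoft JDBC driver jar is assumed to be in `$SQOOP_HOME/lib`:

```
# Create a saved job; with --incremental append the Sqoop metastore
# remembers --last-value between runs, which is what makes scheduling work.
sqoop job --create orders_daily -- import \
  --connect 'jdbc:sqlserver://mssql-host:1433;databaseName=sales' \
  --username loader \
  --password-file /user/loader/sqoop.pwd \
  --table ORDERS \
  --hive-import --hive-table orders \
  --incremental append --check-column ORDER_ID --last-value 0 \
  -m 1

# crontab -e entry: run every day at 02:00. Use --password-file (a file
# readable only by you) rather than -P, since cron has no terminal to
# answer a password prompt. The cron user must be the same user that
# created the job, because the default job metastore is per-user.
# 0 2 * * * /usr/local/sqoop/bin/sqoop job --exec orders_daily >> /var/log/sqoop/orders_daily.log 2>&1
```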

Sqoop import from Oracle into HDFS fails with "connection refused"

The Sqoop lib directory contains ojdbc6.jar; the server answers ping, and I can reach Oracle from Toad.

```
sqoop import --connect jdbc:oracle:thin:@192.168.1.10:1521:ORCL --username -password --m 1 --table TEST1
```

It fails as follows:

```
ERROR manager.SqlManager: Error executing statement: java.sql.SQLException: The Network Adapter could not establish the connection
java.sql.SQLException: The Network Adapter could not establish the connection
	at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:412)
	at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:531)
	at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:221)
	at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
	at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:503)
	at java.sql.DriverManager.getConnection(DriverManager.java:571)
	at java.sql.DriverManager.getConnection(DriverManager.java:215)
	at org.apache.sqoop.manager.OracleManager.makeConnection(OracleManager.java:327)
	at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
	at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:744)
	at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:767)
	at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:270)
	at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:241)
	at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:227)
	at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:295)
	at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1833)
	at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1645)
	at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
	at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:359)
	at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:422)
	at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:672)
	at oracle.net.ns.NSProtocol.connect(NSProtocol.java:237)
	at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1042)
	at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:301)
	... 25 more
Caused by: java.net.ConnectException: 拒绝连接 (connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:579)
	at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:141)
	at oracle.net.nt.ConnOption.connect(ConnOption.java:123)
	at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:337)
	... 30 more
15/12/19 12:32:01 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: No columns to generate for ClassWriter
	at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1651)
	at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
```
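"The Network Adapter could not establish the connection" wrapping a `ConnectException: 拒绝连接` (connection refused) is a plain TCP connect failure; ping proves only that ICMP works, not that port 1521 is reachable from the Sqoop host. A few checks worth running from the machine where Sqoop runs, as a sketch:

```
# Is the listener port reachable at all from this host?
telnet 192.168.1.10 1521        # or: nc -zv 192.168.1.10 1521
# If the database is registered under a service name rather than a SID,
# the URL needs the // form instead of the :SID form:
#   jdbc:oracle:thin:@//192.168.1.10:1521/ORCL
# Also check the firewall on the Oracle host, e.g.:
#   iptables -L -n | grep 1521
```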

Sqoop export to Oracle puts "AS" before the table alias — how to avoid it?

The Sqoop export statement:

```
sqoop export \
--driver oracle.jdbc.driver.OracleDriver \
--connect jdbc:oracle:thin:@//10.10.122.165:1521/new \
--username test \
--password 'test2008' \
--table ORDER_O \
--export-dir /user/hive/warehouse/test.db/order_o \
--columns cv_time,cv_date \
--input-fields-terminated-by '\t' \
--input-lines-terminated-by '\n' \
--input-null-string '\\N' \
--input-null-non-string '\\N'
```

It fails with the error below:

![error screenshot](https://img-ask.csdn.net/upload/201906/06/1559787595_111989.png)

The generated SQL statement puts AS before the table alias, which Oracle cannot parse. How can I avoid this?
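Passing `--driver oracle.jdbc.driver.OracleDriver` forces Sqoop onto its generic JDBC manager, whose generated metadata SQL uses the `table AS alias` form that Oracle rejects; without `--driver`, the `jdbc:oracle:` URL makes Sqoop pick its Oracle-aware manager, which emits Oracle-safe SQL. A sketch of the same job with the flag simply dropped:

```
sqoop export \
  --connect jdbc:oracle:thin:@//10.10.122.165:1521/new \
  --username test \
  --password 'test2008' \
  --table ORDER_O \
  --export-dir /user/hive/warehouse/test.db/order_o \
  --columns cv_time,cv_date \
  --input-fields-terminated-by '\t' \
  --input-lines-terminated-by '\n' \
  --input-null-string '\\N' \
  --input-null-non-string '\\N'
```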

A question about incremental sync in Sqoop

I wrote the following incremental-sync statement:

```
sqoop job --create MY_SQOOP_TEST -- import \
  --connect jdbc:oracle:thin:@xxx:orcl \
  --username XXX --password XXX \
  --table MY_TEST \
  --hive-import --hive-table MY_SQOOP_TEST \
  --incremental lastmodified \
  --check-column sj \
  --last-value '2016/12/20 8:09:46'
```

My understanding: this creates the job MY_SQOOP_TEST; when it runs, it goes to Oracle, reads the table MY_TEST, and imports any rows whose sj value is later than 2016/12/20 8:09:46 into the Hive table MY_SQOOP_TEST. Is that understanding correct? If so, how do I actually run the job?
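That reading is essentially right, with one caveat: `sqoop job --create` only stores the definition; nothing runs until `--exec`. After each run the saved job updates `--last-value` in the metastore, so subsequent executions pick up where the previous one stopped. Also note that many 1.4.x releases insist on `--append` or `--merge-key` when `--incremental lastmodified` is used. A sketch of the job lifecycle:

```
sqoop job --list                 # verify the job exists
sqoop job --show MY_SQOOP_TEST   # inspect the stored definition and last-value
sqoop job --exec MY_SQOOP_TEST   # actually run the incremental import
```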

Full Sqoop extract from Postgres to Hive fails with "cannot resolve sql type for 1111"

I'm new to Sqoop and hit a problem while using it; how can this be solved? The `extra` column of the Postgres table being extracted holds JSON data, and the extract fails with "cannot resolve sql type for 1111" and "no java type for sql type for column extra". Following https://blog.csdn.net/Post_Yuan/article/details/79799980 and https://blog.csdn.net/lookqlp/article/details/52096193, I modified the Sqoop statement as follows.

The original statement, without --map-column-hive Extra=String and --map-column-java Extra=String:

```
sqoop import \
  --connect <jdbc url> \
  --username <user> \
  --password <password> \
  --table <table> \
  --null-string '\\N' \
  --null-non-string '\\N' \
  --hive-overwrite \
  --hcatalog-database <hive database> \
  --hcatalog-table <existing hive table> \
  --hcatalog-partition-keys dt \
  --hcatalog-partition-values 20180913 \
  --as-parquetfile \
  -m 1
```

This fails with "cannot resolve sql type for 1111" and "no java type for sql type for column extra". With --map-column-hive Extra=String and --map-column-java Extra=String added:

```
sqoop import \
  --connect <jdbc url> \
  --username <user> \
  --password <password> \
  --table <table> \
  --null-string '\\N' \
  --null-non-string '\\N' \
  --map-column-hive Extra=String \
  --map-column-java Extra=String \
  --hive-overwrite \
  --hcatalog-database <hive database> \
  --hcatalog-table <existing hive table> \
  --hcatalog-partition-keys dt \
  --hcatalog-partition-values 20180913 \
  --as-parquetfile \
  -m 1
```

This instead fails with "The connection attempt failed. connect timed out" and "Closed a connection to metastore, current connections: 0".
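SQL type 1111 is `java.sql.Types.OTHER`, which is how the Postgres driver reports json/jsonb columns, so `--map-column-java` is the right knob; the usual pitfall is that the column name must match what Sqoop sees, and for Postgres that is normally the lower-case `extra`, not `Extra`. The later "connect timed out" against the metastore is a separate network-level problem, not caused by the mapping. A sketch with assumed connection details:

```
sqoop import \
  --connect jdbc:postgresql://pg-host:5432/mydb \
  --username loader -P \
  --table my_table \
  --map-column-java extra=String \
  --hcatalog-database mydb \
  --hcatalog-table my_table \
  --hcatalog-partition-keys dt --hcatalog-partition-values 20180913 \
  -m 1
# Note: with --hcatalog-table the storage format comes from the existing
# Hive table definition, so --as-parquetfile is unnecessary there.
```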

Moving MySQL data to HDFS with the Sqoop client Java API

```
package com.hadoop.recommend;

import org.apache.sqoop.client.SqoopClient;
import org.apache.sqoop.model.MDriverConfig;
import org.apache.sqoop.model.MFromConfig;
import org.apache.sqoop.model.MJob;
import org.apache.sqoop.model.MLink;
import org.apache.sqoop.model.MLinkConfig;
import org.apache.sqoop.model.MSubmission;
import org.apache.sqoop.model.MToConfig;
import org.apache.sqoop.submission.counter.Counter;
import org.apache.sqoop.submission.counter.CounterGroup;
import org.apache.sqoop.submission.counter.Counters;
import org.apache.sqoop.validation.Status;

public class MysqlToHDFS {

    public static void main(String[] args) {
        sqoopTransfer();
    }

    public static void sqoopTransfer() {
        // Initialization
        String url = "http://master:12000/sqoop/";
        SqoopClient client = new SqoopClient(url);

        // Create the source link (JDBC)
        long fromConnectorId = 2;
        MLink fromLink = client.createLink(fromConnectorId);
        fromLink.setName("JDBC connector");
        fromLink.setCreationUser("hadoop");
        MLinkConfig fromLinkConfig = fromLink.getConnectorLinkConfig();
        fromLinkConfig.getStringInput("linkConfig.connectionString").setValue("jdbc:mysql://master:3306/hive");
        fromLinkConfig.getStringInput("linkConfig.jdbcDriver").setValue("com.mysql.jdbc.Driver");
        fromLinkConfig.getStringInput("linkConfig.username").setValue("root");
        fromLinkConfig.getStringInput("linkConfig.password").setValue("");
        Status fromStatus = client.saveLink(fromLink);
        if (fromStatus.canProceed()) {
            System.out.println("Created JDBC link, ID: " + fromLink.getPersistenceId());
        } else {
            System.out.println("Failed to create JDBC link");
        }

        // Create the destination link (HDFS)
        long toConnectorId = 1;
        MLink toLink = client.createLink(toConnectorId);
        toLink.setName("HDFS connector");
        toLink.setCreationUser("hadoop");
        MLinkConfig toLinkConfig = toLink.getConnectorLinkConfig();
        toLinkConfig.getStringInput("linkConfig.uri").setValue("hdfs://master:9000/");
        Status toStatus = client.saveLink(toLink);
        if (toStatus.canProceed()) {
            System.out.println("Created HDFS link, ID: " + toLink.getPersistenceId());
        } else {
            System.out.println("Failed to create HDFS link");
        }

        // Create the job
        long fromLinkId = fromLink.getPersistenceId();
        long toLinkId = toLink.getPersistenceId();
        MJob job = client.createJob(fromLinkId, toLinkId);
        job.setName("MySQL to HDFS job");
        job.setCreationUser("hadoop");
        // Source-side job configuration
        MFromConfig fromJobConfig = job.getFromJobConfig();
        fromJobConfig.getStringInput("fromJobConfig.schemaName").setValue("sqoop");
        fromJobConfig.getStringInput("fromJobConfig.tableName").setValue("sqoop");
        fromJobConfig.getStringInput("fromJobConfig.partitionColumn").setValue("id");
        MToConfig toJobConfig = job.getToJobConfig();
        toJobConfig.getStringInput("toJobConfig.outputDirectory").setValue("/user/hdfs/recommend");
        MDriverConfig driverConfig = job.getDriverConfig();
        driverConfig.getStringInput("throttlingConfig.numExtractors").setValue("3");
        Status status = client.saveJob(job);
        if (status.canProceed()) {
            System.out.println("Created job, ID: " + job.getPersistenceId());
        } else {
            System.out.println("Failed to create job.");
        }

        // Start the job
        long jobId = job.getPersistenceId();
        MSubmission submission = client.startJob(jobId);
        System.out.println("Job submission status: " + submission.getStatus());
        while (submission.getStatus().isRunning() && submission.getProgress() != -1) {
            System.out.println("Progress: " + String.format("%.2f %%", submission.getProgress() * 100));
            // Report progress every three seconds
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        System.out.println("Job finished...");
        System.out.println("Hadoop job ID: " + submission.getExternalId());
        Counters counters = submission.getCounters();
        if (counters != null) {
            System.out.println("Counters:");
            for (CounterGroup group : counters) {
                System.out.print("\t");
                System.out.println(group.getName());
                for (Counter counter : group) {
                    System.out.print("\t\t");
                    System.out.print(counter.getName());
                    System.out.print(": ");
                    System.out.println(counter.getValue());
                }
            }
        }
        if (submission.getExceptionInfo() != null) {
            System.out.println("Job failed with exception: " + submission.getExceptionInfo());
        }
        System.out.println("MySQL-to-HDFS transfer via Sqoop finished");
    }
}
```

It throws the error below — what is going on?

![error screenshot](https://img-ask.csdn.net/upload/201508/26/1440518641_700480.png)

sqoop export to mysql

When exporting from Hive to MySQL with Sqoop, the command already includes --input-null-string '\\N' --input-null-non-string '\\N'. When the first field is NULL the export fails; when the first field is non-NULL and other fields are NULL, it succeeds. What is going on, and how can I fix it?
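Two things worth checking, sketched below. First, whether the leading column maps to a NOT NULL or primary-key column in MySQL: a NULL there fails the INSERT regardless of the null-string options. Second, how NULL is actually encoded at the start of a line in the files (Hive's default is `\N`). The paths here are placeholders:

```
# Show the literal bytes at the start of a few rows, so you can confirm
# the lines really begin with \N and the field delimiter matches:
hadoop fs -cat /user/hive/warehouse/mydb.db/mytable/* | head -5 | od -c | head
# Re-running the export with --verbose makes the failing record appear
# in the map task log, which usually pinpoints the offending column.
```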

Sqoop import from DB2 fails with "Connection timed out"

Sqoop import from DB2 into HDFS fails with a connection timeout. The list-tables command connects fine and returns correct results; remote DB2 connections work, and a telnet test of the port also succeeds. DB2 is v9.7, using the JDBC driver bundled with the install package. The error is below.

```
[biadmin@Hadoop01 sqoop]$ ./bin/sqoop import --connect jdbc:db2://9.112.30.177:50000/content --username db2admin --P --table DB2ADMIN.PERSON --as-textfile -m 1 --target-dir /user/test
Warning: /opt/ibm/biginsights/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
/opt/ibm/biginsights/sqoop/bin/configure-sqoop: line 181: /opt/ibm/biginsights/hive/hcatalog/bin/hcat: Permission denied
16/03/02 08:27:38 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
Enter password:
16/03/02 08:27:46 INFO manager.SqlManager: Using default fetchSize of 1000
16/03/02 08:27:46 INFO tool.CodeGenTool: Beginning code generation
16/03/02 08:28:49 ERROR manager.SqlManager: Error executing statement: com.ibm.db2.jcc.am.DisconnectNonTransientConnectionException: [jcc][t4][2043][11550][4.14.113] Exception java.net.ConnectException: Error opening socket to server /9.112.30.177 on port 50,000 with message: Connection timed out. ERRORCODE=-4499, SQLSTATE=08001
com.ibm.db2.jcc.am.DisconnectNonTransientConnectionException: [jcc][t4][2043][11550][4.14.113] Exception java.net.ConnectException: Error opening socket to server /9.112.30.177 on port 50,000 with message: Connection timed out. ERRORCODE=-4499, SQLSTATE=08001
	at com.ibm.db2.jcc.am.ed.a(ed.java:320)
	at com.ibm.db2.jcc.am.ed.a(ed.java:338)
	at com.ibm.db2.jcc.t4.vb.a(vb.java:434)
	at com.ibm.db2.jcc.t4.vb.<init>(vb.java:93)
	at com.ibm.db2.jcc.t4.a.b(a.java:354)
	at com.ibm.db2.jcc.t4.b.newAgent_(b.java:2030)
	at com.ibm.db2.jcc.am.Connection.initConnection(Connection.java:731)
	at com.ibm.db2.jcc.am.Connection.<init>(Connection.java:680)
	at com.ibm.db2.jcc.t4.b.<init>(b.java:334)
	at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(DB2SimpleDataSource.java:232)
	at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(DB2SimpleDataSource.java:198)
	at com.ibm.db2.jcc.DB2Driver.connect(DB2Driver.java:475)
	at com.ibm.db2.jcc.DB2Driver.connect(DB2Driver.java:116)
	at java.sql.DriverManager.getConnection(DriverManager.java:582)
	at java.sql.DriverManager.getConnection(DriverManager.java:226)
	at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:885)
	at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
	at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:744)
	at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:767)
	at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:270)
	at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:241)
	at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:227)
	at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:295)
	at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1833)
	at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1645)
	at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Caused by: java.net.ConnectException: Connection timed out
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:369)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:230)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:212)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
	at java.net.Socket.connect(Socket.java:642)
	at com.ibm.db2.jcc.t4.v.run(v.java:49)
	at java.security.AccessController.doPrivileged(AccessController.java:330)
	at com.ibm.db2.jcc.t4.vb.a(vb.java:420)
	... 31 more
16/03/02 08:28:49 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: No columns to generate for ClassWriter
	at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1651)
	at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
```
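list-tables and the import's code-generation step open the same kind of JDBC connection from the same client, so a failure in one but not the other suggests something intermittent on the network path: firewall connection limits, a flaky route, or DNS. A sketch of testing that repeatedly outside Sqoop (host and port taken from the post):

```
for i in $(seq 1 10); do
  nc -zv -w 5 9.112.30.177 50000 && echo "attempt $i ok" || echo "attempt $i FAILED"
  sleep 2
done
# Unrelated but visible above: the "hcat: Permission denied" warning goes
# away once the permissions on /opt/ibm/biginsights/hive/hcatalog/bin/hcat
# are fixed.
```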

Sqoop 1 via the Java API: "Can't get Master Kerberos principal for use as renewer"

Full code below. **Using the Sqoop 1 Java API through Kerberos fails with "Can't get Master Kerberos principal for use as renewer".**

```
public class SqoopTest {
    public static void main(String[] args) throws Exception {
        // =================================================================
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://101.30.188.246:9000/"); // HDFS service address
        String keytabFile = "/home/hcj/tab/hdfs.keytab";
        String principle = "hdfs@MSO.COM";
        String krbConf = "/home/hcj/krb5.conf";
        System.setProperty("java.security.krb5.conf", krbConf);
        conf.set("hadoop.security.authentication", "Kerberos");
        //conf.setBoolean("fs.hdfs.imHADOpl.disable.cache", true);
        conf.set("keytab.file", keytabFile);
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(principle, keytabFile);
        // =================================================================
        String[] arg = new String[] {
            /*
             * sqoop export --connect jdbc:mysql://127.0.0.1:3306/test --username jamie --table
             * persons --export-dir /user/hive/warehouse/dw_api_server.db/persons2/
             * --input-fields-terminated-by '\t' --input-lines-terminated-by '\n'
             */
            "--connect", "jdbc:mysql://114.115.156.37:3306/test",
            "--username", "root",
            "--password", "root",
            "--table", "persons",
            "--m", "1",
            "--export-dir", "hdfs://101.30.188.246:9000/user/hive/warehouse/dw_api_server.db/persons/",
            "--input-fields-terminated-by", "\t"
            //"-columns","id,city"
        };
        String[] expandArguments = OptionsFileUtil.expandArguments(arg);
        SqoopTool tool = SqoopTool.getTool("export");
        Configuration loadPlugins = SqoopTool.loadPlugins(conf);
        Sqoop sqoop = new Sqoop((com.cloudera.sqoop.tool.SqoopTool) tool, loadPlugins);
        int res = Sqoop.runSqoop(sqoop, expandArguments);
        if (res == 0)
            System.out.println("success");
    }
}
```

The error:

```
java.io.IOException: Can't get Master Kerberos principal for use as renewer
	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:133)
	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:166)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
	at org.apache.sqoop.mapreduce.ExportJobBase.doSubmitJob(ExportJobBase.java:322)
	at org.apache.sqoop.mapreduce.ExportJobBase.runJob(ExportJobBase.java:299)
	at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:440)
	at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
	at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
	at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
	at com.mshuoke.datagw.impl.sqoop.SqoopTest.main(SqoopTest.java:58)
```

Any ideas?
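"Can't get Master Kerberos principal for use as renewer" generally means the job's Configuration carries no YARN/MapReduce settings, so `yarn.resourcemanager.principal` is missing when TokenCache collects delegation tokens; the Configuration built in main() only sets `fs.default.name` by hand. One sketch of a fix: run the program with the cluster's config directory on the classpath so core-site.xml, hdfs-site.xml, yarn-site.xml and mapred-site.xml are picked up automatically (paths are assumptions):

```
export HADOOP_CONF_DIR=/etc/hadoop/conf
java -cp "myapp.jar:$(hadoop classpath)" SqoopTest
# Alternatively, load the files explicitly in the code before submitting:
#   conf.addResource(new Path("/etc/hadoop/conf/yarn-site.xml"));
#   conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
```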

Data migration: a character-set conversion question

A question for the experts: our Oracle character set is US7ASCII and the Hive metadata encoding is UTF-8. After importing the data into Hive with Sqoop, Chinese text is garbled. Does the import actually convert the data's character set to UTF-8, or is the garbling purely a display issue caused by the differing settings between Hive and Oracle (i.e. the data sitting in Hive is still in US7ASCII)? Please help.
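As far as I know, the JDBC thin driver converts from the database character set to Java strings on read and Sqoop then writes UTF-8 text, so the bytes in Hive are no longer US7ASCII: they are the (mis)converted UTF-8 result. With a US7ASCII database the high bits of Chinese bytes are lost during that read-time conversion, so the damage happens at import time, not at display time. A sketch of confirming the source character set with `sqoop eval` (connection details are placeholders):

```
sqoop eval \
  --connect jdbc:oracle:thin:@db-host:1521:ORCL \
  --username USER -P \
  --query "SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET'"
```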

Sqoop import run from Oozie fails

When running a Sqoop import from Oozie I get the error below. Has anyone run into this, and do you have suggestions?

```
Log Length: 5997
log4j:ERROR Could not find value for key log4j.appender.CLA
log4j:ERROR Could not instantiate appender named "CLA".
log4j:WARN No appenders could be found for logger (org.apache.hadoop.yarn.client.RMProxy).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
注: /tmp/sqoop-yarn/compile/9f3c81b0062cec6973184f1f95c215f9/JC_AJXX.java使用或覆盖了已过时的 API。
注: 有关详细信息, 请使用 -Xlint:deprecation 重新编译。
org.kitesdk.data.DatasetNotFoundException: Unknown dataset URI pattern: dataset:hive:/test/JC_AJXX
all_scheme are [URIPattern{pattern=file:/*path/:namespace/:dataset?absolute=true}, URIPattern{pattern=file:*path/:namespace/:dataset}, URIPattern{pattern=hdfs:/*path/:namespace/:dataset?absolute=true}, URIPattern{pattern=hdfs:*path/:namespace/:dataset}, URIPattern{pattern=webhdfs:/*path/:namespace/:dataset?absolute=true}]
Check that JARs for hive datasets are on the classpath
	at org.kitesdk.data.spi.Registration.lookupDatasetUri(Registration.java:108)
	at org.kitesdk.data.Datasets.create(Datasets.java:228)
	at org.kitesdk.data.Datasets.create(Datasets.java:307)
	at org.apache.sqoop.mapreduce.ParquetJob.createDataset(ParquetJob.java:107)
	at org.apache.sqoop.mapreduce.ParquetJob.configureImportJob(ParquetJob.java:89)
	at org.apache.sqoop.mapreduce.DataDrivenImportJob.configureMapper(DataDrivenImportJob.java:106)
	at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:260)
	at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:668)
	at org.apache.sqoop.manager.OracleManager.importTable(OracleManager.java:444)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
	at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:196)
	at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:176)
	at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:46)
	at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:46)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:228)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runSubtask(LocalContainerLauncher.java:370)
	at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runTask(LocalContainerLauncher.java:295)
	at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.access$200(LocalContainerLauncher.java:181)
	at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler$1.run(LocalContainerLauncher.java:224)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Intercepting System.exit(1)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]
十二月 04, 2015 2:35:06 下午 com.google.inject.servlet.InternalServletModule$BackwardsCompatibleServletContextProvider get
警告: You are attempting to use a deprecated API (specifically, attempting to @Inject ServletContext inside an eagerly created singleton. While we allow this for backwards compatibility, be warned that this MAY have unexpected behavior if you have more than one injector (with ServletModule) running in the same JVM. Please consult the Guice documentation at http://code.google.com/p/google-guice/wiki/Servlets for more information.
十二月 04, 2015 2:35:07 下午 com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
信息: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
十二月 04, 2015 2:35:07 下午 com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
信息: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
十二月 04, 2015 2:35:07 下午 com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
信息: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class
十二月 04, 2015 2:35:07 下午 com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
信息: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
十二月 04, 2015 2:35:07 下午 com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
信息: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
十二月 04, 2015 2:35:07 下午 com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
信息: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
十二月 04, 2015 2:35:07 下午 com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
信息: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
```
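The key lines are "Unknown dataset URI pattern: dataset:hive:/test/JC_AJXX" and the hint "Check that JARs for hive datasets are on the classpath": a parquet/Hive import goes through Kite, and the Oozie launcher appears to be missing the Kite Hive-dataset jars. Two common ways out, sketched here with assumed paths and jar names:

```
# 1) Put the missing jars into the workflow's own lib/ directory, next to
#    workflow.xml on HDFS, so the launcher picks them up:
hadoop fs -put kite-data-hive-*.jar hdfs:///user/me/myapp/lib/
# 2) Or sidestep Kite entirely by importing as plain text rather than
#    --as-parquetfile in the workflow's Sqoop <command>.
```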

Running "source /etc/profile" throws errors and many commands stop working — how to fix?

After running source /etc/profile, errors appear and many commands stop working.

```
# /etc/profile

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.

pathmunge () {
    case ":${PATH}:" in
        *:"$1":*)
            ;;
        *)
            if [ "$2" = "after" ] ; then
                PATH=$PATH:$1
            else
                PATH=$1:$PATH
            fi
    esac
}

if [ -x /usr/bin/id ]; then
    if [ -z "$EUID" ]; then
        # ksh workaround
        EUID=`/usr/bin/id -u`
        UID=`/usr/bin/id -ru`
    fi
    USER="`/usr/bin/id -un`"
    LOGNAME=$USER
    MAIL="/var/spool/mail/$USER"
fi

# Path manipulation
if [ "$EUID" = "0" ]; then
    pathmunge /usr/sbin
    pathmunge /usr/local/sbin
else
    pathmunge /usr/local/sbin after
    pathmunge /usr/sbin after
fi

HOSTNAME=`/usr/bin/hostname 2>/dev/null`
HISTSIZE=1000
if [ "$HISTCONTROL" = "ignorespace" ] ; then
    export HISTCONTROL=ignoreboth
else
    export HISTCONTROL=ignoredups
fi

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL

# By default, we want umask to get set. This sets it for login shell
# Current threshold for system reserved uid/gids is 200
# You could check uidgid reservation validity in
# /usr/share/doc/setup-*/uidgid file
if [ $UID -gt 199 ] && [ "`/usr/bin/id -gn`" = "`/usr/bin/id -un`" ]; then
    umask 002
else
    umask 022
fi

for i in /etc/profile.d/*.sh /etc/profile.d/sh.local ; do
    if [ -r "$i" ]; then
        if [ "${-#*i}" != "$-" ]; then
            . "$i"
        else
            . "$i" >/dev/null
        fi
    fi
done

unset i
unset -f pathmunge

if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

export JAVA_HOME=/usr/local/apps/jdk1.8/jdk1.8.0_161
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/usr/local/apps/hadoop-2.8.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export ZOOKEEPER_HOME=/usr/local/apps/zookeeper-3.4.7
export PATH=$PATH:$ZOOKEEPER_HOME/bin
export HBASE_HOME=/usr/local/apps/hbase-1.1.2
export PATH=$PATN:$HBASE_HOME/bin:$HBASE_HOME/conf
export KAFKA_HOME=/usr/local/apps/kafka_2.11-0.10.2.1
export PATH=$PATH:$KAFKA_HOME/bin
export HIVE_HOME=/usr/local/apps/apache-hive-1.2.0-bin
export PATH=$PATH:$HIVE_HOME/bin
export CLASS_PATH=$CALSSPATH:$HIVE_HOME/lib
export SCALA_HOME=/usr/local/apps/scala-2.11.8
export PATH=$PATH:$SCALA_HOME/bin
export SPARK_HOME=/usr/local/apps/spark-2.2.0-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
export PATH=$PATH:/usr/local/mysql//bin
source /etc/profile
export ODBCINI=/etc/odbc.ini
export ODBCSYSINI=/etc
export MAVEN_HOME=/usr/local/apps/apache-maven-3.6.1
export PATH=$PATH:$MAVEN_HOME/bin
export SQOOP_HOME=/usr/local/apps/sqoop-1.4.6
export PATH=$PATH:$SQOOP_HOME/bin
```

![error screenshot](https://img-ask.csdn.net/upload/201908/01/1564650725_86379.jpg)
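Two lines in this file explain the symptoms. First, `export PATH=$PATN:$HBASE_HOME/bin:$HBASE_HOME/conf` misspells `$PATH` as `$PATN`, which expands to nothing and throws away everything already on PATH (/bin, /usr/bin, and so on) — exactly "many commands stop working". Second, the `source /etc/profile` line inside /etc/profile itself re-sources the file recursively and must be deleted. A sketch of the fix plus how to recover the current shell:

```
# Corrected export (and delete the stray "source /etc/profile" line):
export PATH=$PATH:$HBASE_HOME/bin:$HBASE_HOME/conf
# Recover the broken shell session without rebooting:
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# (The CLASS_PATH line also misspells $CLASSPATH as $CALSSPATH, though
# that one only affects the Java classpath, not command lookup.)
```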

Cloudera Manager offline install: agent times out downloading resources from the master node

Error log:

```
[19/Nov/2018 16:16:04 +0000] 2789 MainThread stacks_collection_manager INFO Using max_uncompressed_file_size_bytes: 5242880
[19/Nov/2018 16:16:04 +0000] 2789 MainThread __init__ INFO Importing metric schema from file /opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/monitor/schema.json
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Supervised processes will add the following to their environment (in addition to the supervisor's env): {'CDH_PARQUET_HOME': '/usr/lib/parquet', 'JSVC_HOME': '/usr/libexec/bigtop-utils', 'CMF_PACKAGE_DIR': '/opt/cloudera-manager/cm-5.10.2/lib64/cmf/service', 'CDH_HADOOP_BIN': '/usr/bin/hadoop', 'MGMT_HOME': '/opt/cloudera-manager/cm-5.10.2/share/cmf', 'CDH_IMPALA_HOME': '/usr/lib/impala', 'CDH_YARN_HOME': '/usr/lib/hadoop-yarn', 'CDH_HDFS_HOME': '/usr/lib/hadoop-hdfs', 'PATH': '/sbin:/usr/sbin:/bin:/usr/bin', 'CDH_HUE_PLUGINS_HOME': '/usr/lib/hadoop', 'CM_STATUS_CODES': u'STATUS_NONE HDFS_DFS_DIR_NOT_EMPTY HBASE_TABLE_DISABLED HBASE_TABLE_ENABLED JOBTRACKER_IN_STANDBY_MODE YARN_RM_IN_STANDBY_MODE', 'KEYTRUSTEE_KP_HOME': '/usr/share/keytrustee-keyprovider', 'CLOUDERA_ORACLE_CONNECTOR_JAR': '/usr/share/java/oracle-connector-java.jar', 'CDH_SQOOP2_HOME': '/usr/lib/sqoop2', 'KEYTRUSTEE_SERVER_HOME': '/usr/lib/keytrustee-server', 'CDH_MR2_HOME': '/usr/lib/hadoop-mapreduce', 'HIVE_DEFAULT_XML': '/etc/hive/conf.dist/hive-default.xml', 'CLOUDERA_POSTGRESQL_JDBC_JAR': '/opt/cloudera-manager/cm-5.10.2/share/cmf/lib/postgresql-9.0-801.jdbc4.jar', 'CDH_KMS_HOME': '/usr/lib/hadoop-kms', 'CDH_HBASE_HOME': '/usr/lib/hbase', 'CDH_SQOOP_HOME': '/usr/lib/sqoop', 'WEBHCAT_DEFAULT_XML': '/etc/hive-webhcat/conf.dist/webhcat-default.xml', 'CDH_OOZIE_HOME': '/usr/lib/oozie', 'CDH_ZOOKEEPER_HOME': '/usr/lib/zookeeper', 'CDH_HUE_HOME': '/usr/lib/hue', 'CLOUDERA_MYSQL_CONNECTOR_JAR': '/usr/share/java/mysql-connector-java.jar', 'CDH_HBASE_INDEXER_HOME': '/usr/lib/hbase-solr', 'CDH_MR1_HOME': '/usr/lib/hadoop-0.20-mapreduce', 'CDH_SOLR_HOME': '/usr/lib/solr', 'CDH_PIG_HOME': '/usr/lib/pig', 'CDH_SENTRY_HOME': '/usr/lib/sentry', 'CDH_CRUNCH_HOME': '/usr/lib/crunch', 'CDH_LLAMA_HOME': '/usr/lib/llama/', 'CDH_HTTPFS_HOME': '/usr/lib/hadoop-httpfs', 'ROOT': '/opt/cloudera-manager/cm-5.10.2/lib64/cmf', 'CDH_HADOOP_HOME': '/usr/lib/hadoop', 'CDH_HIVE_HOME': '/usr/lib/hive', 'ORACLE_HOME': '/usr/share/oracle/instantclient', 'CDH_HCAT_HOME': '/usr/lib/hive-hcatalog', 'CDH_KAFKA_HOME': '/usr/lib/kafka', 'CDH_SPARK_HOME': '/usr/lib/spark', 'TOMCAT_HOME': '/usr/lib/bigtop-tomcat', 'CDH_FLUME_HOME': '/usr/lib/flume-ng'}
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO To override these variables, use /etc/cloudera-scm-agent/config.ini. Environment variables for CDH locations are not used when CDH is installed from parcels.
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Created /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/process
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Chmod'ing /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/process to 0751
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Created /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/supervisor
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Chmod'ing /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/supervisor to 0751
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Created /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/flood
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Chowning /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/flood to cloudera-scm (498) cloudera-scm (498)
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Chmod'ing /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/flood to 0751
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Created /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/supervisor/include
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent INFO Chmod'ing /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/supervisor/include to 0751
[19/Nov/2018 16:16:04 +0000] 2789 MainThread agent ERROR Failed to connect to previous supervisor.
Traceback (most recent call last):
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/agent.py", line 2073, in find_or_start_supervisor
    self.configure_supervisor_clients()
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/agent.py", line 2254, in configure_supervisor_clients
    supervisor_options.realize(args=["-c", os.path.join(self.supervisor_dir, "supervisord.conf")])
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/supervisor-3.0-py2.6.egg/supervisor/options.py", line 1599, in realize
    Options.realize(self, *arg, **kw)
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/supervisor-3.0-py2.6.egg/supervisor/options.py", line 333, in realize
    self.process_config()
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/supervisor-3.0-py2.6.egg/supervisor/options.py", line 341, in process_config
    self.process_config_file(do_usage)
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/supervisor-3.0-py2.6.egg/supervisor/options.py", line 376, in process_config_file
    self.usage(str(msg))
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/supervisor-3.0-py2.6.egg/supervisor/options.py", line 164, in usage
    self.exit(2)
SystemExit: 2
[19/Nov/2018 16:16:04 +0000] 2789 MainThread tmpfs INFO Successfully mounted tmpfs at /opt/cloudera-manager/cm-5.10.2/run/cloudera-scm-agent/process
[19/Nov/2018 16:16:05 +0000] 2789 MainThread agent INFO Trying to connect to newly launched supervisor (Attempt 1)
[19/Nov/2018 16:16:05 +0000] 2789 MainThread agent INFO Supervisor version: 3.0, pid: 2821
[19/Nov/2018 16:16:05 +0000] 2789 MainThread agent INFO Successfully connected to supervisor
[19/Nov/2018 16:16:05 +0000] 2789 MainThread status_server INFO Using maximum impala profile bundle size of 1073741824 bytes.
[19/Nov/2018 16:16:05 +0000] 2789 MainThread status_server INFO Using maximum stacks log bundle size of 1073741824 bytes.
[19/Nov/2018 16:16:05 +0000] 2789 MainThread _cplogging INFO [19/Nov/2018:16:16:05] ENGINE Bus STARTING
[19/Nov/2018 16:16:05 +0000] 2789 MainThread _cplogging INFO [19/Nov/2018:16:16:05] ENGINE Started monitor thread '_TimeoutMonitor'.
[19/Nov/2018 16:16:06 +0000] 2789 MainThread _cplogging INFO [19/Nov/2018:16:16:06] ENGINE Serving on yingzhi01.com:9000
[19/Nov/2018 16:16:06 +0000] 2789 MainThread _cplogging INFO [19/Nov/2018:16:16:06] ENGINE Bus STARTED
[19/Nov/2018 16:16:06 +0000] 2789 MainThread __init__ INFO New monitor: (<cmf.monitor.host.HostMonitor object at 0x2990c50>,)
[19/Nov/2018 16:16:06 +0000] 2789 MonitorDaemon-Scheduler __init__ INFO Monitor ready to report: ('HostMonitor',)
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Setting default socket timeout to 30
[19/Nov/2018 16:16:06 +0000] 2789 Monitor-HostMonitor network_interfaces INFO NIC iface eth0 doesn't support ETHTOOL (95)
[19/Nov/2018 16:16:06 +0000] 2789 Monitor-HostMonitor throttling_logger ERROR Error getting directory attributes for /opt/cloudera-manager/cm-5.10.2/log/cloudera-scm-agent
Traceback (most recent call last):
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/monitor/dir_monitor.py", line 90, in _get_directory_attributes
    name = pwd.getpwuid(uid)[0]
KeyError: 'getpwuid(): uid not found: 1106'
[19/Nov/2018 16:16:06 +0000] 2789 MainThread heartbeat_tracker INFO HB stats (seconds): num:1 LIFE_MIN:0.22 min:0.22 mean:0.22 max:0.22 LIFE_MAX:0.22
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO CM server guid: dceeafae-a884-42f1-ba7b-4ee187ef3bef
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Using parcels directory from server provided value: /opt/cloudera/parcels
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent WARNING Expected user root for /opt/cloudera/parcels but was cloudera-scm
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent WARNING Expected group root for /opt/cloudera/parcels but was cloudera-scm
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Created /opt/cloudera/parcel-cache
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Chowning /opt/cloudera/parcel-cache to root (0) root (0)
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Chmod'ing /opt/cloudera/parcel-cache to 0755
[19/Nov/2018 16:16:06 +0000] 2789 MainThread parcel INFO Agent does create users/groups and apply file permissions
[19/Nov/2018 16:16:06 +0000] 2789 MainThread downloader INFO Downloader path: /opt/cloudera/parcel-cache
[19/Nov/2018 16:16:06 +0000] 2789 MainThread parcel_cache INFO Using /opt/cloudera/parcel-cache for parcel cache
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Flood daemon (re)start attempt
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Created /opt/cloudera/parcels/.flood
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Chowning /opt/cloudera/parcels/.flood to cloudera-scm (498) cloudera-scm (498)
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Chmod'ing /opt/cloudera/parcels/.flood to 0755
[19/Nov/2018 16:16:06 +0000] 2789 MainThread agent INFO Triggering supervisord update.
[19/Nov/2018 16:16:36 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out
[19/Nov/2018 16:16:36 +0000] 2789 MainThread agent INFO Active parcel list updated; recalculating component info.
[19/Nov/2018 16:16:36 +0000] 2789 MainThread throttling_logger WARNING CMF_AGENT_JAVA_HOME environment variable host override will be deprecated in future. JAVA_HOME setting configured from CM server takes precedence over host agent override. Configure JAVA_HOME setting from CM server.
[19/Nov/2018 16:16:36 +0000] 2789 MainThread throttling_logger INFO Identified java component java8 with full version JAVA_HOME=/opt/modules/jdk1.8.0_144 java version "1.8.0_144" Java(TM) SE Runtime Environment (build 1.8.0_144-b01) Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode) for requested version .
[19/Nov/2018 16:16:36 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.6659779549
[19/Nov/2018 16:16:36 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last
[19/Nov/2018 16:16:44 +0000] 2789 Monitor-HostMonitor throttling_logger ERROR Timeout with args ['ntpdc', '-np'] None
[19/Nov/2018 16:16:44 +0000] 2789 Monitor-HostMonitor throttling_logger ERROR Failed to collect NTP metrics
Traceback (most recent call last):
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/monitor/host/ntp_monitor.py", line 48, in collect
    self.collect_ntpd()
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/monitor/host/ntp_monitor.py", line 66, in collect_ntpd
    result, stdout, stderr = self._subprocess_with_timeout(args, self._timeout)
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/monitor/host/ntp_monitor.py", line 38, in _subprocess_with_timeout
    return subprocess_with_timeout(args, timeout)
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/subprocess_timeout.py", line 94, in subprocess_with_timeout
    raise Exception("timeout with args %s" % args)
Exception: timeout with args ['ntpdc', '-np']
[19/Nov/2018 16:17:06 +0000] 2789 DnsResolutionMonitor throttling_logger INFO Using java location: '/opt/modules/jdk1.8.0_144/bin/java'.
[19/Nov/2018 16:17:06 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out
[19/Nov/2018 16:17:06 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1082139015
[19/Nov/2018 16:17:06 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last
[19/Nov/2018 16:17:36 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out
[19/Nov/2018 16:17:36 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1235852242
[19/Nov/2018 16:17:36 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last
[19/Nov/2018 16:18:07 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out
[19/Nov/2018 16:18:07 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1040799618
[19/Nov/2018 16:18:07 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last
[19/Nov/2018 16:18:37 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out
[19/Nov/2018 16:18:37 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1849529743
[19/Nov/2018 16:18:37 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last
[19/Nov/2018 16:19:07 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out
[19/Nov/2018 16:19:07 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1211960316
[19/Nov/2018 16:19:07 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last
[19/Nov/2018 16:19:37 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out
[19/Nov/2018 16:19:37 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1215620041
[19/Nov/2018 16:19:37 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last
[19/Nov/2018 16:20:01 +0000] 2789 CP Server Thread-4 _cplogging INFO 192.168.164.35 - - [19/Nov/2018:16:20:01] "GET /heartbeat HTTP/1.1" 200 2 "" "NING/1.0"
[19/Nov/2018 16:20:04 +0000] 2789 CP Server Thread-5 _cplogging INFO 192.168.164.35 - - [19/Nov/2018:16:20:04] "GET /heartbeat HTTP/1.1" 200 2 "" "NING/1.0"
[19/Nov/2018 16:20:07 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out
[19/Nov/2018 16:20:07 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1212861538
[19/Nov/2018 16:20:07 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last
[19/Nov/2018 16:20:37 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out
[19/Nov/2018 16:20:37 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1753029823
[19/Nov/2018 16:20:37 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last
[19/Nov/2018 16:20:37 +0000] 2789 Thread-13 downloader INFO Fetching torrent: http://yingzhi01.com:7180/cmf/parcel/download/CDH-5.10.2-1.cdh5.10.2.p0.5-el6.parcel.torrent
[19/Nov/2018 16:20:37 +0000] 2789 Thread-13 downloader INFO Starting download of: http://yingzhi01.com:7180/cmf/parcel/download/CDH-5.10.2-1.cdh5.10.2.p0.5-el6.parcel
[19/Nov/2018 16:21:07 +0000] 2789 Thread-13 downloader ERROR Unexpected exception during download
Traceback (most recent call last):
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/cmf/downloader.py", line 279, in download
    self.client.AddTorrent(torrent_url)
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/flood/util/cmd.py", line 159, in __call__
    return self.fn.__get__(self.binding)(*args, **kwargs)
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/flood/util/rpc.py", line 68, in <lambda>
    return lambda *pargs, **kwargs: self._invoke(*pargs, **kwargs)
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/flood/util/rpc.py", line 77, in _invoke
    return rpcClient.requestor.request(self.schema.name, msg)
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/flood/util/rpc.py", line 129, in requestor
    return avro.ipc.Requestor(self.SCHEMA, self.transceiver)
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.10.2-py2.6.egg/flood/util/rpc.py", line 125, in transceiver
    return avro.ipc.HTTPTransceiver(self.server.host, self.server.port)
  File "/opt/cloudera-manager/cm-5.10.2/lib64/cmf/agent/build/env/lib/python2.6/site-packages/avro-1.6.3-py2.6.egg/avro/ipc.py", line 469, in __init__
    self.conn.connect()
  File "/usr/lib64/python2.6/httplib.py", line 771, in connect
    self.timeout)
  File "/usr/lib64/python2.6/socket.py", line 567, in create_connection
    raise error, msg
timeout: timed out
[19/Nov/2018 16:21:07 +0000] 2789 Thread-13 downloader INFO Finished download [ url: http://yingzhi01.com:7180/cmf/parcel/download/CDH-5.10.2-1.cdh5.10.2.p0.5-el6.parcel, state: exception, total_bytes: 0, downloaded_bytes: 0, start_time: 2018-11-19 16:20:37, download_end_time: , end_time: 2018-11-19 16:21:07, code: 600, exception_msg: timed out, path: None ]
[19/Nov/2018 16:21:07 +0000] 2789 MainThread downloader ERROR Failed rack peer update: timed out
[19/Nov/2018 16:21:07 +0000] 2789 MainThread agent WARNING Long HB processing time: 30.1247620583
[19/Nov/2018 16:21:07 +0000] 2789 MainThread agent WARNING Delayed HB: 15s since last
[19/Nov/2018 16:21:07 +0000] 2789 Thread-13 downloader INFO Fetching torrent: http://yingzhi01.com:7180/cmf/parcel/download/CDH-5.10.2-1.cdh5.10.2.p0.5-el6.parcel.torrent
[19/Nov/2018 16:21:08 +0000] 2789 Thread-13 downloader INFO Starting download of: http://yingzhi01.com:7180/cmf/parcel/download/CDH-5.10.2-1.cdh5.10.2.p0.5-el6.parcel
[19/Nov/2018 16:21:38 +0000] 2789 Thread-13 downloader ERROR Unexpected exception during download
```

After that, the same timeout error just keeps repeating. Any pointers would be appreciated.
