Sqoop1 driven from Java: "cannot find symbol" error after packaging and running

Sqoop1 works perfectly when tested in the IDE, but after packaging the Spring Boot project and running it, it fails with a "cannot find symbol" error on `Text`, inside the Java file that Sqoop generates. Why is that?
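
(For context, driving Sqoop 1 from Java typically looks something like the minimal sketch below; the connection details are hypothetical. The `Text` symbol presumably refers to `org.apache.hadoop.io.Text`, which Sqoop's generated record classes reference.)

```java
import org.apache.sqoop.Sqoop;

// Minimal, hypothetical sketch of invoking Sqoop 1 from Java.
// Sqoop code-generates a record class for the table; that generated
// source references Hadoop types such as org.apache.hadoop.io.Text,
// which is where a "cannot find symbol: Text" can surface.
public class SqoopImportSketch {
    public static void main(String[] args) {
        String[] sqoopArgs = {
            "import",
            "--connect", "jdbc:mysql://localhost:3306/test", // hypothetical DB
            "--username", "root",
            "--password", "root",
            "--table", "persons",
            "-m", "1"
        };
        // runTool parses the tool name ("import") plus its arguments
        // and returns the exit code of the Sqoop run.
        int exitCode = Sqoop.runTool(sqoopArgs);
        System.exit(exitCode);
    }
}
```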

2 answers

The problem description is too sparse. Please post your code and the actual log output.

This error is reported by the Maven compiler plugin at compile time, so the problem must lie somewhere in the compilation step.

That means it's time to go over the Maven build configuration carefully:

1. Check that the compiler version in the configuration matches the Java version installed on the machine. The failing class may also reference code from another Maven module; if so, check whether that module's compiler version and encoding settings are consistent too. A mismatch isn't always fatal (sometimes it causes no problem at all), but it's a point worth checking.

For example, a configuration like this:


<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.3</version>
    <configuration>
        <source>1.7</source>
        <target>1.7</target>
        <encoding>UTF-8</encoding>
    </configuration>
</plugin>

2. If the failing class references code from another Maven module, build and package that module first (for example, run `mvn clean install` in it) before packaging this one.

3. It could also be a problem with the compiler plugin version itself. With a configuration like the one above, try lowering or raising the plugin version and compiling again.

4. Also check whether the compiler plugin itself pulls in other components, as in the following case:


<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <annotationProcessorPaths>
            <path>
                <groupId>org.mapstruct</groupId>
                <artifactId>mapstruct-processor</artifactId>
                <version>${mapstruct.version}</version>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>

Here the compiler plugin also references the mapstruct processor, so check whether the version of that referenced dependency is problematic; try switching to a different version.

A problem I ran into recently was exactly this. My project has a DTO class, and an impl class calls the setter of one of that DTO's properties. Compiling the impl class failed with "cannot find symbol", pointing at that setter call. I tried many things without success, and finally discovered it was a mapstruct bug: if the DTO's getter and setter methods are not declared in the same order as its fields, compilation fails. For example, if the fields are declared in the order name, then age, then hobby, the getters and setters must follow the same order: name's getter and setter first, then age's, and so on. One of my getter/setter pairs was out of order, so the build broke. A really nasty trap.
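
To make the ordering concrete, here is a minimal sketch of the DTO layout described above (the class and field names are hypothetical):

```java
// Hypothetical DTO illustrating the ordering described above: the
// getter/setter pairs appear in the same order as the fields
// (name, age, hobby). Per the answer, moving one pair out of this
// order could trigger the mapstruct compilation failure.
public class PersonDTO {
    private String name;
    private int age;
    private String hobby;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    public String getHobby() { return hobby; }
    public void setHobby(String hobby) { this.hobby = hobby; }
}
```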

So when using these open-source plugins, stick to stable releases whenever you can; otherwise things get very painful.

5. The quick-and-dirty option: run "Maven Update Project" (the Eclipse action). This fixes the problem in most situations.
