Help needed: errors when running hadoop commands ~~~ thanks

[screenshot of the error (image not available)]

1 answer

Try `hadoop fs -ls ./` and see whether the same problem occurs.

u012056216
Reply to Superziy: it still errors out. Is HDFS the cause?
replied over 4 years ago
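
A hedged first check, since the screenshot did not survive: confirm the HDFS daemons are actually up before debugging the command itself, and try an absolute path (a relative `./` resolves against a user home directory that may not exist yet):

```
jps                      # NameNode and DataNode should appear in the list
hdfs dfsadmin -report    # basic cluster health and datanode count
hadoop fs -ls /          # list the root instead of the (possibly missing) home dir
```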
Other related questions
萌妹子 — Help: ZKFC error while setting up a Hadoop HA cluster

After running `hdfs zkfc -formatZK`:

```
WARNING: Before proceeding, ensure that all HDFS services and failover controllers are stopped!
===============================================
Proceed formatting /hadoop-ha/mycluster? (Y or N) 16/02/26 01:18:56 INFO ha.ActiveStandbyElector: Session connected.
y
16/02/26 01:19:12 INFO ha.ActiveStandbyElector: Recursively deleting /hadoop-ha/mycluster from ZK...
16/02/26 01:19:12 INFO ha.ActiveStandbyElector: Successfully deleted /hadoop-ha/mycluster from ZK.
16/02/26 01:19:12 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
16/02/26 01:19:12 INFO zookeeper.ClientCnxn: EventThread shut down
16/02/26 01:19:12 INFO zookeeper.ZooKeeper: Session: 0x153196bc9790000 closed
```

ZKFC still did not come up afterwards, so I tried:

```
[root@h1 ~]# hadoop-daemon.sh start DFSZKFailoverController
.........
Error: Could not find or load main class DFSZKFailoverController
```

It still fails!
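
A hedged pointer: `hadoop-daemon.sh` expects the short daemon name rather than the Java class name, so passing `DFSZKFailoverController` makes the launcher look for a literal main class of that name, which matches the "Could not find or load main class" error. Under that assumption the start command would be:

```
# start the ZK failover controller by its daemon name, not its class name
hadoop-daemon.sh start zkfc
```

If zkfc then still exits, it is worth confirming that `dfs.ha.automatic-failover.enabled` is `true` in hdfs-site.xml and reading the zkfc log for the underlying failure.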

HBase startup error, help needed

When the HBase cluster is started over ssh, it throws the error below:

```
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
    at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2706)
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2721)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2704)
    ... 5 more
Caused by: java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2799)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
    at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1003)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:579)
```

Starting via a custom script or `bin/start-hbase.sh` throws this error, but running `$ bin/hbase-daemon.sh start master` and `$ bin/hbase-daemon.sh start regionserver` separately on each machine does not. Please help!!!!
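
"No FileSystem for scheme: hdfs" usually means the hadoop-hdfs jar, or its ServiceLoader registration, is missing from the classpath the ssh-launched JVM sees; non-interactive ssh sessions do not source the login environment, which would explain why the per-machine `hbase-daemon.sh` starts are fine. A hedged workaround is to pin the implementation explicitly in the core-site.xml that HBase reads:

```
<!-- hedged workaround: declare the hdfs:// implementation explicitly so it is
     found even when the ServiceLoader registration is not on the classpath -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
```

Alternatively, exporting the Hadoop jars via HBASE_CLASSPATH in conf/hbase-env.sh (which the ssh launcher does source) may address the root cause rather than the symptom.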

Pseudo-distributed Hadoop on a single machine: SecondaryNameNode fails to start, help needed

Environment: ubuntu-14.04.4-desktop-amd64 installed in VMware Fusion, Hadoop 2.6.5. After setting up the pseudo-distributed configuration in standalone mode, `./sbin/start-dfs.sh` reports no error, but `jps` shows no SecondaryNameNode process. core-site.xml:

```
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/user/local/hadoop/tmp</value>
    <description>Abase for other temporary directories</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

hdfs-site.xml:

```
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/data</value>
  </property>
</configuration>
```

Neither restarting start-dfs.sh nor reformatting the namenode makes any difference. Any insight would be much appreciated.
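
Two hedged observations: core-site.xml above says `file:/user/local/hadoop/tmp` while hdfs-site.xml says `/usr/local/...`; if that `/user` vs `/usr` mismatch is really in the files and not a transcription slip, it alone can break the daemon. In any case the SecondaryNameNode writes its own log, which should name the actual failure (the path below assumes a default install):

```
# start-dfs.sh stays silent when an individual daemon dies right after forking;
# the SecondaryNameNode log names the real error
tail -n 50 $HADOOP_HOME/logs/hadoop-*-secondarynamenode-*.log
```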

Java connection to HBase with Kerberos fails after 24 hours

Java connects to HBase with Kerberos and the login expires after about 24 hours. Guidance appreciated.

```
try {
    String krbStr = Thread.currentThread().getContextClassLoader().getResource("krb5.ini").getFile();
    String keyStr = Thread.currentThread().getContextClassLoader().getResource(keytab).getFile();
    System.setProperty("java.security.krb5.conf", krbStr);
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation.loginUserFromKeytab(principal, keyStr);
} catch (IOException e) {
    log.error(e);
}
try {
    HBaseAdmin admin = new HBaseAdmin(conf);
    if (!admin.tableExists(tableName)) {
        HTableDescriptor tableDescripter = new HTableDescriptor(tableName.getBytes());
        tableDescripter.addFamily(new HColumnDescriptor("data"));
        admin.createTable(tableDescripter);
    }
} catch (Exception e) {
    log.error(e);
}
```

The code above works fine when first run, but after roughly 24 hours the `admin.tableExists(tableName)` call starts failing with:

```
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Sat Jul 13 14:30:29 CST 2019, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68090: row 'p_rsdisk,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=xx-xx-xx-xx-xx.indata.com,16020,1560266314048, seqNum=0
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:223)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:811)
    at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
    at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
    at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:303)
    at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:313)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.catalina.filters.CorsFilter.handleNonCORS(CorsFilter.java:436)
    at org.apache.catalina.filters.CorsFilter.doFilter(CorsFilter.java:177)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:956)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:423)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1079)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68090: row 'p_rsdisk,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=xx-xx-xx-xx-xx.indata.com,16020,1560266314048, seqNum=0
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
Caused by: java.io.IOException: Could not set up IO Streams to xx-xx-xx-xx-xx.indata.com/xx.xx.xx.xx:16020
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:777)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:885)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:854)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1180)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
    at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 4 more
Caused by: java.lang.RuntimeException: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:677)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:635)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:743)
    ... 17 more
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
    at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:609)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:154)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:735)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:732)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:732)
    ... 17 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
    at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
    at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
    at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
    at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
    ... 26 more
```

What is the cause? Has the ticket expired?
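
A hedged reading of the trace: the innermost `GSSException ... Failed to find any Kerberos tgt` after roughly 24 hours is the classic signature of the TGT reaching its ticket lifetime with nothing renewing it. The UGI API can re-login from the keytab on demand; the check below is a no-op while the ticket is still fresh, so it is cheap to call before every HBase operation. A minimal sketch, assuming the process logged in via `loginUserFromKeytab` as in the code above:

```
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public final class KerberosRelogin {
    // Call before each HBase request: re-logs in from the keytab only when the
    // TGT is near or past expiry, otherwise does nothing.
    public static void ensureFreshTicket() throws IOException {
        UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
    }
}
```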

How to resolve a conflict between Spring Boot and Spring Data Hadoop?

My project is a Spring Boot application. As soon as I add a spring-data-hadoop dependency, the project no longer starts: there is no exception, the server just stops, and no error log is written either. Help appreciated.
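
A hedged guess, given that nothing is logged: the Hadoop client jars pulled in by spring-data-hadoop carry an old `javax.servlet` servlet-api and an slf4j-log4j12 binding, both of which are known to clash with Spring Boot's embedded Tomcat and Logback and can kill startup silently. The exclusions below are a sketch; the version is an assumption, match it to your pom:

```
<!-- hedged sketch: strip the transitive jars that commonly conflict with Spring Boot -->
<dependency>
  <groupId>org.springframework.data</groupId>
  <artifactId>spring-data-hadoop</artifactId>
  <version>2.5.0.RELEASE</version>
  <exclusions>
    <exclusion>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Running `mvn dependency:tree` and comparing against a working Boot project helps narrow down which transitive jar is the offender.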

Help: Flink standalone cluster in HA mode fails to start

##### HDFS, ZooKeeper and Flink clusters were deployed following a tutorial.
##### HDFS and ZooKeeper work fine, and flink standalone starts normally. While setting up the HA cluster, startup reported no error, but `jps` shows no process, and the log contains the following. (Start order: zkServer.sh start ==> start-dfs.sh, start-yarn.sh ==> bin/start-cluster.sh)
##### Cluster environment: CentOS 7.3, Hadoop-2.8.5, Java 1.8, Scala-2.12, flink-1.9.0-2.12, ZooKeeper 3.4.14
```
2019-09-05 21:38:02,658 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --------------------------------------------------------------------------------
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneSessionClusterEntrypoint (Version: 1.9.0, Rev:9c32ed9, Date:19.08.2019 @ 16:16:55 UTC)
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - OS current user: root
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Current Hadoop/Kerberos user: <no hadoop dependency found>
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.221-b11
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Maximum heap size: 989 MiBytes
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JAVA_HOME: /usr/java/jdk1.8.0_221-amd64
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - No Hadoop Dependency available
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM Options:
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     -Xms1024m
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     -Xmx1024m
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     -Dlog.file=/opt/software/flink/log/flink-root-standalonesession-14-master.log
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     -Dlog4j.configuration=file:/opt/software/flink/conf/log4j.properties
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     -Dlogback.configurationFile=file:/opt/software/flink/conf/logback.xml
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Program Arguments:
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     --configDir
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     /opt/software/flink/conf
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     --executionMode
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     cluster
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     --host
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     master
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     --webui-port
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     8081
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Classpath: /opt/software/flink/lib/flink-table_2.12-1.9.0.jar:/opt/software/flink/lib/flink-table-blink_2.12-1.9.0.jar:/opt/software/flink/lib/log4j-1.2.17.jar:/opt/software/flink/lib/slf4j-log4j12-1.7.15.jar:/opt/software/flink/lib/flink-dist_2.12-1.9.0.jar::/usr/hadoop/hadoop/etc/hadoop:
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --------------------------------------------------------------------------------
2019-09-05 21:38:02,663 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Registered UNIX signal handlers for [TERM, HUP, INT]
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: env.java.home, /usr/java/jdk1.8.0_221-amd64
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, master
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.size, 1024m
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.size, 1024m
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability, zookeeper
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.storageDir, hdfs:///flink/ha/
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.zookeeper.quorum, master:2181,slave02:2181,slave03:2181
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.zookeeper.path.root, /flink
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.cluster-id, /cluster_one
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.zookeeper.client.acl, open
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.execution.failover-strategy, region
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.port, 8081
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.address, master,slave03
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.bind-port, 8080-8090
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.bind-address, master,slave03
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: web.submit.enable, false
2019-09-05 21:38:02,842 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneSessionClusterEntrypoint.
2019-09-05 21:38:02,842 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Install default filesystem.
2019-09-05 21:38:02,877 INFO  org.apache.flink.core.fs.FileSystem - Hadoop is not in the classpath/dependencies. The extended set of supported File Systems via Hadoop is not available.
2019-09-05 21:38:02,903 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Install security context.
2019-09-05 21:38:02,914 INFO  org.apache.flink.runtime.security.modules.HadoopModuleFactory - Cannot create Hadoop Security Module because Hadoop cannot be found in the Classpath.
2019-09-05 21:38:02,926 INFO  org.apache.flink.runtime.security.SecurityUtils - Cannot install HadoopSecurityContext because Hadoop cannot be found in the Classpath.
2019-09-05 21:38:02,927 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Initializing cluster services.
2019-09-05 21:38:03,430 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Trying to start actor system at master:0
2019-09-05 21:38:04,268 INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2019-09-05 21:38:04,314 INFO  akka.remote.Remoting - Starting remoting
2019-09-05 21:38:04,566 INFO  akka.remote.Remoting - Remoting started; listening on addresses :[akka.tcp://flink@master:36882]
2019-09-05 21:38:04,674 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Actor system started at akka.tcp://flink@master:36882
2019-09-05 21:38:04,701 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Shutting StandaloneSessionClusterEntrypoint down with application status FAILED. Diagnostics
java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:119)
    at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:92)
    at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:120)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:292)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:257)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:202)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:164)
    at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:163)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501)
    at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:447)
    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:359)
    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:116)
    ... 10 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
    at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:443)
    ... 13 more
.
2019-09-05 21:38:04,708 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService - Stopping Akka RPC service.
2019-09-05 21:38:04,738 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
2019-09-05 21:38:04,738 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
2019-09-05 21:38:04,765 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
2019-09-05 21:38:04,815 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService - Stopped Akka RPC service.
2019-09-05 21:38:04,816 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:182)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501)
    at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65)
Caused by: java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:119)
    at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:92)
    at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:120)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:292)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:257)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:202)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:164)
    at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:163)
    ... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:447)
    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:359)
    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:116)
    ... 10 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
    at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:443)
    ... 13 more
```
##### (Opening flink-root-standalonesession-14-master.log in vim shows the same entries again; the duplicate is omitted here.)
##### The environment variables are configured as follows (/etc/profile):
```
export JAVA_HOME=/usr/java/jdk1.8.0_221-amd64
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:/lib
export PATH=$PATH:$JAVA_HOME/bin:.
export ZOOKEEPER_HOME=/opt/software/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin:.
export HADOOP_HOME=/usr/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:.
export PYTHON_HOME=/usr/local/python3
export PATH=$PATH:$PYTHON_HOME/bin:.
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:/usr/local/scala/bin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_HOME=$HADOOP_HOME
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:/usr/local/lib:/usr/hadoop/hadoop/lib/native
```
##### flink-conf.yaml HA configuration:
```
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: master:2181,slave02:2181,slave03:2181
high-availability.zookeeper.path.root: /flink
high-availability.cluster-id: /cluster_one
```
##### Because the log contains the following, I suspect an environment-variable or Hadoop dependency-path problem:
```
2019-09-06 11:44:39,820 INFO  org.apache.flink.core.fs.FileSystem - Hadoop is not in the classpath/dependencies. The extended set of supported File Systems via Hadoop is not available.
……
2019-09-06 11:44:41,943 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Shutting StandaloneSessionClusterEntrypoint down with application status FAILED. Diagnostics java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
……
2019-09-06 11:44:42,075 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
```
##### I then added hdfs settings to flink-conf.yaml:
```
fs.hdfs.hadoopconf: /usr/hadoop/hadoop/etc/hadoop
fs.hdfs.hdfsdefault: /usr/hadoop/hadoop/etc/hadoop/hdfs-default.xml
fs.hdfs.hdfssite: /usr/hadoop/hadoop/etc/hadoop/hdfs-site.xml
```
##### The flink cluster still fails to start, and the log content is unchanged.
##### Asking the community: if anyone has run into this, is there a known fix?
##### Many thanks.
##### PS: the flink-on-yarn deployment mode has not been tried yet.
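
A hedged direction: starting with 1.9, Flink no longer bundles Hadoop, and the `fs.hdfs.hadoopconf`-style keys only point at configuration; they do not put the Hadoop classes themselves on the classpath, so "Hadoop is not in the classpath/dependencies" persists. The /etc/profile above sets HADOOP_HOME but never HADOOP_CLASSPATH. The two documented remedies are sketched below:

```
# option 1: expose the existing Hadoop install to Flink on every node
# (e.g. append to /etc/profile), then restart the cluster
export HADOOP_CLASSPATH=$(hadoop classpath)
bin/start-cluster.sh

# option 2: drop a pre-bundled shaded Hadoop jar into Flink's lib/ directory;
# the jar name below is an example, match it to the Hadoop version in use
# cp flink-shaded-hadoop-2-uber-2.8.3-10.0.jar /opt/software/flink/lib/
```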

Sqoop job fails to start, help needed

```
sqoop:000> start job --name job_1
Submission details
Job Name: job_1
Server URL: http://localhost:12000/sqoop/
Created by: hadoop
Creation date: 2020-07-06 17:25:55 CST
Lastly updated by: hadoop
External ID: job_1594023705104_0003
        http://server1:8088/proxy/application_1594023705104_0003/
2020-07-06 17:25:55 CST: BOOTING  - Progress is not available
sqoop:000> status job --name job_1
Submission details
Job Name: job_1
Server URL: http://localhost:12000/sqoop/
Created by: hadoop
Creation date: 2020-07-06 17:25:55 CST
Lastly updated by: hadoop
External ID: job_1594023705104_0003
        http://server1:8088/proxy/application_1594023705104_0003/
2020-07-06 17:27:36 CST: FAILED
Exception: Job Failed with status:3
```

Log:

```
2020-07-06 17:25:39,129 INFO [org.apache.sqoop.handler.JobRequestHandler.handleEvent(JobRequestHandler.java:96)] Got job request
2020-07-06 17:25:39,133 INFO [org.apache.sqoop.audit.FileAuditLogger.logAuditEvent(FileAuditLogger.java:61)] user=hadoop ip=127.0.0.1 op=get obj=job objId=all
2020-07-06 17:25:39,145 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:25:39,159 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:25:43,692 INFO [org.apache.sqoop.audit.FileAuditLogger.logAuditEvent(FileAuditLogger.java:61)] user=hadoop ip=127.0.0.1 op=get obj=link objId=all
2020-07-06 17:25:43,714 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:25:43,721 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:25:43,723 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:25:55,673 INFO [org.apache.sqoop.handler.JobRequestHandler.handleEvent(JobRequestHandler.java:96)] Got job request
2020-07-06 17:25:55,693 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:25:55,725 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:25:55,726 INFO [org.apache.sqoop.audit.FileAuditLogger.logAuditEvent(FileAuditLogger.java:61)] user=hadoop ip=127.0.0.1 op=submit obj=job objId=job_1
2020-07-06 17:25:55,734 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:25:55,739 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:25:55,741 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:25:55,831 INFO [org.apache.sqoop.connector.jdbc.GenericJdbcFromInitializer.configurePartitionProperties(GenericJdbcFromInitializer.java:137)] Found primary key columns [id]
2020-07-06 17:25:55,831 INFO [org.apache.sqoop.connector.jdbc.GenericJdbcFromInitializer.configurePartitionProperties(GenericJdbcFromInitializer.java:152)] Using partition column: id
2020-07-06 17:25:55,833 INFO [org.apache.sqoop.connector.jdbc.GenericJdbcFromInitializer.configurePartitionProperties(GenericJdbcFromInitializer.java:238)] Using min/max query: SELECT MIN( id ), MAX( id ) FROM test1 . student
2020-07-06 17:25:55,836 INFO [org.apache.sqoop.connector.jdbc.GenericJdbcFromInitializer.configurePartitionProperties(GenericJdbcFromInitializer.java:291)] Boundaries for the job: min=1001, max=1010, columnType=4
2020-07-06 17:25:55,838 INFO [org.apache.sqoop.connector.jdbc.GenericJdbcFromInitializer.configureTableProperties(GenericJdbcFromInitializer.java:353)] Using dataSql: SELECT * FROM test1 . student WHERE ${CONDITIONS}
2020-07-06 17:25:55,838 INFO [org.apache.sqoop.connector.jdbc.GenericJdbcFromInitializer.configureTableProperties(GenericJdbcFromInitializer.java:354)] Field names: id , name , classid
2020-07-06 17:25:55,854 INFO [org.apache.sqoop.connector.hdfs.HdfsUtils.createConfiguration(HdfsUtils.java:66)] Found Hadoop configuration file httpfs-site.xml
2020-07-06 17:25:55,857 INFO [org.apache.sqoop.connector.hdfs.HdfsUtils.createConfiguration(HdfsUtils.java:66)] Found Hadoop configuration file kms-site.xml
2020-07-06 17:25:55,857 INFO [org.apache.sqoop.connector.hdfs.HdfsUtils.createConfiguration(HdfsUtils.java:66)] Found Hadoop configuration file hdfs-site.xml
2020-07-06 17:25:55,857 INFO [org.apache.sqoop.connector.hdfs.HdfsUtils.createConfiguration(HdfsUtils.java:66)] Found Hadoop configuration file yarn-site.xml
2020-07-06 17:25:55,857 INFO [org.apache.sqoop.connector.hdfs.HdfsUtils.createConfiguration(HdfsUtils.java:66)] Found Hadoop configuration file core-site.xml
2020-07-06 17:25:55,858 INFO [org.apache.sqoop.connector.hdfs.HdfsUtils.createConfiguration(HdfsUtils.java:66)] Found Hadoop configuration file mapred-site.xml
2020-07-06 17:25:55,914 INFO [org.apache.hadoop.conf.Configuration.logDeprecation(Configuration.java:1395)] fs.default.name is deprecated. Instead, use fs.defaultFS
2020-07-06 17:25:55,968 INFO [org.apache.sqoop.connector.hadoop.security.SecurityUtils.generateDelegationTokens(SecurityUtils.java:75)] Running on unsecured cluster, skipping delegation token generation.
2020-07-06 17:25:55,971 INFO [org.apache.sqoop.connector.hdfs.HdfsToInitializer.initialize(HdfsToInitializer.java:90)] Using working directory: /sqoop_test/.7bf8856a-24ff-4ab8-a0f0-4962f47a9b86
2020-07-06 17:25:56,197 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:25:56,351 INFO [org.apache.hadoop.yarn.client.RMProxy.newProxyInstance(RMProxy.java:133)] Connecting to ResourceManager at server1/172.16.160.128:8032
2020-07-06 17:25:56,381 WARN [org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:149)] Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2020-07-06 17:25:56,396 INFO [org.apache.hadoop.mapreduce.JobResourceUploader.disableErasureCodingForPath(JobResourceUploader.java:906)] Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/hadoop/.staging/job_1594023705104_0003
2020-07-06 17:25:56,479 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:25:56,945 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:25:56,982 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:25:57,432 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:25:57,905 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:25:58,377 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:25:58,418 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:26:00,565 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:26:00,902 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:26:01,368 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:26:01,397 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:26:01,830 WARN [org.apache.hadoop.mapreduce.JobResourceUploader.uploadJobJar(JobResourceUploader.java:482)] No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2020-07-06 17:26:01,861 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:26:01,879 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:26:02,302 INFO [org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:202)] number of splits:9
2020-07-06 17:26:02,359 INFO [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:239)] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-07-06 17:26:02,843 INFO [org.apache.hadoop.mapreduce.JobSubmitter.printTokens(JobSubmitter.java:298)] Submitting tokens for job: job_1594023705104_0003
2020-07-06 17:26:02,843 INFO [org.apache.hadoop.mapreduce.JobSubmitter.printTokens(JobSubmitter.java:299)] Executing with tokens: []
2020-07-06 17:26:02,855 INFO [org.apache.hadoop.mapred.YARNRunner.setupLocalResources(YARNRunner.java:426)] Job jar is not present. Not adding any jar to the list of resources.
2020-07-06 17:26:03,220 INFO [org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:329)] Submitted application application_1594023705104_0003
2020-07-06 17:26:03,243 INFO [org.apache.hadoop.mapreduce.Job.submit(Job.java:1574)] The url to track the job: http://server1:8088/proxy/application_1594023705104_0003/
2020-07-06 17:26:03,888 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:27:35,957 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:27:36,045 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:30:14,020 INFO [org.apache.sqoop.handler.JobRequestHandler.handleEvent(JobRequestHandler.java:96)] Got job request
2020-07-06 17:30:14,062 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:30:14,064 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:30:14,064 INFO [org.apache.sqoop.audit.FileAuditLogger.logAuditEvent(FileAuditLogger.java:61)] user=hadoop ip=127.0.0.1 op=status obj=job objId=job_1
2020-07-06 17:30:14,080 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
2020-07-06 17:32:36,052 INFO [org.apache.sqoop.repository.JdbcRepositoryTransaction.close(JdbcRepositoryTransaction.java:111)] Attempting transaction commit
```

There are no errors, only a few WARNs; I have searched around without finding any leads. Help appreciated.
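
A hedged next step: the Sqoop shell only surfaces "Job Failed with status:3"; the real mapper error lives in the YARN application referenced by the External ID above. Pulling its container logs usually names the actual cause (a missing JDBC driver on the NodeManagers and type-mapping problems are common):

```
# fetch the container logs of the failed run (application id from the output above)
yarn logs -applicationId application_1594023705104_0003 | less
```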

Help: INSERT INTO in Hive does not insert data while every other statement works (urgent)

Quick background: a recent project requires Hive, and my data currently sits in a map. I connect to Hive via JDBC; the connection is fine and things like creating tables work normally, but as soon as I execute an INSERT INTO statement it hangs, just spinning without any error. When I force-kill it, a temporary table appears containing the rows I wanted to insert, but nothing lands in the table I specified, and after closing and reopening the connection the temporary table is gone. I really cannot get Hive to behave. My problem resembles what this poster describes: http://www.aboutyun.com/thread-20833-1-1.html except that in my case every INSERT also hangs and never proceeds. IDEA's datasource behaves the same way: I have to stop it manually, and then the temporary table shows up. The Hive address was provided by my company. What should I do? Many thanks to everyone studying big data; this has been tormenting me for a week. ![screenshot](https://img-ask.csdn.net/upload/201803/12/1520833045_237865.jpg) ![screenshot](https://img-ask.csdn.net/upload/201803/12/1520833054_408464.jpg)
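
A hedged check: Hive rewrites `INSERT INTO ... VALUES` into a MapReduce/Tez job that stages the rows in a temporary table first, which matches the temporary table described above; if YARN has no free resources or the queue is blocked, the statement spins forever without an error. Worth inspecting before anything else:

```
# is the INSERT's underlying job stuck waiting for cluster resources?
yarn application -list -appStates ACCEPTED,RUNNING
```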

Sqoop1 via the Java API: Can't get Kerberos principal for use as renewer

Full code below. **Using the sqoop1 Java API with Kerberos throws "Can't get Master Kerberos principal for use as renewer".**

```
public class SqoopTest {
    public static void main(String[] args) throws Exception {
        // =================================================================
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://101.30.188.246:9000/"); // HDFS service address
        String keytabFile = "/home/hcj/tab/hdfs.keytab";
        String principle = "hdfs@MSO.COM";
        String krbConf = "/home/hcj/krb5.conf";
        System.setProperty("java.security.krb5.conf", krbConf);
        conf.set("hadoop.security.authentication", "Kerberos");
        //conf.setBoolean("fs.hdfs.impl.disable.cache", true);
        conf.set("keytab.file", keytabFile);
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(principle, keytabFile);
        // =================================================================
        String[] arg = new String[] {
            // database connection arguments
            /*
             * sqoop export --connect jdbc:mysql://127.0.0.1:3306/test --username jamie --table
             * persons --export-dir /user/hive/warehouse/dw_api_server.db/persons2/
             * --input-fields-terminated-by '\t' --input-lines-terminated-by '\n'
             */
            "--connect", "jdbc:mysql://114.115.156.37:3306/test",
            "--username", "root",
            "--password", "root",
            "--table", "persons",
            "--m", "1",
            "--export-dir", "hdfs://101.30.188.246:9000/user/hive/warehouse/dw_api_server.db/persons/",
            "--input-fields-terminated-by", "\t"
            //"-columns", "id,city"
        };
        String[] expandArguments = OptionsFileUtil.expandArguments(arg);
        SqoopTool tool = SqoopTool.getTool("export");
        Configuration loadPlugins = SqoopTool.loadPlugins(conf);
        Sqoop sqoop = new Sqoop((com.cloudera.sqoop.tool.SqoopTool) tool, loadPlugins);
        int res = Sqoop.runSqoop(sqoop, expandArguments);
        if (res == 0) System.out.println("success");
    }
}
```

The error:

```
java.io.IOException: Can't get Master Kerberos principal for use as renewer
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:133)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:166)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at org.apache.sqoop.mapreduce.ExportJobBase.doSubmitJob(ExportJobBase.java:322)
    at org.apache.sqoop.mapreduce.ExportJobBase.runJob(ExportJobBase.java:299)
    at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:440)
    at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
    at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
    at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at com.mshuoke.datagw.impl.sqoop.SqoopTest.main(SqoopTest.java:58)
```

Help appreciated.
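
"Can't get Master Kerberos principal for use as renewer" is typically thrown when the client `Configuration` never loaded the cluster's yarn-site.xml/mapred-site.xml, so `yarn.resourcemanager.principal` is unknown at token-renewal time. A hedged sketch of the fix; the paths and the principal value are assumptions, adjust them to the cluster:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public final class RenewerFix {
    // Give the job client the cluster's YARN/MapReduce settings so the token
    // renewer principal can be resolved before job submission.
    public static Configuration withYarnSite(Configuration conf) {
        conf.addResource(new Path("/etc/hadoop/conf/yarn-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
        // or set the principal directly (value is an assumption, check yarn-site.xml):
        conf.set("yarn.resourcemanager.principal", "yarn/_HOST@MSO.COM");
        return conf;
    }
}
```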

Hive with Oracle as the metastore reports ORA-01754

With Hive's remote-mode metastore configured on Oracle, startup is normal, but creating a table fails:

```
hive> create table dht_tab(name1 int,name45 varchar(50))row format delimited fields terminated by '\t';
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es) : ORA-01754: a table may contain only one column of type LONG
java.sql.SQLSyntaxErrorException: ORA-01754: a table may contain only one column of type LONG
```

Following advice found online, I modified the package.jdo file inside "hive/lib/hive-metastore-1.2.1.jar", changing the LONGVARCHAR type to CLOB, like so:

```
cd $HIVE_HOME/lib
mkdir temp
cp hive-metastore-1.2.1.jar temp
cd temp
jar -xvf hive-metastore-1.2.1.jar
sed -i -e 's/LONGVARCHAR/CLOB/g' package.jdo
jar cfm hive-metastore-1.2.1.jar META-INF/MANIFEST.MF *
cp hive-metastore-1.2.1.jar $HIVE_HOME/lib
```

Then I reinitialized hive with `hive --service metastore`, but creating a table in hive still fails with the same error. Help????

One more problem: `hive --service hiveserver2` shows no response. ![screenshot](https://img-ask.csdn.net/upload/201604/19/1461049192_64211.png) Thanks!
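
Two hedged suspects for the patch not taking effect: first, the repack copies the jar back with `cp hive-metastore-1.2.1.jar $HIVE_HOME/lib`, so it is worth verifying the copy really overwrote the original and that the metastore process was restarted afterwards, since a running JVM keeps using the old jar. Second, any metastore tables Oracle already created with LONG columns stay LONG; on a fresh or disposable metastore the schema can simply be rebuilt:

```
# rebuild the metastore schema after patching package.jdo
# WARNING: -initSchema recreates the schema; assumes a fresh/disposable metastore
schematool -dbType oracle -initSchema
```

As for `hive --service hiveserver2` showing no response: it runs in the foreground and is normally silent once up, so checking whether port 10000 is listening says more than the console does.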
