Help needed: Flink standalone-mode HA deployment fails to start.
I deployed HDFS, ZooKeeper and a Flink cluster following a tutorial.
HDFS and ZooKeeper work fine, and Flink in plain standalone mode starts normally. When setting up the HA cluster, startup reports no errors, but jps shows no Flink processes, and the log contains the output below. (Start order: zkServer.sh start ==> start-dfs.sh, start-yarn.sh ==> bin/start-cluster.sh)
Cluster environment: CentOS 7.3, Hadoop 2.8.5, Java 1.8, Scala 2.12, Flink 1.9.0 (Scala 2.12 build), ZooKeeper 3.4.14
2019-09-05 21:38:02,658 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - --------------------------------------------------------------------------------
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  Starting StandaloneSessionClusterEntrypoint (Version: 1.9.0, Rev:9c32ed9, Date:19.08.2019 @ 16:16:55 UTC)
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  OS current user: root
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  Current Hadoop/Kerberos user: <no hadoop dependency found>
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.221-b11
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  Maximum heap size: 989 MiBytes
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  JAVA_HOME: /usr/java/jdk1.8.0_221-amd64
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  No Hadoop Dependency available
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  JVM Options:
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     -Xms1024m
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     -Xmx1024m
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     -Dlog.file=/opt/software/flink/log/flink-root-standalonesession-14-master.log
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     -Dlog4j.configuration=file:/opt/software/flink/conf/log4j.properties
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     -Dlogback.configurationFile=file:/opt/software/flink/conf/logback.xml
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  Program Arguments:
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     --configDir
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     /opt/software/flink/conf
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     --executionMode
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     cluster
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     --host
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     master
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     --webui-port
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     8081
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  Classpath: /opt/software/flink/lib/flink-table_2.12-1.9.0.jar:/opt/software/flink/lib/flink-table-blink_2.12-1.9.0.jar:/opt/software/flink/lib/log4j-1.2.17.jar:/opt/software/flink/lib/slf4j-log4j12-1.7.15.jar:/opt/software/flink/lib/flink-dist_2.12-1.9.0.jar::/usr/hadoop/hadoop/etc/hadoop:
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - --------------------------------------------------------------------------------
2019-09-05 21:38:02,663 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Registered UNIX signal handlers for [TERM, HUP, INT]
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: env.java.home, /usr/java/jdk1.8.0_221-amd64
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: jobmanager.rpc.address, master
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: jobmanager.rpc.port, 6123
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: jobmanager.heap.size, 1024m
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: taskmanager.heap.size, 1024m
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: parallelism.default, 1
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability, zookeeper
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability.storageDir, hdfs:///flink/ha/
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability.zookeeper.quorum, master:2181,slave02:2181,slave03:2181
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability.zookeeper.path.root, /flink
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability.cluster-id, /cluster_one
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability.zookeeper.client.acl, open
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: jobmanager.execution.failover-strategy, region
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: rest.port, 8081
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: rest.address, master,slave03
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: rest.bind-port, 8080-8090
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: rest.bind-address, master,slave03
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: web.submit.enable, false
2019-09-05 21:38:02,842 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Starting StandaloneSessionClusterEntrypoint.
2019-09-05 21:38:02,842 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Install default filesystem.
2019-09-05 21:38:02,877 INFO  org.apache.flink.core.fs.FileSystem                           - Hadoop is not in the classpath/dependencies. The extended set of supported File Systems via Hadoop is not available.
2019-09-05 21:38:02,903 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Install security context.
2019-09-05 21:38:02,914 INFO  org.apache.flink.runtime.security.modules.HadoopModuleFactory  - Cannot create Hadoop Security Module because Hadoop cannot be found in the Classpath.
2019-09-05 21:38:02,926 INFO  org.apache.flink.runtime.security.SecurityUtils               - Cannot install HadoopSecurityContext because Hadoop cannot be found in the Classpath.
2019-09-05 21:38:02,927 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Initializing cluster services.
2019-09-05 21:38:03,430 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils         - Trying to start actor system at master:0
2019-09-05 21:38:04,268 INFO  akka.event.slf4j.Slf4jLogger                                  - Slf4jLogger started
2019-09-05 21:38:04,314 INFO  akka.remote.Remoting                                          - Starting remoting
2019-09-05 21:38:04,566 INFO  akka.remote.Remoting                                          - Remoting started; listening on addresses :[akka.tcp://flink@master:36882]
2019-09-05 21:38:04,674 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils         - Actor system started at akka.tcp://flink@master:36882
2019-09-05 21:38:04,701 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Shutting StandaloneSessionClusterEntrypoint down with application status FAILED. Diagnostics java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
        at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:119)
        at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:92)
        at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:120)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:292)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:257)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:202)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:164)
        at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:163)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501)
        at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
        at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:447)
        at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:359)
        at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
        at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:116)
        ... 10 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
        at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
        at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:443)
        ... 13 more
.
2019-09-05 21:38:04,708 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService              - Stopping Akka RPC service.
2019-09-05 21:38:04,738 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator         - Shutting down remote daemon.
2019-09-05 21:38:04,738 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator         - Remote daemon shut down; proceeding with flushing remote transports.
2019-09-05 21:38:04,765 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator         - Remoting shut down.
2019-09-05 21:38:04,815 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService              - Stopped Akka RPC service.
2019-09-05 21:38:04,816 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:182)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501)
        at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65)
Caused by: java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
        at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:119)
        at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:92)
        at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:120)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:292)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:257)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:202)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:164)
        at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:163)
        ... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
        at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:447)
        at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:359)
        at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
        at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:116)
        ... 10 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
        at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
        at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:443)
        ... 13 more

##### Environment variables already configured (/etc/profile):

export JAVA_HOME=/usr/java/jdk1.8.0_221-amd64
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:/lib
export PATH=$PATH:$JAVA_HOME/bin:.
export ZOOKEEPER_HOME=/opt/software/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin:.
export HADOOP_HOME=/usr/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:.
export PYTHON_HOME=/usr/local/python3
export PATH=$PATH:$PYTHON_HOME/bin:.
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:/usr/local/scala/bin

export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_HOME=$HADOOP_HOME
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:/usr/local/lib:/usr/hadoop/hadoop/lib/native

flink-conf.yaml HA configuration:
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: master:2181,slave02:2181,slave03:2181
high-availability.zookeeper.path.root: /flink
high-availability.cluster-id: /cluster_one
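
Side note: the storageDir above uses the authority-less form hdfs:///flink/ha/, so Flink has to resolve the NameNode from fs.defaultFS in the Hadoop configuration once Hadoop is actually available on its classpath. A fully qualified form would look like the line below (the NameNode host and port are placeholders, not taken from my setup):

high-availability.storageDir: hdfs://master:9000/flink/ha/
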
Because the log contains the following messages, I suspect the problem is with the environment variables or the Hadoop dependency path.
2019-09-06 11:44:39,820 INFO  org.apache.flink.core.fs.FileSystem                           - Hadoop is not in the classpath/dependencies. The extended set of supported File Systems via Hadoop is not available.
……
2019-09-06 11:44:41,943 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Shutting StandaloneSessionClusterEntrypoint down with application status FAILED. Diagnostics java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.

……
2019-09-06 11:44:42,075 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.

Next I added the following HDFS settings to flink-conf.yaml:
fs.hdfs.hadoopconf: /usr/hadoop/hadoop/etc/hadoop
fs.hdfs.hdfsdefault: /usr/hadoop/hadoop/etc/hadoop/hdfs-default.xml
fs.hdfs.hdfssite: /usr/hadoop/hadoop/etc/hadoop/hdfs-site.xml
The Flink cluster still fails to start, and the log output is identical to before.
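
For reference, the classpath printed in the startup banner above contains only the jars under /opt/software/flink/lib plus the Hadoop conf directory, with no Hadoop jars at all. A quick way to double-check this on the JobManager host (paths taken from the log; the expected output, given the classpath and /etc/profile above, is noted in the comments):

ls /opt/software/flink/lib | grep -i hadoop   # prints nothing: no bundled Hadoop jar in lib
echo $HADOOP_CLASSPATH                        # empty: not exported in /etc/profile above
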
I'm turning to the community for help: if anyone has run into this, is there a known fix?
Many thanks.
PS: I have not yet tried the Flink-on-YARN deployment.

2 answers

This is caused by missing Hadoop jars. You can download the latest Flink binary package from the official download page and try it:
https://flink.apache.org/downloads.html. Note that the following steps need HDFS,
so you must get a Flink binary with Hadoop support; otherwise, initializing the standalone
cluster will fail while initializing the HDFS filesystem. Put the download into Flink's lib directory.
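
In other words, Flink 1.9 no longer ships Hadoop in the binary distribution, so you either put a pre-bundled Hadoop jar into lib/ or export HADOOP_CLASSPATH before starting the cluster. A rough sketch of both options (the jar name below is an example of the pre-bundled Hadoop artifact from the downloads page; pick the build matching your Hadoop line and copy it to every node):

# Option 1: drop the pre-bundled Hadoop jar from the downloads page into Flink's lib directory
cp flink-shaded-hadoop-2-uber-2.8.3-7.0.jar /opt/software/flink/lib/

# Option 2: expose the locally installed Hadoop to Flink instead (e.g. in /etc/profile)
export HADOOP_CLASSPATH=$(hadoop classpath)

# restart the standalone HA cluster afterwards
/opt/software/flink/bin/stop-cluster.sh
/opt/software/flink/bin/start-cluster.sh

Either way, the startup banner should then report a Hadoop version instead of "No Hadoop Dependency available".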

其他相关推荐
flink搭建standalone模式集群,jobmanager会自动挂掉,只有一直刷的warn日志
flink搭建standalone模式集群,启动后任务提交跟运行正常,gc情况观察了一下也正常,但是jobmanager到晚上会自动挂掉,而且一直刷的warn日志。 flink版本:1.7.2 三台机器,web界面信息正常。 **问题:jobmanager会挂掉,跟这个日志是否有关呢?我希望集群可以稳定跑下去,目前任务只是对接kafka与redis。** warn日志如下: ``` 09-06 14:00:23,430 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: localhost/127.0.0.1:63408 2019-09-06 14:00:23,431 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink-metrics@localhost:63408] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink-metrics@localhost:63408]] Caused by: [Connection refused: localhost/127.0.0.1:63408] 2019-09-06 14:00:23,431 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: localhost/127.0.0.1:30060 2019-09-06 14:00:23,431 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink-metrics@localhost:30060] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink-metrics@localhost:30060]] Caused by: [Connection refused: localhost/127.0.0.1:30060] ``` 集群启动日志如下: ``` 2019-09-06 13:50:33,581 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -------------------------------------------------------------------------------- 2019-09-06 13:50:33,582 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneSessionClusterEntrypoint (Version: 1.7.2, Rev:ceba8af, Date:11.02.2019 @ 14:17:09 UTC) 2019-09-06 13:50:33,582 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - OS current user: apps 2019-09-06 13:50:33,816 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable 2019-09-06 13:50:33,945 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Current Hadoop/Kerberos user: apps 2019-09-06 13:50:33,945 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.161-b12 2019-09-06 13:50:33,945 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Maximum heap size: 981 MiBytes 2019-09-06 13:50:33,945 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JAVA_HOME: /apps/svr/jdk1.8.0_161 2019-09-06 13:50:33,947 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Hadoop version: 2.6.5 2019-09-06 13:50:33,947 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM Options: 2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Xms1024m 2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Xmx1024m 2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dlog.file=/home/apps/jfy/flink-1.7.2/log/flink-apps-standalonesession-6-arch-dev-rmq.log 2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dlog4j.configuration=file:/home/apps/jfy/flink-1.7.2/conf/log4j.properties 2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dlogback.configurationFile=file:/home/apps/jfy/flink-1.7.2/conf/logback.xml 2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Program Arguments: 2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --configDir 2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - /home/apps/jfy/flink-1.7.2/conf 2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --executionMode 2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - cluster 2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Classpath: /home/apps/jfy/flink-1.7.2/lib/flink-python_2.11-1.7.2.jar:/home/apps/jfy/flink-1.7.2/lib/flink-shaded-hadoop2-uber-1.7.2.jar:/home/apps/jfy/flink-1.7.2/lib/log4j-1.2.17.jar:/home/apps/jfy/flink-1.7.2/lib/slf4j-log4j12-1.7.15.jar:/home/apps/jfy/flink-1.7.2/lib/flink-dist_2.11-1.7.2.jar::: 2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -------------------------------------------------------------------------------- 2019-09-06 13:50:33,949 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Registered UNIX signal handlers for [TERM, HUP, INT] 2019-09-06 13:50:33,959 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, 172.31.50.59 2019-09-06 13:50:33,960 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123 2019-09-06 13:50:33,960 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.size, 1024m 2019-09-06 13:50:33,960 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.size, 1024m 2019-09-06 13:50:33,960 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 1 2019-09-06 13:50:33,960 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1 2019-09-06 13:50:33,960 INFO 
org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.port, 8081 2019-09-06 13:50:33,973 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneSessionClusterEntrypoint. 2019-09-06 13:50:33,973 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Install default filesystem. 2019-09-06 13:50:33,983 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Install security context. 2019-09-06 13:50:34,016 INFO org.apache.flink.runtime.security.modules.HadoopModule - Hadoop user set to apps (auth:SIMPLE) 2019-09-06 13:50:34,030 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Initializing cluster services. 2019-09-06 13:50:34,191 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Trying to start actor system at 172.31.50.59:6123 2019-09-06 13:50:34,520 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started 2019-09-06 13:50:34,571 INFO akka.remote.Remoting - Starting remoting 2019-09-06 13:50:34,726 INFO akka.remote.Remoting - Remoting started; listening on addresses :[akka.tcp://flink@172.31.50.59:6123] 2019-09-06 13:50:34,733 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Actor system started at akka.tcp://flink@172.31.50.59:6123 2019-09-06 13:50:34,747 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'jobmanager.rpc.address' instead of proper key 'rest.address' 2019-09-06 13:50:34,757 INFO org.apache.flink.runtime.blob.BlobServer - Created BLOB server storage directory /tmp/blobStore-c7a49a00-4241-463b-97d6-f01795c08cde 2019-09-06 13:50:34,760 INFO org.apache.flink.runtime.blob.BlobServer - Started BLOB server at 0.0.0.0:22324 - max concurrent requests: 50 - max backlog: 1000 2019-09-06 13:50:34,774 INFO org.apache.flink.runtime.metrics.MetricRegistryImpl - No metrics reporter configured, no metrics will be exposed/reported. 2019-09-06 13:50:34,775 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Trying to start actor system at 172.31.50.59:0 2019-09-06 13:50:34,790 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started 2019-09-06 13:50:34,795 INFO akka.remote.Remoting - Starting remoting 2019-09-06 13:50:34,802 INFO akka.remote.Remoting - Remoting started; listening on addresses :[akka.tcp://flink-metrics@172.31.50.59:44195] 2019-09-06 13:50:34,803 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Actor system started at akka.tcp://flink-metrics@172.31.50.59:44195 2019-09-06 13:50:34,807 INFO org.apache.flink.runtime.dispatcher.FileArchivedExecutionGraphStore - Initializing FileArchivedExecutionGraphStore: Storage directory /tmp/executionGraphStore-be620752-bb92-49c0-9556-f93d802f61c2, expiration time 3600000, maximum cache size 52428800 bytes. 2019-09-06 13:50:34,834 INFO org.apache.flink.runtime.blob.TransientBlobCache - Created BLOB cache storage directory /tmp/blobStore-ac295e58-8bce-4747-80f5-086a3ddf6874 2019-09-06 13:50:34,850 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'jobmanager.rpc.address' instead of proper key 'rest.address' 2019-09-06 13:50:34,851 WARN org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Upload directory /tmp/flink-web-59e5be3d-7736-4a43-ab10-3c5116bfe201/flink-web-upload does not exist, or has been deleted externally. Previously uploaded files are no longer available. 
2019-09-06 13:50:34,852 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Created directory /tmp/flink-web-59e5be3d-7736-4a43-ab10-3c5116bfe201/flink-web-upload for file uploads. 2019-09-06 13:50:34,855 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Starting rest endpoint. 2019-09-06 13:50:35,063 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils - Determined location of main cluster component log file: /home/apps/jfy/flink-1.7.2/log/flink-apps-standalonesession-6-arch-dev-rmq.log 2019-09-06 13:50:35,063 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils - Determined location of main cluster component stdout file: /home/apps/jfy/flink-1.7.2/log/flink-apps-standalonesession-6-arch-dev-rmq.out 2019-09-06 13:50:35,202 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Rest endpoint listening at 172.31.50.59:8081 2019-09-06 13:50:35,202 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - http://172.31.50.59:8081 was granted leadership with leaderSessionID=00000000-0000-0000-0000-000000000000 2019-09-06 13:50:35,202 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Web frontend listening at http://172.31.50.59:8081. 2019-09-06 13:50:35,259 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at akka://flink/user/resourcemanager . 2019-09-06 13:50:35,274 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.dispatcher.StandaloneDispatcher at akka://flink/user/dispatcher . 2019-09-06 13:50:35,288 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - ResourceManager akka.tcp://flink@172.31.50.59:6123/user/resourcemanager was granted leadership with fencing token 00000000000000000000000000000000 2019-09-06 13:50:35,289 INFO org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager - Starting the SlotManager. 2019-09-06 13:50:35,302 INFO org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Dispatcher akka.tcp://flink@172.31.50.59:6123/user/dispatcher was granted leadership with fencing token 00000000-0000-0000-0000-000000000000 2019-09-06 13:50:35,305 INFO org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Recovering all persisted jobs. 2019-09-06 13:50:35,921 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registering TaskManager with ResourceID d9ac21b93546848cee400e09e79bf55c (akka.tcp://flink@localhost:32199/user/taskmanager_0) at ResourceManager 2019-09-06 13:50:35,931 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registering TaskManager with ResourceID e7f27036fca804c716fd6bada9f1e0d6 (akka.tcp://flink@localhost:28648/user/taskmanager_0) at ResourceManager ```
Flink local模式下,运行Flink自带的jar包一直报错
Flink local模式下,运行Flink自带的jar包一直报错 启动Flink: ![图片说明](https://img-ask.csdn.net/upload/201911/22/1574391817_772547.png) 执行: bin/flink run examples/streaming/SocketWindowWordCount.jar --port 8888 利用nc -lk 8888模拟socket 输入,然后会报错,并且页面也进不去了 前台页面显示: ![图片说明](https://img-ask.csdn.net/upload/201911/22/1574392093_133203.png) 后台报错内容: ``` org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result. (JobID: f2ee89e0ed991f22ed9eaec00edfd789) at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:261) at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:486) at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66) at org.apache.flink.streaming.examples.socket.SocketWindowWordCount.main(SocketWindowWordCount.java:92) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529) at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421) at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426) at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:816) at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:290) at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:216) at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1053) at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1129) at org.apache.flink.client.cli.CliFrontend$$Lambda$1/1977310713.call(Unknown Source) at org.apache.flink.runtime.security.HadoopSecurityContext$$Lambda$2/1169474473.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754) at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1129) Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph. 
at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:380) at org.apache.flink.client.program.rest.RestClusterClient$$Lambda$11/256346753.apply(Unknown Source) at java.util.concurrent.CompletableFuture$ExceptionCompletion.run(CompletableFuture.java:1246) at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:193) at java.util.concurrent.CompletableFuture.internalComplete(CompletableFuture.java:210) at java.util.concurrent.CompletableFuture$ThenApply.run(CompletableFuture.java:723) at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:193) at java.util.concurrent.CompletableFuture.internalComplete(CompletableFuture.java:210) at java.util.concurrent.CompletableFuture$ThenCopy.run(CompletableFuture.java:1333) at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:193) at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2361) at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:203) at org.apache.flink.runtime.concurrent.FutureUtils$$Lambda$21/100819684.accept(Unknown Source) at java.util.concurrent.CompletableFuture$WhenCompleteCompletion.run(CompletableFuture.java:1298) at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:193) at java.util.concurrent.CompletableFuture.internalComplete(CompletableFuture.java:210) at java.util.concurrent.CompletableFuture$ThenCopy.run(CompletableFuture.java:1333) at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:193) at java.util.concurrent.CompletableFuture.internalComplete(CompletableFuture.java:210) at java.util.concurrent.CompletableFuture$AsyncCompose.exec(CompletableFuture.java:626) at java.util.concurrent.CompletableFuture$Async.run(CompletableFuture.java:428) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/dispatcher#1680779493]] after [12000 ms]. Sender[null] sent message of type "org.apache.flink.runtime.rpc.messages.LocalFencedMessage". 
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604) at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126) at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601) at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109) at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599) at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329) at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280) at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284) at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236) at java.lang.Thread.run(Thread.java:745) End of exception on server side>] at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:350) at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:334) at org.apache.flink.runtime.rest.RestClient$$Lambda$31/1522310222.apply(Unknown Source) at java.util.concurrent.CompletableFuture$AsyncCompose.exec(CompletableFuture.java:604) ... 4 more ``` 网上搜了一下,添加了这几个参数还是有问题: ``` akka.ask.timeout: 60 s web.timeout: 12000 taskmanager.host: localhost ``` 有人遇到过吗?
flink集群任务,FlinkKafkaConsumer010读取topic消息 ,将消息存储并上传至cos报错
用flink读取kafka集群的topic ,使用了一个java 的定时任务 Timer() 打包放到flink集群上  刚接触flink  实在不知道如何调 ![图片说明](https://img-ask.csdn.net/upload/201903/18/1552913971_808109.png)
多线程向flink集群提交任务失败
Caused by: org.apache.flink.client.program.ProgramInvocationException: The program execution failed: Could not upload the program's JAR files to the JobManager. at org.apache.flink.client.program.ClusterClient.runDetached(ClusterClient.java:454) at org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:99) at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:400) at org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:76) at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:345) ... 14 common frames omitted Caused by: org.apache.flink.runtime.client.JobSubmissionException: Could not upload the program's JAR files to the JobManager. at org.apache.flink.runtime.client.JobClient.submitJobDetached(JobClient.java:410) at org.apache.flink.client.program.ClusterClient.runDetached(ClusterClient.java:451) ... 19 common frames omitted Caused by: java.io.IOException: Could not retrieve the JobManager's blob port. at org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:745) at org.apache.flink.runtime.jobgraph.JobGraph.uploadUserJars(JobGraph.java:565) at org.apache.flink.runtime.client.JobClient.submitJobDetached(JobClient.java:407) ... 20 common frames omitted Caused by: java.io.IOException: PUT operation failed: Could not transfer error message at org.apache.flink.runtime.blob.BlobClient.putInputStream(BlobClient.java:512) at org.apache.flink.runtime.blob.BlobClient.put(BlobClient.java:374) at org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:771) at org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:740) ... 22 common frames omitted Caused by: java.io.IOException: Could not transfer error message at org.apache.flink.runtime.blob.BlobClient.readExceptionFromStream(BlobClient.java:799) at org.apache.flink.runtime.blob.BlobClient.receivePutResponseAndCompare(BlobClient.java:537) at org.apache.flink.runtime.blob.BlobClient.putInputStream(BlobClient.java:508) ... 
25 common frames omitted Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.ipc.RemoteException at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:64) at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613) at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000) at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:501) at java.lang.Throwable.readObject(Throwable.java:914) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1900) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371) at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:290) at org.apache.flink.runtime.blob.BlobClient.readExceptionFromStream(BlobClient.java:795) ... 27 common frames omitted
Flink on yarn 提交任务 文件不存在 不知道是为什么
The program finished with the following exception: org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn session cluster at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:420) at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:259) at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215) at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044) at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754) at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120) Caused by: org.apache.flink.yarn.AbstractYarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment. Diagnostics from YARN: Application application_1577490600420_0001 failed 1 times due to AM Container for appattempt_1577490600420_0001_000001 exited with exitCode: -1000 For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1577490600420_0001Then, click on links to logs of each attempt. Diagnostics: File file:/home/chenjia/.flink/application_1577490600420_0001/application_1577490600420_0001-flink-conf.yaml1025360586633443671.tmp does not exist java.io.FileNotFoundException: File file:/home/chenjia/.flink/application_1577490600420_0001/application_1577490600420_0001-flink-conf.yaml1025360586633443671.tmp does not exist at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611) at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824) at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:428) at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253) at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63) at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361) at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762) at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358) at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Failing this attempt. Failing the application.
Flink on yarn 提交任务 文件不存在
Application application_1577461148853_0001 failed 1 times due to AM Container for appattempt_1577461148853_0001_000001 exited with exitCode: -1000 For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1577461148853_0001Then, click on links to logs of each attempt. Diagnostics: File file:/home/chenjia/.flink/application_1577461148853_0001/application_1577461148853_0001-flink-conf.yaml1177367680042949062.tmp does not exist java.io.FileNotFoundException: File file:/home/chenjia/.flink/application_1577461148853_0001/application_1577461148853_0001-flink-conf.yaml1177367680042949062.tmp does not exist at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611) at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824) at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:428) at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253) at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63) at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361) at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762) at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358) at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Failing this attempt. Failing the application.
flink on yarn 模式下出现提交任务失败
我在提交任务的时候,只能提交有限个任务,![图片说明](https://img-ask.csdn.net/upload/201909/24/1569320809_222117.png) ![图片说明](https://img-ask.csdn.net/upload/201909/24/1569320989_140951.png) 这yarn集群里的资源还是很足的呀,为什么我提交一个很简单的任务都会上面这样的错呢
使用RestTemplate 访问Flink程序,怎么运行jar包?
我的Flink程序需要从args里读取一个json格式的参数,用RestTemplate 怎么发送json格式,在Flink程序里一直不能收到,怎么传参?
Flink如何将kafka里的消息写入到对应的topic
已知所有kafka里topic为固定格式的json,目前想用flink处理所有topic里的数据,并且写入第二个kafka,sink的topic和source的topic一致,如何实现?
运算符名称数据源超出了80个字符的长度限制?
Flink 运算符名称数据源超出了80个字符的长度限制,已被截断?有人遇到过吗?
flink中的分流和分区有什么区别吗,分别应用到哪些应用场景中呢
flink中的分流和 分区有什么区别吗,它们的应用场景又是怎样的呢
Running the bundled example/batch/WordCount on flink-1.8.1 fails with the following problem
org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result. (JobID: 5a7165e1260c6316fa11d2760bd3d49f)
	at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:260)
	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:486)
	at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
	at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1511)
	at com.dataartisans.streamledger.examples.simpletrade.SimpleTradeExample.main(SimpleTradeExample.java:98)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
	at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)
	at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)
	at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
	at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)
	at org.apache.flink.client.cli.CliFrontend.lambda$main$16(CliFrontend.java:1120)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
	at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
	at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$25(RestClusterClient.java:379)
	at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
	at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
	at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$32(FutureUtils.java:213)
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
	at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Exception is not retryable.
	at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
	at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
	at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911)
	at java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:899)
	... 12 more
Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Exception is not retryable.
	... 10 more
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.rest.util.RestClientException: [Job submission failed.]
	at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
	at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
	at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911)
	at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:953)
	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
	... 4 more
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Job submission failed.]
	at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:310)
	at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$364(RestClient.java:294)
	at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
	... 5 more
How can the SQL that Flink is currently executing be updated at runtime?
A project has the requirement to dynamically update the SQL that Flink is executing while the job runs. Does anyone have suggestions?
What is the merge method of Flink's AggregateFunction used for?
The official documentation says it merges accumulators, but even if my implementation simply returns null there is no visible effect. Under what circumstances is this method actually called?
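For reference: merge is only invoked when the framework has to combine two partial accumulators, the classic case being merging windows (session windows); with ordinary tumbling or sliding windows it is typically never called, which is why returning null appears harmless. A minimal average aggregate for illustration:
```
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.java.tuple.Tuple2;

// Average of Long values; the accumulator is (sum, count).
public class AverageAggregate implements AggregateFunction<Long, Tuple2<Long, Long>, Double> {

    @Override
    public Tuple2<Long, Long> createAccumulator() {
        return Tuple2.of(0L, 0L);
    }

    @Override
    public Tuple2<Long, Long> add(Long value, Tuple2<Long, Long> acc) {
        return Tuple2.of(acc.f0 + value, acc.f1 + 1);
    }

    @Override
    public Double getResult(Tuple2<Long, Long> acc) {
        return acc.f1 == 0 ? 0.0 : ((double) acc.f0) / acc.f1;
    }

    @Override
    public Tuple2<Long, Long> merge(Tuple2<Long, Long> a, Tuple2<Long, Long> b) {
        // Called when two partial accumulators must be combined,
        // e.g. when two session windows for the same key merge.
        return Tuple2.of(a.f0 + b.f0, a.f1 + b.f1);
    }
}
```
Used, for example, as stream.keyBy(...).window(EventTimeSessionWindows.withGap(Time.minutes(10))).aggregate(new AverageAggregate()), where two session windows for the same key can merge and merge is then actually exercised.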
[flink] How can Gelly process DataStream data read from Kafka?
Beginner here asking for help. The requirement is to take data out of Kafka; with the Flink Kafka connector I get it as a DataStream. I want to run graph analysis on this data with Gelly, but Gelly is based on the DataSet API. I tried converting the DataStream to a Table and then the Table to a DataSet, but got the errors shown below. Is there a better way to solve this? ![image description](https://img-ask.csdn.net/upload/201908/14/1565773740_150241.png) ![image description](https://img-ask.csdn.net/upload/201908/14/1565773757_625389.png)
What is a good way to handle deduplication logic in Flink?
How can all records be deduplicated by primary key in Flink? I currently keyBy(pk) and filter with keyed state, but in practice the checkpoints take far too long and severely hurt performance. Is there a better approach? Also, how is DISTINCT actually implemented in Flink SQL, and does it have to be used with a window?
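One possible direction (not the only one): keep the per-key "seen" flag in keyed state with a TTL so the state does not grow without bound, and pair it with the RocksDB state backend plus incremental checkpoints so checkpoint duration stops scaling with total state size. A sketch with an arbitrary 24-hour TTL; the input is assumed to be already keyed by the primary key:
```
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Emits each key's record only once; the "seen" flag expires after 24 hours.
public class DedupByKey extends KeyedProcessFunction<String, String, String> {

    private transient ValueState<Boolean> seen;

    @Override
    public void open(Configuration parameters) {
        ValueStateDescriptor<Boolean> desc = new ValueStateDescriptor<>("seen", Boolean.class);

        // Let entries expire so the dedup state does not accumulate forever.
        StateTtlConfig ttl = StateTtlConfig.newBuilder(Time.hours(24))
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                .build();
        desc.enableTimeToLive(ttl);

        seen = getRuntimeContext().getState(desc);
    }

    @Override
    public void processElement(String record, Context ctx, Collector<String> out) throws Exception {
        if (seen.value() == null) {
            seen.update(true);
            out.collect(record);  // first occurrence of this primary key
        }
        // later occurrences are silently dropped
    }
}
```
Wiring would look like stream.keyBy(r -> extractPk(r)).process(new DedupByKey()), with extractPk being whatever derives the primary key; enabling incremental RocksDB checkpoints (new RocksDBStateBackend(checkpointPath, true)) usually brings checkpoint times down. As far as I know, non-windowed DISTINCT in Flink SQL is likewise backed by keyed state and is bounded through the query's idle-state retention settings rather than by a mandatory window.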
Flink on Hadoop YARN fails, reporting that a jar (slf4j) cannot be found
When running ./bin/yarn-session.sh -n 2 -s 2 -jm 1024 -tm 1024 in the Flink directory, startup reports:
2018-12-16 16:01:42,879 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli - Error while running the Flink Yarn session.
org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn session cluster
	at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:423)
	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:607)
	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$2(FlinkYarnSessionCli.java:810)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
	at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:810)
Caused by: org.apache.flink.yarn.AbstractYarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment.
Diagnostics from YARN: Application application_1544946711234_0002 failed 1 times due to AM Container for appattempt_1544946711234_0002_000001 exited with exitCode: -1000
For more detailed output, check application tracking page: http://master:8088/proxy/application_1544946711234_0002/ Then, click on links to logs of each attempt.
Diagnostics: File file:/root/.flink/application_1544946711234_0002/lib/slf4j-log4j12-1.7.15.jar does not exist
java.io.FileNotFoundException: File file:/root/.flink/application_1544946711234_0002/lib/slf4j-log4j12-1.7.15.jar does not exist
	at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:534)
Flink's JDBCAppendTableSink does not write data to MySQL
1. With a custom sink that extends RichSinkFunction, writing to MySQL works fine; with JDBCAppendTableSink.builder(), no data is written to MySQL.
2.
```
JDBCAppendTableSink sink = JDBCAppendTableSink.builder()
        .setBatchSize(1)
        .setDrivername("com.mysql.jdbc.Driver")
        .setDBUrl("jdbc:mysql://localhost:3306/flink")
        .setUsername("root")
        .setPassword("123456")
        .setQuery(sql2)
        .setParameterTypes(types)
        .build();
```
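Two things commonly bite here and may be worth ruling out: the sink has to be attached to a Row stream (or registered as a table sink and written to with insertInto), since building it alone writes nothing, and the Row arity and types must match setParameterTypes and the INSERT statement. A minimal wiring sketch, assuming flink-jdbc 1.9; the table name, fields, and values are placeholders:
```
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.io.jdbc.JDBCAppendTableSink;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.types.Row;

public class JdbcSinkExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Elements must be Rows whose arity and types match setParameterTypes and the INSERT statement.
        DataStream<Row> rows = env
                .fromElements(Row.of(1, "alice"), Row.of(2, "bob"))
                .returns(new RowTypeInfo(Types.INT, Types.STRING));

        JDBCAppendTableSink sink = JDBCAppendTableSink.builder()
                .setBatchSize(1)                        // flush every record; the default batches records
                .setDrivername("com.mysql.jdbc.Driver")
                .setDBUrl("jdbc:mysql://localhost:3306/flink")
                .setUsername("root")
                .setPassword("123456")
                .setQuery("INSERT INTO user_table (id, name) VALUES (?, ?)")
                .setParameterTypes(Types.INT, Types.STRING)
                .build();

        // The sink must actually be attached to the stream; building it alone writes nothing.
        sink.emitDataStream(rows);

        env.execute("jdbc-append-sink");
    }
}
```
The MySQL driver jar also has to be on the job classpath, and since JDBCAppendTableSink is append-only, updates or upserts need a different sink.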
What is the Flink equivalent of Spark's PairFunction?
What is the Flink equivalent of Spark's PairFunction?
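There is no dedicated PairFunction in Flink. The usual counterpart is a MapFunction (or lambda) that returns a Tuple2<K, V>, combined with keyBy where Spark would follow the pair with reduceByKey or groupByKey. A small sketch:
```
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PairLikeExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> words = env.fromElements("a", "b", "a");

        // Spark's PairFunction<String, String, Integer> roughly becomes a MapFunction to Tuple2.
        DataStream<Tuple2<String, Integer>> pairs = words.map(
                new MapFunction<String, Tuple2<String, Integer>>() {
                    @Override
                    public Tuple2<String, Integer> map(String w) {
                        return Tuple2.of(w, 1);
                    }
                });

        // reduceByKey-style aggregation: key on the first tuple field, then sum the second.
        pairs.keyBy(0).sum(1).print();

        env.execute("pair-like-example");
    }
}
```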