Running one of Flink's bundled example jars in Flink local mode keeps failing

Start Flink:

[screenshot of the startup output; image not included]

Run: bin/flink run examples/streaming/SocketWindowWordCount.jar --port 8888
Then I use nc -lk 8888 to simulate the socket input; at that point the job errors out and the web UI also becomes unreachable.
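For context, the full sequence is roughly the following (assuming the local cluster is started with the standard bin/start-cluster.sh and everything runs on one machine):

# 1. start the local standalone cluster (one JobManager + one TaskManager)
bin/start-cluster.sh
# 2. submit the bundled streaming example, reading from socket port 8888
bin/flink run examples/streaming/SocketWindowWordCount.jar --port 8888
# 3. in another terminal, feed words into the socket
nc -lk 8888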

The web UI shows:
[screenshot of the web UI error page; image not included]

The error in the backend log:

org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result. (JobID: f2ee89e0ed991f22ed9eaec00edfd789)
    at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:261)
    at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:486)
    at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
    at org.apache.flink.streaming.examples.socket.SocketWindowWordCount.main(SocketWindowWordCount.java:92)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
    at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
    at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:816)
    at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:290)
    at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:216)
    at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1053)
    at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1129)
    at org.apache.flink.client.cli.CliFrontend$$Lambda$1/1977310713.call(Unknown Source)
    at org.apache.flink.runtime.security.HadoopSecurityContext$$Lambda$2/1169474473.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
    at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1129)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
    at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:380)
    at org.apache.flink.client.program.rest.RestClusterClient$$Lambda$11/256346753.apply(Unknown Source)
    at java.util.concurrent.CompletableFuture$ExceptionCompletion.run(CompletableFuture.java:1246)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:193)
    at java.util.concurrent.CompletableFuture.internalComplete(CompletableFuture.java:210)
    at java.util.concurrent.CompletableFuture$ThenApply.run(CompletableFuture.java:723)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:193)
    at java.util.concurrent.CompletableFuture.internalComplete(CompletableFuture.java:210)
    at java.util.concurrent.CompletableFuture$ThenCopy.run(CompletableFuture.java:1333)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:193)
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2361)
    at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:203)
    at org.apache.flink.runtime.concurrent.FutureUtils$$Lambda$21/100819684.accept(Unknown Source)
    at java.util.concurrent.CompletableFuture$WhenCompleteCompletion.run(CompletableFuture.java:1298)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:193)
    at java.util.concurrent.CompletableFuture.internalComplete(CompletableFuture.java:210)
    at java.util.concurrent.CompletableFuture$ThenCopy.run(CompletableFuture.java:1333)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:193)
    at java.util.concurrent.CompletableFuture.internalComplete(CompletableFuture.java:210)
    at java.util.concurrent.CompletableFuture$AsyncCompose.exec(CompletableFuture.java:626)
    at java.util.concurrent.CompletableFuture$Async.run(CompletableFuture.java:428)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/dispatcher#1680779493]] after [12000 ms]. Sender[null] sent message of type "org.apache.flink.runtime.rpc.messages.LocalFencedMessage".
    at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)
    at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
    at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
    at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
    at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
    at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
    at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
    at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
    at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
    at java.lang.Thread.run(Thread.java:745)

End of exception on server side>]

    at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:350)
    at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:334)
    at org.apache.flink.runtime.rest.RestClient$$Lambda$31/1522310222.apply(Unknown Source)
    at java.util.concurrent.CompletableFuture$AsyncCompose.exec(CompletableFuture.java:604)
    ... 4 more

I searched around online and added the following parameters, but the problem persists:

akka.ask.timeout: 60 s
web.timeout: 12000
taskmanager.host: localhost
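For reference, these entries go in conf/flink-conf.yaml and, as far as I know, only take effect after restarting the local cluster. akka.ask.timeout takes a duration string, while web.timeout is interpreted in milliseconds; the larger web.timeout below is only an assumption about what might be worth trying:

# conf/flink-conf.yaml (timeout-related entries; restart the cluster after editing)
# RPC ask timeout, as a duration string
akka.ask.timeout: 60 s
# timeout for asynchronous web/REST operations, in milliseconds (assumed larger value to try)
web.timeout: 120000
# hostname the TaskManager binds to
taskmanager.host: localhost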

Has anyone run into this before?

aLivable_Dedode: I have tried Flink versions 1.6 through 1.9, and they all behave the same way. (5 months ago)

1 answer

The job submission command is wrong. It should take the form:
bin/flink run -m ip:port .../WordCount.jar
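For the bundled example on a default local setup, that would be something like the following (localhost:8081 assumes the default rest.port; adjust it to the actual JobManager address):

# -m points the client at the JobManager/REST endpoint explicitly
bin/flink run -m localhost:8081 examples/streaming/SocketWindowWordCount.jar --port 8888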
