Flume startup error: java.lang.SecurityException: sealing violation: package org.apache.flume.conf is sealed

After starting Hadoop, ZooKeeper, and Kafka, I configured conf so that Flume consumes the messages produced into Kafka.

Command used to start Flume:

flume-ng agent --conf /home/hduser/apps/flume/conf --conf-file /home/hduser/apps/flume/conf/applog --name a1 -Dflume.root.logger=INFO,console
Error reported on the first start:

2019-04-27 13:20:32,643 (main) [ERROR - org.apache.flume.node.Application.main(Application.java:374)] A fatal error occurred while running. Exception follows.
java.lang.SecurityException: sealing violation: package org.apache.flume.conf is sealed
    at java.net.URLClassLoader.getAndVerifyPackage(URLClassLoader.java:399)
    at java.net.URLClassLoader.definePackageInternal(URLClassLoader.java:419)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:451)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.flume.node.Application.main(Application.java:350)

I searched the web for a long time and got nowhere. Does anyone know how to fix this?
Is this message saying that the package org.apache.flume.conf is sealed?

After several more starts, the error changed to:

(main) [ERROR - org.apache.flume.node.Application.main(Application.java:374)] A fatal error occurred while running. Exception follows.
java.lang.SecurityException: sealing violation: can't seal package org.apache.flume.conf: already loaded
    at java.net.URLClassLoader.getAndVerifyPackage(URLClassLoader.java:406)
    at java.net.URLClassLoader.definePackageInternal(URLClassLoader.java:419)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:451)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.flume.node.Application.main(Application.java:350)

Now it says the package has already been loaded, which confuses me even more.
Any help appreciated!

weixin_39868387
江湖侠客 I've run into this too — have you solved it yet?
Replied 2 months ago
jjj3487521
turbo_ladyJ I remember the small demo still worked yesterday; it could transfer files fine.
Replied about a year ago

1 Answer

It's still the duplicate-JAR problem. There is a very detailed explanation of sealing violations online; here is the link:
http://blog.sina.com.cn/s/blog_4c890a7f01015iaa.html
(A sealed package requires every class in it to be loaded from the same JAR, so a second copy of org.apache.flume.conf on the classpath triggers exactly this SecurityException.)
I had added my own JAR under flume/lib, and it bundled the org.apache.flume.conf package as a dependency. Under that JAR's META-INF folder there is a .MF manifest file containing the line Sealed: true; changing it to false stopped the error. That's how I fixed it.
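
For anyone hitting the same thing, one way to locate the offending JAR is to scan flume/lib for archives that bundle the org.apache.flume.conf package and print their Sealed manifest attribute. A minimal sketch — the lib path is assumed from the install location in the question, and the SealScan class name is made up for illustration:

```
import java.io.File;
import java.util.jar.Attributes;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

// Lists every JAR in flume/lib that bundles org.apache.flume.conf and shows
// whether its manifest seals that package; two hits explain the violation.
public class SealScan {
    public static void main(String[] args) throws Exception {
        File libDir = new File(args.length > 0 ? args[0] : "/home/hduser/apps/flume/lib");
        File[] jars = libDir.listFiles((dir, name) -> name.endsWith(".jar"));
        if (jars == null) {
            System.err.println("not a directory: " + libDir);
            return;
        }
        for (File jar : jars) {
            try (JarFile jf = new JarFile(jar)) {
                boolean hasPkg = jf.stream()
                        .anyMatch(e -> e.getName().startsWith("org/apache/flume/conf/"));
                if (!hasPkg) continue;
                String sealed = null;
                Manifest mf = jf.getManifest();
                if (mf != null) {
                    // check the per-package manifest entry first, then the main attributes
                    Attributes pkg = mf.getAttributes("org/apache/flume/conf/");
                    if (pkg != null) sealed = pkg.getValue(Attributes.Name.SEALED);
                    if (sealed == null) sealed = mf.getMainAttributes().getValue(Attributes.Name.SEALED);
                }
                System.out.println(jar.getName() + " -> Sealed: " + sealed);
            }
        }
    }
}
```

If two JARs show up, deleting the duplicate (or rebuilding your own JAR without Flume's classes shaded in) avoids editing the manifest at all.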

Other related questions
Running Hadoop reports Error: java.lang.ArrayIndexOutOfBoundsException: 1, but the program looks fine

```
public class HH {
    public static class HHMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private Text map_key = new Text();
        private IntWritable map_value = new IntWritable();

        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            //System.out.println(value.toString());
            String[] H = value.toString().split("\t");
            String[] h = H[1].split(",");
            int[][] R = new int[h.length][h.length];
            for (int i = 0; i < h.length; i++)
                for (int j = 0; j < h.length; j++) {
                    R[i][j] = Integer.parseInt(h[i]) * Integer.parseInt(h[j]);
                    map_key.set(H[0] + "," + j);
                    // map_key.set(H[0]);
                    map_value.set((R[i][j]));
                    // map_value.set(1);
                    context.write(map_key, map_value);
                }
        }
    }
}
```
The program looks fine to me — it just throws the index-out-of-bounds error, and the file format looks fine too.
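
One plausible reading of that trace (an assumption, since the input file isn't shown): any line without a tab makes split("\t") return a one-element array, so H[1] throws exactly ArrayIndexOutOfBoundsException: 1. A self-contained check, with a made-up class name and sample line:

```
// Reproduces the suspected failure mode of the mapper above: a record with
// no tab yields H.length == 1, so indexing H[1] throws
// ArrayIndexOutOfBoundsException: 1.
public class SplitCheck {
    public static void main(String[] args) {
        String line = "1,2,3";               // hypothetical record missing the tab
        String[] H = line.split("\t");
        System.out.println(H.length);        // prints 1
        if (H.length < 2) {
            System.out.println("skip malformed record instead of reading H[1]");
        } else {
            System.out.println("fields: " + H[0] + " / " + H[1]);
        }
    }
}
```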

Running a Flume agent produces the following error

My configuration:
```
agent.sources = s1
agent.channels = c1
agent.sinks = k1
agent.sources.s1.type=spooldir
agent.sources.s1.spoolDir=/tmp/logs/tomcat2kafka
agent.sources.s1.channels=c1
agent.channels.c1.type=memory
agent.channels.c1.capacity=10000
agent.channels.c1.transactionCapacity=100
# Kafka sink
agent.sinks.k1.type= org.apache.flume.sink.kafka.KafkaSink
# Kafka broker address and port
agent.sinks.k1.brokerList=222.30.194.254:9092
# Kafka topic
agent.sinks.k1.topic=kafkatest2
# serialization
agent.sinks.k1.serializer.class=kafka.serializer.StringEncoder
agent.sinks.k1.channel=c1
```
Error message:
```
[ERROR - org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:240)] Failed to publish events
org.apache.kafka.common.errors.InterruptException: Flush interrupted.
    at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:546)
    at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:224)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
    at org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:57)
    at org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:425)
    at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:544)
    ... 4 more
```
There really is no matching answer online; I'm at a loss. Offering points for help.

flume-ng 1.4 elasticsearch sink error

Does anyone know what's going on here? The error occurs when flume-ng uses elasticsearch as the sink.
```
20 March 2014 22:28:09,417 INFO [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:150) - Creating channels
20 March 2014 22:28:09,438 INFO [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:40) - Creating instance of channel c1 type memory
20 March 2014 22:28:09,451 INFO [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:205) - Created channel c1
20 March 2014 22:28:09,453 INFO [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39) - Creating instance of source r1, type spooldir
20 March 2014 22:28:09,478 INFO [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:40) - Creating instance of sink: k1, type: elasticsearch
20 March 2014 22:28:09,486 ERROR [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:145) - Failed to start agent because dependencies were not found in classpath. Error follows.
java.lang.NoClassDefFoundError: org/elasticsearch/common/transport/TransportAddress
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:190)
    at org.apache.flume.sink.DefaultSinkFactory.getClass(DefaultSinkFactory.java:67)
    at org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:41)
    at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:415)
    at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:103)
    at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.common.transport.TransportAddress
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
```

How to fix errors when Flume collects from Kafka

Error log:
```
Source.java:120)] Event #: 0
2018-11-23 17:59:18,995 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 965
2018-11-23 17:59:18,995 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,005 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 975
2018-11-23 17:59:19,005 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,015 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 985
2018-11-23 17:59:19,015 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,025 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 995
2018-11-23 17:59:19,025 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,036 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 1006
2018-11-23 17:59:19,036 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,036 (PollableSourceRunner-KafkaSource-kaSource) [ERROR - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:153)] KafkaSource EXCEPTION, {}
java.lang.NullPointerException
    at org.apache.flume.instrumentation.MonitoredCounterGroup.increment(MonitoredCounterGroup.java:261)
    at org.apache.flume.instrumentation.kafka.KafkaSourceCounter.incrementKafkaEmptyCount(KafkaSourceCounter.java:49)
    at org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:146)
    at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:139)
    at java.lang.Thread.run(Thread.java:748)
```
Configuration file:
```
kafkaLogger.sources = kaSource
kafkaLogger.channels = memoryChannel
kafkaLogger.sinks = kaSink
# The channel can be defined as follows.
kafkaLogger.sources.kaSource.channels = memoryChannel
kafkaLogger.sources.kaSource.type= org.apache.flume.source.kafka.KafkaSource
kafkaLogger.sources.kaSource.zookeeperConnect=192.168.130.4:2181,192.168.130.5:2181,192.168.130.6:2181
kafkaLogger.sources.kaSource.topic=dwd-topic
kafkaLogger.sources.kaSource.groupId = 0
kafkaLogger.channels.memoryChannel.type=memory
kafkaLogger.channels.memoryChannel.capacity = 1000
kafkaLogger.channels.memoryChannel.keep-alive = 60
kafkaLogger.sinks.kaSink.type = elasticsearch
kafkaLogger.sinks.kaSink.hostNames = 192.168.130.6:9300
kafkaLogger.sinks.kaSink.indexName = flume_mq_es_d
kafkaLogger.sinks.kaSink.indexType = flume_mq_es
kafkaLogger.sinks.kaSink.clusterName = zyuc-elasticsearch
kafkaLogger.sinks.kaSink.batchSize = 100
kafkaLogger.sinks.kaSink.client = transport
kafkaLogger.sinks.kaSink.serializer = com.commons.flume.sink.elasticsearch.CommonElasticSearchIndexRequestBuilderFactory
kafkaLogger.sinks.kaSink.serializer.parse = com.commons.log.parser.LogTextParser
kafkaLogger.sinks.kaSink.serializer.formatPattern = yyyyMMdd
kafkaLogger.sinks.kaSink.serializer.dateFieldName = time
kafkaLogger.sinks.kaSink.channel = memoryChannel
```

Installing HAWQ PXF fails with java.lang.NoClassDefFoundError: com/ctc/wstx/io/InputBootstrapper

![screenshot](https://img-ask.csdn.net/upload/201811/06/1541492438_910486.png) Could someone help me analyze this?

java.net.BindException: Address already in use: bind

java.net.BindException: Address already in use: bind — could someone take a look? Thanks, I really can't figure it out!
```
java.net.BindException: Address already in use: bind
    at sun.nio.ch.Net.bind0(Native Method) ~[na:1.8.0_121]
    at sun.nio.ch.Net.bind(Net.java:433) ~[na:1.8.0_121]
    at sun.nio.ch.Net.bind(Net.java:425) ~[na:1.8.0_121]
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) ~[na:1.8.0_121]
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) ~[na:1.8.0_121]
    at org.apache.tomcat.util.net.NioEndpoint.bind(NioEndpoint.java:340) ~[tomcat-embed-core-8.0.33.jar:8.0.33]
    at org.apache.tomcat.util.net.AbstractEndpoint.start(AbstractEndpoint.java:773) ~[tomcat-embed-core-8.0.33.jar:8.0.33]
    at org.apache.coyote.AbstractProtocol.start(AbstractProtocol.java:473) ~[tomcat-embed-core-8.0.33.jar:8.0.33]
    at org.apache.catalina.connector.Connector.startInternal(Connector.java:986) [tomcat-embed-core-8.0.33.jar:8.0.33]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147) [tomcat-embed-core-8.0.33.jar:8.0.33]
    at org.apache.catalina.core.StandardService.addConnector(StandardService.java:239) [tomcat-embed-core-8.0.33.jar:8.0.33]
    at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.addPreviouslyRemovedConnectors(TomcatEmbeddedServletContainer.java:194) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
    at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.start(TomcatEmbeddedServletContainer.java:151) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
    at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.startEmbeddedServletContainer(EmbeddedWebApplicationContext.java:293) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
    at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:141) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:541) [spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
    at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:766) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
    at org.springframework.boot.SpringApplication.createAndRefreshContext(SpringApplication.java:361) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1191) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1180) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
    at com.nowcoder.ToutiaoApplication.main(ToutiaoApplication.java:16) [classes/:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_121]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_121]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_121]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_121]
    at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) [spring-boot-devtools-1.3.5.RELEASE.jar:1.3.5.RELEASE]
2017-02-10 18:27:02.955 ERROR 24196 --- [ restartedMain] o.apache.catalina.core.StandardService : Failed to start connector [Connector[HTTP/1.1-8080]]
```
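
A quick way to confirm the diagnosis before restarting: try to bind the port yourself. This sketch is an assumption, not part of the original post; PortProbe is a made-up name, and 8080 comes from the connector in the log:

```
import java.net.ServerSocket;

// Attempts to bind the port Tomcat wants; a BindException here means some
// other process (often a previous instance of the same app) still owns it.
public class PortProbe {
    public static void main(String[] args) throws Exception {
        int port = args.length > 0 ? Integer.parseInt(args[0]) : 8080;
        try (ServerSocket s = new ServerSocket(port)) {
            System.out.println("port " + port + " is free");
        }
    }
}
```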

kafka.common.KafkaException:

```
package com;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import kafka.serializer.StringEncoder;

public class kafkaProducer extends Thread {

    private String topic;

    public kafkaProducer(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public void run() {
        Producer producer = createProducer();
        int i = 0;
        while (true) {
            producer.send(new KeyedMessage<Integer, String>(topic, "message: " + i++));
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private Producer createProducer() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", "localhost:2181"); // declare zk
        properties.put("serializer.class", StringEncoder.class.getName());
        properties.put("metadata.broker.list", "localhost:9092"); // declare the kafka broker
        return new Producer<Integer, String>(new ProducerConfig(properties));
    }

    public static void main(String[] args) {
        new kafkaProducer("test").start(); // use the topic "test" already created in the kafka cluster
    }
}
```
```
kafka.common.KafkaException: fetching topic metadata for topics [Set(test)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
    at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
    at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
    at kafka.utils.Utils$.swallow(Utils.scala:172)
    at kafka.utils.Logging$class.swallowError(Logging.scala:106)
    at kafka.utils.Utils$.swallowError(Utils.scala:45)
    at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
    at kafka.producer.Producer.send(Producer.scala:77)
    at kafka.javaapi.producer.Producer.send(Producer.scala:33)
    at com.kafkaProducer.run(kafkaProducer.java:29)
Caused by: java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
    at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
    ... 9 more
```
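
In the old 0.8-era producer API, a ClosedChannelException during fetchTopicMetadata typically surfaces when the broker in metadata.broker.list can't be reached (broker down, wrong port, or an advertised host name that doesn't resolve). A basic TCP reachability sketch — the probe itself is an addition, using the host and port from the config above:

```
import java.net.InetSocketAddress;
import java.net.Socket;

// Checks plain TCP reachability of the broker address used in the producer
// config above; if this fails, the producer can't fetch metadata either.
public class BrokerProbe {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("localhost", 9092), 3000);
            System.out.println("broker port reachable");
        }
    }
}
```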

Has anyone used Getui (个推)? It errors after configuration. The error is as follows:

```
Caused by: java.lang.ClassNotFoundException: Didn't find class "com.getui.demo.PushDemoReceiver" on path: DexPathList[[zip file "/data/app/com.zhensoft.wgoperationmaintenance- 1.apk"],nativeLibraryDirectories=[/data/app-lib/com.zhensoft.wgoperationmaintenance-1, /vendor/lib, /system/lib]]
java.lang.RuntimeException: Unable to instantiate receiver com.getui.demo.PushDemoReceiver: java.lang.ClassNotFoundException: Didn't find class "com.getui.demo.PushDemoReceiver" on path: DexPathList[[zip file "/data/app/com.zhensoft.wgoperationmaintenance- 2.apk"],nativeLibraryDirectories=[/data/app-lib/com.zhensoft.wgoperationmaintenance-2, /vendor/lib, /system/lib]]
```

A question about Flume-ng's netcat configuration

Following the tutorials online, my netcat configuration is:
```
agent1.sources.source1.type = netcat
agent1.sources.source1.bind = localhost
agent1.sources.source1.port = 44444
```
(the rest of the configuration omitted). The service starts normally with this log:
```
2017-05-09 21:40:21,951 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:164)] Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/127.0.0.1:44444]
```
Then I opened a console on Windows and ran telnet 192.168.200.143 44444, but it said it could not connect to the host port (192.168.200.143 is the Flume host's IP).
After some head-scratching I realized I had never opened port 44444, so I switched to port 8089, where I do have a service running. Restarting threw a pile of errors:
```
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
```
Address already in use — instant despair! Isn't that setting supposed to monitor the data on the server's port 8089? How can the address be occupied — does Flume open port 8089 itself at startup?
Fine, I changed the config again to listen on my Windows machine's port:
```
agent1.sources.source1.type = netcat
agent1.sources.source1.bind = 192.168.205.143   # remote Windows machine
agent1.sources.source1.port = 9000              # port 9000 service opened on Windows
```
Starting again, another error:
```
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
    at org.apache.flume.source.NetcatSource.start(NetcatSource.java:162)
```
I'm completely lost — this configuration has me thoroughly confused. What I need to understand, and would appreciate help with, is: what do the netcat bind address and port actually mean?
1) Does Flume create a socket server on the configured address and port, and client programs then send log data to that port? That seems to contradict Flume actively collecting logs.
2) Or does Flume use the configured address and port to monitor an existing service port and its log data? I suspect Flume listens — but then why can't it even start when I point it at the port I want to monitor?
I'm completely stuck here; any help is much appreciated!
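
For what it's worth, the netcat source works the first way described in 1): Flume itself creates the server socket on bind:port, so bind must be an address local to the Flume host and the port must be free — which is consistent with all three errors above. Producers then connect to it as clients. A minimal client sketch, assuming an agent listening on localhost:44444:

```
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Connects to a Flume netcat source and sends one newline-terminated event.
public class NetcatClient {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("localhost", 44444);
             OutputStream out = s.getOutputStream()) {
            out.write("hello flume\n".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```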

Big data: flume-ng fails to start

flume-ng 1.5.0 fails to start with java.lang.OutOfMemoryError: Direct buffer memory. The memory configured in flume-env.sh is 4 GB, which should be more than enough. Looking for a fix.

Uploading files from Flume's spooldir directory to HDFS fails — java.io.IOException: File type DataStream not supported

This has bothered me for two days — Flume detects new files in the spooldir directory, but nothing makes it to HDFS; it always reports java.io.IOException: File type DataStream not supported. My hadoop cluster has one master and two slaves.
![screenshot](https://img-ask.csdn.net/upload/202003/25/1585132522_800088.png)![screenshot](https://img-ask.csdn.net/upload/202003/25/1585132530_697329.png)
The flume configuration:
```
# Name the components on this agent
agent1.sources = spooldirSource
agent1.channels = fileChannel
agent1.sinks = hdfsSink

# Describe/configure the source
agent1.sources.spooldirSource.type=spooldir
agent1.sources.spooldirSource.spoolDir=/usr/local/flume/data/spooldir

# Describe the sink
agent1.sinks.hdfsSink.type=hdfs
agent1.sinks.hdfsSink.hdfs.path=hdfs://192.168.174.128:9000/flume/%y-%m-%d/%H%M/%S
agent1.sinks.hdfsSink.hdfs.round = true
agent1.sinks.hdfsSink.hdfs.roundValue = 10
agent1.sinks.hdfsSink.hdfs.roundUnit = minute
agent1.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
agent1.sinks.hdfsSink.hdfs.fileType=DataStream

# Describe the channel
agent1.channels.fileChannel.type = file
agent1.channels.fileChannel.dataDirs=/hadoop/flume/datadir

# Bind the source and sink to the channel
agent1.sources.spooldirSource.channels=fileChannel
agent1.sinks.hdfsSink.channel=fileChannel
```
Also, is /hadoop/flume/datadir a directory that gets created automatically on master? I don't quite understand that part.

Flume reports errors at runtime: no config filters and an invalid regular expression

Flume configuration file:
```
a1.sources=s1
a1.channels=c1
a1.sinks=k1
a1.sources.s1.type = spooldir
a1.sources.s1.channels = c1
a1.sources.s1.spoolDir =/home/frankyu/serverlogs
a1.source.s1.ignorePattern= ^(.)*\\.tmp$
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://hadoop1:9000/flume
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.rollInterval = 0
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 10
a1.sinks.k1.channel = c1
a1.channels.c1.type = memory
```
Error messages:
```
2019-03-14 02:27:33,799 (conf-file-poller-0) [WARN - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateConfigFilterSet(FlumeConfiguration.java:623)] Agent configuration for 'a1' has no configfilters.
2019-03-14 02:27:33,790 (conf-file-poller-0) [WARN - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1161)] Invalid property specified: source.s1.ignorePattern
2019-03-14 02:27:33,796 (conf-file-poller-0) [WARN - org.apache.flume.conf.FlumeConfiguration.<init>(FlumeConfiguration.java:126)] Configuration property ignored: a1.source.s1.ignorePattern = ^(.)*\.tmp$
```
Thanks for any help!
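
Reading the warnings closely: the second one quotes the key as a1.source.s1.ignorePattern — the config line is missing the s in sources, which is why Flume reports an invalid property and ignores it (the "no configfilters" warning appears to be merely informational). The regex itself looks fine, as this small check suggests; the class name and sample file names are illustrative:

```
// Verifies that the ignorePattern regex from the config matches .tmp files.
public class IgnorePatternCheck {
    public static void main(String[] args) {
        String pattern = "^(.)*\\.tmp$";
        System.out.println("server.log.tmp".matches(pattern)); // true
        System.out.println("server.log".matches(pattern));     // false
    }
}
```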

Flume uploads to Hadoop fine with no files, but throws DistributeFileSystem not found when a file arrives?

Flume is configured, distributed mode. It runs fine when there are no files, but as soon as I upload a file into the watched directory it reports: Unable to deliver event. Exception follows. org.apache.flume.EventDeliveryException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributeFileSystem not found. But I have already added hadoop-hdfs-2.7.3.jar. Why can't it be found? Please advise.
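
One detail worth checking (an observation, not a confirmed fix): the class in the error is spelled DistributeFileSystem, while the class shipped in hadoop-hdfs is org.apache.hadoop.hdfs.DistributedFileSystem — the missing d suggests a typo in a configuration value (for example an fs.hdfs.impl override) rather than a missing JAR. A quick probe, run with hadoop-hdfs on the classpath:

```
// Confirms the correctly spelled class resolves; if this prints the class,
// the JAR is fine and the misspelled name comes from configuration.
public class HdfsClassProbe {
    public static void main(String[] args) throws Exception {
        System.out.println(Class.forName("org.apache.hadoop.hdfs.DistributedFileSystem"));
    }
}
```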

Flume fails to start after configuring a KafkaChannel! Help needed

Everything is set up on virtual machines: Flume is deployed on 172.235.10.10 and Kafka on 172.235.10.11. The configuration files are as follows.
Flume:
```
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = http
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 8000
a1.sinks.k1.type = file_roll
a1.sinks.k1.channel=c1
a1.sinks.k1.directory=/usr/log/flume
a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c1.brokerList=172.235.10.11:9002
a1.channels.c1.topic=test2
a1.channels.c1.zookeeperConnect=172.235.10.11:2181
```
ZooKeeper is the one bundled with Kafka, with its configuration unmodified. The changes to Kafka's server.properties:
```
host.name=172.235.10.11
zookeeper.connect=172.235.10.11:2181
```
Kafka starts normally, and the producer and consumer can talk to each other, but starting Flume reports this error: Error while getting events from Kafka. This is usually caused by trying to read a non-flume event. Ensure the setting for parseAsFlumeEvent is correct. Help!

A strange problem when using the class loader java.net.URLClassLoader?

First, a class extending java.net.URLClassLoader:
```
package kite.jvm;

import java.net.URL;
import java.net.URLClassLoader;

public class OneURLClassLoader extends URLClassLoader {

    // The parent of this loader defaults to the system AppClassLoader.
    public OneURLClassLoader(URL[] urls) {
        super(urls);
    }

    // Expose the parent class's findClass(String name) for loading classes.
    public Class findClass(String name) throws ClassNotFoundException {
        return super.findClass(name);
    }
}
```
Next, an interface:
```
package kite.jvm;

public interface OneInterface {}
```
Then a class implementing that interface:
```
package kite.jvm;

public class Constant implements OneInterface {}
```
Finally, a class with a main method to test it:
```
package kite.jvm;

import java.net.URL;

public class Run {
    public static void main(String[] args) throws Exception {
        // Location of the compiled .class files.
        String dir = "file:/Development/workspace/jvm/bin/";
        URL url = new URL(dir);
        OneURLClassLoader oucl = new OneURLClassLoader(new URL[]{url});
        // Load kite.jvm.Constant with the custom loader and get its Class object.
        Class c = oucl.findClass("kite.jvm.Constant");
        // Instantiate from c and cast to the interface type (OneInterface).
        OneInterface instance1 = (OneInterface) c.newInstance();
        // Instantiate from c and cast to the concrete type (Constant).
        //Constant instance2 = (Constant) c.newInstance();
        System.out.println(instance1);
        //System.out.println(instance2);
    }
}
```
Running the code above prints:
```
kite.jvm.Constant@b6e39f
```
If the two commented-out lines are enabled, running it prints:
```
Exception in thread "main" java.lang.ClassCastException: kite.jvm.Constant cannot be cast to kite.jvm.Constant
    at kite.jvm.Run.main(Run.java:12)
```
Question: please explain why the exception occurs — in this scenario, what is the difference between casting to the interface type and to the concrete type? Thanks in advance. Note: the source code is in the attachment.
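
A hedged explanation, since it bears directly on the sealing question above: the JVM identifies a class by the pair (fully qualified name, defining loader). Calling findClass() makes OneURLClassLoader define its own copy of kite.jvm.Constant, while the cast in Run.main refers to the Constant resolved by the application loader — same name, two distinct runtime types, hence the ClassCastException. The interface cast works because OneInterface is resolved once through parent delegation and shared by both. A sketch reusing the question's classes and path:

```
import java.net.URL;

// Shows the two Class objects are different: Class.forName uses the
// application loader, while findClass defines a second copy in oucl.
public class LoaderIdentity {
    public static void main(String[] args) throws Exception {
        URL url = new URL("file:/Development/workspace/jvm/bin/");
        OneURLClassLoader oucl = new OneURLClassLoader(new URL[]{url});
        Class<?> fromApp = Class.forName("kite.jvm.Constant");
        Class<?> fromOucl = oucl.findClass("kite.jvm.Constant");
        System.out.println(fromApp == fromOucl); // false -> the cast fails
    }
}
```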

Low throughput with Flume's hdfs sink

Hi everyone, I've run into a problem. When my Flume writes files into HDFS, throughput is low — roughly 1 GB per 3 minutes. Testing separately, a plain put reaches 8 GB per minute, and a file sink manages about 1 GB per minute. The logs show nothing unusual; I only noticed while DEBUGging that committing one block takes almost 20 seconds. Can any expert help figure out the cause?
```
client.sources = r1
client.channels = c1
client.sinks = k1
client.sources.r1.type = spooldir
client.sources.r1.spoolDir = /var/data/tmpdata
client.sources.r1.fileSuffix = .COMPLETED
client.sources.r1.deletePolicy = never
client.sources.r1.batchSize = 500
client.sources.r1.channels = c1
client.channels.c1.type = memory
client.channels.c1.capacity = 1000000
client.channels.c1.transactionCapacity = 50000
client.channels.c1.keep-alive = 3
client.sinks.k1.type = hdfs
client.sinks.k1.hdfs.path = /flume/events/%Y%m%d/%H
client.sinks.k1.hdfs.useLocalTimeStamp = true
client.sinks.k1.hdfs.rollInterval = 3600
client.sinks.k1.hdfs.rollSize = 1000000000
client.sinks.k1.hdfs.rollCount = 0
client.sinks.k1.hdfs.batchSize = 500
client.sinks.k1.hdfs.callTimeout = 30000
client.sinks.k1.hdfs.fileType = DataStream
client.sinks.k1.channel = c1
```
```
12 Aug 2015 16:14:24,739 DEBUG [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:126) - Checking file:../conf/flume-client.conf for changes
12 Aug 2015 16:14:54,740 DEBUG [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:126) - Checking file:../conf/flume-client.conf for changes
12 Aug 2015 16:15:24,740 DEBUG [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:126) - Checking file:../conf/flume-client.conf for changes
12 Aug 2015 16:15:54,741 DEBUG [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:126) - Checking file:../conf/flume-client.conf for changes
12 Aug 2015 16:16:24,742 DEBUG [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:126) - Checking file:../conf/flume-client.conf for changes
12 Aug 2015 16:16:54,742 DEBUG [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:126) - Checking file:../conf/flume-client.conf for changes
12 Aug 2015 16:17:24,743 DEBUG [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:126) - Checking file:../conf/flume-client.conf for changes
12 Aug 2015 16:17:54,744 DEBUG [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:126) - Checking file:../conf/flume-client.conf for changes
12 Aug 2015 16:18:24,745 DEBUG [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:126) - Checking file:../conf/flume-client.conf for changes
12 Aug 2015 16:18:54,746 DEBUG [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:126) - Checking file:../conf/flume-client.conf for changes
12 Aug 2015 16:19:24,746 DEBUG [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:126) - Checking file:../conf/flume-client.conf for changes
```
The log shows no problems — it's just slow.

elasticsearch errors when inserting data

I was previously on elasticsearch 1.3.2 with the ik plugin installed and everything worked fine. Later I switched elasticsearch to 2.0.0 and also installed logstash 2.0.0, kibana 4.2.0, and ik 1.5.0. Now bulk inserts from Java code fail, although inserting manually from the elasticsearch console works.
```
[INFO][2016-11-16 18:22:34] org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:151) main [Agatha Harkness] loaded [analysis-jcseg], sites []
[INFO][2016-11-16 18:22:35] org.elasticsearch.client.transport.TransportClientNodesService$SniffNodesSampler$1$1.handleException(TransportClientNodesService.java:443) elasticsearch[Agatha Harkness][transport_client_worker][T#1]{New I/O worker #28} [Agatha Harkness] failed to get local cluster state for [#transport#-1][USER-20150529VW][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:173)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:125)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.StreamCorruptedException: Unsupported version: 1
    at org.elasticsearch.common.io.ThrowableObjectInputStream.readStreamHeader(ThrowableObjectInputStream.java:46)
    at java.io.ObjectInputStream.<init>(ObjectInputStream.java:299)
    at org.elasticsearch.common.io.ThrowableObjectInputStream.<init>(ThrowableObjectInputStream.java:38)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:170)
    ... 23 more
[WARN][2016-11-16 18:22:35] org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:135) elasticsearch[Agatha Harkness][transport_client_worker][T#1]{New I/O worker #28} [Agatha Harkness] Message not fully read (response) for [0] handler org.elasticsearch.client.transport.TransportClientNodesService$SniffNodesSampler$1$1@7e8c9412, error [true], resetting
[INFO][2016-11-16 18:22:39] org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:862) Thread-1 Closing org.springframework.context.support.GenericApplicationContext@fbd1f6: startup date [Wed Nov 16 18:21:55 CST 2016]; root of context hierarchy
[INFO][2016-11-16 18:22:39] org.elasticsearch.node.internal.InternalNode.stop(InternalNode.java:272) Thread-1 [Fer-de-Lance] stopping ...
[INFO][2016-11-16 18:22:39] org.elasticsearch.node.internal.InternalNode.stop(InternalNode.java:310) Thread-1 [Fer-de-Lance] stopped
```

A problem uploading Kafka data into HBase

My environment is an HDP pseudo-distributed cluster. The project uses Flume to collect data and send it to various Kafka topics, and a JAR then pulls the data from Kafka and persists it into HBase. Because the data volume is fairly large, the regionserver dies after about half an hour of transfer every time. The project itself is definitely fine — I'm at the learning stage, and others can run it without errors. The problem is shown below:
```
java.io.FileNotFoundException: File /tmp/hbase-root/hbase/lib does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:431) ~[hadoop-common-2.7.3.jar!/:na]
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1517) ~[hadoop-common-2.7.3.jar!/:na]
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1557) ~[hadoop-common-2.7.3.jar!/:na]
    at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:674) ~[hadoop-common-2.7.3.jar!/:na]
    at org.apache.hadoop.hbase.util.DynamicClassLoader.loadNewJars(DynamicClassLoader.java:178) [hbase-common-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.util.DynamicClassLoader.loadClass(DynamicClassLoader.java:142) [hbase-common-1.1.2.jar!/:1.1.2]
    at java.lang.Class.forName0(Native Method) [na:1.8.0_161]
    at java.lang.Class.forName(Class.java:348) [na:1.8.0_161]
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1543) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.protobuf.ResponseConverter.getResults(ResponseConverter.java:120) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:134) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:54) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncProcess.java:708) [hbase-client-1.1.2.jar!/:1.1.2]
```
It suddenly starts looking for a file at /tmp/hbase-root/hbase/lib. My project never reads from that path; when I go there, the path simply doesn't exist at all. Then I checked the hbase logs, and hbase delivered a combo punch:
```
2020-03-21 19:29:49,789 ERROR [Thread-19] util.PolicyRefresher: PolicyRefresher(serviceName=Sandbox_hbase): failed to refresh policies. Will continue to use last known version of policies (6)
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
    at com.sun.jersey.api.client.Client.handle(Client.java:648)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:503)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:135)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:264)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:202)
    at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:171)
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
    ... 8 more
```
Then come the read timeouts:
```
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
    at com.sun.jersey.api.client.Client.handle(Client.java:648)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:503)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:135)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:264)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:202)
    at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:171)
```
And then the most baffling exception:
```
2020-03-21 19:33:36,252 ERROR [Thread-19] util.PolicyRefresher: PolicyRefresher(serviceName=Sandbox_hbase): failed to refresh policies. Will continue to use last known version of policies (6)
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
    at com.sun.jersey.api.client.Client.handle(Client.java:648)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:503)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:135)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:264)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:202)
    at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:171)
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
    ... 8 more
```
Hoping someone can explain.

Can flume-ng customize the marker for files that have been fully read?

When Flume finishes reading a file it adds a "read complete" marker to it: for example, python_20161027.log becomes python_20161027.log.COMPLETED after being consumed. This breaks the original file layout — txt files that could previously be read directly can no longer be read directly after Flume collects them, and it causes other problems as well.
While using Flume I also found that if an upstream program keeps writing a log file and a downstream Flume collects it in real time, Flume may report java.lang.IllegalStateException: File name has been re-used with different files. This happens because our upstream program writes the log via redirection: after Flume reads the log and renames it to python_20161027.log.COMPLETED, the upstream program checks whether python_20161027.log exists and recreates it if not; when Flume then reads python_20161027.log again, it tries to create python_20161027.log.COMPLETED again — but that file already exists in the directory, hence the error above.
I'd like to ask: is there a way to make Flume collect log files without changing the original file names, so the problem above can be avoided?
