Spark Streaming fails after running for a while: timeout: timed out

```
Traceback (most recent call last):
  File "/root/apps/a/ReceiveSleepData.py", line 130, in <module>
    ssc.awaitTermination()
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/pyspark.zip/pyspark/streaming/context.py", line 289, in awaitTermination
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o43.awaitTermination.
: org.apache.spark.SparkException: An exception was raised by Python:
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/pyspark.zip/pyspark/streaming/util.py", line 65, in call
    r = self.func(t, *rdds)
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/pyspark.zip/pyspark/streaming/dstream.py", line 159, in <lambda>
    func = lambda t, rdd: old_func(rdd)
  File "/root/apps/a/het.zip/het/action/SleepD.py", line 100, in <lambda>
    join_rdd.foreachRDD(lambda x:processRdd(x))
  File "/root/apps/a/het.zip/het/action/SleepD.py", line 41, in processRdd
    rdd.foreachPartition(lambda it: sendMattressStatus(it))
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 764, in foreachPartition
    self.mapPartitions(func).count()  # Force evaluation
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1004, in count
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 995, in sum
    return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 869, in fold
    vals = self.mapPartitions(func).collect()
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 772, in collect
    return list(_load_from_socket(port, self._jrdd_deserializer))
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 142, in _load_from_socket
    for item in serializer.load_stream(rf):
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 139, in load_stream
    yield self._read_with_length(stream)
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 156, in _read_with_length
    length = read_int(stream)
  File "/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 543, in read_int
    length = stream.read(4)
  File "/usr/local/python2.7/lib/python2.7/socket.py", line 380, in read
    data = self._sock.recv(left)
timeout: timed out

    at org.apache.spark.streaming.api.python.TransformFunction.callPythonTransformFunction(PythonDStream.scala:95)
    at org.apache.spark.streaming.api.python.TransformFunction.apply(PythonDStream.scala:78)
    at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:189)
    at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:189)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
```
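Reading the two stacks together: `foreachPartition` forces evaluation through `count()`/`collect()`, and the `socket.timeout` is raised while the driver reads the results back over PySpark's local result socket in `_load_from_socket` (that socket's timeout is set inside PySpark itself, not by a user-facing config). Because the exception escapes the `foreachRDD` callback, it propagates into the job scheduler and kills `awaitTermination()`. Below is a minimal defensive sketch, not the original `SleepD.py`: `processRdd` and `sendMattressStatus` are names taken from the traceback but their bodies here are hypothetical, and `queueStream` stands in for the real `join_rdd`. The point is that catching per-batch failures on the driver keeps one timed-out batch from terminating the whole StreamingContext.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="ReceiveSleepData")
ssc = StreamingContext(sc, 10)  # the real batch interval is not shown in the question

def sendMattressStatus(partition_iter):
    for record in partition_iter:
        pass  # hypothetical stand-in: push each record to the downstream service

def processRdd(rdd):
    try:
        # foreachPartition forces evaluation via count()/collect(); that
        # collect() is where the driver-side socket read timed out above.
        rdd.foreachPartition(sendMattressStatus)
    except Exception as e:
        # Drop the failed batch and keep streaming, instead of letting the
        # exception bubble up through foreachRDD into awaitTermination().
        print("batch failed, skipping: %s" % e)

# Stand-in source; the real application joins DStreams into join_rdd.
stream = ssc.queueStream([sc.parallelize(range(10)) for _ in range(3)])
stream.foreachRDD(processRdd)

ssc.start()
ssc.awaitTermination()
```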

redis.clients.jedis.util.Pool.getResource(Pool.java:50) at redis.clients.jedis.JedisPool.getResource(JedisPool.java:234) at com.game.redis.RedisManager.getString(RedisManager.java:219) at com.game.handler.CapitalHandler.hander(CapitalHandler.java:36) at com.game.netty.TcpServerHanler.channelRead(TcpServerHanler.java:107) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:745) "nioEventLoopGroup-4-9" #48 prio=10 os_prio=0 tid=0x00007fe464015000 nid=0x68c7 runnable [0x00007fe4e81af000] java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) - locked <0x0000000080036788> (a io.netty.channel.nio.SelectedSelectionKeySet) - locked <0x0000000080036778> (a java.util.Collections$UnmodifiableSet) - locked <0x0000000080036730> (a sun.nio.ch.EPollSelectorImpl) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:752) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:408) at 
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:745) 线程堆栈有些没看明白,排除法先是看看线程池、数据库线程池还没想好什么想法去看。
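One reading of the dump above, offered as a hypothesis rather than a confirmed diagnosis: the WAITING threads (nioEventLoopGroup-4-13, -12, -10) are Netty event-loop threads parked inside `JedisPool.getResource()` → `GenericObjectPool.borrowObject()`, which suggests the Redis connection pool is exhausted and the borrow has no wait limit, so those event loops hang indefinitely and every channel they serve goes quiet. A minimal sketch of a pool configuration that fails fast instead; all sizes, the host, and the port are placeholder assumptions:

```java
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class RedisPoolFactory {
    public static JedisPool newPool() {
        JedisPoolConfig cfg = new JedisPoolConfig();
        cfg.setMaxTotal(64);         // placeholder: must exceed peak concurrent borrows
        cfg.setMaxIdle(16);
        cfg.setMaxWaitMillis(2000);  // fail fast instead of parking the event loop forever
        cfg.setTestOnBorrow(true);   // discard dead connections rather than handing them out
        return new JedisPool(cfg, "127.0.0.1", 6379, 2000); // host/port/timeout are placeholders
    }
}
```

With a bounded `maxWaitMillis`, an exhausted pool makes `getResource()` throw after two seconds instead of parking the event loop, turning a silent hang into a visible error; moving blocking Redis calls off the event-loop threads entirely would be the more thorough fix.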
What exception does Java throw when domain name resolution times out?
On the server side, the resolver timeout in /etc/resolv.conf is set to `options timeout:1`. If a domain name lookup takes longer than 1 second, what kind of exception will show up on the Java server side?
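For reference, a hedged note: the JDK has no dedicated DNS-timeout exception. When the system resolver gives up, for whatever reason including the 1-second timeout above, a synchronous lookup such as `InetAddress.getByName` fails with `java.net.UnknownHostException`. A small probe to observe this; the host name is a placeholder:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsTimeoutProbe {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        try {
            InetAddress addr = InetAddress.getByName("nonexistent.example.invalid"); // placeholder name
            System.out.println("resolved: " + addr.getHostAddress());
        } catch (UnknownHostException e) {
            // A resolver timeout and a genuine NXDOMAIN look the same here;
            // only the elapsed time hints at which one happened.
            long ms = System.currentTimeMillis() - start;
            System.out.println("lookup failed after " + ms + " ms: " + e);
        }
    }
}
```

Name-based socket connects (`new Socket(host, port)`) surface the same `UnknownHostException`, so logging the elapsed time alongside the exception is one way to tell a slow resolver apart from an immediate negative answer.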
Project built with OpenResty (nginx + Lua) integrating unified authentication: requesting a token fails with xxxxx.com.cn could not be resolved (110: Operation timed out)
The project is built with OpenResty (nginx + Lua) and integrates unified authentication (OAuth2). When exchanging the code for an access token, the request fails with xxxxx.com.cn could not be resolved (110: Operation timed out). The same code runs successfully and returns correct results on CentOS and Ubuntu virtual machines, but fails once deployed to the CentOS server. The relevant nginx configuration:
```
resolver 114.114.114.114 8.8.8.8 8.8.4.4;
include mime.types;
server {
    listen 80;
    lua_ssl_verify_depth 10;
    lua_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    set $resp_body "";
    set $arg_accessToken "";
    lua_need_request_body on;
    include location_aies.conf;
}
```
The OAuth part of the configuration (on the virtual machines, localhost stands in for the IP):
```
oauth_callback_url = 'http://10.xx.1xx.xx:80/oauth/callback'
```
The code that exchanges the authorization code for an access_token:
```
local function request_access_token(code)
    --ngx.log(ngx.ERR, 'Requesting access token with code ' .. code)
    local httpc = http.new()
    httpc:set_timeout(7000)
    local payload = {
        client_id = client_id,
        grant_type = "authorization_code",
        client_secret = client_secret,
        code = code
    }
    local params = {
        headers = {
            ["Content-Type"] = "application/x-www-form-urlencoded",
        },
        method = "POST",
        body = ngx.encode_args(payload)
    }
    local url = access_token_uri
    local res, err = httpc:request_uri(url, params)
    if err then
        ngx.log(ngx.ERR, "Got error during access token request: " .. err)
        ngx.header['Content-type'] = 'text/html'
        ngx.status = ngx.HTTP_FORBIDDEN
        ngx.say("Got error during access token request: " .. err)
        return ngx.exit(ngx.HTTP_FORBIDDEN)
    else
```
This is the code that errors out: err comes back with the message: Got error during access token request: xxxx.com.cn could not be resolved (110: Operation timed out). I've spent a long time on this without solving it; any pointers would be greatly appreciated.
Netty 4.1: after running for a while, the listening port stops receiving requests
The project is written with Netty 4.1. After running for a while, the listening port stops receiving front-end requests; after roughly a minute it recovers on its own, and the longer the process runs, the more frequently this happens. Concurrency tests at launch looked quite good. This problem keeps recurring and I'm stuck; any help would be appreciated. Key code:
```
EventLoopGroup bossGroup = new NioEventLoopGroup();   // thread group that accepts client connections
EventLoopGroup workerGroup = new NioEventLoopGroup(); // thread group that handles business logic
final EventExecutorGroup e2 = new DefaultEventExecutorGroup(32);
try {
    ServerBootstrap b = new ServerBootstrap(); // bootstraps the Netty server
    // bind both groups to the ServerBootstrap; the channel runs in non-blocking mode
    b.group(bossGroup, workerGroup);
    b.channel(NioServerSocketChannel.class);
    b.childHandler(new ChannelInitializer<SocketChannel>() {
        int i = 0;
        @Override // called whenever a connection is accepted
        public void initChannel(SocketChannel ch) throws Exception {
            // the server sends HttpResponse, so HttpResponseEncoder encodes it into ByteBuf
            ch.pipeline().addLast(new HttpResponseEncoder());
            // the server receives HttpRequest, so HttpRequestDecoder decodes the ByteBuf
            ch.pipeline().addLast(new HttpRequestDecoder());
            // special handling for partially received HTTP messages
            ch.pipeline().addLast("aggregator", new HttpObjectAggregator(3200));
            // once a client connects, HttpServerInboundHandler handles it
            //ch.pipeline().addLast(new HttpServerInboundHandler());
            ch.pipeline().addLast(e2, new HttpServerInboundHandler());
        }
    });
    b.option(ChannelOption.SO_BACKLOG, 1024);
    b.childOption(ChannelOption.CONNECT_TIMEOUT_MILLIS, 30);
    b.childOption(ChannelOption.SO_KEEPALIVE, false);
    ChannelFuture f = b.bind(port).sync(); // like binding a socket: listen on the local port
    f.channel().closeFuture().sync();      // wait for shutdown
```
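A common cause of this symptom, offered as a hypothesis rather than a confirmed diagnosis: blocking calls running on event-loop (or shared executor) threads. While a handler blocks, every channel assigned to that thread goes quiet until the call returns, which looks exactly like the port "not receiving requests" for a minute. The code above already offloads to a `DefaultEventExecutorGroup`, but if anything on that path can block indefinitely, all 32 executor threads can wedge at once. A minimal sketch of handing blocking work to a dedicated pool and replying asynchronously; the class and method names are hypothetical:

```java
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical handler: the point is that the event loop only hands the work off.
public class NonBlockingHttpHandler extends SimpleChannelInboundHandler<FullHttpRequest> {
    private static final ExecutorService blockingPool = Executors.newFixedThreadPool(32);

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
        req.retain(); // keep the request alive while the worker thread uses it
        blockingPool.submit(() -> {
            try {
                FullHttpResponse resp = handleBlocking(req); // Redis/DB calls happen here
                ctx.writeAndFlush(resp).addListener(ChannelFutureListener.CLOSE);
            } finally {
                req.release();
            }
        });
    }

    private FullHttpResponse handleBlocking(FullHttpRequest req) {
        // placeholder for the real business logic
        return new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
    }
}
```

The design point is that `channelRead0` returns immediately; anything that can wait on a lock, a connection pool, or a remote service belongs on `blockingPool`, with the response written back through the channel when it completes.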
Python pymssql: the connection error is returned as bytes; how can it be displayed as text?
# Source code: the error value returned by the connection is bytes; how can it be shown as text?
```
import pymssql

try:
    conn = pymssql.connect(server=".\SQLEXPRESS", user="sa", password="1230", database="master", timeout=0, charset='GBK')
except Exception as e:
    print(type(e))
    print(dir(e))
    print(e)
```
![图片说明](https://img-ask.csdn.net/upload/202001/16/1579152426_237839.jpg)
Kafka consumption problem: the consumer times out after consuming for a while — asking for the cause
``` 80 DEBUG [YyjkStorageKafkaConsumerThread] YyjkStorageKafkaConsumerThread.value:{"tableName":"sw_segment","operateType":"INSERT","operateId":"4921.43.15759673490360004","indexType":"type","storageType":"elasticsearch","date":1575967330707,"tableData":{"trace_id":"4921.43.15759673490360005","endpoint_name":"/v4/default/registry/microservices/b0b6cb4d62e32b56c3bf8cb4bd2b7aed46cdffbe/instances/72abd3701b1811eaa6ea005056b6530b/heartbeat","latency":3,"end_time":1575967349039,"endpoint_id":189076,"service_instance_id":4921,"version":2,"start_time":1575967349036,"data_binary":"Cg0KC7kmK8SNreHuqv8bEsQBEP///////////wEYrLLl9+4tIK+y5ffuLTCUxQtABFABWANgO3oSCgtodHRwLm1ldGhvZBIDUFVUeogBCgN1cmwSgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzL2IwYjZjYjRkNjJlMzJiNTZjM2JmOGNiNGJkMmI3YWVkNDZjZGZmYmUvaW5zdGFuY2VzLzcyYWJkMzcwMWIxODExZWFhNmVhMDA1MDU2YjY1MzBiL2hlYXJ0YmVhdBgNILkm","service_id":13,"time_bucket":20191210164229,"is_error":0,"segment_id":"4921.43.15759673490360004"}} 16:40:05,480 DEBUG [YyjkStorageKafkaConsumerThread] YyjkStorageKafkaConsumerThread.FormatData,tableName:sw_segment|operateId:4921.43.15759673490360004|tableMap:{trace_id=4921.43.15759673490360005, endpoint_name=/v4/default/registry/microservices/b0b6cb4d62e32b56c3bf8cb4bd2b7aed46cdffbe/instances/72abd3701b1811eaa6ea005056b6530b/heartbeat, latency=3, end_time=1575967349039, endpoint_id=189076, service_instance_id=4921, version=2, start_time=1575967349036, data_binary=Cg0KC7kmK8SNreHuqv8bEsQBEP///////////wEYrLLl9+4tIK+y5ffuLTCUxQtABFABWANgO3oSCgtodHRwLm1ldGhvZBIDUFVUeogBCgN1cmwSgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzL2IwYjZjYjRkNjJlMzJiNTZjM2JmOGNiNGJkMmI3YWVkNDZjZGZmYmUvaW5zdGFuY2VzLzcyYWJkMzcwMWIxODExZWFhNmVhMDA1MDU2YjY1MzBiL2hlYXJ0YmVhdBgNILkm, service_id=13, time_bucket=20191210164229, is_error=0, segment_id=4921.43.15759673490360004} 16:40:05,480 DEBUG [ElasticSearchClient] Executing bulk [32] with 8 requests 16:40:05,481 DEBUG [MainClientExec] [exchange: 44] start execution 16:40:05,481 DEBUG [RequestAddCookies] CookieSpec selected: default 16:40:05,481 DEBUG [RequestAuthCache] Re-using cached 'basic' auth scheme for http://10.23.11.224:9200 16:40:05,481 DEBUG [RequestAuthCache] No credentials for preemptive authentication 16:40:05,481 DEBUG [InternalHttpAsyncClient] [exchange: 44] Request connection for {}->http://10.23.11.224:9200 16:40:05,481 DEBUG [PoolingNHttpClientConnectionManager] Connection request: [route: {}->http://10.23.11.224:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Set timeout 0 16:40:05,482 DEBUG [PoolingNHttpClientConnectionManager] Connection leased: [id: http-outgoing-0][route: {}->http://10.23.11.224:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:05,482 DEBUG [InternalHttpAsyncClient] [exchange: 44] Connection allocated: CPoolProxy{http-outgoing-0 [ACTIVE]} 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Set attribute http.nio.exchange-handler 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:r]: Event set [w] 16:40:05,482 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready 16:40:05,482 DEBUG [InternalHttpAsyncClient] Connection route already established 16:40:05,482 DEBUG [MainClientExec] [exchange: 44] Attempt 1 to execute request 16:40:05,482 DEBUG 
[MainClientExec] Target auth state: UNCHALLENGED 16:40:05,482 DEBUG [MainClientExec] Proxy auth state: UNCHALLENGED 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: Set timeout 30000 16:40:05,482 DEBUG [headers] http-outgoing-0 >> POST /_bulk?timeout=1m HTTP/1.1 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Content-Length: 6657 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Content-Type: application/json 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Host: 10.23.11.224:9200 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Connection: Keep-Alive 16:40:05,482 DEBUG [headers] http-outgoing-0 >> User-Agent: Apache-HttpAsyncClient/4.1.2 (Java/1.8.0_221) 16:40:05,483 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: Event set [w] 16:40:05,483 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Output ready 16:40:05,483 DEBUG [MainClientExec] [exchange: 44] produce content 16:40:05,483 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 6657; pos: 4096; completed: false] 16:40:05,483 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: 4293 bytes written 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "POST /_bulk?timeout=1m HTTP/1.1[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Content-Length: 6657[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Content-Type: application/json[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Host: 10.23.11.224:9200[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "User-Agent: Apache-HttpAsyncClient/4.1.2 (Java/1.8.0_221)[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.88.15759673496781142"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.88.15759673496781143","endpoint_name":"/authentication","latency":71,"end_time":1575967349749,"endpoint_id":150,"service_instance_id":11,"version":2,"start_time":1575967349678,"data_binary":"CgwKCgtY1oK15O6q/xsS3gEIARivt+X37i0gurfl9+4tMIkBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6kwEKDGRiLnN0YXRlbWVudBKCAXNlbGVjdCB0LmNoZWNrX3RpbWUsdC5leHRlbmRfaW5mbyx0LnVzZXJfbmFtZSx0LmxvZ2luX2NoYW5uZWwgZnJvbSBzc29fdXNlcl9zZXNzaW9uIHQgd2hlcmUgdC50aWNrZXQgPSA/IGFuZCB0LmxvZ291dF90aW1lIGlzIG51bGwSnAEIAhjGt+X37i0g2Lfl9+4tMJUBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6UgoMZGIuc3RhdGVtZW50EkJ1cGRhdGUgc3NvX3VzZXJfc2Vzc2lvbiB0IHNldCB0LmV4dGVuZF9pbmZvID0gPyB3aGVyZSB0LnRpY2tldCA9ID8SWAgDGNm35ffuLSDrt+X37i0wlAFABVABWAFgIXoOCgdkYi50eXBlEgNzcWx6GwoLZGIuaW5zdGFuY2USDHR5Z3pwdF9kenN3anoOCgxkYi5zdGF0ZW1lbnQSZhD///////////8BGK635ffuLSD1t+X37i0wlgFYA2ABejAKA3VybBIpaHR0cDovL25zc28uZHpzd2pqYy50YXguY24vYXV0aGVudGljYXRpb256EgoLaHR0cC5tZXRob2QSA0dFVBgMIAs=","service_id":12,"time_bucket":20191210164229,"is_error":0,"segment_id":"11.88.15759673496781142"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673457660020"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> 
"{"trace_id":"4921.36.15759673457660021","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":377,"end_time":1575967346143,"endpoint_id":162,"service_instance_id":4921,"version":2,"start_time":1575967345766,"data_binary":"Cg0KC7kmJPSg4dHuqv8bErYBEP///////////wEY5pjl9+4tIN+b5ffuLTCiAUADUAFYAWAheg4KB2RiLnR5cGUSA3NxbHohCgtkYi5pbnN0YW5jZRISdHlnenB0X2R6c3dqX3d3X2tmel0KDGRiLnN0YXRlbWVudBJNc2VsZWN0ICogZnJvbSBxel9kbWIgd2hlcmUgeHlieiA9ICdZJyBBTkQgeXhieiA9ICdZJyBhbmQgbG93ZXIoY29kZSk9bG93ZXIoPykYDSC5Jg==","service_id":13,"time_bucket":20191210164225,"is_error":0,"segment_id":"4921.36.15759673457660020"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673461450022"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"4921.36.15759673461450023","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":69,"end_time":1575967346214,"endpoint_id":162,"service_instance_id":4921,"version":2,"start_time":1575967346145,"data_binary":"Cg0KC7kmJKbKyNPuqv8bErYBEP///////////wEY4Zvl9+4tIKac5ffuLTCiAUADUAFYAWAheg4KB2RiLnR5cGUSA3NxbHohCgtkYi5pbnN0YW5jZRISdHlnenB0X2R6c3dqX3d3X2tmel0KDGRiLnN0YXRlbWVudBJNc2VsZWN0ICogZnJvbSBxel9kbWIgd2hlcmUgeHlieiA9ICdZJyBBTkQgeXhieiA9ICdZJyBhbmQgbG93ZXIoY29kZSk9bG93ZXIoPykYDSC5Jg==","service_id":13,"time_bucket":20191210164226,"is_error":0,"segment_id":"4921.36.15759673461450022"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673529984298"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.37.15759673529984299","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":14,"end_time":1575967353012,"endpoint_id":137,"service_instance_id":11,"version":2,"start_time":1575967352998,"data_binary":"CgwKCgslqsqf9O6q/xsSsAEQ////////////ARim0eX37i0gtNHl9+4tMIkBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6XQoMZGIuc3RhdGVtZW50Ek1zZWxlY3QgKiBmcm9tIHF6X2RtYiB3aGVyZSB4eWJ6ID0gJ1knIEFORCB5eGJ6ID0gJ1knIGFuZCBsb3dlcihjb2RlKT1sb3dlcig/KRgMIAs=","service_id":12,"time_bucket":20191210164232,"is_error":0,"segment_id":"11.37.15759673529984298"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673530124300"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.37.15759673530124301","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":12,"end_time":1575967353024,"endpoint_id":137,"service_instance_id":11,"version":2,"start_time":1575967353012,"data_binary":"CgwKCgsljJCo9O6q/xsSsAEQ////////////ARi00eX37i0gwNHl9+4tMIkBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6XQoMZGIuc3RhdGVtZW50Ek1zZWxlY3QgKiBmcm9tIHF6X2RtYiB3aGVyZSB4eWJ6ID0gJ1knIEFORCB5eGJ6ID0gJ1knIGFuZCBsb3dlcihjb2RlKT1sb3dlcig/KRgMIAs=","service_id":12,"time_bucket":20191210164233,"is_error":0,"segment_id":"11.37.15759673530124300"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539631576"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.47.15759673539631577","endpoint_name":"/v4/default/registry/mi" 16:40:05,483 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Output ready 16:40:05,483 DEBUG [MainClientExec] [exchange: 44] produce content 16:40:05,483 DEBUG [MainClientExec] [exchange: 44] Request completed 16:40:05,484 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 6657; pos: 6657; 
completed: true] 16:40:05,484 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: 2561 bytes written 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "croservices/62af8840312c4750370c3ea64fd68203bf02d518/instances/1563e3d11a5f11eabd58005056b67cc4/heartbeat","latency":2,"end_time":1575967353965,"endpoint_id":146,"service_instance_id":11,"version":2,"start_time":1575967353963,"data_binary":"CgwKCgsv2LPs+O6q/xsSwwEQ////////////ARjr2OX37i0g7djl9+4tMJIBQAdQAVgDYDt6EgoLaHR0cC5tZXRob2QSA1BVVHqIAQoDdXJsEoABL3Y0L2RlZmF1bHQvcmVnaXN0cnkvbWljcm9zZXJ2aWNlcy82MmFmODg0MDMxMmM0NzUwMzcwYzNlYTY0ZmQ2ODIwM2JmMDJkNTE4L2luc3RhbmNlcy8xNTYzZTNkMTFhNWYxMWVhYmQ1ODAwNTA1NmI2N2NjNC9oZWFydGJlYXQYDCAL","service_id":12,"time_bucket":20191210164233,"is_error":0,"segment_id":"11.47.15759673539631576"}[\n]" 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539651578"}}[\n]" 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.47.15759673539631577","endpoint_name":"#/v4/default/registry/microservices/62af8840312c4750370c3ea64fd68203bf02d518/instances/1563e3d11a5f11eabd58005056b67cc4/heartbeat","latency":0,"end_time":1575967353965,"endpoint_id":147,"service_instance_id":11,"version":2,"start_time":1575967353965,"data_binary":"CgwKCgsv+s/t+O6q/xsSwAMQ////////////ARjt2OX37i0g7djl9+4tKpoCCAESDAoKCy/Ys+z47qr/GyALOAtCgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzLzYyYWY4ODQwMzEyYzQ3NTAzNzBjM2VhNjRmZDY4MjAzYmYwMmQ1MTgvaW5zdGFuY2VzLzE1NjNlM2QxMWE1ZjExZWFiZDU4MDA1MDU2YjY3Y2M0L2hlYXJ0YmVhdFKAAS92NC9kZWZhdWx0L3JlZ2lzdHJ5L21pY3Jvc2VydmljZXMvNjJhZjg4NDAzMTJjNDc1MDM3MGMzZWE2NGZkNjgyMDNiZjAyZDUxOC9pbnN0YW5jZXMvMTU2M2UzZDExYTVmMTFlYWJkNTgwMDUwNTZiNjdjYzQvaGVhcnRiZWF0OoEBIy92NC9kZWZhdWx0L3JlZ2lzdHJ5L21pY3Jvc2VydmljZXMvNjJhZjg4NDAzMTJjNDc1MDM3MGMzZWE2NGZkNjgyMDNiZjAyZDUxOC9pbnN0YW5jZXMvMTU2M2UzZDExYTVmMTFlYWJkNTgwMDUwNTZiNjdjYzQvaGVhcnRiZWF0UAJYA2A7GAwgCw==","service_id":12,"time_bucket":20191210164233,"is_error":0,"segment_id":"11.47.15759673539651578"}[\n]" 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"4921.43.15759673490360004"}}[\n]" 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"4921.43.15759673490360005","endpoint_name":"/v4/default/registry/microservices/b0b6cb4d62e32b56c3bf8cb4bd2b7aed46cdffbe/instances/72abd3701b1811eaa6ea005056b6530b/heartbeat","latency":3,"end_time":1575967349039,"endpoint_id":189076,"service_instance_id":4921,"version":2,"start_time":1575967349036,"data_binary":"Cg0KC7kmK8SNreHuqv8bEsQBEP///////////wEYrLLl9+4tIK+y5ffuLTCUxQtABFABWANgO3oSCgtodHRwLm1ldGhvZBIDUFVUeogBCgN1cmwSgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzL2IwYjZjYjRkNjJlMzJiNTZjM2JmOGNiNGJkMmI3YWVkNDZjZGZmYmUvaW5zdGFuY2VzLzcyYWJkMzcwMWIxODExZWFhNmVhMDA1MDU2YjY1MzBiL2hlYXJ0YmVhdBgNILkm","service_id":13,"time_bucket":20191210164229,"is_error":0,"segment_id":"4921.43.15759673490360004"}[\n]" 16:40:05,484 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready 16:40:05,484 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:w]: Event cleared [w] 16:40:06,073 DEBUG [FetchSessionHandler] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Node 0 sent an incremental fetch response for session 520315326 with 0 response partition(s), 1 implied partition(s) 16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: 1460 bytes read 
16:40:07,365 DEBUG [wire] http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]" 16:40:07,365 DEBUG [wire] http-outgoing-0 << "content-type: application/json; charset=UTF-8[\r][\n]" 16:40:07,365 DEBUG [wire] http-outgoing-0 << "content-length: 3697[\r][\n]" 16:40:07,365 DEBUG [wire] http-outgoing-0 << "[\r][\n]" 16:40:07,365 DEBUG [wire] http-outgoing-0 << "{"took":1872,"errors":true,"items":[{"index":{"_index":"sw_segment","_type":"type","_id":"11.88.15759673496781142","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673457660020","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673461450022","_version":1,"result":"created","_shards"" 16:40:07,365 DEBUG [headers] http-outgoing-0 << HTTP/1.1 200 OK 16:40:07,365 DEBUG [headers] http-outgoing-0 << content-type: application/json; charset=UTF-8 16:40:07,365 DEBUG [headers] http-outgoing-0 << content-length: 3697 16:40:07,365 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE(1372)] Response received 16:40:07,365 DEBUG [MainClientExec] [exchange: 44] Response received HTTP/1.1 200 OK 16:40:07,365 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE(1372)] Input ready 16:40:07,365 DEBUG [MainClientExec] [exchange: 44] Consume content 16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: 2325 bytes read 16:40:07,365 DEBUG [wire] http-outgoing-0 << ":{"total":1,"successful":1,"failed":0},"_seq_no":24332,"_primary_term":1,"status":201}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673529984298","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673530124300","_version":1,"result":"created","_shards":{"total":1,"successful":1,"failed":0},"_seq_no":24333,"_primary_term":1,"status":201}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539631576","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of 
[32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539651578","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"4921.43.15759673490360004","_version":1,"result":"created","_shards":{"total":1,"successful":1,"failed":0},"_seq_no":24334,"_primary_term":1,"status":201}}]}" 16:40:07,365 DEBUG [InternalHttpAsyncClient] [exchange: 44] Connection can be kept alive indefinitely 16:40:07,365 DEBUG [MainClientExec] [exchange: 44] Response processed 16:40:07,365 DEBUG [InternalHttpAsyncClient] [exchange: 44] releasing connection 16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler 16:40:07,365 DEBUG [PoolingNHttpClientConnectionManager] Releasing connection: [id: http-outgoing-0][route: {}->http://10.23.11.224:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:07,365 DEBUG [PoolingNHttpClientConnectionManager] Connection [id: http-outgoing-0][route: {}->http://10.23.11.224:9200] can be kept alive indefinitely 16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Set timeout 0 16:40:07,365 DEBUG [PoolingNHttpClientConnectionManager] Connection released: [id: http-outgoing-0][route: {}->http://10.23.11.224:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:07,367 DEBUG [RestClient] request [POST http://10.23.11.224:9200/_bulk?timeout=1m] returned [HTTP/1.1 200 OK] 16:40:07,367 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 3697; pos: 3697; completed: true] 16:40:08,187 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 16:40:08,389 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 16:40:11,203 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 16:40:11,404 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 16:40:14,221 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 
10.23.11.235:9092 (id: 2147483647 rack: null) 16:41:20,774 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 16:41:23,589 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 16:41:23,790 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response actCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 16:41:38,665 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 16:41:38,867 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 17:13:13,402 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 17:13:13,605 DEBUG [NetworkClient] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Node -1 disconnected. 17:13:13,606 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 17:13:13,707 DEBUG [NetworkClient] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending metadata request (type=MetadataRequest, topics=compute_traceStorage, allowAutoCreate=true) to node 10.23.11.235:9092 (id: 0 rack: null) 17:13:13,907 DEBUG [Metadata] Updating last seen epoch from 0 to 0 for partition compute_traceStorage-0 17:13:13,907 DEBUG [Metadata] Updated cluster metadata version 4 to MetadataCache{cluster=Cluster(id = yQ_sRlMlSui8hlVtaPl4wg, nodes = [10.23.11.235:9092 (id: 0 rack: null)], partitions = [Partition(topic = compute_traceStorage, partition = 0, leader = 0, replicas = [0], isr = [0], offlineReplicas = [])], controller = 10.23.11.235:9092 (id: 0 rack: null))} 17:13:16,420 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 17:13:16, 17:15:02,170 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 17:15:04,683 WARN [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. 17:15:04,683 INFO [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Member consumer-1-f8b4d0da-f83c-4849-8cfd-74e748aad3c7 sending LeaveGroup request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 17:15:04,683 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Disabling heartbeat thread ```
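The log itself points at two interacting problems (this is a reading of the posted log, not new information): Elasticsearch is rejecting bulk writes with `es_rejected_execution_exception` (write thread pool full, `queue capacity = 200`), and at 17:15:04 the consumer leaves the group because the gap between `poll()` calls exceeded `max.poll.interval.ms`, exactly as the WARN message explains. A sketch of the consumer-side settings that same warning suggests, with placeholder values; the topic, group id, and broker address are taken from the log:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TransferConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.23.11.235:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "jkpt-transfer-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Placeholder values: give each poll() cycle more headroom and less work,
        // so slow Elasticsearch bulk writes cannot starve the poll loop.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000"); // 10 minutes
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "50");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("compute_traceStorage"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> { /* index into Elasticsearch here */ });
            }
        }
    }
}
```

Raising `max.poll.interval.ms` only buys headroom; the 429 rejections in the bulk response say the real bottleneck is the Elasticsearch write queue, so batching more conservatively or retrying rejected items with backoff is what actually stops the poll loop from stalling.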
After switching systrace.py to the 3.4 version, the generated HTML is empty
```
D:\Program Files (x86)\SDK\SDK\platform-tools\systrace>python systrace.py -e 66J0118628001147 -t 5
CRITICAL:root:Timed out. Dumping threads.
StartAgentTracing timed out.
Unable to start controller tracing agent.
Starting tracing (5 seconds)
Tracing completed. Collecting output...
CRITICAL:root:Timed out. Dumping threads.
GetResults timed out.
Warning: Timeout when getting results from <systrace.tracing_controller.TracingControllerAgent objec
Outputting Systrace results...
Tracing complete, writing results
Wrote trace HTML file: file://D:\Program Files (x86)\SDK\SDK\platform-tools\systrace\trace.html
```
StartAgentTracing timed out. Warning: Timeout when getting results from <systrace.tracing_controller.TracingControllerAgent object at 0x031578F0>. I can't pin down the cause...
Redis: after running for a while, clients can no longer connect
After Redis has been running for a day, redis-cli and Jedis connections to the Redis service fail with connection timed out or connection refused. Restarting the Redis service makes it usable again. Could someone take a look at what the problem might be? The configuration file is as follows:
```
port 6379
tcp-backlog 511
timeout 60000
tcp-keepalive 0
loglevel debug
logfile "D:\\redis-2.8.19\\redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass foobared
maxclients 10000
maxheap 2gb
maxmemory 2gb
maxmemory-policy volatile-lru
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128

################################ LATENCY MONITOR ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0

############################# Event notification ##############################
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""

############################### ADVANCED CONFIG ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis server but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# include /path/to/local.conf
# include /path/to/other.conf
```
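One pattern consistent with "works after restart, dies after a day" (an assumption to verify, not a confirmed cause for this config): clients that borrow connections and never return or close them, so `connected_clients` creeps up toward `maxclients 10000` over the day until new connections are refused; running `INFO clients` on the server would confirm or rule this out. On the Jedis side, try-with-resources makes returning the connection automatic:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class SafeRedisGet {
    private final JedisPool pool;

    public SafeRedisGet(JedisPool pool) {
        this.pool = pool;
    }

    public String get(String key) {
        // Jedis implements Closeable: try-with-resources always returns the
        // connection to the pool, even when jedis.get throws, so connections
        // cannot leak and slowly exhaust the server's client limit.
        try (Jedis jedis = pool.getResource()) {
            return jedis.get(key);
        }
    }
}
```

Since Jedis 2.x, `close()` on a pooled `Jedis` instance returns the connection to its pool rather than tearing it down, so this shape is both safe and cheap.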
Spring Boot 2: session timeout takes effect when first set, but later changes do not
In IDEA, I set the timeout to 100 minutes as shown below; after a restart it takes effect. Then I changed it to PT1M (1 minute) and the change has no effect: after clean, recompile, and restart it is still 100 minutes. Why? The Spring Boot version is 2.1.8.RELEASE.
```yml
server:
  port: 8080
  servlet:
    session:
      timeout: PT100M
```
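Two things worth checking, offered as suggestions rather than a definitive answer: a servlet session's timeout is fixed when the session is created, so a session created under the 100-minute setting keeps that value until it is invalidated, and testing with the old JSESSIONID cookie will always show 100 minutes; and a small probe endpoint can confirm what the running server actually applied. A minimal sketch, where the endpoint path is made up:

```java
import javax.servlet.http.HttpSession;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SessionTimeoutController {

    // Hypothetical probe endpoint: reports the timeout applied to *this* session.
    @GetMapping("/session-timeout")
    public String timeout(HttpSession session) {
        return "maxInactiveInterval = " + session.getMaxInactiveInterval() + " s";
    }
}
```

If a fresh browser session (or an incognito window with no prior cookie) still reports 6000 seconds, then the old configuration really is on the classpath, for example a stale packaged jar/war being launched instead of the freshly recompiled one.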
WinForms: uploading an Excel file via multipart/form-data fails with an error
```
static void FileUpload(string m_fileNamePath, string wenjianmin)
{
    string Boundary = "wojuedezhgexiangmushigekeng";
    // build the request parameters
    Dictionary<string, string> PostInfo = new Dictionary<string, string>();
    //PostInfo.Add("sequenceNo ", "A0002");
    //PostInfo.Add("type", "application/vnd.ms-excel");
    //PostInfo.Add("file", "");
    //PostInfo.Add("filename", wenjianmin);
    // build the POST request body
    StringBuilder PostContent = new StringBuilder("--" + Boundary);
    byte[] ContentEnd = Encoding.UTF8.GetBytes("--" + Boundary + "--\r\n"); // end of the body, used later
    // ordinary form fields
    foreach (KeyValuePair<string, string> item in PostInfo)
    {
        PostContent.Append("\r\n")
            .Append("Content-Disposition: form-data; name=\"")
            .Append(item.Key + "\"").Append("\r\n")
            .Append("\r\n").Append(item.Value).Append("\r\n")
            .Append("--").Append(Boundary);
    }
    // convert to a byte array, used later
    byte[] PostContentByte = Encoding.UTF8.GetBytes(PostContent.ToString());
    // file part
    byte[] UpdateFile = File2Bytes(m_fileNamePath); // file converted to bytes
    StringBuilder FileContent = new StringBuilder();
    FileContent.Append("\r\n")
        .Append("Content-Disposition:form-data; name=\"")
        .Append("flie" + "\"; ")
        .Append("filename=\"")
        .Append(wenjianmin + "\"")
        .Append("\r\n")
        .Append("Content-Type: application/vnd.ms-excel")
        .Append("\r\n")
        .Append("\r\n");
    byte[] FileContentByte = Encoding.UTF8.GetBytes(FileContent.ToString());
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(m_address);
    request.Method = "POST";
    request.Timeout = 100000;
    // the boundary is fixed here
    request.Headers.Add("Cookie", "JSESSIONID=19FCE6CF428E78732830248719E3836A");
    //request.Headers.Add("Accept-Encoding", " gzip, deflate");
    request.ContentType = "multipart/form-data;boundary=" + Boundary;
    //request.Cookie = "";
    // request stream
    Stream myRequestStream = request.GetRequestStream();
    myRequestStream.Write(PostContentByte, 0, PostContentByte.Length); // write the form fields
    myRequestStream.Write(FileContentByte, 0, FileContentByte.Length); // write the file part headers
    myRequestStream.Write(UpdateFile, 0, UpdateFile.Length);           // write the file bytes
    myRequestStream.Write(ContentEnd, 0, ContentEnd.Length);           // write the terminator
    HttpWebResponse res;
    try
    {
        res = (HttpWebResponse)request.GetResponse();
    }
    catch (WebException ex)
    {
        res = (HttpWebResponse)ex.Response;
    }
    //StreamReader sr = new StreamReader(res.GetResponseStream(), strEncode);
    //strHtml = sr.ReadToEnd();
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    // read the response
    Stream myResponseStream = response.GetResponseStream();
    StreamReader myStreamReader = new StreamReader(myResponseStream, Encoding.GetEncoding("utf-8"));
    string retString = myStreamReader.ReadToEnd();
    myRequestStream.Close();
    myStreamReader.Close();
    myResponseStream.Close();
}
```
Below is a packet-capture screenshot: ![图片说明](https://img-ask.csdn.net/upload/202001/09/1578548657_696695.png)
The returned error message: {"timestamp":"2020-01-09 13:40:35","status":500,"error":"Internal Server Error","message":"Failed to parse multipart servlet request; nested exception is org.apache.commons.fileupload.FileUploadBase$IOFileUploadException: Processing of multipart/form-data request failed. Stream ended unexpectedly","path":"/lab/calibrator/uploadReport"}
Tomcat app loses its SQL Server 2008 connection after running for a while; restarting Tomcat fixes it
The exact error:
```
org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object
### The error may exist in /com/dayainfo/ssp/sqlMapper/sqlMapper-basicDataExpertSearch.xml
### The error may involve .selectBasicDataExpertSearchByKey
### The error occurred while executing a query
### Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object] with root cause
java.util.NoSuchElementException: Timeout waiting for idle object
```
Tomcat version: Tomcat 7. SQL Server version: SQL Server 2008 R2. Tomcat and SQL Server are on different servers.
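For context, hedged: `java.util.NoSuchElementException: Timeout waiting for idle object` comes from commons-dbcp's pool and means every connection was checked out and none came back within `maxWait`. The usual cause when a long-running Tomcat degrades over hours is a connection leak, a code path that never closes its connection, which fits "fine after restart, dead later". A sketch of a DBCP 1.x `BasicDataSource` configured to expose such leaks; all values and the URL are placeholders:

```java
import org.apache.commons.dbcp.BasicDataSource;

public class PooledDataSourceFactory {
    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        ds.setUrl("jdbc:sqlserver://db-host:1433;databaseName=mydb"); // placeholder URL
        ds.setUsername("user");
        ds.setPassword("secret");
        ds.setMaxActive(50);               // placeholder pool size
        ds.setMaxWait(10000);              // fail after 10 s instead of hanging requests
        ds.setValidationQuery("SELECT 1"); // drop connections the server has silently closed
        ds.setTestOnBorrow(true);
        ds.setRemoveAbandoned(true);       // reclaim connections the app forgot to close
        ds.setRemoveAbandonedTimeout(300);
        ds.setLogAbandoned(true);          // log the stack trace that leaked each connection
        return ds;
    }
}
```

`logAbandoned` makes DBCP print the stack trace that borrowed each reclaimed connection, which usually points straight at the DAO that forgot to close it.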
JS: after saving, line breaks aren't rendered; the literal text <br> is displayed instead
![图片说明](https://img-ask.csdn.net/upload/202001/12/1578817782_338780.png)
After saving, the defect description does render its line breaks, but the processing result does not: it displays the literal <br>.
```
function getAdd() {
    var w = $(window).width();
    var h = $(window).height();
    $('#dd').dialog({
        title: '添加缺陷记录',
        width: w * .6,
        height: h - 136,
        closed: false,
        cache: false,
        href: 'dia/log/defect-record/get-add.html?module_id=' + module_id + '&station_info=' + station_info,
        modal: true,
        buttons: [{
            text: '保存',
            iconCls: "easy-icon-save",
            handler: function () {
                var shift = $("#reportShift").combobox("getValue");
                if (shift == -1 || shift == "") {
                    $.messager.alert('操作', '请重新核对缺陷上报日期以及所选班次');
                    return false;
                }
                checkStation("station"); // check whether a station was selected or entered
                var defectLevel = $('#defectLevel').combobox('getValue');
                if (defectLevel == 0 || defectLevel == 1) {
                    var planTime = $('#planProTime').datebox('getValue');
                    if (planTime == null || planTime == '') {
                        $.messager.alert("修改", "计划处理时间不能为空!", "error");
                        return;
                    }
                }
                var data = $("#defect-add-form").serialize();
                // var content = $('#content').val().replace(/\n/g,"<br/>");
                $.ajax({
                    url: "dia/log/defect-record/insert.do?module_id=" + module_id + '&proResult=' + $('#proResult').val().replace(/\r\n/g, '<br/>').replace(/\n/g, '<br/>').replace(/\s/g, ' '),
                    type: "post",
                    dataType: "json",
                    data: data,
                    success: function (request) {
                        if (request.success) {
                            query();
                        }
                        $.messager.show({
                            title: '操作提示',
                            msg: request.msg,
                            timeout: 2000,
                            sshowType: 'slide'
                        });
                    }
                });
                $("#dd").dialog({ closed: true });
            }
        }, {
            text: '关闭',
            iconCls: "easy-icon-cancel",
            handler: function () {
                $("#dd").dialog({ closed: true });
            }
        }],
        // after the dialog finishes loading, populate the station field
        onLoad: function () {
            var indexDevId = $("#indexDevId").val();
            $('#station').combobox('setValue', indexDevId);
        }
    });
}
```
This is the JSP code:
```
<tr>
    <td class="td-inputtitle" style="text-align: center">处理结果</td>
    <td class="td-input" colspan="5">
        <c:forEach var="item" items="${historyProcess }">
            <c:if test="${item.proResult!=null && item.proResult!='' }">
                <div class="history_process">
                    <div class="user_info">${item.userName }<br/>${item.time }</div>
                    <div class="content">${fn:replace(item.proResult,vEnter,'<br>') }</div>
                    <div class="clearfix"></div>
                </div>
            </c:if>
        </c:forEach>
        <textarea type="text" rows="6" name="proResult" id="proResult" style="width:95%;">
            ${currProcess.proResult }
        </textarea>
    </td>
</tr>
```
Thinking in Java, dining philosophers: why does giving the philosophers more thinking time mitigate deadlock?
I'm quite puzzled: why does adding pause() inside Philosopher's run() make deadlock happen more slowly? Please advise. In this example, deadlock arises when every philosopher has picked up the right chopstick and is stuck waiting for the left one. But what does that have to do with pause()? Note that the random number generator is given a seed, so every run produces a fixed, identical sequence of values. Even with pause(), after waiting even 1000 seconds, all the philosophers would still move from the waiting state back to runnable at the same time and then be driven by the scheduler's time slices, so what necessary connection does the length of the wait have with anything??? (PS: My understanding is that removing the seed would give the philosophers different thinking times, so they enter the "grab chopsticks" phase at different moments, **lowering the chance of simultaneous requests for the shared resource**, which would indeed reduce the probability of deadlock; the book's explanation makes no sense to me.) Please set aside the four necessary conditions for deadlock for now and discuss this example directly.
```
/**
 * Dining philosophers: chopstick
 */
public class Chopstick {
    private boolean taken = false;

    public synchronized void take(int id, String direction) throws InterruptedException {
        while (taken) {
            System.out.println("Philosopher " + id + " waiting " + direction + " chopstick");
            this.wait();
        }
        // the chopstick is now held by a new philosopher
        taken = true;
        System.out.println("Philosopher " + id + " grabbed " + direction + " chopstick");
    }

    public synchronized void drop() {
        taken = false;
        this.notifyAll();
    }
}
```
```
/**
 * Dining philosophers: philosopher
 */
public class Philosopher implements Runnable {
    private Chopstick left;
    private Chopstick right;
    private final int id;
    private final int ponderFactor;
    private Random rand = new Random(47);

    private void pause() throws InterruptedException {
        if (ponderFactor == 0) {
            return;
        }
        TimeUnit.MILLISECONDS.sleep(rand.nextInt(ponderFactor * 250));
    }

    public Philosopher(Chopstick left, Chopstick right, int ident, int ponder) {
        this.left = left;
        this.right = right;
        this.id = ident;
        this.ponderFactor = ponder;
    }

    @Override
    public void run() {
        try {
            while (!Thread.interrupted()) {
                // why does a longer wait here make deadlock less likely?
                pause();
                //System.out.println(this + " beginning eating " + LocalDateTime.now().getNano());
                // the philosopher gets hungry
                System.out.println(this + " " + "grabbing right");
                right.take(id, "right");
                System.out.println(this + " " + "grabbing left");
                left.take(id, "left");
                System.out.println(this + " " + "eating");
                pause();
                right.drop();
                left.drop();
            }
        } catch (InterruptedException e) {
            System.out.println(this + " " + "exiting via interrupt");
        }
    }

    public String toString() {
        return "Philosopher " + id;
    }
}
```
```
/**
 * Dining philosophers: demonstrating deadlock
 */
public class DeadlockingDiningPhilosopher {
    public static void main(String[] args) throws InterruptedException, IOException {
        int ponder = 5;
        if (args.length > 0) {
            ponder = Integer.parseInt(args[0]);
        }
        int size = 5;
        if (args.length > 1) {
            size = Integer.parseInt(args[1]);
        }
        ExecutorService exec = Executors.newCachedThreadPool();
        Chopstick[] sticks = new Chopstick[size];
        for (int i = 0; i < size; i++) {
            sticks[i] = new Chopstick();
        }
        for (int i = 0; i < size; i++) {
            exec.execute(new Philosopher(sticks[i], sticks[(i + 1) % size], i, ponder));
        }
        if (args.length == 3 && args[2].equals("timeout")) {
            TimeUnit.SECONDS.sleep(5);
        } else {
            System.out.println("Press 'Enter' to quit");
            System.in.read();
        }
        exec.shutdownNow();
    }
}
```
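For contrast, the structural fix used later in the same chapter (Thinking in Java's `FixedDiningPhilosophers`, reproduced here from memory, so treat the details as a sketch) removes the deadlock outright instead of making it rarer: the last philosopher receives its chopsticks in swapped order, so the wait cycle can never close, no matter what `pause()` does. Only the creation loop in `main()` changes:

```java
// Variation on the main() loop above: every philosopher except the last is
// constructed as before, but the last one gets its chopsticks swapped, so all
// threads end up acquiring their two chopsticks in a consistent global order.
for (int i = 0; i < size; i++) {
    if (i < size - 1) {
        exec.execute(new Philosopher(sticks[i], sticks[i + 1], i, ponder)); // left = i, right = i + 1
    } else {
        exec.execute(new Philosopher(sticks[0], sticks[i], i, ponder));     // swapped for the last seat
    }
}
```

With this ordering there is always at least one philosopher who cannot join a circular wait, so the "everyone holds one chopstick and waits forever" state is unreachable; `pause()` then only affects throughput. That is the difference between lowering the probability of the deadly window, which is all the seeded `pause()` can do, and eliminating it.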