How do I switch gRPC's data serialization from Protocol Buffers to JSON?

I've been learning gRPC recently. The gRPC overview docs say the data serialization can be swapped from Protocol Buffers to JSON. Does anyone know how to make that switch?

2 answers

http://www.jianshu.com/p/774b38306c30

Basically, have your client use JSON directly instead of protobuf.

Nero__A: gRPC on the server and JSON-RPC on the client?
Replied over 3 years ago
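For what it's worth, grpc-python lets a client skip the generated protobuf stubs entirely: `Channel.unary_unary` accepts custom serializer/deserializer callables, so JSON bytes can go on the wire instead of protobuf. A minimal client-side sketch, assuming a hypothetical `/helloworld.Greeter/SayHello` method whose server registers matching JSON handlers:

```python
import json

def json_serializer(msg):
    # Replaces protobuf's SerializeToString: dict -> JSON bytes on the wire.
    return json.dumps(msg).encode("utf-8")

def json_deserializer(data):
    # Replaces protobuf's FromString: JSON bytes -> dict.
    return json.loads(data.decode("utf-8"))

def call_say_hello(name):
    import grpc  # requires grpcio; imported here so the helpers above stay standalone
    channel = grpc.insecure_channel("localhost:50051")
    say_hello = channel.unary_unary(
        "/helloworld.Greeter/SayHello",   # hypothetical method path
        request_serializer=json_serializer,
        response_deserializer=json_deserializer,
    )
    return say_hello({"name": name})
```

The server must install the same pair of callables when registering its handlers; otherwise it will try to parse the JSON payload as protobuf and fail.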

Try a different one, or wait a while and see.

Nero__A: What do you mean?
Replied over 3 years ago
Other related questions
grpc helloworld program hangs?
After installing gRPC by following a tutorial, running helloworld gives no output from greeter_server; it appears to hang at the "BuildAndStart" step. Wiping gRPC and reinstalling didn't help, and I can't figure out why. The tutorial: https://blog.csdn.net/libaineu2004/article/details/80734547 I'm sure I followed it exactly, except that git couldn't download the third-party libraries, so I copied them from GitHub one by one by hand. ![图片说明](https://img-ask.csdn.net/upload/202001/16/1579136058_120097.png) The environment is CentOS 7 in a virtual machine.
How do I invoke a Spring bean's methods over gRPC?
I've been learning gRPC recently. Could anyone explain how to integrate gRPC into a Spring environment so that a Spring bean can be called remotely over gRPC?
How can Prometheus monitor a grpc-java server?
How do I use Prometheus to monitor a gRPC server's IP and related information? The server is written in Java.
How do I do load balancing with grpc-java?
Does anyone have a demo of load balancing with grpc-java? Please help out a beginner!
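Not a grpc-java demo, but gRPC's built-in client-side `round_robin` policy works the same way across implementations (grpc-java selects it with `ManagedChannelBuilder.defaultLoadBalancingPolicy("round_robin")`). A grpc-python sketch of the same idea — the hosts, port, and target syntax below are illustrative assumptions:

```python
# Sketch: select gRPC's built-in round_robin policy on the client and hand the
# resolver several backend addresses. Hosts and port are made up.
ROUND_ROBIN = [("grpc.lb_policy_name", "round_robin")]

def ipv4_target(hosts, port):
    # gRPC's ipv4 resolver scheme accepts a comma-separated address list.
    return "ipv4:" + ",".join("{}:{}".format(h, port) for h in hosts)

def open_balanced_channel(hosts, port):
    import grpc  # requires grpcio; imported here so ipv4_target stays standalone
    return grpc.insecure_channel(ipv4_target(hosts, port), options=ROUND_ROBIN)
```

Round-robin only spreads calls across addresses the resolver returns; for dynamic backends you still need a resolver (e.g. DNS) that reports them all.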
Urgent: need help with gRPC secure-connection code in Python (I have a PHP version to port)
Urgent help needed: I'm writing a gRPC client in Python to fetch stock data from a data provider. The provider only gave me PHP reference code, but I have to write it in Python. Everything else works; only the secure-connection part has had me stuck, and after two days of searching and trying several APIs I still have no solution. What I have: a URL, a port, a .pem certificate, and a username and password. The rough idea should be to attach the extra username/password information when creating the channel? Please help me figure out how to write this in Python — many thanks! The PHP code: ![图片说明](https://img-ask.csdn.net/upload/201708/15/1502780176_733195.jpg) And my Python code, which doesn't work:
```
from __future__ import print_function
import grpc
import historicalQuote_pb2
import historicalQuote_pb2_grpc
import common_pb2
import common_pb2_grpc
import marketData_pb2
import marketData_pb2_grpc
import os
from google import auth as google_auth
from google.auth.transport import grpc as google_auth_transport_grpc
from google.auth.transport import requests as google_auth_transport_requests

def run():
    symbol = common_pb2.Symbol()
    symbol.code = "BABA"
    symbol.market = common_pb2.US
    symbol.type = common_pb2.Stock
    market_request = marketData_pb2.MarketDataRequest()
    market_request.symbol.code = symbol.code
    market_request.symbol.market = symbol.market
    market_request.symbol.type = symbol.type
    market_request.language = common_pb2.zhHans
    print(market_request)
    try:
        path = os.path.abspath('.')
        pemPath = path + '/open_test_cert.pem'
        transport_creds = grpc.ssl_channel_credentials(open(pemPath).read())
        options = []
        update_metadata = {}
        update_metadata2 = {}
        update_metadata['UserName'] = 'xxxx'
        update_metadata['Password'] = 'yyyy'
        update_metadata2['grpc.ssl_target_name_override'] = 'open.test.yintongzhibo.com'
        options.append(update_metadata)
        # options.append(update_metadata2)
        channel = grpc.secure_channel("open.test.yintongzhibo.com:9002", transport_creds, options)
        # credentials, project = google_auth.default(scopes=(scope1, scope2, scope3))
        # credentials, project = google_auth.default()
        # http_request = google_auth_transport_requests.Request()
        # metadata_plugin = AuthMetadataPlugin(credentials, http_request)
        # google_auth_credentials = grpc.metadata_call_credentials(metadata_plugin)
        # ssl_credentials = grpc.ssl_channel_credentials(open(pemPath).read())
        # composite_credentials = grpc.composite_channel_credentials(ssl_credentials, google_auth_credentials)
        # channel = grpc.secure_channel("open.test.yintongzhibo.com:9002", composite_credentials)
        # channel = google_auth_transport_grpc.secure_authorized_channel(credentials, request, 'open.test.yintongzhibo.com:9002')
        stub = historicalQuote_pb2_grpc.HistoricalQuoteServiceStub(channel)
        response = stub.getTodayM1Quotes(symbol)
        # stub = marketData_pb2_grpc.MarketDataServiceStub(channel)
        # response = stub.getMarketData(market_request)
        print(response.message)
    except Exception as e:
        print(e)

if __name__ == '__main__':
    run()
```
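A hedged sketch of the likely missing piece: in grpc-python, per-call credentials such as username/password metadata are attached with `grpc.metadata_call_credentials` and combined with the TLS credentials via `grpc.composite_channel_credentials` — not through the channel `options` list, which only takes `(key, value)` tuples of channel arguments. Metadata keys must be lowercase; the exact key names the provider expects are an assumption here:

```python
def auth_metadata(username, password):
    # gRPC metadata keys must be lowercase ASCII; the key names here are guesses
    # at what the provider's server checks.
    return (("username", username), ("password", password))

def open_secure_channel(target, pem_path, username, password):
    import grpc  # requires grpcio; imported here so auth_metadata stays standalone
    ssl_creds = grpc.ssl_channel_credentials(open(pem_path, "rb").read())
    # A metadata plugin is any callable taking (context, callback):
    call_creds = grpc.metadata_call_credentials(
        lambda ctx, callback: callback(auth_metadata(username, password), None))
    creds = grpc.composite_channel_credentials(ssl_creds, call_creds)
    # Channel options are (key, value) tuples, not dicts:
    options = [("grpc.ssl_target_name_override", "open.test.yintongzhibo.com")]
    return grpc.secure_channel(target, creds, options)
```

Note that gRPC only sends call credentials over a secure channel, which matches this setup.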
Problem converting a Keras model to a TPU model on Colab
Using a TPU to accelerate training, converting the Keras model to a TPU model raises the error shown here: ![图片说明](https://img-ask.csdn.net/upload/202001/14/1578998736_238721.png) The key code follows. Imports:
```
%tensorflow_version 1.x
import json
import os
import numpy as np
import tensorflow as tf
from tensorflow.python.keras.applications import resnet
from tensorflow.python.keras import callbacks
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
```
The TPU-conversion code:
```
# This address identifies the TPU we'll use when configuring TensorFlow.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
tf.logging.set_verbosity(tf.logging.INFO)
self.model = tf.contrib.tpu.keras_to_tpu_model(
    self.model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
self.model = resnet50.ResNet50(weights=None, input_shape=dataset.input_shape, classes=num_classes)
```
C++ gRPC async server dumps core under multiple threads
I have an asynchronous (non-streaming) C++ gRPC server. With multiple threads it runs for a while and then dumps core, printing:
```
E0717 18:54:02.635424482 14110 server_cc.cc:677] Fatal: grpc_call_start_batch returned 8
E0717 18:54:02.635514690 14110 server_cc.cc:678] ops[0]: SEND_MESSAGE ptr=0x7f3c0815cb60
E0717 18:54:02.635563281 14110 server_cc.cc:678] ops[1]: SEND_STATUS_FROM_SERVER status=0
```
Single-threaded there is no problem — why is that? Googling suggests there must not be multiple writes in flight at once, but looking at the grpc repo I don't see the implementation doing anything special for output either. Is there a correct multithreaded implementation of a simple gRPC async server I could use as a reference?
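Return code 8 from `grpc_call_start_batch` appears to correspond to `GRPC_CALL_ERROR_TOO_MANY_OPERATIONS` in grpc_types.h: a call may have at most one write batch outstanding, so when several threads send on the same call, outgoing messages must be queued and the next write started only after the previous completion tag comes back. The per-call write-queue pattern, sketched here in Python for brevity (not real gRPC API — `start_write` stands in for whatever begins an async write):

```python
import threading
from collections import deque

class WriteQueue:
    """One-outstanding-write-per-call discipline: queue messages and start the
    next write only when the previous one's completion is reported."""

    def __init__(self, start_write):
        self._start_write = start_write   # begins an async write; completion -> on_write_done
        self._lock = threading.Lock()
        self._pending = deque()
        self._writing = False

    def send(self, msg):
        with self._lock:
            if self._writing:             # a write is in flight: just queue
                self._pending.append(msg)
                return
            self._writing = True
        self._start_write(msg)            # start outside the lock

    def on_write_done(self):
        with self._lock:
            if not self._pending:
                self._writing = False
                return
            msg = self._pending.popleft()
        self._start_write(msg)
```

In the C++ async API the same idea applies per call: `Write`/`Finish` go through such a queue, and `on_write_done` is driven by the completion-queue tag for the previous write.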
Chaincode instantiation error: chaincode registration failed
The environment is fabric 1.4.2 with the Raft consensus mechanism: 3 Raft orderers + 1 peer, and every image starts normally. Instantiating the chaincode fails with the errors below — could someone take a look and suggest a fix? Thanks!
# 1. Instantiate the chaincode
```
peer chaincode instantiate -o orderer0.koko.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/koko.com/orderers/orderer0.koko.com/msp/tlscacerts/tlsca.koko.com-cert.pem -C kokochannel -n mycc -l golang -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P 'AND("Org1MSP.peer")'
```
# 2. Error in the cli container
```
Error: could not assemble transaction, err proposal response was not successful, error code 500, msg chaincode registration failed: container exited with 0
```
![图片说明](https://img-ask.csdn.net/upload/201909/21/1569050405_900988.png)
# 3. Peer docker log for the instantiation (docker logs -f peer d4e4543te534r5)
```
[kokochannel][52342533] Exit chaincode: name:"lscc" (44159ms)
2019-09-21 06:23:09.871 UTC [endorser] SimulateProposal -> ERRO 04c [kokochannel][52342533] failed to invoke chaincode name:"lscc" , error: container exited with 0
github.com/hyperledger/fabric/core/chaincode.(*RuntimeLauncher).Launch.func1
    /opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/runtime_launcher.go:63
runtime.goexit
    /opt/go/src/runtime/asm_amd64.s:1333
chaincode registration failed
2019-09-21 06:23:09.871 UTC [comm.grpc.server] 1 -> INFO 04d unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.19.0.1:39190 grpc.code=OK grpc.call_duration=44.161172022s
```
![图片说明](https://img-ask.csdn.net/upload/201909/21/1569050439_481805.png)
# 4. Orderer docker log for the instantiation (docker logs -f 78ret8rere8retre)
```
2019-09-21 06:04:23.367 UTC [common.deliver] deliverBlocks -> WARN 039 [channel: kokochannel] Rejecting deliver request for 192.168.213.134:46044 because of consenter error
2019-09-21 06:04:23.372 UTC [comm.grpc.server] 1 -> INFO 03a streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.213.134:46044 grpc.code=OK grpc.call_duration=197.489105ms
2019-09-21 06:04:23.591 UTC [common.deliver] deliverBlocks -> WARN 03b [channel: kokochannel] Rejecting deliver request for 192.168.213.134:46046 because of consenter error
2019-09-21 06:04:23.643 UTC [comm.grpc.server] 1 -> INFO 03c streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.213.134:46046 grpc.code=OK grpc.call_duration=252.482946ms
2019-09-21 06:04:23.848 UTC [common.deliver] deliverBlocks -> WARN 03d [channel: kokochannel] Rejecting deliver request for 192.168.213.134:46048 because of consenter error
2019-09-21 06:04:23.848 UTC [comm.grpc.server] 1 -> INFO 03e streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.213.134:46048 grpc.code=OK grpc.call_duration=200.238916ms
2019-09-21 06:04:24.054 UTC [common.deliver] deliverBlocks -> WARN 03f [channel: kokochannel] Rejecting deliver request for 192.168.213.134:46050 because of consenter error
2019-09-21 06:04:24.054 UTC [comm.grpc.server] 1 -> INFO 040 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.213.134:46050 grpc.code=OK grpc.call_duration=201.471445ms
2019-09-21 06:04:24.260 UTC [common.deliver] deliverBlocks -> WARN 041 [channel: kokochannel] Rejecting deliver request for 192.168.213.134:46052 because of consenter error
2019-09-21 06:04:24.260 UTC [comm.grpc.server] 1 -> INFO 042 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.213.134:46052 grpc.code=OK grpc.call_duration=201.238803ms
2019-09-21 06:04:24.295 UTC [orderer.consensus.etcdraft] Step -> INFO 043 1 [logterm: 1, index: 3, vote: 0] cast MsgPreVote for 2 [logterm: 1, index: 3] at term 1 channel=kokochannel node=1
2019-09-21 06:04:24.339 UTC [orderer.consensus.etcdraft] Step -> INFO 044 1 [term: 1] received a MsgVote message with higher term from 2 [term: 2] channel=kokochannel node=1
2019-09-21 06:04:24.339 UTC [orderer.consensus.etcdraft] becomeFollower -> INFO 045 1 became follower at term 2 channel=kokochannel node=1
2019-09-21 06:04:24.339 UTC [orderer.consensus.etcdraft] Step -> INFO 046 1 [logterm: 1, index: 3, vote: 0] cast MsgVote for 2 [logterm: 1, index: 3] at term 2 channel=kokochannel node=1
2019-09-21 06:04:24.343 UTC [orderer.consensus.etcdraft] run -> INFO 047 raft.node: 1 elected leader 2 at term 2 channel=kokochannel node=1
2019-09-21 06:04:24.344 UTC [orderer.consensus.etcdraft] serveRequest -> INFO 048 Raft leader changed: 0 -> 2 channel=kokochannel node=1
2019-09-21 06:04:24.498 UTC [common.deliver] Handle -> WARN 049 Error reading from 192.168.213.134:46054: rpc error: code = Canceled desc = context canceled
2019-09-21 06:04:24.498 UTC [comm.grpc.server] 1 -> INFO 04a streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.213.134:46054 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=233.075631ms
2019-09-21 06:23:09.917 UTC [orderer.common.broadcast] Handle -> WARN 04b Error reading from 192.168.213.134:46118: rpc error: code = Canceled desc = context canceled
2019-09-21 06:23:09.917 UTC [comm.grpc.server] 1 -> INFO 04c streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=192.168.213.134:46118 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=44.160408206s
```
![图片说明](https://img-ask.csdn.net/upload/201909/21/1569050453_792288.png)
Solved.
gRPC connection error: "Make sure to call shutdown()"
Every time the Java client connects to the Python server, the gRPC connection logs an error, although the connection eventually succeeds. Each time it dumps a pile of error logs:
```
ERROR io.grpc.internal.ManagedChannelOrphanWrapper - *~*~*~ Channel ManagedChannelImpl{logId=433, target=57.101.32.97:8002} was not shutdown properly!!! ~*~*~* Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
java.lang.RuntimeException: ManagedChannel allocation site
    at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:94)
    at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:52)
    at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:43)
    at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:514)
    at com.ruijie.wechat.open.util.GrpcUtil.createChannel(GrpcUtil.java:32)
    at com.ruijie.wechat.open.util.GrpcUtil.getGrpcChannel(GrpcUtil.java:41)
    at com.ruijie.wechat.open.service.statistics.impl.UnifiedDataCenterService.dealCountTypeSpecs(UnifiedDataCenterService.java:127)
    at com.ruijie.wechat.open.service.statistics.impl.UnifiedDataCenterService$$FastClassBySpringCGLIB$$a137a63d.invoke(<generated>)
    at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
    at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:669)
    at com.ruijie.wechat.open.service.statistics.impl.UnifiedDataCenterService$$EnhancerBySpringCGLIB$$df7696f8.dealCountTypeSpecs(<generated>)
    at com.ruijie.wechat.open.service.statistics.impl.UnifiedStatisticsServiceImpl.getCountInfo(UnifiedStatisticsServiceImpl.java:116)
    at com.ruijie.wechat.open.controller.statistics.UnifiedStatisticsController.getCount(UnifiedStatisticsController.java:127)
    at sun.reflect.GeneratedMethodAccessor502.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
    at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)
    at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)
    at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
    at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:661)
    at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.boot.web.filter.ApplicationContextHeaderFilter.doFilterInternal(ApplicationContextHeaderFilter.java:55)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.boot.actuate.trace.WebRequestTraceFilter.doFilterInternal(WebRequestTraceFilter.java:110)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:108)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:81)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.boot.actuate.autoconfigure.MetricsFilter.doFilterInternal(MetricsFilter.java:106)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:478)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1459)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)
```
Python client + Java server: gRPC error
I get a gRPC error with a Python client and a Java server. I couldn't find a ready-made example and I'm not very familiar with Python, so please take a look — a demo would be much appreciated. Thanks!
```
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNIMPLEMENTED, Method not found: DemoGrpc.SendMailService/sendMail)>
```
The proto files are identical on both sides. Calls between the same language work; the error only appears when the two languages call each other.
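StatusCode.UNIMPLEMENTED with "Method not found" usually means the two sides computed different fully-qualified method paths. The path sent on the wire is built from the proto `package`, the service name, and the rpc name (Java's `option java_package` does not affect it), so both stubs must be generated from byte-identical .proto files — in particular with the same `package` line. An illustrative helper (not part of the gRPC API) showing how the path is formed:

```python
def grpc_method_path(proto_package, service, rpc):
    # The wire-level path gRPC sends as :path in HTTP/2: /<package>.<Service>/<Rpc>
    prefix = "{}.{}".format(proto_package, service) if proto_package else service
    return "/{}/{}".format(prefix, rpc)
```

If one side was generated from a .proto without the `package` line (or with a different one), the paths diverge and the server reports the method as not found.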
Error:(22, 8) java: duplicate class: net.whir.hos.organization.grpc.converters.UserInfoConverterImpl
Starting with Spring Boot reports a duplicate class. Could someone help me work out how to fix this? ![图片说明](https://img-ask.csdn.net/upload/201911/23/1574503940_832340.png)
--go_out: protoc-gen-go: The system cannot find the file specified.
Running `protoc --go_out=plugins=grpc:. hello.proto` fails with: --go_out: protoc-gen-go: The system cannot find the file specified.
Hyperledger Explorer throws Error: 2 UNKNOWN: Stream removed
```
[2019-11-13 20:20:34.351] [DEBUG] Platform - ******* Initialization started for hyperledger fabric platform ******
[2019-11-13 20:20:34.352] [DEBUG] Platform - Setting admin organization enrolment files
[2019-11-13 20:20:34.352] [DEBUG] Platform - Creating client [[object Object]] >> first-network
[2019-11-13 20:20:34.353] [DEBUG] FabricUtils - ************ Initializing fabric client for [first-network]************
[2019-11-13 20:20:34.353] [DEBUG] FabricClient - Client configuration [first-network] ... this.client_config { name: 'first-network', profile: '/root/go/src/github.com/hyperledger/blockchain-explorer/app/platform/fabric/connection-profile/first-network.json', enableAuthentication: false }
[2019-11-13 20:20:34.354] [DEBUG] FabricGateway - LOADING CONFIGURATION [OBJECT OBJECT]
[2019-11-13 20:20:34.354] [DEBUG] FabricGateway - LOADING CONFIGURATION [OBJECT OBJECT]
[2019-11-13 20:20:34.354] [INFO] FabricGateway - peer0.org1.example.com
[2019-11-13 20:20:34.354] [INFO] FabricGateway - peer0.org1.example.com
[2019-11-13 20:20:34.355] [INFO] FabricGateway - /root/go/src/github.com/hyperledger/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts/Admin@org1.example.com-cert.pem adminPrivateKeyPath /root/go/src/github.com/hyperledger/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore/5ceddae32a833ed134279f0a7caab64a7193614c86948ad3491bb309283b401d_sk
[2019-11-13 20:20:34.355] [INFO] FabricGateway - /root/go/src/github.com/hyperledger/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts/Admin@org1.example.com-cert.pem adminPrivateKeyPath /root/go/src/github.com/hyperledger/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore/5ceddae32a833ed134279f0a7caab64a7193614c86948ad3491bb309283b401d_sk
[2019-11-13 20:20:34.356] [INFO] FabricGateway - admin
[2019-11-13 20:20:34.356] [INFO] FabricGateway - admin
[2019-11-13 20:20:34.631] [DEBUG] FabricClient - Set client [first-network] default channel as >> mychannel
[2019-11-13 20:20:34.652] [ERROR] FabricClient - { Error: 2 UNKNOWN: Stream removed
    at Object.exports.createStatusError (/root/go/src/github.com/hyperledger/blockchain-explorer/node_modules/fabric-network/node_modules/grpc/src/common.js:91:15)
    at Object.onReceiveStatus (/root/go/src/github.com/hyperledger/blockchain-explorer/node_modules/fabric-network/node_modules/grpc/src/client_interceptors.js:1204:28)
    at InterceptingListener._callNext (/root/go/src/github.com/hyperledger/blockchain-explorer/node_modules/fabric-network/node_modules/grpc/src/client_interceptors.js:568:42)
    at InterceptingListener.onReceiveStatus (/root/go/src/github.com/hyperledger/blockchain-explorer/node_modules/fabric-network/node_modules/grpc/src/client_interceptors.js:618:8)
    at callback (/root/go/src/github.com/hyperledger/blockchain-explorer/node_modules/fabric-network/node_modules/grpc/src/client_interceptors.js:845:24)
  code: 2,
  metadata: Metadata { _internal_repr: {} },
  details: 'Stream removed',
  peer: { url: 'grpc://[address hidden]:7051', name: 'peer0.org1.example.com',
    options: { 'grpc.max_receive_message_length': -1, 'grpc.max_send_message_length': -1, 'grpc.keepalive_time_ms': 120000, 'grpc.http2.min_time_between_pings_ms': 120000, 'grpc.keepalive_timeout_ms': 20000, 'grpc.http2.max_pings_without_data': 0, 'grpc.keepalive_permit_without_calls': 1, name: 'peer0.org1.example.com', 'request-timeout': 300000,
      clientCert: '-----BEGIN CERTIFICATE-----\nMIICKzCCAdGgAwIBAgIRAJw2YDgGud/J/TE7tq1P1a8wCgYIKoZIzj0EAwIwczEL\nMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG\ncmFuY2lzY28xGTAXBgNVBAoTEG9yZzEuZXhhbXBsZS5jb20xHDAaBgNVBAMTE2Nh\nLm9yZzEuZXhhbXBsZS5jb20wHhcNMTkxMTEzMDMzMjAwWhcNMjkxMTEwMDMzMjAw\nWjBsMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMN\nU2FuIEZyYW5jaXNjbzEPMA0GA1UECxMGY2xpZW50MR8wHQYDVQQDDBZBZG1pbkBv\ncmcxLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEDv+QrNpH\nD7X3Erolb3YUfoPwfWXpBReeNumtPqGmSqMtpEOm/Bz1JoZ6crNav6/QDrOrOZHl\naHYxMfFrZy0aO6NNMEswDgYDVR0PAQH/BAQDAgeAMAwGA1UdEwEB/wQCMAAwKwYD\nVR0jBCQwIoAgjdsNfvZPIL6xrscyAQBn/1ykaWuPEFOyeEa0ryCJYqMwCgYIKoZI\nzj0EAwIDSAAwRQIhAJ5sBseMFcGBs/6xr2xj/RzsPWCfrWvk6wL3y/sm468aAiBL\n9CBXGEs01kgH33Z61/vEpTJLubdw19v2gr1rE4p/kw==\n-----END CERTIFICATE-----\n',
      'grpc.ssl_target_name_override': 'peer0.org1.example.com', 'grpc.default_authority': 'peer0.org1.example.com' } } }
[2019-11-13 20:20:34.653] [DEBUG] FabricClient - this.defaultPeer peer0.org1.example.com
[2019-11-13 20:20:34.662] [ERROR] FabricClient - { Error: 2 UNKNOWN: Stream removed
    at Object.exports.createStatusError (/root/go/src/github.com/hyperledger/blockchain-explorer/node_modules/fabric-network/node_modules/grpc/src/common.js:91:15)
    at Object.onReceiveStatus (/root/go/src/github.com/hyperledger/blockchain-explorer/node_modules/fabric-network/node_modules/grpc/src/client_interceptors.js:1204:28)
    at InterceptingListener._callNext (/root/go/src/github.com/hyperledger/blockchain-explorer/node_modules/fabric-network/node_modules/grpc/src/client_interceptors.js:568:42)
    at InterceptingListener.onReceiveStatus (/root/go/src/github.com/hyperledger/blockchain-explorer/node_modules/fabric-network/node_modules/grpc/src/client_interceptors.js:618:8)
    at callback (/root/go/src/github.com/hyperledger/blockchain-explorer/node_modules/fabric-network/node_modules/grpc/src/client_interceptors.js:845:24)
  code: 2,
  metadata: Metadata { _internal_repr: {} },
  details: 'Stream removed' }
[2019-11-13 20:20:34.665] [INFO] main - Please set logger.setLevel to DEBUG in ./app/helper.js to log the debugging.
```
fabric network environment with the java-sdk: gRPC connection exception
1. Integrating the fabric network environment with the fabric-java-sdk. Startup keeps failing with:
```
23:16:57.825 [main] ERROR org.hyperledger.fabric.sdk.Channel - Channel Channel{id: 1, name: mychannel} Sending proposal with transaction: 5fe505ed0e555ac50cc4773876d8eb3746da951a3ad2f8461b2fa177901e5862 to Peer{ id: 2, name: peer0.org1.example.com, channelName: mychannel, url: grpc://x.x.x.x:7051} failed because of: gRPC failure=Status{code=INTERNAL, description=http2 exception, cause=io.netty.handler.codec.http2.Http2Exception: First received frame was not SETTINGS. Hex dump for first 5 bytes: 1503010002
    at io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:85)
    at io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.verifyFirstFrameIsSettings(Http2ConnectionHandler.java:350)
    at io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:251)
    at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:450)
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
}
java.lang.Exception: io.grpc.StatusRuntimeException: INTERNAL: http2 exception
    at org.hyperledger.fabric.sdk.Channel.sendProposalToPeers(Channel.java:4179)
    at org.hyperledger.fabric.sdk.Channel.getConfigBlock(Channel.java:854)
    at org.hyperledger.fabric.sdk.Channel.parseConfigBlock(Channel.java:1820)
    at org.hyperledger.fabric.sdk.Channel.loadCACertificates(Channel.java:1657)
    at org.hyperledger.fabric.sdk.Channel.initialize(Channel.java:1103)
```
Access to grpc://x.x.x.x:7051 fails; the server is an Alibaba Cloud instance. Everything I found online says it's a gRPC communication error and suggests adding the appropriate dependencies, which didn't help. I really can't work out what's going on — could someone advise? The Fabric environment is version 1.0.
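The hex dump in the netty exception is informative: `15 03 01 00 02` is a TLS record header (content type 0x15 = alert, version 3.1), i.e. the peer answered with TLS while the SDK spoke plaintext HTTP/2. That typically means the Fabric peer has TLS enabled but the channel URL uses `grpc://` instead of `grpcs://` with the peer's TLS CA certificate. A small illustrative check (not SDK code):

```python
def looks_like_tls_record(first_bytes):
    # TLS record header: content type 0x14-0x17, then major version 3.
    return (len(first_bytes) >= 3
            and 0x14 <= first_bytes[0] <= 0x17
            and first_bytes[1] == 3)

# The five bytes reported by "First received frame was not SETTINGS":
frame = bytes.fromhex("1503010002")
```

An HTTP/2 client preface starts with the ASCII bytes `PRI * HTTP/2`, which is why netty rejects this frame as "not SETTINGS".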
Use the @grpc/proto-loader module with grpc.loadPackageDefinition instead
postgres://root:root@127.0.0.1:5432/root
```
(node:12744) DeprecationWarning: grpc.load: Use the @grpc/proto-loader module with grpc.loadPackageDefinition instead
<<<<<<<<<<<<<<<<<<<<<<<<<< Explorer Error >>>>>>>>>>>>>>>>>>>>>
TypeError: Channel's second argument must be a ChannelCredentials
    at ServiceClient.Client (/home/wdrfxy/blockchain-explorer/node_modules/grpc/src/client.js:410:23)
    at new ServiceClient (/home/wdrfxy/blockchain-explorer/node_modules/grpc/src/client.js:936:12)
    at new AdminPeer (/home/wdrfxy/blockchain-explorer/app/platform/fabric/AdminPeer.js:65:30)
    at FabricClient.newAdminPeer (/home/wdrfxy/blockchain-explorer/app/platform/fabric/FabricClient.js:546:25)
    at FabricClient.initializeChannelFromDiscover (/home/wdrfxy/blockchain-explorer/app/platform/fabric/FabricClient.js:499:38)
    at <anonymous>
Received kill signal, shutting down gracefully
Closed out connections
```
Porting gRPC: the key always fails to load on the open-source Java server, reporting Proc-Type: 4
I'm new to gRPC, and here is what I've found: the product must use a fixed key file in the format below, which looks like PKCS#1, and loading it on the tool's server side always fails. The open-source gRPC client and server talk to each other fine with their own keys, which I checked are PKCS#8. So does open-source gRPC only support PKCS#8 keys and not PKCS#1? The product's key has to be PKCS#1 — with a PKCS#8 key the product complains that the key file is not encrypted! My only hope now is to adapt the open-source side to accept PKCS#1. How do I do that? I only have 3 credits and I'm offering them all — I'd be very grateful for an answer!
```
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,29DE8F99F382D122

3gQZqxU8HPaUzfFOX1scxXU2qiJezXEwVarq13fALK6/08CEs+66O7iQxfjSMsT9
J9aDluB7Xr9JtAhAqGCQZ9XMLS5rP7aOWnXpmp46XtDYoYEB3Kebov7iFJOEAgXf
xtPm7AJtQSqTE1wof4y9EQTjysECnndyyXDASt/ugvfFuLn6dZ5Qiv8f2Jpce8Ek
dzU6ToVFnrb3eNt+/UP1PGCKFuTA6nEG284e0FrTE94JL8Fwi9NWlp7CdYINM2HL
Q6TMLIVdxJk/gnZm3h2bmVrqhMoacNdStC6EKMoEBoSWsqTpQRg08Z9qA6m5BjYI
CzoRNfB5BPIIVxCa1maxWbCbac1Wk+34bwk1aLuKZGXRpRWT3wE5vMYr6y54MbYt
XMtB7I6xLQZRzWEkqxWtpca/PfzgGF0DwxSLGFIVmc0EpdX7OJQEjPoLeCTokDlG
K4c1S930kqMrkK8Fc4o7l05TzRGwQQVJCw0MaN626daSy1obF/hHoZPt8jTWFmBV
kELDGwljaxvBj5oqBZZuZwv2IGZ56W4vbyDuF7/CoGLAtnh5XxXSV14jlo62NYIf
g/BW6tPznLXCWm+gzNJ6nud7cARZZwOfF4dQND6JIKGLob0cAUHkawqXXEbRUdF5
IVCRCbrpg14BZyRyoGsj7oQXBQz0h/8QE7Lj8QqIZ5is84xAczs/+UWtx+qZKFq1
ugAm+raN+jFL4LtAe7dovDSx7SfRgPkQJykToaiMP4hJcPqHHtoBw7oiaFZzY5KV
p131eXUMr8Yv9w1FWBlc8IrInvBkGmMhcsPbePh9/CKQdU6WH993Dg==
-----END RSA PRIVATE KEY-----
```
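For what it's worth, grpc-java's netty transport expects PKCS#8, and an encrypted traditional PKCS#1 key (the `Proc-Type: 4,ENCRYPTED` header) can be converted offline with openssl rather than adapting the server code. A sketch using a throwaway demo key as a stand-in for the product key; for the real encrypted key, add `-passin pass:...` or enter the passphrase when prompted:

```shell
# Generate a stand-in key in traditional PKCS#1 format
# (the -traditional flag is OpenSSL 3.x; fall back for 1.x, whose genrsa
# already writes PKCS#1):
openssl genrsa -traditional -out pkcs1_key.pem 2048 2>/dev/null ||
  openssl genrsa -out pkcs1_key.pem 2048

# Convert to unencrypted PKCS#8, which the Java server can load
# (for an encrypted input, openssl will ask for the passphrase):
openssl pkcs8 -topk8 -nocrypt -in pkcs1_key.pem -out pkcs8_key.pem
head -1 pkcs8_key.pem   # -> -----BEGIN PRIVATE KEY-----
```

The converse direction (producing an encrypted PKCS#1 file for the product) is `openssl rsa -in pkcs8_key.pem -des3 -out pkcs1_enc.pem`.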
Building TensorFlow from C++ fails with no such package '@icu//' and no such package '@grpc//' — what causes these two errors?
![图片说明](https://img-ask.csdn.net/upload/201907/03/1562133791_485352.png)
Help with a Docker daemon that keeps dying on its own (containerd exception)
CentOS 7, a Docker cluster built from several virtual machines, all servers configured identically. On one of them the Docker daemon keeps dying for no obvious reason. The messages log looks like this:

```
Sep  3 12:30:03 sup-svc-70 systemd: Unit containerd.service entered failed state.
Sep  3 12:30:03 sup-svc-70 containerd: github.com/containerd/containerd/metrics/cgroups.(*oomCollector).start(0xc0003146c0)
Sep  3 12:30:03 sup-svc-70 containerd: /go/src/github.com/containerd/containerd/metrics/cgroups/oom.go:114 +0x7d
Sep  3 12:30:03 sup-svc-70 containerd: created by github.com/containerd/containerd/metrics/cgroups.newOOMCollector
Sep  3 12:30:03 sup-svc-70 containerd: /go/src/github.com/containerd/containerd/metrics/cgroups/oom.go:50 +0xed
Sep  3 12:30:03 sup-svc-70 containerd: goroutine 15 [IO wait, 1615 minutes]:
Sep  3 12:30:03 sup-svc-70 containerd: internal/poll.runtime_pollWait(0x7f906759ff00, 0x72, 0x0)
Sep  3 12:30:03 sup-svc-70 containerd: /.GOROOT/src/runtime/netpoll.go:173 +0x68
Sep  3 12:30:03 sup-svc-70 containerd: internal/poll.(*pollDesc).wait(0xc00036af98, 0x72, 0xc0001b6800, 0x0, 0x0)
Sep  3 12:30:03 sup-svc-70 containerd: /.GOROOT/src/internal/poll/fd_poll_runtime.go:85 +0x9c
Sep  3 12:30:03 sup-svc-70 containerd: internal/poll.(*pollDesc).waitRead(0xc00036af98, 0xffffffffffffff00, 0x0, 0x0)
Sep  3 12:30:03 sup-svc-70 containerd: /.GOROOT/src/internal/poll/fd_poll_runtime.go:90 +0x3f
Sep  3 12:30:03 sup-svc-70 containerd: internal/poll.(*FD).Accept(0xc00036af80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
Sep  3 12:30:03 sup-svc-70 containerd: /.GOROOT/src/internal/poll/fd_unix.go:384 +0x1a2
Sep  3 12:30:03 sup-svc-70 containerd: net.(*netFD).accept(0xc00036af80, 0xc000326058, 0x0, 0x0)
Sep  3 12:30:03 sup-svc-70 containerd: /.GOROOT/src/net/fd_unix.go:238 +0x44
Sep  3 12:30:03 sup-svc-70 containerd: net.(*UnixListener).accept(0xc0000df890, 0xc0003d9dc8, 0xc0003d9dd0, 0x18)
Sep  3 12:30:03 sup-svc-70 containerd: /.GOROOT/src/net/unixsock_posix.go:162 +0x34
Sep  3 12:30:03 sup-svc-70 containerd: net.(*UnixListener).Accept(0xc0000df890, 0x55d3f39bdd58, 0xc0000c2000, 0x55d3f39eb6c0, 0xc000326058)
Sep  3 12:30:03 sup-svc-70 containerd: /.GOROOT/src/net/unixsock.go:257 +0x49
Sep  3 12:30:03 sup-svc-70 containerd: github.com/containerd/containerd/vendor/google.golang.org/grpc.(*Server).Serve(0xc0000c2000, 0x55d3f39e1960, 0xc0000df890, 0x0, 0x0)
Sep  3 12:30:03 sup-svc-70 containerd: /go/src/github.com/containerd/containerd/vendor/google.golang.org/grpc/server.go:544 +0x212
Sep  3 12:30:03 sup-svc-70 containerd: github.com/containerd/containerd/services/server.(*Server).ServeGRPC(0xc00026e7b0, 0x55d3f39e1960, 0xc0000df890, 0x18, 0xc0004b2738)
Sep  3 12:30:03 sup-svc-70 containerd: /go/src/github.com/containerd/containerd/services/server/server.go:167 +0x6b
Sep  3 12:30:03 sup-svc-70 containerd: github.com/containerd/containerd/services/server.(*Server).ServeGRPC-fm(0x55d3f39e1960, 0xc0000df890, 0xc0000df890, 0x0)
Sep  3 12:30:03 sup-svc-70 containerd: /go/src/github.com/containerd/containerd/cmd/containerd/command/main.go:171 +0x40
Sep  3 12:30:03 sup-svc-70 containerd: github.com/containerd/containerd/cmd/containerd/command.serve.func1(0x55d3f39e1960, 0xc0000df890, 0xc000249f70, 0x55d3f39e2f20, 0xc00003c018, 0xc0001e9640, 0x1f)
Sep  3 12:30:03 sup-svc-70 dockerd: time="2019-09-03T12:30:03.279896469+08:00" level=error msg="failed to get event" error="rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\"" module=libcontainerd namespace=moby
Sep  3 12:30:03 sup-svc-70 dockerd: time="2019-09-03T12:30:03.279979412+08:00" level=error msg="failed to get event" error="rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\"" module=libcontainerd namespace=moby
```

What could be causing this, how can it be fixed, or at least which direction should I investigate?
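Since the trace implicates containerd's OOM-event collector goroutine, one plausible first step (a sketch, not a confirmed diagnosis) is to check whether the kernel OOM killer fired around the time systemd marked containerd as failed. The sample file below stands in for the real `/var/log/messages` so the commands run as-is; on the affected host, point the greps at the real file. The kernel log line is a hypothetical example, not taken from the report above.

```shell
# Stand-in for /var/log/messages; replace /tmp/messages.sample with the
# real log file on the affected host.
cat > /tmp/messages.sample <<'EOF'
Sep  3 12:29:58 sup-svc-70 kernel: Out of memory: Kill process 1234 (containerd) score 900
Sep  3 12:30:03 sup-svc-70 systemd: Unit containerd.service entered failed state.
EOF

# Was the kernel OOM killer involved shortly before containerd died?
grep -ci 'out of memory' /tmp/messages.sample

# When exactly did systemd mark containerd as failed?
grep -c 'containerd.service entered failed state' /tmp/messages.sample
```

If the OOM killer shows up, the question becomes which workload on that one host exhausts memory; if not, `journalctl -u containerd` around the failure timestamp is the next place to look.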
Sending proposal to peer1.org1.example.com failed
I was originally using the Node.js SDK and am now switching to the Java SDK. The underlying Docker containers all started successfully:

![docker](https://img-ask.csdn.net/upload/201809/30/1538267524_410437.png)

But when I run the End2endIT example in IDEA, it reports:

```
Sending instantiateProposalRequest to all peers with arguments: a and b set to 100 and 200 respectively
2018-09-30 00:30:16,755 main ERROR Channel:2768 - Sending proposal to peer1.org1.example.com failed because of: gRPC failure=Status{code=UNAVAILABLE, description=Network closed for unknown reason, cause=null}
java.lang.Exception: io.grpc.StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason
	at org.hyperledger.fabric.sdk.Channel.sendProposalToPeers(Channel.java:2768)
	at org.hyperledger.fabric.sdk.Channel.sendInstantiationProposal(Channel.java:1540)
	at org.hyperledger.fabric.sdkintegration.End2endIT.runChannel(End2endIT.java:492)
	at org.hyperledger.fabric.sdkintegration.End2endIT.runFabricTest(End2endIT.java:208)
	at org.hyperledger.fabric.sdkintegration.End2endIT.setup(End2endIT.java:189)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason
	at io.grpc.Status.asRuntimeException(Status.java:526)
	at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:467)
	at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:37)
	at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
	at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
	at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:684)
	at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:37)
	at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
	at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
	at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:391)
	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:471)
	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:553)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:474)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:591)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
2018-09-30 00:30:16,758 main ERROR Channel:2768 - Sending proposal to peer0.org1.example.com failed because of: gRPC failure=Status{code=UNAVAILABLE, description=Network closed for unknown reason, cause=null}
java.lang.Exception: io.grpc.StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason
	at org.hyperledger.fabric.sdk.Channel.sendProposalToPeers(Channel.java:2768)
	at org.hyperledger.fabric.sdk.Channel.sendInstantiationProposal(Channel.java:1540)
	at org.hyperledger.fabric.sdkintegration.End2endIT.runChannel(End2endIT.java:492)
	at org.hyperledger.fabric.sdkintegration.End2endIT.runFabricTest(End2endIT.java:208)
	at org.hyperledger.fabric.sdkintegration.End2endIT.setup(End2endIT.java:189)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason
	at io.grpc.Status.asRuntimeException(Status.java:526)
	at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:467)
	at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:37)
	at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
	at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
	at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:684)
	at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:37)
	at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
	at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
	at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:391)
	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:471)
	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:553)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:474)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:591)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Received 2 instantiate proposal responses. Successful+verified: 0 . Failed: 2
Not enough endorsers for instantiate :0endorser failed with Sending proposal to peer1.org1.example.com failed because of: gRPC failure=Status{code=UNAVAILABLE, description=Network closed for unknown reason, cause=null}, on peerPeer peer1.org1.example.com url: grpc://47.104.249.104:7056
Not enough endorsers for instantiate :0endorser failed with Sending proposal to peer0.org1.example.com failed because of: gRPC failure=Status{code=UNAVAILABLE, description=Network closed for unknown reason, cause=null}, on peerPeer peer0.org1.example.com url: grpc://47.104.249.104:7051
java.lang.AssertionError: Not enough endorsers for instantiate :0endorser failed with Sending proposal to peer1.org1.example.com failed because of: gRPC failure=Status{code=UNAVAILABLE, description=Network closed for unknown reason, cause=null}. Was verified:false
	at org.junit.Assert.fail(Assert.java:88)
	at org.hyperledger.fabric.sdkintegration.End2endIT.runChannel(End2endIT.java:512)
	at org.hyperledger.fabric.sdkintegration.End2endIT.runFabricTest(End2endIT.java:208)
	at org.hyperledger.fabric.sdkintegration.End2endIT.setup(End2endIT.java:189)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
```

I've dug into this for a long time without finding the cause. Any pointers would be much appreciated!
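gRPC `UNAVAILABLE` here means the transport failed before any Fabric logic ran, so a minimal first check (a hedged sketch, not a definitive fix) is whether the peer ports are reachable at all over plain TCP. The host and ports are the ones that appear in the error output above; `probe` is a helper defined here, not part of any SDK.

```shell
# Probe a TCP endpoint using bash's built-in /dev/tcp redirection;
# prints "<host>:<port> reachable" or "... unreachable".
probe() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} unreachable"
  fi
}

probe 47.104.249.104 7051   # peer0.org1.example.com (from the error above)
probe 47.104.249.104 7056   # peer1.org1.example.com (from the error above)
```

If the ports are unreachable, the problem is host networking (firewall or cloud security-group rules) rather than the SDK; if they are reachable, a common next suspect is a TLS mismatch, i.e. peers serving `grpcs://` while the test's connection properties use plain `grpc://`.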