Python import problem: from wx.lib.pubsub import Publisher

At the start of my module I add:
from wx.lib.pubsub import Publisher

It raises the following error:
Traceback (most recent call last):
  File "G:\learning\python\python_workspace\test_QQ\src\test\test2.py", line 15, in <module>
    from wx.lib.pubsub import Publisher
ImportError: cannot import name Publisher

I've searched online for quite a while but found very little information.
I don't know what to do; any guidance would be appreciated!

1 answer

from wx.lib.pubsub import pub as Publisher
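In newer wxPython releases the old Publisher class is no longer importable from wx.lib.pubsub; the package exposes the newer pub module instead, which is why aliasing it as above restores a working import. Below is a minimal sketch of how the replacement API is typically used, assuming a reasonably recent wxPython; the "status" topic name and the on_status listener are made up for illustration:

from wx.lib.pubsub import pub

def on_status(msg):
    # called whenever something publishes on the "status" topic
    print("got:", msg)

pub.subscribe(on_status, "status")
pub.sendMessage("status", msg="hello from pubsub")

Note that pub is a module, not a class, so there is no Publisher() instance to create; you call pub.subscribe and pub.sendMessage directly. Depending on your wxPython/PyPubSub version the older "arg1" message protocol may apply instead, in which case sendMessage takes a single positional payload rather than keyword arguments.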

Other related recommendations
Error when running ride.py with Python

I'm a beginner with a question: when I run ride.py from the cmd command line I get the error below. Could someone take a look? Thanks!

E:\Python27\Scripts>python ride.py
Traceback (most recent call last):
  File "E:\Python27\lib\site-packages\robotide\__init__.py", line 83, in main
    _run(inpath, not noupdatecheck, debug_console)
  File "E:\Python27\lib\site-packages\robotide\__init__.py", line 102, in _run
    from robotide.application import RIDE
  File "E:\Python27\lib\site-packages\robotide\application\__init__.py", line 16, in <module>
    from .application import RIDE, Project
  File "E:\Python27\lib\site-packages\robotide\application\application.py", line 22, in <module>
    from robotide.namespace import Namespace
  File "E:\Python27\lib\site-packages\robotide\namespace\__init__.py", line 16, in <module>
    from .namespace import Namespace
  File "E:\Python27\lib\site-packages\robotide\namespace\namespace.py", line 31, in <module>
    from robotide.publish import PUBLISHER, RideSettingsChanged, RideLogMessage
  File "E:\Python27\lib\site-packages\robotide\publish\__init__.py", line 123, in <module>
    from .messages import *
  File "E:\Python27\lib\site-packages\robotide\publish\messages.py", line 21, in <module>
    from .messages2 import *
  File "E:\Python27\lib\site-packages\robotide\publish\messages2.py", line 24, in <module>
    from robotide.publish import publisher
  File "E:\Python27\lib\site-packages\robotide\publish\publisher.py", line 24, in <module>
    from pubsub import pub
ImportError: cannot import name pub
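The traceback ends in RIDE's publisher.py failing on "from pubsub import pub", which usually means the pubsub package that this Python 2.7 installation resolves is not a working Pypubsub (missing, too old, or shadowed by another package with the same name). A minimal diagnostic sketch, offered as a hedged suggestion rather than part of the original thread:

try:
    import pubsub
    # check which 'pubsub' package is actually being imported
    print("pubsub package found at:", pubsub.__file__)
    from pubsub import pub  # this is the exact import RIDE needs
    print("from pubsub import pub works")
except ImportError as exc:
    # Often fixed by reinstalling Pypubsub (pip install --upgrade Pypubsub),
    # or by removing a conflicting third-party package also named 'pubsub'.
    print("pubsub problem:", exc)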

Using Google PubSub in Golang: the most cost-effective way to poll the service

We're in the process of moving from AMQP to Google's Pubsub.

The docs (https://cloud.google.com/pubsub/subscriber) suggest that pull might be the best choice for us, since we're using Compute Engine and can't open our workers to receive via the push service.

They also say that pull might incur additional costs depending on usage:

"If polling is used, high network usage may be incurred if you are opening connections frequently and closing them immediately."

We created a test subscriber in Go that runs in a loop like so:

func main() {
    jsonKey, err := ioutil.ReadFile("pubsub-key.json")
    if err != nil {
        log.Fatal(err)
    }

    conf, err := google.JWTConfigFromJSON(
        jsonKey,
        pubsub.ScopeCloudPlatform,
        pubsub.ScopePubSub,
    )
    if err != nil {
        log.Fatal(err)
    }

    ctx := cloud.NewContext("xxx", conf.Client(oauth2.NoContext))

    msgIDs, err := pubsub.Publish(ctx, "topic1", &pubsub.Message{
        Data: []byte("hello world"),
    })
    if err != nil {
        log.Println(err)
    }
    log.Printf("Published a message with a message id: %s", msgIDs[0])

    for {
        msgs, err := pubsub.Pull(ctx, "subscription1", 1)
        if err != nil {
            log.Println(err)
        }
        if len(msgs) > 0 {
            log.Printf("New message arrived: %v, len: %d", msgs[0].ID, len(msgs))
            if err := pubsub.Ack(ctx, "subscription1", msgs[0].AckID); err != nil {
                log.Fatal(err)
            }
            log.Println("Acknowledged message")
            log.Printf("Message: %s", msgs[0].Data)
        }
    }
}

The question I have, though, is really whether this is the correct / recommended way to go about pulling messages.

We receive about 100 msg per second throughout the day. I'm not sure if running it in an endless loop is going to bankrupt us, and I can't find any other decent Go examples.

Redis Golang client periodically discards bad PubSub connections (EOF)

What I did:

I'm using the golang Redis library from github.com/go-redis/redis. My client listens on a PubSub channel called 'control'. Whenever a message arrives, I handle it and continue receiving the next message. I listen endlessly and the messages can come often, or sometimes not for days.

What I expect:

I expect the redis channel to stay open endlessly and receive messages as they are sent.

What I experience:

Usually it runs well for days, but every once in a while client.Receive() returns an EOF error. After this error, the client no longer receives messages on that channel. Internally, the redis client prints the following message to stdout:

redis: 2019/08/29 14:18:57 pubsub.go:151: redis: discarding bad PubSub connection: EOF

Disclaimer: I am not certain that this error is what causes me to stop receiving messages, it just seems related.

Additional questions:

I'd like to understand why this happens, whether this is normal, and whether reconnecting to the channel via client.Subscribe() whenever I encounter this behaviour is a good remedy, or whether I should address the root issue, whatever it may be.

The code:

Here is the entire code that handles my client (connect to redis, subscribe to channel, endlessly receive messages):

func InitAndListenAsync(log *log.Logger, sseHandler func(string, string) error) error {
    rootLogger = log.With(zap.String("component", "redis-client"))

    host := env.RedisHost
    port := env.RedisPort
    pass := env.RedisPass
    addr := fmt.Sprintf("%s:%s", host, port)
    tlsCfg := &tls.Config{}

    client = redis.NewClient(&redis.Options{
        Addr:      addr,
        Password:  pass,
        TLSConfig: tlsCfg,
    })

    if _, err := client.Ping().Result(); err != nil {
        return err
    }

    go func() {
        controlSub := client.Subscribe("control")
        defer controlSub.Close()
        for {
            in, err := controlSub.Receive() // *** SOMETIMES RETURNS EOF ERROR ***
            if err != nil {
                rootLogger.Error("failed to get feedback", zap.Error(err))
                break
            }
            switch in.(type) {
            case *redis.Message:
                cm := comm.ControlMessageEvent{}
                payload := []byte(in.(*redis.Message).Payload)
                if err := json.Unmarshal(payload, &cm); err != nil {
                    rootLogger.Error("failed to parse control message", zap.Error(err))
                } else if err := handleIncomingEvent(&cm); err != nil {
                    rootLogger.Error("failed to handle control message", zap.Error(err))
                }
            default:
                rootLogger.Warn("Received unknown input over REDIS PubSub control channel", zap.Any("received", in))
            }
        }
    }()

    return nil
}

cloud.google.com/go/pubsub iterator.go:44: undefined: acker and puller

I'm hitting the following error messages when I build code that uses cloud.google.com/go/pubsub. Am I using the wrong package, or is it just because it's in alpha?

$ go test
# cloud.google.com/go/pubsub
./iterator.go:44: undefined: acker
./iterator.go:46: undefined: puller
FAIL    cloud.google.com/go/pubsub [build failed]

Redis error: has anyone run into this?

2018.11.09 at 17:51:45.533 CST [lettuce-nioEventLoop-3-4] WARN io.netty.util.internal.logging.Slf4JLogger 141 warn - [/192.168.1.103:55005 -> /192.168.1.101:6379] Unexpected exception during request: java.lang.NullPointerException
java.lang.NullPointerException: null
    at com.lambdaworks.redis.protocol.RedisStateMachine.safeSet(RedisStateMachine.java:195) ~[lettuce-3.5.0.Final.jar:?]
    at com.lambdaworks.redis.protocol.RedisStateMachine.decode(RedisStateMachine.java:161) ~[lettuce-3.5.0.Final.jar:?]
    at com.lambdaworks.redis.protocol.RedisStateMachine.decode(RedisStateMachine.java:61) ~[lettuce-3.5.0.Final.jar:?]
    at com.lambdaworks.redis.pubsub.PubSubCommandHandler.decode(PubSubCommandHandler.java:60) ~[lettuce-3.5.0.Final.jar:?]
    at com.lambdaworks.redis.protocol.CommandHandler.channelRead(CommandHandler.java:153) ~[lettuce-3.5.0.Final.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) [netty-all-4.0.24.Final.jar:4.0.24.Final]
    at java.lang.Thread.run(Thread.java:722) [?:1.7.0_17]

How to use ctx.Done() correctly after a client disconnects?

If the client is disconnected by a network error, the server must close (in my case) the pub/sub connection. I know about the ctx.Done() function, but I don't know how to use it properly in my case. Can somebody please explain?

grpc-go: 1.7.0
go version go1.8.4

func (a *API) Notifications(in *empty.Empty, stream pb.Service_NotificationsServer) error {
    ctx := stream.Context()

    _, ok := user.FromContext(ctx)
    if !ok {
        return grpc.Errorf(codes.Unauthenticated, "user not found")
    }

    pubsub := a.redisClient.Subscribe("notifications")
    defer pubsub.Close()

    for {
        msg, err := pubsub.ReceiveMessage()
        if err != nil {
            grpclog.Warningf("Notifications: pubsub error: %v", err)
            return grpc.Errorf(codes.Internal, "pubsub error %v", err)
        }

        notification := &pb.Notification{}
        err = json.Unmarshal([]byte(msg.Payload), notification)
        if err != nil {
            grpclog.Warningf("Notifications: parse error: %v", err)
            continue
        }

        if err := stream.Send(notification); err != nil {
            grpclog.Warningf("Notifications: %v", err)
            return err
        }
        grpclog.Infof("Notifications: send msg %v", notification)
    }
}

Using pubsub with golang: ocgrpc.NewClientStatsHandler

I am following this tutorial to publish a topic to Pub/Sub from a golang project, and here's the code I have for that project at the moment:

package main

import "cloud.google.com/go/pubsub"
import "fmt"

func main() {
    fmt.Printf("hello, world ")
}

All it does is import the pubsub package, but when I run go get I get this error: undefined: ocgrpc.NewClientStatsHandler

C:\Users\iha001\Dev\golang-projects\src\github.com aguibihab\golang-playarea\src\gcloud>go get
# cloud.google.com/go/pubsub
..\..\..\..\..\cloud.google.com\go\pubsub\go18.go:34:51: undefined: ocgrpc.NewClientStatsHandler

Is there anything else I need to install to get this running?

The "new style" Google pubsub golang functions do not work correctly

I'm trying to use the Go pubsub library (https://godoc.org/google.golang.org/cloud/pubsub) against the local emulated pubsub server (https://cloud.google.com/sdk/gcloud/reference/beta/emulators/pubsub/). I'm finding that the "old style" (deprecated) functions (e.g. CreateSub and PullWait) work fine, but the "new style" API (e.g. Iterators and SubscriptionHandles) does not work as expected.

I've written two different unit tests that both test the same sequence of actions, one using the "new style" API and one using the "old style" API.

The sequence is:

- create a subscription
- fail to pull any messages (since none are available)
- publish a message
- pull that message, but don't ACK it
- lastly pull it again, which should take 10s since the message ACK timeout has to expire first

https://gist.github.com/ianrose14/db6ecd9ccb6c84c8b36bf49d93b11bfb

The test using the old-style API works just as I would expect:

=== RUN   TestPubSubRereadLegacyForDemo
--- PASS: TestPubSubRereadLegacyForDemo (10.32s)
    pubsubintg_test.go:217: PullWait returned in 21.64236ms (expected 0)
    pubsubintg_test.go:228: PullWait returned in 10.048119558s (expected 10s)
PASS

Whereas the test using the new-style API works unreliably. Sometimes things work as expected:

=== RUN   TestPubSubRereadForDemo
--- PASS: TestPubSubRereadForDemo (11.38s)
    pubsubintg_test.go:149: iter.Next() returned in 17.686701ms (expected 0)
    pubsubintg_test.go:171: iter.Next() returned in 10.059492646s (expected 10s)
PASS

But sometimes I find that iter.Stop() doesn't return promptly as it should (and note how the second iter.Next() took way longer than it should):

=== RUN   TestPubSubRereadForDemo
--- FAIL: TestPubSubRereadForDemo (23.87s)
    pubsubintg_test.go:149: iter.Next() returned in 7.3284ms (expected 0)
    pubsubintg_test.go:171: iter.Next() returned in 20.074994835s (expected 10s)
    pubsubintg_test.go:183: iter.Stop() took too long (2.475055901s)
FAIL

And other times I find that the first Pull after publishing the message takes too long (it should be near instant):

=== RUN   TestPubSubRereadForDemo
--- FAIL: TestPubSubRereadForDemo (6.32s)
    pubsubintg_test.go:147: failed to pull message from iterator: context deadline exceeded
FAIL

Any ideas? Are there any working examples using the new-style API? Unfortunately, the Go starter project here (https://cloud.google.com/go/getting-started/using-pub-sub) uses the old, deprecated API.

Go GCP Cloud PubSub is not publishing messages in batches

I am working on a sample project that takes output from BigQuery and publishes it to Pub/Sub. The row output from BigQuery could be >100,000. I saw there are options to batch publish and I've read in multiple places that 1k messages per batch is ideal. The issue I am running into is that for the life of me I can't get it to batch multiple messages, and I think the solution is simple, but I'm missing how to do it.

Here is what I have right now, and all it does is publish one message at a time.

func publish(client pubsub.Client, data []byte) (string, error) {
    ctx := context.Background()

    topic := client.Topic("topic-name")
    topic.PublishSettings = pubsub.PublishSettings{
        // ByteThreshold:  5000,
        CountThreshold: 1000, // no matter what I put here it still sends one per publish
        // DelayThreshold: 1000 * time.Millisecond,
    }

    result := topic.Publish(ctx, &pubsub.Message{
        Data: data,
    })

    id, err := result.Get(ctx)
    if err != nil {
        return "", err
    }

    return id, nil
}

And this function is called by:

for _, v := range qr {
    data, err := json.Marshal(v)
    if err != nil {
        log.Printf("Unable to marshal %s", data)
        continue
    }
    id, err := publish(*pubsubClient, data)
    if err != nil {
        log.Printf("Unable to publish message: %s", data)
    }
    log.Printf("Published message with id: %s", id)
}

Where qr is a slice of structs that contain the data returned from the BigQuery query.

Now, is it due to how I am calling the publish function that each message gets published individually, with topic.PublishSettings being overwritten on each call so it forgets the previous messages? I'm at a loss here.

I saw some of the batch publishing code here: https://github.com/GoogleCloudPlatform/golang-samples/blob/master/pubsub/topics/main.go#L217

But they don't actually call it in their sample, so I can't tell how it should be done.

Side note, and to prove my point further that it doesn't work: if I set the DelayThreshold in the topic.PublishSettings var to, say, 1 second, it simply publishes one message every second, not all the messages that are supposed to be in memory.

Appreciate the help, thanks.

EDIT #1:

So, going with kingkupps' comment, I switched the code to this for testing purposes (project and topic names switched from the real ones):

func QueryAndPublish(w http.ResponseWriter, r *http.Request) {
    ctx := context.Background()

    // setting up the pubsub client
    pubsubClient, err := pubsub.NewClient(ctx, "fake-project-id")
    if err != nil {
        log.Fatalf("Unable to get pubsub client: %v", err)
    }

    // init topic and settings for publishing 1000 messages in batch
    topic := pubsubClient.Topic("fake-topic")
    topic.PublishSettings = pubsub.PublishSettings{
        // ByteThreshold:  5000,
        CountThreshold: 1000,
        // DelayThreshold: 1000 * time.Millisecond,
    }

    // bq set up
    bqClient, err := bigquery.NewClient(ctx, "fake-project-id")
    if err != nil {
        log.Fatalf("Unable to get bq client: %v", err)
    }
    // bq query function call
    qr, err := query(*bqClient)
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("Got query results, publishing now")

    // marshalling messages to json format
    messages := make([][]byte, len(qr))
    timeToMarshal := time.Now()
    for i, v := range qr {
        data, err := json.Marshal(v)
        if err != nil {
            log.Printf("Unable to marshal %s", data)
            continue
        }
        messages[i] = data
    }
    elapsedMarshal := time.Since(timeToMarshal).Nanoseconds() / 1000000
    log.Printf("Took %v ms to marshal %v messages", elapsedMarshal, len(messages))

    // publishing messages
    timeToPublish := time.Now()
    publishCount := 0
    for _, v := range messages {
        // ignore result, err from topic.Publish return, just publish
        topic.Publish(ctx, &pubsub.Message{
            Data: v,
        })
        publishCount++
    }
    elapsedPublish := time.Since(timeToPublish).Nanoseconds() / 1000000
    log.Printf("Took %v ms to publish %v messages", elapsedPublish, publishCount)

    fmt.Fprint(w, "Job completed")
}

What this does now is, when my message count is 100,000, it finishes the publish calls in roughly 600 ms, but in the background it is still publishing one by one to the pubsub endpoint.

I can see this in both StackDriver and Wireshark, where my messages/second in StackDriver is roughly 10-16/second and Wireshark shows new connections per message sent.

Google pubsub golang subscriber stops receiving newly published messages after being idle for a few hours

I created a TOPIC in google pubsub, and created a SUBSCRIPTION inside the TOPIC with the following settings:

(screenshot: https://i.stack.imgur.com/FC6Us.png)

Then I wrote a puller in Go (https://godoc.org/cloud.google.com/go/pubsub), using its Receive method (https://godoc.org/cloud.google.com/go/pubsub#hdr-Receiving) to pull and acknowledge published messages:

package main

import (
    ...
)

func main() {
    ctx := context.Background()
    client, err := pubsub.NewClient(ctx, config.C.Project)
    if err != nil {
        // do things with err
    }

    sub := client.Subscription(config.C.PubsubSubscription)

    err = sub.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
        msg.Ack()
    })
    if err != context.Canceled {
        logger.Error(fmt.Sprintf("Cancelled: %s", err.Error()))
    }
    if err != nil {
        logger.Error(fmt.Sprintf("Error: %s", err.Error()))
    }
}

Nothing fancy, and it works well, but then after a while (~ after 3 hours idle) it stops receiving newly published messages, with no error(s), nothing. Am I missing something?

Error when launching ride.py from the Windows command line

Win10 64-bit, Python 3.8.1, pip 19.3.1, Pypubsub 4.0.3, pywin32 227, requests 2.22.0, robotframework 3.1.2, robotframework-ride 1.7.4, setuptools 41.2.0, wxPython 4.1.0a1.dev4523+46bae17a.

What happens: launching from the command line with python ride.py fails with the error shown in the screenshot; double-clicking ride.py just flashes a blank console window and then nothing happens. Above the error in the screenshot there is also a fairly long block of lines of the form <class 'robotide.preferences.configobj.UnreprError'> Parse error in value at line XX.

![screenshot](https://img-ask.csdn.net/upload/202001/12/1578762599_139263.png)

Any help would be much appreciated.

Golang Redis PubSub timeout

So far I've been doing this:

import (
    _redis "gopkg.in/redis.v3"
    "strconv"
    "time"
)

type Redis struct {
    Connector *_redis.Client
    PubSub    *_redis.PubSub
}

var redis *Redis = nil

func NewRedis() bool {
    if redis == nil {
        redis = new(Redis)
        redis.Connector = _redis.NewClient(&_redis.Options{
            Addr:     config.RedisHostname + ":" + strconv.FormatInt(config.RedisPort, 10),
            Password: "",
            DB:       0,
        })
        Logger.Log(nil, "Connected to Redis")

        err := redis.Init()
        if err != nil {
            Logger.Fatal(nil, "Cannot setup Redis:", err.Error())
            return false
        }
        return true
    }
    return false
}

func (this *Redis) Init() error {
    pubsub, err := this.Connector.Subscribe("test")
    if err != nil {
        return err
    }
    defer pubsub.Close()
    this.PubSub = pubsub

    for {
        msgi, err := this.PubSub.ReceiveTimeout(100 * time.Millisecond)
        if err != nil {
            Logger.Error(nil, "PubSub error:", err.Error())
            err = this.PubSub.Ping("")
            if err != nil {
                Logger.Error(nil, "PubSub failure:", err.Error())
                break
            }
            continue
        }

        switch msg := msgi.(type) {
        case *_redis.Message:
            Logger.Log(nil, "Received", msg.Payload, "on channel", msg.Channel)
        }
    }
    return nil
}

My Connector is a redis.Client; it's working, because I was able to publish messages as well.

When I run my program, I get the following error: PubSub error: WSARecv tcp 127.0.0.1:64505: i/o timeout

Do you have any idea what I'm doing wrong? I'm using this package: https://github.com/go-redis/redis

How to write a better Receive() for Redis (redigo) PubSub in Golang?

psc := redis.PubSubConn{c}
psc.Subscribe("example")

func Receive() {
    for {
        switch v := psc.Receive().(type) {
        case redis.Message:
            fmt.Printf("%s: message: %s ", v.Channel, v.Data)
        case redis.Subscription:
            fmt.Printf("%s: %s %d ", v.Channel, v.Kind, v.Count)
        case error:
            return v
        }
    }
}

In the above code (taken from the Redigo docs, https://godoc.org/github.com/garyburd/redigo/redis), if the connection is lost, all subscriptions are also lost. What would be a better way to recover from a lost connection and resubscribe?

Implementing Redis subscribe and publish in Python (the received message type is 'subscribe')

import redis


class SendRedis(object):
    def __init__(self):
        self.pool = redis.ConnectionPool(host='192.168.67.129', port=6379, db=0)
        self.red = redis.StrictRedis(host='192.168.67.129')
        self.msg_key = 'msg_redis'

    def send(self):
        pub = self.red.pubsub()      # start pubsub
        pub.subscribe(self.msg_key)  # subscribe to the channel
        while True:
            msg = input("请输入你要发送的消息(over结束):")
            self.red.publish(self.msg_key, msg)  # publish the message
            if msg == "over":
                print("停止发送")
                break

# receiver
import redis


class SubscribeRedis(object):
    def __init__(self):
        self.pool = redis.ConnectionPool(host='192.168.67.129', port=6379, db=0)
        self.red = redis.StrictRedis(connection_pool=self.pool)
        self.msg_key = 'msg_redis'

    def subscribe(self):
        pub = self.red.pubsub()      # start pubsub
        pub.subscribe(self.msg_key)  # subscribe to the channel
        for item in pub.listen():    # listen: pick up messages as they are published
            print(item)
            if item['type'] == 'message':
                print(item['channel'].decode())
                print(item['data'])
                if item['data'] == 'over':
                    print("%s : 停止发送" % (item['channel'].decode()))
                    pub.unsubscribe(self.msg_key)
                    print("取消了订阅")
                    break
            elif item['type'] == 'subscribe':
                print("获取的类型不对: %s" % item['type'])
                break

# main
import threading
from sendRedis import SendRedis
from subscribeRedis import SubscribeRedis


class MainRedis(object):
    def main(self):
        send = SendRedis()
        sub = SubscribeRedis()
        t1 = threading.Thread(target=send.send(), args=())
        t2 = threading.Thread(target=sub.subscribe(), args=())
        t1.start()
        t2.start()
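Two observations on the code above, offered as hedged suggestions rather than a confirmed answer to this thread: the first item yielded after subscribing is the subscription confirmation, which is why a 'subscribe'-type item shows up before any real 'message'; and threading.Thread(target=send.send(), args=()) calls the method immediately instead of passing it to the thread (it should be target=send.send). A minimal receiver sketch that lets redis-py drop the confirmation items, assuming the same host and channel name as above:

import redis

r = redis.StrictRedis(host='192.168.67.129', port=6379, db=0)
p = r.pubsub(ignore_subscribe_messages=True)  # skip 'subscribe'/'unsubscribe' confirmations
p.subscribe('msg_redis')

for item in p.listen():                 # only real published messages are yielded now
    data = item['data']
    print(item['channel'], data)
    if data in (b'over', 'over'):       # redis-py returns bytes on Python 3, str on Python 2
        p.unsubscribe('msg_redis')
        break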

How to fix the "could not find default credentials" error

I'm building a program based on this link (https://github.com/GoogleCloudPlatform/golang-samples/blob/master/vision/detect/detect.go) for image detection, but calling the detection function from main gives an error. In main I call the function that detects what kind of image it is. The program is given below:

package main

import (
    "bufio"
    "bytes"
    "context"
    "fmt"
    "io"
    "os"

    vision "cloud.google.com/go/vision/apiv1"
)

func init() {
    _ = context.Background()
    _ = vision.ImageAnnotatorClient{}
    _ = os.Open
}

func detectFaces(w io.Writer, file string) error {
    ctx := context.Background()

    client, err := vision.NewImageAnnotatorClient(ctx)
    if err != nil {
        fmt.Println("Hello in function")
        return err
    }

    f, err := os.Open(file)
    if err != nil {
        return err
    }
    defer f.Close()

    image, err := vision.NewImageFromReader(f)
    if err != nil {
        return err
    }
    annotations, err := client.DetectFaces(ctx, image, nil, 10)
    if err != nil {
        return err
    }
    if len(annotations) == 0 {
        fmt.Fprintln(w, "No faces found.")
    } else {
        fmt.Fprintln(w, "Faces:")
        for i, annotation := range annotations {
            fmt.Fprintln(w, "  Face", i)
            fmt.Fprintln(w, "    Anger:", annotation.AngerLikelihood)
            fmt.Fprintln(w, "    Joy:", annotation.JoyLikelihood)
            fmt.Fprintln(w, "    Surprise:", annotation.SurpriseLikelihood)
        }
    }
    return nil
}

func main() {
    var b bytes.Buffer
    writer := bufio.NewWriter(&b)
    err := detectFaces(writer, "aaa.jpg")
    fmt.Println(err)
}

The error is:

google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.

How do I solve this error? Can anyone help me?

How should the Redis parameter client-output-buffer-limit pubsub be tuned? What impact does setting client-output-buffer-limit pubsub 0 0 0 have on the server?

Recently the backend service has occasionally reported this error:

redis.clients.jedis.exceptions.JedisConnectionException: Unexpected end of stream.
    at redis.clients.util.RedisInputStream.ensureFill(RedisInputStream.java:198)
    at redis.clients.util.RedisInputStream.read(RedisInputStream.java:180)
    at redis.clients.jedis.Protocol.processBulkReply(Protocol.java:158)
    at redis.clients.jedis.Protocol.process(Protocol.java:132)
    at redis.clients.jedis.Protocol.processMultiBulkReply(Protocol.java:183)
    at redis.clients.jedis.Protocol.process(Protocol.java:134)
    at redis.clients.jedis.Protocol.read(Protocol.java:192)
    at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:282)
    at redis.clients.jedis.Connection.getRawObjectMultiBulkReply(Connection.java:227)
    at redis.clients.jedis.JedisPubSub.process(JedisPubSub.java:108)
    at redis.clients.jedis.JedisPubSub.proceedWithPatterns(JedisPubSub.java:95)
    at redis.clients.jedis.Jedis.psubscribe(Jedis.java:2513)
    at BenchRedisConsumer$BenchRunner.run(BenchRedisConsumer.java:208)
    at java.lang.Thread.run(Thread.java:745)

For a pubsub client, once the client output buffer exceeds 32mb, or stays above 8mb for 60 seconds, the server immediately disconnects the client. If the Redis parameter is set to client-output-buffer-limit pubsub 0 0 0, what effect does that have on the server? Or should this parameter be adjusted dynamically based on other parameters?

Stackdriver logs not showing up in GKE

I am unable to see log messages that are sent from my GKE clusters using Golang. They work fine when running locally but not from the container running in GKE. Clearly something is misconfigured in GKE, but I don't see any errors and am not really sure where to look. Any insight or places to check would be very useful.

Below are my code and my cluster scopes (if it helps).

Thanks.

Scopes:

oauthScopes:
- https://www.googleapis.com/auth/cloud-platform
- https://www.googleapis.com/auth/compute
- https://www.googleapis.com/auth/datastore
- https://www.googleapis.com/auth/devstorage.full_control
- https://www.googleapis.com/auth/devstorage.read_only
- https://www.googleapis.com/auth/logging.write
- https://www.googleapis.com/auth/monitoring
- https://www.googleapis.com/auth/monitoring.write
- https://www.googleapis.com/auth/pubsub
- https://www.googleapis.com/auth/service.management.readonly
- https://www.googleapis.com/auth/servicecontrol
- https://www.googleapis.com/auth/source.full_control
- https://www.googleapis.com/auth/sqlservice.admin
- https://www.googleapis.com/auth/trace.append

Code:

func LogMessage(logLevel ReddiyoLoggingSeverity, message, domain, transactionID string) {
    ctx := context.Background()

    // Creates a client.
    client, err := logging.NewClient(ctx, loggingData.ProjectID)
    if err != nil {
        log.Fatalf("Failed to create client: %v", err)
    }

    // Selects the log to write to.
    logger := client.Logger(loggingData.LogName)

    labels := make(map[string]string)
    labels["transactionID"] = transactionID
    labels["domain"] = domain

    var logSeverity logging.Severity
    switch logLevel {
    case debug:
        logSeverity = logging.Debug
    case info:
        logSeverity = logging.Info
    case warning:
        logSeverity = logging.Warning
    case reddiyoError:
        logSeverity = logging.Error
    case critical:
        logSeverity = logging.Critical
    case emergency:
        logSeverity = logging.Emergency
    default:
        logSeverity = logging.Warning
    }

    logger.Log(logging.Entry{
        Payload:  message,
        Severity: logSeverity,
        Labels:   labels})

    // Closes the client and flushes the buffer to the Stackdriver Logging
    // service.
    if err := client.Close(); err != nil {
        log.Fatalf("Failed to close client: %v", err)
    }
}

Can't get Google auth to work inside Docker for publishing to pubsub

I'm trying to get my small Go app (pub/sub) to work inside of Docker so I can put it in GKE, but I can't get the auth to work for some reason.

docker run --rm -it gcr.io/snappy-premise-118915/sensorgen:v1
{"pressure":24.10712641247902,"temperature":70.24302653595491,"dewpoint":41.3666446148299,"timecollected":"","latitude":-121.47104803040895,"longitude":0.007102469057958554,"humidity":19.463373213885937,"sensorId":"","zipcode":0}
2018/08/02 07:37:14 Failed to publish: context deadline exceeded

I'm creating the Dockerfile like this:

FROM golang:1.8-alpine
COPY ./ /src
ENV LATITUDE = "-121.464"
ENV LONGITUDE = "36.9397"
ENV SENSORID = "sensor1234"
ENV ZIPCODE = "95023"
ENV INTERVAL = "3"
ENV GOOGLE_CLOUD_PROJECT = "snappy-premise-118915"
RUN apk add --no-cache git && \
    cd /src && \
    go get -t -v cloud.google.com/go/pubsub && \
    CGO_ENABLED=0 GOOS=linux go build main.go

# final stage
FROM alpine
ENV LATITUDE "-121.464"
ENV LONGITUDE "36.9397"
ENV SENSORID "sensor1234"
ENV ZIPCODE "95023"
ENV INTERVAL "3"
ENV GOOGLE_CLOUD_PROJECT "snappy-premise-118915"
ENV GOOGLE_APPLICATION_CREDENTIALS "/app/key.json"
WORKDIR /app
COPY --from=0 /src/main /app/
COPY --from=0 /src/key.json /app/
ENTRYPOINT /app/main

The app does start, as I get the data output, but when it tries to publish to pubsub it seems to hang and then throws this error: 2018/08/02 07:37:14 Failed to publish: context deadline exceeded

------- UPDATE ----------

I changed my Dockerfile to add in x509 certs, but it still seems like a cert issue:

{"pressure":24.13764705280961,"temperature":70.30698990487159,"dewpoint":40.44394673486464,"timecollected":"","latitude":-121.47166212174045,"longitude":0.005826195394839833,"humidity":19.821878333280246,"sensorId":"","zipcode":0}
INFO: 2018/08/02 13:58:09 ccResolverWrapper: sending new addresses to cc: [{pubsub.googleapis.com:443 0 <nil>}]
INFO: 2018/08/02 13:58:09 balancerWrapper: got update addr from Notify: [{pubsub.googleapis.com:443 0} {pubsub.googleapis.com:443 1} {pubsub.googleapis.com:443 2} {pubsub.googleapis.com:443 3}]
WARNING: 2018/08/02 13:58:09 grpc: addrConn.createTransport failed to connect to {pubsub.googleapis.com:443 0 3}. Err :connection error: desc = "transport: authentication handshake failed: x509: failed to load system roots and no roots provided". Reconnecting...

Dockerfile:

FROM golang:1.8-alpine
COPY ./ /src
ENV LATITUDE = "-121.464"
ENV LONGITUDE = "36.9397"
ENV SENSORID = "sensor1234"
ENV ZIPCODE = "95023"
ENV INTERVAL = "3"
ENV GOOGLE_CLOUD_PROJECT = "snappy-premise-118915"
RUN apk add --no-cache git && \
    apk --no-cache --update add ca-certificates && \
    cd /src && \
    go get -t -v cloud.google.com/go/pubsub && \
    CGO_ENABLED=0 GOOS=linux go build main.go

# final stage
FROM alpine
ENV LATITUDE "-121.464"
ENV LONGITUDE "36.9397"
ENV SENSORID "sensor1234"
ENV ZIPCODE "95023"
ENV INTERVAL "3"
ENV GOOGLE_CLOUD_PROJECT "snappy-premise-118915"
ENV GOOGLE_APPLICATION_CREDENTIALS "/app/key.json"
ENV GRPC_GO_LOG_SEVERITY_LEVEL "INFO"
WORKDIR /app
COPY --from=0 /src/main /app/
COPY --from=0 /src/key.json /app/
ENTRYPOINT /app/main
EXPOSE 8080

--------- UPDATE ---------------

I changed my Dockerfile to use different base images, but still no go:

2018/08/02 14:10:40 Could not create pubsub Client: pubsub: google: error getting credentials using GOOGLE_APPLICATION_CREDENTIALS environment variable: open /key.json: no such file or directory

Dockerfile:

FROM golang:1.8 as build-env
WORKDIR /go/src/app
ADD . /go/src/app
COPY key.json /
RUN go-wrapper download   # "go get -d -v ./..."
RUN go-wrapper install

# final stage
FROM gcr.io/distroless/base
ENV LATITUDE "-121.464"
ENV LONGITUDE "36.9397"
ENV SENSORID "sensor1234"
ENV ZIPCODE "95023"
ENV INTERVAL "3"
ENV GOOGLE_CLOUD_PROJECT "snappy-premise-118915"
ENV GOOGLE_APPLICATION_CREDENTIALS "/key.json"
ENV GRPC_GO_LOG_SEVERITY_LEVEL "INFO"
COPY --from=build-env /go/bin/app /
CMD ["/app"]

Google Speech API + Go: transcribing an audio stream of unknown length

I have an RTMP stream of a video call and I want to transcribe it. I have created 2 services in Go and I'm getting results, but it's not very accurate and a lot of data seems to get lost.

Let me explain.

I have a transcode service; I use ffmpeg to transcode the video to Linear16 audio and place the output bytes onto a PubSub queue for a transcribe service to handle. Obviously there is a limit to the size of a PubSub message, and I want to start transcribing before the end of the video call. So I chunk the transcoded data into 3 second clips (not a fixed length, it just seems about right) and put them onto the queue.

The data is transcoded quite simply:

var stdout Buffer

cmd := exec.Command("ffmpeg", "-i", url, "-f", "s16le", "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1", "-")
cmd.Stdout = &stdout

if err := cmd.Start(); err != nil {
    log.Fatal(err)
}

ticker := time.NewTicker(3 * time.Second)

for {
    select {
    case <-ticker.C:
        bytesConverted := stdout.Len()
        log.Infof("Converted %d bytes", bytesConverted)

        // Send the data we converted, even if there are no bytes.
        topic.Publish(ctx, &pubsub.Message{
            Data: stdout.Bytes(),
        })

        stdout.Reset()
    }
}

The transcribe service pulls messages from the queue at a rate of 1 every 3 seconds, helping to process the audio data at about the same rate as it's being created. There are limits on the Speech API stream: it can't be longer than 60 seconds, so I stop the old stream and start a new one every 30 seconds, so we never hit the limit no matter how long the video call lasts.

This is how I'm transcribing it:

stream := prepareNewStream()

clipLengthTicker := time.NewTicker(30 * time.Second)
chunkLengthTicker := time.NewTicker(3 * time.Second)

cctx, cancel := context.WithCancel(context.TODO())
err := subscription.Receive(cctx, func(ctx context.Context, msg *pubsub.Message) {

    select {
    case <-clipLengthTicker.C:
        log.Infof("Clip length reached.")
        log.Infof("Closing stream and starting over")

        err := stream.CloseSend()
        if err != nil {
            log.Fatalf("Could not close stream: %v", err)
        }

        go getResult(stream)
        stream = prepareNewStream()

    case <-chunkLengthTicker.C:
        log.Infof("Chunk length reached.")

        bytesConverted := len(msg.Data)
        log.Infof("Received %d bytes ", bytesConverted)

        if bytesConverted > 0 {
            if err := stream.Send(&speechpb.StreamingRecognizeRequest{
                StreamingRequest: &speechpb.StreamingRecognizeRequest_AudioContent{
                    AudioContent: transcodedChunk.Data,
                },
            }); err != nil {
                resp, _ := stream.Recv()
                log.Errorf("Could not send audio: %v", resp.GetError())
            }
        }

        msg.Ack()
    }
})

I think the problem is that my 3 second chunks don't necessarily line up with the starts and ends of phrases or sentences, and I suspect that the Speech API is a recurrent neural network that has been trained on full sentences rather than individual words. So starting a clip in the middle of a sentence loses some data, because it can't figure out the first few words up to the natural end of a phrase. I also lose some data when changing from an old stream to a new stream; some context is lost. I guess overlapping clips might help with this.

I have a couple of questions:

1) Does this architecture seem appropriate for my constraints (unknown length of audio stream, etc.)?

2) What can I do to improve accuracy and minimise lost data?

(Note I've simplified the examples for readability. Point out if anything doesn't make sense, because I've been heavy-handed in cutting the examples down.)
