doulan9419 2014-01-25 03:04

Golang Mgo Pacing

I am writing an application that writes to MongoDB rapidly. Too rapidly for mongod and mgo to handle. My question is: is there a way for me to detect that Mongo cannot keep up and to start blocking, without blocking unnecessarily? Here is a sample of code that reproduces the problem:

package main

import (
  "time"

  "labix.org/v2/mgo"
)

// in the database, name and breed are strings

type Dog struct {
  Breed string `bson:"breed"`
}

type Person struct {
  Name string `bson:"name"`
  Pet  Dog    `bson:",inline"`
  Ts   time.Time
}

func insert(session *mgo.Session, bob Person) {
  err := session.DB("db_log").C("people").Insert(&bob)
  if err != nil {
    panic("Could not insert into database")
  }
}

func main() {
  session, err := mgo.Dial("localhost:27017")
  if err != nil {
    panic(err)
  }

  bob := Person{Name: "Robert", Pet: Dog{}}
  for {
    // Fire off an insert roughly every microsecond; this quickly outruns
    // what a single shared session and mongod can keep up with.
    time.Sleep(time.Microsecond)
    go insert(session, bob)
  }
}

I often get errors like:

panic: Could not insert into database

or

panic: write tcp 127.0.0.1:27017: i/o timeout

2 answers

  • drecy22400 2014-01-25 05:27

    I suspect you will get much better performance if you allow Go to use multiple threads and Copy() then Close() your sessions.
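
    A minimal sketch of what that could look like as a drop-in replacement for the question's insert(): each goroutine works on its own copy of the root session and closes it when done, instead of serializing every write on one shared session.

    // insert copies the root session so each goroutine gets its own socket
    // from the pool rather than contending for the single shared one.
    func insert(root *mgo.Session, bob Person) {
      session := root.Copy()
      defer session.Close()

      if err := session.DB("db_log").C("people").Insert(&bob); err != nil {
        panic("Could not insert into database")
      }
    }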

    To answer your question, this is probably a perfect use case for a channel. Feed the items into the channel in one goroutine and consume them/write them to Mongo in another. You can adjust the size of the channel to suit your needs; once the channel is full, the producer goroutine will block when it tries to send to it.
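
    A sketch of that producer/consumer setup, reusing the database and collection names from the question. The buffer size of 1000 and the four writer goroutines are arbitrary placeholders to tune, not recommended values.

    package main

    import (
      "log"
      "runtime"
      "time"

      "labix.org/v2/mgo"
    )

    type Person struct {
      Name string `bson:"name"`
      Ts   time.Time
    }

    func main() {
      // Before Go 1.5 the runtime used a single OS thread by default;
      // this lets the writer goroutines actually run in parallel.
      runtime.GOMAXPROCS(runtime.NumCPU())

      root, err := mgo.Dial("localhost:27017")
      if err != nil {
        log.Fatal(err)
      }
      defer root.Close()

      // The buffered channel is the pacing mechanism: once it holds 1000
      // pending documents, sends block until the writers catch up.
      queue := make(chan Person, 1000)

      // A small pool of writers, each with its own copy of the session.
      for i := 0; i < 4; i++ {
        go func() {
          session := root.Copy()
          defer session.Close()
          people := session.DB("db_log").C("people")
          for p := range queue {
            if err := people.Insert(&p); err != nil {
              log.Println("insert failed:", err)
            }
          }
        }()
      }

      // Producer: blocks automatically whenever the writers fall behind.
      for {
        queue <- Person{Name: "Robert", Ts: time.Now()}
      }
    }

    Because the only coordination point is the channel, throughput can be tuned by changing the buffer size and the number of writers independently.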

    You may also want to play with the session's Safe() settings (configured via SetSafe()). Setting W: 0 puts Mongo in a "fire and forget" mode, which will dramatically speed up performance at the risk of losing some data. You can also change the timeout.
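
    For reference, a sketch of the relevant knobs in mgo. The values are placeholders; note that in mgo passing a nil Safe value is what gives true fire-and-forget writes, while a non-nil value controls the write concern and its timeout.

    // Fire and forget: no acknowledgement is requested from the server.
    session.SetSafe(nil)

    // Or keep acknowledgements but tune the write concern and timeouts.
    session.SetSafe(&mgo.Safe{W: 1, WTimeout: 5000}) // WTimeout is in milliseconds
    session.SetSocketTimeout(1 * time.Minute)        // socket deadline (related to the "i/o timeout" error above)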

    This answer was accepted by the asker.
