doulan7166 2017-04-02 14:52

Transform SQL result to JSON as fast as possible

I'm trying to transform the result of Go's built-in database/sql package to JSON. I'm using goroutines for that, but I've run into problems.

The base problem:

There is a really big database with around 200k users, and I have to serve them through TCP sockets in a microservice-based system. Fetching the users from the database takes about 20 ms, but transforming that data to JSON takes about 10 seconds with the current solution. That is why I want to use goroutines.

Solution with Goroutines:

func getJSON(rows *sql.Rows, cnf configure.Config) ([]byte, error) {
    log := logan.Log{
        Cnf: cnf,
    }

    cols, _ := rows.Columns()

    defer rows.Close()

    done := make(chan struct{})
    go func() {
        defer close(done)
        for result := range resultChannel {
            results = append(
                results,
                result,
            )
        }
    }()

    wg.Add(1)
    go func() {
        for rows.Next() {
            wg.Add(1)
            go handleSQLRow(cols, rows)
        }
        wg.Done()
    }()

    go func() {
        wg.Wait()
        defer close(resultChannel)
    }()

    <-done

    s, err := json.Marshal(results)
    results = []resultContainer{}
    if err != nil {
        log.Context(1).Error(err)
    }
    rows.Close()
    return s, nil
}

func handleSQLRow(cols []string, rows *sql.Rows) {
    defer wg.Done()
    result := make(map[string]string, len(cols))
    fmt.Println("asd -> " + strconv.Itoa(counter))
    counter++
    rawResult := make([][]byte, len(cols))
    dest := make([]interface{}, len(cols))

    for i := range rawResult {
        dest[i] = &rawResult[i]
    }
    rows.Scan(dest...) // GET PANIC
    for i, raw := range rawResult {
        if raw == nil {
            result[cols[i]] = ""
        } else {
            fmt.Println(string(raw))
            result[cols[i]] = string(raw)
        }
    }
    resultChannel <- result
}
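
The snippets above rely on package-level state (wg, resultChannel, results, counter, resultContainer) that is not shown in the post. A plausible set of declarations, as an assumption and not code from the original, would be:

// Package-level state assumed by getJSON and handleSQLRow
// (not shown in the original post).
type resultContainer map[string]string

var (
    wg            sync.WaitGroup
    resultChannel = make(chan resultContainer)
    results       []resultContainer
    counter       int // only used for the debug print in handleSQLRow
)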

This solution gives me a panic with the following message:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x45974c]

goroutine 408 [running]:
panic(0x7ca140, 0xc420010150)
    /usr/lib/golang/src/runtime/panic.go:500 +0x1a1
database/sql.convertAssign(0x793960, 0xc420529210, 0x7a5240, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/database/sql/convert.go:88 +0x1ef1
database/sql.(*Rows).Scan(0xc4203e4060, 0xc42021fb00, 0x44, 0x44, 0x44, 0x44)
    /usr/lib/golang/src/database/sql/sql.go:1850 +0xc2
github.com/PumpkinSeed/zerodb/operations.handleSQLRow(0xc420402000, 0x44, 0x44, 0xc4203e4060)
    /home/loow/gopath/src/github.com/PumpkinSeed/zerodb/operations/operations.go:290 +0x19c
created by github.com/PumpkinSeed/zerodb/operations.getJSON.func2
    /home/loow/gopath/src/github.com/PumpkinSeed/zerodb/operations/operations.go:258 +0x91
exit status 2

The current solution, which works but takes too much time:

func getJSON(rows *sql.Rows, cnf configure.Config) ([]byte, error) {
    log := logan.Log{
        Cnf: cnf,
    }
    var results []resultContainer
    cols, _ := rows.Columns()
    rawResult := make([][]byte, len(cols))
    dest := make([]interface{}, len(cols))

    for i := range rawResult {
        dest[i] = &rawResult[i]
    }

    defer rows.Close()

    for rows.Next() {
        result := make(map[string]string, len(cols))
        rows.Scan(dest...)
        for i, raw := range rawResult {
            if raw == nil {
                result[cols[i]] = ""
            } else {
                result[cols[i]] = string(raw)
            }
        }
        results = append(results, result)
    }
    s, err := json.Marshal(results)
    if err != nil {
        log.Context(1).Error(err)
    }
    rows.Close()
    return s, nil
}

Question:

Why does the goroutine solution give me an error? It is not an obvious panic, because the first ~200 goroutines run properly.

UPDATE

Performance test for the original working solution:

INFO[0020] setup taken -> 3.149124658s                   file=operations.go func=operations.getJSON line=260 service="Database manager" ts="2017-04-02 19:45:27.132881211 +0100 BST"
INFO[0025] toJSON taken -> 5.317647046s                  file=operations.go func=operations.getJSON line=263 service="Database manager" ts="2017-04-02 19:45:32.450551417 +0100 BST"

The SQL-to-map conversion takes about 3 seconds and the JSON marshaling about 5 seconds.
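
For reference, timings like the two log lines above can be produced by wrapping each phase with time.Since. A minimal sketch follows; scanRows is a hypothetical helper standing in for the row-scanning loop of the working solution, and log.Printf is the standard library rather than the logan package:

// Sketch of timing the two phases; scanRows is a hypothetical helper
// standing in for the row-scanning loop of the working getJSON.
func getJSONTimed(rows *sql.Rows) ([]byte, error) {
    setupStart := time.Now()
    results, err := scanRows(rows) // sql-to-map phase ("setup")
    if err != nil {
        return nil, err
    }
    log.Printf("setup taken -> %s", time.Since(setupStart))

    jsonStart := time.Now()
    s, err := json.Marshal(results) // map-to-JSON phase ("toJSON")
    log.Printf("toJSON taken -> %s", time.Since(jsonStart))
    return s, err
}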


1 answer

  • douan0729 2017-04-02 16:12

    Goroutines won't improve performance on CPU-bound operations like JSON marshaling. What you need is a more efficient JSON marshaler. There are some available, although I haven't used any; a simple Google search for 'faster JSON marshaling' will turn up many results. A popular one is ffjson (see the sketch below). I suggest starting there.
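
    For illustration, ffjson exposes a Marshal function with the same signature as encoding/json, so the call site in getJSON would barely change. This is only a sketch; the real gains come from running the ffjson generator against concrete struct types, since for plain maps ffjson.Marshal falls back to encoding/json:

    // Drop-in replacement for json.Marshal at the end of getJSON;
    // requires: import "github.com/pquerna/ffjson/ffjson"
    s, err := ffjson.Marshal(results)
    if err != nil {
        log.Context(1).Error(err)
    }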

    This answer was accepted by the asker.
