doukang5966907
2019-06-05 09:27 · 150 views

How to optimize response time for a simple web and database application

I'm new to golang and also to database optimizations.

I have a simple app written in Go with a MySQL database, where queries are initiated through the web.

Responding to a request takes around 5 s, sometimes a bit more. Is it possible to optimize this somehow?

Also, after refreshing several times, the response time can climb to 50 s or more, and errors such as "invalid memory address or nil pointer dereference" or "Error 1040: Too many connections" can appear.

How can I avoid this and keep all requests within a manageable time frame?

This is the table structure:

    CREATE TABLE sportsmen (
        sp_no int(11) NOT NULL,
        birth_date date NOT NULL,
        first_name varchar(14) NOT NULL,
        last_name varchar(16) NOT NULL,
        gender enum('M','F') NOT NULL,
        start_date date NOT NULL,
        PRIMARY KEY (sp_no)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    CREATE TABLE points (
        sp_no INT NOT NULL,
        point INT NOT NULL,
        date DATE NOT NULL
    );

There are around 300,000 records in sportsmen and 1,000,000 records of their points.

This is the function that is called on every request:

    var db *sql.DB

    func init() {
        db, _ = sql.Open("mysql", "<connection>?charset=utf8")
        //checkErr(errcon)
        err := db.Ping()
        checkErr(err)
        yt := reflect.TypeOf(db).Kind()
        fmt.Printf("%T: %s\n", yt, yt)
    }

    func sportsmanPoints(w http.ResponseWriter, r *http.Request) {

        start := time.Now()

        sportsmen, err := db.Query("SELECT sp_no, first_name FROM sportsmen LIMIT ?,20", rand.Intn(100000))
        checkErr(err)

        for sportsmen.Next() {
            var spNo string
            var firstName string
            err = sportsmen.Scan(&spNo, &firstName)
            checkErr(err)
            spPoints, err := db.Query("SELECT max(point) FROM points WHERE sp_no =" + spNo)
            checkErr(err)
            for spPoints.Next() {
                var spPoint int
                err = spPoints.Scan(&spPoint)
                checkErr(err)
                points.Data = append(points.Data, Point{Name: firstName, Point: spPoint})
            }
        }

        data, err := json.Marshal(points.Data)
        if err != nil {
            log.Fatal(err)
        }

        fmt.Fprintln(w, string(data))
        elapsed := time.Since(start)
        fmt.Println("Date:", time.Now(), "Response time:", elapsed)
        points.Data = nil
        data = nil
    }

    func checkErr(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        http.HandleFunc("/", sportsmanPoints)
        err := http.ListenAndServe(":9090", nil)
        if err != nil {
            log.Fatal("ListenAndServe: ", err)
        }
    }

Thank you.


2 answers

  • Accepted
    dsh7623 2019-06-05 09:31

    You connect to your database in your handler, every time a request is made to your server. This operation alone can take multiple seconds. Once the handler returns, you just throw away that connection (you don't even close it, so it may sit idle for some time before being closed, hogging db server resources, as connections are most likely limited on the server). Don't do that.

    Connect to your database once, on app startup, and use this connection in your handler. The database will keep idle connections open, ready to be reused immediately should you need it in another request.

    Move your db variable outside, e.g. to a package-level variable, connect to your database and initialize this db variable once, e.g. in your main() or in a package init() function, and just use it in your handler.

    sql.Open() documents that:

    The returned DB is safe for concurrent use by multiple goroutines and maintains its own pool of idle connections. Thus, the Open function should be called just once. It is rarely necessary to close a DB.

    See similar question: mgo - query performance seems consistently slow (500-650ms)
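    The pattern can be sketched as follows. This is a minimal, self-contained illustration: it registers a no-op stub driver so the snippet runs without a MySQL server, and the pool limits (50 open, 10 idle, 5-minute lifetime) are arbitrary example values, not recommendations; in the real app you would keep the mysql driver import and the DSN from the question.

    ```go
    package main

    import (
        "database/sql"
        "database/sql/driver"
        "errors"
        "fmt"
        "time"
    )

    // stubDriver stands in for the real MySQL driver so this sketch runs
    // without a database server; in the real app you would instead import
    // _ "github.com/go-sql-driver/mysql" and open "mysql" with your DSN.
    type stubDriver struct{}
    type stubConn struct{}

    func (stubDriver) Open(name string) (driver.Conn, error) { return stubConn{}, nil }
    func (stubConn) Prepare(q string) (driver.Stmt, error)   { return nil, errors.New("unused in sketch") }
    func (stubConn) Close() error                            { return nil }
    func (stubConn) Begin() (driver.Tx, error)               { return nil, errors.New("unused in sketch") }

    var db *sql.DB // package-level: shared by all handlers

    func main() {
        sql.Register("stub", stubDriver{})

        var err error
        db, err = sql.Open("stub", "user:pass@/sportsdb") // lazy: no connection made yet
        if err != nil {
            panic(err)
        }

        // Cap the pool below MySQL's max_connections so bursts of requests
        // queue for a free connection instead of hitting "Error 1040".
        db.SetMaxOpenConns(50)
        db.SetMaxIdleConns(10)
        db.SetConnMaxLifetime(5 * time.Minute)

        if err := db.Ping(); err != nil { // forces one real connection
            panic(err)
        }
        fmt.Println("pool ready, max open:", db.Stats().MaxOpenConnections)
    }
    ```

    The handler then uses the shared db directly; the pool hands it an idle connection, or makes it wait if all 50 are busy, rather than ever opening an unbounded number of connections.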

  • douti9253 2019-06-05 21:25
        db.Query("SELECT sp_no, first_name FROM sportsmen LIMIT ?,20", rand.Intn(100000))

    Terrible performance. And poor results.

    • It will only pick from the first 1/3 of the table (cf 100000 vs 300000).

    • It will pick the same 20 or overlapping 20 occasionally.

    • It must skip over up to 100000 rows before finding the 20. (This is the performance problem.)

    I have multiple ways to improve on all three of those issues: http://mysql.rjweb.org/doc.php/random

    And by speeding up the query, the "too many connections" is likely to go away.
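    For instance, the per-sportsman max(point) lookups inside the loop could be collapsed into one grouped join. A sketch, not a drop-in fix: the keyed range here is only one of the techniques from the linked article, and it assumes an index on points(sp_no), which the posted schema lacks.

    ```sql
    -- assumed: points(sp_no) gets an index (the posted CREATE TABLE has none,
    -- so each max(point) lookup currently scans all 1,000,000 rows)
    ALTER TABLE points ADD INDEX idx_sp_no (sp_no);

    -- one query instead of one query per sportsman
    SELECT s.sp_no, s.first_name, MAX(p.point) AS best_point
    FROM sportsmen AS s
    JOIN points AS p ON p.sp_no = s.sp_no
    WHERE s.sp_no >= ?            -- e.g. a random starting sp_no
    GROUP BY s.sp_no, s.first_name
    ORDER BY s.sp_no
    LIMIT 20;
    ```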

