I'm new to Go and trying to write a simple web server with JSON responses. One of the endpoints accesses a database. With a static-like response (any static string, or a simple data conversion) the average response time is about 0s (<1ms). As soon as I add a database request, the average time grows to about 250ms, regardless of what the function actually does with the data. So I'm trying to understand the timing of this function.
What I already know:

- The query to the DB costs about 40-50ms
- The datetime transformation costs >1ms
- `json.Marshal` costs <1ms
- `string(json.Marshal(...))` costs <1ms

(The sketch right below shows how I time each step in isolation.)
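For reference, a self-contained sketch of how I measure one step; the `databaseRows` fields are my guess at the real struct, and the 10000-row payload is made up:

```go
package main

import (
	"encoding/json"
	"log"
	"time"
)

// Stand-in for my real row type; the exact field types are assumptions.
type databaseRows struct {
	ItemIds          string
	DtRecommendation time.Time
	Number           int
}

func main() {
	got := make([]databaseRows, 10000) // made-up payload size

	start := time.Now()
	jsonData, err := json.Marshal(&got)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("json.Marshal of %d rows took %s (%d bytes)",
		len(got), time.Since(start), len(jsonData))
}
```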
My whole problem is in `rows.Next()`:

- Everything inside the `rows.Next()` loop body costs <1ms on its own
- Each `rows.Next()` call itself costs 10-50ms, depending on the row length

As I understand it, `db.Query()` fetches all rows in one request and keeps them in memory, providing a virtual cursor as an iterator (`.Next()`). Why does each step of the cursor cost so much? (I try to observe what actually travels over the wire with the driver's debug output; see the sketch below.)
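A sketch of how I would check this: re-open the connection with the driver's debug output enabled. `debug` and `block_size` are DSN options listed in the kshvakov/clickhouse README; the address is a placeholder for my server:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/kshvakov/clickhouse"
)

func main() {
	// debug=true makes the driver log its reads/writes, which should show
	// whether the result arrives in one response or block by block as
	// rows.Next() advances. block_size is the driver's rows-per-block setting.
	db, err := sql.Open("clickhouse", "tcp://127.0.0.1:9000?debug=true&block_size=100000")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
```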
- All the `rows.Next()` data processing together costs about 180ms. Together with the ~40ms `db.Query`, that roughly adds up to the total response time (~240ms)
- If I comment out the whole `rows.Next()` loop and only run `db.Query`, the total response still costs 200-240ms
Packages I use:
```go
import (
	"database/sql"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strconv"
	"time"

	"github.com/kshvakov/clickhouse"
	routing "github.com/qiangxue/fasthttp-routing"
	"github.com/valyala/fasthttp"
)
```
```go
func getRecs(client string, dt string) string {
	dtT := transformDate(&dt)
	rows, err := db.Query(getRecsRequest(&client, &dtT))
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	got := []databaseRows{}
	start := time.Now()
	for rows.Next() {
		// Time each cursor step: this is where the 10-50ms per row shows up.
		log.Printf("Next took %s", time.Since(start))
		var r databaseRows
		if err := rows.Scan(&r.ItemIds, &r.DtRecommendation, &r.Number); err != nil {
			log.Fatal(err)
		}
		got = append(got, r)
		start = time.Now()
	}
	if err := rows.Err(); err != nil { // surface any error that ended the loop
		log.Fatal(err)
	}
	jsonData, err := json.Marshal(&got)
	if err != nil {
		log.Fatal(err)
	}
	return string(jsonData)
}
```
```go
// The same handler with the rows.Next() loop commented out: db.Query still
// runs, but no rows are ever read.
func getRecs(client string, dt string) string {
	dtT := transformDate(&dt)
	rows, err := db.Query(getRecsRequest(&client, &dtT))
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	got := "[]databaseRows{}" // placeholder so json.Marshal still returns something
	// for rows.Next() {
	// 	var r databaseRows
	// 	if err := rows.Scan(&r.ItemIds, &r.DtRecommendation, &r.Number); err != nil {
	// 		log.Fatal(err)
	// 	}
	// 	got = append(got, r)
	// }
	jsonData, err := json.Marshal(&got)
	if err != nil {
		log.Fatal(err)
	}
	return string(jsonData)
}
```
Can I expect the web server's response time to be close to the database request time alone?
UPDATE.

I've tried "github.com/jmoiron/sqlx" with "github.com/kshvakov/clickhouse" and the result was the same; only the syntax changed.

I also tried "github.com/mailru/go-clickhouse" with "database/sql". It talks to ClickHouse over its HTTP interface. I got all rows in one request as expected, but the HTTP request took about 240ms and the data processing about 0ms. So I'm looking for a way to keep the TCP connection open, so that I can get all the necessary rows in one request and work with a memory buffer on my program's side (with some buffer limit, in combination with the TCP/socket buffer). One thing I want to rule out is the pool dropping idle connections between requests; see the sketch below.
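A minimal sketch of pinning the database/sql pool so TCP connections stay open between requests; the limits are arbitrary and the DSN is a placeholder:

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/kshvakov/clickhouse"
)

func main() {
	db, err := sql.Open("clickhouse", "tcp://127.0.0.1:9000")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	db.SetMaxOpenConns(10)           // cap concurrent connections
	db.SetMaxIdleConns(10)           // keep idle connections instead of closing them
	db.SetConnMaxLifetime(time.Hour) // recycle connections only after an hour
}
```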
UPDATE 2.

OK, I've tried the native ClickHouse client (and other tools such as DBeaver), and roughly 250ms seems to be the floor for this request. But that may just be a limitation of ClickHouse's native TCP interface. What I still don't understand is the 40-50ms `db.Query()` operation that returns no rows: is that the query executing on the server side, or just a query-string check? To separate the network round trip from the query work, I time a bare ping against a trivial query (sketch below).
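A sketch of that comparison; it assumes the same global `db` handle as above, and `SELECT 1` stands in for a query that returns almost nothing:

```go
func timeRoundTrips() {
	// A bare round trip on a pooled connection: mostly network latency.
	start := time.Now()
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Printf("ping took %s", time.Since(start))

	// A trivial query: parse + plan + respond, with almost no data to return.
	// If this also takes ~40-50ms, the cost is per-query overhead, not my data.
	start = time.Now()
	rows, err := db.Query("SELECT 1")
	if err != nil {
		log.Fatal(err)
	}
	rows.Close()
	log.Printf("trivial query took %s", time.Since(start))
}
```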