doushi9780 2018-06-04 18:19
119 views

Huge performance hit on a simple Go server with Docker

I've tried several things to get to the root of this, but I'm clueless.

The Go program is a single file with a /api/sign endpoint that accepts POST requests. Each POST body carries three fields, which are logged to a sqlite3 database. Pretty basic stuff.
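
For illustration, a server of the kind described might look roughly like the sketch below; the field names, table schema, and database path are assumptions, not the actual code.

    // Hypothetical reconstruction of the server described above; the field
    // names, table schema, and database path are guesses, not the real code.
    package main

    import (
        "database/sql"
        "log"
        "net/http"

        _ "github.com/mattn/go-sqlite3"
    )

    var db *sql.DB

    func signHandler(w http.ResponseWriter, r *http.Request) {
        if r.Method != http.MethodPost {
            http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
            return
        }
        // Three assumed form fields; every request becomes one row in sqlite3.
        name := r.FormValue("name")
        email := r.FormValue("email")
        message := r.FormValue("message")
        if _, err := db.Exec(
            "INSERT INTO signatures(name, email, message) VALUES(?, ?, ?)",
            name, email, message,
        ); err != nil {
            http.Error(w, "database error", http.StatusInternalServerError)
            return
        }
        w.WriteHeader(http.StatusOK)
    }

    func main() {
        var err error
        db, err = sql.Open("sqlite3", "/data/dco.sqlite3")
        if err != nil {
            log.Fatal(err)
        }
        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS signatures(
            name TEXT, email TEXT, message TEXT)`); err != nil {
            log.Fatal(err)
        }
        http.HandleFunc("/api/sign", signHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }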

I wrote a simple Dockerfile to containerize it. It uses golang:1.7.4 to build the binary and copies it over to alpine:3.6 for the final image. Once again, nothing fancy.

I use wrk to benchmark performance. With 8 threads and 1k connections for 50 seconds (wrk -t8 -c1000 -d50s -s post.lua http://server.com/api/sign) and a Lua script to generate the POST requests, I measured requests per second in the following situations. In every case, wrk runs from my laptop and the server is a DigitalOcean VPS (2 vCPUs, 2 GB RAM, SSD, Debian 9.4) that's very close to me.

  • Directly running the binary produced 2979 requests/sec.

  • Docker (docker run -it -v $(pwd):/data -p 8080:8080 image) produced 179 requests/sec.

As you can see, the Docker version is over 16x slower than running the binary directly. Everything else is the same during both experiments.

I've tried the following things, and there is practically no improvement in the Docker version's performance:

  • Tried using host networking instead of bridge. There was a slight increase to around 190 requests/sec, but it's still miserable.

  • Tried increasing the limit on the number of file descriptors in the container version with --ulimit nofile=262144:262144. No improvement.

  • Tried different go versions, nothing.

  • Tried debian:9.4 for the final image instead of alpine:3.7, in case it was musl that was performing terribly. Nothing here either.

  • (Edit) Tried running the container without a mounted volume and there's still no performance improvement.

I'm out of ideas at this point. Any help would be much appreciated!


1 answer

  • douchilian1009 2018-06-04 18:57

    Using an in-memory sqlite3 database completely solved all performance issues!

    db, err = sql.Open("sqlite3", "file:dco.sqlite3?mode=memory")
    

    I knew there was a disk I/O penalty associated with Docker's abstractions (even on Linux; I've heard it's worse on macOS), but I didn't know it would be ~16x.
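
    A side note that is not part of the original answer: database/sql keeps a pool of connections, and an in-memory database opened with a plain mode=memory DSN is private to each connection, so every pooled connection would see its own empty copy. Adding cache=shared to the URI lets the pooled connections share a single in-memory database:

    db, err = sql.Open("sqlite3", "file:dco.sqlite3?mode=memory&cache=shared")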

    Edit: Using an in-memory database isn't really an option most of the time. So I found another sqlite-specific solution. Before all database operations, do this to switch sqlite to WAL mode instead of the default rollback journal:

    PRAGMA journal_mode=WAL;
    PRAGMA synchronous=NORMAL;
    

    This dramatically improved the Docker version's performance to over 2.7k requests/sec!
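
    In Go, one way to apply these pragmas is to run them once right after opening the database, before the server starts handling traffic. Below is a minimal sketch assuming the mattn/go-sqlite3 driver and a hypothetical openDB helper; it is an illustration, not the poster's code.

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/mattn/go-sqlite3"
    )

    // openDB opens the sqlite3 database and switches it to WAL mode with
    // relaxed fsync behaviour, mirroring the pragmas above.
    func openDB(path string) (*sql.DB, error) {
        db, err := sql.Open("sqlite3", path)
        if err != nil {
            return nil, err
        }
        // journal_mode=WAL is persisted in the database file itself;
        // synchronous=NORMAL applies to the connection that runs it.
        for _, pragma := range []string{
            "PRAGMA journal_mode=WAL;",
            "PRAGMA synchronous=NORMAL;",
        } {
            if _, err := db.Exec(pragma); err != nil {
                db.Close()
                return nil, err
            }
        }
        return db, nil
    }

    func main() {
        db, err := openDB("dco.sqlite3")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
        log.Println("sqlite3 database opened in WAL mode")
    }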

