weixin_39924307
2021-01-07 16:20

Connection reset by peer issue (HAProxy + uWSGI)

After we activated health checks in HAProxy, we started to see an enormous number of Connection reset by peer errors.


[uwsgi-http key: 3152c40a9f48 client_addr: x.x.x.x client_port: 36056] hr_read(): Connection reset by peer [plugins/http/http.c line 916]

We are using the latest version of uWSGI (2.0.15).

uWSGI configuration:


# Reload if rss memory is higher than specified megabytes
if-env=UWSGI_RELOAD_ON_RSS
reload-on-rss=$(UWSGI_RELOAD_ON_RSS)
endif=

threads-stacksize=128

if-env=UWSGI_THREADS_STACKSIZE
threads-stacksize=$(UWSGI_THREADS_STACKSIZE)
endif=

module=xxxxx.wsgi:application

# Running uwsgi without a master is a bad idea in any case, according to
# Roberto De Ioris, the main developer.
master=True

# Try to remove all of the generated files/sockets.
vacuum=True

# Set the max size of a request (request-body excluded), this generally maps to
# the size of request headers. By default it is 4k. If you receive a bigger
# request (for example with big cookies or query string) you may need to
# increase it. It is a security measure too, so adapt to your app needs instead
# of maxing it out.
buffer-size=32768    

# Exit instead of brutal reload on SIGTERM.
# If uwsgi is run under e.g. supervisor, this will make it behave nicely.
die-on-term=True

# Enable python threads, for background threads or threads concurrency model.
enable-threads=True

# protect against thundering herd
# https://uwsgi-docs.readthedocs.io/en/latest/articles/SerializingAccept.html
thunder-lock=true

# Serve on port 8000 unless overridden by PORT env var
http=0.0.0.0:8000


# Use the processes+threads concurrency model.
processes=4
threads=20

# Run as nobody
uid=nobody
gid=nobody
pidfile=/tmp/xxxxx-uwsgi.pid

# Stats socket for monitoring
stats=/tmp/xxxx-uwsgi-stats.socket
memory-report=True

# Restart uwsgi workers if any request takes more than 120 seconds.
harakiri=120

# directly serve .gz version of gzip files to clients that support it
static-gzip-ext=css
static-gzip-ext=js
static-gzip-ext=map
static-gzip-ext=svg
static-gzip-ext=ttf

# dedicate threads to static file serving
offload-threads=4

# We are logging requests in app. We don't need server request logging
disable-logging=True
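Since the resets started right after health checks were enabled, one pattern worth ruling out (an illustrative sketch, not something confirmed in this report): HAProxy's default layer-4 check opens a TCP connection and closes it without sending a request, which uWSGI's HTTP router can log as `hr_read(): Connection reset by peer`. Switching to a layer-7 HTTP check makes each probe a complete request. The backend name, server address, and `/health` path below are assumptions:

```
backend app
    # Layer-7 check: send a real HTTP request instead of a bare TCP connect,
    # so the uWSGI http router sees a complete request rather than an
    # aborted connection. Path and addresses are illustrative.
    option httpchk GET /health
    server app1 10.0.0.10:8000 check
```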

This question originates from the open-source project: unbit/uwsgi


15 replies

  • weixin_39709674 (4 months ago)

    Did you ever find a solution to this?
  • weixin_39924307 (4 months ago)

    Nope.
  • weixin_39946300 (4 months ago)

    I'm hitting the same problem and have no solution!
  • weixin_39542936 (4 months ago)

    Same problem here too!
  • weixin_39757626 (4 months ago)

    Try adding these lines to your uwsgi.ini file:

    buffer-size = 32768
    post-buffering = 32768
  • weixin_39722375 (4 months ago)

    Did you find a solution? This happens occasionally for me; I cannot find the reason.
  • weixin_39722375 (4 months ago)

    I've set my uwsgi config like that, but it doesn't work.
  • weixin_39540725 (4 months ago)

    Any update on this issue? Or should we open an issue on the HAProxy side?
  • weixin_39901404 (4 months ago)

    Using uwsgi with Docker and jwilder/nginx-proxy (basic config) + Django:

    ```
    [uwsgi-http key: x.x.x.x.x client_addr: 172.17.0.2 client_port: 47324] hr_write(): Connection reset by peer [plugins/http/http.c line 565]
    ```

    ```ini
    [uwsgi]
    http = :8080
    module = demo.wsgi
    master = 1
    processes = 4
    threads = 2
    stats = :8081
    static-map = /static=/app/static
    static-expires = /* 7776000
    offload-threads = %k
    ```

    The error appeared a few times when I first started the container, then stopped showing up.

    Edit: It has since happened again.
  • weixin_39722375 (4 months ago)

    On my laptop (4 cores / 16 GB), I used Apache ab to load-test my HTTP server and found that hr_read(): Connection reset by peer appears more often when processes is set to a small number and the ab concurrency is high. So I think one main cause of hr_read(): Connection reset by peer is insufficient HTTP server capacity.

    "uwsgi: What defines the number of workers/process that a django app needs?" suggests that the process count should be confirmed through testing.

    On my server (8 cores / 16 GB), I set processes to 128.
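The advice above (confirm the process count by testing) needs a starting point. A commonly cited rule of thumb for WSGI servers is (2 × CPU cores) + 1 workers; this is the gunicorn heuristic, not anything from uWSGI's documentation, and the function name below is made up for illustration:

    ```python
    # Illustrative heuristic only: start load testing from (2 * cores) + 1
    # worker processes, then adjust based on measured throughput.
    import multiprocessing

    def suggested_processes(cores=None):
        """Return a starting-point worker count for load testing."""
        if cores is None:
            cores = multiprocessing.cpu_count()
        return 2 * cores + 1

    print(suggested_processes(8))  # 17 for an 8-core host
    ```

The number you end up with under real load may differ substantially, as the 128-process setting above shows.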
  • weixin_39884832 (4 months ago)

    It might be that uWSGI is killing workers that still have a request in progress, depending on settings like these:

    
    max-requests = 1000                  ; Restart workers after this many requests
    max-worker-lifetime = 3600           ; Restart workers after this many seconds
    reload-on-rss = 2048                 ; Restart workers after this much resident memory
    worker-reload-mercy = 60             ; How long to wait before forcefully killing workers
  • weixin_39532754 (4 months ago)

    The same problem with uwsgi 2.0.18:

    
    100.97.208.244 - - [12/Jun/2020:01:07:35 +0800] "HEAD / HTTP/1.0" 200 93152 226 "-" "-"
    [uwsgi-http key: iZuf61ouik5mwrs7ft8ompZ client_addr: 100.97.208.244 client_port: 6217] hr_write(): Connection reset by peer [plugins/http/http.c line 565]
    [uwsgi-http key: iZuf61ouik5mwrs7ft8ompZ client_addr: 100.97.208.154 client_port: 26633] hr_write(): Connection reset by peer [plugins/http/http.c line 565]
    100.97.208.154 - - [12/Jun/2020:01:07:35 +0800] "HEAD / HTTP/1.0" 200 93152 245 "-" "-"
    100.97.208.152 - - [12/Jun/2020:01:07:35 +0800] "HEAD / HTTP/1.0" 200 93152 275 "-" "-"
    [uwsgi-http key: iZuf61ouik5mwrs7ft8ompZ client_addr: 100.97.208.152 client_port: 47282] hr_write(): Connection reset by peer [plugins/http/http.c line 565]
    100.97.208.204 - - [12/Jun/2020:01:07:35 +0800] "HEAD / HTTP/1.0" 200 93152 282 "-" "-"
  • weixin_39827306 (4 months ago)

    I solved it by raising these values: proxy_buffer_size 64k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k;
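For readers unsure where those directives go: they are nginx proxy-buffer settings and belong in the `http`, `server`, or `location` block that proxies to uWSGI. A minimal sketch follows; only the three buffer directives come from the comment above, and the server name, port, and upstream address are illustrative assumptions:

    ```
    # Hypothetical nginx vhost; everything except the three proxy_buffer*
    # directives is illustrative boilerplate.
    server {
        listen 80;
        server_name example.internal;

        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_buffer_size 64k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
        }
    }
    ```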
  • weixin_39883065 (4 months ago)

    Any solution? Where did you put those values?
  • weixin_39884832 (4 months ago)

    Those are nginx configuration options.

    In our case the timeouts are happening within the cluster, with no haproxy/nginx in between. So far our solution has been to migrate to Node.js.
