I'm running a server with about 140,000 page views a day (per Google Analytics).
Each php-fpm process uses roughly 10-12 MB.
The server has 10 GB of RAM; MySQL uses 1.2-1.6 GB.
The configuration looks like this:
nginx:

user nginx;
worker_processes 4;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log /var/log/nginx/access.log main;
    access_log off;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 10;
    client_max_body_size 20M;
    server_tokens off;

    include /etc/nginx/conf.d/*.conf;
}
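For what it's worth, here is my back-of-envelope check of the connection ceiling these nginx settings imply (just arithmetic on the values above; the numbers are from my config, the interpretation is my own):

```python
# Theoretical nginx connection ceiling from the config above.
worker_processes = 4
worker_connections = 1024

# Upper bound on simultaneous connections nginx will handle;
# note a proxied request to php-fpm consumes an extra connection.
max_connections = worker_processes * worker_connections
print(max_connections)  # 4096
```

So nginx itself should be nowhere near its limit at 500 simultaneous users.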
php-fpm:
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
user = webadmin
group = webadmin
pm = dynamic
pm.max_children = 900
pm.start_servers = 900
pm.min_spare_servers = 200
pm.max_spare_servers = 900
pm.max_requests = 500
chdir = /
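I also tried to sanity-check pm.max_children against RAM. A rough sketch using the figures above (12 MB per child as the worst case, MySQL at the upper end of its range; the headroom for the OS and nginx is my own assumption):

```python
# Back-of-envelope memory check for pm.max_children.
total_ram_mb = 10 * 1024   # 10 GB server
mysql_mb = 1600            # upper end of the observed 1.2-1.6 GB
headroom_mb = 512          # assumed for OS, nginx, buffers (my guess)
per_child_mb = 12          # worst-case php-fpm process size

available_mb = total_ram_mb - mysql_mb - headroom_mb
safe_children = available_mb // per_child_mb
print(safe_children)                 # 677

configured = 900
print(configured * per_child_mb)     # 10800 MB if all children fill up
```

If my arithmetic is right, 900 children could in theory outgrow the 10 GB of RAM, which makes me wonder about the stalls.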
Typically the server runs just fine with 500 simultaneous users (again, estimated from real-time Google Analytics), but it sometimes stalls when there are far fewer users around (75-100 simultaneous).
The configuration was done by my ISP, whom I trust, but I would still like to know whether it makes sense.