I have a small PHP script that makes (ideally) 10,000 curl_multi requests in a 250-request rolling window. I'm using the chuyskywalker/rolling-curl library, and I've compiled PHP/curl/c-ares from source to enable AsynchDNS.
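In case it's relevant, here is roughly how I'm driving the library. This is a minimal sketch rather than my exact script: the `$urls` array and the callback body are placeholders, and the method names (`get`, `setCallback`, `setSimultaneousLimit`, `execute`) are from the rolling-curl README as I understand it.

```php
<?php
require 'vendor/autoload.php';

use RollingCurl\RollingCurl;
use RollingCurl\Request;

$rollingCurl = new RollingCurl();

// Queue all 10,000 URLs up front; the library is supposed to keep
// only 250 of them in flight at any one time.
foreach ($urls as $url) { // $urls: placeholder for my list of 10,000 URLs
    $rollingCurl->get($url);
}

$rollingCurl->setCallback(function (Request $request, RollingCurl $rollingCurl) {
    // Placeholder: handle each response as it completes.
    echo $request->getUrl() . " done\n";
});

$rollingCurl->setSimultaneousLimit(250); // the 250 rolling window
$rollingCurl->execute();
```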
At first it worked great and I was very happy with it. However, I've noticed that after roughly 1,000 requests (1,024, maybe?) curl just seems to send the requests, give up, and report that it couldn't contact the DNS server.
I believe this has something to do with file descriptor limits, because when I ran `ulimit -Sn` I saw it was at 1024 (the same point where curl starts giving up). I raised the limit and the run got further; when I lowered the limit back down, it stopped in the same place again. I've now set the limit to 65535 by adding the following to /etc/security/limits.conf:
```
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
```
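To confirm what limit the running PHP process actually sees (as opposed to what my shell reports), I can also check from inside the script. A minimal sketch, assuming the posix extension is compiled in; `'soft openfiles'` and `'hard openfiles'` are the keys `posix_getrlimit()` uses for the fd limit:

```php
<?php
// Print the file descriptor limits as seen by this PHP process
// (requires the posix extension).
$limits = posix_getrlimit();
echo 'soft openfiles: ' . $limits['soft openfiles'] . "\n";
echo 'hard openfiles: ' . $limits['hard openfiles'] . "\n";
```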
I thought everything was fixed until I tried a 10,000-URL run with the same 250 rolling window: curl started giving up again, just much later in the run.
By running `cd /proc/procid/fd` and `ls -l | wc -l`, I can see the process reaches about 6,500 open file descriptors (not 65,535) and then curl starts having the same problems.
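For anyone who wants to reproduce that measurement mid-run, this is the same check done from inside the script. `countOpenFds` is a hypothetical helper I'm using for illustration, and it is Linux-only since it relies on /proc:

```php
<?php
// Count open file descriptors for the current process (Linux-only);
// equivalent to ls /proc/<pid>/fd | wc -l, but callable mid-run.
function countOpenFds()
{
    $entries = scandir('/proc/self/fd');
    // scandir itself briefly opens one fd for the directory handle,
    // so the count can be off by one.
    return count($entries) - 2; // drop "." and ".."
}

echo countOpenFds() . " file descriptors open\n";
```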
Can someone explain how curl uses these file descriptors, and whether there is any way I can overcome this problem?