douchao5864 2019-07-19 16:47

Large file upload via HTML form aborts at 20 seconds on AWS EC2 (ERR_CONNECTION_ABORTED)

I'm trying to use a standard HTML form with a file type input element. This form works fine on small files, but if the upload process takes longer than 20 seconds, the connection is dropped and the upload ends prematurely.

The infrastructure is as follows: 1 VPC containing 1 EC2 t2.micro instance running Amazon Linux with Apache 2.4/PHP 7.3/MySQL installed, connected to 1 EFS mount, accessible through an elastic IP.

I spent a full day yesterday and more hours today trying to figure this out. I thought the EC2 -> Limits value was the culprit, as it was set to 20, but that limit refers to the number of instances, not to connection time.

php.ini directives have been set accordingly:

max_execution_time = 900
post_max_size = 1048M
upload_max_filesize = 1024M
max_input_time = -1
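
One gotcha worth ruling out (a sketch; the path is an assumption, `php --ini` prints the file actually loaded): if a directive appears twice in php.ini, PHP honors the last definition, so an edit near the top can be silently overridden further down. Printing the final occurrence of each upload-related directive confirms what is really in effect:

```shell
# Sketch: print the *last* occurrence of each upload-related directive,
# since PHP uses the final definition when a directive appears twice.
# /etc/php.ini is an assumption; run `php --ini` to find the loaded file.
last_ini_values() {
  local ini=$1 d
  for d in max_execution_time post_max_size upload_max_filesize max_input_time; do
    grep -E "^[[:space:]]*$d[[:space:]]*=" "$ini" | tail -n 1
  done
}

last_ini_values /etc/php.ini
```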

And yes, the code works fine on a non-AWS server. I'm testing with the most basic HTML uploader you can make:

<?php
if ($_FILES) {
    move_uploaded_file($_FILES['wtf']['tmp_name'],
        '/localpath/' . basename($_FILES['wtf']['name']));
}
?>
<html>
<body>
<form method="post" enctype="multipart/form-data">
<input type="file" name="wtf">
<input type="submit">
</form>
</body>
</html>

This returns the ERR_CONNECTION_ABORTED message. Of course, this works perfectly fine for all files that upload in under the 20 second cutoff.

I've tried adding swap space (the default Amazon Linux AMI doesn't configure any) and putting a load balancer in front with its idle_timeout raised to 600 seconds. I've tried files of varying sizes, but the size is irrelevant; it's entirely about how long the transfer takes. Without fail, it aborts at about 20 seconds every time.

I've had issues like this in the past with AWS, and back then it came down to the "stickiness" setting of the ELB (load balancer). I wasn't originally using a load balancer on the EC2 instance I was testing on, and when I added one it turned out the Edit Stickiness option applies only to Classic Load Balancers. The apparently comparable setting on current load balancers is the idle timeout (accessible from EC2 -> Load Balancers -> [select load balancer] -> Description tab -> Attributes).


Additional tests 12 days later:

This is still an issue. I've tried so many things, but nothing has gotten this to work.

I've tried using curl to post to the basic uploader above. I created a zero-filled test file and submitted it to the form:

dd if=/dev/zero of=testfile.txt count=502400 bs=1024
curl -k -i -X POST \
  -F "wtf=@/var/www/html/testfile.txt" \
  https://domain.tld/test_upload.php

Again, it has nothing to do with file size; it's all time related. On a slow connection I can't upload a 25 MB file; on the server directly I can upload a 400 MB file, but it cuts off somewhere above that. Either way, the transfer always ends after 15 to 20 seconds.
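
Those numbers are consistent with a fixed time cutoff rather than a size cap: the largest file that gets through is roughly link throughput times the ~20 s window. A quick sanity check (the MB/s figures are illustrative assumptions, not measurements):

```shell
# Sketch: maximum upload size under a fixed 20 s cutoff at various link
# speeds (the MB/s values are illustrative, not measured).
cutoff=20                      # observed abort time in seconds
for speed in 1 10 20; do       # MB/s: slow WAN, fast WAN, loopback-ish
  echo "${speed} MB/s -> $((speed * cutoff)) MB max before the abort"
done
```

At ~1 MB/s a 25 MB file can never finish, and ~20 MB/s over loopback matches the 400 MB ceiling, which fits the behavior described above.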

I also tried uploading through localhost, and the same issue exists, so it seems to be a server configuration issue, unless there is an unknown AWS layer at play.

curl -k -i -X POST \
  -F "wtf=@/var/www/html/testfile.txt" \
  localhost/test_upload.php

For uploads that take longer than 15 to 20 seconds, $_FILES contains:

Array
(
    [name] => testfile.txt
    [type] =>
    [tmp_name] =>
    [error] => 3
    [size] => 0
)
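
That `[error] => 3` is PHP's UPLOAD_ERR_PARTIAL ("the uploaded file was only partially uploaded"), which confirms the connection is being cut mid-transfer rather than PHP rejecting the file for size. For quick reference, a small shell helper mapping the numeric codes to PHP's constant names (the values are fixed by PHP; note that 5 is unused):

```shell
# Map a $_FILES['error'] code to its PHP UPLOAD_ERR_* constant name.
# The numeric values are defined by PHP itself (5 is intentionally unused).
upload_err_name() {
  case $1 in
    0) echo UPLOAD_ERR_OK ;;          # success
    1) echo UPLOAD_ERR_INI_SIZE ;;    # exceeds upload_max_filesize
    2) echo UPLOAD_ERR_FORM_SIZE ;;   # exceeds MAX_FILE_SIZE form field
    3) echo UPLOAD_ERR_PARTIAL ;;     # file only partially uploaded
    4) echo UPLOAD_ERR_NO_FILE ;;     # no file was uploaded
    6) echo UPLOAD_ERR_NO_TMP_DIR ;;  # missing temporary folder
    7) echo UPLOAD_ERR_CANT_WRITE ;;  # failed to write file to disk
    8) echo UPLOAD_ERR_EXTENSION ;;   # a PHP extension stopped the upload
    *) echo UNKNOWN ;;
  esac
}

upload_err_name 3   # -> UPLOAD_ERR_PARTIAL
```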

The browser reports ERR_CONNECTION_ABORTED, which led me to look up similar issues others have had with EC2 uploads, but nothing suggested there helped.

Directives I've tried in php.ini:

post_max_size = 1048M
upload_max_filesize = 1024M
memory_limit = 512M
max_input_time = 3600
max_execution_time = 900
ignore_user_abort = On
upload_tmp_dir = /var/www/custom_tmp

It's been suggested that the default tmp folder might have restrictions that cause issues, but trying to change it to a different folder didn't affect it.

The memory_limit has no bearing on any of this, as mentioned by others, but I tried it anyway.

Directives I've tried in httpd.conf:

AllowOverride All
Timeout 1200
KeepAlive On / Off
KeepAliveTimeout 1200

KeepAlive governs reusing one TCP connection for multiple requests, so it's not directly related, but either way, on or off, it had no impact.

While uploading, I watched the file being written in the tmp folder. It kept growing until it hit that same timeout, then it was suddenly deleted and the browser showed the same ERR_CONNECTION_ABORTED error.
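
A way to pin that deletion to the second (a sketch; point it at whichever php* temp file appears under the upload tmp dir during a transfer, and note `stat -c` is the GNU coreutils form used on Amazon Linux):

```shell
# Sketch: poll an upload temp file once a second, printing its growth
# and timestamping the instant it vanishes. The path is an example.
track_upload() {
  local f=$1
  while [ -e "$f" ]; do
    printf '%s %s bytes\n' "$(date +%T)" "$(stat -c %s "$f")"
    sleep 1
  done
  printf '%s %s deleted\n' "$(date +%T)" "$f"
}
```

Running it alongside an upload gives a per-second size trace whose last line marks the exact abort time, which can then be compared against Apache and kernel log timestamps.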

I've tried putting a copy of the php.ini in the base HTML folder, no joy. I've set the Apache directives in the .htaccess file, again no change. I've tried using an HTTP connection instead of HTTPS. Nothing helps.

This is a standard Amazon Linux AMI (HVM) t2.micro instance. I don't think the micro size should matter; I've previously set up an upload server on a t1.micro that had no trouble with large files, the differences being that it wasn't in a VPC and ran on EC2-Classic.

0 answers