File Upload Size Limits: nginx, PHP, API Gateway, and S3 Workarounds


File upload limits exist at multiple layers independently: nginx client_max_body_size, PHP upload_max_filesize and post_max_size, AWS API Gateway (10MB max), and Lambda (6MB max). The tightest limit in the chain determines the actual maximum. For uploads above API Gateway's 10MB limit, presigned S3 URLs move the upload path out of the request chain.


Layer-by-layer upload limits

Every layer in the request path has its own independent size limit. The tightest limit determines what's actually accepted:

Client
  ↓
nginx (client_max_body_size, default 1MB)
  ↓
Application server (PHP post_max_size + upload_max_filesize, or equivalent)
  ↓
AWS API Gateway (10MB hard limit)
  ↓
Lambda (6MB synchronous payload)
  ↓
S3 or database

A 100MB file will fail at the first layer that doesn't accept it, regardless of downstream limits.
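The "tightest limit wins" rule can be sketched directly. A minimal illustration using the values from the diagram above (nginx raised to 100M, PHP left at its 8M post_max_size default; the numbers are illustrative, not a recommendation):

```javascript
// Effective upload ceiling = minimum limit across every layer in the chain.
// Values in MB; phpPostMaxSize uses PHP's 8M default, others from the diagram.
const layerLimitsMb = {
    nginx: 100,          // client_max_body_size raised to 100M
    phpPostMaxSize: 8,   // default post_max_size
    apiGateway: 10,      // hard limit
    lambdaSync: 6,       // synchronous payload limit
};

const effectiveLimitMb = Math.min(...Object.values(layerLimitsMb));
console.log(effectiveLimitMb); // 6: Lambda is the bottleneck in this chain
```

Raising any single layer's limit changes nothing unless it was the minimum.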

nginx + PHP configuration

# nginx: /etc/nginx/nginx.conf or site config
http {
    client_max_body_size 100M;
}

; php.ini (or php-fpm pool config)
upload_max_filesize = 50M   ; max size of one file in multipart upload
post_max_size       = 100M  ; max total POST body (must be >= upload_max_filesize)
max_file_uploads    = 20    ; max number of files per request

The silent failure pattern: nginx allows 100M, but PHP's post_max_size is still the 8M default. When the body exceeds post_max_size, PHP discards the POST data: $_FILES and $_POST arrive empty, with no exception thrown and only a warning in the PHP error log. The application code sees an empty upload with no error. Always check both layers:

php -i | grep -E "upload_max|post_max"
nginx -T | grep client_max_body_size
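The nginx limit doesn't have to be global: client_max_body_size is valid at http, server, and location scope, so the default can stay small while only the upload route accepts large bodies. A sketch (the /upload path is illustrative):

```nginx
http {
    client_max_body_size 1m;           # keep the global default tight

    server {
        location /upload {
            client_max_body_size 100m; # large bodies only where needed
        }
    }
}
```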

AWS API Gateway hard limits

| Service | Limit | Notes |
|---|---|---|
| API Gateway (REST) | 10 MB request payload | Hard limit, cannot be increased |
| API Gateway (HTTP) | 10 MB request payload | Same hard limit |
| Lambda (synchronous) | 6 MB request payload | Invocation payload limit |
| Lambda (async) | 256 KB | Event payload for async invocations |
| S3 single PUT | 5 GB | Object size limit for single PUT |
| S3 multipart upload | 5 TB | Total object size |

When API Gateway's 10MB limit blocks large uploads, route uploads directly to S3 via presigned URLs


Presigned URLs let the client upload directly to S3 without routing through API Gateway or Lambda. The backend generates a presigned URL with an expiry (e.g., 15 minutes), returns it to the client, and the client PUTs the file directly to the S3 URL. The file never passes through API Gateway or Lambda, bypassing both size limits. The backend receives only a small confirmation payload after the upload completes.

Prerequisites

  • AWS S3
  • Presigned URLs
  • API Gateway limitations

Key Points

  • API Gateway 10MB is a hard limit — no configuration can increase it.
  • Lambda 6MB synchronous limit — for file processing, trigger Lambda from S3 event instead.
  • S3 presigned URL: time-limited URL for direct client upload, valid until it expires (not inherently single-use). Backend never sees file bytes.
  • S3 multipart upload: for files > 100MB, use multipart — each part can be up to 5GB.
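The multipart constraints in the last point translate into a part-size calculation. A sketch using S3's documented limits (5MB minimum part size except the last part, at most 10,000 parts per upload):

```javascript
// Pick a part size that satisfies S3 multipart constraints:
// each part 5 MB-5 GB (the last part may be smaller), at most 10,000 parts.
const MIN_PART_BYTES = 5 * 1024 * 1024; // 5 MB floor
const MAX_PARTS = 10000;

function choosePartSize(fileSizeBytes) {
    // Smallest part size that keeps the count at or under 10,000 parts,
    // never dropping below the 5 MB minimum.
    return Math.max(Math.ceil(fileSizeBytes / MAX_PARTS), MIN_PART_BYTES);
}

const fileSize = 50 * 1024 ** 3;              // a 50 GB object
const partSize = choosePartSize(fileSize);
const partCount = Math.ceil(fileSize / partSize);
// partCount stays <= 10000 and partSize stays >= 5 MB
```

In practice a higher-level helper (e.g., the AWS SDK's managed uploader) does this sizing for you; the sketch just shows why the two constraints pull in opposite directions.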

Presigned URL upload flow

// Backend: generate presigned URL
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

async function getUploadUrl(filename, contentType) {
    const client = new S3Client({ region: 'us-east-1' });
    const key = `uploads/${Date.now()}-${filename}`;
    const command = new PutObjectCommand({
        Bucket: 'my-uploads',
        Key: key,
        ContentType: contentType,
    });
    // URL expires in 15 minutes
    const uploadUrl = await getSignedUrl(client, command, { expiresIn: 900 });
    return { uploadUrl, key }; // shape the frontend destructures below
}

// Frontend: upload directly to S3
async function uploadFile(file) {
    // 1. Get presigned URL from backend
    const { uploadUrl, key } = await fetch('/api/upload-url', {
        method: 'POST',
        body: JSON.stringify({ filename: file.name, contentType: file.type }),
    }).then(r => r.json());

    // 2. Upload directly to S3 (bypasses API Gateway)
    await fetch(uploadUrl, {
        method: 'PUT',
        body: file,
        headers: { 'Content-Type': file.type },
    });

    // 3. Confirm with backend
    await fetch('/api/confirm-upload', {
        method: 'POST',
        body: JSON.stringify({ key }),
    });
}

The backend API Gateway routes only handle small JSON payloads — the file bytes go directly from browser to S3.

A user uploads a 15MB CSV file. The stack is: browser → AWS API Gateway → Lambda → S3. The upload fails. What's the issue and fix?

easy

API Gateway has a 10MB hard payload limit. Lambda has a 6MB synchronous invocation limit.

  • A. Lambda's memory is too low for 15MB file processing
    Incorrect. Lambda memory controls execution speed and memory available for processing, but the issue is the payload limit — the file never reaches Lambda because API Gateway rejects requests over 10MB.
  • B. The API Gateway 10MB hard limit rejects the 15MB request before it reaches Lambda. Fix: use a presigned S3 URL so the upload goes directly to S3, bypassing API Gateway entirely
    Correct! API Gateway's 10MB request payload limit is a hard architectural limit with no configuration override. A 15MB file upload will be rejected with 413. The standard fix is to use a presigned S3 URL: the client calls a Lambda function through API Gateway to generate the presigned URL (small request/response, no limit issues), then the client uploads the 15MB file directly to S3. The file never goes through API Gateway. After the S3 upload, trigger processing via an S3 event notification to Lambda.
  • C. Increase the API Gateway timeout to allow the upload to complete
    Incorrect. Timeouts affect how long the request can take to process, not the payload size. The 10MB limit causes a payload size rejection, not a timeout.
  • D. Switch to API Gateway WebSocket API which has higher size limits
    Incorrect. API Gateway WebSocket message limits are 128KB per message — much lower than the REST API's 10MB. WebSocket is not suitable for file uploads.

Hint: What is the API Gateway payload limit, and what AWS pattern bypasses it for large file uploads?