Nginx File Upload: Incremental Processing, Buffering, and client_max_body_size
Nginx reads request bodies incrementally in chunks. If the body exceeds client_max_body_size at any point, nginx immediately returns 413 and discards remaining data without waiting for the full upload. For large file uploads, nginx buffers to disk (client_body_temp_path) before passing to upstream — adding latency. Direct S3 uploads via presigned URLs bypass nginx entirely.
How nginx processes uploads
nginx reads request bodies in chunks as they arrive. It keeps a running total of bytes received:
Client sends a 100MB file at 10MB/s; client_max_body_size is 32M:
t=1s: nginx has read the first 10MB chunk → total=10MB
t=2s: nginx reads the next 10MB chunk → total=20MB
t=3s: total=30MB
t=4s: total=40MB exceeds client_max_body_size (32M) → nginx returns 413
nginx doesn't wait for the full upload before checking the size. It rejects as soon as the running total exceeds client_max_body_size. This prevents wasting bandwidth on uploads that will ultimately be rejected.
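The running-total check can be sketched in Python. This is a minimal illustration of the behavior, not nginx's actual implementation; the names read_body and MAX_BODY are made up for this sketch:

```python
# Minimal sketch of nginx's incremental body-size check:
# track a running total and abort the moment it exceeds the limit.
MAX_BODY = 32 * 1024 * 1024  # analogous to client_max_body_size 32M

def read_body(chunks, max_body=MAX_BODY):
    """Consume body chunks; return 413 as soon as the total exceeds max_body."""
    total = 0
    for chunk in chunks:
        total += len(chunk)
        if total > max_body:
            # Stop reading here -- the rest of the upload is never consumed
            return 413, total
    return 200, total

# A 100MB upload arriving in 10MB chunks is rejected on the 4th chunk,
# long before the remaining 60MB is transferred.
status, seen = read_body([b"x" * (10 * 1024 * 1024)] * 10)
```

The key property: bandwidth spent is proportional to the limit, not to the upload size.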
Buffering behavior
By default, nginx buffers the request body before passing to upstream:
server {
    # Buffer up to 10MB in memory; larger bodies spill to disk
    client_body_buffer_size 10M;

    # Temp directory for bodies spilled to disk
    client_body_temp_path /var/cache/nginx/client_temp;

    location /api/upload {
        proxy_pass http://backend;

        # Disable buffering: stream body directly to upstream as it arrives
        proxy_request_buffering off;
    }
}
With proxy_request_buffering on (default): nginx reads the entire body first, then opens a connection to upstream. The client waits during this phase. Advantage: the upstream receives the body at full speed and is never tied up by a slow client. Disadvantage: latency proportional to file size, plus disk I/O for bodies exceeding client_body_buffer_size.
With proxy_request_buffering off: nginx starts sending to upstream as it receives from client. Reduces latency but requires upstream to handle slow clients directly.
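The latency trade-off is easy to quantify with back-of-envelope arithmetic. The speeds below are assumed figures for illustration, not measurements:

```python
# Back-of-envelope comparison of buffered vs. streamed proxying when the
# client is the bottleneck (all figures assumed for illustration).
FILE_MB = 100
CLIENT_MBPS = 10      # client -> nginx upload speed, MB/s
UPSTREAM_MBPS = 100   # nginx -> upstream speed, MB/s

client_time = FILE_MB / CLIENT_MBPS     # 10s for the body to reach nginx
forward_time = FILE_MB / UPSTREAM_MBPS  # 1s to forward the buffered body

# proxy_request_buffering on: upstream sees its first byte only after
# the entire client transfer completes.
buffered_first_byte = client_time
buffered_total = client_time + forward_time

# proxy_request_buffering off: upstream sees bytes almost immediately,
# but its connection is held open for the whole slow client transfer.
streamed_first_byte = 0.0
streamed_total = client_time
```

The buffered path adds a full client-transfer-time of latency before the upstream starts work; the streamed path shifts the cost to holding an upstream connection open for the duration.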
For large file uploads, bypass nginx entirely with presigned S3 URLs — nginx becomes a bottleneck and a single point of failure
When nginx proxies large file uploads, the file travels: client → nginx (buffered to disk) → backend → S3. For a 1GB video upload, nginx must buffer 1GB on the server's disk, hold the upload connection open for the transfer duration, and then forward to the backend. Direct S3 uploads use presigned URLs: the backend generates a time-limited URL, returns it to the client, and the client uploads directly to S3. nginx is not involved in the data path. This scales to any file size without server-side disk usage.
Prerequisites
- nginx proxy_pass
- S3 presigned URLs
- HTTP multipart upload
Key Points
- nginx buffering default: full request body buffered before proxying — adds latency proportional to file size.
- proxy_request_buffering off: stream to upstream as data arrives — less disk I/O, requires upstream availability during upload.
- For uploads > 100MB: use presigned S3 URLs to bypass nginx entirely.
- API Gateway caps request payloads at 10MB and synchronous Lambda invocations at 6MB — always use presigned URLs for large file uploads through AWS.
Upload size limits by layer
For a PHP application behind nginx:
# nginx
server {
    client_max_body_size 100M;
}

; php.ini
upload_max_filesize = 50M   ; max single file in a multipart upload
post_max_size = 100M        ; max total POST body (must be >= upload_max_filesize)
max_file_uploads = 20       ; max number of files in one request
All three must be set appropriately. nginx's client_max_body_size gates the request before it reaches PHP; PHP's limits gate processing within PHP. A mismatch (nginx allows 100M, but PHP's default post_max_size is 8M) causes confusing behavior: the request passes nginx, PHP silently discards the oversized body, and the application responds 200 while seeing an empty $_FILES and $_POST.
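The rule of thumb: the effective upload ceiling is the minimum limit along the path. The helpers below (parse_size, effective_upload_limit) are illustrative names invented for this sketch, not part of nginx or PHP:

```python
# The effective upload ceiling is the smallest limit along the path:
# nginx's client_max_body_size, then PHP's post_max_size and
# upload_max_filesize.
def parse_size(s):
    """Parse shorthand like '100M', '8M', '512K' into bytes."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    s = s.strip().upper()
    if s[-1] in units:
        return int(s[:-1]) * units[s[-1]]
    return int(s)

def effective_upload_limit(client_max_body_size, post_max_size, upload_max_filesize):
    """Largest single file that survives every layer for a one-file upload."""
    return min(parse_size(client_max_body_size),
               parse_size(post_max_size),
               parse_size(upload_max_filesize))

# With the config above: min(100M, 100M, 50M) -> a 50M single-file cap.
# With PHP defaults (post_max_size 8M, upload_max_filesize 2M), nginx's
# generous 100M is irrelevant -- the 2M PHP default wins.
```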
Alternative: direct S3 upload flow
1. Client → POST /api/get-upload-url → Backend
2. Backend → S3 CreatePresignedPost → presigned URL + fields
3. Backend → 200 {url, fields} → Client
4. Client → POST presigned-url (direct to S3) with file
5. S3 → 204 No Content → Client
6. Client → POST /api/confirm-upload → Backend
7. Backend → verify S3 object exists → process/record upload
The file travels directly from client to S3 — no nginx, no backend disk usage. Backend only handles small JSON payloads. AWS Lambda's 6MB payload limit and nginx's buffering are bypassed entirely.
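The time-limited-URL idea in steps 2–5 can be sketched with a toy HMAC signer. This is NOT AWS SigV4 — real presigning should use the SDK (e.g. boto3's generate_presigned_post); the functions and secret below are hypothetical, for illustration only:

```python
# Toy illustration of presigning (NOT AWS SigV4): the backend signs
# (key, expiry) with a secret shared with the storage service, so the
# storage service can verify the upload without the client ever holding
# credentials.
import hashlib
import hmac
import time

SECRET = b"backend-signing-secret"  # assumed shared with the storage service

def presign(key, expires_in=300, now=None):
    """Backend side: return a time-limited upload 'URL' for key."""
    expiry = int(now if now is not None else time.time()) + expires_in
    msg = f"{key}:{expiry}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"/upload/{key}?expires={expiry}&sig={sig}"

def verify(key, expiry, sig, now=None):
    """Storage side: accept only if the signature matches and hasn't expired."""
    msg = f"{key}:{int(expiry)}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    current = now if now is not None else time.time()
    return hmac.compare_digest(sig, expected) and current < int(expiry)
```

Tampering with the key or reusing the URL after expiry fails verification, which is the property that makes it safe to hand the URL to an untrusted client.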
A user uploads a 50MB file. nginx has client_max_body_size 100M but the backend is taking 60 seconds to process the upload. After 30 seconds, the user's browser timeout cancels the request. What does nginx log?
nginx proxy_request_buffering is on (default). The client disconnected before the backend finished.
A. 502 Bad Gateway — backend took too long
Incorrect. 502 means nginx couldn't connect to or read from the upstream. If nginx is still in the buffering phase (reading from the client), it hasn't connected to upstream yet. If it were waiting on the upstream response, a timeout would be 504. The client disconnect produces a different code.

B. 499 — nginx logs this code when the client closes the connection before the request completes
Correct! nginx uses the non-standard code 499 (Client Closed Request) when a client disconnects before the backend finishes responding. The upload may have buffered successfully to nginx and started processing in the backend, but nginx cannot return the response to the disconnected client. The backend may still finish processing even after the 499. 499s indicate the backend is slower than the client's timeout — optimize the endpoint, implement async processing (return 202 with a job ID), or use upload resumability.

C. 413 — the request was too large
Incorrect. 413 is returned when the body exceeds client_max_body_size. The file is 50MB, within the 100MB limit, so 413 is not relevant here.

D. 200 — nginx returns success once it finishes buffering, regardless of client state
Incorrect. nginx must send the response back to the client. If the client is gone, there's nobody to receive the 200. nginx logs 499 and the response is dropped.
Hint: What nginx status code represents 'client closed the connection before the response was sent'?