ECS Network Modes: When to Use awsvpc, bridge, and host
ECS offers four network modes. awsvpc is the right default for almost everything — but understanding why, and when the others apply, prevents subtle security and performance mistakes.
Why network mode is a consequential choice
ECS network mode determines how a task's containers get IP addresses, how they communicate with each other and with AWS services, and what security controls you can apply. Choosing the wrong mode creates problems that are not obvious until you try to apply security group rules, debug inter-container communication, or hit ENI limits at scale.
There are four modes: awsvpc, bridge, host, and none (Fargate tasks always use awsvpc). In practice, the decision is almost always between awsvpc and bridge.
How ECS network modes map to AWS networking
Concept: Container networking. Each mode determines whether a task shares its network namespace with the EC2 host, uses Docker's internal bridge, or gets its own dedicated network interface in the VPC.
Prerequisites
- VPC subnets and security groups
- EC2 ENI basics
- Docker networking
Key Points
- awsvpc: each task gets its own ENI with a VPC IP. Security groups apply at the task level.
- bridge: Docker creates a virtual bridge; tasks share the EC2 host's ENI. Port mapping required.
- host: the task's containers share the EC2 instance's network namespace directly. No port mapping, but no isolation either.
- Fargate always uses awsvpc. You have no other option on Fargate.
- ENI count per EC2 instance limits how many awsvpc tasks can run on one host.
awsvpc: the default you should use
In awsvpc mode, each ECS task gets its own Elastic Network Interface with a private IP address from your VPC subnet. The task looks like a first-class VPC resource — because it is.
The task definition declares the mode:

```json
{
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["EC2"]
}
```

The subnets and security groups are supplied separately, in the service's (or RunTask call's) networkConfiguration:

```json
{
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-0abc1234"],
      "securityGroups": ["sg-0def5678"],
      "assignPublicIp": "DISABLED"
    }
  }
}
```
What this buys you:
- Task-level security groups: each task has its own security group. You can allow the payments service to reach the database and deny everything else — at the network layer, not just application logic.
- No port conflict management: since each task has its own IP, multiple tasks can all bind port 8080 without collision. No dynamic port mapping needed.
- VPC flow logs per task: traffic is attributable to the specific task's ENI.
- Direct integration with AWS services: VPC endpoints, PrivateLink, RDS security group rules — all work naturally because the task is a VPC resource.
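The "no port conflicts" point is ordinary socket semantics: two listeners can bind the same port number as long as each has its own IP address, which is exactly what per-task ENIs provide. A minimal local sketch (uses loopback addresses 127.0.0.1 and 127.0.0.2, which work without setup on Linux):

```python
import socket

# Listener 1: bind 127.0.0.1 and let the OS choose a free port.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))
port = a.getsockname()[1]

# Listener 2: the *same* port number on a different loopback IP.
# No conflict: uniqueness is per (IP, port) pair, which is what
# a dedicated ENI gives every task in awsvpc mode.
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b.bind(("127.0.0.2", port))
port_b = b.getsockname()[1]

print(port_b == port)  # True: two listeners, one port number
a.close()
b.close()
```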
⚠ The ENI limit problem at scale
Each EC2 instance supports a limited number of ENIs depending on instance type (a t3.medium supports 3, an m5.4xlarge supports 8). With awsvpc, each task consumes one ENI, and one ENI is always reserved for the instance's primary interface. On a cluster using smaller instance types, you hit the ENI limit before you hit CPU or memory limits.
AWS addresses this with ENI trunking (also called trunk networking): a higher-density ENI attachment model that allows significantly more tasks per instance. Enable it by opting in to the ECS account setting awsvpcTrunking, for example with `aws ecs put-account-setting-default --name awsvpcTrunking --value enabled`. Not all instance types support it; check the documentation for your instance family.
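The density math is simple enough to script. A sketch, with illustrative ENI limits for a couple of instance types (verify the numbers for your instance family against the EC2 documentation):

```python
import math

# Illustrative ENI limits per instance type (check EC2 docs for yours).
ENI_LIMITS = {"t3.medium": 3, "m5.large": 3, "m5.4xlarge": 8}

def awsvpc_task_slots(instance_type: str) -> int:
    """Tasks per instance in awsvpc mode, without ENI trunking.

    One ENI is always reserved for the instance's primary interface.
    """
    return ENI_LIMITS[instance_type] - 1

def instances_needed(task_count: int, instance_type: str) -> int:
    """Minimum instances to place task_count awsvpc tasks."""
    return math.ceil(task_count / awsvpc_task_slots(instance_type))

print(instances_needed(50, "m5.large"))  # 25
```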
bridge: the legacy mode for EC2 deployments
In bridge mode, Docker creates a virtual network bridge (docker0) on the EC2 host. Containers get IP addresses on this bridge network — typically 172.17.x.x — and communicate with the outside world through the host's ENI via NAT.
Because containers share the host's ENI, port mapping is required for inbound traffic:
```json
{
  "networkMode": "bridge",
  "portMappings": [
    {
      "containerPort": 8080,
      "hostPort": 0,
      "protocol": "tcp"
    }
  ]
}
```
Setting hostPort: 0 asks ECS to assign a random ephemeral port on the host. The ALB or service discovery layer uses this dynamic port to reach the container. This works, but it means your security groups apply to the EC2 instance, not the individual task.
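The hostPort: 0 contract mirrors what any OS does when a process binds port 0: the kernel picks a free ephemeral port, and whoever needs to reach the process must discover it afterward. A quick illustration:

```python
import socket

# "hostPort": 0 defers port choice to the host, just like binding port 0:
# the kernel assigns a free ephemeral port that callers must then discover
# (in ECS, via the ALB target group or service discovery).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
assigned = s.getsockname()[1]
print(assigned > 0)  # True: the kernel picked an ephemeral port
s.close()
```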
Why bridge mode still exists: some legacy deployments and custom AMIs predate awsvpc. Inter-container communication on the same host using links or Docker's bridge DNS also requires bridge mode. New deployments have no reason to choose it.
awsvpc vs bridge: security and operational differences
Both modes work for standard web services. The differences surface in security posture and operational complexity.
awsvpc:
- Task-level security groups — fine-grained network policy
- No port mapping or dynamic host port management
- Each task is directly addressable in the VPC
- ENI count limits tasks per instance; requires ENI trunking for density

bridge:
- Security groups apply to the EC2 host, not individual tasks
- Dynamic port mapping adds ALB and service discovery complexity
- Multiple tasks share one ENI, so no per-task ENI limits
- Required for inter-container `links` and Docker bridge DNS
Use awsvpc for all new ECS deployments. The task-level security group control and simpler networking model outweigh the ENI density constraint. Enable ENI trunking if density becomes a limit.
host: when performance requires it
In host mode, the task's containers share the EC2 instance's network namespace directly. No bridge, no NAT, no port remapping. The container binds a port on the EC2 instance's network interface as if it were a process running directly on the host.
Use cases:
- Performance-critical networking: monitoring agents, network tools, and services that need raw network performance without NAT overhead
- Container-to-host port transparency: processes that must listen on fixed well-known ports without coordination
The trade-off is complete loss of network isolation. Two tasks in host mode cannot both bind port 8080 on the same instance. The task sees all traffic on the host's interface. Security groups apply to the EC2 instance, not the task.
In practice, host mode is appropriate for infrastructure-layer workloads — log collectors, metric agents, network monitoring — not for application services.
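In the task definition, host mode is just the networkMode field: no networkConfiguration block and no port mapping, since the container binds directly on the host's interface. A fragment for illustration:

```json
{
  "networkMode": "host",
  "requiresCompatibilities": ["EC2"]
}
```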
none: isolated containers
In none mode, the container has a loopback interface only. No inbound or outbound network. This is occasionally useful for compute tasks that read and write files but make no network calls, where network isolation is a security requirement.
Fargate always uses awsvpc
When you use Fargate, the network mode is awsvpc and is not configurable. Each Fargate task gets an ENI, a VPC IP, and task-level security groups. You specify subnet and security group at launch time in networkConfiguration.
One implication: if your Fargate task needs to reach the internet (to pull from a public registry, call an external API), its subnet needs an outbound route: either a public subnet with an internet gateway route and assignPublicIp: ENABLED, or a private subnet whose route table points at a NAT gateway. Tasks in private subnets without a NAT gateway cannot reach the internet, a common source of image pull failures and Secrets Manager timeouts on first deployment.
💡 Fargate networking checklist
When a Fargate task fails to start or times out pulling its image, check in this order:
- Is the task in a private subnet? Is there a NAT gateway route?
- Does the task's security group allow outbound HTTPS (port 443)?
- Is the ECR endpoint accessible? (Use VPC endpoints for ECR and S3 to avoid NAT costs on image pulls.)
- Does the task execution role have `ecr:GetAuthorizationToken` and `ecr:BatchGetImage`?
Most Fargate startup failures trace back to one of these four.
Quick check: You need to run 50 ECS tasks on a cluster of m5.large instances (each supports 3 ENIs), using awsvpc mode. One ENI is used by the EC2 instance itself, and ENI trunking is not enabled. Approximately how many instances does the cluster need?

- A. 4 instances — 50 tasks / 12 available slots, rounding up
- B. 25 instances — 2 task ENI slots per instance (3 total minus 1 for the host)
- C. 17 instances — 3 tasks per instance, since all 3 ENIs are available
- D. 6 instances — instance limits only apply to network interfaces, not tasks

Hint: count the ENIs available for tasks, not the total ENIs on the instance.

Answer: B. With 3 ENIs per m5.large and 1 reserved for the instance, each host supports 2 awsvpc tasks, so 50 tasks / 2 per instance = 25 instances. A uses the wrong slot count. C ignores the ENI reserved for the instance's primary network interface, which cannot be used for tasks. D is wrong because in awsvpc mode each task requires one ENI, so the ENI limit directly caps task density per instance. Enable ENI trunking to improve this density significantly.