ECS Fargate: Task Networking, vCPU/Memory Combinations, and What Serverless Actually Means Here

3 min read · Cloud Infrastructure

Fargate runs ECS tasks on AWS-managed infrastructure. You specify vCPU and memory, not instance types. Every task gets its own network interface. The operational simplicity has real cost tradeoffs — at scale, EC2 with Reserved Instances is significantly cheaper.

Tags: aws · ecs · fargate

What Fargate actually does

Fargate removes the EC2 layer. You define task CPU and memory requirements, and AWS allocates the compute, manages the OS, runs the ECS agent, and handles instance lifecycle. You don't choose instance types, patch AMIs, or manage cluster capacity.

"Serverless" here means serverless infrastructure management, not serverless execution model. Fargate tasks are containers running continuously — they're not invoked per-request like Lambda. A Fargate task that runs idle for an hour costs the same as one under full load for an hour.

Fargate task isolation and the awsvpc network model


Each Fargate task runs in its own microVM with dedicated CPU and memory. There's no shared kernel with other tasks. Every task gets its own Elastic Network Interface (ENI) with its own private IP in your VPC — a direct consequence of the awsvpc network mode that Fargate requires.

Prerequisites

  • VPC and subnets
  • ENI (Elastic Network Interfaces)
  • ECS task definitions
  • container networking basics

Key Points

  • Fargate requires awsvpc network mode — each task gets its own ENI and private IP.
  • No shared-kernel isolation issues between tasks — each task runs in a separate microVM.
  • ENI limits per subnet apply: high task counts can exhaust ENIs in small subnets.
  • Task startup involves ENI provisioning, microVM creation, and image pull — typically 10–30s cold start.

Valid vCPU and memory combinations

Fargate does not accept arbitrary CPU/memory combinations. The task definition must use one of the supported configurations:

| CPU (vCPU) | Memory options |
|---|---|
| 0.25 vCPU | 0.5 GB, 1 GB, 2 GB |
| 0.5 vCPU | 1 GB to 4 GB (1 GB increments) |
| 1 vCPU | 2 GB to 8 GB (1 GB increments) |
| 2 vCPU | 4 GB to 16 GB (1 GB increments) |
| 4 vCPU | 8 GB to 30 GB (1 GB increments) |
| 8 vCPU | 16 GB to 60 GB (4 GB increments) |
| 16 vCPU | 32 GB to 120 GB (8 GB increments) |
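The table lends itself to a small lookup for validating a task size before a deploy. This is a sketch derived from the table above, not an official AWS API; the function name `is_valid_fargate_size` is made up here:

```python
# Valid Fargate task-level CPU/memory combinations, transcribed from the
# table above. CPU is in units (1024 = 1 vCPU), memory in MB, matching
# how task definitions express them.
VALID_COMBOS = {
    256: [512, 1024, 2048],                        # 0.25 vCPU
    512: [1024 * g for g in range(1, 5)],          # 1–4 GB, 1 GB steps
    1024: [1024 * g for g in range(2, 9)],         # 2–8 GB
    2048: [1024 * g for g in range(4, 17)],        # 4–16 GB
    4096: [1024 * g for g in range(8, 31)],        # 8–30 GB
    8192: [1024 * g for g in range(16, 61, 4)],    # 16–60 GB, 4 GB steps
    16384: [1024 * g for g in range(32, 121, 8)],  # 32–120 GB, 8 GB steps
}

def is_valid_fargate_size(cpu_units: int, memory_mb: int) -> bool:
    """Return True if Fargate accepts this task-level cpu/memory pair."""
    return memory_mb in VALID_COMBOS.get(cpu_units, [])
```

Checking a pair like `is_valid_fargate_size(1024, 2048)` before running `terraform apply` catches invalid combinations earlier than the ECS API would.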

In Terraform, CPU is specified in units (1024 units = 1 vCPU) and memory in MiB:

resource "aws_ecs_task_definition" "api" {
  family                   = "api"
  cpu                      = "1024"    # 1 vCPU
  memory                   = "2048"    # 2 GB
  network_mode             = "awsvpc"  # required for Fargate
  requires_compatibilities = ["FARGATE"]
  execution_role_arn       = aws_iam_role.ecs_execution.arn
  task_role_arn            = aws_iam_role.api_task.arn

  container_definitions = jsonencode([
    {
      name      = "api"
      image     = "${aws_ecr_repository.api.repository_url}:latest"
      cpu       = 900      # containers can request less than the task total
      memory    = 1800
      essential = true
      portMappings = [
        { containerPort = 8080, protocol = "tcp" }
      ]
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = aws_cloudwatch_log_group.api.name
          "awslogs-region"        = "us-east-1"
          "awslogs-stream-prefix" = "api"
        }
      }
    }
  ])
}

Networking: one ENI per task

The awsvpc mode means every running Fargate task has its own private IP in your VPC. This is different from EC2 bridge mode where all containers on an instance share the instance's network.

Implications:

  • Security groups apply at the task level, not instance level — more granular control
  • Each task is a first-class VPC citizen — you can reference task IPs in security group rules
  • VPC ENI limits apply per subnet. Each subnet has a maximum number of ENIs. Large Fargate deployments in small subnets can hit this limit
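A quick way to sanity-check subnet headroom, using Python's standard `ipaddress` module. This is an illustrative sketch: `deployment_fits` and its `surge_factor` are assumptions for the example, not ECS parameters:

```python
import ipaddress

def usable_ips(cidr: str) -> int:
    """AWS reserves 5 addresses in every subnet: network, VPC router,
    DNS, one reserved for future use, and broadcast."""
    return ipaddress.ip_network(cidr).num_addresses - 5

def deployment_fits(cidr: str, desired_tasks: int, surge_factor: float = 2.0) -> bool:
    # Each awsvpc task consumes one ENI and one private IP. During a
    # rolling deploy, old and new tasks briefly coexist; surge_factor
    # approximates that worst case.
    return desired_tasks * surge_factor <= usable_ips(cidr)
```

For example, 50 tasks fit comfortably in a /24 (251 usable addresses) even during a rolling deploy, but not in a /26 (59 usable addresses).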
Both are account-level ECS settings:

# Opt in to the long task ARN format (required for newer ECS features)
aws ecs put-account-setting-default \
  --name taskLongArnFormat \
  --value enabled

# Enable ENI trunking to raise per-host ENI limits (up to 120 task ENIs per trunk)
aws ecs put-account-setting-default \
  --name awsvpcTrunking \
  --value enabled

With ENI trunking disabled, each task consumes one ENI on its underlying host, and the underlying host is limited by instance type ENI limits. With ENI trunking enabled, up to 120 tasks can share a single trunk ENI, effectively removing the ENI limit as a constraint for most deployments.
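The arithmetic in that paragraph, as a sketch. The per-trunk figure of 120 comes from the text above; treating one host ENI as reserved for the primary interface is an assumption about the underlying host:

```python
TRUNK_TASK_LIMIT = 120  # task ENIs per trunk ENI, per the text above

def max_tasks_per_host(instance_eni_limit: int, trunking_enabled: bool) -> int:
    """Rough upper bound on awsvpc tasks a single host can carry."""
    if trunking_enabled:
        return TRUNK_TASK_LIMIT
    # Without trunking, each task needs its own ENI slot; one slot is
    # assumed to be taken by the host's primary interface.
    return instance_eni_limit - 1
```

A host type with a 4-ENI limit goes from 3 tasks to 120 once trunking is enabled, which is why the setting matters at scale.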

Fargate pricing: what you're actually paying for

Fargate pricing is per-second, based on provisioned vCPU and memory:

Cost = (vCPU × $0.04048/hour) + (GB memory × $0.004445/hour)

(Prices as of 2024, us-east-1, Linux/X86. ARM64 is ~20% cheaper.)

Example for a 1 vCPU / 2GB task running 24/7:

  • vCPU: 1 × $0.04048 × 8760 hours = $354.60/year
  • Memory: 2 × $0.004445 × 8760 hours = $77.88/year
  • Total: ~$432/year
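The formula and the worked example above can be reproduced in a few lines. The constants are the 2024 us-east-1 Linux/x86 prices quoted earlier, so treat them as a snapshot rather than ground truth:

```python
VCPU_HOUR = 0.04048   # USD per vCPU-hour (us-east-1, Linux/x86, 2024)
GB_HOUR = 0.004445    # USD per GB-hour

def fargate_cost(vcpu: float, memory_gb: float, hours: float) -> float:
    """Fargate bills per second; hourly granularity is close enough here."""
    return vcpu * VCPU_HOUR * hours + memory_gb * GB_HOUR * hours

annual = fargate_cost(1, 2, 8760)  # the 1 vCPU / 2 GB example: ~$432/year
```

Because cost scales linearly with provisioned vCPU and memory, right-sizing the task definition is the main pricing lever.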

The same workload on a t3.medium (2 vCPU, 4GB, $0.0416/hour):

  • On-demand: $0.0416 × 8760 = $364/year
  • 1-year Reserved: ~$220/year

At steady-state, EC2 Reserved Instances are 30–50% cheaper than Fargate. The Fargate premium pays for no-ops infrastructure management.

Fargate Spot reduces cost by 50–70% using spare capacity — equivalent to EC2 Spot. Appropriate for batch processing, non-critical workers, and fault-tolerant services that can handle interruption.

Fargate service configuration

resource "aws_ecs_service" "api" {
  name            = "api"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.api.arn
  desired_count   = 3
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_subnet.private[*].id
    security_groups  = [aws_security_group.api_tasks.id]
    assign_public_ip = false  # private subnets need NAT gateway for outbound
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.api.arn
    container_name   = "api"
    container_port   = 8080
  }

  deployment_controller {
    type = "ECS"  # rolling deployment
  }

  # Health check grace period: time for tasks to start before health checks run
  health_check_grace_period_seconds = 60
}

Tasks in private subnets need a NAT gateway for outbound internet access (pulling images, calling AWS APIs if no VPC endpoints are configured). Putting tasks in public subnets with assign_public_ip = true works but is not recommended for production — tasks get public IPs directly.

When to choose Fargate vs EC2

Choose Fargate when:

  • Operational simplicity matters more than cost optimization
  • Workload is variable — Fargate scales tasks without managing cluster capacity
  • You don't need specific instance types (GPU, high-memory, bare metal)
  • Security isolation requirements favor per-task microVM isolation

Choose EC2 when:

  • Cost at scale matters — EC2 RI/Savings Plans are significantly cheaper
  • You need GPU instances, Graviton with specific AMI configs, or local instance storage
  • Existing investment in EC2 Reserved Instances applies
  • You need bridge or host network mode for legacy container configurations

A Fargate service running 50 tasks in a single /24 subnet (256 addresses, ~250 usable) starts failing to launch new tasks during a deployment. CloudWatch shows task launch failures with 'unable to place task due to resource constraints'. The subnet has available IP addresses. What is the likely cause?

Difficulty: medium

Given: ENI trunking is not enabled. The subnet has 250 usable IPs. The 50 running tasks are consuming 50 IPs. A rolling deployment is trying to start new tasks before stopping old ones.

  • A. The subnet CIDR block needs to be expanded to add more IP addresses
    Incorrect: IP addresses are available — the problem is ENIs, not IPs.
  • B. Without ENI trunking, each Fargate task consumes one ENI on the underlying host. The underlying hosts have ENI limits based on instance type — the cluster has hit the host-level ENI limit
    Correct! Without ENI trunking, each task requires a dedicated ENI slot on its host. Fargate hosts have the same ENI limits as the underlying instance type. With 50 tasks running and new tasks launching during the rolling deployment, the host ENI slots are exhausted. IP addresses in the subnet are available, but the ENI attachment limit is the binding constraint. Fix: enable awsvpcTrunking via ECS account settings to allow up to 120 tasks per trunk ENI.
  • C. 50 tasks × 1 IP = 50 IPs used. 250 − 50 = 200 available. The subnet is not the problem
    Incorrect: This correctly identifies that IPs aren't exhausted, but it doesn't identify the actual constraint (ENI slots).
  • D. Fargate tasks cannot be deployed in a rolling fashion — only blue/green deployments are supported
    Incorrect: Fargate supports rolling, blue/green (CodeDeploy), and external deployment strategies.

Hint: The error is 'resource constraints', not IP exhaustion. What resource does awsvpc mode consume beyond just IP addresses?