VPC CIDR Planning: Overlapping Ranges, Peering Constraints, and RFC 1918
Overlapping VPC CIDRs block peering, Transit Gateway attachments, and VPN connections. AWS allows overlapping ranges at creation time — the conflict surfaces when you try to connect VPCs later. Planning CIDR blocks upfront around RFC 1918 address space prevents address conflicts from becoming an infrastructure migration.
Why overlapping CIDRs are a problem
AWS lets you create multiple VPCs with the same CIDR block. The conflict doesn't appear until you try to connect them — VPC peering, Transit Gateway, VPN, or Direct Connect all require non-overlapping routes. Once your VPCs are in use, re-CIDRing is operationally painful: it involves re-launching instances, updating security groups and route tables, and coordinating with connected systems.
The fix is simple and free to apply before any instances exist: assign non-overlapping blocks.
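The overlap check itself is mechanical. A minimal sketch using Python's standard `ipaddress` module (the CIDR values are illustrative):

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """Return True if two CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Identical blocks in two VPCs: peering would be rejected
print(cidrs_overlap("10.0.0.0/16", "10.0.0.0/16"))   # True
# A /16 contains any /24 carved from it
print(cidrs_overlap("10.0.0.0/16", "10.0.42.0/24"))  # True
# Disjoint blocks: safe to peer
print(cidrs_overlap("10.0.0.0/16", "10.1.0.0/16"))   # False
```

Running this against every pair of planned VPC CIDRs before creation is cheap insurance against the migration described above.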
VPC peering fails immediately when CIDRs overlap — there's no workaround
VPC peering requires a route in each VPC's route table that points to the other VPC's CIDR via the peering connection. When CIDRs overlap, the route is ambiguous — AWS rejects the peering request outright. Transit Gateway has the same constraint. The only options are to rebuild VPCs with non-overlapping CIDRs or to use Network Address Translation between them (complex and fragile).
Prerequisites
- CIDR notation
- VPC peering basics
- Transit Gateway
Key Points
- AWS does not prevent creating VPCs with overlapping CIDRs — the error only appears at peering/TGW attachment time.
- VPC peering rejects overlapping CIDRs entirely — no partial peering, no workaround.
- Transit Gateway also requires non-overlapping CIDRs across all attached VPCs.
- VPN and Direct Connect attachments to overlapping VPC ranges cause routing ambiguity.
RFC 1918 private address ranges
Private (non-routable) IPv4 ranges defined by RFC 1918:
- 10.0.0.0 – 10.255.255.255 (10.0.0.0/8) — 16.7 million addresses
- 172.16.0.0 – 172.31.255.255 (172.16.0.0/12) — ~1 million addresses
- 192.168.0.0 – 192.168.255.255 (192.168.0.0/16) — 65,536 addresses
AWS VPCs can use any of these ranges (and some public ranges, though that's unusual and problematic). The 10.0.0.0/8 supernet gives the most room for subdivision across many VPCs and regions.
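The range boundaries and address counts above can be verified with the standard `ipaddress` module:

```python
import ipaddress

# The three RFC 1918 private ranges
for cidr in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
    net = ipaddress.ip_network(cidr)
    print(f"{cidr}: {net[0]} - {net[-1]}, {net.num_addresses:,} addresses")
# 10.0.0.0/8: 10.0.0.0 - 10.255.255.255, 16,777,216 addresses
# 172.16.0.0/12: 172.16.0.0 - 172.31.255.255, 1,048,576 addresses
# 192.168.0.0/16: 192.168.0.0 - 192.168.255.255, 65,536 addresses
```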
CIDR planning strategy
A systematic allocation prevents conflicts as infrastructure grows:
10.0.0.0/8 — total address space
  10.0.0.0/10 — Production account(s)
    10.0.0.0/16 — us-east-1 production VPC
    10.1.0.0/16 — us-west-2 production VPC
    10.2.0.0/16 — eu-west-1 production VPC
  10.64.0.0/10 — Staging account(s)
    10.64.0.0/16 — us-east-1 staging VPC
    10.65.0.0/16 — us-west-2 staging VPC
  10.128.0.0/10 — Development accounts
    10.128.0.0/16 — developer A VPC
    10.129.0.0/16 — developer B VPC
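The hierarchy above can be generated rather than hand-assigned. A sketch using `ipaddress.subnets()`; the three-environment split mirrors the plan above (the fourth /10 is left as spare):

```python
import ipaddress

root = ipaddress.ip_network("10.0.0.0/8")

# Split the /8 into /10 environment blocks (four exist; three are used)
prod, staging, dev, spare = root.subnets(new_prefix=10)
print(prod, staging, dev)  # 10.0.0.0/10 10.64.0.0/10 10.128.0.0/10

# Carve /16 regional VPCs out of an environment block
prod_vpcs = list(prod.subnets(new_prefix=16))
print(prod_vpcs[0], prod_vpcs[1], prod_vpcs[2])
# 10.0.0.0/16 10.1.0.0/16 10.2.0.0/16
```

Generating allocations this way makes the next free block obvious when a new region or account is added.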
Subnet allocation within a /16 VPC:
resource "aws_vpc" "production" {
  cidr_block = "10.0.0.0/16" # 65,536 addresses
}

# Public subnets — one per AZ, /24 each (251 usable IPs; AWS reserves 5 per subnet)
resource "aws_subnet" "public" {
  count             = 3
  vpc_id            = aws_vpc.production.id
  cidr_block        = "10.0.${count.index}.0/24"
  availability_zone = data.aws_availability_zones.available.names[count.index]
}

# Private subnets — larger blocks for workloads
resource "aws_subnet" "private" {
  count             = 3
  vpc_id            = aws_vpc.production.id
  # A /23 must start on an even third octet: 10.0.10.0/23, 10.0.12.0/23, 10.0.14.0/23
  cidr_block        = "10.0.${count.index * 2 + 10}.0/23" # /23 = 507 usable IPs per AZ
  availability_zone = data.aws_availability_zones.available.names[count.index]
}

# Database subnets — small, isolated
resource "aws_subnet" "database" {
  count             = 3
  vpc_id            = aws_vpc.production.id
  cidr_block        = "10.0.${count.index + 20}.0/24"
  availability_zone = data.aws_availability_zones.available.names[count.index]
}
Adding secondary CIDRs
VPCs can have multiple CIDR blocks associated with them (up to 5 by default). This lets you add address space to an existing VPC without rebuilding:
resource "aws_vpc_ipv4_cidr_block_association" "secondary" {
  vpc_id     = aws_vpc.production.id
  cidr_block = "100.64.0.0/16" # RFC 6598 shared address space — usable in VPCs
}
Secondary CIDRs are useful for EKS pod networking — EKS can consume many IPs per node. Assign a large secondary CIDR for pod IPs while keeping the primary CIDR for node IPs.
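A candidate secondary CIDR should be sanity-checked before association: it must not overlap the primary CIDR, and if drawn from RFC 6598 it must fall inside 100.64.0.0/10. A small sketch (CIDR values match the example above):

```python
import ipaddress

primary = ipaddress.ip_network("10.0.0.0/16")
secondary = ipaddress.ip_network("100.64.0.0/16")
cgn = ipaddress.ip_network("100.64.0.0/10")  # RFC 6598 shared address space

assert not primary.overlaps(secondary)  # must not collide with the primary CIDR
assert secondary.subnet_of(cgn)         # falls inside the RFC 6598 range
print("secondary CIDR OK")
```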
💡 Detecting and resolving CIDR conflicts
Before setting up VPC peering or Transit Gateway, verify no overlaps exist:
# List all VPC CIDRs in an account
aws ec2 describe-vpcs \
  --query "Vpcs[].{VpcId:VpcId,CIDR:CidrBlock,Name:Tags[?Key=='Name'].Value|[0]}" \
  --output table

# Check Transit Gateway route table for conflicts
aws ec2 search-transit-gateway-routes \
  --transit-gateway-route-table-id tgw-rtb-abc123 \
  --filters "Name=state,Values=active"
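The describe-vpcs output above can be fed into a small overlap report. A sketch; the VPC IDs and CIDRs are illustrative, and in practice the mapping would be parsed from the CLI or boto3 output:

```python
import ipaddress
from itertools import combinations

# VpcId -> CidrBlock, as returned by `aws ec2 describe-vpcs`
vpc_cidrs = {
    "vpc-aaa": "10.0.0.0/16",
    "vpc-bbb": "10.0.0.0/16",  # same block in another VPC: conflict
    "vpc-ccc": "10.1.0.0/16",
}

def find_conflicts(vpcs: dict) -> list:
    """Return every pair of VPCs whose CIDR blocks overlap."""
    nets = {vid: ipaddress.ip_network(c) for vid, c in vpcs.items()}
    return [
        (a, b)
        for (a, na), (b, nb) in combinations(nets.items(), 2)
        if na.overlaps(nb)
    ]

print(find_conflicts(vpc_cidrs))  # [('vpc-aaa', 'vpc-bbb')]
```

Any pair this reports cannot be connected by peering or Transit Gateway until one side is re-addressed.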
If you discover overlapping VPCs that need to be connected:
- Option 1 — Rebuild: create new VPCs with non-overlapping CIDRs, migrate workloads, decommission old VPCs. Painful but clean.
- Option 2 — NAT between VPCs: use NAT instances to translate addresses between overlapping CIDRs. Adds latency, complexity, and single points of failure.
- Option 3 — Secondary CIDR + re-IP: add a non-overlapping secondary CIDR to one VPC, migrate resources to new subnets within that CIDR, remove old CIDR.
Option 1 is the correct long-term fix. Options 2 and 3 are workarounds that add operational debt.
You manage two VPCs: VPC-A (10.0.0.0/16) in Account A and VPC-B (10.0.0.0/16) in Account B. Teams want to share a database in VPC-B with applications in VPC-A. You attempt to create a VPC peering connection. What happens?
Scenario (easy): Both VPCs were created independently by different teams before a multi-account strategy was defined. No peering exists between them. The database and application teams now want cross-VPC connectivity.

A. VPC peering succeeds but routes must be manually added to avoid conflicts
B. AWS rejects the peering request because overlapping CIDRs make routing between the VPCs ambiguous
C. Peering works for different accounts even with the same CIDR — the conflict only occurs within the same account
D. Transit Gateway can bridge the connection even with overlapping CIDRs

Hint: What does VPC peering need to add to route tables, and what happens when both VPCs share the same CIDR?

Answer: B. VPC peering requires distinct CIDR blocks. Both VPCs use 10.0.0.0/16 — AWS cannot create a valid route table entry that distinguishes traffic destined for VPC-A vs VPC-B, so the peering request fails immediately. The long-term fix is to re-CIDR one VPC. A short-term workaround is PrivateLink (an interface endpoint) to expose specific services from VPC-B to VPC-A without requiring routing between the VPCs — this works even with overlapping CIDRs because traffic goes through the endpoint, not direct VPC routing.

Why the other choices are wrong:
- A: VPC peering with overlapping CIDRs is rejected at creation time — AWS doesn't let you create the peering connection, let alone add routes.
- C: CIDR overlap blocks peering regardless of whether the VPCs are in the same or different accounts; the routing problem is identical.
- D: Transit Gateway also requires non-overlapping CIDRs across attached VPCs — it has the same routing constraints as VPC peering.