MTU Calculator – VPN Tunnel Overhead
Calculate the effective MTU and maximum TCP payload after VPN or tunnel encapsulation overhead. Use this to set the correct MTU on your VM, container, or WireGuard interface.
Effective MTU = Physical MTU − Tunnel Overhead
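As a quick sanity check, the formula can be expressed in a few lines of Python. The overhead values below are the typical defaults discussed on this page; real deployments may differ:

```python
# Typical per-tunnel overhead in bytes (defaults, not guarantees).
TUNNEL_OVERHEAD = {
    "wireguard-ipv4": 60,  # outer IPv4 20 + UDP 8 + WireGuard 16 + Poly1305 tag 16
    "wireguard-ipv6": 80,  # outer IPv6 40 + UDP 8 + WireGuard 16 + Poly1305 tag 16
    "pppoe": 8,            # PPPoE 6 + PPP 2
}

def effective_mtu(physical_mtu: int, tunnel: str) -> int:
    """MTU left for the inner IP packet after encapsulation."""
    return physical_mtu - TUNNEL_OVERHEAD[tunnel]

def max_tcp_payload(mtu: int) -> int:
    """Maximum TCP payload (MSS) over IPv4: MTU minus 20-byte IP + 20-byte TCP headers."""
    return mtu - 40

print(effective_mtu(1500, "wireguard-ipv6"))  # 1420
print(max_tcp_payload(1420))                  # 1380
```

This mirrors what the calculator does: pick the overhead for your tunnel type, subtract it from the physical MTU, and subtract another 40 bytes to get the usable TCP payload.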
Common MTU Reference Values
| Scenario | MTU (bytes) |
|---|---|
| Standard Ethernet (IEEE 802.3) | 1500 |
| PPPoE DSL (8-byte PPPoE/PPP header) | 1492 |
| WireGuard over Ethernet | 1420 |
| AWS / GCP VPC internal | 9001 / 1460 |
| iSCSI / NFS jumbo frames | 9000 |
| IPv6 minimum (RFC 8200) | 1280 |
Published: April 2026 | Author: TriVolt Editorial Team
What Is MTU?
The Maximum Transmission Unit (MTU) is the largest single packet, in bytes, that a network interface or path segment can carry without fragmentation. It is a layer-2 concept: Ethernet, defined by IEEE 802.3, specifies a 1500-byte payload limit per frame. That 1500-byte figure has been the de facto standard since Fast Ethernet (100BASE-TX) became widespread in the mid-1990s and remains the internet's default to this day.
An Ethernet frame is larger than its payload: a standard frame adds a 14-byte header (destination MAC, source MAC, EtherType) plus a 4-byte frame check sequence (FCS), totalling 18 bytes of overhead on the wire. Some implementations also include an 802.1Q VLAN tag (4 bytes), bringing the maximum frame size to 1522 bytes. The MTU value you configure on an interface refers strictly to the payload (the IP packet), not the total frame size.
Why does MTU matter? Because every packet you send must fit within the MTU of the smallest link along the path from source to destination, the so-called path MTU (PMTU). If a router receives an IP packet larger than its outgoing link MTU, it must either fragment the packet into smaller pieces or drop it entirely. Fragmentation is expensive (CPU overhead for reassembly, increased loss risk) and is often disabled outright on modern networks by setting the IPv4 DF (Don't Fragment) bit. Getting MTU right avoids both fragmentation penalties and mysterious connectivity failures.
Fragmentation and Path MTU Discovery
When a host sends a packet with the IPv4 DF bit set and a router finds that the packet exceeds its outgoing link MTU, the router discards the packet and returns an ICMP Type 3 Code 4 message ("Fragmentation Needed and DF set") to the sender. The ICMP message includes the MTU of the offending link so the sender can reduce its packet size accordingly. This mechanism is called Path MTU Discovery (PMTUD), standardised in RFC 1191 (IPv4) and RFC 8201 (IPv6).
IPv6 removes fragmentation by routers entirely: only the originating host may fragment packets, using the Fragment extension header. This makes PMTUD mandatory in IPv6 environments. The IPv6 minimum MTU is 1280 bytes (RFC 8200), meaning every IPv6-capable link must support at least that.
PMTUD black holes are a pervasive source of mysterious, hard-to-debug failures. They occur when a router along the path discards oversized packets but also blocks the ICMP "Fragmentation Needed" response, typically because a misconfigured firewall drops all ICMP. The TCP session appears to establish (small packets like the SYN and ACK pass fine) but bulk data transfers hang because large data segments are silently dropped. The classic symptom is "SSH works but SCP hangs" or "HTTPS pages load but large file downloads stall."
The two standard mitigations are:
- Fix the firewall: Allow inbound ICMP Type 3 Code 4 messages through any stateful firewall. These messages are not a security risk: they carry no payload data and are required for correct TCP behaviour.
- MSS clamping (TCP MSS clamping): On routers and VPN gateways, rewrite the TCP Maximum Segment Size option in SYN packets to a value safe for the link MTU (`ip tcp adjust-mss 1452` on Cisco IOS, `iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu` on Linux). This prevents large segments from ever being sent, avoiding the need for ICMP altogether.
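The clamped MSS value itself is simple arithmetic: the link MTU minus the IP header and the 20-byte TCP header. A minimal sketch:

```python
def clamped_mss(link_mtu: int, ipv6: bool = False) -> int:
    """MSS safe for a given link MTU: subtract IP header (20 IPv4 / 40 IPv6) + 20-byte TCP header."""
    ip_header = 40 if ipv6 else 20
    tcp_header = 20
    return link_mtu - ip_header - tcp_header

print(clamped_mss(1492))             # 1452 -- the value in the Cisco example above (PPPoE link)
print(clamped_mss(1420, ipv6=True))  # 1360 -- IPv6 traffic inside a WireGuard tunnel
```

The Cisco value 1452 is exactly `clamped_mss(1492)`: the PPPoE link MTU minus 40 bytes of IPv4 and TCP headers.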
VPN Tunnel Overhead
Every VPN or tunnel protocol wraps your original IP packet inside one or more additional headers. This encapsulation consumes bytes from your physical MTU, leaving less space for your actual data payload. Getting this wrong is one of the most common causes of poor VPN performance and PMTU black holes.
Consider a standard WireGuard deployment over an Ethernet link with 1500-byte MTU:
- Outer IPv4 header: 20 bytes
- UDP header: 8 bytes
- WireGuard data-message header (type, reserved, receiver index, counter): 16 bytes
- Poly1305 authentication tag: 16 bytes
- Total overhead: 60 bytes, leaving an effective inner MTU of 1500 − 60 = 1440 when the outer tunnel runs over IPv4. With an IPv6 outer header (40 bytes instead of 20) the overhead grows to 80 bytes and the effective MTU drops to 1420.
WireGuard's recommended inner MTU for IPv4 is 1420 bytes (giving 80 bytes of headroom to accommodate outer IPv6 encapsulation as well). For pure IPv4 tunnels you could use 1440, but 1420 is the safe conservative default that covers both address families.
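Assuming a 16-byte WireGuard data-message header and a 16-byte Poly1305 tag, the per-packet overhead and resulting inner MTU can be tallied like this:

```python
def wireguard_overhead(outer_ipv6: bool = False) -> int:
    """Per-packet overhead of a WireGuard data message (assumed field sizes)."""
    outer_ip = 40 if outer_ipv6 else 20  # outer IP header
    udp = 8                              # UDP header
    wg_header = 16                       # type 1 + reserved 3 + receiver index 4 + counter 8
    tag = 16                             # Poly1305 authentication tag
    return outer_ip + udp + wg_header + tag

print(1500 - wireguard_overhead())      # 1440: IPv4-only tunnel
print(1500 - wireguard_overhead(True))  # 1420: also covers an IPv6 outer header
```

This shows why 1420 is the conservative default: it is the effective MTU under the worst case (IPv6 outer), so it works for both address families.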
IPsec in tunnel mode adds an entirely new outer IP header (20 bytes) plus ESP headers, padding, and an Integrity Check Value (ICV). The total overhead varies by cipher suite: AES-GCM with a 16-byte ICV adds approximately 70–90 bytes in total. This is why IPsec tunnel mode has a larger overhead than WireGuard or even IPsec transport mode.
OpenVPN adds its own framing on top of the transport header. In UDP mode the total overhead is around 45 bytes (outer IP 20 + UDP 8 + OpenVPN envelope 17). In TCP mode it rises to roughly 57 bytes, because a 20-byte TCP header replaces the 8-byte UDP header.
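These per-protocol figures are simple sums; a sketch using the approximate header sizes cited above (the 17-byte OpenVPN envelope is a typical default and varies with cipher and compression settings):

```python
def openvpn_overhead(tcp: bool = False) -> int:
    """Approximate OpenVPN per-packet overhead; envelope size is a typical default."""
    outer_ip = 20                    # outer IPv4 header
    transport = 20 if tcp else 8     # TCP vs UDP transport header
    envelope = 17                    # OpenVPN framing (varies with configuration)
    return outer_ip + transport + envelope

print(openvpn_overhead())      # 45: UDP mode
print(openvpn_overhead(True))  # 57: TCP mode
```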
Jumbo Frames
A jumbo frame is an Ethernet frame carrying a payload larger than the standard 1500 bytes. The most common jumbo frame size is 9000 bytes MTU, though values of 9216 or 9600 bytes also appear in some vendor implementations. Jumbo frames are not part of the IEEE 802.3 standard โ they are a vendor extension supported by most modern managed switches and NICs.
Jumbo frames are particularly valuable in storage networks. iSCSI, NFS, and FCoE (Fibre Channel over Ethernet) all benefit from larger frames because storage I/O tends to transfer large sequential blocks: a 1MB write that requires 667 standard frames only requires ~112 jumbo frames, dramatically reducing per-packet CPU overhead and interrupt frequency.
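The frame-count arithmetic above is easy to reproduce (using a 1,000,000-byte transfer and full-MTU payloads, ignoring per-frame headers for simplicity):

```python
import math

def frames_needed(transfer_bytes: int, mtu: int) -> int:
    """Number of full-MTU frames needed to carry a transfer of the given size."""
    return math.ceil(transfer_bytes / mtu)

print(frames_needed(1_000_000, 1500))  # 667 standard frames
print(frames_needed(1_000_000, 9000))  # 112 jumbo frames
```

Roughly a 6x reduction in packet count, which is where the CPU and interrupt savings come from.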
The critical requirement is that every device on the path must support the same MTU. A single switch, router, or NIC that does not support jumbo frames will silently fragment or drop oversized packets, causing intermittent failures that are extremely difficult to diagnose. This includes uplink ports, trunk links, and virtual switch ports in VMware vSphere or Hyper-V. Jumbo frames must be enabled consistently across the entire L2 domain โ configuring only the servers and storage arrays is not sufficient.
In cloud environments, AWS EC2 supports jumbo frames (9001 bytes) within a VPC but restricts packets to 1500 bytes when crossing the internet gateway. GCP similarly supports jumbo frames on internal traffic. Mixing jumbo and standard MTU segments requires careful PMTUD configuration.
Common MTU Problems and Fixes
The most frequently encountered MTU-related problems, and their standard solutions:
- PMTU black hole (ICMP blocked): SSH/HTTPS handshake succeeds but bulk data stalls. Fix by enabling ICMP Type 3 Code 4 inbound on your firewall, or applying TCP MSS clamping at your WAN gateway.
- PPPoE DSL links: DSL modems in PPPoE mode add 8 bytes of PPPoE/PPP overhead, reducing the usable MTU from 1500 to 1492 bytes. Always configure your router's WAN interface MTU to 1492 on PPPoE connections. Many consumer routers do this automatically, but enterprise routers often require manual configuration.
- AWS/GCP WireGuard deployments: AWS EC2 instances use an MTU of 9001 bytes within a VPC but 1500 bytes to the internet. WireGuard running on EC2 should use an inner MTU of 8921 bytes (9001 − 80) when the outer link is 9001, or 1420 bytes when the outer link is 1500.
- Docker and Kubernetes: Container networking (veth pairs, overlay networks) adds encapsulation. Docker's default bridge MTU matches the host, but flannel, Calico, and Cilium CNI plugins may add VXLAN or WireGuard overhead. Set the pod MTU 50–100 bytes below the node MTU to accommodate overlay headers.
- Jumbo frame misconfiguration: If you configured jumbo frames on servers but forgot to enable them on a switch uplink, large transfers will randomly fail or run very slowly. Use `ping -M do -s 8972 <target>` to test the actual PMTU along a path (8972 bytes of payload + 28 bytes of IP+ICMP headers = 9000).
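For the container case, the safe pod MTU is the node MTU minus the overlay's encapsulation overhead. A small sketch with typical overhead values (assumptions, not measured figures):

```python
# Typical overlay encapsulation overhead in bytes (assumed defaults).
VXLAN_OVERHEAD = 50      # outer IP 20 + UDP 8 + VXLAN 8 + inner Ethernet 14
WIREGUARD_OVERHEAD = 80  # worst case: IPv6 outer header

def pod_mtu(node_mtu: int, overlay_overhead: int = VXLAN_OVERHEAD) -> int:
    """MTU to configure on pods so overlay-encapsulated packets fit the node MTU."""
    return node_mtu - overlay_overhead

print(pod_mtu(1500))                      # 1450: VXLAN overlay on standard Ethernet
print(pod_mtu(9001, WIREGUARD_OVERHEAD))  # 8921: WireGuard-based CNI on AWS jumbo MTU
```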
MTU on Linux and Windows
Linux commands:
- View current MTU of all interfaces: `ip link show`
- Set MTU permanently via NetworkManager: `nmcli connection modify eth0 802-3-ethernet.mtu 1420`
- Set MTU temporarily (lost on reboot): `ip link set eth0 mtu 1420`
- Test path MTU to a host (`-M do` sets the DF bit, `-s` sets the payload size): `ping -M do -s 1472 8.8.8.8` (1472 + 28 bytes of IP+ICMP headers = a 1500-byte packet)
- View effective TCP MSS: subtract 40 bytes from the MTU for the IPv4 IP and TCP headers
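The payload sizes used in these ping tests follow one rule: MTU minus the 20-byte IP header and the 8-byte ICMP echo header. A tiny helper makes that explicit:

```python
def ping_payload_for(mtu: int) -> int:
    """Ping payload (-s / -l value) that produces a packet of exactly the given MTU."""
    ip_header = 20    # IPv4 header
    icmp_header = 8   # ICMP echo header
    return mtu - ip_header - icmp_header

print(ping_payload_for(1500))  # 1472: test a standard Ethernet path
print(ping_payload_for(9000))  # 8972: test a jumbo-frame path
```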
Windows commands:
- View current interface MTU: `netsh interface ipv4 show subinterfaces`
- Set MTU on a named interface: `netsh interface ipv4 set subinterface "Ethernet" mtu=1420 store=persistent`
- Test PMTU with ping (`-f` sets the DF bit, `-l` sets the payload size): `ping -f -l 1472 8.8.8.8` (1472 + 28 = 1500)
For WireGuard specifically on Windows, the MTU is set in the tunnel configuration file under the `[Interface]` section as `MTU = 1420`. The WireGuard Windows app applies this when the tunnel is activated.
Related Calculators
- IPv4 Subnet Calculator – Network address, broadcast, host range
- IPv6 Subnet Calculator – Prefix sizes, EUI-64 addressing
- Bandwidth Calculator – Throughput and transfer time
Disclaimer
MTU overhead values shown are typical defaults and may vary between software versions, hardware implementations, and configuration options. WireGuard overhead, for example, depends on whether the outer tunnel uses IPv4 (20-byte header) or IPv6 (40-byte header). IPsec overhead varies by cipher suite, authentication algorithm, and whether NAT traversal is in use. Always validate MTU settings in your specific environment using path MTU discovery tools (ping with DF bit, tracepath, or MTR). This calculator is provided for planning purposes only.