TCP Throughput Calculator
Calculate the maximum achievable TCP throughput for a given window size and round-trip time, and discover whether your TCP window or your link speed is the real bottleneck.
Max Throughput = (Window Size × 8) / RTT | BDP = Bandwidth × RTT
Why TCP Throughput Is Not the Same as Link Speed
When an engineer provisions a 1 Gbps WAN circuit, the expectation is often that a single TCP connection can push data at 1 Gbps. In practice, a single TCP flow to a host 140 ms away — a typical transatlantic latency — may only achieve 3–4 Mbps. The link speed is irrelevant if the protocol governing the flow cannot keep the pipe filled.
TCP is a reliable, ordered protocol. The sender may keep only a limited amount of unacknowledged data in flight — a limit called the receive window. If that window is exhausted before an acknowledgement arrives, the sender must stop and wait. On a high-latency path, the sender spends most of its time waiting rather than transmitting. The link sits idle even though it has spare capacity — the classic "long fat network" (LFN) problem.
Understanding this distinction is critical when diagnosing slow transfer speeds. A traceroute showing uncongested hops and a ping showing low packet loss does not mean your file transfer will be fast. The bottleneck may be entirely inside the TCP stack.
The TCP Window: Flow Control Explained
TCP's receive window is a flow control mechanism. The receiving host advertises how many bytes of unacknowledged data it is willing to buffer — the "window". The sender may not transmit beyond that limit. Once the window is full, the sender pauses and waits for an acknowledgement (ACK) that advances the window forward, allowing more data to be sent.
On a local network with sub-millisecond RTT, this works beautifully: the sender fills the window, the ACK returns almost instantly, and the window advances before the sender has had time to pause. The link stays saturated. On a cross-continental or satellite link with hundreds of milliseconds of RTT, the same window fills up quickly, the sender stops, and the ACK takes a long time to return — leaving the link idle for a substantial fraction of each round trip.
The receiver window is not the only constraint. The congestion window (CWND), maintained by the sender, also limits in-flight data during slow-start and congestion-avoidance phases. For sustained bulk transfers, the receive window typically becomes the binding constraint, which is why tuning receiver-side buffers matters more than congestion control tweaking on well-behaved paths.
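The stop-and-wait behaviour above is easy to quantify: transmitting one full window takes window × 8 / bandwidth seconds, and whatever remains of the round trip is idle time. A minimal Python sketch (the function name and example figures are illustrative, not from any standard library):

```python
def link_utilisation(window_bytes: int, bandwidth_bps: float, rtt_s: float) -> float:
    """Fraction of each round trip the sender spends transmitting.

    Sending one full window takes window_bytes * 8 / bandwidth_bps seconds;
    if that is shorter than the RTT, the link sits idle for the remainder.
    """
    transmit_s = window_bytes * 8 / bandwidth_bps
    return min(1.0, transmit_s / rtt_s)

# 64 KiB window, 1 Gbps link, 140 ms RTT: the sender transmits for
# ~0.52 ms, then waits ~139.5 ms for the ACK to advance the window.
util = link_utilisation(65_535, 1e9, 0.140)
print(f"{util:.2%}")  # well under 1% utilisation
```

The same function returns 1.0 whenever the window meets or exceeds the path's BDP, which is exactly the "pipe stays full" condition discussed in the next section.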
Bandwidth-Delay Product (BDP)
The Bandwidth-Delay Product is the amount of data "in flight" on a network path when the pipe is fully utilised. Imagine a garden hose: the bandwidth is the diameter, the RTT is the length, and the BDP is the volume of water the hose holds at any instant. To keep the hose full, your TCP window must be at least as large as the BDP.
BDP in bytes = Link Speed (bps) × RTT (seconds) / 8
For a 1 Gbps link with 140 ms RTT: BDP = 1,000,000,000 × 0.14 / 8 = 17,500,000 bytes — about 16.7 MiB. The default 65,535-byte TCP window is less than 0.4% of that. The sender will exhaust its window after transmitting 64 KiB — which takes only about 0.5 ms at 1 Gbps — then spend the remaining ~139.5 ms of each RTT waiting. This is why the theoretical throughput for a 65,535-byte window over 140 ms RTT is only about 3.7 Mbps, not 1 Gbps.
BDP calculations answer a practical question: "What is the minimum TCP window needed to fully utilise this path?" If your window is smaller than the BDP, throughput is window-limited. If your window is larger than the BDP, the link is the limit.
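That question can be answered directly from the formulas above. A small sketch (the helper names are our own, not an established API):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes in flight when the pipe is full."""
    return bandwidth_bps * rtt_s / 8

def bottleneck(window_bytes: int, bandwidth_bps: float, rtt_s: float) -> str:
    """Report whether the TCP window or the link caps throughput."""
    if window_bytes < bdp_bytes(bandwidth_bps, rtt_s):
        return "window-limited"
    return "link-limited"

print(f"{bdp_bytes(1e9, 0.140):,.0f}")     # 17,500,000 bytes, ~16.7 MiB
print(bottleneck(65_535, 1e9, 0.140))      # window-limited
print(bottleneck(33_554_432, 1e9, 0.140))  # link-limited
```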
TCP Window Scaling (RFC 7323)
The original TCP specification (RFC 793, 1981) defined the window field as 16 bits, limiting the receive window to 65,535 bytes — a perfectly adequate size when networks operated at 10 Mbps with sub-millisecond RTTs. By the early 1990s, networks had outpaced this limit, and RFC 1323 (later superseded by RFC 7323) introduced the TCP Window Scale option.
The window scale option allows both endpoints to negotiate a scale factor (0–14) during the TCP handshake. The actual window size is then the 16-bit window field value left-shifted by the scale factor. At the maximum scale factor of 14, a window field value of 65,535 expands to 65,535 × 2^14 = 1,073,725,440 bytes — approximately 1 GiB. This is more than sufficient to saturate a 10 Gbps link at transatlantic latency.
Window scaling must be negotiated in the SYN packets. If either endpoint does not advertise the option, no scaling is applied and the 64 KiB limit remains in effect. Most modern operating systems enable window scaling by default, but some firewalls and middleboxes strip the option, silently degrading performance. If you observe dramatically lower-than-expected throughput across a specific firewall, checking whether it mangles TCP options is a worthwhile diagnostic step.
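The shift arithmetic is simple enough to check directly. The sketch below (helper names are illustrative) computes the effective window from a scale factor, and the smallest scale factor needed to cover a given BDP:

```python
import math

MAX_WINDOW_FIELD = 65_535  # 16-bit window field (RFC 793)

def scaled_window(window_field: int, scale: int) -> int:
    """Effective window: the 16-bit field left-shifted by the scale factor."""
    return window_field << scale

def min_scale_factor(target_window_bytes: int) -> int:
    """Smallest RFC 7323 scale factor (0-14) that can express the target."""
    scale = max(0, math.ceil(math.log2(target_window_bytes / MAX_WINDOW_FIELD)))
    if scale > 14:
        raise ValueError("target exceeds the RFC 7323 maximum (~1 GiB)")
    return scale

print(scaled_window(65_535, 14))     # 1,073,725,440 bytes, ~1 GiB
print(min_scale_factor(17_500_000))  # 9: 65,535 << 9 is ~32 MiB, covering a 16.7 MiB BDP
```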
Why Your 1 Gbps Link Feels Slow to the US
Let us work through the canonical example. You have a 1 Gbps leased line in London. You are transferring a 10 GiB database backup to a New York server. The measured RTT is 140 ms. Your server is using a default Linux TCP receive buffer, which gives a window of approximately 87,380 bytes (85 KiB) for a fresh connection.
Max TCP throughput = (87,380 bytes × 8 bits) / 0.140 sec
= 699,040 / 0.140
= 4,993,143 bps
≈ 4.99 Mbps
Your 1,000 Mbps link is only delivering 5 Mbps — 0.5% of its capacity — because the TCP window is 200× too small for the path's BDP. The required window to saturate the link:
Required window = 1,000,000,000 bps × 0.140 sec / 8
= 17,500,000 bytes ≈ 16.7 MiB
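The same arithmetic in code, reproducing the example's numbers (function names are illustrative):

```python
def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Single-stream ceiling: at most one full window delivered per round trip."""
    return window_bytes * 8 / rtt_s

def required_window_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Window needed to keep the pipe full: the path's BDP."""
    return bandwidth_bps * rtt_s / 8

# The London-New York example: 87,380-byte window, 140 ms RTT, 1 Gbps link.
print(f"{max_throughput_bps(87_380, 0.140) / 1e6:.2f} Mbps")    # ~4.99 Mbps
print(f"{required_window_bytes(1e9, 0.140) / 2**20:.1f} MiB")   # ~16.7 MiB
```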
Raising the maximum value in net.ipv4.tcp_rmem to 33,554,432 bytes (32 MiB) on the receiving server allows the kernel to advertise a much larger window, and — with window scaling enabled — throughput climbs to approach link capacity. Tools like iperf3 and nuttcp include options to specify the socket buffer size precisely so you can test this before tuning production systems.
Tuning TCP Buffer Sizes on Linux
Linux exposes TCP buffer sizing through the /proc/sys/net sysctl interface. The key knobs for bulk throughput on high-latency paths are:
- net.core.rmem_max — The maximum socket receive buffer size the kernel will allocate for any socket, in bytes. The default is often 212,992 bytes (208 KiB). Raise it to at least your path's BDP, rounded up to the next power of two.
- net.core.wmem_max — The same limit for the send buffer. Must also be increased for sending large volumes.
- net.ipv4.tcp_rmem — Three values: minimum, default, and maximum receive buffer per TCP socket. The default (typically 87,380 bytes) is what a new connection starts with before auto-tuning kicks in.
- net.ipv4.tcp_wmem — The same triple for the send buffer.
For a transatlantic 10 Gbps path (RTT ≈ 140 ms), recommended values:
# /etc/sysctl.d/99-tcp-tuning.conf
net.core.rmem_max = 134217728          # 128 MiB
net.core.wmem_max = 134217728          # 128 MiB
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.ipv4.tcp_window_scaling = 1        # should already be 1
Apply with sysctl -p /etc/sysctl.d/99-tcp-tuning.conf. Linux's TCP auto-tuning (enabled by net.ipv4.tcp_moderate_rcvbuf, on by default) will grow buffers up to the maximum based on measured RTT, so you do not always need to set the default value high — just ensure the ceiling is large enough.
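Applications can also request larger buffers per socket instead of relying on system defaults. A minimal sketch using Python's standard socket module; note that the kernel silently caps the request at net.core.rmem_max, and that explicitly setting SO_RCVBUF disables Linux's per-socket receive auto-tuning, so read the value back rather than assuming the request was honoured:

```python
import socket

BUF_SIZE = 32 * 1024 * 1024  # illustrative; size this to your path's BDP

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request a 32 MiB receive buffer before connecting. Linux doubles the
# requested value for bookkeeping overhead and caps it at net.core.rmem_max.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"requested {BUF_SIZE:,} bytes, kernel granted {granted:,}")
sock.close()
```

This is the same mechanism iperf3 and nuttcp use when you pass their buffer-size options, which is why they are useful for validating a tuning change before rolling it out.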
Common Bottleneck Scenarios
TCP window constraints appear across many real-world scenarios beyond the obvious WAN transfer case:
- Satellite links (LEO/GEO) — Low-Earth Orbit constellations (Starlink) typically see 30–60 ms RTT, making moderate window sizes sufficient. Geostationary satellites (600+ ms RTT) remain severely window-limited unless application-layer acceleration or UDP-based protocols are used.
- Storage replication over WAN — Database sync and volume replication tools (rsync, ZFS send, MySQL binlog replication) are often single-threaded TCP streams. Even with a 10 Gbps link, a single-stream rsync to a remote site may transfer at only 50 Mbps. The fix is either to tune buffers as above, or to use multi-threaded tools like rclone or bbcp that open many parallel streams to bypass window limitations.
- Database connections — OLAP queries returning large result sets over a WAN can stall while the client slowly reads the result buffer. Connection poolers close to the database server and application-level streaming cursors mitigate this.
- Backup software — Commercial backup agents frequently open a single TCP stream per job. For remote backups over high-latency links, this architectural choice becomes the dominant throughput limiter regardless of allocated bandwidth.
- Containerised workloads — Containers inherit their host's sysctl limits unless overridden at the pod level. A well-tuned host kernel does not automatically tune containers; you may need to configure net.ipv4.tcp_rmem via Kubernetes securityContext.sysctls for affected workloads.
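The parallel-stream workaround mentioned in the scenarios above works because each TCP connection carries its own window, so aggregate throughput scales roughly linearly with stream count until the link itself becomes the limit. An idealised sketch (function name and stream counts are illustrative; real streams also contend for the link and for congestion-control fairness):

```python
def aggregate_throughput_bps(streams: int, window_bytes: int,
                             rtt_s: float, link_bps: float) -> float:
    """Aggregate of N window-limited streams, capped at link capacity.

    Idealised model: each stream delivers one full window per RTT and
    the streams do not interfere with one another.
    """
    per_stream = window_bytes * 8 / rtt_s
    return min(streams * per_stream, link_bps)

# One 85 KiB-window stream over 140 ms manages ~5 Mbps; sixteen parallel
# streams (as rclone or bbcp might open) lift the aggregate ~16x.
print(f"{aggregate_throughput_bps(1, 87_380, 0.140, 1e9) / 1e6:.1f} Mbps")
print(f"{aggregate_throughput_bps(16, 87_380, 0.140, 1e9) / 1e6:.1f} Mbps")
```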
Related Calculators
- → SLA Uptime Calculator — Convert SLA percentages to downtime budgets
- → Subnet Calculator — IPv4 network and host addressing
Disclaimer
Results are based on the theoretical maximum for a single TCP stream with the given window size and RTT. Real-world throughput is also affected by congestion control state, retransmits, receiver processing speed, application buffering, and network queuing. Always validate with tools like iperf3 before committing to a capacity plan. Sysctl values shown are examples — test in a non-production environment before applying to live systems.