Caveats of Traditional Network Tools – Part II: Iperf

Posted on August 6, 2013

This is the second part of a series about the caveats of traditional network troubleshooting tools; the first part covered traceroute and mtr. In this article I will cover Iperf, a popular tool for measuring available bandwidth between two endpoints.

Bottleneck Capacity vs. Available Bandwidth

The network capacity of a path is the maximum sustained rate at which packets can be sent from a client C to a server S. Capacity is limited by the interface speed of the slowest link in the path, the bottleneck link. Because capacity depends mostly on the transmission rates of physical network interfaces, it should be fairly stable over time unless there are routing or equipment changes. The available bandwidth is the share of capacity that is not being used by cross-traffic in the path; in the absence of cross-traffic, available bandwidth converges to capacity.

Figure 1: Network capacity in this path is 50Mbps.

Given the network path shown above, C-B-D-S, network capacity is defined by the speed of the slowest link in the path, in this case link B-D at 50Mbps. If 20Mbps of cross-traffic is flowing through B-D, then the end-to-end available bandwidth will be 30Mbps (assuming there is no other limiting cross-traffic in the path).
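This arithmetic can be sketched in a few lines of Python. Note that only the 50Mbps bottleneck figure comes from the example above; the speeds assumed for the C-B and D-S links are hypothetical, chosen just to illustrate the computation:

```python
# Hypothetical per-link speeds (Mbps) for the C-B-D-S path; only B-D's
# 50Mbps figure comes from the example above, the rest are assumed.
link_speeds = {"C-B": 100, "B-D": 50, "D-S": 100}

# Bottleneck capacity: the speed of the slowest link in the path.
capacity = min(link_speeds.values())

# Available bandwidth: capacity minus cross-traffic on the bottleneck.
cross_traffic = 20  # Mbps of cross-traffic going through B-D
available = capacity - cross_traffic

print(capacity, available)  # 50 30
```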


Iperf was originally developed by NLANR/DAST as a tool to measure maximum TCP and UDP bandwidth performance. By default it uses TCP and sends blocks of 128 KBytes of data between the Iperf client and the Iperf server, measuring the time it takes to transfer each block and computing the average throughput of the TCP connection over a period of time (10 seconds by default). In most cases the results reported by Iperf are close to the real available bandwidth; however, the results of the test can't always be taken at face value:

  • TCP throughput is not available bandwidth: Iperf returns the rate at which data is reliably transferred from C to S over a single TCP connection (by default). Even though modern TCP implementations recover quickly from loss events, even in LFNs (Long Fat Networks, think 1GbE interfaces), TCP is still a conservative protocol and may underestimate the rate at which data can be sustainably transmitted end-to-end. Worse, the TCP Advertised Window from S can make a substantial difference in the throughput, which can be mitigated by increasing the default window. The maximum window size in Ubuntu 12.04 is 1,031,920 bytes by default, which means that unless you tweak the kernel parameters you won't get more than roughly 82 Mbit/s on a single TCP connection over a 100 ms RTT path (remember that TCP throughput is bounded by WindowSize/RTT).
  • TCP throughput is not bottleneck capacity: Since Iperf measures TCP throughput, it’s subject to the conditions of cross traffic at the time of the measurements. If there’s sustained cross traffic, Iperf will never approach the bottleneck capacity rate.
  • First bottleneck is the CPU of the client machine: Under high CPU contention, the sending process can be preempted by other processes, in which case the client will send packets at a slower rate than its local interface is able to, yielding lower TCP throughput values. One way to mitigate this is to spawn multiple parallel connections in Iperf.
  • UDP traffic is often throttled in the network: If you get much lower throughput values with UDP traffic than with TCP traffic, don't be surprised. UDP traffic is often rate-limited by network devices because it lacks inherent flow control (unlike TCP).
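The WindowSize/RTT ceiling mentioned in the first bullet is easy to sanity-check with a quick calculation:

```python
# TCP throughput on a single connection is bounded by WindowSize / RTT.
window_bytes = 1_031_920  # default maximum TCP window cited for Ubuntu 12.04
rtt = 0.100               # 100 ms round-trip time

ceiling_mbps = window_bytes * 8 / rtt / 1e6  # bytes -> bits, then per second
print(round(ceiling_mbps, 1))  # 82.6
```

Note how sensitive this bound is to RTT: the same window over a 10 ms path would allow ten times the throughput.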

In sum, Iperf can provide a good first estimate of a lower bound on available bandwidth, but several factors can affect the measurement and need to be taken into account.
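For reference, here is how the mitigations discussed above map onto iperf's command-line flags. The server hostname is a placeholder, and the flags shown (-t, -w, -P) are those of classic iperf2; treat this as a sketch rather than a tuned recipe:

```shell
# On the server S (default TCP mode):
iperf -s

# On the client C: plain 10-second TCP test (hostname is a placeholder).
iperf -c iperf-server.example.com -t 10

# Request a larger TCP window to push past the WindowSize/RTT ceiling:
iperf -c iperf-server.example.com -w 4M

# Spawn parallel connections to mitigate single-flow and client CPU limits:
iperf -c iperf-server.example.com -P 4
```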

What can you do if you can’t instrument the server?

If you don’t have control over the server (e.g. a third-party app), you can try to:

  • Use some other tool that sends ICMP Echo Request packets: Take some precautions with ICMP, though: 1/ with ICMP you are measuring the bottleneck in both the client-to-server and server-to-client directions (the minimum of the two bottlenecks), mainly because the packets reflected by the server are the same size as the ones hitting it, so saturation can occur in either direction; 2/ ICMP traffic, like UDP, is often throttled by network devices.
  • Download an object from the server using HTTP (e.g. with wget): This has similar limitations to Iperf, since it measures TCP throughput, and it requires a reasonably sized object; it also measures only downstream TCP throughput.
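As a rough illustration of the HTTP approach (the URL is a placeholder; any reasonably large object hosted on the server will do):

```shell
# Download a large object and discard the body; wget prints the average
# transfer rate on completion, a rough downstream TCP throughput figure.
wget -O /dev/null http://server.example.com/large-object.bin
```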

As part of ThousandEyes Deep Path Analysis, we have developed technology to compute bottleneck capacity and available bandwidth at Gigabit speeds without requiring any instrumentation on the server side, using TCP packets (thereby bypassing the rate throttling seen with ICMP/UDP traffic). Stay tuned for forthcoming posts about this capability.