Comparison of latency and throughput

This article compares latency and throughput in telecommunications. A common misunderstanding is that more throughput means a "faster" (lower-latency) connection. In many cases, however, the higher-throughput connection also has higher latency, and which one is "faster" depends on context and needs.

Definitions

Throughput 
The amount of information that can be transferred over a connection in a given period of time, usually measured in bits per second. Since there are 8 bits in a byte, dividing the bit rate by 8 gives the transfer rate in bytes per second. In practice, protocol overhead and other factors slow the transmission, so the actual data transfer rate will be lower; a good rule of thumb is to divide the bit rate by 10 instead. Note: because throughput is often increased by increasing the bandwidth, the two terms are often used synonymously.
Example: A connection with a signaling rate of 1.5 Mbit/s (1.5 million bits per second) gives a rough data transfer rate of (1.5 Mbit/s)/10 = 0.15 MB/s, or 150 KB/s. Note that these are decimal kilobytes (1000 bytes), not the binary "kilobytes" (1024 bytes, also called kibibytes) reported by many operating systems; since this is an approximation, the distinction is not important. See Binary prefix for more details.
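The rule of thumb above can be sketched as a small calculation. This is an illustration only; the factor of 10 is an approximation for overhead, not an exact formula:

```python
def approx_transfer_rate(bit_rate_bps):
    """Rule-of-thumb data transfer rate in bytes per second.

    Divides by 10 instead of 8 to roughly account for protocol
    overhead, as described above.
    """
    return bit_rate_bps / 10

# A 1.5 Mbit/s connection yields roughly 150,000 bytes/s,
# i.e. 150 KB/s in decimal kilobytes.
rate = approx_transfer_rate(1_500_000)
print(rate)  # 150000.0
```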
Latency 
The amount of time it takes for a response to arrive after a request is sent. It is measured as a simple time value; on the Internet, typically in milliseconds (1000 ms equals 1 second).

How latency and throughput interplay

Latency and throughput together determine the perceived speed of a connection, which can vary widely depending on your needs.

Viewing a web page over a 56 kbit/s modem (56,000 bits per second) from a server 4,800 km (~3,000 mi.) away works quite well over the Internet. Latency is fairly low (typically about a quarter of a second), and an average web page (around 30–100 kilobytes) transfers in roughly 5–20 seconds at the rule-of-thumb data rate of 5.6 KB/s.

However, transferring the contents of a DVD over a modem could take a week or more at this rate. Simply packing the DVD into an envelope and mailing it would be faster! (See Sneakernet.)

Using a T1 line (1.5 Mbit/s) with similar latency, you could download that web page in under a second, a significant improvement. Downloading a 5 GB DVD over the same connection, however, would still take about 7.4 hours.
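The 7.4-hour figure can be reproduced with a short calculation. This is a sketch that uses the raw signaling rate and decimal gigabytes; it ignores overhead, so a real transfer would take somewhat longer:

```python
def download_time_hours(size_bytes, bit_rate_bps):
    """Time to transfer size_bytes at the raw bit rate, in hours.

    Ignores protocol overhead and assumes the line runs at full rate.
    """
    bits = size_bytes * 8
    return bits / bit_rate_bps / 3600

# 5 GB (decimal) over a 1.5 Mbit/s T1 line:
t = download_time_hours(5_000_000_000, 1_500_000)
print(round(t, 1))  # 7.4
```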

The postal service is "faster" than the Internet

The postal service has a latency of about 3 days in most cases, but the amount of information that fits in one box (e.g. many DVD discs) is enormous.

Assume that you ship 500 DVDs (about 5 GB each, 2,500 GB in total) in a medium-sized box from Los Angeles to New York. Delivering that much data in 3 days corresponds to transferring about 9.6 MB of information every second, roughly 65 T1 lines' worth of throughput at the rule-of-thumb data rate of 0.15 MB/s per line!
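The effective "bandwidth" of the shipped box works out as follows. This is an illustrative sketch; the disc capacity and transit time are the round figures assumed above:

```python
def sneakernet_rate_mb_s(num_discs, gb_per_disc, days_in_transit):
    """Effective throughput of shipping discs, in decimal MB/s.

    Treats the whole transit time as one long transfer of the
    box's total contents.
    """
    total_bytes = num_discs * gb_per_disc * 1_000_000_000
    seconds = days_in_transit * 24 * 3600
    return total_bytes / seconds / 1_000_000

# 500 discs of 5 GB each, 3 days in the mail:
rate = sneakernet_rate_mb_s(500, 5, 3)
print(round(rate, 1))  # 9.6
```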

It has been reported that Netflix's DVD-by-mail service moved more information in an average day than the entire Internet. From the figures above, you can see why this is plausible.

Conclusion

This relationship between throughput and latency also explains why satellite Internet has not been very popular. Although the throughput of a satellite connection can be very high and very economical, the latency added by the round trip through the satellite (1–2 seconds) makes latency-critical Internet tasks such as network gaming a very bad experience, and is nearly intolerable for a VPN connection.

On the other hand, a DSL or cable connection can have excellent latency. For instance, the writer of this paragraph in Connecticut (whose home has DSL service) pinged the website of Oxford University in the UK, a great-circle distance of about 5,400 km, so the packets covered a round-trip distance of at least 10,800 km. If they travelled at the speed of light, the round trip would take about 36 milliseconds. The actual ping time was 97 milliseconds, less than three times the theoretical minimum. So although connection bandwidth may continue to increase dramatically in coming years, the laws of physics guarantee that the latency between this Connecticut residence and the UK can never be reduced by more than a factor of about three.
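The speed-of-light floor on latency can be computed directly. A sketch, assuming propagation at c in a vacuum (~300,000 km/s); real links are slower, since light in fibre travels at roughly two-thirds of c and routes are longer than the great circle:

```python
def light_round_trip_ms(one_way_km):
    """Theoretical minimum round-trip time at the speed of light
    in a vacuum, in milliseconds.
    """
    speed_of_light_km_s = 300_000
    return 2 * one_way_km / speed_of_light_km_s * 1000

# Connecticut to Oxford, about 5,400 km one way:
t = light_round_trip_ms(5400)
print(round(t))  # 36
```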

Similarly, video is currently a high-bandwidth, high-latency application. If every television broadcast in use were carried over the Internet instead, the network would buckle immediately: the infrastructure falls short of the required capacity by several orders of magnitude. Yet the high latency inherent in driving to your local video store and renting a movie is perfectly acceptable.

Understanding this key difference between latency and throughput makes it much easier to reason about what "speed" actually means for a given application.
