
Here’s why more bandwidth isn’t a magic bullet for web performance




One of the most common arguments I hear when people rationalize why things like page bloat aren’t that big a performance issue is that ever-evolving networks mitigate the impact. In this post, I’m going to demonstrate that increasing bandwidth by as much as 1233% makes pages only 55% faster. In other words, faster networks aren’t the performance cure-all that some folks assume they are.

(First, a hat tip to the folks at NCC Group, who came up with this simple test a couple of years back. And another hat tip to Mike Belshe, who was writing about this back in 2010. I liked both these posts when they came out, and I’ve referred people to them many times. My goal in this post is to recreate their process to demonstrate that the findings still stand.)

Approach

Using a site that’s already enviably fast by most people’s standards (in this case, Etsy.com), I tested its load time across a variety of connections. I used WebPagetest, a synthetic testing tool that can simulate a range of realistic connection speeds and latencies.

I tested the Etsy home page nine times each across the following connection types:

*RTT = Round Trip Time: the amount of time it takes for the host server to receive, process, and respond to a request for a page resource (images, CSS files, etc.). “Latency” is another term for this delay.
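If you want to script this kind of test run yourself, WebPagetest exposes a public API for submitting tests. The sketch below is a rough illustration rather than my exact setup: the endpoint and parameters reflect my reading of the WebPagetest API docs, and the connectivity profile names are assumptions you should check against whatever labels your WebPagetest location actually offers.

```python
# A minimal sketch of scripting the test runs against the WebPagetest API.
# The endpoint and parameters (runtest.php, url, k, location, runs, fvonly, f)
# reflect my reading of the public API docs; the connectivity profile names
# in CONNECTIONS are assumptions -- verify them against your WebPagetest instance.
import requests

API_KEY = "YOUR_WPT_API_KEY"       # placeholder
TEST_URL = "https://www.etsy.com/"
RUNS = 9                           # nine runs per connection type, as in this post

CONNECTIONS = ["FIOS", "Cable", "DSL", "3GFast", "3GSlow"]  # assumed profile labels

def submit_test(connectivity):
    """Submit one batch of runs and return the URL of the results page."""
    resp = requests.get(
        "https://www.webpagetest.org/runtest.php",
        params={
            "url": TEST_URL,
            "k": API_KEY,
            "location": f"Dulles:Chrome.{connectivity}",  # assumed location:browser.profile format
            "runs": RUNS,
            "fvonly": 1,   # first view only
            "f": "json",   # ask for a JSON response
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["userUrl"]

if __name__ == "__main__":
    for connection in CONNECTIONS:
        print(connection, submit_test(connection))
```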

Results

I plotted the median load times on a graph alongside the bandwidth numbers for each connection type. At a glance, you can see that the bars indicating load times are not nearly as dramatically stacked as the bars indicating bandwidth.

Bandwidth versus load time

The graph below is another way of looking at these numbers. If people’s supposition that bandwidth improvements correlate to proportionately faster load times were correct, then the two sides of this second graph would be more or less mirror images of one another. Clearly, they are not.
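If you want to recreate a rough version of this comparison with your own numbers, here is a minimal matplotlib sketch. It uses only the bandwidth and median load-time figures quoted in the observations below; the slow 3G profile is left out because its exact bandwidth isn’t listed in this post.

```python
# A rough recreation of the bandwidth-versus-load-time comparison, using the
# figures quoted in the observations below. The slow 3G profile is omitted
# because its exact bandwidth isn't stated in this post.
import matplotlib.pyplot as plt

connections = ["FIOS", "Cable", "DSL", "3G - Fast"]
bandwidth_mbps = [20, 5, 1.5, 1.6]            # download bandwidth
median_load_s = [2.554, 3.386, 5.675, 7.518]  # median load time (seconds)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.bar(connections, bandwidth_mbps)
ax1.set_title("Bandwidth (Mbps)")
ax2.bar(connections, median_load_s)
ax2.set_title("Median load time (s)")
fig.suptitle("If bandwidth alone drove load time, these charts would mirror each other")
plt.tight_layout()
plt.show()
```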

Observations

1. While download bandwidth is 300% greater for FIOS (20 Mbps) than it is for cable (5 Mbps), the median load time over FIOS (2.554s) is only 24.6% faster than the median load time over cable (3.386s).

2. The distinction becomes even more pronounced when you compare DSL to FIOS. Bandwidth is 1233% greater for FIOS than it is for DSL (20 Mbps versus 1.5 Mbps), yet median load time over FIOS is only 55% faster than over DSL (2.554s versus 5.675s).

3. While DSL and ‘3G – Fast’ have comparable bandwidth (1.5 Mbps versus 1.6 Mbps), the 3G connection is 32.5% slower (5.675s versus 7.518s). The key differentiator between these two connections isn’t bandwidth; it’s latency. The 3G connection has an RTT of 150ms, compared to the DSL RTT of 50ms. (That reminds me of the great post It’s the Latency, Stupid, which has been kicking around for the past two decades and is still worth sharing.)

4. You can see the combined impact of latency and bandwidth when you compare the fast and slow 3G connections. The fast connection has 105% greater bandwidth than the slow connection, yet the median page load is only 38.5% faster (7.518s versus 12.232s). (The quick arithmetic behind all of these percentages is sketched below.)
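Here is that arithmetic as a minimal sketch, using the median load times and bandwidth figures from this test, in case you want to rerun the comparisons against your own results.

```python
# Quick arithmetic behind the percentage comparisons in the observations above.
def percent_greater(larger, smaller):
    """How much bigger `larger` is than `smaller`, as a percentage."""
    return (larger - smaller) / smaller * 100

def percent_faster(slower_s, faster_s):
    """Reduction in load time relative to the slower connection."""
    return (slower_s - faster_s) / slower_s * 100

def percent_slower(faster_s, slower_s):
    """Increase in load time relative to the faster connection."""
    return (slower_s - faster_s) / faster_s * 100

median_s = {"FIOS": 2.554, "Cable": 3.386, "DSL": 5.675,
            "3G-Fast": 7.518, "3G-Slow": 12.232}
mbps = {"FIOS": 20, "Cable": 5, "DSL": 1.5, "3G-Fast": 1.6}

print(percent_greater(mbps["FIOS"], mbps["Cable"]))              # 300%   more bandwidth (FIOS vs. cable)
print(percent_faster(median_s["Cable"], median_s["FIOS"]))       # ~24.6% faster
print(percent_greater(mbps["FIOS"], mbps["DSL"]))                # ~1233% more bandwidth (FIOS vs. DSL)
print(percent_faster(median_s["DSL"], median_s["FIOS"]))         # ~55%   faster
print(percent_slower(median_s["DSL"], median_s["3G-Fast"]))      # ~32.5% slower (3G Fast vs. DSL)
print(percent_faster(median_s["3G-Slow"], median_s["3G-Fast"]))  # ~38.5% faster (3G Fast vs. 3G Slow)
```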

Takeaway

Promises of ever-greater bandwidth are good PR. The public and the media latch on to these promises, because they sound great. “Double the bandwidth” sounds fantastic if you believe that “double the bandwidth” means “twice as fast”. (I hope I’ve proved that you shouldn’t believe this.)

While additional bandwidth has a definite positive impact on performance, it’s not the cure-all that some folks suppose it is. A well-rounded performance solution should address latency (e.g. use a content delivery network) as well as optimize pages at the front end (e.g. compress and consolidate resources, leverage the browser cache).
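As a quick sanity check on the front-end side, a few lines of Python can flag whether a page and its static assets are being compressed and cached at all. The URLs below are placeholders; swap in your own pages and resources.

```python
# A minimal sketch for spot-checking two front-end basics mentioned above:
# text compression and browser caching. The URLs are placeholders.
import requests

def check_headers(url):
    resp = requests.get(url, timeout=30)
    encoding = resp.headers.get("Content-Encoding", "none")
    cache = resp.headers.get("Cache-Control", "not set")
    print(url)
    print(f"  Content-Encoding: {encoding}  (text resources should be gzip/brotli compressed)")
    print(f"  Cache-Control:    {cache}  (static assets should be cacheable)")

for url in ["https://www.example.com/",
            "https://www.example.com/static/app.css"]:  # placeholder URLs
    check_headers(url)
```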

I encourage you to conduct some quick and dirty tests like this on your own pages. You can use WebPagetest’s default settings for bandwidth and latency, or you can customize those numbers (click the ‘Custom’ option in the connection pulldown menu) with what you know about your users’ connections via your real user monitoring (RUM) data. You’ll gain some valuable insights into how your site performs for real users, and you’ll see why throwing more bandwidth at the problem isn’t enough.
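If you go the custom route via the API instead of the UI, the same runtest.php endpoint from the earlier sketch accepts custom connectivity values. The parameter names below (bwDown, bwUp, latency, plr) and the ‘custom’ profile label reflect my reading of the WebPagetest API docs, and the RUM-derived figures are placeholders to replace with your own medians.

```python
# A sketch of submitting a WebPagetest run with custom connectivity derived
# from RUM data. Parameter names (bwDown/bwUp in Kbps, latency in ms, plr as
# packet-loss percent) reflect my reading of the API docs -- verify them
# against the current documentation before relying on this.
import requests

API_KEY = "YOUR_WPT_API_KEY"   # placeholder

# Placeholder figures -- substitute the medians you see in your RUM data.
rum_download_kbps = 6000
rum_upload_kbps = 1500
rum_rtt_ms = 80

resp = requests.get(
    "https://www.webpagetest.org/runtest.php",
    params={
        "url": "https://www.example.com/",   # placeholder URL
        "k": API_KEY,
        "location": "Dulles:Chrome.custom",  # assumed label for custom connectivity
        "bwDown": rum_download_kbps,
        "bwUp": rum_upload_kbps,
        "latency": rum_rtt_ms,
        "plr": 0,                            # packet loss, percent
        "runs": 9,
        "f": "json",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["userUrl"])
```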


About the Author

Tammy Everts


Tammy has spent the past two decades obsessed with the many factors that go into creating the best possible user experience. As senior researcher and evangelist at SOASTA, she explores the intersection between web performance, UX, and business metrics. Tammy is a frequent speaker at events including IRCE, Shop.org Summit, Velocity, and Smashing Conference. She is the author of 'Time Is Money: The Business Value of Web Performance' (O'Reilly, 2016).
