Bandwidth: Easily underestimated and overlooked


Bandwidth needs are frequently underestimated

When you think of an application that uses a lot of bandwidth, your thoughts generally gravitate toward sites serving digital media content: movies, music, images, and so on. While that is certainly true, many might be surprised to hear that bandwidth is a common performance bottleneck in seemingly unlikely applications.

Take a wizard-based shopping cart/ordering process, for example. In these applications, the state of the order, along with the order details, is passed from one step to the next. As a result, the requests and responses grow in size with each step in the process. The response data for each page alone routinely exceeds 100KB – let alone any static assets associated with the page. Despite the typically longer think times in these scenarios, these applications are heavy bandwidth consumers overall. This was the case in a recent customer test, where a surprisingly low number of concurrent users maxed out the 1Gbit/second ISP link into the customer's datacenter – even though the customer had properly implemented a CDN to offload static resources, which the CDN was serving at a rate of 3Gbit/second under load. Had this customer not used a CDN, the ISP link for this bandwidth-heavy application would have supported only a fraction of the target users.
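To see why a link can saturate at a modest user count, it helps to run the numbers. The sketch below estimates per-user bandwidth demand and the resulting ceiling on concurrent users from just two inputs: the dynamic response size per step and the user think time. The 150KB page size and 20-second think time are illustrative assumptions, not figures from the customer test described above.

```python
# Back-of-the-envelope estimate of how many concurrent users an ISP link can
# sustain. The page size and think time are illustrative assumptions, not
# measurements from the customer test described above.

LINK_CAPACITY_BPS = 1_000_000_000   # 1 Gbit/second ISP link
PAGE_SIZE_BYTES = 150_000           # ~150KB of dynamic response data per wizard step
THINK_TIME_SEC = 20                 # time a user spends on each step before submitting

# Average demand per user: one page download every think-time interval.
bits_per_user_per_sec = (PAGE_SIZE_BYTES * 8) / THINK_TIME_SEC

# Concurrent users the link can carry before it saturates
# (assumes static assets are offloaded to a CDN, as in the example above).
max_concurrent_users = LINK_CAPACITY_BPS / bits_per_user_per_sec

print(f"Per-user demand: {bits_per_user_per_sec / 1000:.0f} kbit/second")
print(f"Link saturates at roughly {max_concurrent_users:,.0f} concurrent users")
```

Halve the think time or double the response size and that ceiling drops accordingly, which is exactly the pattern wizard-style applications exhibit as the order grows from step to step.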

Not only can applications themselves be heavy consumers of bandwidth, but application development platforms also affect bandwidth usage. Take, for example, ASP.NET Web Forms applications, which use viewstate to carry page state back and forth with every request. Depending on the application and the complexity of the state being maintained, the overhead added by viewstate can have a significant negative impact on performance.
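A quick way to gauge that overhead is to measure the size of the __VIEWSTATE hidden field that Web Forms embeds in each response. Here is a minimal sketch using Python's requests library; the URL is a placeholder you would swap for a page in your own application.

```python
# Measure how much of a page's payload is ASP.NET viewstate.
# The URL is a placeholder; point it at a page from your own application.
import re
import requests

url = "https://example.com/checkout/step2.aspx"  # hypothetical wizard page
resp = requests.get(url)
html = resp.text
total_bytes = len(resp.content)

# Web Forms emits viewstate as a hidden input named "__VIEWSTATE".
match = re.search(r'name="__VIEWSTATE"[^>]*value="([^"]*)"', html)
viewstate_bytes = len(match.group(1)) if match else 0

print(f"Total response: {total_bytes:,} bytes")
print(f"Viewstate:      {viewstate_bytes:,} bytes "
      f"({viewstate_bytes / max(total_bytes, 1):.0%} of the response)")
```

If the viewstate accounts for a large share of every response, that overhead is multiplied by every concurrent user on every page view.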

Why do performance teams overlook bandwidth as a bottleneck?

Clearly, sufficient bandwidth is an important part of the performance of any web application. So why don’t more performance teams pay closer attention to it? A number of common reasons come to mind:

(1) In some cases, performance teams overlook the issue because they simply don't know what their corporate bandwidth limit is. ISP links are maintained by operations or networking teams, and particularly in large corporations, performance team members may not have direct access to them. On top of that, the application tester may not know what other applications or websites share the same ISP link. In a recent test, the customer learned the hard way that the application under test shared the same 100Mbit/second ISP link as its parent company's corporate website. The lesson: communication with all stakeholders involved in the performance project is critical – especially the team that manages the application's ISP link.

(2) Performance teams often have not had the ability to run external tests at the scale their applications will see in production. In the past, they've tested on an internal 1Gbit/second network where, presumably, bandwidth wasn't a limitation. With SOASTA CloudTest, the actual path of user traffic – including the ISP link – is exercised, just as it will be when the application is under heavy load.

(3) Simple misjudgment of how much bandwidth is actually needed. Someone might hear that a 100Mbit/second or 1Gbit/second ISP link is available and assume that is plenty. It may or may not be; it depends on the application. Make sure you do your bytes-to-bits conversions, test at volume, and analyze how much data is actually flowing across your ISP link, as in the sketch below.
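The arithmetic behind that last point is simple but easy to skip. Here is a minimal sketch of the check, with placeholder traffic figures you would replace with numbers from your own load test or link monitoring.

```python
# Bytes-to-bits sanity check: is a "100 Mbit/second" link actually enough?
# The traffic figures below are placeholders; substitute data from your own test.

LINK_MBIT_PER_SEC = 100        # advertised link capacity (megabits per second)
DATA_TRANSFERRED_MB = 9_000    # megabytes observed during the test window (assumed)
TEST_WINDOW_SEC = 15 * 60      # 15-minute test window

link_capacity_mbyte_per_sec = LINK_MBIT_PER_SEC / 8      # 100 Mbit/s == 12.5 MB/s
observed_mbyte_per_sec = DATA_TRANSFERRED_MB / TEST_WINDOW_SEC

utilization = observed_mbyte_per_sec / link_capacity_mbyte_per_sec
print(f"Link capacity:  {link_capacity_mbyte_per_sec:.1f} MB/second")
print(f"Observed usage: {observed_mbyte_per_sec:.1f} MB/second ({utilization:.0%} of the link)")
```

With these example numbers the link is already running at roughly 80% utilization, which means the link, not the application tier, would be the first thing to give way as load increases.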

Ultimately, bandwidth usage is critical and can dictate the scalability of an application. It is just as important as a properly configured load balancer or a correctly sized application server heap. Overlook it, and your customer experience will suffer.
