The Performance Beacon

The web performance, analytics, and optimization blog

Four big lessons learned from seven years of production testing in the cloud with SOASTA and Amazon

I finally had a chance to meet my counterpart at AWS for lunch last week. With both of us on the road every week, aligning our travel schedules to be in the same city is something we had been trying to pull off since February. Well, we probably should have started by asking, "So, where is your home office located?" It turns out we both live in the same town, just a couple of miles from each other. He is the only AWS employee in Florida. I was the first one for SOASTA. (We have three now.)

The town? Gainesville, Florida. Home of the Florida Gators. (The Gator Nation is everywhere.) Now you know why neither of us thought to ask that question first: Gainesville is not exactly a hotbed of technology. Lesson learned for sure.

Our discussion centered on how SOASTA uses AWS for many of its core offerings, and has for quite a while. Our flagship product, CloudTest, has been executing performance tests in the cloud since 2008. (Yes, SOASTA was the pioneer of cloud-based performance testing.) As we talked about how mainstream testing in production has become, and about the volume of testing that has been, and continues to be, executed, we both thought it would be a great idea to tell the story to AWS followers and to others who have not yet dipped their toes into the cloud, and to give them a real-world, large-scale example that has gone mainstream.

Cloud computing changes the rules of testing

Performance testing has long been a hidden process: hidden behind the firewall and managed by a Quality Assurance (QA) team that sat somewhere between development and production. It was an inexact science. Testing consisted of running load against a scaled-down version of production (typically 10%-25% of production capacity, though not always) that still hosted other, unrelated applications, with configuration settings that usually came from development rather than from the IT operations team that owned the production environment.

Performance tests were based on guesswork and logs. Scripts were written in code, just like the software the QA team was trying to test. And tests were run blindly. That is, they were run until either the desired load was reached or the QA environment under test came crashing to a halt. Typically the latter happened more often than the former. (This is where the phrase "We don't test in production" originated. Would you test in production if one of the possible outcomes was your production environment crashing to a halt and killing your revenue stream?)

And none of the performance test data was analyzed live. Everything was analyzed after the test. Why? Because the data consisted of log files and spreadsheets.

The “dirty little secret” of application testing

Online application performance is critical. This we all know now. Yet the "dirty little secret" of the process was that applications were rarely performance tested. When performance testing was done, it was done with the hidden process described above, and the results were predictable: it did not deliver the answers that leadership needed about application performance. Simply put, no matter how you slice it, a lab/QA environment is not a production environment. Running a few hundred virtual users in that "hidden" environment tells you nothing about whether the application can handle twenty times that load, or more, in actual production.

This all changed with the introduction of AWS in 2006. Shortly thereafter, SOASTA (also founded in 2006) began building the first solution designed for testing in production, and introduced CloudTest in 2008. Needless to say, this was a game changer for the industry.

Why? Because until SOASTA CloudTest came along and leveraged the power of AWS, the most common tools used for web application testing were never designed for this purpose. In fact, none of them were. From their technology to their consulting-heavy test methodologies, these legacy solutions were built and architected for an older time: client/server architecture. The era of "the dirty little secret".

SOASTA built CloudTest for the express purpose of executing performance testing in a production environment, from the cloud.

There are four critical components for being able to test in production from the cloud

1. You need real-time analytics in your test tool

The first critical component of running a large-scale test with live customers on the system is having real-time analytics in your test tool. With up-to-the-second information about site performance, you can quickly tell whether the site has started to degrade or has become completely unresponsive. CloudTest answers the mail here in a big way.
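To make the idea concrete, here is a minimal sketch (in Python, and not CloudTest's actual API) of what live analysis during a test looks like: each second's response-time samples are folded into a rolling window, and the operator is alerted the moment the 95th percentile crosses a threshold. The function name and threshold are hypothetical.

```python
# Minimal sketch of real-time test analytics (illustrative only, not CloudTest's API):
# watch a rolling window of response times while the test runs and flag degradation
# immediately, instead of crunching log files after the test is over.
from collections import deque
from statistics import quantiles

WINDOW_SECONDS = 30          # how many seconds of samples to keep
P95_THRESHOLD_MS = 2000      # hypothetical "the site is hurting" threshold

window = deque(maxlen=WINDOW_SECONDS)   # each entry holds one second's samples

def on_second_of_results(samples_ms):
    """Called once per second with that second's response times, in milliseconds."""
    window.append(samples_ms)
    flat = [t for second in window for t in second]
    if len(flat) < 20:
        return False                     # not enough data for a stable percentile yet
    p95 = quantiles(flat, n=20)[-1]      # 95th percentile of the rolling window
    if p95 > P95_THRESHOLD_MS:
        print(f"ALERT: rolling p95 is {p95:.0f} ms; the site may be degrading")
        return True                      # the operator (or automation) can abort
    return False
```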

2. The product needs a good kill switch

Pressing stop or abort in a running CloudTest will stop the load immediately. Bandwidth, concurrent connections, threads in use, and other typical pinch-points will all drop back down to normal. With real-time analytics and the kill switch, you can stop a test as soon as you suspect customers may be impacted. If the test tool takes a long time to kill a test, this poses a serious risk to any production environment. CloudTest is specifically designed to accomplish a near immediate stop.
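The essence of a kill switch is simple: every load generator checks a shared stop signal, so the moment abort is pressed, no new requests go out. Here is a minimal sketch of that pattern; it illustrates the concept only and is not how CloudTest itself is implemented.

```python
# Minimal sketch of a load-test "kill switch" (a pattern, not CloudTest internals):
# every worker checks a shared stop event before sending a request, so setting the
# event drains the generated load within a request or two.
import threading
import time

stop_event = threading.Event()

def send_request(worker_id):
    time.sleep(0.1)                      # stand-in for one HTTP request to the target

def load_worker(worker_id):
    while not stop_event.is_set():       # no new requests once the event is set
        send_request(worker_id)

def abort_test(workers):
    stop_event.set()                     # the kill switch
    for w in workers:
        w.join(timeout=5)                # let in-flight requests finish
    print("Load stopped; threads and connections released.")

workers = [threading.Thread(target=load_worker, args=(i,)) for i in range(50)]
for w in workers:
    w.start()

time.sleep(2)                            # ...the test runs...
abort_test(workers)                      # abort as soon as customers might be impacted
```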

3. You need rock-solid internal monitoring practices

Having good internal monitoring practices, preferably integrated with CloudTest, can keep you from ever needing to abort a test because of live user impact. Watching CPU utilization climb, or connection counts approach their maximum on a load balancer or firewall, in real time helps ensure those thresholds are never actually breached by routine testing.
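As a rough illustration (the metric sources and thresholds below are hypothetical, not a description of a CloudTest integration), a watcher like this can poll CPU and load-balancer connection counts during the test and trip the same kill switch before a limit is actually hit.

```python
# Minimal sketch of tying server-side monitoring to a running test (hypothetical
# metric sources and thresholds): poll production health while load is applied and
# stop the test before a limit is breached, rather than after customers notice.
import threading
import time

CPU_LIMIT_PCT = 85           # abort with headroom below the real danger zone
LB_CONN_LIMIT = 45_000       # e.g. 90% of a 50,000-connection load balancer cap

stop_event = threading.Event()   # shared with the load workers (see the sketch above)

def fetch_cpu_pct():
    """Hypothetical stand-in: read current CPU % from your monitoring system."""
    return 42.0

def fetch_lb_connections():
    """Hypothetical stand-in: read the load balancer's current connection count."""
    return 12_000

def watch_production(poll_seconds=5):
    while not stop_event.is_set():
        cpu, conns = fetch_cpu_pct(), fetch_lb_connections()
        if cpu > CPU_LIMIT_PCT or conns > LB_CONN_LIMIT:
            print(f"Threshold approached (cpu={cpu}%, conns={conns}); aborting test")
            stop_event.set()     # same kill switch the load workers check
            break
        time.sleep(poll_seconds)

# Run alongside the load generators:
# threading.Thread(target=watch_production, daemon=True).start()
```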

4. The solution must be scalable and be able to be deployed — and torn down — in minutes

SOASTA CloudTest can deploy the CloudTest Main Instance and the desired number of load generators, from any number of locations, in minutes. The infrastructure is on demand: you deploy it when you need it and tear it down when you are done. CloudTest's patented grid management capability, shown below, is another differentiator in the marketplace; it lets a customer spin up any number of AWS servers in minutes, complete with health checks, monitoring, and the results servers needed for testing.
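For readers who want a feel for what "infrastructure on demand" means in practice, here is a generic boto3 sketch that launches a batch of load-generator instances and terminates them afterward. This is plain EC2 provisioning for illustration, not CloudTest's patented grid management, and the AMI ID, instance type, and tags are placeholders.

```python
# Generic illustration of on-demand test infrastructure with boto3 (not CloudTest's
# grid management): launch N load-generator instances for a test, wait for them to
# come up, then terminate them when the test is done.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

def launch_load_generators(count, ami_id="ami-0123456789abcdef0"):
    instances = ec2.create_instances(
        ImageId=ami_id,                  # placeholder AMI with your load-gen software
        InstanceType="c5.xlarge",        # placeholder instance type
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "load-generator"}],
        }],
    )
    for inst in instances:
        inst.wait_until_running()        # simple readiness gate before sending load
    return instances

def tear_down(instances):
    ids = [inst.id for inst in instances]
    ec2.instances.filter(InstanceIds=ids).terminate()

# generators = launch_load_generators(count=25)
# ... run the test ...
# tear_down(generators)
```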


Cloud testing has grown by more than 4,000% since 2009

The validation of the solution, and of the cloud, can be found in the metrics. Check out this "hockey stick" of cloud hours used in AWS by SOASTA for this purpose. In 2009, we used around 110,000 hours. By 2014, that number had exploded to almost 5.3 million hours, roughly a 48-fold increase.


One final takeaway

Today's fast-moving online world, especially with the hockey-stick growth of mobile usage, demands that companies have the right performance testing solution in place to be successful. And to be successful, a test must be able to validate that the customer's experience will be one to remember, not a catastrophe that ends up on social media with screenshots of the application outage on every device. Those endings typically mean lost revenue, damaged brand perception, and customer retention problems.




About the Author

Dan Boutin

Dan is the Vice President of Digital Strategy at SOASTA. In this role, Dan is responsible for taking the world's first Digital Performance Management (DPM) solution to market, serving as a trusted advisor for SOASTA's strategic customers, and changing the way ecommerce organizations approach the marketplace.

Follow @DanBoutinGNV