An interesting discussion with a former LoadRunner Product Manager triggered this post. The real trigger was this part: CloudTest (like others, including LoadRunner) is still just a testing solution and as such serves as a filter or gate-keeping function in the lifecycle. [attrib: Jim Duggan, Gartner]. I’m gonna disagree on that one, as I’ve held a different opinion on the “gate” part for the past 15 years. In my opinion, the goal of performance testing (and of software testing in general) is not to serve as a gate. Testing is only one part of the equation when you need to decide whether or not to go live. There are other important factors to take into consideration, and depending on the context and timing they might matter more. The marketing and sales side of the software industry often prevails, for better or for worse.
The true goal of Performance Testing is to provide wisdom. Maybe the sole purpose of LoadRunner is to be a testing solution that serves as a gate, but at SOASTA we’re in the wisdom business.
The first objective of performance testing is to gather relevant data from EVERY component involved in the overall performance of your application. This is your raw, dumb data.
- Data from the application itself: memory and CPU consumption, number of processes, heap size, etc.
- Data from the application’s underlying infrastructure: application servers, web servers, databases, SSL servers, CMS, memcache servers, etc.
- Data from your application’s ecosystem: CDNs, load balancers, switches, routers, DNS, etc.
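To make the data-gathering stage concrete, here is a minimal Python sketch of polling heterogeneous probes into a single stream of raw samples. The source names, metric names, and values are all made up for illustration; a real setup would pull them from monitoring agents on each tier, not from lambdas.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Sample:
    source: str      # e.g. "app-server-1"
    metric: str      # e.g. "heap_used_mb"
    value: float
    ts: float = field(default_factory=time.time)

def collect(probes):
    """Poll every registered probe once and return the raw samples.

    `probes` maps (source, metric) -> a zero-argument callable
    returning the current reading for that metric.
    """
    return [Sample(src, met, fn()) for (src, met), fn in probes.items()]

# Hypothetical probes standing in for real agents on each tier:
probes = {
    ("app-server-1", "heap_used_mb"): lambda: 512.0,
    ("db-1", "process_count"):        lambda: 87,
    ("cdn-edge", "cache_hit_ratio"):  lambda: 0.93,
}
samples = collect(probes)
```

At this stage the samples carry no meaning on their own; they are exactly the raw, dumb data described above.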
You end up with LOTS of data. To give you an idea of the magnitude, one of the tests we performed with a major TV network generated over 7 terabytes of data in less than one hour, a transfer rate of 17 gigabits per second! Thanks to CloudTest we’re able to gather these terabytes of data in real time. But that data is USELESS if you don’t have a mechanism to transform it into information. To deal with this BIG DATA PROBLEM you need a real-time, in-memory OLAP engine. Gartner predicts that by 2014, 30 percent of analytic applications will use in-memory functions to add scale and computational speed. It looks like SOASTA is ahead of the curve!
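As a toy illustration of the kind of aggregation such an in-memory engine performs (this is a sketch of the idea, not CloudTest’s implementation), here is a rollup that collapses a stream of raw samples into per-time-bucket summaries, which is how terabytes of raw points shrink to a handful of numbers per metric per time slice:

```python
from collections import defaultdict

def rollup(samples, bucket_seconds=10):
    """Aggregate raw (ts, source, metric, value) tuples into
    per-bucket count/avg/max summaries, keyed by time bucket,
    source, and metric."""
    cube = defaultdict(lambda: {"n": 0, "sum": 0.0, "max": float("-inf")})
    for ts, source, metric, value in samples:
        key = (int(ts // bucket_seconds), source, metric)
        cell = cube[key]
        cell["n"] += 1
        cell["sum"] += value
        cell["max"] = max(cell["max"], value)
    return {k: {"avg": c["sum"] / c["n"], "max": c["max"], "n": c["n"]}
            for k, c in cube.items()}

# Toy stream: three CPU readings from one server in one 10-second bucket.
stream = [(100.0, "web-1", "cpu_pct", 40.0),
          (103.0, "web-1", "cpu_pct", 60.0),
          (107.0, "web-1", "cpu_pct", 80.0)]
print(rollup(stream)[(10, "web-1", "cpu_pct")])  # avg 60.0, max 80.0, n 3
```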
Information is created by analyzing relationships and connections between data. Information should help you make some sense from your data and become relevant to your business:
- What’s the relationship between the number of virtual users and the memory consumed by my application server?
- How much server capacity do I have left when I reach 10,000 concurrent users during my test?
- What is the correlation between the number of process counts on the database server and the overall throughput?
- At what stage during the test do I see a drop in overall response time? Can I correlate this drop with other data to understand the behavior?
- What is the correlation between an increase in response time and the number of errors coming from my SSL Server?
- Why does this particular file account for 90% of my homepage’s overall response time? Where does it come from? Why does it take longer than the other page assets?
By combining, correlating and aggregating your data you’re able to build enough information to understand the behavior of your application and its entire ecosystem.
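For instance, the first question above, the relationship between virtual users and application-server memory, boils down to a simple correlation over two time series. A minimal sketch, with made-up numbers sampled once a minute during a load ramp-up:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative series, not real measurements:
virtual_users = [100, 200, 300, 400, 500]
heap_used_mb  = [410, 530, 690, 800, 950]
r = pearson(virtual_users, heap_used_mb)
print(round(r, 3))  # close to 1.0: memory scales with load
```

A coefficient near 1.0 turns two dumb columns of numbers into a piece of information: this application server’s memory grows roughly linearly with load.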
Knowledge is created by receiving, absorbing and understanding the information. From this knowledge we can make decisions and take actions. If you observe a surge of traffic going to one particular web server, impacting the overall response time for some of your visitors, you can pinpoint the problem to a misconfiguration in one of your load balancers. You’ve got sufficient knowledge to take action and make a change on the fly. That’s what we call actionable intelligence at SOASTA. And with modern testing tools, such as CloudTest, it can happen in real time. That’s the agile way of doing performance testing!
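A toy version of that surge detection: given per-server request counts (the numbers and server names are hypothetical), flag any server receiving far more than its fair share of traffic, which is a common symptom of a misconfigured load balancer:

```python
def find_hot_servers(requests_per_server, tolerance=1.5):
    """Flag servers whose request count exceeds `tolerance` times the
    fair share, assuming the load balancer should spread traffic evenly."""
    fair_share = sum(requests_per_server.values()) / len(requests_per_server)
    return [name for name, count in requests_per_server.items()
            if count > tolerance * fair_share]

# Hypothetical per-minute request counts from four web servers:
counts = {"web-1": 980, "web-2": 1010, "web-3": 995, "web-4": 4020}
print(find_hot_servers(counts))  # ['web-4']
```

The check itself is trivial; the point is that once the information exists in real time, the decision (fix the load balancer now, mid-test) can happen in real time too.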
Wisdom deals with the future and predictability. This is when you apply knowledge to change your process and reach your true objectives. This is the level where you make predictions, test them, monitor the results and adapt. As an example, what if you’re expecting 200,000 concurrent users on your website after a big product announcement? You think your visitors will behave in a particular way (you’ve got historical data to back this up): 50% browsing the site, 15% logging in, 15% putting items in and out of the shopping cart and 20% watching a video. How much bandwidth do you need? What is the right configuration for your load balancer? Which assets should be on the CDN? What should the configuration of the memcache server be? What if one of the web servers crashes? What would be the impact on your visitors? What would be the impact on your business?
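The bandwidth question, at least, reduces to simple arithmetic once you attach a cost to each activity. The per-user bandwidth figures below are pure assumptions for illustration; in practice you would substitute numbers measured during your own tests:

```python
# Assumed peak bandwidth per concurrent user, by activity
# (illustrative values only, NOT measurements):
KBPS_PER_USER = {"browse": 50, "login": 10, "cart": 30, "video": 800}

# The traffic mix from the scenario above:
MIX = {"browse": 0.50, "login": 0.15, "cart": 0.15, "video": 0.20}

def required_bandwidth_gbps(concurrent_users):
    """Blend per-activity costs by the traffic mix and scale to the
    expected concurrency, returning gigabits per second."""
    kbps = sum(concurrent_users * share * KBPS_PER_USER[activity]
               for activity, share in MIX.items())
    return kbps / 1_000_000  # kilobits -> gigabits

print(round(required_bandwidth_gbps(200_000), 1))  # 38.2
```

Under these assumed numbers the blended cost is 191 kbps per user, so 200,000 concurrent users need roughly 38 Gbps; the same pattern answers the CDN and memcache sizing questions once you measure the right per-user costs.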
Today, a lot of companies stop at the information stage and analyze their metrics retroactively to understand what has happened. There are some behavioral reasons for this, but most of them are simply not equipped to reach the further levels. You need a mechanism to:
- Gather raw data in an efficient manner.
- Build information by combining, correlating and aggregating that data so you can start to understand behavior.
- Bring you knowledge, in real-time, so you can make decisions and take actions, FAST.
- Perform predictive analysis and build prediction models so you can help your business and your customers in the best possible way. Gartner predicts that by 2014, 30 percent of analytic applications will use proactive, predictive and forecasting capabilities. SOASTA is already there!
At SOASTA we’re not in the gate business. We’re in the wisdom business.