How much is it really worth to spend your time, or your team's time, testing and tuning the various parts of your application architecture? Leaders of numerous engineering teams have told me they've been caught in long, expensive cycles of testing and tuning and gotten very little for the effort they put in. Anyone who has worked in the web space for any amount of time is familiar with typical dev and QA cycles.
If you're a more modern development operation, performance testing is part of the software development lifecycle too. Ideally, performance testing is done on an ongoing basis, just like functional testing in an agile world. What usually happens, however, is that performance testing is crammed into the last few weeks or days before a launch to production. Because the effort is time-boxed, the testing can be of questionable value to a successful release if you don't have a tight plan. Even if you are testing continuously with agile processes, troubleshooting can still fall prey to a seemingly infinite loop of performance testing that gets nowhere on the actual problems.
For the purposes of this post, I define effort as a combination of time and money. Gain is the improvement to the application's customer-facing performance, capacity, or stability. When looking at performance testing efforts, I like to use this model for categorization:
If something takes a little effort but gains you a lot, that's pretty good, right? In contrast, if it takes a long time and gains you little, it should be at the bottom of the list of things to test and improve in the application architecture. The red areas are where I see a lot of organizations getting trapped. Medium-effort/medium-gain activities aren't necessarily bad, but they eat up a lot of time and money if they aren't prioritized right. The same goes for high effort, medium gain. Redesigning a page to have fewer hits in it might fall into either of these categories.
I put high effort, high gain in the green zone. An example: implementing database clustering and redundancy. It takes a lot of time and money, but delivers a huge benefit. Small-effort, small-gain work, like trying to shave 20ms off a method response time that's already 150ms, is often not worth it compared to other activities (but I see a lot of teams doing it).
The best things are configuration changes that might yield huge results. Changing a load-balancing algorithm, increasing the Java Virtual Machine heap size, or increasing a cache size are all low-effort, high-gain activities.
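As a toy illustration of why a one-parameter cache-size change can be so high-gain, here is a minimal sketch (the workload and the simple LRU model are my own assumptions, not from any particular application): when a cache is slightly smaller than the working set, a cyclic access pattern evicts every entry before it is reused, so the hit rate collapses; sizing the cache to the working set flips almost every request to a hit.

```python
# Hypothetical sketch: how a single tuning knob (cache size) can swing
# the hit rate from 0% to ~95% for a repetitive workload.
def hit_rate(cache_size, requests):
    cache, hits = [], 0  # list ordered oldest -> newest, acting as an LRU
    for key in requests:
        if key in cache:
            hits += 1
            cache.remove(key)  # move the hit key to most-recently-used
        cache.append(key)
        if len(cache) > cache_size:
            cache.pop(0)  # evict the least-recently-used entry
    return hits / len(requests)

# An assumed workload that cycles repeatedly over 50 distinct keys.
workload = [i % 50 for i in range(1000)]
print(hit_rate(10, workload))  # cache smaller than the cycle: every access misses
print(hit_rate(50, workload))  # cache fits the working set: only the first pass misses
```

The point isn't the exact numbers; it's that the "effort" here is changing one value, while the "gain" is the difference between hammering the backing store on every request and almost never touching it.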
Try using effort vs. gain to prioritize what's going on with respect to testing and tuning; hopefully that framing helps improve the focus, and ultimately the output, of a testing exercise.