
We are in a ‘Caching’ Renaissance Right Now

We all know how important caching is. The concept of caching in web applications has been around for a long time, but I believe we are in a period of caching renaissance within n-tier application stacks. It started back in the late 1990s and early 2000s, with efforts to take load off the database on high-traffic e-commerce sites. Repositories and caching setups at the application server tier were built to keep requests from ever reaching the database (rightfully so). Even before that, the databases themselves had their own caches to speed up queries so they weren’t going to disk every time. Keeping load off the database makes a huge difference in performance and capacity, and it remains a mainstay principle of performance engineering to this day.

In the not-so-distant past, memcached came along and gave us a way to pull entire arbitrary objects out of a cache that sat outside both the application tier and the database tier. Naturally, it was only a matter of time until that concept was applied one level higher, on top of the web tier. Now you have things like Varnish sitting in front of the web servers, caching entire pages and pieces of content to keep as much load as possible off the 3-tier architecture. What’s next, someone takes it one step higher and puts a cache layer on top of your content cache layer? Well, guess what? Akamai sort of does that with their Dynamic Site Accelerator. They cache dynamic pages and content and keep the load off your entire infrastructure where possible.
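To make the pattern concrete, here is a minimal sketch of the cache-aside idiom that memcached popularized. It assumes a Python client such as pymemcache, a memcached instance on localhost, and a hypothetical load_product_from_db() helper standing in for the real database query:

```python
import json
from pymemcache.client.base import Client

# Connect to a local memcached instance (host and port are assumptions).
cache = Client(("localhost", 11211))

def load_product_from_db(product_id):
    # Hypothetical stand-in for the real database query.
    return {"id": product_id, "name": "widget", "in_stock": 42}

def get_product(product_id):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        # Cache hit: skip the database entirely.
        return json.loads(cached)
    # Cache miss: fetch from the database, then populate the cache
    # with a TTL so stale product data eventually expires.
    product = load_product_from_db(product_id)
    cache.set(key, json.dumps(product).encode("utf-8"), expire=300)
    return product
```

Every tier of caching described above, from the database’s buffer pool up through Varnish and a CDN, is some variation of this same idea: answer the request from the nearest copy and only fall through to the slower tier on a miss.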

Caching is really important, and it needs to be applied properly and judiciously throughout the architecture. Here is a real-world example of a test I ran that illustrates the importance of caching:

[Screenshot: virtual users versus average response time in the CloudTest dashboard]

The teal area chart is virtual users ramping up over time. The orange area in front is response time. For the first two minutes of the test, response times are crap, ranging from 2.5 to 10.5+ seconds. To make matters worse, this is a WEB SERVICES CALL, so it’s just one component of a much larger transaction. In essence, the end user would see this time plus all of the remaining time to finish whatever they were doing. In this case, it’s an order placement: the worst place to have this kind of problem. But something happens at the two-minute mark and response times immediately get better, into the 100ms range and up to about 1 second under max load. Here is another view:

[Screenshot: send rate versus average response time in the CloudTest dashboard]

I’ve kept average response time, but I swapped virtual users for throughput. Yeah, when response time was crappy, so was throughput. But then response time goes down and throughput immediately shoots up. Any guesses on what happened there? Yep, you guessed it! The cache fully populated. Part of this order placement call does inventory lookups, and those lookups fetch associated metadata about the products being bought; that metadata should come from cache.

Having a cache in place doesn’t always mean it’s working as intended, either. Thankfully, it was in this case, and the customer was happy. The only way to ensure that it is, is to test. I have seen a lot of applications behave in ways their teams didn’t expect. For example, if even one flag on a product object changes with each instantiation or request, the cache may treat it as a new, fresh object and hit the database anyway. In the case of pages, if you’re using Varnish or a similar page accelerator and you’re passing dynamic values on the query string or in the POST data, the accelerator might decline to cache the content at all. You must test whether your caches are working properly.
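As a quick sanity check, here is a sketch of that kind of test, assuming Python’s requests library and a page served through Varnish with its default headers; the URL is a placeholder. Two back-to-back requests should show a faster second response and a non-zero Age header if the cache is doing its job:

```python
import time
import requests

URL = "https://example.com/product/123"  # placeholder page behind Varnish

def timed_get(url):
    start = time.perf_counter()
    resp = requests.get(url)
    return resp, time.perf_counter() - start

# The first request may be a cache miss; the second should be a hit.
first, t1 = timed_get(URL)
second, t2 = timed_get(URL)

# With a default Varnish config, a hit shows a non-zero Age header;
# the exact headers exposed vary by setup.
print(f"first:  {t1 * 1000:.0f} ms, Age={first.headers.get('Age')}")
print(f"second: {t2 * 1000:.0f} ms, Age={second.headers.get('Age')}")

if second.headers.get("Age", "0") == "0":
    print("Warning: second response was not served from cache")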
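If the Age header stays at zero while the second request is just as slow as the first, something in the request, often a varying query-string parameter or cookie, is busting the cache on every hit.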

First, though, you need to have one 🙂
