It’s true. Whether you realize it or not, the typical n-tier web architecture is an out-of-the-box bottleneck. Pretty much every popular application technology stack is a performance problem waiting to happen. When you picture your web application infrastructure, think of it as a funnel: by default, capacity is wide at the top and bottlenecks in tightly at the bottom. Consider a typical 3-tier architecture with a load balancer.
Let’s use an F5 Networks/Apache/JBoss/Oracle stack for reference since this closely mimics most enterprise level e-commerce applications. If you haven’t tuned and optimized your environment at all levels, your capacity model looks something like this:
- F5 Load Balancer (hardware)
  - Built to handle tens to hundreds of thousands of concurrent connections
  - Bound by CPU, memory, and bandwidth
- Apache Web Servers
  - ~128-256 threads by default
  - Bound by CPU, memory, and bandwidth
- JBoss Application Servers
  - ~25-100 threads by default
  - Bound by CPU and memory
- Oracle Database Servers
  - ~10-40 connections typically
  - Bound by CPU and memory
You should see a pattern here: the further down the stack you go, the less throughput you get. Apache is configured by default with no more than 128 or 256 threads. This almost always has to be tuned to 512 or 1024, if not higher, depending on the nature of the traffic. JBoss ships with no more than 100 threads out of the box. IIS for ASP.NET applications is in the 15-25 thread range. Your database connection pool is always a fraction of your app servers' total thread count, usually no more than 25%.
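As a concrete sketch of what "tuning Apache" means in practice, here is what a raised thread ceiling might look like for Apache 2.4 with the event MPM. The exact directives depend on your Apache version and MPM, and the numbers below are purely illustrative, not recommendations; derive your own from load testing.

```apache
# httpd.conf — event MPM sizing (Apache 2.4). Illustrative values only.
<IfModule mpm_event_module>
    ServerLimit          16
    ThreadsPerChild      64
    MaxRequestWorkers  1024   # must be <= ServerLimit x ThreadsPerChild
</IfModule>
```

Note that `MaxRequestWorkers` (called `MaxClients` before 2.4) is capped by `ServerLimit` multiplied by `ThreadsPerChild`, so raising one without the others silently does nothing.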
So, if your load balancer can support 10,000 simultaneous requests at the top, but your two web servers can only process 256 simultaneous requests combined, your four app servers only 160 combined, and your database only 40… this model will not serve your expected traffic pattern well and needs attention.
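The funnel arithmetic above boils down to one rule: the effective capacity of the whole stack is the minimum across tiers. A minimal sketch, using the same hypothetical numbers from the example:

```python
# Effective capacity of a tiered stack is limited by its narrowest tier.
# The numbers mirror the example above; they are illustrative, not measured.

tiers = {
    "load_balancer": 10_000,  # simultaneous requests the F5 can accept
    "web": 2 * 128,           # 2 Apache servers x 128 default threads
    "app": 4 * 40,            # 4 JBoss servers x 40 threads
    "db": 40,                 # database connection pool
}

bottleneck = min(tiers, key=tiers.get)   # tier with the smallest capacity
effective_capacity = tiers[bottleneck]   # what the stack can really serve

print(f"bottleneck: {bottleneck}, capacity: {effective_capacity}")
# No matter how wide the top of the funnel is, 40 concurrent requests
# is all this stack can push through end to end.
```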
All of the major technologies at play in modern web applications need to be tested and tuned for optimal throughput and behavior. This means Apache, JBoss, Tomcat, IIS, Oracle, MySQL, Postgres… you name it, they all ship with generic out-of-the-box configurations that need to be tuned to suit your app. Incidentally, optimizing doesn’t always mean increasing thread counts. If each thread running on your JBoss servers is eating up mad CPU cycles, you might need to tune the count down and instead scale the tier out horizontally.
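On the app tier the knob usually lives in the container’s connector configuration. For example, in Tomcat the HTTP connector’s `maxThreads` attribute (which defaults to 200) sits in `conf/server.xml`. The value below is an illustrative assumption, not a recommendation:

```xml
<!-- conf/server.xml — Tomcat HTTP connector. maxThreads defaults to 200;
     the value here is illustrative and should come from load testing. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="400"
           acceptCount="100"
           connectionTimeout="20000"
           redirectPort="8443" />
```

Remember that raising `maxThreads` only helps if CPU, memory, and the downstream database pool can absorb the extra concurrency; otherwise you just move the queue somewhere else.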
The ONLY way to ascertain the right numbers for these configurations is testing. I was on a test just last week where folks were looking at the thread pool settings on their app servers for the first time, after years of running a popular online application. Opening up the count in their case meant a massive throughput increase. Don’t overlook these key performance pinch points. Again, these are the ‘default’ settings… if you’ve never looked at them under load for your app, you really, really need to.