The Performance Beacon

The web performance, analytics, and optimization blog

9 lessons learned from the most popular Performance Beacon posts of 2015

It wouldn’t be the end of the year without a roundup post, am I right? According to Google Analytics, these were the most-read posts of 2015 here on the Beacon. (I chose nine — one for every month this blog has been alive.) Mobile performance, page bloat, single-page apps, continuous testing, and more — enjoy!

1. Case study: Mobile pages that are 1 second faster experience up to a 27% increase in conversion rate

We conducted this research to prove (or disprove) a couple of widely held beliefs about how people engage with sites on their phones. First, many people believe that mobile shoppers are tolerant of slow load times. And second, many of those people also believe that slow mobile load times don’t have a significant impact on conversions (and ultimately revenue).

Lesson learned: Speed matters, even on mobile — and perhaps especially on mobile. Even if your pages are already relatively fast, optimizing them further can pay off.

2. Page bloat update: The average web page is more than 2 MB in size

When we reported that the average web page had surpassed the 2 MB mark, according to the HTTP Archive (which tracks the page size and composition of almost half a million of the world's most popular websites), it galvanized a lot of people. Page size isn't the only factor that can slow down your pages, of course. There are many other potential problem areas: latency, page complexity, server load, slow database queries, and so on. But page size should be on every site owner's radar.

Lesson learned: The difference between a 1 MB page and a 2 MB page can mean several precious seconds added to a page's load time. We know that making pages just one second faster can correlate with conversion increases of 9% or more. Putting your site on a diet is the first step toward capturing those conversion gains.
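As a back-of-the-envelope sketch (the 5 Mbps connection speed below is a hypothetical assumption, not a figure from the post), the transfer-time cost of that extra megabyte looks like this:

```typescript
// Rough transfer-time estimate for a page of a given size.
// This deliberately ignores latency, parsing, and rendering,
// so real load times will be higher; it isolates the size term only.
function transferSeconds(pageBytes: number, bandwidthBitsPerSec: number): number {
  return (pageBytes * 8) / bandwidthBitsPerSec;
}

// Hypothetical 5 Mbps connection:
const bandwidth = 5_000_000;
const oneMb = transferSeconds(1_000_000, bandwidth); // ~1.6 s
const twoMb = transferSeconds(2_000_000, bandwidth); // ~3.2 s
console.log(`Extra transfer time for the second megabyte: ${(twoMb - oneMb).toFixed(1)} s`);
```

Even this simplified arithmetic shows how a second megabyte can add more than a second on a mid-range connection, before latency and rendering costs pile on.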

3. 23 stats you should know about mobile web performance

Mobile is always a hot topic. This collection of mobile stats includes some of my favorites — including some great stats around how mobile users shop more, spend more, and are generally more engaged than desktop users.

Lesson learned: Now that mobile usage has surpassed desktop usage, everyone needs to care about the kind of experiences they're serving to mobile users. While mobile users are, on average, more engaged than desktop users, they're also more dissatisfied with their online experiences. The gap between engagement and dissatisfaction represents a huge opportunity for business owners who focus on optimizing the mobile user experience.

4. How to provide real-user monitoring for single-page applications

In their own way, single-page apps are as hot a topic as mobile. But apps built with AngularJS present some significant challenges when it comes to real-user monitoring. This post gives a detailed breakdown of how we overcame those challenges at SOASTA.

Lesson learned: People used to say that monitoring SPAs was next to impossible. Nothing is impossible.
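The post walks through SOASTA's actual approach; as a rough generic sketch (the `RouteTimer` name and its API are purely illustrative, not any vendor's implementation), timing a "soft" navigation in a SPA comes down to starting a clock when the route changes and stopping it when the new view has rendered:

```typescript
// Generic sketch of timing a soft navigation in a single-page app.
// The clock is injectable so the timer can be exercised without a
// browser; in production you would use performance.now() instead.
class RouteTimer {
  private startTime = 0;
  constructor(private now: () => number = () => Date.now()) {}

  // Call when the router begins a navigation (e.g. a route-change event).
  start(): void {
    this.startTime = this.now();
  }

  // Call when the new view has finished rendering; returns elapsed ms.
  stop(): number {
    return this.now() - this.startTime;
  }
}
```

In an AngularJS app, for example, start() might hang off a route-change listener and stop() might fire once the view's data has loaded; the elapsed time is then beaconed to the RUM back end.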

Download: How to measure the web performance of single-page applications

5. When it comes to delivering the best possible user experience, how fast is fast enough?

“How fast is fast enough?” is probably the most commonly asked question in the performance space. If there were a single answer, that would be awesome — but like most things in life, there is no simple answer. There’s no one-size-fits-all number that applies to every user and every website. But you can determine what’s “fast enough” for your site. This post contains some guidelines for getting started.

Lesson learned: Sometimes it’s okay not to deliver the very fastest user experience, when “fast enough” will do.

6. Performance Monitoring 101: A beginner’s guide to understanding synthetic and real-user monitoring

Website monitoring solutions fall into two categories: synthetic monitoring and real-user monitoring (RUM). Each offers invaluable insight into how your site performs, but neither is a magic bullet. This post covers how synthetic and real-user monitoring work, the pros and cons of each, and how they complement each other.

Lesson learned: Every user experience is different, and every site delivers both great and terrible experiences every day. Your RUM numbers illustrate the broad spectrum of user experiences. Your synthetic numbers give you snapshots of user experiences within very specific parameters. Understand what each type of monitoring tool can help you with, and use each to its best advantage.
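To make the contrast concrete: RUM produces a distribution of timings that you summarize with percentiles, while a synthetic run produces a single number from a controlled setup. Here's a minimal percentile helper (nearest-rank method; the sample values are invented for illustration, not real measurements):

```typescript
// Nearest-rank percentile over a set of RUM load-time samples (ms).
// A RUM dashboard reports summaries like the median and 95th percentile;
// a synthetic test yields one measurement under fixed conditions.
function percentile(timingsMs: number[], p: number): number {
  if (timingsMs.length === 0) throw new Error("no samples");
  const sorted = [...timingsMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(sorted.length, Math.max(1, rank)) - 1];
}

const samples = [900, 1200, 1500, 2100, 4800]; // ms, illustrative only
console.log(`median: ${percentile(samples, 50)} ms, p95: ${percentile(samples, 95)} ms`);
```

Note how the 95th percentile can sit far above the median: that long tail is exactly what a single synthetic number can miss.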

7. Your mobile application testing checklist: Top 5 mobile test conditions

Every day, more businesses and organizations roll out new mobile applications. To succeed in the marketplace, these apps must be tested to ensure they offer the best possible experience for end users. This post describes some of the top considerations for mobile app testing.

Lesson learned: Mobility is a higher priority in the enterprise than ever before. There are several items teams should consider as they run mobile app testing processes, specific to different areas of the app: the network, installation, performance, interrupts, and device integration.

8. Here’s why more bandwidth isn’t a magic bullet for web performance

Ever-evolving networks mitigate the impact of page bloat, right? Not so fast. This post demonstrates that increasing bandwidth by as much as 1233% makes pages only 55% faster. Faster networks, in other words, aren't the performance cure-all that some folks assume they are.

Lesson learned: While greater bandwidth has a definite positive impact on performance, it’s not the cure-all that some folks suppose it is. A well-rounded performance solution should address latency, optimize pages at the front end, and of course ensure availability.
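A toy model helps show why: total load time is roughly a latency term (round-trip time multiplied by the number of round trips) plus a transfer term (page bytes divided by bandwidth), and raising bandwidth only shrinks the second term. All numbers below are illustrative assumptions, not figures from the study:

```typescript
// Toy load-time model: latency term + transfer term, in milliseconds.
function loadTimeMs(
  rttMs: number,
  roundTrips: number,
  pageBytes: number,
  bandwidthBps: number
): number {
  const latencyMs = rttMs * roundTrips;
  const transferMs = (pageBytes * 8 * 1000) / bandwidthBps;
  return latencyMs + transferMs;
}

// Hypothetical 2 MB page, 100 ms RTT, 20 round trips:
const base = loadTimeMs(100, 20, 2_000_000, 5_000_000);  // 2000 + 3200 = 5200 ms
const tenX = loadTimeMs(100, 20, 2_000_000, 50_000_000); // 2000 + 320  = 2320 ms
// Ten times the bandwidth, but only ~2.2x faster: the latency term doesn't budge.
console.log(`5 Mbps: ${base} ms, 50 Mbps: ${tenX} ms`);
```

That stubborn latency term is why reducing round trips and shrinking pages often pays off more than a faster pipe.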

9. Why performance testing in production is not only a best practice — it’s a necessity

You can’t test for every contingency in your QA lab. There are many performance bottlenecks that happen in the wild: issues with third-party services, content delivery networks, shared environments, firewalls, bandwidth constraints, misconfigured load balancer settings, latency… and the list goes on and on. This post covers a set of best practices for performance testing in production.

Lesson learned: You need a solid performance testing process that includes testing in production. This process must include your CDN and third-party providers.

Do you have a favorite post from the Performance Beacon — or any other performance blog? If so, post the link in the comments. I’d love to check it out.


About the Author

Tammy Everts

Tammy has spent the past two decades obsessed with the many factors that go into creating the best possible user experience. As senior researcher and evangelist at SOASTA, she explores the intersection between web performance, UX, and business metrics. Tammy is a frequent speaker at events including IRCE, Summit, Velocity, and Smashing Conference. She is the author of 'Time Is Money: The Business Value of Web Performance' (O'Reilly, 2016).

Follow @tameverts