The Performance Beacon

The web performance, analytics, and optimization blog

Performance Monitoring 101: A beginner’s guide to understanding synthetic and real user monitoring

I love to bake. Baking is a precise science, so I learned early on that I needed different measurement tools if I wanted to avoid demoralizing kitchen fails. See-through Pyrex cups for measuring liquids or melting butter in the microwave. Measuring scoops in a variety of sizes for dry ingredients. Scales if I’m following a European recipe.

Just like there’s no one-size-fits-all measurement tool in my kitchen, there’s no miraculous all-purpose tool for measuring the performance of your website. But that’s okay. Here’s why.

Demystifying website monitoring

Chances are, your site already uses some kind of performance monitoring tool. And if it doesn’t, someone in your organization is probably planning to implement one any day now. But even though performance testing solutions are becoming commonplace, many people still have only a vague idea of what they actually do.

Website monitoring solutions fall into two types: synthetic and real user monitoring (RUM). Each of these types offers invaluable insight into how your site performs, but neither is a magic bullet. Today, let’s talk about how synthetic and real user monitoring work, the pros and cons of each, and how they complement each other.

But first…

Why care about website performance?

If you’ve been in the performance community for any length of time, this next bit might be old hat for you, but I find that it always bears repeating:

Website performance — the speed at which your pages render for your visitors — has a direct impact on every business metric you care about, from conversions to bounce rate.

These business metrics are why it’s never been more critical to truly understand how real users experience your site. This is where website performance monitoring tools come in. You need them, but to get the most value from them, you need to understand what each one can and can’t do.

Real user monitoring (RUM)

Real user monitoring (RUM) is a form of passive monitoring that “listens” to all your traffic as users move through your site. Because RUM never sleeps, it gathers data from every user using every browser across every network, anywhere in the world. We’re talking about petabytes of data collected over billions of page views.

If this sounds kind of awe-inspiring, that’s because it is. If you’re a site owner, RUM is an incredibly powerful tool to have in your toolkit. The word “passive” is a bit of a misnomer, because modern RUM is anything but passive. Today, the best RUM tools have powerful analytics engines that allow you to slice and dice your data in endless ways.

How it works

RUM technology collects your website’s performance metrics directly from your end users’ browsers. You embed a JavaScript beacon in your web pages. This beacon gathers data from each person who visits a page and sends that data back to you to analyze in any number of ways.
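To make that concrete, here’s a minimal sketch of what a beacon can look like. This is an illustration only, not mPulse’s actual implementation; it uses the browser’s standard Navigation Timing API and navigator.sendBeacon, and the collector URL is made up.

```typescript
// Minimal RUM beacon sketch. BEACON_URL is a made-up collection endpoint.
const BEACON_URL = "https://collector.example.com/beacon";

window.addEventListener("load", () => {
  // Wait one tick so the browser has finalized loadEventEnd.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];
    if (!nav) return;

    const payload = {
      page: location.pathname,
      // A few of the metrics a RUM tool typically collects per visitor:
      loadTimeMs: Math.round(nav.loadEventEnd),
      domInteractiveMs: Math.round(nav.domInteractive),
      ttfbMs: Math.round(nav.responseStart - nav.requestStart),
      userAgent: navigator.userAgent,
      referrer: document.referrer,
    };

    // sendBeacon queues the data without blocking or delaying the page.
    navigator.sendBeacon(BEACON_URL, JSON.stringify(payload));
  }, 0);
});
```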

What real user monitoring can tell you

In addition to the usual page metrics, such as load time, real user monitoring can teach you a great deal about how people use your site, uncovering insights that would otherwise be impossible to obtain. Here are just a few questions your RUM data can answer.

What are your users’ environments?

Think about this: there are more than 18,000 different Android devices. Now think about the vast array of other devices, networks, and operating systems out there. This dizzying array of environmental permutations is being used by the wide swathe of humanity that visits your site, each of whom expects you to deliver a user experience that is optimized to meet their unique needs.


How do users move through your site?

Visitors rarely travel through your site in a straight line. Real user monitoring lets you measure performance for every permutation of the navigational paths an end user might take through your website.

For example, understanding which pages visitors are most likely to enter your site on lets you focus on optimizing performance for those pages. Or if you run an ecommerce site, you may learn that the shopping cart page is a common entry point, which tells you that visitors are temporarily abandoning their carts and returning to them later. Knowing this, you can make sure you’re optimizing that user path, including building load tests to ensure it can handle traffic.

Real user measurement: Understanding user paths
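As a rough illustration of the kind of analysis this enables, here’s a sketch that finds the most common entry pages in a batch of RUM beacons. The SessionBeacon shape and its field names are hypothetical, not any particular tool’s schema.

```typescript
// Sketch: find the most common entry pages from RUM data. The SessionBeacon
// shape is hypothetical; real tools have much richer session models.
interface SessionBeacon {
  sessionId: string;
  page: string;
  timestamp: number; // epoch ms
}

function topEntryPages(beacons: SessionBeacon[], limit = 10): [string, number][] {
  // The earliest beacon in each session identifies that session's entry page.
  const firstBeacon = new Map<string, SessionBeacon>();
  for (const b of beacons) {
    const current = firstBeacon.get(b.sessionId);
    if (!current || b.timestamp < current.timestamp) {
      firstBeacon.set(b.sessionId, b);
    }
  }

  // Count how many sessions entered on each page.
  const counts = new Map<string, number>();
  for (const entry of firstBeacon.values()) {
    counts.set(entry.page, (counts.get(entry.page) ?? 0) + 1);
  }

  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit); // e.g. [["/cart", 1843], ["/", 1502], ...]
}
```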

Real user monitoring can also give you insight into where people are coming from before they arrive at your site. User behavior can vary from referrer to referrer. (I’m working on another post about this, so more on this another time.)

How are your third-party scripts performing in real time?

Third-party scripts offer a lot of value to site owners — from revenue-generating ads to tracking beacons that gather valuable user data. But they also mean you potentially lose some control over the performance of your pages. Third-party outages happen, and they can kill your pages. Less dramatically, but much more frequently, a slow third-party script can delay your page from rendering.

As this beacon waterfall illustrates, RUM tools give you visibility into how all the third parties on your site are affecting your pages:

Real user measurement: third party performance
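If you want to explore this yourself, the browser’s standard Resource Timing API exposes per-resource timings that RUM tools can collect. Here’s a rough sketch (my own illustration, with an arbitrary 500 ms threshold) that lists slow requests served from hosts other than your own:

```typescript
// Sketch: surface slow third-party requests using the Resource Timing API.
// The 500 ms threshold is arbitrary; tune it for your own pages.
const firstPartyHost = location.hostname;

const slowThirdParties = (
  performance.getEntriesByType("resource") as PerformanceResourceTiming[]
)
  .filter((entry) => new URL(entry.name).hostname !== firstPartyHost)
  .map((entry) => ({
    host: new URL(entry.name).hostname,
    url: entry.name,
    durationMs: Math.round(entry.duration),
  }))
  .filter((resource) => resource.durationMs > 500)
  .sort((a, b) => b.durationMs - a.durationMs);

// Logs a table of the slowest third-party fetches on the current page.
console.table(slowThirdParties);
```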

Are you finding all the needles in all your haystacks?

Between your users’ environments, their paths through your pages, and the construction of the pages themselves, that’s a lot of combined complexity. Finding website performance issues can feel like looking for an unknowable number of needles in a nigh-infinite number of haystacks. Sometimes you don’t know what you need to know until it jumps out at you from a massive heap of data. This is one of the surpassing benefits of real user monitoring. It lets you slice and dice your numbers until the patterns emerge from the noise.

Problems aren’t predictable. You need a way to do regression analysis across all your data to locate the cross-sections of poor performance (as opposed to doing predefined drilldowns, which is how people tend to look at their data). This visualization of a cluster analysis is a great example of this approach:

Real user measurement: cluster analysis
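Conceptually, that kind of analysis comes down to grouping beacons across several dimensions at once and ranking the resulting segments. Here’s a simplified sketch, using a hypothetical Beacon shape, that surfaces the slowest page, country, and device-type combinations by median load time:

```typescript
// Sketch: rank cross-sections of RUM data by median load time.
// The Beacon shape and dimensions are hypothetical.
interface Beacon {
  page: string;
  country: string;
  deviceType: "desktop" | "mobile" | "tablet";
  loadTimeMs: number;
}

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function slowestSegments(beacons: Beacon[], minSample = 100) {
  // Group load times by page + country + device type.
  const groups = new Map<string, number[]>();
  for (const b of beacons) {
    const key = `${b.page} | ${b.country} | ${b.deviceType}`;
    const times = groups.get(key);
    if (times) {
      times.push(b.loadTimeMs);
    } else {
      groups.set(key, [b.loadTimeMs]);
    }
  }

  // Ignore segments with too few beacons, then sort slowest first.
  return [...groups.entries()]
    .filter(([, times]) => times.length >= minSample)
    .map(([segment, times]) => ({
      segment,
      beacons: times.length,
      medianLoadMs: median(times),
    }))
    .sort((a, b) => b.medianLoadMs - a.medianLoadMs);
}
```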

What impact does website performance have on your actual business?

This is the biggest question of all. Yes, optimizing website performance serves users. But ultimately, serving your users better should serve your business better. You need to connect the dots between website performance and business performance. Real user monitoring lets you do that.

For example, this chart shows the relationship between load time and conversions on key pages, assigning a Conversion Impact Score to each page. Knowing that a product page has a relatively high Conversion Impact Score, the site owner can focus on better optimizing and testing that page to ensure that it delivers an even better user experience:

Real user measurement: Business metrics
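The underlying idea is simple enough to sketch: bucket real-user sessions by load time and compute the conversion rate for each bucket. The Session shape below is hypothetical, and a real RUM tool does far more than this simplified example:

```typescript
// Sketch: conversion rate by load-time bucket. The Session shape is hypothetical.
interface Session {
  loadTimeMs: number;
  converted: boolean;
}

function conversionRateByLoadTime(sessions: Session[]): Map<number, number> {
  const totals = new Map<number, { sessions: number; conversions: number }>();
  for (const s of sessions) {
    const bucket = Math.floor(s.loadTimeMs / 1000); // 0 = under 1s, 1 = 1–2s, ...
    const t = totals.get(bucket) ?? { sessions: 0, conversions: 0 };
    t.sessions += 1;
    if (s.converted) t.conversions += 1;
    totals.set(bucket, t);
  }

  // Map of load-time bucket (in whole seconds) to conversion rate.
  const rates = new Map<number, number>();
  for (const [bucket, t] of totals) {
    rates.set(bucket, t.conversions / t.sessions);
  }
  return rates;
}
```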

There’s a great example of this from Walmart. Looking at the site’s massive trove of historical RUM data, Walmart was able to draw a powerful correlation between load time, conversions, and bounce rate. Here are a couple of slides to illustrate…

Visitors who converted experienced pages that were twice as fast as visitors who didn’t convert:

Real user measurement at Walmart

Bounce rate strongly correlates to page speed:

Real user measurement at Walmart

What real user monitoring can’t tell you

  • Because of the way RUM is implemented (with JavaScript beacons embedded in your pages), it can’t help you with measuring your competitors’ sites and benchmarking your performance against theirs.
  • Because RUM is deployed on live sites, it can’t help you with measuring performance on your pages when they’re in pre-production.
  • While RUM does offer some page diagnostics, most tools today don’t take a deep dive into analyzing performance issues.

These are all areas where synthetic testing shines.

Synthetic performance testing

Synthetic performance testing (which you may sometimes hear called “active monitoring”) is a simulated health check of your site.

How it works

You create scripts that simulate an action or path that an end user would take on your site, and those paths are monitored at set intervals.
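At its simplest, a synthetic check is just a scheduled script that fetches a page and records what it measured. Here’s a bare-bones sketch; real synthetic tools drive full browsers from multiple geographic locations and capture far richer metrics. The URL and interval are placeholders, and it assumes Node 18+ for the built-in fetch:

```typescript
// Bare-bones synthetic check: fetch a page on a schedule and log what we saw.
// TARGET_URL and INTERVAL_MS are placeholders. Requires Node 18+ (built-in fetch).
const TARGET_URL = "https://www.example.com/";
const INTERVAL_MS = 5 * 60 * 1000; // run every 5 minutes

async function runCheck(): Promise<void> {
  const start = performance.now();
  const response = await fetch(TARGET_URL);
  const body = await response.text();
  const elapsedMs = Math.round(performance.now() - start);

  console.log(
    `${new Date().toISOString()} status=${response.status} ` +
      `responseTime=${elapsedMs}ms bytes=${body.length}`
  );
}

runCheck().catch(console.error);
setInterval(() => runCheck().catch(console.error), INTERVAL_MS);
```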

What synthetic performance testing can tell you

Synthetic performance tests offer a unique set of capabilities that complement RUM extremely well. You can measure a number of metrics — such as response time, load time, number of page assets, and page size — from a number of different geographic areas. You can also test your pages in pre-production to find problems before they go live.

(Shout-out to our friends at SpeedCurve for the awesome graphics below!)

How do you compare to your competitors?

One of the niftiest things about synthetic performance testing is that, unlike real user monitoring, you can test literally any site on the web — not just your own. This lends itself perfectly to competitive benchmarking. You can gather this data over time and, if you discover that your competitors have gotten faster or slower, you can even find out what they changed on their site to cause it.

How does the design of your pages affect website performance?

Synthetic performance measurement gives you object-level detail about your page assets, letting you closely inspect the design and physical makeup of a page in a controlled environment. Before your pages ever go live, you can ask questions like:

  • How is the site designed and built?
  • How does the design of the page affect performance?
  • How does the use of third parties affect the user experience?
  • Is the rich design of your site really worth it when it comes to performance? What are the tradeoffs?

How does the newest version of your site compare to previous versions?

This is really handy for companies that deploy their sites multiple times per week or even multiple times per day. You can benchmark performance before and after each deployment to understand exactly what made pages faster or slower.

Synthetic web performance measurement: deployment

How well are you sticking to your performance budget?

You have to make compromises between design and performance. Performance budgets are determined by interdisciplinary teams within an organization, early in the design and development process. Everyone agrees from the start that they’re not going to exceed certain metrics, such as start render time, number of assets, total page size, and number of JavaScript requests. Synthetic tools keep everyone honest.
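A budget check can be as simple as comparing each measured metric against its agreed threshold and failing the build when any threshold is exceeded. Here’s a sketch with made-up budget numbers; in practice, the measured values would come from your synthetic test results:

```typescript
// Sketch of a performance budget check. Budget numbers are illustrative only.
interface PageMetrics {
  startRenderMs: number;
  assetCount: number;
  totalBytes: number;
  jsRequestCount: number;
}

const BUDGET: PageMetrics = {
  startRenderMs: 1500,
  assetCount: 80,
  totalBytes: 1_500_000,
  jsRequestCount: 15,
};

// Returns a list of human-readable failures; an empty list means within budget.
function checkBudget(measured: PageMetrics): string[] {
  const failures: string[] = [];
  for (const metric of Object.keys(BUDGET) as (keyof PageMetrics)[]) {
    if (measured[metric] > BUDGET[metric]) {
      failures.push(
        `${metric}: measured ${measured[metric]}, budget ${BUDGET[metric]}`
      );
    }
  }
  return failures;
}

// Example: fail a CI build (Node) if synthetic results exceed the budget.
const failures = checkBudget({
  startRenderMs: 1720,
  assetCount: 95,
  totalBytes: 1_200_000,
  jsRequestCount: 12,
});
if (failures.length > 0) {
  console.error(failures.join("\n"));
  process.exit(1);
}
```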

There’s more…

Filmstrip views let you see how a page renders frame by frame:

Synthetic web performance measurement: page load filmstrip

Cool visualizations connect the worlds of design and website performance:

Synthetic performance measurement: Third party


What synthetic performance testing can’t tell you

  • Synthetic performance testing gives you a series of performance snapshots, not a complete performance picture. Synthetic tests are periodic, not ongoing, so you can miss changes and events that happen between tests.
  • Because you have to script all your tests, you’re limited in the number of paths you can measure. Most companies only look at a handful of commonly trafficked user paths.
  • Synthetic tests can’t identify isolated or situational performance issues — for example, how your performance is affected by traffic spikes or short-term third-party outages and slowdowns.

Takeaway: There’s no single magic bullet for performance testing

One of the most common — and understandable — questions that people have when they look at their synthetic and real user monitoring numbers side by side is: “But why are they so different?” That’s because an average is just a calculation. There is no such thing as an average user. Every user experience is different. And every site delivers both great and terrible experiences to its users every day. Your RUM numbers illustrate the broad spectrum of user experiences. Your synthetic numbers give you snapshots of user experiences within very specific parameters.

Remember at the top of this post, when I said that there’s no magic bullet for measuring performance? If you take away only one thing from this blog post, this should be it. Understand what each type of performance monitoring tool can help you with, and use each to its best advantage.

Try mPulse for 14 days of free real user measurement


About the Author

Tammy Everts

Tammy has spent the past two decades obsessed with the many factors that go into creating the best possible user experience. As senior researcher and evangelist at SOASTA, she explores the intersection between web performance, UX, and business metrics. Tammy is a frequent speaker at events including IRCE, Summit, Velocity, and Smashing Conference. She is the author of 'Time Is Money: The Business Value of Web Performance' (O'Reilly, 2016).
