It’s all about conversion. Every e-commerce business that cares about improving revenue is focused on optimizing its website to improve the customer experience.
However, most companies still lack the ability to create realistic website performance tests due to limitations in their current test methods.
In this webinar you’ll learn:
- How to tie business metrics (ROI) with website performance metrics and real user data
- How to build performance tests that will model user behavior on your site
- How to correlate data analytics so you can troubleshoot bottlenecks to improve performance
Join SOASTA’s performance experts as they explore best practices of leading retailers and e-commerce companies who have implemented cloud testing with real user monitoring to ensure their sites perform when it matters most.
Hi, good morning everyone. Thanks so much for joining. My name is Kathy Lam, and I’m happy to welcome you all to our webinar on the secrets of realistic load testing. Today’s webinar will cover how to build performance tests that actually model real user behavior. In speaking with a lot of performance engineers, we’ve found that this is often a mystery, and we wanted to take what we’ve learned from working with our customers and share that intel back with you.
We’ve also found that before people can invest in solutions, it’s important to help management understand the benefits of improving performance, so tying performance to business metrics like ROI is very important, and we’re going to cover that as well. Finally, how do you take all that data and use it to improve the performance of your website? Today’s speakers are Craig Combs and Mike Ostenberg. Both are performance engineers, and they’ve been with SOASTA doing performance tests for our customers or have worked with large retailers on engagements. With that, I’d like to turn this over to Mike. Mike?
Okay, thanks Kathy. Yeah, we’re going to be talking today about how to put together realistic performance tests. One of the first things you need to understand when you’re putting together a performance test is what your performance objectives are. As you load up your site, you’re certainly going to see some degradation in performance, and you need to understand what acceptable performance is. How do you set that goal? How do you determine exactly what the threshold or objective should be for your site’s performance?
Certainly, a lot of times customers will look at existing web page speeds and say, “Hey, let’s make sure we don’t exceed this, or 20% above it, as we go under load.” They might look at industry benchmarks, or they might just say, “Hey, we know we need to get faster, so let’s see if we can shave 20% off the existing site.” How exactly do you get those numbers, and once you have them, how do you make sure you’re not doing too much? Because, as everyone knows, improving site performance costs money, and there’s going to be a point of diminishing returns.
As you start to invest in upgrading your site, or revamping it with more dynamic controls, all of that involves effort, both in engineering and in understanding exactly what the performance of that site is going to be. One of the things we’ll cover today is setting your objective appropriately, so that you’re not setting it too loose, where you might be losing business, or too aggressive, where you’re spending money unnecessarily past the point of diminishing returns.
One of the other things you should understand when you put a test together is what the appropriate site load is. How high should you go in terms of the site’s capacity? Where should those users be coming from? How do you put together a load that’s realistic in terms of the actual load your users put on your site, so that when you run the load test you can be assured you’re getting valuable results? There are a lot of possible answers, but what we’re going to talk about today is how your actual customers can be the guide, letting you understand both where the point of diminishing returns is in making your site faster and faster, and where you can hit the optimum performance, the goal you should have for your site.
There are other questions too: where are the users coming from? How many users? What geography, what pages, and how long are they spending on those pages? All the information you need to understand exactly what your users are doing, and what your performance objectives should be, actually comes from your users’ own data, and that’s what we’ll be covering today. With that, I’ll hand it over to Kathy and we’ll go into a quick survey about what you’re doing now for site performance. Kathy?
Great. Before we begin, we’d like to ask a quick question: how do you determine your site performance goals? You can see the choices in front of you. Oftentimes, companies use previous metrics, previous load tests, industry standards, or analyst reports. Our question to you is whether you have a good way of determining what those metrics should be. Great, there were a few technical issues back there, so you should see the question now. If you can take a few moments to answer, great. We’ll wait a few more seconds, and then let’s view the results.
Okay, excellent. This is very interesting, because it shows that 90% of you use previous load testing, competitive benchmarks, or customer feedback, so that’s really good to know. This gives us some really good information, because SOASTA provides a platform where you can accurately measure the performance of your site and the experience your users are getting. Most importantly, we do this not by generating synthetic load but by looking at real users. With that, let me transfer this straight back to Mike.
Okay, great, and thanks everyone for taking that poll. It’s interesting to see that we have a lot of responses for past load testing and outages. That’s basically saying you’re using existing results, the existing performance of your site, and trying to keep it steady or within a certain range as your performance goal. That’s certainly one way, and clearly a common one judging by the poll, but what we’re going to show today is a way you can measure specifically what the optimum might be for your site, so you’re not just aiming for “the way we had it last year, maybe 20% better.”
To do that, we’re going to focus on two main products today. What you’re seeing on the screen now, the circle there, shows the three products of SOASTA: CloudTest, TouchTest and mPulse. Today, we’re going to talk about two of those. The first is CloudTest, our load testing platform. It allows you to run load tests at full production load, even the expected peak during seasonal readiness events like Christmas or Black Friday, up to whatever scale you need, and to distribute that load around the globe so that it comes from the locations your users really come from.
With CloudTest, you can test your site from a variety of locations and realistically generate load that emulates real user behavior. Between the two, you can see that we’ve got complementary products: CloudTest and mPulse work together to give you the visibility to make your testing realistic. mPulse will tell you things like how many users, or what the peak load on the site should be, what pages those users are viewing, how long their sessions are, how long they wait between pages, and where those users are coming from. For the performance objective, it will show when you start to hit the point of diminishing returns and what performance will give you the best conversion rate or order value for the site. That then drives CloudTest: once you have these goals and metrics, you know how to shape the load, where it should come from, and what the performance targets are.
CloudTest will let you test at that load, distributed across different geographies, so you’re emulating your users accurately and can tune the site until it hits those targets. One of the other things about mPulse is that it not only tells you what your site performance is today and where the point of diminishing returns is for your users, it can also give you what-if scenarios, so you can say, “Okay, I know my site today is at perhaps 4.12 seconds, but if I could get it down to 3.12 seconds, how would that affect orders? How would that affect conversion?” I think we all anecdotally know that if site performance is a little better, you’re going to get a somewhat better return in terms of conversions, but exactly how do you quantify that? mPulse can do that, both in terms of what you’re actually getting today and by predicting what you’ll get if you improve site performance. I’m going to toss it over to Craig next. He’s going to talk about some of the factors you need to consider when putting together a realistic load and performance test, so that you’re actually emulating your real conditions. Craig, I’ll hand it over to you now.
Yeah, thank you, Michael. This would actually have been another good area for a poll: where are people getting the information for building their tests today? In that survey, a lot of people said they use outages and past tests to determine where their performance should be. That’s fine, but the question, especially if you’re using an outage, is: okay, our goal is that we don’t have an outage, but is that good enough? Our site isn’t crashing anymore, but is the performance good enough for our users? Is it good enough for our guests? Is it fast enough? Are we losing money because users are leaving the site because it’s not fast enough? Are we losing opportunities to the competition because our competition is faster? That gets into competitive benchmarking. If you’re just using existing tests or outages for your goals, are the metrics that define your load tests or testing strategies good enough?
Are they fast enough, and are they what you should be testing? Where did the metrics that define your workloads or your tests come from? Are your guests, your customers, guiding you? Are you getting results from your real users on how fast your site should be? Are you getting information from real users about where they’re coming from, what devices they’re using, what operating systems? Are you testing the right devices and operating systems? I’ve seen a lot of companies and applications where they always test on Chrome, and then for some reason there’s an issue on some version of Internet Explorer that’s slowing the application down. What percentage of the customers are on Internet Explorer? Are the customers guiding you to what you should be testing and monitoring, and is the performance good enough there? You don’t want to miss opportunities for performance tuning, or miss testing the areas where your customers actually are.
We don’t want to be taking a guess here. Server logs are very good because they capture the traffic actually coming in to the systems you’re typically responsible for and maintaining; that’s the area you control. The downside, if you have a web-based application, is that they don’t account for a lot of traffic that a content delivery network might be handling. Likewise, if you have a lot of customers on different browsers and devices and you’re just using logs as your source of information, anything hitting a third party, a partner, or a content delivery network won’t show up, so you won’t have that information about which users are hitting what, or what they’re browsing. Typically, you end up using a lot of third-party tools, and then the question becomes: how many different tools do you need to access to get all the information you need?
You might use one tool to get conversion rates, so you know, “Okay, these are the paths users take through my site.” But how many pages are they actually hitting? I know the paths they’re taking, but not how many times, so I go to another tool to see how many times they go through them. Okay, this is how many times, but what’s the performance of these pages? I have to go to another tool for that. What’s the page breakdown? What are the resources? What does the waterfall look like for that page? Another tool again. I’ve worked at a lot of different companies where I needed multiple tools to really get a full view, on a single pane of glass, of what the workflow looks like for my users.
What does the workflow look like for this application? What paths are users taking? What devices, what operating systems? What do I need to build a realistic model of user behavior on my site, so I can most accurately reflect what’s going to happen when users come to it? That takes a lot of time. Then, if you have a new system, where do you start? There are no logs, third-party tools aren’t going to have any data in them, and you don’t want to guess. Typically, you go back to the business and they give you a number, and I’ve had a lot of applications where that number was way too high. Then you’re testing, a lot of tests are failing, and you keep revisiting that number until eventually you work with them and find a more realistic one.
Typically, behind any application there’s a requirement, a reason for what you’re building: supporting some customer base or some order volume. We’re going to sell so many devices this year; these devices contact our API system; when they go live, 50% of them in the field are going to be active, and they’re going to generate so many transactions. You build your workload model from there, but again, you have to go through that process. Even for a new system, you’re eventually going to want some sort of monitoring, so that when it goes live you can validate your assumptions. You made some business assumptions early on about what you needed to test, so you’ll want the ability to check whether you were right, or whether you need to run some new tests after go-live to make sure your model keeps you on the right track.
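The back-of-the-envelope workload math described here might look like the following sketch; every number is hypothetical, standing in for whatever the business actually projects:

```python
# Rough workload model from business assumptions (all figures hypothetical).
devices_sold = 200_000         # projected units sold this year
active_fraction = 0.50         # share expected to be active in the field
calls_per_device_per_hour = 6  # API transactions each active device generates

active_devices = devices_sold * active_fraction
calls_per_hour = active_devices * calls_per_device_per_hour
avg_tps = calls_per_hour / 3600  # average transactions per second
peak_tps = avg_tps * 3           # assumed peak-to-average ratio of 3x

print(f"average ~{avg_tps:.0f} TPS, planned peak ~{peak_tps:.0f} TPS")
```

The output of a model like this becomes the target throughput for the load test; after go-live, real monitoring data replaces the assumed inputs.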
Those are some of the things we’ll take a look at. Based on that, some of the information we’re going to be looking for is: what was our peak second, minute, hour and day? A lot of these systems will give you the information, but they only hold it for a month, maybe two, so what you’re hoping is that someone saved it off. What were the metrics from last year? Hopefully that person, or the group you worked with, is still there and you can find that information. Another issue is holding on to that information, because a lot of these tools don’t keep data year to year over long periods, or they aggregate it. I’ve had a lot of spreadsheets where the data is aggregated over a month.
Then you’re trying to extrapolate that down to a minute or an hour, and you end up just assuming an even distribution, so you miss a lot of the peaks. Obviously, other things you’re looking for are sessions, unique users, page visits and orders. Again, is all that information in one tool, or do you have to go across multiple tools, which might also be owned by a bunch of different groups, and how easy is the information to get? Then there’s other information we typically look for, depending on the application: session lengths, for instance, if it’s a web application and you’re maintaining session state in memory or in a database. Also the paths users take, their devices, and their locations, where those users are coming from, because it takes a certain amount of time for information to travel from one location to another, and some of it is going to be delayed.
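As a concrete illustration of why extrapolating an even distribution from monthly aggregates understates the peaks, consider this sketch (the totals and the peak factor are made up):

```python
# Why assuming an even distribution from aggregated data misses the peaks.
monthly_page_views = 43_200_000  # hypothetical 30-day total

minutes_in_month = 30 * 24 * 60             # 43,200 minutes
even_per_minute = monthly_page_views / minutes_in_month  # flat-average rate

# Real traffic is bursty: suppose the busiest minute carried 8x the average.
observed_peak_factor = 8
realistic_peak_per_minute = even_per_minute * observed_peak_factor

# Sizing the test to the flat average would under-test the real peak by 8x.
print(even_per_minute, realistic_peak_per_minute)
```

This is exactly why per-minute (or per-second) retention of the raw data matters: the peak factor has to be measured, not assumed.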
Then, test data is going to be very important as well. Along with the paths users take: what was the average number of items in the cart? If we’re talking about an e-commerce application, what was the value of the order, and what different types of searches were made? I’m just using e-commerce as an example because it’s probably something a lot of people can easily relate to. Good test data is very important, and having an application that can report back, or capture, the data users were actually using is important as well.
Fantastic. Thank you, Craig.
I’m going to turn it back over to Mike to give you a better view of mPulse and what it can do.
Okay. Fantastic, Craig, and thank you for going through that. As Craig mentioned, there are a lot of factors you need to understand about your real users in order to put together a comprehensive load test that’s realistic in terms of the volume of users, the sessions, the length of the sessions, et cetera. Next, we’re going to walk you through mPulse, which gives you a complete view of your actual users. I hope you’re seeing the browser on the screen now. This is mPulse. mPulse is a completely hosted product, and setup is simple: if you have a log-in to mPulse, you can go down to this app section here and create an app.
That gives us a lot of power, because we’re now seeing, in real time, every page viewed by every user. We have users that get good performance and users that get bad performance, and we can see how behavior differs between them. This first dashboard we’re looking at is called the mPulse globe. You can see we’ve got a map, a globe of the world, and these beacons firing are users viewing the site. Every time somebody views the site, within about seven to eight seconds a beacon fires off to the mPulse servers, and you’re able to see that data right here. If you have real-time events you need to understand, perhaps a flash sale, mPulse will show you what’s going on in real time.
If you have load testing that needs to be done next week, mPulse can be deployed in minutes, and you can start collecting data in minutes, so you can understand not only the performance your users are seeing but also how that performance affects their behavior on the site. That’s the first part: mPulse captures every one of those beacons in real time. Another point to note, and it will come up a little later when we talk about the Data Science Workbench, is that all of this data, the comprehensive performance data for every page view, including waterfall charts for every page view by every user, is stored indefinitely at a very granular level. You now have a massive data set of every page of every user that you can go to and look at a variety of things. Do users behave differently during a sale versus not, or on weekends versus other days?
There are a lot of ways to use that data. We make it directly available for you to analyze afterwards, and we’ll talk a little later about the Data Science Workbench that helps you do that. Now that we have this data stream coming in from the users, how do we take a look at it and start to understand those aspects Craig mentioned about performance and users? For that, I’m going to go to a few different dashboards. We have about 20 stock dashboards in mPulse and a variety of widgets, so you can create as many additional custom dashboards as you need. I’m going to go to one of the more generic dashboards here, something called the ops dashboard.
We’re going to use this dashboard to talk about how you get visibility into user performance. I’ll start at the upper left, with the performance your users are getting today. Here you can see that the median page load time is about 2.4 seconds, and this little spark line at the bottom shows how that’s varied over the time period we’re looking at, the last 30 days. We break that down into front-end time and back-end time, so you can understand where you’re likely to get the biggest performance improvements, whether the best investment is in the front end or the back end, if you will.
When we go down the left-hand side here, you can also see the geography these users are coming from. The United States is about 60% for this site, but we have a fairly global distribution. Then, as Craig mentioned, you need to know the common pages users go to and the common flows. Down below, we’ve got something called page groups, which tells you which pages users go through most frequently, and here you can see the homepage, product pages, and so on. This is going to help you put together the scripts, or the combination of loads, that emulates the pages users actually visit.
In the upper right-hand corner, we’ll talk a little more about session-level information. Here we can see things like the average session duration, how long users typically stay on the site, and how many pages they view. When we’re putting together scripts for our users, we need to understand: is this an eight-page session? Do they log in, click on a product, put it in the cart, check out, maybe review their order, or are they typically just browsing? You can go through here and see the average session duration, how long users are typically on the site, and how many pages they typically view per session, along with some of those other factors Craig was mentioning, like operating systems, if you want visibility into where users are coming from.
Are they coming from mobile or from a desktop browser? You can see the breakdown of operating systems here, Mac OS, iOS, Windows, et cetera, as well as the specific browsers on each operating system. From this first dashboard you get a pretty good overview of the performance users are currently getting. It also gives you visibility into the volume of users: in this middle panel, the page-view dots show how many page views we have in any time period, and we can drill down to the minute level, or any period we’d like, using the filters at the top. Select the time period you’d like to use, and this will adjust in real time.
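Pulling those session-level numbers together, a load scenario can be parameterized roughly as follows; the page mix, session duration, and pages-per-session figures here are invented for illustration, standing in for whatever your RUM data reports:

```python
import random

# Hypothetical user-behavior model derived from real-user monitoring data.
page_mix = {"home": 0.40, "product": 0.35, "cart": 0.15, "checkout": 0.10}
avg_session_secs = 240      # average session duration
avg_pages_per_session = 8   # pages viewed per session

# Average think time between pages: session time spread across page views.
think_time = avg_session_secs / avg_pages_per_session

def pick_page(rng):
    """Sample the next page according to the observed traffic mix."""
    r, cum = rng.random(), 0.0
    for page, weight in page_mix.items():
        cum += weight
        if r < cum:
            return page
    return "home"  # guard against floating-point rounding at the tail

rng = random.Random(42)
session = [pick_page(rng) for _ in range(avg_pages_per_session)]
print(think_time, session)
```

A load test driven by sampled sessions like these visits pages in the same proportions and at the same pacing as real users, rather than hammering one URL in a tight loop.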
We’ve now got visibility into the performance users are currently getting, where they’re coming from, how long their sessions are, and how many pages they typically view in a session. The next question is, “Okay, now I know what our users currently have, but how do I know what they think is enough? Do they consider our existing performance good enough?” To help with that, I’m going to move over to this next dashboard, the metrics graph. What it gives you is visibility into a few generic metrics: session duration, session length, and bounce rate on the site. These are measurements of basic user engagement. Let’s take bounce rate; I’m looking at bounce rate specifically right now.
Let’s start with bounce rate. Bounce rate is a measure of what percentage of users show up at your site, view the first page, and then leave without even bothering to click on a second page. In the graph down below, we’ve got two lines. The blue section is the histogram: the distribution of performance for all of your users. You can see it peaks at about 1.5 seconds, where, on the right-hand scale, about 702 sessions had a load time of 1.5 seconds. Then, of course, you have a distribution of users with different performance experiences for a variety of reasons: some users see three seconds, some see four.
This blue is a histogram, the distribution of performance. You may see users with a six-second experience; there were only about 200 of those sessions. That’s the performance your users are actually getting, in a histogram view. Overlaid on top of that is this green line, which in this case is the bounce rate: what percentage of our users showed up at the site and left without even viewing a second page. You can see that when performance is really good, at about 0.7 seconds, the bounce rate is pretty low: only about 11.9% of users don’t bother to click a second link. As performance slows down to even just two seconds, which is still pretty fast, the bounce rate goes up to 35.6%. You can tell right away that we’ve got high sensitivity to performance based on the bounce rate metric: going from 11.9% to 35.6% is roughly a 24-point change in bounce rate for about a second and a half of performance.
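The histogram-with-overlay view described here can be computed directly from raw beacon data: group sessions into load-time buckets and take each bucket's bounce rate. A minimal sketch, with a handful of fabricated sessions standing in for the beacon stream:

```python
from collections import defaultdict

# Each (load_time_secs, bounced) pair is one session; the data is fabricated.
sessions = [(0.7, False), (0.8, False), (0.7, True),
            (2.0, True), (2.1, False), (1.9, True),
            (6.2, True), (6.0, True)]

buckets = defaultdict(lambda: [0, 0])  # bucket -> [total sessions, bounces]
for load_time, bounced in sessions:
    b = round(load_time)               # 1-second buckets
    buckets[b][0] += 1
    buckets[b][1] += bounced

# Bounce rate per load-time bucket: bounces / total sessions in the bucket.
bounce_rate = {b: bounced / total for b, (total, bounced) in buckets.items()}
print(dict(sorted(bounce_rate.items())))
```

The bucket totals are the blue histogram; the per-bucket bounce rates are the green overlay line.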
Now you’ve got data that lets you understand, “Okay, if I can bring down the load time of my site, I know the bounce rate is going to improve.” Bounce rate is an indicative measurement; it helps you understand that users are more likely to stick around if they’re seeing the kind of experience they want. More important, though, are the specific business metrics: are they more likely to convert when they come to the site? I’m going to stop sharing for a second and switch back to the deck. Okay, thank you. Thanks to my colleagues for helping me unshare the product. The reason I can’t show this part live is that we don’t want to share revenue metrics for any customer.
I’m going to show an anonymized graph of the other thing we can do, which is measure conversion rates. We can actually measure the dollar value of orders purchased on the site. On the slide you’re seeing now, we’re looking at a metric, the conversion rate, up in the upper right: what percentage of users showed up at the site and actually bought something. That gives you visibility directly into the revenue conversion factors, and here you can see that at about 2.5 seconds we’ve got a conversion rate that peaks at around 3%. As load time goes to four or five seconds, that conversion rate starts to go down, so you can actually see the effect your performance has on your conversion rate.
These are the numbers you can take into performance testing as objectives, but also take back to the business owners of the site and say, “Hey, you were wondering why we’re doing the performance testing? Why we’re pushing to get the site from four seconds to three seconds? It turns out our conversion rate goes from perhaps 2.6% to 3.1% when we get one second of performance out of the site.” That gives you the oomph to justify the performance testing you’re doing in pursuit of better performance. Drilling into this graph, in this particular example, again a real one, users with a 2.5-second average response time for the pages they were viewing had a conversion rate of 2.97%.
For the users who slowed down to 3.5 seconds, the conversion rate was only about 2%. From 2% to about 3% is roughly a 50% relative change in conversion rate. You’ve now got real data about your customers and how the performance of your site affects them. It’s important to note that this is very different from taking industry benchmarks, where you look at e-commerce as a whole. Maybe that makes sense for Target versus Walmart versus Amazon, where they’re all selling the same products, but for a lot of companies it’s not that way. A lot of times it’s going to be, “We have a different brand; we maybe have a higher level of service.” The loyalty of your customers and their willingness to stick around varies based on that, and on loyalty programs. The only way to really know the effect of performance on your customers is to measure it directly, and that’s exactly what mPulse allows you to do.
It’s also important to note that the graphs and histograms we’re looking at on screen are real data, actual data from your real users today: users with a 2.5-second page load convert at 2.97%, and users with a 3.5-second response time convert at about 2%. The next question is, “My gosh, what if we could actually change the performance of our site and improve it? What would that do to the conversion rate, or the revenue that comes into the site?” The next page we’re looking at, again a screenshot, though this is live in the product now, is something we call the what-if. If you look at the top bar, that’s the actual data coming in right now from your customers. You can see that in this case they’ve got a load time of 4.12 seconds, so the existing data is on the top bar.
Now, down at the bottom, you can see that we’ve got this what-if analysis tool, and on this what-if analysis tool, you see that there are sliders under each of these metrics. We’ll start with the leftmost one here, the load time slider. We’ve already pulled the slider to the left, down from 4.12 seconds, which is what we’re currently getting today, to 3.12. We’ve pulled out one second of response time there. When we’ve done that, of course, you can’t just pick up all the users that are at 4.12 and drop them at 3.12. What’s going to happen is you’re going to change the shape of the curve. We’ve statistically modeled what the shape of your curve, the distribution of your users, is going to be when you get to a median response time of 3.12 seconds.
Now that we have a new model for the shape of the curve, again based upon your existing user base, getting them to 3.12 seconds, we can see what the change in those other metrics is going to be. When we move the load time down one second to 3.12 seconds, the bounce rate, which currently is at 20%, goes down to 17%. That’s real mathematical data about where your bounce rate is going to be when you move load time down. Probably more important, though, are these other two metrics further to the right, which say, “Okay, if we can move the load time of the page down from 4.12 to 3.12, the revenue of the site is going to change by $6 million over the time period we’re looking at here, and the conversion rate is going to go up 1.3% to 4.0%.”
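The what-if mechanics can be approximated with a very simple model: shift every measured load time by the slider amount and re-score each session against an empirical rate curve. Everything below is an illustrative sketch with made-up numbers; mPulse fits its curves statistically from your real traffic rather than using a hand-written piecewise function like this.

```python
# Hypothetical bounce-probability curve: bounce risk rises with load time.
# The breakpoints and slopes are invented for illustration only.
def bounce_probability(load_time):
    if load_time <= 2.0:
        return 0.10
    if load_time <= 4.0:
        return 0.10 + 0.05 * (load_time - 2.0)
    return 0.20 + 0.08 * (load_time - 4.0)

def what_if(load_times, delta_seconds):
    """Estimate the mean bounce rate if every page view shifted by delta_seconds."""
    # Clamp at 0.5s so an aggressive slider can't produce impossible load times.
    shifted = [max(0.5, t + delta_seconds) for t in load_times]
    return sum(bounce_probability(t) for t in shifted) / len(shifted)

current = [3.5, 4.0, 4.5, 5.0]        # toy sample of measured load times
baseline = what_if(current, 0.0)      # bounce rate as measured
improved = what_if(current, -1.0)     # everything one second faster
```

Sliding the other way (a positive delta) answers the converse question from the talk: how much bounce, and therefore revenue, would a one-second slowdown cost?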
Now you have not only unarguable data about exactly what your users’ performance and behavior are like at the performance they currently get, but also forward-looking, statistically modeled data that says, “Okay, if I can move it to 3.12 seconds or 2.5 seconds, what is the effect going to be?” Conversely, if you slide the slider the other way, you can see, if we were to slow down to five seconds, how much revenue would we lose? How much would that affect our conversion rate? This feeds directly back into that performance testing model that we have: “Okay, I know that we have certain users distributed around certain locations, and I know that we have certain performance objectives. How do we set those?”
Here, you can see that you can actually decide where your performance objective should be, basing it upon how much revenue it will get you versus whatever the cost will be for improving the performance of the site. Let me go back very briefly, though, to one last item that we have in mPulse. You’ll see that I’ve gone back now to the bounce rate graph. This is live data again, and as the load time goes up, the bounce rate goes up. You’ll also see, down at the bottom, whatever metric you’re looking at, whether it’s bounce rate, conversion rate, revenue or order value, we also have a geographic distribution, so you can actually see how that bounce rate changes as you go into each of those different regions. If you want to see, within the US, what the conversion rate or the bounce rate or the revenue is from the users in each part of the US, you can drill down to that.
We’ve got that geographic data and all those business metrics collated here. One last point, maybe a bit of an aside for mPulse, is that we’re also capturing specific waterfall charts for every one of those user views. This is all non-identifiable information, but it includes all the resources and images that are downloaded and the timing information for those. As I mentioned, we capture every page view for every user, so you can actually go in and look at every page for every user. We don’t just tell you what the page load time was, in this case a page load time of 2.58 seconds. On the left-hand side, you can click on any one of the page views from any one of your users, and it’ll pull up a waterfall chart for that particular user.
You can actually see his initial load here. You can go down and see all the resources, and figure out which resources are taking longer to load. You can figure out if some users are having a lower cache hit ratio than others by actually looking at their data. When we click on one of those users, you’ll notice the panel below shows these other items in yellow, which are the other pages viewed in that particular session. You can actually go in and track the specific pages that that user went through, and when we said he had a median page load time of 4.12 seconds or 3.12, you can actually see every one of the pages and see which of those pages was the biggest contributor to that performance degradation.
Again, mPulse is giving you not only comprehensive visibility into the performance of your users, where that performance is coming from, the volume of pages those users are hitting, and the sessions, that is, how many pages are in each session and how long it takes to go through those sessions, but also what the performance objective should be, and you can even dig into the specific pages to see why the performance was the way it was.
With that, let me go ahead and stop sharing here. We’ll go back to the slide deck. Craig, do you want to talk a little bit about how we can further use that data from mPulse, to not just model what pages users went to overall, but get a little more detailed information about where the users are going? Craig, back to you.
Yeah, this is really the key of the webinar, the secret of building a really good test with real user data, because we’re able to capture all the data using mPulse, and use mPulse to get the page groups and so on. Really, the secret here is using the Data Science Workbench. Now, a lot of customers might be concerned, like, “Wow, we’re going to put this on our website, we’re going to be loading this, we’re going to be collecting all this data. This has to slow down our site, this has to slow down our pages.” The answer to that question is, no, it’s not.
What’s very different about us is we’re a lot lighter, we’re a lot faster, and the biggest difference is we keep all the data. We capture 100%, we don’t aggregate the data, and we don’t get rid of it, and that’s the thing that I loved when I started working with mPulse. Coming from this industry, I thought, “I have all this data. It’s 100%. It’s not sampled.” I don’t know how many tools I’ve worked with where it was, “Oh, we only captured 3% of the data.” Why only 3%? Well, we get too much volume; we can only afford 3%. Then, “Oh, during the holidays we can only capture less data,” because actually, the more popular the site gets, the less they can monitor, because they can only afford so much.
Actually, the more successful you get, the less you can monitor, which is not a really good model. A lot of tools are structured that way: “Oh, the more successful you are, the more you’re going to be paying us, or the less you’re going to be monitoring.” With mPulse, though, we are collecting 100% of all of the users. We don’t do any sampling; we’re capturing it all. We don’t get rid of the data after 30 days, we don’t get rid of it after 60 days. I can go back a year and a half and see what happened, so I can go from one holiday to another and compare the performance down to the minute level. I can see how many users there were, I can see what the checkout rate was, and I can compare how many users were tolerating versus satisfied. This is valuable because I’m able to see how things are trending, and then I can create better workflow models, better tests, and better assumptions.
Now, the Data Science Workbench is able to create very good models, and I can run lots of analysis because we have all that data. In order to use data science, you need data. Since we keep all that data, we don’t get rid of it and we don’t aggregate any of it, we have everything we need to start doing science on it, doing analysis on it. When we’re talking about, “Okay, what was your peak day? What was your peak minute? What was your peak hour?” We have all of that. We have built-in functions in our Data Science Workbench, where you can do a query and say, “Give me our peak day, give me our peak minute.”
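A peak-minute query of the kind Craig mentions is conceptually simple once every beacon is retained: count beacons per minute and take the maximum. The sketch below is a toy stand-in, not the Data Science Workbench’s actual query language, and the timestamps are invented.

```python
from collections import Counter
from datetime import datetime

# Illustrative beacon timestamps; real data would be millions of rows.
timestamps = [
    "2015-11-27 10:01:05", "2015-11-27 10:01:40", "2015-11-27 10:01:59",
    "2015-11-27 10:02:10", "2015-11-27 10:02:30",
]

def peak_minute(timestamps):
    """Return (minute, beacon_count) for the busiest minute in the data."""
    per_minute = Counter(
        datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").strftime("%Y-%m-%d %H:%M")
        for ts in timestamps
    )
    return per_minute.most_common(1)[0]

minute, count = peak_minute(timestamps)
# Busiest minute in this toy sample is 10:01, with 3 beacons.
```

The same grouping trick, truncating the timestamp to the hour or the day, gives you peak hour and peak day, which then become the volume targets for the load test.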
Down below, and I’ll get on to the next slide, we have a chart that can help us find the different paths. What are the paths our users take? What percentage of users took this path, and what percentage took that one? Then I can identify what the locations of our users were, and what the operating systems of our users were. Then I can combine that: what was the number of users that had a response time over this threshold? What were the response times of users that took this path? What about the locations? Give us the top locations. Give us the slowest ones.
If you’re looking for tuning opportunities and a return on investment, a lot of times the first instinct is, “Okay, I want to tune the slowest page.” Well, you could do that, but the slowest page may not be the page that is used the most, so that may not be the best investment. “Okay, well, I want to tune the page that’s called the most.” Well, the page that’s called the most is often the fastest, so that also may not be the best investment. Instead, I can go into the Data Science Workbench and say, “Okay, give me the pages that are within this range of performance, but also within this range of user volume and percentages.” I can easily identify pages or page groups that are good candidates for optimization, the ones that are probably going to give me the best likelihood of a return on the investment.
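That triage logic, neither the slowest page nor the busiest page, but the slow-and-popular ones, can be sketched as a filter plus a ranking. The page groups, thresholds, and numbers below are invented for illustration; ranking by estimated user-seconds recovered is one reasonable scoring choice, not mPulse’s built-in formula.

```python
# Hypothetical page-group stats: median load time and page views.
page_groups = {
    "checkout": {"median_load_s": 5.8, "views": 1_000},
    "home":     {"median_load_s": 1.9, "views": 50_000},
    "product":  {"median_load_s": 3.6, "views": 20_000},
    "search":   {"median_load_s": 3.1, "views": 15_000},
}

def tuning_candidates(groups, min_load_s=2.5, min_views=5_000, target_s=2.0):
    """Keep groups that are both slow enough and popular enough, then rank
    them by the total user-seconds that tuning to target_s would recover."""
    scored = [
        (name, (g["median_load_s"] - target_s) * g["views"])
        for name, g in groups.items()
        if g["median_load_s"] >= min_load_s and g["views"] >= min_views
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)

ranked = tuning_candidates(page_groups)
# "checkout" is slowest but too low-traffic; "home" is busiest but already
# fast; "product" tops the ranking as the slow-and-popular sweet spot.
```

The thresholds are the knobs Craig describes: the performance range on one axis, the user-volume range on the other.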
Then, going back to competitive benchmarks and how we tune, a lot of those benchmarks compare something like how long checkout takes across different vendors. Different companies have different checkout processes, so when they’re doing competitive benchmarks, it’s not necessarily, as I see it, a fair comparison, because the process of checkout is different. They provide different functionality to their guests. Different companies sometimes want to offer different features, so just because one company’s checkout is slower doesn’t mean that it’s really any worse or better than another one. They’re just providing different feature sets or a different user experience. Being slower from a transactional or benchmark standpoint doesn’t mean it’s any worse.
Also, when you’re looking at benchmarks for home pages, again, it’s all about the user experience, or the perceived performance of what’s being presented to the user. Those competitive benchmarks just say, “Hey, this is what the performance is and this is what we’re measuring,” but they don’t tell you anything about what the users are actually doing. mPulse actually lets you and your business know what the users are doing. Are they leaving the site, are they bouncing, or are they actually converting? It’s able to tell you what the users are doing and give you more actionable information.
Here’s another example in the Data Science Workbench, showing how I can define a user path. Here I selected a particular user path, and I’m able to see that 10% of the users hit this landing page. This is the starting landing page, and this is the path that they take. Then I can take this information and use it to create a test: “Okay, I made this scenario, and for this scenario, 10% of users need to execute this path.” I can take this information into CloudTest and have it help me define my test, as far as the distribution for my testing and so on.
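Turning those path percentages into a test distribution amounts to a weighted split of virtual users across scenarios. The path names and shares below are illustrative, and CloudTest’s own scenario configuration looks different; this just shows the arithmetic.

```python
# Hypothetical scenario mix derived from mPulse path analysis:
# (path, share of users). Shares should sum to 1.0.
scenarios = [
    ("landing -> product -> checkout", 0.10),
    ("landing -> search -> product",   0.35),
    ("landing -> bounce",              0.55),
]

def assign_users(scenarios, total_users):
    """Split total_users across scenarios in proportion to their weights."""
    return {path: int(round(share * total_users)) for path, share in scenarios}

mix = assign_users(scenarios, 120)  # 120 virtual users, as in the demo
# Each scenario now has a user count proportional to its real-traffic share.
```

Note that naive rounding can drop or add a user or two when shares don’t divide evenly; a production tool would reconcile the remainder against the total.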
Then, taking all that information, we’re ready to test. With that, I’m going to move into a quick little demo of how some of that information goes into CloudTest, and show you our latest version. Right now, this is our latest version of CloudTest, and this is our globe right here, where I would take that location information to define the grids that I want to start up. I might take the information and say, “Okay, I have people in the east, the west, and some people coming from Europe and Japan.” And going back to the what-if analysis, there’s nothing that limits it to reporting in dollars; it can report euros or pounds as well. Dollars are what we’re displaying, but there’s localization as well.
I would take that information to define the locations I need to test from, and then go back into the composition. I’ll go in here. Now I can define, “Okay, these are the scenarios that I need, and here are the percentages that I need to execute for those scenarios and locations.” I already have those defined, and in here, I’ve already defined the steps based on that chart. These are the pages that those scenarios need to hit. I’m going to load it up.
Now I’m loading it into the cloud, distributing those scenarios and getting ready to test. I’ve already sent the scenarios across the globe, to the east, the west, Japan, and Ireland, and I’m ready to test. I have 120 users; again, I got that from mPulse. I already know that I need 120 users. It looks like I’m just waiting for some monitoring to start. Let me stop this real quick and check the monitoring here.
Actually, Craig, I think I know what’s going on. We’re testing against an actual site, and that server was brought down automatically; Amazon is bringing it right back up, so apologies for that. It’ll be back online in about one minute. For those on the phone, what’s happened here is that the load test is hitting a site that we demo these load tests against, an actual site hosted on Amazon. We have a routine that automatically pulls down the server when it’s unused for a period of time, and it did that, and I had forgotten to restart it, so my apologies, folks. The server will be back up in just a couple of minutes so we can restart the load test.
Actually, this could be useful. What will end up happening is we’ll start getting a bunch of errors showing up in the error analysis.
It’s a demo of our error analysis capability, but we’ve got that covered. The server is coming up here.
It’s coming in now; it was just delayed. Right now, I’m seeing live traffic. As we saw, there was no traffic coming initially, and now the traffic has started coming from the different data centers. I’m able to see the traffic coming from Ireland, from the east and west here, and from Japan. The different arcs indicate the performance and the volume of the traffic from the different data centers. I can actually break it down to a specific country. I’m sure you weren’t aware you were going to get a geography lesson here. Here, I can see the send rates, performance, and response times. If I wanted to, during the test, I can also dynamically change the volume of users. If I needed to change the number of users for a particular scenario, I can do that here. During the test, I can bring the number down, or I can increase the number of users for a scenario, or increase the total number of users.
I could also define that in the composition; I can even program changes to happen during the test. Here, I can see in real time, within a second, that there was a spike in response time, and now the response time has dropped. I don’t see an overall impact on send rate. Earlier, we talked about page groups in mPulse. When I’m running a test, after I’m done, I would double-check my counts and my numbers here and see how they correlate to my page groups in mPulse, and then I would take these response times and see how they correlate to what I saw in mPulse as well.
Here, I’ve got my real-time analysis for error rate, and I can see what errors are happening, where they’re coming from, and how that changes over time. If we have any monitoring going on, I can take a look at that. Here, I can see sessions per minute, the number of virtual users, confirmed virtual users, total pages, and any other metrics that I’m collecting at this time. A lot of these charts are live and dynamic. I can drag and drop them and overlay them on top of each other if I want to see how they correlate to each other.
I can blow them up if I want to, and I can also export them to Excel, so I can do more manipulation if I want to as well. That’s a brief overview of how I would use that mPulse data to build a test. Again, I would take it to define my locations, then use the path chart to build the scenarios I need, the pages I need, and my distributions. I use my user counts, which also help me define my delays. Then I would also use the data I’m getting after the run to correlate and validate my results: “Okay, did I hit my targets? Did I hit my numbers? Are my response times accurate?” And so on. Then I’ll go back to the test here. Again, here’s the view; if I wanted to view it flat instead of as a globe, I could do that as well. With that, I’ll give it back to Mike. Mike?
Okay, very good. Thanks, guys. I think we’re pretty much ready to wrap up here. Just as a quick summary of what we’ve gone through today: we talked about how mPulse can capture the real user metrics from your customer base as they hit your website. That includes things such as the performance they’re seeing, but also how that performance affects their behavior, so that you can understand how much more revenue you could get on your site if you were to change your performance objective. It also gives you information about the number of pages they’ve viewed, which pages they’re seeing, the session duration, where they’re coming from, which geographies, and what the common search terms are that they’re using, et cetera, so that you can put together a load test which will accurately emulate that user behavior.
Craig then went on to show you a load test doing exactly that. We put together a load test that runs from multiple different locations based upon data from mPulse, that runs user volumes based upon data from mPulse, with search terms pulled from mPulse, and you saw how you can see that data in real time during the load test. One other point to note is that, a lot of times when you’re running these load tests, customers have a little bit of fear about stress testing their production site. Certainly we work with a lot of customers, and we test both staging and performance-testing environments as well as production sites.
The combination that we have with CloudTest, giving you real-time control to see the data during the load test and to pull down the volume of users, together with mPulse running at the same time so you can see the users’ behavior, makes it so you can actually run on your production systems and be assured that your users are not being impacted. We’ve gone through and shown both halves of that, running the mPulse portion and running the CloudTest portion.
With that, I guess we’ll wrap it up today with any last questions. I think we’ve addressed several of them in the chat through the session here, and we’ll look forward to your feedback for the next one. Kathy, do you want to talk a little bit about our next webinar coming up, before we wrap up here?
Yeah, sure. Actually, before we jump into that, I wanted to talk a little bit about the CloudTest on-demand offer that every one of you will get. We’re offering a small promotion here, where, for around $2,000, we’ll help you kick-start a CloudTest project, even today. Each project entails a kickoff meeting, where we look at the goals and define what your business objectives are. Then one of our performance engineers will help you script the load test, and then actually run and execute the test. Finally, we’ll sit down and review the results with you.
Then, one of the things we’re adding on here is a 90-day free trial of mPulse as well, so that you can actually see the benefit of real-user measurement on your website. If you are interested in this offer, you will get an email with all the details; just let us know. Great. Anyway, thank you so much for attending this call. I hope you found it as informative as I did, and see you next time. Thank you.