Join SOASTA evangelist and O’Reilly author Tammy Everts for a sneak peek at the preview edition of her new book, Time Is Money: Business Value of Web Performance.
Tammy shares the highlights from the book, including material about the fascinating psychology of web performance. She’ll also walk through some of the case studies that make an appearance in the book, demonstrating the impact of performance on a variety of metrics such as bounce rate, page views, traffic, revenue, and ad engagement.
I’m excited about today’s talk. I have spent the last six or seven years researching the intersection of web performance with user experience and business metrics. It’s an endlessly fascinating topic for me, so I’m always happy anytime I get an opportunity to talk about it with other people.
On that note, let's do a quick poll. Wait a second. There we go. I’m going to poll everyone who is participating today. This is anonymous, so nobody is going to know what your answer is. How many of you have done research on how performance has impacted your own business metrics? Just your own internal case studies, regardless of whether or not you have shared them with the outside world: how many of you have sat down with synthetic monitoring tools or real user monitoring tools and actually tried to correlate what impact performance has on your business, using real data as much as you can? If you wouldn’t mind answering this poll, I want to come back to it at the end.
A little bit about me. My name is Tammy Everts. This is how you can reach me on Twitter, where I will be posting a follow-up blog post about this talk, and I will be posting the slides there as well. I also run the performance blog at performancebeacon.com. Our team of bloggers covers everything we can think of with regard to performance testing, measuring, monitoring, and analytics. It’s our repository for all the research we do. If you are interested in that and you haven’t checked it out already, I would encourage you to go there.
As I always say at the top of these talks, I have spent many years researching the intersection of performance with UX and business metrics. I was very excited when, a year ago, I was contacted by the folks at O’Reilly Media to corral it all into a book. This is the book. It will be available in its entirety in 2016, but there’s a sneak peek, a preview edition: a couple of sample chapters that O’Reilly has pulled out and packaged into this neat little e-book. It’s very rough. This is where you can go to download it: soasta.io/timeismoneybook. The download window actually ends today, so we squeaked in under the wire. If you would like to download your own copy and distribute it to your team, I would encourage you to do that.
What I wanted to do today was give an overview of the general themes of the book, why we should all care about performance, and give a little history lesson, really. I had a lot of fun putting this talk together because I wanted to create a story around performance that I have never actually told before, which is a historical story. I hope it is as interesting to you as it is to me.
Back in the very early days of performance, there was a lot of conjecture about what people say they want and expect from their online experiences. I think surveys are a very interesting way to at least begin asking questions about any topic. Here is one survey we did at SOASTA, where we surveyed a large number of people using Harris Interactive, a third-party research group, and asked them: what do you care about in your online experiences?
Not surprisingly, the top issue people cared about was safety and security, no surprise there. The remaining issues were all around performance: slow performance, pages being unresponsive, site crashes or outages. Those all fell right after safety and security. Clearly, people care about performance. It is on their radar. They know they care. This is a little bit of a backbone for why we all do what we do.
Other interesting survey data comes from Akamai, where about half the people say they will go to a competitor’s site next if a site performs poorly. One out of three say they will never return to the site, and one out of five say they will share their complaints about the site: they will complain to friends and family, and possibly post about it on Twitter and Facebook. People say they are very vocal about their complaints. That’s all well and good. It’s interesting to know what people say, but how do we actually translate what people claim they want into business terms? How can we make all that conjecture meaningful at a business level, so we can actually measure how user dissatisfaction translates to business outcomes?
Kathy Lam from SOASTA just posted in the chat, and I’m going to mention it here: please feel free to post questions in the chat channel. There will be a Q&A at the end of the talk, and I find the questions interesting and helpful; I can address them in my blog post as well, so if you have questions, please fire away. How do we translate performance to the business? How do we translate user experience to the business?
There are a lot of things we measure when it comes to performance. This little table shows some of the performance metrics I talk and write about on a regular basis, ranging from start render and DNS lookup times over to redirect times and SSL negotiation. There are a lot of different things we can measure when it comes to connecting user experience and the business. The red one is the one people probably get most interested in, and it’s the question I get asked most commonly: what is the impact of a one-second delay, or a one-second improvement, in performance on your bottom line? A lot of what I’m going to talk about for the remainder of this webinar is that particular question.
I’m also going to be talking about that as it relates to conversions. Just in case anyone out there is unfamiliar with that phrase, which I’m going to be dropping a lot, it’s something I think everyone who touches a website should know. If people use your site and there is some expected outcome, that’s how you measure success. It could be: did they complete a transaction? Did they download something? Did they sign up for a newsletter? All of those things are conversions. How many people do you convert from somebody who is merely browsing your site to actually completing whatever engagement you want them to complete on your site?
What I’ve diagrammed here is an incredibly optimistic conversion funnel, where you start with all the visitors to the website, and at the bottom of the funnel, the conversion rate is the percentage who complete a purchase or whatever transaction or engagement you want them to complete. 10% is incredibly high, by the way. Generally, I see 2%-4%, and anything above 4% is golden. If you have conversion rates higher than 4% on anything, you should be very pleased and proud. There’s always room for improvement, but that’s really great.
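Just to make the arithmetic concrete, here is a tiny sketch of that calculation; the visitor and purchase counts are made-up numbers, not figures from the talk:

```python
# Hypothetical session counts -- purely illustrative, not data from the talk.
total_visitors = 50_000        # everyone who entered the top of the funnel
completed_purchases = 1_450    # visitors who finished the desired transaction

# Conversion rate: the fraction of visitors who completed the engagement.
conversion_rate = completed_purchases / total_visitors
print(f"Conversion rate: {conversion_rate:.1%}")  # 2.9%, within the typical 2%-4% range
```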
This is where we get into the history lesson. I would call this a brief history of performance ROI, through the lens of when I got involved in performance back in the day. I want to give this history lesson in terms of questions. Roughly, I think we can break down each year since 2008, when performance first burst onto the tech scene, into what big questions we were able to answer that year, or what big questions we at least started trying to answer that year.
In 2008, the big question was: does front-end performance even matter? Who cares? The answer to that question is yes, it does matter. I don’t know how many people sitting in on this talk have been to an O’Reilly Velocity Conference. It is the web performance and operations conference. It started out focused in Silicon Valley and now it’s global. There’s one every year in New York, one every year in Europe, and one every year in China. It’s where leaders from the performance space go to talk about the cutting-edge questions, issues, and case studies, and what we can learn about performance. I highly recommend that if you get an opportunity to go, you go. If you go to the Velocity Conference website, you can download a lot of the case studies and see what other people are sharing and what kind of people are going to the conference.
2008 was the first year of the Velocity Conference. In the thumbnail photos, you see the different images here; the gentleman on the right is Steve Souders. He is still one of the co-chairs and is a leader in the performance field. 2008 was a great year because it brought a lot of different people to the table, and there was a lot of mutual recognition: “Performance matters to you? It matters to me, too. This affects my business.” It was really great in that way. 2008 was also a good year for performance because this seminal study from Aberdeen Group came out, a third-party study where they surveyed several hundred businesses and asked them: what is the impact of a one-second delay on your business?
This study was important because it indicated that not just for a single company, but for a lot of companies, performance had a significant impact on everything from page views to customer satisfaction to conversions. As I said earlier, these numbers have been widely cited. They’re old now, so I don’t actually cite them myself anymore; it’s dated information, seven-year-old stats. But I have actually spoken with the folks at Aberdeen recently about this, and they are interested in revisiting this study and updating these numbers. I’m excited about that, and you can definitely expect me to cover it when they do update those numbers.
In 2009, based on the success of piquing people’s interest in 2008, the conversation got a little more nuanced. We started exploring: which business metrics does performance affect? Aberdeen touched on them, but what else could we learn? What was great about 2008 was that it brought all those people to the table to talk about performance at Velocity. Then in 2009, we realized a lot of those people had gone back to their organizations and done their own research, actually generating their own case studies. They came back to Velocity in 2009 and shared those findings.
Again, these are some of the early, widely publicized case studies around performance impact. I’m not going to read these slides to you. You can look at them yourself. If you want, you can always see them in my blog post, but you can find any of these easily online.
What these case studies indicated is that whatever you care about, whether revenue, page views, search engine marketing, bandwidth, or even servers, performance moved the needle. The Shopzilla case study indicated that by making performance improvements, they actually cut the number of servers they needed in half; it improved bandwidth that much. Download conversions, increasing traffic, more page views: performance moved the needle on all of those things for a lot of organizations. I think without the Velocity conference, and specifically without the participation of those early adopters who shared their own case studies, we would be a lot further behind than we are in being able to make the case for performance. There would be a lot of companies out there still wondering why their business is not as successful as it could be, not realizing that ten-second downloads are the reason.
In 2010: should we care about slowdowns as much as outages? This was, again, getting a little more nuanced in terms of what we look at. People looked at 2008 and 2009 and said, “Yeah, I get that performance matters for my business, and ideally my pages should be faster, but what my boss cares about, what my business cares about, is outages. Outages are what grab the headlines.” If you run a major website or a major web service and you go down, you are all over Twitter. Tech publications write articles about you. There is a lot of incentive, if you have only a limited number of resources, to focus on outages over slowdowns.
2010 became the year we asked the question: which should I focus on more? There’s a great piece of third-party research, similar to the Aberdeen study, done by TRAC Research, and I love this study. They, again, surveyed a few hundred businesses and asked them what their revenue losses were per hour of downtime and per hour of performance slowdown. They defined a performance slowdown as pages rendering in 4.4 seconds or slower. What they found was that downtime is much more expensive per hour: about five times more expensive than a slowdown.
However, slowdowns occur ten times more often than outages. It’s interesting, actually, because every time I present this stat I get a lot of people saying, “I think it’s more often than that,” to which I would say, “I would probably agree with you, but at the time this research was done, this was the number that was put out.” Again, it’s averaging the numbers across a lot of businesses. The takeaway here is that whether it’s ten times more often or 20 times more often, if you factor in the frequency of slowdowns versus the frequency of outages, you can make a very strong argument that slowdowns matter at least as much as outages. Ideally, you don’t want either, but some outages are inevitable, and sometimes slowdowns are inevitable. You want to mitigate both as much as possible.
In 2011 (this slide will make more sense in a minute), we started to ask whether what we had been doing in the past was comparing apples to oranges. To illustrate what I mean by that, go back to the Shopzilla, Mozilla, Yahoo!, AOL case studies. What a lot of them came down to was this: say in March, we notice our website is unacceptably slow. We go on an eight-week sprint to optimize: do some very aggressive optimizations, consolidate resources, get rid of extraneous scripts, maybe optimize images, do whatever we can to cut load times. Two months later, the new version of the site is faster, and you see a positive improvement to conversions, bounce rate, whatever metrics you care about.
But there will always be naysayers who look at those before-and-after numbers and say, “Yeah, but we also had a big marketing campaign. Maybe this big inbound marketing campaign is the reason conversions increased,” or “We did some other things. We do a lot of work on SEO, and so we drove a lot of inbound traffic that way.” It’s hard to convince everybody using before-and-after scenarios that performance always matters, because there will always be people who can point out that a lot of different variables changed, and fair enough. If before-and-after isn’t a valid argument for you, then we should be able to do better than that.
At the time, I was working for a company called Strangeloop Networks, which developed front-end optimization solutions. We also did some rudimentary real user monitoring, in which we were able to track the experiences of everybody coming to the site, and then learn from those experiences so that the optimization engine we had could do even better optimizations in the future.
We had a customer who did not want an apples-to-oranges case study. They didn’t want a before-and-after. They wanted a side-by-side comparison where the only differentiating variable was site speed: what is its impact on business metrics? What they agreed to let us do was conduct an 18-week study where most of the people who came to the site received an optimized version of the site, as fast as our optimization engine could make their pages, which was generally significantly faster.
Then we artificially introduced a series of three delays to a very small cohort of their traffic. I can’t remember the exact numbers, but roughly 97% of their site traffic was optimized. Of the remaining 3%, 1% had a 200 ms delay, 1% had a 500 ms delay, and 1% had a 1000 ms delay. I should clarify what I mean by introducing delay: we delayed the HTML itself, so there was basically a blank screen until the HTML showed up, and then everything else rendered afterwards as normal. We tracked this over 12 weeks, and what we found was that the people who experienced the 200 ms delay did not move the needle very much on the metrics we were looking at: bounce rate, conversion rate, cart size, and page views. But at 500 ms, half a second, and even more so at one second, we saw a significant change to business metrics.
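As a rough sketch of how an experiment like this could be wired up (the cohort percentages follow the study, but the hashing scheme, function names, and serving logic are my own illustration, not the actual implementation):

```python
import hashlib
import time

# Delay tiers from the study: ~97% of traffic optimized, 1% each at 200/500/1000 ms.
# Hashing the visitor ID keeps each visitor in the same cohort across visits.
def assign_cohort(visitor_id: str) -> int:
    """Return the artificial HTML delay (in ms) for this visitor."""
    bucket = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16) % 100
    if bucket == 0:
        return 200
    if bucket == 1:
        return 500
    if bucket == 2:
        return 1000
    return 0  # the optimized majority gets no added delay

def serve_html(visitor_id: str, html: str) -> str:
    """Delay the HTML response itself, so the screen stays blank until it arrives."""
    delay_ms = assign_cohort(visitor_id)
    if delay_ms:
        time.sleep(delay_ms / 1000)
    return html
```

Hashing (rather than random assignment per request) is what makes the six-week return-visit tracking possible: a shopper who was slowed down keeps getting the same treatment for the duration of the experiment.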
If you see your cart size, basically the dollar value of what’s in people’s shopping carts, reduced by 2.1% because they are simply not visiting as many pages and putting as many items in, that’s significant. If your conversion rate is reduced by 3.5%, that’s also significant. All of these numbers at 500 ms and 1000 ms were extremely significant. As an aside, we had another customer who wanted to do something similar with their site. We actually had to cut it short, because the results are instantaneous: as soon as you slow the traffic down, all these metrics are negatively affected, and after a couple of weeks they said, “You know what? You can’t do this. This experiment is costing us too much,” so we had to kibosh it.
This was a great study for us because it gave us the apples-to-apples data we were looking for, that the industry was looking for. At the time, there just weren’t a lot of case studies like this, so this was great, and even to this day there aren’t a lot of case studies like this.
I mentioned earlier that this was an 18-week study. We slowed down the traffic for 12 weeks, as I talked about, and then for the following six weeks we were able to track return visits by those visitors. The bottom two lines on this graph show basic customer retention. You can see that for shoppers who had the 500 ms delay and shoppers who had the 1000 ms delay, the return rate averaged around 37%-38% (a little higher for the 500 ms delay), compared to the baseline optimized traffic, which averaged 40%-41%. What’s interesting is that in the six weeks following the end of the experiment, all of these people were receiving the optimized experience, but the people who had earlier received the slower experience were slower to return to the site. They just were not returning at the same rate.
What I like about this particular finding: remember earlier I said I find surveys interesting because they present questions and get you thinking about things. In that earlier Akamai survey, one out of three people said that if a site was slow, they would not return. These numbers don’t show that exactly one out of three did not return, but they definitely indicate that not everybody returns. Knowing that people claimed, in significant numbers, that they wouldn’t return to a slow site gave us the inspiration to track return rates for the six weeks after people had received a slower site. Again: survey data is interesting and raises good questions, but looking at real data is where we find the real answers.
Moving along from there, in 2012, it’s funny. My husband and I both work from home, so he saw me putting together my slides and he was looking at all of these. If you haven’t noticed, there’s usually a lot of primates in my slides. He asked, “Is that professional?” I was like, “Yeah, they’re wearing suits and ties. Of course, it’s very professional.”
In 2012, we asked: if we can extract meaningful insights from looking at small amounts of data over small amounts of time, what can we learn by looking at a lot of user data over longer periods of time? 2012 was the year that real user monitoring came on the scene. It’s funny, because to me it feels like it’s always been around, but I think that’s because I have a very poor sense of time. Real user monitoring has only been on the performance scene in a meaningful way for about three to three and a half years.
One of the early case studies built on real user monitoring data came out of Walmart. Walmart has an internal tech incubator called Walmart Labs, which is really cool. If you are as big as Walmart, you actually get to have your own in-house tech incubator, where they develop new solutions and explore different technologies, all around improving online experiences and improving behind-the-scenes tech adoption as well.
At Walmart, there was a group of people who knew they wanted to work on optimizing performance and improving user experience, but they needed to make a strong case, because it was going to require a lot of resources. When they were tasked with making that case, they had early adopters using real user monitoring tools. They decided to look at all the data they had been passively gathering about people using their site, and use that existing data to make the case for them.
What they did, as you can see in this graph, is take all of their user traffic and dump it into a histogram that shows the distribution of user experiences in terms of load time. You can see from left to right that the bulk of users’ load times started at around one second. What’s interesting is they also overlaid this distribution with a conversion rate line. You can see that the faster pages had much higher conversion rates, and conversions dropped off steeply from one to two seconds and on out, flattening at around four or five seconds and basically plateauing until you get to the 11-second mark. There’s some interesting stuff there, and that was just looking at their existing traffic, before doing any optimization.
They found some other interesting things, again looking at shoppers who converted versus shoppers who did not. Converted shoppers were, on average, served pages that were twice as fast. To call out one specific page grouping: non-converted shoppers received category pages that were two to three seconds slower than the category pages served to people who did end up converting. They also found, looking at their existing traffic, that every 100 ms of improvement increased incremental revenue by up to 1%. It doesn’t sound like a lot, but if you are Walmart, 1% is huge.
Just to get some closure on that story: by presenting this case, by taking their data and creating a series of graphs showing these correlations, they were able to convince people that they needed to do an optimization sprint. They did it, and sure enough, when they looked at their real user data afterwards, they found that they had improved their business metrics across the board.
Let’s skip to the present, 2015, and talk about what’s new. What are we looking at today? What have we learned in the intervening years, and where are we going when it comes to looking at business metrics? I have a great job. One of the fun things about it is that I’m the ultimate beta tester at SOASTA for our RUM product, mPulse, and the data science workbench tool that accompanies it, which lets you go into all the RUM data and do cool slicing and dicing. I can do things similar to what Walmart did. We have access to billions of beacons’ worth of user data for a number of leading websites. I can go into this data, look at a variety of different verticals and different times of year, and see what the correlations are between performance and business.
This is based on a case study we did in the summertime, using anonymized data for some retailers who had significant amounts of mobile traffic, people coming to their site from mobile devices. When I did this study, I wanted to explore a few different myths and misconceptions that people have around mobile performance. Two I encounter frequently are: performance doesn’t matter as much on mobile as it does on desktop, because mobile users expect pages to be slow and are more willing to be patient; and people don’t convert that much on mobile devices anyway, so you are not going to find anything interesting there.
That goes back to some user surveys as well. There are a couple of warring surveys around mobile user expectations. Some survey data says mobile users expect pages to load as fast on their mobile devices as they do on the desktop. Then there are other surveys that say if you ask the average person, “Do you expect pages to be slow on mobile, or are mobile experiences slow for you in general?”, most people will say, “Yes, I know. When I try to look at a site on my tablet or phone, it’s really slow.”
Because the survey data is so contradictory, it’s easy to just choose the data points that suit you and disregard the rest. We know what people say they will do; what do they actually do? For this particular set of representative retail sites, I created a histogram similar to the one Walmart did, because it shows the load-time distribution, and you can see the experience most people are getting, which is in the 2.1-4.8 second range. Then you can see where peak conversions happen and how conversion rates drop off.
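A rough sketch of how a histogram like this can be computed from RUM beacon data; the field layout and sample numbers below are hypothetical, just to show the bucketing idea:

```python
from collections import defaultdict

# Each beacon: (page load time in seconds, did the session convert?) -- sample data.
beacons = [(1.8, False), (2.3, True), (2.6, False), (3.1, True),
           (4.0, False), (4.9, False), (5.5, False), (6.2, False)]

def load_time_histogram(beacons, bucket_width=1.0):
    """Group beacons into load-time buckets and compute conversion rate per bucket."""
    buckets = defaultdict(lambda: [0, 0])  # bucket index -> [sessions, conversions]
    for load_time, converted in beacons:
        b = int(load_time // bucket_width)
        buckets[b][0] += 1
        buckets[b][1] += int(converted)
    return {b: (sessions, conversions / sessions)
            for b, (sessions, conversions) in sorted(buckets.items())}

for bucket, (sessions, rate) in load_time_histogram(beacons).items():
    print(f"{bucket}-{bucket + 1}s: {sessions} sessions, {rate:.0%} converted")
```

The bars of the chart are the session counts per bucket, and the overlaid line is the per-bucket conversion rate; at real scale you would run this over millions of beacons rather than eight.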
What’s interesting to me is that peak conversions, a 1.9% conversion rate, happen really soon, at 2.4 seconds. The other thing that’s interesting is that you hit what we call the performance poverty line, where things bottom out, early as well, at 5.7 seconds. That’s quite early when you consider that on mobile devices, a lot of pages are not fully rendering until eight seconds or longer. Also interesting is the steep decline from a 1.9% to a 0.98% conversion rate, just between 2.4 seconds and 4.2 seconds. To me, that expresses the fact that yes, performance does matter on mobile. People are converting on mobile devices, and their actual behavior indicates that performance really matters.
One of the things I hear a lot is people saying, “That’s not my site. All the other case studies are great, but that’s not me. My users are different. My site is different.” To which I would respond, “Of course it’s not your site. Your site is different. Your users are different. This is why you need to actually know your own users and use your own data.” But there are a few ways we know that user experiences differ, depending on where people are coming from and how they are using the Internet.
I would even go so far as to say it’s not even just that your users are different. I, as a user, am different depending on whether I am visiting your site on my phone, on my tablet or on my desktop. I am different depending on whether I’m visiting your site during the day or on an evening or on a weekend. I am different depending on whether you are a specialty goods retailer or a general merchandise retailer. I am a different user across all those use cases. This is why you need to understand all your user experiences.
This is an example, again from a lot of user data, of how people respond in terms of bounce rate. If you are a specialty goods retailer, your bounce rate is going to increase as your pages slow down, but not that much. If you are a general merchandiser, your bounce rate is going to increase much more sharply as page load time increases. Similarly with conversion rate: if you are a specialty goods retailer, people come to your site knowing they can’t go anywhere else to get what they need, so they are more willing to stick around if your pages slow down. But if you look at that green line, general merchandisers, you can see a swift, sharp drop-off in conversion rate as pages slow down. People abandon the pages because they know they can go to a competitor’s site or go back to Google and try again.
We also know that when people come to your site, their willingness to complete a transaction and stay on the site varies depending on where they experience poor performance. If they experience slow load times on checkout pages, the conversion rate drops off somewhat. Ideally, you want all your pages to be fast, but it’s not hugely significant there. However, if your browse pages, which we define as your category pages and product pages, the pages people tend to come in on from search results, slow down, then we see conversion rates shrink by about 50% as load times increase from one to six seconds. Again, really meaningful: at one or two seconds you are okay, and then from three seconds it’s boom, boom, boom, down to six seconds.
The pages people experience performance issues on: that matters. People’s expectations also change from year to year. These charts are from a study I did just a couple of months ago, where we looked at all the real user data, again billions of beacons’ worth, for the four-week back-to-school shopping period for 12 major retailers that do a lot of back-to-school business. Again, we anonymized the data and aggregated it all together, and you can see the top chart. We decided, rather than just look at 2015, to compare 2015 to 2014.
In 2014, we saw that peak conversions were at around 3-4 seconds, at about 1.5%. In 2015, peak conversions happened at two seconds and were about 1.8%. What you see in terms of the load time distribution, however, is that these are more or less the same. The users in both years are distributed along the same load time curve; they are getting roughly the same user experience. If you had just looked at the load time curve and taken out that conversion data, erased that blue line, you would think, “Our site is more or less the same and look, we increased conversions. This is great. We had more sessions in 2015 and our conversion rate was higher, so business is better.”
However, when you factor in the actual conversion rate and map it to load time, you see in 2015 that the gap between the blue line and all those yellow bars is the opportunity gap. User load time expectations changed significantly, from three to four seconds in 2014 to two seconds in 2015, but the user experience did not improve to match that increased expectation. The argument is that if you could move those yellow bars in the bottom graph to sit underneath that blue line, that’s basically money that was left on the table.
Another question we have in 2015 is: do you understand how performance affects different pages on your own site? This gets back to that idea of understanding your own users, your own pages, and how people interact with them. We are going to be talking about conversions again, and conversion rates for pages on a site.
At SOASTA, we wanted to understand how the performance of different pages affects conversions. We know, based on this much earlier graph, that browse pages, for example, are very influential: the speed of browse pages strongly determines people’s happiness and willingness to complete a transaction. How can we quantify this? Can we develop an algorithm that lets us look at real user data for a site and tell people which are the most important pages on their site? We developed something we call the “conversion impact score.” This is the long version; it’s a very long algorithm. And this is the short version: how much impact does the performance of this page have on conversions? Here is how we mapped it out.
This is a graph we generated using our data science workbench in combination with our real user monitoring. You can see, from left to right, these long blue bars. These represent the conversion impact score for each page grouping on a particular site. The little green bubbles here, with the lines connected to them, represent the average load times for the page groupings for this particular site. Here is what the conversion impact score lets us do. If you were to take out the blue bars and just look at the load times, if you were a developer or a front-end engineer, you might just focus on the highest green dots. You would say, “Order billing, that’s really slow. Shopping bag and homepage, pretty slow. Order and review, also pretty slow.” You would look at those green dots hovering high up on the graph and say, “Those are our really slow pages. Those are the ones we need to tackle and fix in terms of making them faster.”
But those might not even be the right pages in terms of moving the needle. When I talk to some people, I go to conferences and people say to me, “Yeah, we worked on our site. We made our homepage faster and we worked on some specific pages on our site and nothing changed. We didn’t see an improvement in any business metrics whatsoever, page views, bounce rate, anything.” It might be just because they focused on the wrong pages. They maybe focused on the slowest pages, thinking fixing those slowest pages would improve business, but those were not the right pages.
What the conversion impact score shows us for this particular site is the pages that have the highest correlation to performance, meaning that if these pages become faster or slower, you are going to see the greatest positive or negative impact on business metrics, and these are the pages to focus on. The second one is blurred out because it is unique to this particular site, but product pages and category pages are actually the most meaningful pages for this particular retail site, followed by shopping cart, homepage, and search results. Whenever I do this kind of conversion impact scoring for businesses, I generally find the same results: it’s these top five page groupings that need to be looked at in terms of performance impact. Hopefully that makes sense to everybody.
If you are interested, I have written a blog post about this. It spells out a little bit of the math behind the conversion impact score and the kinds of conclusions we can derive from it, so you can explore that further later if you like. The whole point of the conversion impact score is that we had a question, and we sought to develop a tool that gives us a data-driven answer: not a survey-driven answer, but a real data-driven answer using all the user data we have at hand.
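The actual formula behind the conversion impact score is not public, but as a rough sketch of the general idea, you could approximate a page group’s impact as the strength of the correlation between its load time in a session and whether that session converted. The data shape and function names here are made up for illustration:

```javascript
// Hypothetical sketch of a conversion impact score. The real SOASTA
// algorithm is not public; here a page group's "impact" is approximated
// as the absolute Pearson correlation between that page's load time in
// a session and whether the session converted.

// Pearson correlation between two equal-length numeric arrays.
function pearson(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let cov = 0, varX = 0, varY = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - meanX, dy = ys[i] - meanY;
    cov += dx * dy;
    varX += dx * dx;
    varY += dy * dy;
  }
  return cov / Math.sqrt(varX * varY);
}

// sessions: [{ pageLoadTimes: { pageGroup: milliseconds, ... }, converted: bool }]
function conversionImpactScores(sessions, pageGroups) {
  const scores = {};
  for (const page of pageGroups) {
    const withPage = sessions.filter(s => page in s.pageLoadTimes);
    const loadTimes = withPage.map(s => s.pageLoadTimes[page]);
    const outcomes = withPage.map(s => (s.converted ? 1 : 0));
    // A strongly negative correlation (slower page, fewer conversions)
    // means high impact, so the absolute value serves as the score.
    scores[page] = Math.abs(pearson(loadTimes, outcomes));
  }
  return scores;
}
```

Run against real user monitoring data, a sketch like this would surface the pages whose speed most closely tracks conversions, which may not be the slowest pages.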
We have more questions, and we don’t always have answers to them. I’m just going to put them on the table; these are things we are asking at SOASTA. I just came back from Velocity Conference in Amsterdam, and these are a lot of the questions we were asking there. Can we better measure how performance affects user satisfaction? This is huge. We talk about user satisfaction as a metric, but how do you actually measure that, and how can you correlate load times as tightly as possible to the happiness of the people who come to your site? We know that happiness matters, that satisfaction matters.
One of SOASTA’s customers in the UK did a great case study at Velocity Amsterdam, where they showed, side by side, their real user data about site speed and their customer satisfaction around site speed. They had to grab those from two different sources, and they showed us this mirroring of the graphs. What was great is you can see, across these two different graphs, that when pages got slower, people were less satisfied, and when pages got faster, people were happier.
This was really satisfying to me, as a user experience person and a performance person, to see, but again, it’s pulling data from two disparate sources. I would like to see those sources pulled closer together, so we can ensure they’re as meaningful and real as possible. What impact does performance have on customer lifetime value? If you are not familiar with the concept of customer lifetime value, often referred to as CLV, what it means is the total value your business will derive from a person over the time they are a user of your service, a customer of your business. Generally, in e-commerce, the average CLV is calculated over three years.
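To make that definition concrete, here is a minimal sketch of CLV under the simple framing above: sum each customer’s revenue over the three years following their first purchase. The order record shape is hypothetical:

```javascript
// Minimal sketch of customer lifetime value (CLV) as described in the
// talk: total revenue per customer within a fixed window (commonly
// three years in e-commerce). The data shape here is made up.

const THREE_YEARS_MS = 3 * 365 * 24 * 60 * 60 * 1000;

// orders: [{ customerId, timestamp (ms since epoch), revenue }]
// Returns a map of customerId -> revenue earned within three years of
// that customer's first order.
function customerLifetimeValues(orders) {
  // Find each customer's first-order timestamp.
  const firstOrder = {};
  for (const o of orders) {
    if (!(o.customerId in firstOrder) || o.timestamp < firstOrder[o.customerId]) {
      firstOrder[o.customerId] = o.timestamp;
    }
  }
  // Sum revenue that falls inside the three-year window.
  const clv = {};
  for (const o of orders) {
    if (o.timestamp - firstOrder[o.customerId] <= THREE_YEARS_MS) {
      clv[o.customerId] = (clv[o.customerId] || 0) + o.revenue;
    }
  }
  return clv;
}
```

The open question in the talk is then how to join a table like this with per-customer performance data, rather than stopping at per-session metrics.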
Traditionally, when we look at real user data or just performance metrics in general, we look at sessions. We look at user sessions. A person comes into the site. They complete a transaction. We measure the success of that transaction and we go away and we say, “This is how performance correlated to business success for that transaction.”
Right now, what we want to work on next is this: how does performance affect not just a single user session but an entire customer experience over a much longer period of time? We touched on that earlier with the 18-week Strange Loop case study, where we were able to track users for the six weeks following an experiment, but can we find ways to gather more robust data over longer periods of time? Then we can make a compelling story around the fact that making pages slower now might affect metrics in the short term in only minimal ways, but over the long term may be where the most damage is done to your business. Or, to spin it around and put a more positive light on it, making your site faster might only incrementally move the needle on business metrics in the short term, but if you look at your entire customer lifetime value, perhaps the impact is much more substantial, especially when you think about word of mouth and other effects.
To speak to that a little bit, this is based on research from Google and a consulting firm called CEY. We know that right now, 90% of people who shop or engage in transactions online move between multiple devices. The average shopper uses 2.6 devices and makes 6.2 visits to a website before they buy. Yes, some people go to your site and boom, boom, boom, complete a transaction, but a lot of people do not. They are storing stuff in their shopping bag.
They visit while they are at work, then go home later, access that shopping bag again, and email it to their partner to get input on whether or not they should buy something, especially if it’s a bigger-ticket item. People are moving from device to device, and can we better capture that movement and understand it? Maybe your desktop performance is great, but your tablet performance is really, really poor, which is actually a very common case amongst the people I talk to. If you improved tablet performance, you could improve overall metrics. Can we do a better job of gathering that data?
This is a big question: what impact does performance have on enterprise productivity? We talk a lot about e-commerce and site performance because those are easy metrics to capture, and a lot of people are creating those kinds of case studies. But how many people are measuring the impact of the performance of internal apps on the adoption rate of new applications, or on people’s ability to complete tasks? There’s a lot of academic research about the impact of waiting and delays on how people use applications, but are there any organizations studying this within their own organization?
Are we even measuring the right things? We talk a lot about load time or start render, but as pages become more and more complex, those are crude metrics for measuring the user experience. We know load time does not necessarily reflect what people see on their screen, because load time can include a lot of things that fall below the so-called fold, or third-party scripts that are loading in the background. Are we even capturing the right metrics?
If you’re interested in knowing more about this, the metric that’s being talked about a lot, and I was gratified to see it being spoken about at Velocity Amsterdam, is user timing: the ability to put marks in a page’s code that let you gather data about when a particular element on the page renders. It could be a hero image. It could be the primary product image if it’s a retail site. Perhaps it’s a specific script you want to capture, or ads. You can add timing marks specifically to the page, and user timing is being supported by more browsers now. mPulse supports it. If you are an mPulse user, you can get user timing data as part of all the other real user data you are gathering, and you can actually correlate the load times of specific pieces of content on your page to business metrics. It gives you a deeper, and in my opinion better, insight into how your pages are performing and the impact that has on your business.
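To illustrate, here is what the User Timing API looks like in practice. The mark names are made up; in a real page you would typically call `performance.mark()` from an event handler, such as the hero image’s `onload`:

```javascript
// Sketch of the W3C User Timing API, which RUM tools can collect.
// In a real page these calls would be spread across the page's
// lifecycle; the mark names here are invented for illustration.

// Mark the moments we care about.
performance.mark('hero-image-start');
// ... image bytes arrive and the image is decoded and rendered ...
performance.mark('hero-image-loaded');

// A measure is the duration between two named marks.
performance.measure('hero-image-render', 'hero-image-start', 'hero-image-loaded');

// Read the measure back; a RUM beacon would ship this to your
// analytics so it can be correlated with business metrics.
const [entry] = performance.getEntriesByName('hero-image-render');
console.log(entry.name, entry.duration); // duration is in milliseconds
```

Because the marks are yours, you decide what “rendered” means for your page, which is exactly what makes this a better user-experience signal than raw load time.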
Takeaways. I’m not going to read these to you. There are a lot of things I would like you to take away from this talk. The real one is just: measure your own business. The way to understand your business is to know your own users and have your own data, as much data as possible, so you can figure out what you need to optimize, what you don’t need to worry about optimizing, where to really focus your efforts, and how to create a common language for performance that everyone on your team can rally behind.
At the beginning of this talk, I mentioned the many metrics we talk about in performance, from start render to load time to DNS lookup, etc. What we are trying to create is a common language so the whole business can talk about performance: if our start render for certain assets is this, then it affects our bottom line over here. These are things everyone in the organization can agree on and get behind.
Again, if you are interested, this is the e-book that I’m working on. It’s basically a collection of little case studies, and not just around e-commerce; some are around media and other verticals. The final book will contain more of the back-end, enterprise case studies as well. I encourage you to read it if you’re interested in exploring these case studies further. I find this material endlessly interesting, and I hope you do as well. That’s it.
We have a few minutes for questions, if anyone would like to ask them here, or you can reach me online. I will go back to the very beginning … I’m sorry, I don’t know the shortcut to go back to the beginning. You can find me on Twitter @TamEverts and also on our blog.