Three Tips to Increase Your Mobile Test Coverage

With the proliferation of mobile devices, there is a renewed discussion on Test Coverage as it relates to mobile functional testing. Many of our customers have taken a fresh look at their mobile strategy with a renewed focus on their test coverage strategy. What do they discover? That test coverage is not just about devices. Join us for an hour and we will walk you through the key areas that you need to focus on to keep your mobile strategy covered.

In this webinar, we will cover:

  • How to determine the optimal test coverage model for your apps
  • Strategies for device & application coverage
  • Tips to increase mobile test coverage



Karl Stewart:   

Welcome to the SOASTA webinar series on Seven Steps to Pragmatic Mobile Testing. This webinar is about three tips that help you increase your mobile test coverage. My name is Karl Stewart, I’m the Director of Product Marketing here at SOASTA, and with me today is Dan Boutin, Senior Product Evangelist and Tom Chavez, Senior Product Evangelist here at SOASTA.

Thank you for taking the time to be with us today and we’ll try to keep ourselves to an hour here and get through the topics. If you have questions, you can use the chat. We won’t have any actual audio questions today so we’ll be taking questions via chat. With that, I’m going to go ahead and get us rolling.

From an agenda perspective, we’re going to start with the concept of what test coverage is. We’ll move into functional testing as an overview and then look at how the game and the focus have changed, especially for mobile testers. Then we’ll talk about how to determine optimal test coverage. We’ll step outside the box a little with regards to how you can do that, and then we’re going to show you some of the tools that you can use to more precisely determine your test coverage. Then a couple of trivia questions at the end for some swag for you to walk away with today.

One of the things I’m going to have is a poll here in just a second. Basically, I wanted to put up this picture of someone’s attempt to fertilize their lawn, to get fertilizer coverage on their lawn, and you can see the gaps that are there. What is test coverage, anyway? I’m going to ask you to answer so we get an idea of where you are in the process here. This is a multiple-choice poll; you can select as many as you wish. I’m going to let you go ahead and start answering those questions, and we’ll take a minute here to watch what comes in, then we’ll take the results and move forward.

I’ll just wait 10 more seconds here, and then we’ll go ahead and look at the results. Dan, I’ll turn this over to you and have you comment on the coverage here and roll from there.

Dan Boutin:                          

Fantastic. Thank you, Karl, and welcome everybody. Really appreciate your taking the time today to join us for our webinar series on functional testing. I’m looking at all the answers here and the correct answer is there’s really no wrong answer. Certainly, we’re going to talk today about all the different features for functionality and we’re going to talk about device and phone coverage, as well, and we’re going to touch on automation. Karl, with that, let’s go ahead and get rolling. Next one.

I always like to include a little bit of Dilbert because I came out of aerospace and that was where every one of these that’s ever been developed was developed, in theory, where I was. This is a funny one around functional testing. I’ve always chuckled at this one because Dilbert here has written a test script for the product and Wally wrote a test script to test the test script. I always say, “Well, who tested Wally’s script?”

In the past, functional testing, and even component testing, for that matter, has really been about writing code to test the code. It hasn’t really been about automation. It hasn’t been about test coverage. Those days are gone. Next slide.

As Karl mentioned earlier, let’s start off with a little definition so we’re all on the same page. Functional testing is really testing what it says: the functionality, or use case, of an application to meet a particular business requirement, and those business requirements are usually reflected in the software documentation or requirements documents. I’ve seen them defined many different ways. An example here is testing an application for Sears, where a user logs into an account, searches for products, selects the size and color they want, and then goes to check out and purchase. I’ve seen customers define that whole end-to-end process as one function, because that’s how it maps to the use case. I’ve also seen each one of those steps individually defined as a function. There’s no 100-percent right way to do it; it’s just a matter of making sure you’ve got it correctly modeled in your test plan and test coverage. That’s the definition of functional testing.

Next slide. What is test coverage? It’s really, and this is straight out of the definition, a measure of the proportion of an application exercised by a test suite, a series of tests, or a combination thereof. Usually you see that as a percentage: our test coverage is 80 percent of our functionality. There are different ways to measure that and reflect those percentages, as I mentioned a second ago in the Sears example. It can be by feature or function. It could be a scenario or entire use case … A scenario of someone coming in and actually making a purchase right through to the end, where they’ve paid and it’s shipped. It could also be the click-through path, in other words, the transition from one page to another with all the interfaces and components that go with it. That is test coverage, and this gentleman did a better job on his lawn.
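As a minimal sketch of that percentage measure (the feature counts here are hypothetical, purely for illustration):

```python
# Minimal sketch: test coverage expressed as a percentage.
# The counts are hypothetical, for illustration only.

def coverage_percent(items_tested, items_total):
    """Coverage = proportion of the application exercised by the test suite."""
    if items_total == 0:
        return 0.0
    return 100.0 * items_tested / items_total

# Example: the suite exercises 40 of 50 functions.
print(coverage_percent(40, 50))  # 80.0
```

The same formula applies whether the units are features, scenarios, or click-through paths; only what you count changes.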

How have the game and the focus changed? It’s really all about, obviously, smartphones and mobile devices. The first change was when the iPhone came out in 2007. That brought a whole plethora of different devices and screen sizes that were smartphones. That added a lot of burden to the functional test world. Then when the iPad was introduced, that complicated the equation even more, because now you’ve got different screen sizes. In addition to the smartphones, you’ve got a whole series of new platforms that had to be tested.

Right around this time, because of the different devices that had to be tested, the focus of functional testing really shifted towards just devices and off the application. We here at SOASTA believe that devices should be in focus, but really, the laser focus needs to be back on the application.

Next slide. Why do we feel that way? We feel that mobile is really a customer acquisition strategy: it’s about getting customers who come to you because of differentiators in your application. We used a retail example; let’s use a banking one here. Say I’ve got a bank and I want to do as much online or automatically as I can. My bank currently does not have, say, a Snap Deposit feature, where I can take a picture of a check and it just deposits it. Yet the credit union across the street has that feature in their app. That may just make me switch, so that any time I want to make a deposit, I can easily use the application to do it.

Early on, there were a lot of banks that didn’t have apps at all, and that was a differentiator. Now, it’s more about feature and function. It’s the same way in retail. I have probably about 90 apps on my phone: one for just about every retail store and, in my particular case, just about every travel app. I’m more prone to use applications that have features that differentiate them from the others, whether it’s easy check-in, room selection, pretty much any kind of feature. That’s what the mobile world is about: literally trying to one-up your competition almost on a daily basis, which adds to the stress of functional testing, because those development cycles are fairly rapid and the testing needs to be fairly rapid.

Next slide. Because of the fallout from new devices, what you’ve really got is a fragmentation of platforms and devices: all kinds of different operating systems, especially on the Android side, and screen footprints in a multitude of sizes. That has really put a burden on the testing community, because it’s not just about a single desktop and one version of an operating system anymore. You’ve got different capabilities to test as these devices and features are added, and that really does add to your testing matrix. Also, with the move towards agile development, the mobile life cycle is compressed, which makes regression testing in those sprints a real burden on today’s QA organizations.

Next one. Whether you’re doing agile or waterfall, cycle time is decreasing, certainly for the developers, and agile streams can be a very fast process. The testing involved increases as features get added in the agile stream and that functionality gets built over time. As the different features are built and released, that adds to the regression testing required for each build and each capability, as you see here. It definitely increases the testing burden over time and actually requires you to do it in a shorter period of time. Everyone’s looking at ways to automate.

Next one. Functional testing coverage … I always used to ask about this when a program manager in aerospace would come to me and say, “I’m done.” I’d say, “Okay, let’s dig into that a little. What does ‘done’ actually mean?”

Next one. Really, what “done” means is this: if someone comes to me and says, “I’ve tested it,” I dig under it and say, “Okay, what does that mean? How many paths did you actually test? How many functions did you actually cover? How many of the data sets did you actually exercise against those paths?” You have to account for different inputs, obviously. If someone says, “Well, I tested 60 percent, and really only four of the five data sets,” you can see that the coverage goes down quite a bit from the “I’m done” claim. Really, that’s what you have to look at from an application standpoint; otherwise your test coverage may look like this lawn. You have to dig in, look at the criticality of the functions and be able to test those, especially the ones that are tied to your business.
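To make that arithmetic concrete, here is a tiny sketch; the 60-percent and four-of-five figures are the hypothetical ones from the example above:

```python
# Sketch of how claimed coverage shrinks once data sets are factored in.
# The 60-percent and four-of-five figures are hypothetical, taken from
# the spoken example.

def effective_coverage(path_coverage, datasets_run, datasets_total):
    """Scale path coverage by the fraction of data sets actually exercised."""
    return path_coverage * (datasets_run / datasets_total)

# "I tested 60 percent of the paths, with four of the five data sets."
print(round(effective_coverage(0.60, 4, 5), 2))  # 0.48, i.e. really 48%
```

So a claim of 60 percent path coverage quietly becomes 48 percent once the missing data set is accounted for.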

Next one. What mobile has caused … A lot of you functional testers will be very familiar with test matrices. In the past, for a desktop app, you may have had a number of test cases to run through, but you really only had a couple of browsers and were traditionally on the same platform. A typical test with 300 test cases, in this case, would be about 10 days of testing.

What mobile has done is take those same test cases and essentially explode them because of the number of devices, which I’m going to get into a little later as well, including how you pick the devices and determine which ones you support. In this case, the matrix included 26 different devices that were critical to the business. As you can see, 300 test cases across those 26 device types is a lot of testing time, and if you’re in that agile world, where the regression testing is adding up, piling up, over time, you can’t take 26 weeks between releases. In today’s world, mobile releases literally happen almost weekly in some cases, mainly because of marketing promotions and the like on the retail side that happen weekly, if not daily.
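To see how the matrix multiplies effort, here is a back-of-the-envelope sketch; the per-device figure is an assumption chosen to match the example above (roughly 300 cases taking one working week per device):

```python
# Back-of-the-envelope sketch of how the device matrix multiplies
# manual test effort. The per-device figure is an assumption chosen
# to match the example: ~300 test cases taking a working week
# (5 days) to run through on each device.

test_cases = 300
devices = 26
days_per_device_pass = 5  # assumed: one full manual pass per device

total_days = devices * days_per_device_pass
print(total_days)       # 130 working days of serial manual testing
print(total_days // 5)  # 26 weeks between releases, as in the slide
```

The point of the sketch is that effort scales linearly with the device count, which is exactly why a two-browser matrix becoming a 26-device matrix breaks manual testing.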

Next one. I’ll give a little overview of the background of functional testing. I’ve been to a lot of customer sites where they’re all striving for automation. In reality, many of them are still doing almost 100 percent manual testing on the device side. I always see things like hundreds of devices sitting on a table or in a lab, where people grab a device and run a test suite against it using an old-fashioned Word document or spreadsheet. What they’re looking for now is to move towards automation, and many of them are looking at the devices only, because that matrix went from two to 26, if you look at the previous slide.

In reality, that needs to be a combination of the device types and number of devices you support, but also the functionality. The focus needs to be back on functionality as you make that turn from manual to automated testing. The other thing I want to do is show you one of our products, called TouchTest, that does automated functional testing, and I believe that’ll be coming up in a second here.

Next slide. I’m going to turn it over to Tom to go ahead and show you a little on touch testing. Once Tom’s done, we’re actually going to get in to a little bit of innovation and how do I know where that coverage line is drawn? In other words, where is it safe to say that I’m pretty confident that if I draw the line here, and test everything above it, that the things down below it aren’t going to come back and bite me? We’ll go ahead and go through the TouchTest demo now, and then when Tom’s completed that, we’ll go on to a little different way of doing things, as well.

Tom Chavez:     

Great. Thanks, Dan. I am Tom Chavez and I’m our evangelist here-another evangelist here-at SOASTA. I’ve got an iPad on the desktop next to me that hopefully you can see on your screens. It’s a sample banking app. In the days of doing manual testing, of course, we would just use our test script and log in to this application and run through this scenario. What you see on your screen is everything that I’m doing on my iPad next to me. I’m just reflecting it up on the screen. All of these tasks of running this application, I would go through it manually and that’s how I would build my test, but in the days of automation, we have a product here at SOASTA, called TouchTest.

If I just move this aside a little bit, I can show you TouchTest. This is the main welcome window here. From this screen, I can go into our Clip Creator tool and create a new test clip that will record everything that I’m doing manually on this iPad and turn it into a test clip. I’ll just choose to record a mobile app and I’m going to record it from my iPad. Just going to connect up my iPad by moving it into Safari and you see that now my iPad is here and connected and I’m going to choose to record my banking app, and I click record.

You’ll see on my iPad … it’s switching over to the banking application, and as I type things in, I’ll go ahead and clear that out and type in Kristin and then her password. You see over on the left side of the screen that everything I’m doing, all of these taps, is being recorded. I’ll just continue. I’ll log in, then we’ll go over and check the value of the checking account. I’ll scroll this a bit, and I might be looking for a specific value. Then I might go back and check the credit cards, etc. I then scroll through here and everything is being recorded: all the scrolls, all the keyboard entries, all the taps.

I’m not a programmer, I’m just somebody who’s testing an application the same way I would from a script, but now I’ve created something that is automated and in fact, I can go ahead and say, let’s stop the recording here, and now my iPad is available for TouchTest to run this test clip. As easily as I’ve created it, I can say, “let’s play this test clip,” inside of a composition. First, of course, I should save it. I’ll save it away and now TouchTest, which runs in the cloud, will connect up again to my iPad, as it’s still connected to my iPad and it will start to run this test clip.

Here it goes, and you see it’s launched the application, it’s going to the page, and it’s going to click in the username field if everything goes well. Let’s see … Oh, actually, I think I know what went wrong. The fact that there was data already there meant that when I started with the app in a dirty state, I made a test that was not actually re-playable. There’s a case where what I teach myself, and remind all of us on the line, is that you need to start your application in a clean state. I’ll just do a quick, shorter clip this time to show that one more time, because the great thing is you can edit your clips after you’ve created them, and I can take out the steps where I cleared out her data. That’s what made it dirty from the beginning.

I’ll just do a couple of steps to show the automation here. You can also modify tests so that they’re aware of running in different orientations or even against different devices, like an iPad versus an iPhone. Dan talked about all the devices that have been created and delivered by Apple; they just add to the proliferation that makes testing more difficult. That was a quick test, so let me go ahead and run it so we can see what a successful run looks like.

I want to show you a quick look at something else: in addition to running the test functionally, we also collect interesting data about what the device has been doing while the test has been running. Now we see this test is running successfully, just a short clip, but what we see down here at the bottom is that CPU information has been collected while the test has been running. Memory usage has been collected, and even the battery drain has been measured. This is vital information beyond just running the test: knowing how your application behaves on the device, across all of the devices on which you run your test, is available all within TouchTest. That was a quick look at going from manual to automated by recording a test in TouchTest and playing it back.

I just had a request to also show something else. We’re talking about how many different devices you might need to be testing on, and one of the features we have in our TouchTest product is a remote device cloud. I’ve logged in here to the remote device cloud, where you can see we have over 300 devices available across different releases of Android and iOS. Especially in the Android world, there are so many different versions of Android, and not every customer upgrades to the latest version, either because they like the version they’re running or because the device manufacturer hasn’t released the latest version for their device. You might need to be testing your application across many old versions of Android and even some older versions of iOS.

A device cloud with access to hundreds of devices can really make your life easier, and you won’t have to buy all of these devices, especially across all parts of the planet, where they might have differences. Some of these devices are in Japanese; some are in English. I may go ahead and pick a device that’s in English, and as easily as I say ‘Rent’, I can choose to use this device for 30 minutes. This device is actually sitting in a data center in Japan, and yet here on the screen it’s almost as though the device is right in front of me. If I click on the calendar, it’s very, very responsive. I’m tapping and changing the dates, and at greater than 15 frames per second, I am seeing this device just as though it were right in front of me.

Without having to buy this device, without having to go to Japan and get one, I’m able to run this device with my application on it. I can load my application onto it and test it manually, and we’ll soon be able to test it in an automated manner, too.

Karl Stewart:        

A couple questions here, Tom, that have come in. First question is I want to know if the iPad shown here is-this was during the TouchTest aspect-an emulator or an actual device connected to the recorder?

Tom Chavez:      

It is actually a device sitting on the desktop next to me and the only way it’s connected up to TouchTest, which does run in the cloud, is over the WiFi connection between the device and the internet. All I had to do was connect up to Safari, pointing it to TouchTest in the cloud and then TouchTest was able to take over control of the device for recording the application and then for playing it back.

Karl Stewart: 

The second question there is can I run my test scripts on multiple devices simultaneously?

Tom Chavez:   

Yes. That’s actually one of the great things about TouchTest: you can run it on one or five or 10 or 50 devices or even more, all simultaneously running the same test clip or different test clips, depending on how you’d like to lay out your test suite.

Karl Stewart:     

Last question, what application was used to reflect or stream from the iPad?

Tom Chavez:   

Ah, that’s just a third-party application called Reflector. It’s $30 from a good vendor, and it lets you AirPlay your iPad onto your Mac screen.

Karl Stewart:  

All right, very cool. Stop sharing there and turn this back over to Dan, and next slide. Dan, you’re away.

Dan Boutin:  

Alright, fantastic. Thank you, Tom. I appreciate you doing that for everyone, and thank you for the questions. Now that we’ve shown you record and playback, the traditional functional test execution effort, what we want you to see is that functional testing really needs some innovation. You’ve all seen the La Quinta commercial where they dare the guy to get out of the box and he almost gets out but then stays in the box. What we’re going to show you here is a different way: how we’re approaching functional testing, not just from a device perspective, but from an application perspective and really from an analytics perspective.

Next slide. In the past, the way functional tests have been built has really been guesswork. I’ve mentioned a couple of times: where do I draw the line? How do I know that I’m covering enough of the application? What if I draw the line too high and don’t test some paths that end up being business-critical? In the past, it’s really been done by guesswork, a coin flip, log files. What ends up happening is that an application seldom gets used exactly the way you think it’s going to get used. As you’ll see in the example here, the road less traveled: you get this great paved road, somebody finds a shortcut, and the shortcut ends up being used more than the road itself.

What we have is a way to take all this guesswork out of building functional tests. Next slide. The model really should be: let your customers be your guide. How they’re using your website or your mobile app should be how your test cases are built and how you determine criticality and coverage. I’m going to show you some of that in a second.

Next. The way we do it … Obviously, there are pieces of manual testing that you still will have to do as you travel down that path to automation, but the way we really look at it is we look at it from not only a device perspective, but also from a real user perspective and we collect data from real users. We have a product called mPulse. Really, those analytics that are collected and analyzed, they really do tell you a lot of data that you can use for functional testing as well as performance testing.

Next slide. I’m going to show a couple of screenshots and then do a demo; I’m hoping I’m still on a really good connection here. What you want to be able to do is analyze the most common devices used by your users, what operating systems they’re on, and which browsers are on those devices. With a product called mPulse, we’re able to collect those metrics and provide that kind of information: performance across different devices, functionality being utilized across devices, the top Android devices, the top iOS devices, the overall top devices, and the top operating systems in use. That’s where we start to draw the line as to what needs to be covered and what can be left uncovered.

Next slide. Once you’ve got the devices, obviously what you want to understand is how your users are actually clicking through. Are they taking that paved road, or have they found a shortcut that you need to cover in your test coverage? Our product mPulse has a feature called Data Science Workbench. We’re able to look at the click-through paths of applications as users actually use them, and what we found is pretty amazing. I’ll get into it in a second. I’m going to hope that my connection works here. Karl, can you grant me control and I’ll give it a whirl?


Dan Boutin:

Fantastic. This is our product called mPulse, and the blinking red-yellow-green dots you see are actually real users hitting one of our customers’ websites, essentially either buying or performing some kind of activity. This globe spins, and you’re able to see where people are coming from worldwide. This particular retailer is really just focused on the U.S., so that’s the view I have up here.

One of the things we’re able to look at is performance across what the customers are doing. We can see: here’s the home page, product page, browsing, searching. What are they doing? What’s the performance of each one across the spectrum of functions? Then, as on the slide I showed earlier, we’re able to dig in and say, “Okay, our top five iOS devices are here: iPhone, iPod, iPad,” and down below you’ve got Motorola, Blackberry and an “other” category here. I’m not sure what “other” is defined as, probably Windows.

Then you can look at top Android devices and obviously what you want to be able to do is to make sure your test coverage goes as far down as you feel comfortable with. On the other side, the top iOS devices and you can actually see the performance that’s being given for each one of those. Same thing again for Android. With this type of data, I’m actually able to define what the devices should be …

When Tom showed you the device cloud, the private device cloud, what you want to do is draw that line and test the devices that get you to your 90th percentile: which ones are your customers using? If you have one device that somebody is using very rarely, that’s probably more of an education issue and definitely below the line. You’re able to pick the devices based on real user data, so your test coverage model, and your confidence in that coverage from the device side, just became a lot firmer and more believable than the guessing I showed earlier.
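That 90th-percentile cut can be sketched in a few lines; the device names and traffic shares below are made up for illustration:

```python
# Sketch: drawing the device "line" from real-user analytics.
# Keep adding the most-used devices until ~90% of traffic is covered.
# Device names and traffic shares are invented for illustration.

def devices_to_cover(usage_share, target=0.90):
    """usage_share maps device -> fraction of real-user traffic."""
    covered, chosen = 0.0, []
    for device, share in sorted(usage_share.items(), key=lambda kv: -kv[1]):
        if covered >= target:
            break  # everything past this point is below the line
        chosen.append(device)
        covered += share
    return chosen

traffic = {"iPhone 5": 0.35, "iPad": 0.25, "Galaxy S4": 0.20,
           "Nexus 5": 0.12, "Rare device": 0.08}
print(devices_to_cover(traffic))  # the rare device falls below the line
```

With real mPulse-style traffic data in place of the invented shares, the same cumulative-share loop tells you exactly which devices sit above your chosen line.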

On the other side now, I want to show you the Data Science Workbench. Essentially, inside mPulse we’re capturing the click-throughs that customers are doing, and what we found was really interesting. In this particular case, this is called a Sunburst chart, and it’s all the conversion paths mapped out for customers. Like this particular one: if you look at the top, 5.13 percent of the people who converted with a purchase from the site started at the product page. You would think that’s the typical functional test, starting with the product page: somebody dumping a couple of things into the shopping bag, signing in, paying and leaving.

One of the things we found, and this is fascinating because as a tester you don’t think of these things, but for the retailer it was pretty clear: 37 percent of visits and conversions on this particular site started with order confirmation. What we found was that this particular retailer had a Facebook page where they were doing coupons. They’d essentially give them out or make them available on Sunday, and the coupons would be good Friday during a particular time slot, say from 8 a.m. to noon. At 8 a.m., the discount was 40 percent; at 9 a.m., it was 30, and it would descend all the way down to zero at noon.

What people were doing during the week … They essentially were putting things in their cart, and it was typically women doing it as well; you’re actually able to look at demographics. They’d pick a pair of pants, throw it in the cart, pick a sweater, throw it in the cart, pick some accessories, throw them in the cart, and then that cart would sit there until Friday morning. What they would do on Friday morning is come in, look at their order and then go ahead and pay. That click-through path isn’t typically something functional testers would put high on their criticality list, because that’s not the way you typically functional test. You build through and try to do the end-to-end, like this one, or even this one, where someone’s just signing in and paying.

What we found was we needed to look at demographics and how people were using the function and the application, and then our coverage model for the actual functionality tested would depend more accurately on the way people used the site. That’s a pretty fascinating way to get your data. Again, kind of takes the focus off the devices and puts it back on the application. Now you’re armed with the way people are using the application and the devices that they’re coming from. You can really draw that line in the sand with a lot more confidence knowing that you’ve now got user behavior helping define your test cases. Karl, I’ll go ahead and turn control back over to you.
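As a rough sketch of that idea, you can rank recorded click-through paths by frequency and let the counts drive test criticality; the session data below is invented, where in practice it would come from real-user monitoring:

```python
# Sketch: ranking recorded click-through paths by frequency so test
# criticality follows real user behavior. Session paths are invented;
# in practice they would come from real-user monitoring data.
from collections import Counter

sessions = [
    ("product", "cart", "sign-in", "pay"),
    ("order-confirmation", "cart", "pay"),  # the Friday coupon shoppers
    ("order-confirmation", "cart", "pay"),
    ("sign-in", "pay"),
]

path_counts = Counter(sessions)
for path, count in path_counts.most_common():
    share = 100.0 * count / len(sessions)
    print(f"{share:.0f}% -> {' > '.join(path)}")
```

In this toy data, the order-confirmation path comes out on top, just as it did for the real retailer, which is exactly the signal that should push it up the test-criticality list.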

Karl Stewart: 

One of the things I’d like to do is collect a little more information from the folks online … actually not for us, but in case you would like more information at this point. We’re not done with the presentation yet, but I wanted to make sure we get this taken care of. We’ve basically got three options. You can pick any that you’re interested in, and we will get back to you.

We have an offer with regards to the device cloud that you saw, that you can get 10 hours of free device access to try it out, try your app on it, see what the response is like in your environment, see if that’s a way that you can be able to manage your devices, take a look at that technology.

We also have the TouchTest Regression Suite Starter Kit. This is an automation starter kit where, over a 30-day period, you and our professional services folks go through an introduction to TouchTest and build a clip library, a utility library that you can use in multiple user stories. Then we’ll actually build out five of your user stories and automate them with Jenkins so that you can see the full path working. That’s a 30-day project that lets you start to see functionality right away.

Then the last thing that’s available today, and there’s lots of stuff available, is a mobile test automation assessment. It’s basically a two-day assessment, and there’s actually an online version of it. If you’re interested in taking a look at what you do compared to what other folks are doing, click that and I’ll send you the URL so that you can ask some questions and get some answers back along that line.

We’ll skip to the results here, but I’m going to move on fairly quickly to the next section. Dan?

Dan Boutin:  

Sure, thank you, Karl. Thank you for putting that offer up there. The part about the assessment I think is critical because it gives you an opportunity to look at not only the device side but also the functionality like I essentially showed in the quick demos I had on the mPulse and Workbench and what Tom did around TouchTest.

I’m going to close with some mobile trivia, and there are prizes awarded here, which I have in my hand; if you could see my hand, they’re chargers for your smartphones. The first ones who correctly send the answers, either to the moderator or to me at my Twitter address, win; I’m going to walk you through a couple of questions here. Every mobile phone has on it, essentially, an application or capability that has been around since literally 1978. That capability was put together and built by three different aerospace companies, one of which I worked for, and at any time during its run, when it’s fully functional, it has a certain number of satellites that constantly circle the globe. What I’m looking for is the number of satellites that make up this configuration, what the satellites are called, and where that snowstorm on the slide was.

Go ahead and either chat or tweet. My Twitter is already jumping … What I’ll do is tweet back whoever the winners are and ask for your address, and I can mail you the chargers. I’ll give everyone a second to copy down my Twitter address in case they don’t want to use the chat.

Karl Stewart:       

I’ve got a chat in here, Dan, guessing 36 satellites …

Dan Boutin: 

I’ve still got some tweets coming in, so I’ll tell you what, right as we close, I’ll go ahead and give you the answers.

Karl Stewart:  

All right, Dan.

Dan Boutin:      

That gives somebody a hint … Yes, I’m looking for the number of satellites circling the globe right now for this configuration. We can go ahead and go to the next one. It’s time for everyone on the phone to step out of the box. One of my favorite shows is Breaking Bad, and it’s time for “bad things should come to an end.” The bad thing here is taking guesses on your test coverage. With our technology, we can give you a lot more confidence that not only are you testing the right devices, but also the right portions of your application, the ones your users are exercising that impact revenue.

You can go ahead … Tom showed you TouchTest. There’s a link there; you can download it yourself, play around with it and try it out. Also on our blog, we have several posts around functional testing. Tom did a great one on the device cloud that he was showing you. If you have any questions, again, continue to chat, but also feel free to tweet me and I’d be more than happy to respond.