The Mobile App Testing Checklist

Testing mobile apps is different. There are more form factors, more combinations, more complexity and more users. You need a checklist to be sure you don’t over-build or under-test. SOASTA and Utopia have the experience and technology you need to be successful.

Join this free webinar and learn:

  • The most common mobile app issues
  • Missed areas like app interrupts, poor connections and device settings
  • When to automate for functionality and performance
  • Technology for end-to-end mobile app testing
  • How to collect mobile user information for continuous improvement

Utopia Solutions founder and CTO Lee Barnes and the SOASTA team will share customer experiences and demonstrations that will help you cross off every critical element of your mobile testing checklist.



Brad Johnson:      

Welcome, everybody, to today’s webinar. I’m excited to have a ton of you on the line, so once again we’ve really hit an important topic, and I guess we don’t have to repeat it: mobile testing is really a popular topic right now, so we’re going to have a good time talking through some of the key points of a checklist and showing you some technology to help you succeed. I’m Brad Johnson. I’ll be moderating and looking at the questions, and I’ve got some other help here at SOASTA doing that. I run product marketing for SOASTA, and I’m joined by Lee Barnes, founder and CTO of Utopia Solutions, a SOASTA partner and really a specialty IT services company. Lee and his team are at the forefront of a lot of change in the marketplace. He and I have known each other for a long time. He’s been a long-time partner of SOASTA and other testing companies that I’ve been involved with. What’s really great is to see how his company has adapted to the needs of his customers, and they range from Fortune 1000 companies all across the US to other companies. What’s interesting is they’re all moving into mobile, and he’ll talk about what he’s learned and how he advises those companies as he continues to help everyone move along.

Also, sitting across the table from me is Mike Ostenberg, who runs solutions engineering here at SOASTA, and he will talk about and kind of underscore what Lee talks about with the technology that SOASTA offers. If you think about the agenda today, we’re going to have Lee talk about the landscape of mobile testing and how it’s different, why it’s different, and walk through this checklist, so have your notepads ready. And to answer questions in advance: yes, these slides and this recording will be available after the webinar, and we’ll send out a note to you that has all of those details. Then, Mike is going to walk through TouchTest, which is a mobile test automation capability from SOASTA; CloudTest, which is large-scale web and mobile testing that gives you the ability to test the backend; and mPulse, which is real user measurement, and he’s going to talk about how mobile monitoring is really changing how we build and test for mobile. We’ll spend a few minutes at the end to answer some questions, and throughout, we’ve got a couple of poll questions that we’re going to ask.

First, I’m going to … I think the best way to kind of come out of the gate is to take a quick look at the industry, and if anybody has been looking at mobile testing or mobile development, you’ll understand that if you do a search today for mobile and the enterprise or mobile testing, you’re going to get new results from what you got yesterday or last week. It’s always interesting to see what’s happening in the marketplace. The first study I pulled up was a Gigaom survey from, I think, back in April or May, and they asked Fortune 500 companies what they were doing around mobile, and I want to kind of emphasize that, of course, mobile is impacting everything we do, right? It’s changing how we shop, how we communicate, how we talk to our friends and our family, but there’s still a lot of speculation about what about mobile in the enterprise? Well, if you look at what’s happening, this is a great kind of data bite of what’s going on. 80% of Fortune 500 companies are building or planning to build and deploy for iPhone, right?

Whether it’s their own internal-facing applications that they’re modifying for internal use on a mobile device or it’s new ways of reaching customers, everybody in the enterprise is now working on mobile initiatives. I thought what might be fun also is a little bit of anecdotal evidence. I spend a lot of time kind of in the market, in the testing marketplace, talking to developers and testers, and I’ve been giving essentially the same presentation for about a year and a half around mobile testing. The way I open that session every single time is with the same question: do you currently have a mobile initiative, and I emphasize you, right? Are you involved with a mobile initiative? What I found over the last couple of years is people have gone from investigating … “I’ve heard about this, I know it’s coming” … so in spring of 2013, with a group of about 25 people, 14% … actually, my data is off here, but more than 50% of the people really were involved in an initiative. Really, a large percentage were still just kind of kicking tires and investigating.

As we moved into fall of 2013, six months later, 71% of the people in that room were directly involved with a mobile initiative, and the last time I asked the question of this type of group in a seminar, everybody except two people were personally involved in mobile initiatives, right? This is anecdotal evidence for me, for SOASTA, and for our partners that this is affecting everyone we talk to right now, more and more, month by month. The last thing I did was actually go out and see what the analysts from Gartner had to say about enterprise requirements for mobile, and they have a relatively new study called “Traditional Development Practices Will Fail for Mobile Apps,” which is obviously pretty telling. I kind of looked at that and took from the top of that report that these are the challenges that enterprises are saying they are facing. They are realizing that traditional practices do not work for mobile, and I think, whether directly or through being exposed to it, we all know this: traditional dev is changing because of mobile.

The other thing is that apps are so disposable, and everybody who has been around for a while hates to hear this, but we know it’s true, right? Because of the disposable nature of these apps and because they’re always changing, they require constant refinement, constant adjustments to make the user experience better, which obviously means constant testing. If you resisted agile or continuous development methods over the years, it’s time to stop resisting, right? Because this rapid change can only be addressed by rapid development practices and, moreover, as we move into continuous deployment, rapid deployment processes. The recommendations from these challenges are that … this one I always get a kick out of, but it’s never changed and it probably never will: better collaboration between all of the stakeholders, and why this is so much more important now is that the stakeholders change their requirements month to month. The development team really has to be plugged in. Obviously, as we get into the topics of this webinar, Gartner recommends … again, it’s kind of common sense.

If you’re doing mobile development, you still have to do all of the types of testing you’ve always done, which Lee is going to talk about, and more, right? Functional testing, performance validation, load testing and looking at user experience: all of this is critical, and you’ve got to do it faster. What’s really compelling, and Mike is going to really talk about this in the aspect of monitoring, is that you need to really pay attention to what real users are doing in production, because how better to understand how to modify your requirements and how to build better tests than to look at what real users are doing in production and then feed that as quickly as possible back into how you build your software. Then finally, underscoring everything: you need to adopt agile development and deployment practices. To do that, you’re going to have to get better at testing, you’re going to have to get faster at testing, and you’re going to have to get different people doing the testing at different times in the delivery cycle.

I thought what would be fun, since I’ve shared kind of my experience with small groups over the course of a year and a half, is why don’t we go for a large-group poll, because there is a large group of you. Just think carefully: are you currently involved in a mobile development or testing initiative at your company, i.e. are you being drawn into this mobile development world? Why don’t you go ahead and respond; I’ll just wait for about five seconds and I’m going to show you the results because I believe you can … I think you can see the results now. Keep on voting, I’m going to count down from three, two, one and we’re going to stop and check out the results. Obviously, a majority of you are currently involved with mobile testing initiatives or mobile development initiatives, so a little bit of a no-duh, you’re at this webinar, but I think it’s still interesting that this is affecting you today: 84% of you, and I’m going to close the poll, are being affected right now by what you need to do for mobile delivery. Lee, I can’t imagine a better segue over to you to kind of talk about, “Okay, so what kind of checklist do we need to build to make sure we’re covering everything we need to cover?”

Lee Barnes:   

Thanks, Brad. Appreciate it, and thank you everyone for attending. Yes, today we’ll talk about a mobile testing checklist and the things you need to think about just because you’re testing on a mobile device, and I think the first realization we need to make is that mobile testing is different. It’s not just your father’s software testing on a mobile device. There are many unique challenges, both process and technical, simply because your application is running on a mobile device. Really, the first difference that everyone runs into is the mobile platforms and, especially in the case of Android, the fragmentation across those platforms. We’re just looking at Apple and Android here; I’m sure quite a few of you are developing for platforms outside of iOS and Android. Just looking at these two, Apple does a decent job of moving their user base along as they update their OS, but as you can see, Android has a fairly long tail in terms of OS versions that are actively in use, and these are all things that you may have to consider depending on the business value, the risk, etc. associated with your app.

Let’s also look at device diversity, because devices themselves contribute to the problem. Device diversity on iOS is certainly tolerable, but the potential number of target Android devices and all those combinations can be overwhelming. I’m going to talk more about how to address this later in the slides, but certainly you can start to feel the burden building. In some instances, in addition to the mobile platform and device diversity, adaptations that carriers make to the Android OS can cause issues. It’s relatively rare but certainly difficult to pinpoint, so it’s worth mentioning. These are a handful of issues that were traced back to specific modifications made by carriers, and because of the cost of testing across multiple carriers for the same device/OS combination, these are issues that are often found in production, but it is something certainly to be considered. The moral of this story is that the application functionality that we’ve become very used to being the lion’s share of the testing burden is really only the tip of the mobile testing iceberg.

We have the mobile-specific test conditions, or the non-functional mobile test cases, that have nothing to do with your application functionality but that you have to think about just because the app is running on a mobile device, and that’s the focus really of today’s topic. When you combine those test conditions with the multiple mobile platforms and operating systems and, of course, the devices, you begin to see how that perceived test burden starts to grow exponentially. The process for whittling the potential number of test conditions down to something that’s not only appropriate for your app and for your business but also reasonable could be the topic of a whole additional webinar, and we’ll spend a few minutes on it later, but let’s dive right now into the mobile-specific test conditions. You can categorize test conditions really any way you want within your organization. We’ve chosen to group them into these five high-level categories: interrupt conditions, installation, network, performance (and that’s device performance, not system performance as a whole) and device integration, so how the application integrates with device features.

Let’s take a look at the first category. Interrupt conditions are exactly what you think they are: your normal interaction with the app is interrupted by some external event. Essentially, we’re testing to see if your application handles being backgrounded appropriately. More than likely, incoming calls are going to be your app’s most common interruption, and certainly you may have to interrupt your app to make or forward a call yourself, which, depending on the application, might be initiated from inside the app. The incoming/outgoing calls are very important conditions, especially when they’re part of the application functionality.

Unexpected power cycle is also a very interesting test, so not shutting down the phone but typically taking the battery out of the phone. You want to understand how gracefully, or in a lot of cases ungracefully, the app recovers from that sudden device shutdown. Also, observing the behavior of the app after it’s been backgrounded for a long period of time can be important. We’ve seen lots of apps that handle a very short interruption fine but fail to recover from longer hibernations. Depending on the OS and version, certain alerts (calendar, alarms, SMS, etc.) may background your app, or they may just be directed to the status bar or limited to an audio alert.
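To make the interrupt conditions concrete, here is a minimal sketch of how a few of them might be driven from a script against an Android emulator. This is not Utopia's process or SOASTA tooling, just an illustration using standard adb and emulator console commands; the package name is hypothetical.

```python
import subprocess
import time

PACKAGE = "com.example.myapp"  # hypothetical app package name

def adb(*args):
    """Run an adb command and return its output."""
    return subprocess.run(
        ["adb", *args], capture_output=True, text=True, check=True
    ).stdout

# Launch the app, then simulate an incoming call on the emulator.
# ("adb emu" forwards commands to the emulator console; it does not work on real devices.)
adb("shell", "monkey", "-p", PACKAGE, "-c", "android.intent.category.LAUNCHER", "1")
time.sleep(5)
adb("emu", "gsm", "call", "5551234")    # incoming call backgrounds the app
time.sleep(10)
adb("emu", "gsm", "cancel", "5551234")  # caller hangs up

# Background the app for a long period, then relaunch and check it is still alive.
adb("shell", "input", "keyevent", "KEYCODE_HOME")
time.sleep(60)  # in a real run this might be minutes or hours
adb("shell", "monkey", "-p", PACKAGE, "-c", "android.intent.category.LAUNCHER", "1")
alive = subprocess.run(["adb", "shell", "pidof", PACKAGE],
                       capture_output=True, text=True)
print(alive.stdout.strip() or "app process not running - did it recover?")
```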

All of this really highlights the importance of understanding how the different mobile platforms and OS versions handle these external events. In other words, your expected results for a specific condition may vary by device configuration. This leads to one of the main reasons we use a checklist for this type of testing versus step-by-step test case documentation. By including only the high-level test condition in the checklist, we certainly trade off detail in our test conditions, but in return we get agility and we get maintainability. As Brad mentioned, the dev cycles are much shorter, much quicker; we need to be much more agile, and we can’t spend our time maintaining test documentation when we should be testing. Of course, this requires that the testers using the checklist acquire the knowledge of how the various mobile platforms handle all those conditions. We’ve found, and I think you will too, that this tends to be a fairly quick learning curve, and it’s well worth the payoff we get in increased agility and decreased maintenance.
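As a rough illustration of that trade-off, a checklist entry can be kept as a single high-level condition whose expected result varies by device configuration, rather than as a scripted test case. The structure and values below are invented for illustration and are not Utopia's actual checklist.

```python
# Hypothetical checklist entry: one high-level condition, with expected behavior
# recorded per platform/OS version instead of as a step-by-step script.
checklist = [
    {
        "category": "interrupt",
        "condition": "Calendar alert fires while the app is in the foreground",
        "expected": {
            ("iOS", "7"): "banner notification; app stays in foreground",
            ("iOS", "6"): "modal alert; app backgrounded until dismissed",
            ("Android", "4"): "status-bar notification only",
        },
        "detail_link": None,  # optional hyperlink to extra instructions (e.g. battery drain steps)
    },
]

def expected_for(entry, platform, os_version):
    """Look up the expected result for a device configuration, if one is defined."""
    major = os_version.split(".")[0]
    return entry["expected"].get((platform, major), "undefined - tester judgment required")

print(expected_for(checklist[0], "iOS", "7.1"))
```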

Installation conditions, I think, are fairly self-explanatory, but we’ve found lots of issues around incomplete testing of installation conditions. Depending on your audience, whether they’re consumers or employees, corporate devices or BYOD, the way you install and update your app will vary. Of course, the end user experience will vary depending on whether the user is installing for the first time or updating. There aren’t a lot of test cases here, but you really have to think about those various conditions where someone might be putting your application on their device: install on a clean device, uninstall and re-install, updating from the N-1 version, or updating from an older version, which can cause some issues as well depending on the OS and what the application is doing.
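Before moving on to the other categories, here is a small sketch of what exercising those installation conditions can look like on Android with plain adb commands. The APK file names and package are hypothetical, and this is only one way to do it.

```python
import subprocess

PACKAGE = "com.example.myapp"  # hypothetical package name
OLD_APK = "myapp-1.0.apk"      # hypothetical N-1 build
NEW_APK = "myapp-1.1.apk"      # hypothetical current build

def adb(*args):
    subprocess.run(["adb", *args], check=True)

# Clean install on a device with no prior version.
adb("install", NEW_APK)

# Uninstall and re-install.
adb("uninstall", PACKAGE)
adb("install", NEW_APK)

# Update from the N-1 version: "-r" reinstalls while keeping the app's data,
# which is where upgrade and data-migration issues tend to show up.
adb("uninstall", PACKAGE)
adb("install", OLD_APK)
adb("install", "-r", NEW_APK)
```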

Probably one of the most interesting categories is network conditions. The very nature of mobile apps highlights the importance of understanding how various network conditions affect the app; we’re not connected to the system, at least not wired. Typically, we’ll test network loss and varying network quality conditions, primarily with some type of network virtualization solution. Typically, we’ll test over WiFi with that network virtualization solution; it’s less common for us to test over actual carrier networks. The WiFi approach fits into the project budget, quite frankly. We do have a small number of devices on carrier plans, but more often than not we’d use a mobile cloud solution if we need to target a specific carrier in a specific geography. That’s relatively rare. This is also a good time to talk about the way the checklist conditions are executed. Our checklist doesn’t relate the test condition to specific application functionality, so we essentially use the same checklist as a starting point for all the testing we do for all the apps. Again, we rely on high-level guidelines, but it’s mostly common sense. The network conditions affect application communication, but we won’t prove anything if we execute actions in the application that don’t invoke any communication. You need to think about what your application is doing relative to the test conditions and make sure that they make sense.
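The network virtualization tool isn't named here, but to give a feel for what this kind of condition involves, below is a sketch that degrades a Linux gateway's WiFi interface with tc/netem while the device under test connects through it: added latency, packet loss, and a bandwidth cap. The interface name and the numbers are illustrative assumptions, not a recommendation of a specific tool.

```python
import subprocess

IFACE = "wlan0"  # hypothetical interface the test devices connect through

def tc(*args):
    subprocess.run(["tc", *args], check=True)

# Emulate a poor, cellular-like connection on the gateway the devices use:
# ~300 ms latency with jitter, 5% packet loss, and roughly 1 Mbit/s of bandwidth.
tc("qdisc", "add", "dev", IFACE, "root", "handle", "1:", "netem",
   "delay", "300ms", "50ms", "loss", "5%")
tc("qdisc", "add", "dev", IFACE, "parent", "1:", "handle", "10:", "tbf",
   "rate", "1mbit", "burst", "32kbit", "latency", "400ms")

# ... exercise the checklist conditions that involve network traffic ...

# Restore the interface when done.
tc("qdisc", "del", "dev", IFACE, "root")
```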

Moving on to performance conditions, and again we’re talking about performance of the device itself versus performance of the system under load, which is certainly very important but, again, a topic for another day. In general, we want to understand how the application consumes device resources and, in turn, how the app behaves on a given device configuration. As testers, we rarely had to consider performance of the client for desktop apps. Developers, on the desktop at least, essentially haven’t had to develop for limited resources in terms of power, memory, storage, etc., and that’s likely the main reason we see issues here. Obviously, there are limitations on mobile devices, and we need to understand if and how those limitations affect our app. What we’ll typically do is run a consistent set of actions, taking the common user stories for the app, over our target devices and measure the important performance metrics such as resource usage, both memory and processor, battery drain and storage if applicable.
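As one rough sketch of what that measurement can look like on Android, the snippet below samples the app's memory footprint with adb while a scripted user flow runs; parsing is simplified and the package name is hypothetical.

```python
import re
import subprocess
import time

PACKAGE = "com.example.myapp"  # hypothetical package name

def sample_pss_kb():
    """Total PSS (proportional memory) for the app in kB, parsed from dumpsys meminfo."""
    out = subprocess.run(["adb", "shell", "dumpsys", "meminfo", PACKAGE],
                         capture_output=True, text=True).stdout
    match = re.search(r"TOTAL\s+(\d+)", out)
    return int(match.group(1)) if match else None

samples = []
for _ in range(30):        # roughly one minute of sampling while the scripted flow runs
    samples.append(sample_pss_kb())
    time.sleep(2)

valid = [s for s in samples if s is not None]
if valid:
    print(f"PSS kB: start={valid[0]}  peak={max(valid)}  end={valid[-1]}")
```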

Even around storage, for the applications that use device or SD card storage, we’ll look at low storage conditions, and for memory cards we’ll remove the memory card and see how gracefully, or again ungracefully, the app handles those conditions. Finally, we have the device integration conditions, and these are the conditions that look at how the app integrates with various device functions and features, including location-based testing. That’s very important if your application has functionality based on where you are or, even more interestingly, how you’re moving, and integration with video and still images. Of course, screen size, resolution and screen orientation are very important, especially with all the fragmentation on the Android side. We’ve seen a lot of issues with … I’ve got the accelerometer listed there, but also gestures: what gestures does your app support and does it react to them appropriately? We’ve seen a good handful of instances where applications react unexpectedly to gestures that they’re not supposed to support, or to input from the accelerometer that, again, they’re not supposed to support, but somehow the handling was left in there.

So we’ve got this universe of test conditions that we consider, very neatly categorized. The question is, do they all apply to every app every time, and the answer is no. You need to whittle that down to what is, again, appropriate for your application. The process for determining the test conditions isn’t all that complex. The first thing is you look at the type of app: is it native, is it a mobile website, is it hybrid? Native apps tend to have the highest degree of integration with the device; mobile websites the least. That’s important to know.

Next, you need to understand how the app is being used. If it’s in production, hopefully it’s got some monitoring data. If not, you need to think about that with the business and look at other similar apps, but if your app is going to be in constant use with heavy data interaction, it’s likely to require more processing power, likely to be impacted more by battery drain, or impact other apps due to its battery drain, and certainly more exposed to varying network conditions.

Finally, related to the first filter there, understand which features and functions of the device the application integrates with. Does it use location-based services? Does it integrate with the camera or the accelerometer or other phone features? This will result in your final set of mobile-specific conditions that are applicable to your app. It’s your mobile testing checklist.

That’s half the battle. The other half, as I alluded to, is device selection. Now that you have established the test conditions for your app, you need to see what you can do about the overwhelming test burden we discussed earlier. That means passing the superset of possible mobile devices and all their configurations through a set of filters to identify that final set of appropriate and reasonable test configurations.

The process for determining that set isn’t really all that different from what we just did for the test conditions. First we look at the type of app itself. Is it native? Is it hybrid? Is it a mobile site? What platforms is it distributed on? What versions of the OS are you going to support? Again, how does the app integrate with the device? This gives us our first cut at eliminating platforms and devices.

Next, you understand how the app is being used, this time looking at what devices might be prevalent in your user base, and realize that generally accepted sources of statistics around mobile market share don’t always apply. We tested an app for the entertainment industry, and it turns out that over 98% of the users of that app were on iOS devices. Certainly, if you took a cross-section of mobile devices in the US, that number would be much lower than 98%. That eliminated a large degree of the Android testing that we would otherwise have had to do. Had we not looked at our user community, we would have missed that and probably done a lot of unnecessary testing. Look at preferred browsers if it’s a mobile site, and obviously at concurrent usage and load profiles as input to the load and stress testing that you will be doing.

Finally, we consider the business objectives. What’s the value of the app to the business and, similarly, what’s the risk of app failure? This will help you make the more difficult decisions when determining the final list of configurations to keep, and we can’t kid ourselves: budget and timelines always factor into that equation. If you apply this process, at a minimum you’ll know that any configurations you cut will be lower priority than the ones you kept. Not only have we determined our target test conditions and test configurations; what we’ve really done is transform our checklist, with our conditions on the left side, into a matrix with our configurations across the top, and I’m always looking for any good reason to show a Matrix reference.
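Here is a toy sketch of that filtering idea: score candidate configurations by observed usage share and a business-risk weight, then cut to what the budget allows. The device names and numbers are made up for illustration.

```python
# Candidate configurations with their share of real-user traffic (e.g. from RUM data)
# and a business-risk weight; all values are invented for illustration.
candidates = [
    {"device": "iPhone 6",   "os": "iOS 8",       "usage": 0.34, "risk": 1.0},
    {"device": "iPhone 5s",  "os": "iOS 7",       "usage": 0.22, "risk": 1.0},
    {"device": "Galaxy S5",  "os": "Android 4.4", "usage": 0.18, "risk": 1.2},
    {"device": "Nexus 5",    "os": "Android 4.4", "usage": 0.07, "risk": 0.8},
    {"device": "Galaxy S3",  "os": "Android 4.1", "usage": 0.04, "risk": 0.8},
]

BUDGET = 3  # how many configurations we can afford to test each cycle

ranked = sorted(candidates, key=lambda c: c["usage"] * c["risk"], reverse=True)
keep, cut = ranked[:BUDGET], ranked[BUDGET:]

print("Test:", [f"{c['device']} / {c['os']}" for c in keep])
print("Cut (lower priority):", [f"{c['device']} / {c['os']}" for c in cut])
```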

Before I hand it off to Mike, I want to cover a couple of quick topics. Here’s a quick look at a small subset of our mobile testing checklist. I’m not showing it to you as an eye chart, but rather just so you understand that, again, we don’t document our mobile-specific conditions in traditional script form, because, again, things change quickly and they vary across devices and platforms; we need to be agile and maintainable. There are situations where certain test conditions need some additional instruction. In that case, we’ll include a hyperlink to some more specific instructions. Here’s an example of how to perform a battery drain test. Again, we try to keep even those at a high level and make them as insensitive to change as possible.

One last topic, which is relevant just because of the potential return on investment and also the large testing burden that we just talked about, is test automation. The ultimate goal for test automation would be to execute a single set of test cases across a diverse set of devices, but the diversity of those devices leads to some issues. How do I handle the fact that the UI might be different? Certainly, the way the UI is technically described is different across mobile platforms.

The best way we’ve found to handle that is to abstract the test cases, which are the common factor in the equation; we want to execute those same test cases across all devices, and abstract them away from the automation framework that interacts with the different devices, which is obviously the variable factor. This way, you maintain one set of test cases and construct a framework that handles the platform, OS and device differences for you behind the scenes.
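A minimal sketch of that abstraction: the test case talks to a small device-agnostic interface, and per-platform drivers (stubbed out here) hide how taps and lookups actually happen on iOS versus Android. The class and method names are illustrative, not any particular framework's API.

```python
from abc import ABC, abstractmethod

class DeviceDriver(ABC):
    """Device-agnostic actions; each platform supplies its own implementation."""
    @abstractmethod
    def tap(self, locator: str) -> None: ...
    @abstractmethod
    def text_of(self, locator: str) -> str: ...

class IOSDriver(DeviceDriver):
    def tap(self, locator):          # would call the iOS automation layer
        print(f"[iOS] tap {locator}")
    def text_of(self, locator):      # stubbed return value for the sketch
        return "5150"

class AndroidDriver(DeviceDriver):
    def tap(self, locator):          # would call the Android automation layer
        print(f"[Android] tap {locator}")
    def text_of(self, locator):
        return "5150"

def test_flight_lookup(driver: DeviceDriver):
    """One test case, written once, run against any driver."""
    driver.tap("continueAsGuest")
    driver.tap("checkFlightStatus")
    assert driver.text_of("flightNumber") == "5150"

for drv in (IOSDriver(), AndroidDriver()):
    test_flight_lookup(drv)
```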

With that, I think it’s a good topic to turn over to Mike, who’s going to talk to you a little bit about SOASTA Solutions for mobile test automation. Without further ado, Mike Ostenberg.

Mike Ostenberg:     

Thank you, Lee, and great summary there. What I’m going to be doing is taking us through a few SOASTA products that help deal with the complexities and all the variety that we’ve mentioned here. SOASTA, again, is a tool and a platform dedicated to testing of web and mobile applications, and we have three products that speak directly to a lot of the capabilities that we’ve mentioned here. First, I’m going to start by tackling the question of automation across multiple, different types of devices and different form factors. For that, I’m going to show you SOASTA’s TouchTest product.

TouchTest is our mobile application functional testing tool. It allows you to record applications that are either native or hybrid or mobile web in an object-based way, so that you can take a test case and then play it back across many different types of devices, on many different types of operating systems, across many different types of carriers. If you need to take a test case and confirm it works across all that variety of supported devices, whether Android or iOS, and play those back and confirm them, that’s where TouchTest comes in. We’ll show that one first.

Following that, I’ll show you very briefly how we can then take an application and performance test the backend infrastructure that’s supporting that mobile client, whether it’s a native application or a hybrid or mobile web, and for that I’m going to be showing you SOASTA’s CloudTest product. What CloudTest allows you to do is capture the traffic, the calls being made by your native mobile application or hybrid or mobile web application to the backend server, and then play those back in high volume, so that you can confirm that the backend infrastructure is going to be able to handle a large number of these devices and applications out in the field and also deliver acceptable performance. That will be the second one that we’ll be showing you.

Lastly, we’re going to be going through SOASTA’s mPulse product. What SOASTA’s mPulse product does is it helps you address that question of, “Well, where do I focus my efforts? I know that I have different customers out there that are on different versions of operating systems, on different types of devices, using the applications in different ways; where are my customers coming from, which devices are important to them and which operating systems are they on?” That’s what mPulse will help to address.

mPulse is a real user measurement tool. It allows you to capture real performance statistics across all of your customers as they use either your website or your mobile application and report that data back in real time. It’ll give you demographic data about which devices they’re using and in what volumes, which operating systems they’re on and which browsers they’re using, as well as performance data: how long does it take them to go through processes on either your website or native mobile application? Then, lastly, it ties all of that information back into business metrics, which might be of importance to you.

If you have an application that’s dedicated to helping people purchase something from your site, it might be important for you to understand how many of your customers are doing that, and also what the correlation is between the speed of your application and how likely they are to purchase something. I think everybody understands that if a website or an application is faster, people will stick around; they’ll actually do more stuff on there. We can actually quantify that for you. We can tell you that if you’re able to take one or two seconds off of the process to purchase an item, your conversion rate will go up by X or Y percent. This helps you understand the entirety of your customer base, the performance that you’re getting, and also the business metrics.
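To show what quantifying that relationship can look like, here is a small sketch that buckets beacon-style records by page load time and computes a conversion rate per bucket. The data is invented and this is not mPulse's actual output or calculation.

```python
# Each record is (page load time in seconds, did the session convert?).
# Values are made up purely to illustrate the calculation.
beacons = [
    (1.2, True), (1.8, True), (2.4, False), (2.9, True),
    (3.5, False), (4.1, False), (1.5, True), (5.2, False),
    (2.1, True), (3.8, False), (0.9, True), (4.6, False),
]

buckets = {"<2s": [], "2-4s": [], ">4s": []}
for load_time, converted in beacons:
    key = "<2s" if load_time < 2 else "2-4s" if load_time < 4 else ">4s"
    buckets[key].append(converted)

for key, sessions in buckets.items():
    rate = 100 * sum(sessions) / len(sessions)
    print(f"{key}: {len(sessions)} sessions, {rate:.0f}% converted")
```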

We’ll be covering all of these. These are all topics that we have dedicated webinars on that go into a full hour, so this is going to be a very high-level overview, but it should give you that visibility into it. Let me go ahead and share my desktop, and we’re going to go ahead and start off by showing you TouchTest for automating mobile functional testing here. Let me find again where the shared copy is. There we go.

Okay. Cool. Okay, you guys should be seeing on the screen right now a browser opened to our TouchTest cloud test server. This is a server that you can actually connect mobile devices to from any location and then start to automate the testing of those mobile devices. The way it works for TouchTest is we have a utility, if you’ll notice off on the right-hand side here in the download section, called Make App TouchTest-able. When we run that utility and give it the name of an IPA or an APK file, for either an iOS or an Android application, what it will do is take your existing Android or iOS application and make what we call a testable version of that application.

That testable version of the application is otherwise identical to the original application; it just now has instrumentation added that allows us to pick out user interactions with the application, as well as play back user interactions with the application. After you run this Make App TouchTest-able utility, you’ll have this testable version of the application, and you can install it on any device that the normal, non-testable application can be installed on.

We’re going to go ahead and start that process here. I have these already installed on my iPad, so let me share the display of my iPad. On the desk next to me, I have an iPad. This is a physical iPad, if you will; this is not a simulator. Hopefully that’s coming onto the screen now. This is a physical iPad that’s on the desk next to me. I’m using a software program called AirPlay, so that you can see the display of my iPad on the screen here, but this is actually a physical iPad.

What I need to do now, since I’ve already installed some testable applications on this iPad, is take the device and connect it to the TouchTest server, so no matter where the device is, it can participate in the testing, so long as you can establish an HTTP connection from your device to the TouchTest server. I’m going to do that by opening the browser here. I’m going to tap on Safari, and I have this bookmarked, so I’ll go ahead and go right into the bookmarks. Now I have a connection from my device to the TouchTest server. Again, it doesn’t matter where the device is; you can have testers that are on the east coast, the west coast, even working from home. If they can connect their device to the server, they can go ahead and create tests like I’m about to show you here.

Now, in this case, if we want to create a new test case, we just go into the left-hand side and select New Tester. In the recording options, we’re going to record a mobile app. This is for functional testing of an application. It’ll come up with a list of the different devices that are connected to this server. In the status column, you can see all the different devices that are connected to the server. In this case, I’m going to go ahead. This is Mike’s iPad 2. I’m going to go ahead and select that device that I’d like to record on. Down below are a series of testable applications that are installed on this device, and I just pick whichever application I’d like to use and hit the record button. When I do that, the application automatically launches on the device. From here, the user just has to go through and use the application as they normally would.

While we’re doing that, TouchTest is going to be recording our interactions, and you’ll see every one of the steps recorded in the left-hand side here. First, I’m going to decline nicely their offer to have me enrolled in their marketing program. What we’re going to do is we’re going to go ahead and walk through the process of looking up a flight, so I’m going to click continue as guest here, we’re going to go check flight status. I’m going to book a flight. Let’s see, where do we want to go? Let’s go from Atlanta; we’ll go to Austin.

What you see happening is, as I’m just using the application as I normally would, on the left-hand side, every time I interact with the application, TouchTest is storing the details about that step. You can see every time that I tapped on the close button or tapped on continue as guest, and when it plays back, you can play it back on this device or on other devices. You’ll notice that we don’t use coordinates and we don’t do screen scraping, so it’ll play back across different versions of operating systems, different form factors of devices, etc.

I’m going to go ahead and look up the flight now. Let me tap on look up. Now, when we do automated functional testing, of course, it’s more than just recording and playing back the steps. We’ll want to verify that, as you get to these different points in the application, the appropriate response is there. In this case, since we’re trying to book our flight from Atlanta to Austin, we might want to check to make sure the right flight numbers are listed here.

To do that, we can add a validation. I can go into any step here, and you can see, when I select a step, there are these five icons that appear. The first two are dynamic wait conditions; we can actually tell the automation, when we play back, that we should wait for certain conditions to occur before we attempt the next step. In this case, since we want to use a validation, we just click on this green check mark, and what this will do is add a validation. It’s verify element present.

If I click on the list of options here, you can see that we have about 80 available validations. These are all pre-coded; there’s nothing for you to do and no code for you to write. Just select the type of validation you’d like. It’s a long list but, in general, it kind of boils down to: you can check any property of any element in the display, whether it’s the width, the height, the text, the opacity, the x coordinate, y coordinate, etc. Any property of any element in the display. Or you can take screen captures. You can actually take a snapshot of the page as it appears now and, when I play the test back in the future, take a snapshot then too and compare those. If those are different, fail the test.
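As an aside for readers who want to see what property-based checks and a screenshot comparison look like in code, here is a hedged sketch using the open-source Appium Python client rather than TouchTest's own pre-coded validations; the server URL, app path, accessibility IDs, and baseline file are all assumptions.

```python
# Illustrative Appium-based validations (not TouchTest); locator values are hypothetical.
from appium import webdriver
from appium.options.ios import XCUITestOptions
from appium.webdriver.common.appiumby import AppiumBy

options = XCUITestOptions()
options.app = "/path/to/TestableApp.ipa"            # hypothetical instrumented build
driver = webdriver.Remote("http://localhost:4723", options=options)

# "Verify element present" with the expected flight number text.
flight = driver.find_element(AppiumBy.ACCESSIBILITY_ID, "flightNumberLabel")
assert flight.text == "5150", f"unexpected flight number: {flight.text}"

# Check other properties of the element, e.g. visibility and size.
assert flight.is_displayed()
print("element size:", flight.size)

# Screenshot-style check: capture now and compare against a stored baseline image.
current = driver.get_screenshot_as_png()
with open("baseline.png", "rb") as f:
    assert current == f.read(), "screen differs from baseline"

driver.quit()
```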

In this case, since we wanted to verify that it has this element present that has the flight, I’ll just use verify element present. Down here, where it has the locator, this is where we type in the properties of the item that we’re looking for. Now, if we knew them, we could just type them right in here, but we don’t expect people to know the properties of all the objects in their application, so whenever you see this locator field, there’s this button to the right called the touch locator. When I tap on that, the screen is grayed out. Now, I’m going to reach over with my finger and tap on the iPad, and, as I tap on different items, you’re going to notice that it highlights those items and puts up a list of different properties.

These are different ways that we can identify whatever item is being highlighted there. Now, in this case, we’ll want to check to make sure that it has the flight number, so I’m just going to tap on that flight number. When it has the item highlighted that you’re interested in capturing properties for, just hit this little blue button and it’ll fill in all these values back into the form for us. I’m going to go ahead and do that now. I’ll tap that, you can see that it’s filled in the locator here, and there’s a few ways that we could identify those items. In this case, simpler is better, so when we get to this step, we’re going to check to make sure that there’s an element present with that text, so the flight number 5150.

That’s a simple validation. I’m going to go ahead and hit the back button. Typically, again, we’d go through many types of validations, but, for the sake of brevity, we’re going to just do the one today, and I’m going to go ahead and stop the test. You can see, in just about a minute, I’ve been able to record a test case that will open up the application, book a flight, check the flight number, and play it back. I’m going to save this test. We’ll call this Southwest webinar. Hey, there we go. Called it Southwest webinar.

I’m going to play it back very quickly, and we’ll see that you can just play it back and it’ll just go through the series of steps that we went through as I play back this test. As it’s playing back here, I’ll kind of slide my iPad up to the right here, so that we can take a look at the browser window. What we’re capturing here is in the browser: you’re going to see the details of each step as we go through it, and you’re going to see down below here the CPU and memory utilization of the application on the device and details about exactly what’s happening at each step.

Here’s CPU, here’s the memory utilization, battery utilization, and you want to be able to track these to make sure that your application isn’t overtaxing the memory or using too much of the phone’s resources. It’s kind of gone through this here. You can also go into any one of the steps here. If we take a look at, say, steps six or seven … typically, you’re going to have your test run automatically, perhaps through a CI integration, which we include with TouchTest, and you might come in the next morning and want to review it. Well, you can go into any one of these steps and take a look at the details. You can see the screenshot at any particular step to see what was going on. You can take a look at all the properties of all the objects to see what was going on here.

We just created a test case. It’s played back on that same device that we recorded it on. Of course, now, we’ll want to test across different devices, different form factors, different versions of operating systems. How do we do that? Well, to do that, I’m going to go back and we’re going to edit this test case that we have here. On the edit window, you can see that I’ve got this Southwest webinar. This is the series of steps that we just played back. I played it back, by the way, on an iPad 2 on Wi-Fi.

Now, if we want to play it back on other devices, I can just right click here and say I’d like to duplicate this. I’m just going to duplicate three times here, and on the select device here, just as I connected my iPad to the server just by opening up the browser, I’ve actually got several devices here on the desk next to me. The iPhone 4S. I’ll go ahead and do this. iPhone 4S. I’ve got an iPhone … yay for me. Got the iPhone 6, so let me go ahead and open that up. Of course, as soon as I have the iPhone up, that means somebody is going to try and call me again. I have also here the iPad Mini.

Now we’ve got a series of different devices we might want to play back on, and I will select the iPad Mini here as well. I will share the display of these, so you can kind of see it playing back on each of them. The process is simple: just pick the different devices you’d like to play it back on, connect them to the server from any location, and we’ll be able to play back on those devices. I’m going to share the display of these. Hopefully, the screen doesn’t get too cluttered. I think this means I need a bigger monitor. We’ve got lots of devices here. Shrink them down … I wish Apple could resize their devices as easily.

Mike Ostenberg:     

Yeah. My pocket already feels full with these; six in it now. Let’s go ahead and hit the play button. What you’re going to see is we’re now going to play back across all these different devices and capture the same level of information: the screenshots, the CPU, the disk, the memory utilization across these different devices, without me going into each of them now. Same playback across multiple devices. As it’s going through here, again, kind of a point to note here is that off on the left here, you can see the steps of any one of these devices as it plays through, the same as we saw for the single device here.

We can also view this in a variety of different ways. It gets pretty … my devices really like to jump into the middle there. I think it’s when it resets. Anyway, sorry, it has a tendency to do that. Sometimes I fight it, sometimes I just let it go. In just a second here, I’ll kill Reflector and … okay, I think when it stops …

Lee Barnes:

You know, a lot of people have asked about how we’re actually sharing. Just to reiterate, it is Reflector. It is using AirPlay, but it’s an app that uses …

Mike Ostenberg:

Yeah. There’s this app here called Reflector. You can get it at It allows you to take any iOS device and select the AirPlay option to show the display on your Mac, so those four devices that are sitting on the table next to me, that’s the display you’re seeing here.

Lee Barnes:           

I’m fully blaming that app for the jumping in front of the screen.

Mike Ostenberg: 

Yeah. It’s not the egotistical iOS application. We’ve actually created a test case and played it back across multiple devices. We can capture and take a look at the details of any one of these particular steps in the application, and that’s great. I’ve got these four devices on the desk next to me, but I guess the next question is, “What about the folks that aren’t so lucky to have these four devices with them? How do you work when you have a team and you have maybe more than four devices you need to test things on, say both iOS and Android devices? How do you manage that situation? Do you buy multiple devices for everyone?”

Well, I’m going to kill Reflector here. Bye, Reflector. Okay, so to help with that, we have something else called the Private Device Cloud. What the Private Device Cloud is, it’s a bank of devices that you can actually take, and rather than having these devices on the desk next to me, you plug them into a tether in a cabinet that you can wheel into a room, typically on-site at the customer location. With this you have three options. First of all, you can upload all the different applications that you might like to do testing across; just upload the IPA or APK file to the Private Device Cloud and it’ll be available for you to install on devices. Secondly, you can add any devices just by plugging them into a USB cable that’s available here. You can see we’ve got a series of devices here.

It’s very easy for anyone to go ahead and take a look to see which devices are available, by the green dot here, and which devices are in use, by the red dot here. If you want to go ahead and see who’s using what device, you can click on that and see down below here that this one is being used by Ollie. I might ring him up or let him know, “Hey, I need that device.” You have visibility here and a way for a team to share access to a variety of different devices that you need to do testing on.

What you can also do is remotely access and interact with these devices. If I pick any device here and say I’d like to connect to it, it’ll go ahead and ask me if I want to launch an application on there. We’ll go ahead and pick an application. Then it’s going to come up and ask you how you’d like the display. Certainly some of these devices have very high resolution displays, so we give you a scaling factor that you can put in here, but you’re able to bring up that display of that device just as we saw with Reflector. The difference here is that the display is interactive. Just with your computer monitor and mouse, you can go ahead and click on and kind of interact with the device here. It’s taking a second to come up here. I’m going to pull up the display of the device. I’ll put it at 25%, just because it tends to be a rather large display; it’s got a high resolution display here.

When I hit the start button, we’re going to pull up an interactive display, so now to that question about “How do I manage all these devices that I need to test on?”, well, you can go ahead and kind of interact with the device. Anything that you can do with the device in hand, you can do through this interaction here. You can install applications, change the network settings, uninstall the application, reboot it.

You can fully interact with the application and the device directly in the interface here, so now you’ve got a capability to work with the devices remotely. You don’t have to have a cabinet where everyone’s wondering who’s got the iPhone 4S versus the iPhone 6, or the iOS 6 or 7 devices; they’re all cataloged for you right here. You can see the operating system, you can see who’s got the device, etc. A great way for you to interact with the devices.

That was kind of covering the questions about how do you remotely access, how do you work with this variety of devices. Let me kind of move on to the next topic here which is, “Okay, we’ve now shown that you can record and play back functional tests on these mobile applications and devices. What about performance tests? How do we test these applications and make sure that the backend server that supports these mobile applications will also be able to handle that load?” Well, again, I’m going to go back to the test server here.

We’re going to create another test case. This time, we’re going to create a test and just record the traffic from the mobile device. We’re not actually going to record the interaction, but instead record the HTTP requests. When I record HTTP, it captures the requests the app is making, and I’ve got to turn on Reflector again so that we can see the specific device. I’ll use the iPad here and share the display there, so you can see what I’m about to do. I’ve got to turn on Reflector again. Sorry for that; I killed it prematurely. There we go. Okay, so that’s the display of my iPad.

What I’m going to do now is take the same application that we just created the functional test for and run it, and this time I’m going to record all the HTTP requests being made by the application. On the left, in this case, it’s going to be CloudTest recording the POSTs and GETs of the application, and then playing them back in high volume with CloudTest, our load testing product.

Here I can select … I have installed locally on my laptop something called a Conductor, which is going to act as a proxy server and capture the requests being made by the mobile application. When I hit the record button here, those requests are captured directly from the application, and I just have to go into my mobile device and tell it to route all the traffic from the device through the Conductor on my local system. I do that in the Wi-Fi settings here and go into the list of private networks here.

Down at the bottom, you can see that there’s a manual setting, where you can set the proxy setting for the mobile device. I simply tell it to route its requests through my laptop, where I can pick them up and record them as they go, and I will apologize in advance for my poor thumb typing here. I’m putting in the IP address of my laptop and the port the Conductor listens on by default, which you can change if you like, since we have this Conductor installed on our local machine. Here I’ve changed the proxy settings of my iPad to route all of the requests, as I use the application, to our Conductor, so we’re going to pick up the traffic that way.
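SOASTA's Conductor handles this capture for you, but to illustrate the general proxy-recording technique, here is a small sketch of a mitmproxy addon that logs each request the device makes once the device's Wi-Fi proxy points at the laptop. It is only an illustration of the idea, not the Conductor itself.

```python
# Illustrative mitmproxy addon (not SOASTA's Conductor): with the device's Wi-Fi proxy
# set to this laptop's IP and mitmproxy's port (8080 by default), every HTTP request
# the app makes is logged so it could later be turned into a load script.
#
# Run with:  mitmdump -s capture_requests.py
from mitmproxy import http

class CaptureRequests:
    def __init__(self):
        self.recorded = []

    def request(self, flow: http.HTTPFlow) -> None:
        entry = {
            "method": flow.request.method,
            "url": flow.request.pretty_url,
            "body": flow.request.get_text(strict=False),
        }
        self.recorded.append(entry)
        print(entry["method"], entry["url"])

addons = [CaptureRequests()]
```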

Now I just go ahead and launch the application here. I will again decline their very kind offer to put me in their marketing program, and I will continue as guest. Now, we’re going to go through the same flow that we just did a couple of moments ago. I’m going to check flight status here and what you see happening in the background here is that the calls being made to the backend infrastructure are being recorded here, and we’re going to be able to play those back in high volume. From Atlanta to Austin, we’re going to look at my flight. Okay, good for me, and we’ll call that there.

We picked up a few steps here. These are the calls that are made to the backend infrastructure. I’m going to save that here. In about a minute and a half, two and a half minutes if you count my messing around with Reflector there, I created a recording of the particular traffic that was made to the site. I’m going to call this the Southwest performance test. I can now take this and run it in high volume. Rather than just simulating a single user here, I’m going to go ahead and open a test composition here. I’m going to identify that I’d like to run maybe 10. I’ll put 10 here because this isn’t our site; I don’t want to hit it with a thousand users, but this is where you can put whatever volume of users you’d like to use for this. We’re going to go ahead and run 10 of these at the same time, and I can ramp up to those 10 users over … let’s give it one minute to ramp up.

I’m going to save my Southwest load test, and we’re going to go ahead and start it. What we’re doing now is actually testing the backend server. We’re actually going to send the same requests that we went through as we looked up a flight there to the backend server, and as we’re going through, we’re going to capture performance information. Let’s go ahead and pull up a couple of dashboards to see what we can look at here. Here, you can see the average response time of the different requests that are being sent out to the server. You can see the send rate, how many requests per second are being sent. You can see the virtual users. We’re up to … we’re at only three so far, but you can go as high as you need to test your particular application, up to the number of users you want.
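CloudTest drives this from the recorded clip, but as a generic illustration of ramping virtual users against the same kind of backend calls, here is a hedged sketch using the open-source Locust tool; the host and endpoint paths are invented.

```python
# Illustrative load script using Locust (not CloudTest); endpoints are hypothetical.
# Run with:
#   locust -f flight_status_load.py --host https://example-airline.test \
#          --users 10 --spawn-rate 0.17 --headless
# (10 users spawned at ~0.17/s gives roughly a one-minute ramp.)
from locust import HttpUser, task, between

class FlightStatusUser(HttpUser):
    wait_time = between(1, 3)  # think time between iterations

    @task
    def check_flight_status(self):
        # Replay the same kind of backend calls the mobile app made during recording.
        self.client.get("/api/airports?from=ATL&to=AUS", name="airport lookup")
        self.client.post("/api/flightstatus",
                         json={"from": "ATL", "to": "AUS"},
                         name="flight status")
```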

Now, if you want to dig into more detail on any of these, you can pull up things like the waterfall dashboard, which is going to give you visibility into how long the connect time takes for these requests. You can also go into other dashboards where you can dynamically interact with the load during the test. In this dynamic ramp dashboard, we have a slider down below here where we can crank the load up to hundreds of users or thousands of users. Again, it’s not our site, so I’m not going to do it; otherwise, Southwest would be giving me a rude call, but this is how you can actually test that site and make sure that you’re able to test at high volume and that you get acceptable performance as you go through the load test here.

Again, I’ve got maybe five minutes left, but again, you can now quickly record the traffic and check to make sure that your backend server will be able to respond appropriately and quickly to the requests being made, as you go up to a higher volume during the load test. In the last 4.7 minutes here that we’ve got left, let me talk a little bit about the last aspect that we wanted to cover, which was, “How do I get that visibility now into which applications are most important to my users? What’s the performance that they’re getting? Where are my users coming from? Which versions of operating systems are they coming from?”

For that, we’re going to switch over to our mPulse product. What our mPulse product does is it gives you real-time data about the users of either your website or mobile application, so if you pull it up here, we’ll start off with the globe. This is where you’re going to see the traffic coming in in real time. For all of those users of your application or website, every time they view a screen or a webpage, it’ll fire off a beacon, and you’ll be able to see it showing up here in real time. What we then do with that data is we aggregate it so that you can review it and take a look at information about the users of your site.

If you wanted to see the median load time, how long it takes to load these particular pages or navigate through the pages, how many users you have, what the HTTP connect time is, as well as what browsers and page groups … you know, pages of your site or application … people are going to, you can pull those up. And let me make this a little bit bigger; I will kill Reflector again. Goodbye, Reflector. Thank you for your help. We’ll make this a little bigger, so you can see the data here. You can see how many users you have. Of course, you’re going to want to be able to understand how many people are using iOS versus Android and how many people are using different versions of the browser, so I’m going to pull up things like the operating system breakdown.

Here’s where you’re able to see which operating systems are most prevalent among your users. Let me give it a second to pull up here. Here, you’re able to see … this particular one, and mPulse works with both mobile and websites, this particular site that we’re looking at today is focused more on web, but you can see the operating system breakdown as to which versions of Windows, Mac, etc. You can see down below here, for each of those operating systems, what the performance characteristics of that operating system are. Are people on iOS or Android, and different versions of those, getting different performance?

If you want to take a look at demographic breakdown, which countries or states your users are coming from, that same information is available to you here. Now you have comprehensive visibility into what the performance is like for your users, where your users are coming from, which operating systems they’re using, which devices they’re using and which browsers. This is the guide that you use for figuring out what’s most important.

If most of your users are using iOS, as Lee mentioned, 98%, then you’re probably spending too much time testing the Android application. Or, if you’ve got 98% on the latest version of iOS and don’t need to go back to iOS 5 or 6, you can understand that by looking at your user base. Maybe I’ll kind of cut it here; we’ve got a couple of minutes left, so let’s throw it back to Brad for any questions, or maybe open up the lines and address questions in the last couple of minutes here.

Brad Johnson:    

Yeah, we’ll definitely take some good questions that have come in. You know, as we look at the data coming in live from the internet right now on mPulse, one of the most exciting aspects of real user measurement is understanding some questions that we’ve always had to guess about for performance testing, right? You know, there’s a myth out there that anything over two seconds for a website to load is too slow. It’s a myth because the real answer to “how fast should my website be?” is: it depends. How fast should my mobile app be? It depends. It depends on what your users are doing. It depends on where they’re coming from. It depends on all kinds of factors. What’s their expectation on a mobile device, as opposed to on the web? So as we as testers start to build our test scenarios, how do I build a test scenario for “do not exceed two seconds response time” when people don’t even begin to leave the site until five seconds?

This information about what users really need and what they really expect is going to change how we build our test cases. I just want to leave you with that thought, and then for some of the questions that have come in, let me go ahead and come back to the slides and we’ll take a few questions. You know, going back to some of the things that Lee was talking about: we talk about how important performance aspects on a device are, like memory conditions. Do we run at 25% while we run a test to measure the performance or the functionality on that device? I mean, how do we actually begin to bring together these device performance characteristics with functionality? Are you seeing a change there, and do we have to think about that differently?

Lee Barnes:    

Sorry, Brad, I was on mute. Yes, I think the impact and the importance of those performance test conditions really varies by the type of app, and it kind of goes back to the user profile again. Certainly the app has its own performance characteristics, but the importance of those, and whether or not they can impact the app, depends on how the app is going to be used. Typically it’s something that we measure, those high-level performance characteristics, and they might show up as trends in the data. Sometimes you might drop them, but if it’s part of an automated test, we usually leave them in and we look for continuous trends over time. Anything that might change, in terms of the way the resources are consumed, might be a flag for us to consider.

Brad Johnson:       

Thank you. There are a lot of questions about choosing devices. How do I begin to build a lab? Do I need to buy a bunch of carrier plans? Mike, do you want to kind of talk about maybe where you start? If you’re just beginning to build … obviously, we’ve got automation using real devices, so you need some real devices, so what are some suggestions …

Mike Ostenberg:    

I mean, if you already have a customer installed base, mPulse kind of gives you visibility into who’s using what devices and what operating systems you should be focusing on. Again, iOS tends to have a fast adoption rate, as we were talking about, and you can see that the older ones aren’t going to be as pertinent; you’re going to have a smaller subsection there, but it’s really going to depend upon your application and your customer base, so mPulse will give you that visibility. You’ll probably start with … pick some likely devices, iOS of course being easier, a smaller subset of devices that you can choose from, and the more recent ones, along those lines.

Then, I think, in terms of carrier visibility, that’s something that mPulse will help you with, because that’s tough to understand. It’s tough to really get that picture of where your users are coming from and what carriers they’re using without data like that from mPulse that’s specific to your customer base, and you might have different market demographics than the users of other applications, so getting that firsthand data is kind of key. Beyond that, with the Private Device Cloud, when I was showing you the remote access to devices, once you’ve come up with that list of devices you’d like to use, you can plug those into the Private Device Cloud and use those specific devices as the ones that are remotely accessible by the members of your team, so everybody has visibility into those devices to start testing on them.

Brad Johnson:    

Yeah. My favorite is, when you’re building a new lab, it’s always easy to figure out what you need to do for Apple, right? I mean, that’s not hard. Pick two operating systems and pick one of each device and you’ve got your Apple lab set up. In the big scheme of things, it’s not expensive. As long as you’re using a solution where you can take an off-the-shelf device and turn it into a test device, you’re in great shape. Android’s definitely a different animal, and there was one comment about, “Gee, a Samsung phone might have very different performance characteristics than an HTC phone.”

Again, pick the top-of-the-market devices, based on what you know, and then build as you need to. I think that’s probably a good place to stop there. I just want to comment on some questions about whether we’re a cloud-only company: all of the SOASTA products that Mike demonstrated are available both as on-premise software as well as hosted software. There were also a couple of questions about latency, and I would say that the only latency we experience when we’re creating a test with a mobile device or running a load test on the real internet is real internet latency. I would say that, if you are recording a test and your server, your TouchTest server, is a continent away, you’re going to have at least a hundred milliseconds of delay in that test creation. Likewise, the reason you’re using the cloud to do external testing anyway is because it has real internet latency, so you actually get that built in.

I think, with that, Lee, if you have anything final to say, I’ll give you the last word. I’ll have you sign off. In the meantime, folks, what I’d like to do is, while Lee is saying goodbye, which will probably take him two seconds, I’m going to open up a poll. We’re going to run another webinar on October 22nd, and that one is on accelerating web and mobile testing for continuous delivery. Kind of picking up on the same theme but really looking at kind of more of what you need to do. How do you actually need to focus on different test areas to get to market faster with the continuous delivery approach, so go ahead and vote and I will give the last word to Lee and thank you all for joining us.