Four Best Practices for Testing Mobile App Performance

Mobile apps may appear no different from web apps; however, applying the same testing techniques will produce inaccurate results or fail to expose underlying performance issues.

This webinar will discuss how to avoid these pitfalls and address challenges unique to mobile apps such as:

  • Device/OS diversity
  • Network variability
  • Device performance
  • User location and volume

You will walk away with an understanding of best practices for ensuring the performance of your business-critical mobile applications.



Karl Stewart:    

Good morning and welcome to the 4 Best Practices for Mobile Performance Testing with Utopia Solutions and SOASTA. My name is Karl Stewart and with me today are Tom Chavez and Lee Barnes. Tom is our senior product evangelist with SOASTA and Lee is the founder and CTO of Utopia Solutions. In just a moment you’ll be hearing from Lee, but before we get there, let me find out a little bit about who we have on the line with us. If you can, respond to this little poll question: what business organization do you work in? That will give us a little bit of an idea of who we have out there. Lee, while we’re waiting for some of the answers, if you could give us a little bit of your background and what Utopia Solutions is all about, that would be great.

Lee Barnes:  

Sure. Thanks Karl. Lee Barnes, CTO and founder of Utopia Solutions. Utopia Solutions is a software quality testing firm. It’s focused primarily on mobile quality, performance testing and test automation. We’ve been doing this for about 20 years.

Karl Stewart:      

Thank you Lee. This is a…Okay, we’ve got fairly heavy testing and QA representation, that’s awesome. It’s also good to have folks from a performance perspective on the line, as you can see there. Everybody can see the results…actually, I need to skip to the results so you can see them. We have good coverage. Lee, and Tom, that will give you an idea of who we’re talking to from an audience perspective. Thank you very much for your feedback. This is a follow-on to a previous webinar called 7 Steps to Pragmatic Mobile Testing; this is step 6, which is know your mobile performance. If you’re interested in the white paper or the previous webinar, let us know at the end. There will be a poll question where you can ask for more information, and we’ll get you that information following this webinar.

Lee, I’m going to bring up your title slide here and turn the controls over to you. If you’ll take it away, thank you very much.

Lee Barnes:          

Sure. As you can see there, the title slide initially was Best Practices for Mobile Testing. I have a quick apology: despite the title, I don’t believe in best practices. That phrase says to me, this is the only way to do something, which is probably a ridiculous statement. The things we’ll be discussing are things I’ve found to be successful in the work I’ve done, and ideas and approaches I’ve learned in communication with others in the field. Please don’t apply these approaches blindly to your work. Think about them. Incorporate them into your existing body of knowledge. Understand how they do or don’t apply to your context. Certainly challenge them, and most importantly, contribute your thoughts and experiences to the community.

The three questions that I want to discuss and hopefully answer today are: first, why is mobile performance important? I think it’s becoming pretty clear that performance is everything when it comes to mobile applications, but just in case, I’ll share some statistics and hopefully convince you. Second, why is mobile different? Certainly we’ll see that there’s a lot of similarity in testing mobile systems, but what is different? Finally, how can I adapt to those differences?

Let me show you a few statistics. First of all, according to a mobile m-commerce insights poll, 66% of mobile shoppers abandon their transactions, and half of those abandonments are due to poor performance. You may get your customers and users engaged with your application and your business, but poor performance sends them away quickly. Another statistic: a 1-second delay equals a 7% drop in conversions. That’s huge if you’re doing any volume at all. Some other things to think about: mobile users typically expect the same or better response times on their mobile devices. To the technicians and testers on this call, that may seem counterintuitive. We know that mobile networks are typically slower than wired connections. We know that mobile devices are less powerful than desktop workstations. So why would users expect better response times?
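
To make the “1 second equals a 7% drop in conversions” figure concrete, here is a minimal back-of-the-envelope calculation in Python. The traffic, conversion rate, and order value are hypothetical assumptions, not numbers from the webinar:

    # Hypothetical example: quantify the revenue impact of a 1-second delay,
    # using the "1 second of delay ~= 7% drop in conversions" rule of thumb.
    monthly_visits = 500_000          # assumed traffic
    baseline_conversion = 0.03        # assumed 3% conversion rate
    average_order_value = 80.00       # assumed dollars per order

    baseline_orders = monthly_visits * baseline_conversion
    delayed_orders = baseline_orders * (1 - 0.07)   # 7% fewer conversions
    lost_revenue = (baseline_orders - delayed_orders) * average_order_value

    print(f"Estimated monthly revenue lost to a 1-second delay: ${lost_revenue:,.0f}")
    # -> roughly $84,000 per month under these assumptions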

I think it really has more to do with the context of where and how the app is being used. The word mobile is key here. Users are out and about, they’re running around, they need to get things done. They’re just not in the mindset of thinking, yes, this transaction should be slower. If you’re still not convinced of the importance of mobile performance, look at last year’s World Quality Report: performance is still at the top of the priority list in terms of mobile-related issues. Even if you don’t think it’s important, there’s probably someone in your organization who does, and the consumers of your applications certainly think it’s important.

Before we get into details, let’s take a quick look at the evolution of performance testing. I won’t spend a lot of time on this, but looking back over the 20 years I’ve been doing this: when I first got started, we had a lot of proprietary protocols. We didn’t have a lot of skilled resources, in the sense of people focused on performance testing or performance engineering. We were doing a lot of in-lab testing, and we didn’t really have a lot of mature tools back then. As we moved into the next decade, systems became larger and more complex. We started to see the emergence of open source tools and specialized resources. As the focus changed from in-house enterprise systems to large consumer-facing systems, we were faced with the specter of very large user loads.

As we moved into the end of that decade and into the next one, we started to see cloud-based testing, certainly near and dear to the SOASTA folks, to address that. Now we have mobile. The rest of this presentation is going to focus on what’s different and how we can adapt.

Let’s take a look at some of those challenges. First of all, your users have options. How they access your application will certainly affect the user experience. Users accessing your system over different channels may access different components in the system; they may see variances in the UI, and they will differ in network conditions and device performance. We’ll talk all about that in a few minutes. What that highlights is that it’s very important for you to understand your user load profile, and with as many performance testers on the call as we have, everyone’s acutely aware of that. For mobile, though, these factors are specific, or at least more important, to capture than they have been before.

Traditionally, performance testing evolved in a world that wasn’t mobile, so these factors haven’t bubbled to the top. The reasons were somewhat valid, I guess. The client-side portion of total response time was typically assumed to be negligible; we didn’t care about rendering time in the browser. All users were typically accessing the same code base independent of their client configuration. And while users certainly accessed the system over different network conditions, we typically said we weren’t going to test what we couldn’t control unless we had a very specific network condition we were trying to emulate.

In addition to including these important mobile factors, I also want to highlight that your performance metrics need to be in a continuous feedback loop. What are your users doing? What are your most common paths? What are your load conditions, your peak and average loads? You have that data. Certainly if you are an mPulse customer you have phenomenal data to feed back into your load test. These attributes are extremely important: what type of access your users have in terms of native app, mobile site or full site; what their preferred browser is, if it’s a mobile site; and what their network conditions are. It’s very important to understand that in addition to all the other things that we as performance testers assess.
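
As a rough illustration of the kind of load profile Lee describes, here is a minimal sketch in Python of how such a profile might be captured before it is fed into a load test. All of the percentages, names, and the user count are hypothetical placeholders, not data from the webinar:

    # Hypothetical user load profile derived from real-user monitoring data.
    # The mixes below are illustrative only; in practice they would come from
    # your RUM/analytics feed (most common paths, peak vs. average load, etc.).
    load_profile = {
        "peak_concurrent_users": 20_000,
        "access_mix": {             # how users reach the system
            "native_app": 0.45,
            "mobile_site": 0.35,
            "full_site": 0.20,
        },
        "mobile_browser_mix": {     # only meaningful for the mobile-site share
            "mobile_safari": 0.55,
            "chrome_mobile": 0.40,
            "other": 0.05,
        },
        "network_mix": {            # conditions each virtual user should emulate
            "wifi": 0.40,
            "4g_good": 0.35,
            "3g_average": 0.20,
            "3g_poor": 0.05,
        },
    }

    # Sanity-check that each mix sums to 100%.
    for name, mix in load_profile.items():
        if isinstance(mix, dict):
            assert abs(sum(mix.values()) - 1.0) < 1e-9, f"{name} does not sum to 1"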

The second challenge I wanted to talk about is the network, and this certainly warrants its own webinar, but we’ll highlight it here. So many factors add to the variability of mobile networks: Wi-Fi versus carrier network, the type of mobile network, and of course the quality of the connection. The best 4G network doesn’t mean anything if you’re in a tunnel. Why is this important? A small percentage of users can have a large impact on performance. Slower connection speeds and poor network quality result in longer connection times that consume more resources on the server side. I’ve seen extreme cases where an extra 10% of mobile users exhausted all the available connections, and if network conditions had been ignored we would have missed that bottleneck. Networks certainly need to be considered.

How do we assess the impact of network variability? It is probably the most ignored aspect of mobile performance testing; a surprisingly high percentage of the mobile test plans and strategies that I see don’t speak to network conditions at all. At a minimum, look at baseline performance in single-user mode. Typically we’ll do that with real devices using some type of network virtualization solution, which is just fancy language for simulating the characteristics of a particular network. We’re simulating things like bandwidth, latency, and sometimes jitter, depending on the tool. These tools often come with presets for popular mobile conditions, from good to poor to even complete network loss. We also need to assess the system under load, and as we’ll see in the next slide, there are some ways we can do that as well.

Again, for a single user, we might use one of the mobile platform tools: Apple’s Network Link Conditioner or the Android emulator’s network settings. We also use Charles Proxy, a proxy-based solution with some nice network virtualization options. For the system under load, we can put our load generators behind Charles Proxy as well and simulate various network conditions.
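
To make the idea of network virtualization presets concrete, the sketch below defines a few hypothetical presets (bandwidth, latency, jitter) and uses them to estimate a request time. This is not how Network Link Conditioner or Charles Proxy work internally, just an illustration of the conditions they simulate; all numbers are assumptions:

    import random

    # Hypothetical network presets: bandwidth in kilobits/s, latency/jitter in ms.
    # Real tools (Network Link Conditioner, the Android emulator, Charles Proxy)
    # ship with similar presets; these particular values are illustrative only.
    PRESETS = {
        "wifi":       {"bandwidth_kbps": 30_000, "latency_ms": 20,  "jitter_ms": 5},
        "4g_good":    {"bandwidth_kbps": 12_000, "latency_ms": 70,  "jitter_ms": 15},
        "3g_average": {"bandwidth_kbps": 1_500,  "latency_ms": 150, "jitter_ms": 40},
        "3g_poor":    {"bandwidth_kbps": 400,    "latency_ms": 400, "jitter_ms": 100},
    }

    def estimated_request_time_ms(payload_bytes: int, preset_name: str) -> float:
        """Rough single-request time under a given network preset."""
        p = PRESETS[preset_name]
        transfer_ms = (payload_bytes * 8) / p["bandwidth_kbps"]   # kbps == bits/ms
        latency_ms = p["latency_ms"] + random.uniform(-p["jitter_ms"], p["jitter_ms"])
        return transfer_ms + max(latency_ms, 0)

    # Compare the same 200 KB page under different conditions.
    for name in PRESETS:
        print(name, round(estimated_request_time_ms(200_000, name)), "ms")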

The next challenge I want to talk about is user location and volume. Anyone who has been doing performance testing for more than a few years has been in a lab, a very controlled setting. That works for small tests, maybe a few hundred to even a few thousand users, but reproducing large volumes of users coming from geographically diverse locations is practically impossible inside a lab, given the performance testing infrastructure most of us have available there. To address that, where we have large concurrency, from thousands to even hundreds of thousands of users, we typically use cloud-based solutions such as CloudTest. This allows us to very efficiently generate the required load as well as exercise the same infrastructure that our users do. It also allows us to put our load generation in the same geographic locations our users are coming from.
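
A minimal sketch of how such a geographically distributed load might be planned follows; the region names and weights are hypothetical, and a tool like CloudTest would configure this through its own UI rather than code:

    # Hypothetical plan for distributing virtual users across cloud regions so
    # that load generation roughly matches where real users come from.
    TOTAL_VIRTUAL_USERS = 100_000

    region_weights = {          # assumed share of real traffic per region
        "us-east": 0.35,
        "us-west": 0.15,
        "eu-west": 0.20,
        "ap-southeast": 0.15,
        "sa-east": 0.10,
        "ap-northeast": 0.05,
    }

    allocation = {
        region: round(TOTAL_VIRTUAL_USERS * weight)
        for region, weight in region_weights.items()
    }
    print(allocation)   # e.g. {'us-east': 35000, 'us-west': 15000, ...}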

In practice, we’ll generate a large percentage of the load using CloudTest and then observe the user experience on real devices while the system is under load. This can be done using in-lab devices that are driven by TouchTest, or even just held in your hand. We’ve also run tests in production. In that case, it’s essential to monitor the impact on production users using a monitoring solution like mPulse. Tom will cover that more during his part of the presentation.

The next challenge I want to talk about is the device itself, in terms of the application on the device. Again, if you’ve been doing web performance testing your entire career, you’ve probably, in most situations, ignored the performance of the client. Given that we have essentially unlimited, or at least an abundance of, resources on the workstation in terms of processing power, memory and electrical power, that’s not a terrible assumption, at least not for business systems. On mobile devices, we certainly have limits, and those limits will affect the performance of our apps. Of course, the apps themselves affect the performance of the device. If our app cuts battery life in half, it’s very possible that the usefulness of the device, and of course the app we’re testing and all the other apps, are now limited. We want to test the performance of the application on the device itself: what it uses in terms of processing power, memory and battery drain, especially for an application used out in the field in a business, maybe a data-collection type scenario.

We don’t have unlimited storage on the device, so if the application uses storage, that can be a big consideration as well. How do we go about that? Not all that differently than we would for a workstation. At a minimum, we’ll measure and track average use of CPU, memory, battery drain, and storage if that applies. We’d like to include a number of target devices and configurations based on the mobile test strategy we have in place. We’ll typically incorporate an automation tool for consistency. This isolates only the variables we’re interested in, versus trying to understand whether variation in the user actions we’re performing is affecting our results.
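
As one concrete, hypothetical way to collect those on-device metrics outside of a commercial tool, the sketch below polls an Android device over adb while an automated scenario runs. The package name is a placeholder, and the raw dumpsys text still needs parsing for your particular device and OS version:

    import subprocess
    import time

    PACKAGE = "com.example.myapp"   # hypothetical package under test

    def adb(*args: str) -> str:
        """Run an adb shell command and return its stdout."""
        return subprocess.run(
            ["adb", "shell", *args], capture_output=True, text=True, check=True
        ).stdout

    def sample_once() -> dict:
        # These dumpsys services exist on stock Android; their text output
        # varies by OS version, so real code would parse it more carefully.
        return {
            "timestamp": time.time(),
            "battery": adb("dumpsys", "battery"),
            "meminfo": adb("dumpsys", "meminfo", PACKAGE),
            "cpuinfo": adb("dumpsys", "cpuinfo"),
        }

    # Poll every 10 seconds while the (separately automated) scenario runs,
    # so the only variable left is the app's behavior, not the tester's hands.
    samples = []
    for _ in range(6):
        samples.append(sample_once())
        time.sleep(10)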

I’ve listed some solutions that we use to measure performance. You have the mobile development platforms themselves, which can get you this data. TouchTest can give you this data, and of course there are some mobile lab cloud solutions you can use as well. As we get near the end, I want to talk about some of the key takeaways. Probably the most important is that mobile users are not the same as connected users. That certainly affects the user experience, but the effect they have on the system itself is also very important to understand. And as we just discussed, performance is more than the back end and the network. We have limited resources on mobile devices, and we need to understand how that impacts our application, and again how our application affects the device and the rest of the applications on it.

If you can, push performance analysis to the left in the development cycle. A lot of the mobile development cycles we see are agile. A big-bang, system-level performance test at the end, just before you deploy to your users, doesn’t take advantage of a lot of the benefits that agile can offer in terms of early feedback, so look to push performance analysis to the left. With that, I’m going to turn it over to Karl to introduce Tom, and I’m looking forward to the demo.
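
One lightweight way to shift performance analysis left is to give every build an automated performance budget check in CI. The sketch below is a hypothetical example of that idea (the endpoint URL, sample count, and threshold are placeholders), not a prescription from the webinar:

    import time
    import urllib.request

    # Hypothetical performance budget enforced on every CI build: a key mobile
    # API endpoint must answer within 800 ms at the 95th percentile.
    ENDPOINT = "https://staging.example.com/api/catalog"   # placeholder URL
    BUDGET_P95_MS = 800
    SAMPLES = 20

    def timed_get(url: str) -> float:
        """Time a single GET request in milliseconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        return (time.perf_counter() - start) * 1000

    timings = sorted(timed_get(ENDPOINT) for _ in range(SAMPLES))
    p95 = timings[int(0.95 * (SAMPLES - 1))]

    assert p95 <= BUDGET_P95_MS, f"p95 {p95:.0f} ms exceeds budget {BUDGET_P95_MS} ms"
    print(f"p95 = {p95:.0f} ms, within budget")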

Karl Stewart:     

Thank you Lee. Tom Chavez is our senior product evangelist here at SOASTA. He’s going to take us through the concepts Lee was just talking about, but within the SOASTA products, to give you an idea of how you can start to track mobile performance using our technology, as well as the impact of load on your system and on the actual mobile device while you are load testing. Tom, I’m going to turn this over to you; you can take control of the screen and move into the demo.

Tom Chavez:  

Great. Thank you Karl. Let me see, I’ll go ahead and share my desktop here. Did that come up?

Karl Stewart:


Tom Chavez:

Wonderful. Thanks everyone. Happy to have you here. We’ll give you a quick demonstration of the products SOASTA has delivered to market that answer the questions and needs Lee mentioned in his slides. First, when it comes to knowing how your mobile performance is, whether it’s your mobile website or your mobile application, you need some sort of monitoring, and from SOASTA you have our mPulse product, which is a real user monitoring solution. On this spinning globe, every dot that flashes is a real user accessing one of our customer’s websites. If it’s a green dot, the response time for that user is within the parameters the customer has set; for this customer, that means the site responded within 3 seconds. If it’s yellow, the page load took up to 5 seconds, and if it’s red, it took more than 5 seconds.
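
The green/yellow/red classification described here comes down to bucketing each beacon’s page-load time against thresholds. Here is a minimal sketch of that idea; the thresholds mirror the demo, but the beacon field names and sample data are hypothetical:

    # Bucket real-user beacons by page-load time, mirroring the demo's
    # thresholds: green <= 3 s, yellow <= 5 s, red > 5 s.
    GREEN_MS, YELLOW_MS = 3_000, 5_000

    def classify(page_load_ms: int) -> str:
        if page_load_ms <= GREEN_MS:
            return "green"
        if page_load_ms <= YELLOW_MS:
            return "yellow"
        return "red"

    # Hypothetical beacons as they might arrive from a browser or app agent.
    beacons = [
        {"country": "US", "device": "mobile",  "page_load_ms": 2_400},
        {"country": "DE", "device": "desktop", "page_load_ms": 4_100},
        {"country": "ML", "device": "mobile",  "page_load_ms": 12_000},
    ]

    for b in beacons:
        print(b["country"], b["device"], classify(b["page_load_ms"]))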

This is monitoring every user accessing the site, whether they’re on a desktop or, even more important as mobile grows, on a mobile device. Lee mentioned the growth of mobile; I don’t know if people saw the IBM report from last Thanksgiving that noted that on Thanksgiving Day there was more mobile traffic to websites than desktop traffic. If you think about the day being Thanksgiving, that makes sense, people are away from their desktops, but mobile traffic just keeps growing and growing. In fact you see it in the statistics represented on this page too. We’re looking at this customer’s website over the last 60 minutes, and during that time we can see that, for example, Mobile Safari, here in the lower right corner, was the top browser used to access this website. The second was Chrome, which would be from the desktop, and third was Chrome Mobile.

At the moment, more people are actually viewing this website from a mobile device than from a desktop. Then, around the world, you want to know where your customers are coming from. You can see from the dots all over the map that this company has customers all around the world, but you also see it in the country monitor: about half of their traffic is coming from the United States, and Germany, the United Kingdom and others make up the other 50%. Beyond this live monitoring, mPulse records all of these beacons, all of these data packets about the customer experience, as far back as 18 months, so you can look not just at the last 60 minutes or the last hour but much further back. I’ll switch over to a summary dashboard where it’s much easier to do some slicing and dicing.

For example, let’s look back at the last day. Within the last hour we see there were over 617,000 page views on this customer’s site. If we go ahead and switch the window, what was a real-time view of just under a million page views is now being loaded up from the database, and looking back over the last 24 hours we see 11.9 million page views. What we’ve done here is move from a real-time monitoring view, where everything on the right is still happening in real time and advances minute by minute through the day, to a big-data view of a whole day. We can expand this view to a whole week or a month, and even compare month over month to see how the experience has been for users, especially mobile users.

Here’s an example we’ve seen across multiple customers. Looking at the whole-world view, if you notice the world map at the bottom, the experience for Africa is typically less green than the experience across the other countries. When you look at the devices across Africa, many of their customers have mobile devices that give a poorer experience if the application has not been modified and optimized for mobile. In fact, we can see that page response times are not as good in Africa, and it’s easy to drill down into any country. All you have to do in mPulse is click on it, so now I’m going to zoom in on the data for this country, Mali. They haven’t had many page views on the site in the last month, but the page load time there was 12 seconds. This gives them the opportunity to look at what’s causing that. We have additional dashboards that let you drill into the waterfall of the objects loaded by this customer in Mali that contributed to the response time we saw.

We could go back and visit another country, or even search the whole world by country, but the great thing about mPulse is that it gives you data about all of your customers accessing your site, across mobile website, desktop website, mobile browser and mobile application. It’s collecting all of the data about your customers. If you want to know what load your site can handle and how many customers it can serve as you grow across the planet, it’s good to first know your customer mix around the planet: which countries are dominant, and which countries have so few customers that you may not need to worry about load from them.

But to generate a load test, we have a product called CloudTest. CloudTest generates load against your site from different load providers in the cloud. I’ll bring up a grid I have built of load providers. This grid is built from servers in Amazon East, the east side of the United States, and Amazon West, the west side, plus Amazon Sydney, Amazon São Paulo in Brazil, Amazon Singapore, Japan and Ireland, and Frankfurt in Europe. Just as my customers may be distributed around the globe, I can generate load from around the globe to see how my website will respond to them. Now, to build a test of how my site responds, I can go into our TouchTest product and build a test.

I’m going to say, let’s build a clip, which is what we call our test script. They are built visually. I’ve got an iPad on the desk next to me and I’ll share its screen. Karl, did that show up?

Karl Stewart:       


Tom Chavez: 

Okay, great. This iPad is just running Safari, and in Safari I’ve logged into the server that’s running our TouchTest recording and playback. From this iPad I’m going to ask TouchTest to let me record a new mobile website test. I’m going to test our mobile website and record a clip against it. The website I’m testing is our sample site called SOASTAStore. You can visit it; this is the site we use when we’re doing customer demonstrations of our product, and everything I do in the browser on this screen comes up in TouchTest and is recorded live.

Our tests are generated by actually manipulating the application. We don’t have to write anything in Java or Ruby or another language; everything I do on the screen of this iPad is captured. I say, let’s go to our store. This is a sample purchasing site for a bookstore. I’m going to tap the store button and, let’s see, I’ve got a load test running against the store at the same time, so we’re actually seeing the effect. This site was not built to handle hundreds and hundreds of customers like your site is, and by running a load test while building a test script at the same time, it’s very easy to overwhelm this little WordPress server we’ve got running here.

I’m going to go ahead and buy this book by David Baldacci, The Sixth Man, then scroll the page and buy another book, this very pretty one, and add it to my cart. What you’re seeing is that as I move around on this iPad, a scroll action is recorded here in the TouchTest window, and as I press a button, say Add to Cart for the spy book, a click action is recorded over here. All of my actions on this website, down to the object level, are being recorded for playback later in load or functional testing. I’m going to stop this recording. I’ve got a longer recording and multiple clips already set up so I can make a better load test of my website, so I’ll leave this one.
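
Conceptually, the recorded clip is an ordered list of user actions captured at the object level. Here is a hypothetical, simplified sketch of what such a structure might look like; it is illustrative only and is not TouchTest’s actual clip format (the URL and selectors are placeholders):

    # Hypothetical, simplified representation of a recorded test clip: an
    # ordered list of object-level actions that can be replayed later under
    # functional or load testing.
    clip = [
        {"action": "navigate", "target": "https://soastastore.example/store"},
        {"action": "tap",      "target": "button#store"},
        {"action": "tap",      "target": "a[title='The Sixth Man']"},
        {"action": "tap",      "target": "button.add-to-cart"},
        {"action": "scroll",   "target": "page"},
        {"action": "tap",      "target": "button.add-to-cart"},
    ]

    def replay(clip):
        """Walk the recorded steps in order; a real driver would execute them."""
        for i, step in enumerate(clip, start=1):
            print(f"step {i}: {step['action']} -> {step['target']}")

    replay(clip)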

Actually, I’ll just show how easy it is to run this as a functional test by opening it as a test composition. I’ll save it, and you’ll see that what I recorded a minute ago is played back step by step on the iPad sitting next to me. The iPad, by the way, is not wired to my laptop. It connects to the server over the internet, over Wi-Fi, so this works across disconnected devices that can be a mile away from you, 10 miles, or even 1,000 or 3,000 miles away. One of the new features of TouchTest is a remote device cloud, with devices connected to server locations around the planet, in Japan, in America and elsewhere, so that you have access to remote devices you might not own or have access to because of where you are on the planet. That’s really important.

If a customer tells you that they’re having a problem in Europe, where you don’t have any employees, on a device on a carrier network that you don’t have any devices connected to, remote device clouds give you access to those devices, and we have that through our remote test kit. That quick little clip just played back, and just as Lee had shown in his slides, it’s very important to know an application’s CPU usage, how memory is being used, how the battery is being affected, and even what I/O the application is generating. This type of data lets you know if your application is, let’s say, a data hog that’s going to affect the customer’s experience by burning up their mobile data, or a battery burner that customers won’t like so much because it uses too much power, maybe because it’s also running at a high load.

We can see on the graph here that this application was only running at, I think it said, about 60% of the CPU; it’s not using much memory, and the battery wasn’t impacted at all while it was running. Very light load, a very easy little test we ran here. Let me now take this and expand it from just one test of one application running to something much bigger. For that, I’ll stay in CloudTest. I’ve got my grid up and running, and I’m going to come back to my favorite view of the globe. I told you about the 10 locations running the test, and I’ve got this test here on the globe. Now we can actually see the data paths from the load generators, those AWS instances, back to the server where the test is actually running.

I can spin the globe and go to certain locations. What this shows is that the application is running in America, in Virginia, and my load hitting the application is coming from Amazon East, Amazon West, Amazon Europe, and even from South America, here in Brazil. From Brazil we actually see multiple arcs, which tell me that the application is hosted in Virginia but part of it, the graphics, is hosted on a CDN. That’s something you won’t see if you’re just running a test yourself from a load server inside your firewall, or from a load server that’s monitoring traffic to your main site. Here we can see the performance path of the content being delivered to a sample user in Brazil from the CDN. If this pipe goes red, we’ll see that the CDN is not able to meet the load we’ve estimated and are testing our website for. So you’re testing not just your site, your application, your website and your mobile server, but also the third-party components that are part of your delivery solution.

Those might be graphics, third-party JavaScript, or plug-ins on your website that are vital to your customer’s experience; if you’re not monitoring and testing them, you’re missing an opportunity. One of the great things about our testing capabilities is that because we’re testing from the cloud, we can increase the burden, the load of the test, dynamically during the test. Here you can see I’m testing with not so many users, but let’s say we know we have a lot of customers in a certain area, or we want to see how the site will do if we increase the number of customers accessing it. I don’t want to go crazy, we have a very high amount of capacity, up to millions of virtual users, but I’m just going to raise it a little bit and see how that affects the site. Now we’ll see the pipes across the globe get a little bit thicker.

The response time for this application will probably start to degrade. Here it’s been pretty good, at less than a second, but if we look at some of the other monitors we’ve got going, we’ll see the number of virtual users going up. Down here we’ve been static at about 24 users, and now the sites are starting to ramp up the number of users. We can continue to monitor the site to see whether it can handle hundreds of users, or thousands. Can my application and its back end handle the load we want it to handle, maybe for Black Friday or Cyber Monday?
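
The ramp being applied by hand here can be thought of as a schedule of target virtual-user counts over time. Below is a minimal, hypothetical sketch of such a step-function ramp; the step times and user counts are assumptions, not values from the demo:

    # Hypothetical ramp schedule: start small, then step the virtual-user count
    # up during the test to watch how response times degrade under load.
    RAMP_STEPS = [
        (0,    25),      # (minutes into test, target virtual users)
        (5,   100),
        (10,  500),
        (20, 2000),
        (30, 5000),
    ]

    def target_users(minutes_elapsed: float) -> int:
        """Return the target concurrency for a point in time (step function)."""
        current = RAMP_STEPS[0][1]
        for start_min, users in RAMP_STEPS:
            if minutes_elapsed >= start_min:
                current = users
        return current

    for t in (0, 7, 12, 25, 45):
        print(f"t={t:>2} min -> {target_users(t)} virtual users")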

That’s why I ran the demonstration. I hope you see how vital the combination of our tools is: mPulse for live monitoring of your application and its back-end services across the planet, from all of the locations and all of the experiences your users are having, plus the ability to load test your application with a product like CloudTest to the thousands and millions of users you expect it to handle, to measure those results, and to see where your application can or cannot handle that load, what to do about it, and how to fix it.

Karl Stewart:

Very cool. Thank you Tom. I’m going to take control back here. What I want to do is open it up for questions for Lee and for Tom. The audio is locked, so ask your questions via chat and we will try to get them answered in the next little bit. As you’re typing your questions, I’ll talk a little bit about where we can go from here, steps we can take going forward. SOASTA and Utopia have what we’re calling a mobile readiness scorecard. It’s a simple two-day evaluation of your testing environment, your people, your processes and software. We can come alongside you and help you understand what the next steps should be based on your goals and where you want to go, whether you’re interested in getting into mobile automation, performance testing, or something else. It’s a simple evaluation and scorecard: we come in for a couple of days, evaluate where you’re at, give you a landmark on the roadmap, and help you move forward with a plan.

Lee’s company, Utopia Solutions, is also capable of much larger assessments, as is SOASTA, to get you building the product and moving things forward. We wanted you to realize that we have this simple first step from an evaluation perspective to help you get started without a huge cost. The same people who are implementing mobile automation solutions for our Fortune 500 customers and top 10 internet retailers will be doing this assessment for you, so they’ve been there and done that. They can help you move forward if you’re interested.

I’m going to put up this slide for requesting more information. It’s a click-all-that-apply poll. If you’re interested in any of this, we can get back to you on it, whether it’s the mobile readiness scorecard, CloudTest load testing, or TouchTest mobile test automation as a product. We are also running an mPulse mobile beta test program; if you’re interested in becoming involved in that, let us know. Like I said before, this is part of a series called 7 Steps to Pragmatic Mobile Testing and there’s a white paper associated with that; we can get that to you. If you’re interested in SOASTA meetup events in your location, let us know your location and we’ll give you an idea of where we’re coming and where you can meet. And if you’re interested in the March 18th webinar on how to find and build a web performance team, let us know that as well.

I’m going to switch to the results as people are making their requests for information known here. Let me just take a look… One of the earlier questions, Tom, was whether you can show a particular area, such as India, in the same way, and my answer was yes. Are there any limitations with regards to which areas we can show?

Tom Chavez:  

No, in fact we can drill down across many dimensions, whether it’s location, into a country, a specific zip code, or a city and state. We can draw regional lines across larger regions and even down into smaller ones; I believe we get down to city limits, so you can drill down pretty deep. You can also group across multiple regions: if you’re interested in, say, all of the countries of Africa, you can add them into one group and look at them together. Then, since the mobile experience is one companies need to focus on more in Africa and other places where handhelds, even lower-speed handhelds, are predominant, you can say, ignore the desktop traffic for Africa, let me just look at the mobile traffic.

Same thing for India, where everybody has a mobile phone, or multiple mobile phones: you might want to look at all the Android phones across India to see how the user experience is for your mobile website, or worse, you might be serving your desktop website to mobile phones in a certain area, and you’ll see that very quickly in the data you get back from mPulse. We collect all the data, every customer’s experience, and all of the timing features of that experience.

Karl Stewart:    

There’s a question here, Lee, with regards to Utopia Solutions and the types of services your company provides. Can you give us a little bit, Lee, on what kinds of services you offer? I do have your contact information on the last slide of the day, as a matter of fact. I’ll pop that up so folks can get an idea of how to contact Lee and Tom, but Lee, just give us a little background to answer that question. What kind of services does Utopia Solutions provide?

Lee Barnes:                          

Sure. In the context of what we’re talking about today, Utopia Solutions provides mobile testing strategy: coming in and helping you understand how to approach testing your mobile application or applications, and extending your existing performance testing practice to mobile or implementing a mobile performance testing function. Also test automation: if you need test automation, we come in and help you understand how best to approach it to get your ROI, understand which tools are appropriate, and of course help you with the implementation. If your organization is not ready to take on those functions, we as a company can manage that for you as well.

Karl Stewart:  

Thank you Lee. Tom, there’s a question here: the client information, battery drain, CPU, etc., is that for a particular iPhone or Android phone? What was it you were showing?

Tom Chavez: 

The data that came back during that test I built and ran was live data coming from the iPad sitting next to me. If we build a mobile test and run it against a grid of real mobile devices, that data comes back for each and every mobile device in the testing pool. That might be 10 devices, 50 devices, or even 100 or 300 or more devices running simultaneously. There was another question about the timing of those tests, since different devices run at different speeds, even when we have devices remotely located in other geographies. All of these tests run on independent tracks inside our TouchTest tool and can finish at their own pace, with all the results then being collected and aggregated. You can also drill down into the results of each one, so you’ll see the battery behavior on a specific Android device on 3G in India versus a 4G device here in America, for example. It’s coming from live devices, and that’s what TouchTest enables.

Karl Stewart:  

Another question here: will the slides be made available, either for download or on demand? You’ll get a follow-up email with a link to this recorded session, and you’ll be able to get your information there. The other question: can you use the same grids as CloudTest, or would you need different sets of grids or a mobile cloud to generate the load?

Tom Chavez:   

That’s a very good question. When you are doing load testing, all of the mobile traffic is simulated by the load generators. You create a load-test recording for your mobile website by actually using a mobile device to generate the test. Everything your mobile device requests, every specific graphic for the mobile version of your website, all of it is recorded into that load test. Then you can run that load test at scale to see whether your website can handle the burden of 1,000 of those users at the same time, or 10,000, or a million, and even, as I showed in the demonstration, distribute it across the globe.
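
To make “the load generators simulate the mobile traffic” concrete, here is a minimal sketch of replaying one recorded mobile request at modest concurrency with a mobile User-Agent. Real load tools do far more (full clips, think time, correlation, pacing, distributed generators), and the URL and header values here are hypothetical:

    import concurrent.futures
    import urllib.request

    # Hypothetical replay of one recorded mobile-site request at modest
    # concurrency. A real load test would replay the whole recorded clip,
    # add think time, and distribute generators geographically.
    URL = "https://m.example.com/store"            # placeholder mobile URL
    MOBILE_UA = "Mozilla/5.0 (iPad; CPU OS 7_0 like Mac OS X) Mobile Safari"
    CONCURRENT_USERS = 80

    def one_user(_):
        req = urllib.request.Request(URL, headers={"User-Agent": MOBILE_UA})
        with urllib.request.urlopen(req, timeout=30) as resp:
            return resp.status

    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        statuses = list(pool.map(one_user, range(CONCURRENT_USERS)))

    print(f"{statuses.count(200)}/{CONCURRENT_USERS} requests returned 200")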

Karl Stewart:

The next question here: I’m into manual testing. How can I get into mobile app testing?

Tom Chavez:  

I guess by starting. The best thing you can do is actually try TouchTest on our website. There’s a free offering, TouchTest Lite, to get access, run some of your own tests, and get an idea of what’s available out there. That can help you get started, and obviously send us an email and we’ll help you get rolling down this path of automation.

Lee Barnes:      

I’d say TouchTest is the perfect tool for someone who wants to move from manual to automated testing, because you just do your manual test on the screen and TouchTest turns it into an automated test. You don’t have to know any Java, Ruby or Python; you just run your manual test and TouchTest records it. Then, in the graphical view of the recorded clip’s fields, you can modify it and add special behaviors for a specific device, or loop 100 times if you want to do something again and again without repeating it on the device itself. You can even program the test and add more functionality: if you do want to write code, you can write JavaScript. TouchTest makes it really easy to make the transition from manual to automated testing.

Karl Stewart:

A quick question here, maybe with a longer answer: are the scripts auto-correlated? I’m not sure I understand the details of that; maybe I should have read it all through before I read the question out.

Lee Barnes: 

That’s a great question, obviously coming from someone who knows testing. Yes, we have tools that will pull out data related to, say, session IDs or similar values that you would want to track when running a test again and again at load. We have tools that automatically scan the test after you’ve generated it, pull out those variables, and help you use seed data or similar for turning it into an automated test you can run at load. They are automated to the extent that our intelligence can manage, and where it doesn’t, or where you want to optimize differently, you can set and modify those manually.
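
Auto-correlation in this sense means pulling dynamic values, like a session ID, out of one response and substituting them into later requests so each virtual user carries its own session. Here is a minimal, hypothetical sketch of the idea; the field name, token syntax, and response body are placeholders, not anything from a specific tool:

    import re

    # Hypothetical illustration of correlation: capture a dynamic session ID
    # from a login response and substitute it into a subsequently recorded URL,
    # so each virtual user carries its own session instead of the recorded one.
    login_response_body = '... <input name="JSESSIONID" value="A1B2C3D4" /> ...'
    recorded_next_request = "/store/cart?JSESSIONID={{session_id}}"

    match = re.search(r'name="JSESSIONID" value="([^"]+)"', login_response_body)
    session_id = match.group(1) if match else ""

    next_request = recorded_next_request.replace("{{session_id}}", session_id)
    print(next_request)   # -> /store/cart?JSESSIONID=A1B2C3D4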

Karl Stewart:  

What OS does your tool support?

Tom Chavez:   

If you noticed, all of these tools were running in the browser; all of our tools are SaaS-based. mPulse, TouchTest, and CloudTest run in the cloud, and they run inside any browser: Chrome, Safari, etc. As for the devices you’re testing against, we support Android and iOS for TouchTest recording and playback of scripts, and we’ll follow the market into other devices as they become prominent. Most of our customers are fine with iOS and Android devices.

Karl Stewart:  

Lee, a question for you. How do you simulate carrier speeds such as 3G, 4G, LTE, when performance testing?

Lee Barnes:   

We use a tool such as Charles Proxy, which allows us to set various network conditions to simulate 3G, 4G, etc. Because it operates as a proxy, we can run the virtual users from our load generators through that proxy, and it simulates the various network conditions between our load generators and the system under test.

Karl Stewart:    

Another question here: our organization is currently focused on implementing mobile test automation, which I think TouchTest offers. Yes, that’s correct. Could you please let us know if there’s a webinar on TouchTest? Yes, there is, a Getting Started with TouchTest webinar. If you click the link for more information on TouchTest automation, we’ll definitely get in touch with you on that. Otherwise, Tom’s email is up there; pop him an email and we’ll get you set up that way. I’m trying to flip through here without cutting these guys off, in case there are more questions.

There’s a question here, and it might be for both Lee and Tom: should I be concerned about privacy and data security when using cloud services for mobile application testing that requires using personally identifiable information or HIPAA-type clinical data? Lee, do you want to take a swat at that?

Lee Barnes:    

Sure. If you have that data in your test environment, you may have a problem independent of how you’re testing. Certainly, if that system is accessible by the cloud servers, you’ve allowed access through your firewall in some way. Typically, if we’re testing against a test environment, we have the data obfuscated in some way; we’re not using actual live HIPAA data or personally identifiable data. I can’t think of an instance where we executed a test from the cloud in a data-sensitive situation like that. We’ve certainly tested in production, but not against systems that would be protected under HIPAA or other similar regulations. Tom, do you have anything to add to that?

Tom Chavez: 

Yes, when you do load testing of your site, we definitely recommend using sample or seed data, and all of our tools support that: it’s Joe Sample 1, Joe Sample 2, Joe Sample 3, up to 10,000, with sample passwords and such, so that the tests are not accessing real customer data even while they’re running on the real production system. They shouldn’t be logging in as me or as Lee or Karl, real humans; they should be logging in as sample seed data.
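
Here is a minimal sketch of generating that kind of “Joe Sample 1 through 10,000” seed data; the field names, password scheme, and file name are hypothetical assumptions, not anything prescribed by the tools:

    import csv

    # Generate hypothetical seed users so load tests never touch real customer
    # accounts, even when run against the production system.
    with open("seed_users.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "password", "email"])
        for i in range(1, 10_001):
            writer.writerow([
                f"joe.sample{i}",
                f"SamplePass!{i}",                 # placeholder scheme
                f"joe.sample{i}@example.com",
            ])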

Karl Stewart: 

Another question: in order to test 80 concurrent mobile users, do you need to have 80 physical devices?

Tom Chavez: 

That’s a great question and it has a great answer. The only time you would need 80 devices is if you wanted to run 80 functional tests against 80 different devices, whether for different functionality or different geographic locations. If you just want to test whether your site can handle the load from 80 concurrent users, what you do is create a sample test from your mobile device, and it turns into a load test that is actually run from a cloud server. It’s disconnected from the mobile device when you load it into CloudTest and run it that way. The load that is sent is simulated from load servers, not real mobile devices. That doesn’t mean you can’t run tests against 80 devices at the same time, but those would more likely be functional tests than load testing of your back-end server.

Karl Stewart:  

Okay, we’re up against the time here. There are some other questions online and we may be able to get some answers to you in the chat responses, but I just want to thank you for your time today. As a reminder, please let us know if we can help you down this path of mobile automation and mobile performance testing. We’re here to help and we’d love to get involved in your project. Thank you so much for attending and have a great Wednesday.