Before joining SOASTA, I spent much of my career as a performance engineer testing large-scale web service applications. While each application was different, two things remained constant:
There was a very large and diverse set of data flows into and out of the application.
The vast majority of API calls into the system required data returned in responses from other service methods.
This can present a difficult challenge when planning and constructing accurate, repeatable load profiles and stress tests. For one thing, how do you construct tests to emulate all of those flows? And when a call that appears in multiple scenarios changes, how do you quickly and easily update your tests to include those changes?
Let’s imagine a small web application that serves as an authentication service and data repository for an online retail site. Our imaginary services will handle credential validation and session management, and will provide product information and shopping cart services for the items being sold.
Creating a suite of tests for an application like this is pretty straightforward: you create a set of scripts that perform each flow you would like to test. One script may log on, browse through some products or categories, and log out. Another will browse to a product and then follow steps to purchase the item. And a third might log on to the site and browse for a while before abandoning the site without making a purchase.
What happens when our imaginary company starts providing services for external partners, and the login operation now requires an additional parameter declaring the origin site of the user who’s logging in? In our imaginary web service, it’s not too difficult; we only have a few scenarios where the login operation must be altered to accommodate the change. What if, however, our scenario is designed to test a large-scale, multi-path application? With other tools, you may be required to manually edit dozens of scripts that reference the modified method. This maintenance task can quickly become cumbersome and time-consuming.
The CloudTest® platform’s drag and drop composition interface makes the maintenance process easy by allowing you to break your tests down into smaller components.
In the example shown above, we have a simple test scenario (in CloudTest this is called a Composition) for our imaginary company. As you can see, the complete flow has been broken into smaller components placed in order on a track. Our first track will emulate a user logging in to our site, browsing through items, and selecting an item to add to the user’s shopping cart. The second track opts for logging out of the site after browsing. Data is passed from clip to clip with the use of track properties, which function identically to clip-level properties but are scoped to an entire track. (For more information on properties, visit Cloudlink.) Clips can read, write, and modify property values created by the clips before them in the timeline. This makes data exchange between the sub-components of the test very easy.
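CloudTest manages this wiring through its GUI, but the track-property pattern itself can be sketched in plain Python: clips are functions that share a mutable property dictionary scoped to the track. All names here (`create_new_session`, `browse_product_categories`) are illustrative, not part of any real CloudTest API.

```python
# Sketch of track-scoped properties shared between test "clips".
# Hypothetical names; CloudTest does not expose this Python API.

def create_new_session(track_props):
    # A real clip would call the login service; here we fake a session ID.
    track_props["sessionId"] = "sess-12345"

def browse_product_categories(track_props):
    # Later clips read values written by earlier clips on the same track.
    session = track_props["sessionId"]
    # ...and can publish new values for clips further down the timeline.
    track_props["productList"] = ["product-%d" % i for i in range(3)]

def run_track(clips):
    # The track owns the property scope; clips run in timeline order.
    track_props = {}
    for clip in clips:
        clip(track_props)
    return track_props

props = run_track([create_new_session, browse_product_categories])
print(props["productList"])  # ['product-0', 'product-1', 'product-2']
```

Because the property dictionary belongs to the track, any clip placed later in the sequence can consume values produced earlier without the clips knowing about each other directly.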
Let’s take a look at our BrowseProductCategories test clip, specifically the browseProducts request:
As you can see, we have a simple web service call that we manipulate through an intuitive form interface, with no need to manage complex XML data or name/value pairs. This service call uses a sessionId variable stored in a track property. The value of sessionId comes from the response of the createNewSession call, which executed earlier in the timeline. We’re also referencing a clip property, which contains a random category ID. This call returns a list of all available products in the selected category, which we can then use to create a property that stores the product list for later use by any clips that need it (like the AddItemToCart clip).
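The data flow around browseProducts can be summarized in a short sketch. The response shapes, field names, and helper functions below are assumptions for illustration; the real services would define their own formats.

```python
import json
import random

# Hedged sketch of the property flow around the browseProducts request.
# All names and response shapes are hypothetical.

def handle_create_new_session_response(body, track_props):
    # Store the session ID from the login response in a track property.
    track_props["sessionId"] = json.loads(body)["sessionId"]

def build_browse_products_request(track_props, clip_props):
    # The request combines the track-level sessionId with a
    # clip-level random category ID.
    return {
        "sessionId": track_props["sessionId"],
        "categoryId": clip_props["categoryId"],
    }

def handle_browse_products_response(body, track_props):
    # Save the returned product list for later clips (e.g. AddItemToCart).
    track_props["productList"] = json.loads(body)["products"]

track_props = {}
clip_props = {"categoryId": random.choice([10, 20, 30])}
handle_create_new_session_response('{"sessionId": "abc"}', track_props)
request = build_browse_products_request(track_props, clip_props)
handle_browse_products_response('{"products": ["shirt", "hat"]}', track_props)
```

The key point is the scoping: sessionId lives at the track level because every clip needs it, while the random category ID stays at the clip level because only this request uses it.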
This modular approach to test design provides some terrific benefits.
First and foremost is maintenance. As mentioned earlier, if there is a fundamental change to the way a user authenticates, we only need to change a single clip (in this case the createNewSession clip). The change immediately takes effect in all of our scenarios, saving significant time updating our test cases. Another great benefit of this type of construction is the ability to quickly modify the flow a user takes through the application, or to add an entirely new flow to the composition: simply drag and drop your clips into the right places.
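In generic scripting terms, the maintenance win comes from every scenario calling one shared login routine, so a newly required parameter (like the origin site from the earlier example) is changed in exactly one place. The function and field names below are hypothetical.

```python
# Hypothetical shared login "clip": every scenario calls this one
# function, so adding originSite touches only this definition.

def create_new_session(username, password, origin_site="retail-site"):
    # Build the login payload once, in one place.
    return {
        "username": username,
        "password": password,
        "originSite": origin_site,  # the newly required parameter
    }

# Two different scenarios reuse the same clip unchanged:
buyer_flow = create_new_session("alice", "secret1")
partner_flow = create_new_session("bob", "secret2", origin_site="partner-a")
```

Contrast this with copy-pasted login steps in dozens of scripts, where the same one-field change would have to be repeated (and verified) dozens of times.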
Additionally, since each operation is isolated in an individual clip, we can use the clip repeat functionality to quickly change the behavior of the track without making any permanent or significant changes to our components. See the example below.
In this case, we’ve added five clip repeats to the second clip in the track, BrowseProductCategories. Now when this test is executed, the user will browse through five categories before adding an item to the cart.
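Conceptually, a clip repeat is just a loop wrapped around an unmodified clip. The sketch below follows the example above, where five repeats yield five category browses; the names and the exact repeat-counting semantics are assumptions, not CloudTest internals.

```python
# Sketch of clip repeats: re-running one clip N times without
# changing the clip itself. Hypothetical names throughout.

def browse_product_categories(track_props):
    # The clip body is unchanged; only the repeat count varies.
    track_props["categoriesBrowsed"] = track_props.get("categoriesBrowsed", 0) + 1

def run_clip(clip, track_props, repeat=1):
    # The repeat setting lives outside the clip, so removing it
    # restores the original single-execution behavior.
    for _ in range(repeat):
        clip(track_props)

track_props = {}
run_clip(browse_product_categories, track_props, repeat=5)
print(track_props["categoriesBrowsed"])  # 5
```

Because the repeat count is configuration rather than code, dialing the browsing behavior up or down never risks breaking the clip shared by other compositions.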
Prior to SOASTA, I worked on the flagship product of a major media corporation, maintaining more than 150 different data paths, many of which were continuously undergoing major overhauls. This test design and unique interface saved me countless hours by cutting down on tedious maintenance, and gave me the ability to make fundamental changes to multiple test paths in minutes, not hours. In turn, this gave me the freedom to better use my time to expand test coverage in areas that were lacking. While my test scenarios and application coverage grew more robust, the time required to modify and maintain my test suite remained consistently low.
Take a moment and consider where this type of testing approach might apply to your current or future tests; it may end up saving you significant time in test maintenance and construction. If you have implemented tests like this already, we’d love to hear about it; leave us a comment and tell us your story.