Strategy, Planning, Implementation, Execution and Analysis

Leveraging the Performance Testing Methodology, SOASTA provides consulting services both directly and through our partners. Consulting services are integrated into CloudTest On-Demand engagements or available on an ad hoc basis, and range from test creation and execution to project management, strategy, and implementation planning.

SOASTA’s Performance Test Strategy Workshop is typically a multi-day session with a Senior Performance Engineer, designed to identify and document the requirements and goals that will drive the test plan. The workshop aims to:

* Align business requirements with processes
* Integrate performance testing into the application development lifecycle (ADL)
* Establish success criteria
* Define a monitoring strategy matching KPIs
* Align teams to deliver to performance goals

The Performance Test Strategy Workshop often leads into an Implementation Planning engagement to document the people and processes needed to deliver on the strategy. The Implementation Planning session:

* Clarifies roles and responsibilities
* Establishes communication mechanisms
* Defines the execution model
* Creates a framework for building test plans
* Focuses on taking action on results

Test Execution

Once a strategy and implementation plan are in place, SOASTA can assist with or manage test execution. The plan identifies the desired test types, which together provide a well-rounded view of application performance and reliability. The most successful online application companies execute well-defined performance and readiness plans that include a mix of tests, such as:

Baseline: the most common type of performance test. Its purpose is to achieve a certain level of peak load on a pre-defined ramp-up and sustain it while meeting a set of success criteria such as acceptable response times with no errors.

Spike: simulates steeper ramps of load, and is critical to ensuring that an application can withstand unplanned surges in traffic, such as users flooding into a site after a commercial or email campaign. A spike test might ramp to the baseline peak load in half of the time, or a spike may be initiated in the middle of steady state of load.
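The baseline and spike profiles described above can be sketched as simple time/load curves. A minimal Python sketch follows; the peak of 1,000 virtual users, the 10-minute ramp, and the 30-minute hold are hypothetical illustration values, not CloudTest settings:

```python
def ramp_profile(peak_users, ramp_secs, sustain_secs, step=60):
    """Return (time_sec, virtual_users) points: linear ramp, then sustain."""
    points = [(t, round(peak_users * t / ramp_secs))
              for t in range(0, ramp_secs + 1, step)]
    points.append((ramp_secs + sustain_secs, peak_users))
    return points

# Baseline: ramp to 1,000 virtual users over 10 minutes, hold for 30.
baseline = ramp_profile(peak_users=1000, ramp_secs=600, sustain_secs=1800)

# Spike: reach the same peak in half the ramp time, per the description above.
spike = ramp_profile(peak_users=1000, ramp_secs=300, sustain_secs=1800)
```

The same helper also covers an endurance profile: keep the ramp, and stretch `sustain_secs` out to hours.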

Endurance: helps ensure that there are no memory leaks or stability problems over time. These tests typically ramp up to baseline load levels and then run for anywhere from 2 to 72 hours to assess stability.

Failure: ramps up to peak load while the team simulates the failure of critical components such as the web, application, and database tiers. A typical failure scenario would be to ramp up to a certain load level, and while at steady state the team would pull a network cable out of a database server to simulate one node failing over to the other. This would ensure that failover took place, and would measure the customer experience during the event.

Stress: finds the breaking point for each individual tier of the application or for isolated pieces of functionality. A stress test may focus on hitting only the home page until the breaking point is observed, or it may focus on having concurrent users logging in as often as possible to discover the tipping point of the login code.
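The stress-test logic above, stepping load upward until the tipping point appears, can be sketched as a simple search loop. Here `run_step`, the error-rate and latency thresholds, and the simulated login behavior are hypothetical placeholders, not CloudTest APIs:

```python
def find_breaking_point(run_step, start_users, step_users, max_users,
                        max_error_rate=0.01, max_p95_ms=2000):
    """Step load upward until success criteria are violated.

    run_step(users) -> (error_rate, p95_ms) stands in for executing one
    load step against the target (e.g. the login transaction).
    Returns (last_passing_load, first_failing_load).
    """
    users, last_good = start_users, None
    while users <= max_users:
        error_rate, p95_ms = run_step(users)
        if error_rate > max_error_rate or p95_ms > max_p95_ms:
            return last_good, users
        last_good = users
        users += step_users
    return last_good, None  # no breaking point found within max_users

# Simulated system: errors appear once concurrency passes 800 users.
def fake_step(users):
    return (0.0 if users <= 800 else 0.05, 100 + users)

last_good, broke_at = find_breaking_point(fake_step, start_users=100,
                                          step_users=100, max_users=2000)
# With this simulation, the login tier passes at 800 users and fails at 900.
```

In practice each step would be a real load run with results read from monitoring, but the loop captures the essential shape of a stress test: raise load, check criteria, stop at the break.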

Diagnostic: designed to troubleshoot a specific issue or code change. These tests typically use a specially designed scenario outside of the normal library of test scripts to hit an area of the application under load and to reproduce an issue or verify issue resolution.