The Performance Beacon

The web performance, analytics, and optimization blog

It’s continuous with Jenkins: User experience drives DevOps focus

[Image: performance testing with Jenkins]

I recently attended and presented at the east coast version of the Jenkins User Conference, held this year in Washington, DC. The weather certainly fit the theme of the conference: the heat was continuous. The humidity was fully integrated with the heat. And, most importantly, as you can see above, SWAG was out in full force.

State of the State

Right from the opening keynote by Jenkins founder Kohsuke Kawaguchi, this conference was jam-packed with the latest capabilities of Jenkins, including discussions of new features like Workflow, plus several sessions on Linux containers, microservices, and everyone's favorite topic, DevOps.

I was fortunate enough to be able to speak on SOASTA’s behalf on one of my favorite topics — continuous integration — with my session appropriately named It’s Not Called Continuous Integration (CI) for Nothing!

In this session, I dug deep into continuous integration and the key factors that make up the overall CI process. I covered:

  • the relationships and process flows between change management, configuration management, and release/build management;
  • how the CI process, when coupled with a solid performance engineering discipline across the product lifecycle, can result in a better user experience for web and mobile applications; and
  • the entire lifecycle, the “conveyor belt” of the application lifecycle, with a concentration on these three processes — or as I call them, “The Big 3” — that support the overall CI strategy.

As our ultimate example, since SOASTA "doesn't have a dog in this hunt," as the saying goes (i.e., we do not offer products in this space, but we do use products in this space to build our solutions), I walked the session attendees through the SOASTA product lifecycle and how SOASTA uses Jenkins to bring world-class products to market with speed, quality, and efficiency. You can view the session online, but let's cover some highlights.

The 4 main takeaways from my session

1. Configuration management is more than version control

Without version control, there is no reliable way to know what a given unit of work contains. At any point in the development-testing-release process, the first debugging question should always be “What changed?” A version control system helps answer that question — but it’s not the whole story.

Version control typically covers just the code under development, whereas configuration management covers the entire process: not only software, but also hardware, tests, documentation, connection pool settings, other configuration files, and more. It identifies every end-user component and tracks every proposed and approved change to it, from Day 1 of the project to the day the project ends.

Configuration management also ensures reproducible builds. It removes the waste of manually assembling code. Reproducibility is a key component of safety: you can’t safely deliver frequent changes if you don’t know what you’re releasing.
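To make "reproducible" concrete, here is a minimal sketch of one way to verify it; this is illustrative only, not SOASTA's actual tooling. If a build is fingerprinted from its source revision plus its exact pinned dependency versions, two builds produced from identical inputs can be proven identical, and any drift in a dependency is immediately visible.

```python
import hashlib
import json

def build_fingerprint(revision, pinned_deps):
    """Return a deterministic fingerprint for a build.

    revision:    the version-control revision being built (e.g., a git SHA)
    pinned_deps: mapping of dependency name -> exact pinned version
    """
    # Serialize with sorted keys so the same inputs always hash identically.
    manifest = json.dumps({"revision": revision, "deps": pinned_deps},
                          sort_keys=True)
    return hashlib.sha256(manifest.encode("utf-8")).hexdigest()

# Two builds from the same revision and the same pins match exactly...
a = build_fingerprint("3f2c9ab", {"libfoo": "1.4.2", "libbar": "0.9.1"})
b = build_fingerprint("3f2c9ab", {"libbar": "0.9.1", "libfoo": "1.4.2"})

# ...while any change to a single pin yields a different fingerprint.
c = build_fingerprint("3f2c9ab", {"libfoo": "1.4.3", "libbar": "0.9.1"})
```

Storing the fingerprint alongside each release artifact means you always know exactly what you're shipping.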

Few things are more frustrating to hear than this: "It worked on my machine." Environmental inconsistency is the primary cause of this situation. Manual system configuration exacerbates consistency problems, and it's common for developers to be running different versions of an SDK. They also often test against system software that differs from the software running in the integration environment. That environment, in turn, doesn't match production. The result is pure waste.

Fortunately, this problem can be solved using configuration automation and by treating infrastructure as code. A single set of configuration scripts can be used to provision development, testing and production environments. Consistent deployment of configuration changes across large numbers of environments and machines becomes as simple as checking in a configuration script change. This is one of SOASTA’s internal best practices. It should be yours, too.
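As a small illustration of that idea (the environment names and settings below are hypothetical, not a real SOASTA configuration), a single checked-in definition can drive every environment, with only the per-environment deltas varying:

```python
# Infrastructure-as-code sketch: one base definition drives dev, test,
# and production; each environment declares only its differences.

BASE = {
    "jdk": "1.8",
    "connection_pool_size": 20,
    "log_level": "INFO",
}

OVERRIDES = {
    "dev":        {"log_level": "DEBUG"},
    "test":       {},                          # identical to the base
    "production": {"connection_pool_size": 100},
}

def render(environment):
    """Merge the base definition with one environment's overrides."""
    config = dict(BASE)
    config.update(OVERRIDES[environment])
    return config

# A change checked into BASE propagates to every environment on the
# next provisioning run -- no machine is configured by hand.
for env in OVERRIDES:
    print(env, render(env))
```

Real-world tools (Chef, Puppet, Ansible, and the like) add idempotent application and drift detection on top, but the principle is the same: the environment definition lives in version control next to the code.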

A sound configuration management process ensures that the functional, performance, and physical attributes of the application are known, and managed, across the lifecycle. A good user experience (UX) starts here, whether you develop with an agile, lean, waterfall, or other process. This is the foundation for "The Big 3", and a key part of any solid DevOps environment.

2. Understand change management (aka “The only constant is change”)

This one seems simple, and it is… if done right. This process involves the identification of potential changes to an application, no matter where it is in its lifecycle — from day one to sunset.

In reality, potential changes should only originate from two places:

  • The business owner/customer who requested that the application be developed (e.g., by submitting a new requirement), and
  • a bug, which can originate from an end user through a ticketing system (e.g., a problem report or a feature request), or internally from QA during any of the functional and performance testing phases that should run continuously across the development lifecycle.

3. Know the difference between release management and build management

Even though I've seen these two terms used interchangeably by way too many vendors (including a few key two- and three-letter acronyms in my work history), these two processes are not the same.

The standard definition of a release is "a set of changes or features approved for the application." A release may consist of several change requests that add new features, as well as several problem-report resolutions that affect features being added or changed in that particular release.

A build, on the other hand, is typically an incremental set of requirements, changes, and problem resolutions that, in the CI world, are released and TESTED incrementally at each stage of the conveyor belt (e.g., functional testing and performance testing in Dev, Test/QA, and pre-production/staging).
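One way to picture the distinction is as a simple data model. The sketch below is hypothetical (the classes and item IDs are my own, not any vendor's), but it captures the relationship: a release is the approved set of work, while builds deliver that set incrementally down the conveyor belt.

```python
from dataclasses import dataclass, field

@dataclass
class Build:
    """One incremental snapshot moving through the CI conveyor belt."""
    number: int
    items: list                 # change requests / problem resolutions included
    stages_passed: list = field(default_factory=list)  # e.g. ["Dev", "QA"]

@dataclass
class Release:
    """The full set of changes approved for the application."""
    version: str
    approved_items: set

    def is_complete(self, builds):
        """A release ships only when its builds cover every approved item."""
        delivered = set()
        for b in builds:
            delivered.update(b.items)
        return self.approved_items <= delivered

release = Release("2.0", {"CR-101", "CR-102", "PR-7"})
builds = [Build(41, ["CR-101"]), Build(42, ["CR-102", "PR-7"])]
print(release.is_complete(builds))
```

Many builds, one release: the release is a management construct, the build is an engineering one.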

Continuous integration build failures may be relatively quick and easy to fix. As much as possible, however, they should be avoided. Development teams should have the goal of checking in complete, correct code. The further “shift left” in the lifecycle you find a bug, the less costly it is to fix. Finding bugs prior to check-in is less costly still than finding them during a continuous integration build.

The best way to catch bugs prior to check-in is to have developers and testers work together, rather than to treat testing as a post-coding activity. Short-lived feature branches are ideal for this purpose. Developers and testers can share, review and run each other’s code. By running the test suite locally, they can minimize the likelihood of discovering bugs later in the lifecycle.
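One lightweight way to enforce "run the suite locally" is a pre-commit hook. The sketch below assumes a git hook and a pytest-style runner, which are my assumptions, not something prescribed above; substitute whatever test command your team actually uses.

```python
# Sketch of a git pre-commit hook (installed as .git/hooks/pre-commit):
# run the local test suite and block the check-in if it fails.
# The "pytest" command below is an assumption; use your own runner.
import subprocess
import sys

def gate(command):
    """Run the test command; a non-zero exit code should block the commit."""
    result = subprocess.run(command)
    if result.returncode != 0:
        print("Tests failed -- fix them before checking in.", file=sys.stderr)
    return result.returncode

# In the actual hook, exit with the gate's status so git honors it:
#   sys.exit(gate(["pytest", "-q"]))
```

Cheap to set up, and it catches bugs at the least costly point in the lifecycle: before they ever reach the CI build.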

The value of running tests depends on the quality and completeness of the tests. In a continuous integration environment, tests are code just like any other code. Story planning needs to account for test development time. Development teams need to pay as much attention to specifying needed tests as they do to specifying needed functions. Test specification needs to account for both non-functional (e.g. performance testing) and functional validation.

Without a good test foundation for your CI process, your application will end up just like this slide presented during the Jenkins User Conference keynote:

[Slide: performance testing, a deployment problem]

4. Final takeaway on continuous integration: Achieving high-speed cycle times requires automation of more than just the test itself

SOASTA's internal approach is particularly conducive to compressing the SDLC of web and mobile applications through CI tools and best practices built around the key processes above, coupled throughout with a solid performance engineering process.

When combined with Jenkins (or Hudson), for example, it is possible to automate the entire process from build through test and into reporting and diagnostics. Results are displayed in a common interface, and automated regression testing can be done completely hands-off. This alone won't obviate the need for all manual testing, but it does put automation, maintenance, and reusability within reach of developers and testers, enabling speed with a quality focus.
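As a generic illustration of the "common interface" part (this is the standard Jenkins mechanism, not SOASTA's specific integration), a job step can publish its results as JUnit-format XML, which Jenkins' built-in JUnit report handling renders alongside everything else. The test names and data below are made up for the example.

```python
# Emit test results in JUnit XML, the de facto format Jenkins consumes
# for its common results/trend interface.
import xml.etree.ElementTree as ET

def to_junit_xml(suite_name, results):
    """results: list of (test_name, error_message_or_None) pairs."""
    failures = sum(1 for _, err in results if err)
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)), failures=str(failures))
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if err:
            ET.SubElement(case, "failure", message=err)
    return ET.tostring(suite, encoding="unicode")

# Hypothetical performance-smoke results published at the end of a job.
xml = to_junit_xml("perf-smoke", [
    ("homepage_load_under_2s", None),
    ("checkout_load_under_3s", "p95 was 3.4s"),
])
print(xml)
```

Write that XML to the workspace, point the job's JUnit publishing step at it, and every tool in the pipeline reports into the same dashboard.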

The best thing about the way SOASTA implements these best practices in our own products is that we make them available to our customers. You can install TouchTest and CloudTest for free to try them out for yourself:

  • Our mobile functional test automation solution, TouchTest, is tightly integrated with Jenkins. Test results can be seen in Jenkins, with drill-downs into test results, regression testing, and current build status/completion.
  • Our CloudTest solution is also tightly integrated with Jenkins (or Hudson), and provides the same capability as the TouchTest solution.

If you have any questions about either of these testing solutions — or testing in general — I’m always here to answer.




About the Author

Dan Boutin

Dan is the Vice President of Digital Strategy for SOASTA. In this role, Dan is responsible for taking the world's first Digital Performance Management (DPM) solution to market, serving as a trusted advisor for SOASTA's strategic customers, and changing the way ecommerce organizations approach the marketplace.

Follow @DanBoutinGNV