How to do Continuous Load Testing with CloudTest and Jenkins

Two key challenges to continuous load testing are provisioning a test system to handle the load and accessing load generators to drive the traffic. Watch this webinar to learn how you can overcome these challenges.


Brad Johnson:

Thank you operator and thank you everyone for joining another SOASTA webinar. Today we’re joined by our partner CloudBees. We’re excited to present on a topic that is clearly very interesting to lots of people. We’ve hit a record again for attendance and registration, so hopefully we’re going to fulfill your expectations today. My name is Brad Johnson; I’ll be kind of the moderator and I’ll open up the session. I’m joined on my left by Mark Prichard from CloudBees. He runs product management there. On my right is Mike Ostenberg, who runs systems engineering here at SOASTA. They’re really going to deliver the meat of this presentation, which is showing you all of the capabilities you have to integrate load testing into your Jenkins implementation, as well as using the cloud to do so. Just to make it very clear, SOASTA and CloudBees are great partners. SOASTA is the leader in mobile and cloud testing, and CloudBees is the home of Jenkins in the cloud, with amazing capabilities for deploying, building, and managing your applications in the cloud. Together, we’re really enabling continuous delivery for web and mobile.

We’re going to run a few poll questions today to keep a pulse on all of you. There’s a bunch of you out there, and again, thank you for joining. The first poll question is really: where are you when it comes to continuous integration? Are you already building everything you do using CI of some sort? Are you building some apps using CI? Is it on your implementation schedule? Are you still evaluating whether CI is a good option for you, or are you really just learning about this thing called CI and beginning to investigate? Keep voting. A lot of good results here. Guys, as we see this come in we see a lot of good CI usage, and I’m going to assume that the majority of this is on Jenkins, Mark. It’s a really nice cross mix. Clearly, more than 50% of you are using CI regularly, which is a great place for us to frame our conversation. For the rest of you who are beginning to use it or still learning about it, this should be really educational around this particular topic: how do you use performance testing as part of your Jenkins continuous integration process? Because it’s something that isn’t that common today.

I’m going to go ahead and close this poll in 5, 4, 3, 2, 1. Thank you all for voting on that. Clearly we have a good majority of folks very familiar with CI. Let’s get right into what we’re going to be talking about today. I’m going to start off just kind of setting the stage on some of the basic things. Why should we load test earlier? Then, Mike and Mark are going to really drill into the meat. Building tests and preparing the test environment, that’s very important. On the first slide, if you saw it, there are hard bits to load testing continuously. We’re going to really focus on those, and two of those things are building a test environment and getting your load test infrastructure ready. Mark is going to talk about what Jenkins is and how you set up an environment in the cloud, and then Mike is going to follow up with how you build and connect load tests to that environment. We’re really excited to show you today a new widget inside of Jenkins that’s going to allow you to manage your projects with a performance baseline. Out of everything we’re showing today, I think that’s the piece that’s going to let you manage your projects differently, with a performance metric that you look at every day.

The big question that I think we all know the answer to: why should we load test earlier? When I thought about putting this together, I could only think about one response, and that is, do we really need to answer this question? You look in the press in the US, and we’ve got so many obvious failures in the marketplace related to user performance and user experience that I’m not even going to present the cases that we all know so well about why we should be doing load testing earlier. Clearly it’s not happening. Let’s kind of modify this: why don’t we load test earlier? I think that’s the most important question. A lot of you who have been in the business for a long time know the answers intrinsically, but if you’re on the developer side of the house you may never have really thought about these things, or you may have responded in some of these ways. I would say one of the key reasons why we don’t load test earlier, at least for people who aren’t professionals in the load testing space, is because it’s hard. There are a lot of things that make load testing difficult.

One of these things is tools that require coding skills. Very frequently, and this is really important, they’re skills that our development team doesn’t have. Does your tool need C programming? Does it need VB programming when people are writing in Java and every other new language? The other thing is that every developer is very willing to do testing on their own desktop or their own local environment, but they don’t have a scalable test system. It’s hard to get that system in place because not only do they not have the test system, they don’t have the hardware for load generation. When they do their testing, they have to go into an entirely different tool. Another excuse under “because it’s hard” is that they’ve got so much else to do. Mark’s going to talk about the value of Jenkins, the value of continuous integration. Really, what it’s all about is automating testing in a lot of ways. You need to test, yes, you need to do performance testing, but we need to make it faster and easier.

Another excuse for why we’re not load testing earlier is that it takes too long. This is probably one of the core excuses for why load testing is late and inadequate. One of the areas here relates back to the scripting and the coding: it takes a long time to build these tests. When I run a test, it takes a long time for me to dig into the results. Before I even do that, I need to set up the environment, so that’s impossible, particularly when we’re depending on the IT team to give us servers or resources for that test. Developers move fast; they don’t want to wait. I kind of just threw this next one in here because I think it’s interesting: “we’re too agile to do that kind of testing.” I call BS on that because it’s all about testing. It’s all about quality. There’s no such thing as too fast if you can’t deliver quality at the same time. Next, and I hope I’m a little controversial here: because nobody ever told us to. Nobody ever said you need to do performance testing. Nobody ever said you need to do performance load testing earlier in the cycle. You can kind of read these.

I think nobody ever really built or actually managed to a performance coverage requirement report because, frankly, it would be dismal. If you did a performance coverage report it would look like 0.05% coverage. I think there’s a way to get there. The other thing is that we’re not managing projects early with a performance baseline. Do early stage engineers actually know that a 2.5 second response time is the expectation from the business? Yes and no, but I think more frequently no. It’s hard to build if you don’t have a target, something that you’re continually looking at and measuring against that helps you understand where you are. I think the most important aspect of “nobody ever told us to” is that nobody’s actually managing us to this metric. When we exceed a performance threshold, it doesn’t break the build. We need to think about how we do that. What we’re here to tell you about today is that there are no excuses. These are bad ones. We’re going to try to show you how to make load testing part of that continuous process.
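Brad’s point about breaking the build on a performance threshold can be sketched in a few lines. This is an illustrative gate, not SOASTA’s actual API; the 2.5-second target comes from the example in the talk, and the nearest-rank percentile method is one common convention.

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (seconds)."""
    ordered = sorted(samples)
    # Nearest-rank method: the smallest sample covering pct% of the data.
    rank = max(1, math.ceil(pct * len(ordered) / 100))
    return ordered[rank - 1]

def gate_build(response_times, threshold_seconds=2.5, pct=90):
    """Return True (build passes) if the 90th-percentile response time
    is at or under the business target, False (break the build) otherwise."""
    return percentile(response_times, pct) <= threshold_seconds
```

A CI job would call `gate_build` with the measured response times and fail the build on `False`, which is exactly the “manage to the metric” behavior Brad is describing.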

Having been through that set of things, what I’d like to do now is read the room. Here’s a poll that’s really looking at what you see holding you back from a more iterative load testing approach. Check whichever ones you think are appropriate. Is it a skills gap? We don’t know how to code, we don’t know how to use the tools. Is it that we don’t have the scalable environments? Is it that tests take a long time to build, or limitations of the tools? This one is important: third party dependency. That’s a little vague. Maybe it’s third party services that you don’t even know are going to be part of the app, or maybe it’s dependency on the IT organization to get your load servers. Or is it the fact that we’re not managing to these kinds of metrics? Take a look at these, keep voting, got a lot of votes flying in. I’m going to do the same thing as before. Here are some results for you to take a look at. Pretty nice spread. It looks like a lot of you have a lot of these problems.

Absolutely. I mean lack of the test environment, hey, I think we should have a webinar on that. What do you guys say?


Yeah, I’m good for it.


We’re going to talk about that today, but really, really great feedback on why this isn’t happening earlier. I’m going to count down from 5, 4, 3, 2, 1, and I’m closing the poll right now. Really great statistics everyone. Clearly we’ve struck a chord. Whether it’s the lack of an environment to test in or undefined performance requirements, we’re going to talk about those things very specifically today. Mike’s going to give you a glimpse of CloudTest, and you can come back to any of our other webinars or come to our website to understand how that works. We really feel that we’ve solved a lot of these problems for you. Let’s take another perspective on how load testing has been and needs to change. As I mentioned to the guys in the room, this is probably my 675th flow chart that shows testing.

What’s important here is that almost everyone we talk to is doing iterative development. That’s a very vague and open kind of concept, but it just means that whether it’s every day, every week, every month, every quarter, or every year, you’re running through iterations, and depending on which approach you’re taking, you’re doing smaller sprints or smaller iterations. The point is, SOASTA deals with many, many enterprise customers, and the reality I keep hearing is that nobody’s purely agile. In the enterprise, what Gartner likes to say is it’s kind of a “water-Scrum-fall” type of approach. I like that one, I’m going to give them full credit for it. There are all kinds of other variants to that approach, but the point is we are forced to do faster development in an iterative way. We’re bringing some of our habits from the past into that new method, and one of those, forgive me if I define it as a habit, is leaving all of the load and performance validation to an elite team.

I definitely want to call out, right here, as I was talking to the other guys earlier: we’re not saying you don’t need an elite team of performance experts. What we’re saying is if you leave everything related to performance to that elite team, this scenario’s going to happen every time. We’re going to be pumping along, doing all the right things. We’ve got our unit tests. We begin to implement functional tests. If we’re doing mobile we start doing some mobile validation. It’s all functional, so it’s all plugged into Jenkins nicely, or plugged into whatever CI framework we have, pumping along nicely, and all of a sudden we hit some roadblock which says, “Okay, we need to do performance testing now.”

What happens is that we’re expecting that elite performance team to find everything in that one period of time, whether it’s a 3 day test or a 3 week period, doesn’t matter. We’re expecting them to find code bugs, memory leaks, and database issues, as well as infrastructure constraints, configuration problems, and bandwidth limitations. If they find any of these things it stops, or at least delays, deployment. Decisions are made at that point, and very frequently they’re bad decisions. They’re decisions to say, “You know what? We’ve got to go, let’s just go.” What happens is that all the goodness that can be provided by load and performance testing is lost, because only the most critical tests are run and only the most obvious bugs are found, instead of finding things earlier. I think you get the point here that we have to figure out a way to integrate performance testing earlier in the whole process.

What does that look like? One thing that struck us as we were putting this together is, well, if you’ve heard about DevOps, this is what DevOps is about. It’s about spreading the appropriate tasks to the right people across the delivery cycle. What we’re saying is that your engineers are building their software, doing the right things, because every good engineer writes 100% code coverage with their unit tests. They’re doing that and they’re starting to add more functional automation into it. What we are proposing, and what we’re seeing with some great customers, is that they’re beginning to put small performance tests into the task list of what developers do. If you’re in the community, clearly developers are thinking about front-end web performance. That’s awesome. We need to start thinking about how my unit or component behaves when I scale it to 10 users. Maybe that becomes part of early stage performance testing. We’re not talking about complex end-to-end tests with all kinds of dependencies and so forth. That’s the domain for the performance testing experts.

What we want developers to do is think about smaller performance tests, earlier, with their entire set of tests, and then that rolls into our regression suite. Imagine a regression suite that has the unit tests and the functional automation as well as performance tests running. You can get really creative, and this is really what we see as the future of continuous quality, where every engineer is taking a stake in the game. What happens to that elite performance team? Number 1, they advise on what’s happening earlier, but more importantly they are deployed on really hairy, user-focused tests that enable end-to-end validation, and they’re not fixing code bugs, they’re fixing configuration issues and they’re fixing infrastructure limitations and things like that. Imagine, they’re even running production tests in live production environments, as we see so many of our customers doing today. Hopefully that kind of squares away why, and what the endgame should look like.

Let me introduce what Mark and Mike are going to be talking about today, and that is continuous integration for continuous performance. Hopefully you’re all relatively familiar with the concepts around continuous integration, so this is really an abstract of that which says, “I’m a developer. I made a code change. I wrote all my tests, including a performance test.” What happens now is that Jenkins has a task that spawns off that test environment. In this case we’re going to be talking about the CloudBees environment, but it spawns off that test environment. It can be big or small, folks. We’re talking about early stage. I might only have a couple tiers up, but you know what, as I move deeper into the development stage, I might actually build a relatively large test environment, and I can spawn that off hands free. The point here is that it’s part of the Jenkins task list, and once that happens, we can run the SOASTA CloudTest tests directly against that environment and do all the goodness there. We push the results into the Jenkins infrastructure, the Jenkins UI, with detailed performance metrics that come out of CloudTest. Most importantly, what we’ll show you today is that in my Jenkins console I’m looking at a performance baseline, so I know how my whole team is doing against a single performance metric, and we can manage this every single day.
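The flow Brad just described, provision, test, publish, gate, can be sketched as a plain driver. Everything below is illustrative: the function names are stand-ins for real provisioning, load-test, and reporting steps (in practice the CloudTest Jenkins plugin handles these), not anyone’s actual API.

```python
def run_performance_stage(provision, run_load_test, teardown, publish,
                          baseline_p90_ms):
    """Illustrative CI performance stage: spin up a test environment,
    drive load against it, publish the metrics, then gate the build on
    a performance baseline. All four callables are hypothetical stand-ins."""
    env = provision()                 # e.g. instantiate the app from a descriptor
    try:
        metrics = run_load_test(env)  # e.g. {"p90_ms": 1800, "error_rate": 0.0}
        publish(metrics)              # surface the numbers next to the build
        # False here is what "breaks the build" on a performance regression.
        return metrics["p90_ms"] <= baseline_p90_ms
    finally:
        teardown(env)                 # tear the on-demand environment down
```

The key design point is the `finally`: the elastic environment is torn down whether the test passes or fails, which is what makes the cloud-bursting model economical.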

Hopefully, that sets the stage. I’m going to start with one more poll question to, again, read the room. Take the next poll question and then I’m going to turn it over to Mark. To help us understand what you’re doing, a big crowd: are you load testing today? I realize maybe some of you aren’t load testing at all right now; that option’s missing here. Anyway, are you doing non-web protocols and not mobile, kind of more traditional ERP or RPC, that kind of testing? Are you doing primarily web, primarily mobile, a good mix of web and mobile, or are you responsible for everything? We’ve got a good fast inflow of results. I’m going to skip to those results.


I want to see this.


Yup, and here we go. Obviously, more than half of you are doing primarily web testing. Not a whole ton of mobile-only testing, which is interesting; I’d like to look at the web versus mobile mix, and we will later. Do you or don’t you have mobile users out there? That’s interesting. Clearly, we have a strong mix of different interests on the load testing side. Like before, I’m going to count down from 5, so get your votes in. We’ve got hundreds of you who’ve voted, which is awesome. 5, 4, 3, 2, 1. Closing the poll. There we go, there are the results everyone. Looks like a lot of you are doing back office kinds of testing. We’ve got roughly half of you doing web protocols and another 16% of you doing both web and mobile. Actually, a significant amount of mobile testing is represented. 20% of you, god bless you, you’re doing it all. I think you’re going to get a lot of value now out of Mark’s presentation as he starts to show you how you would set up your cloud based system under test using Jenkins. Then, Mike’s going to follow up with how you test it. Mark, over to you.


Great, thanks very much Brad. That’s really interesting and obviously I agree with everything you said there. Let’s get straight in, because we’re going to get into the meat of the presentation now. For some of you who aren’t familiar with continuous integration, or perhaps don’t know the Jenkins project, a very quick introduction. Jenkins is massively adopted all around the world. It’s far and away the number 1 open source continuous integration server. Some people were asking what continuous integration is; I’ll hit that in the next slide. This was a project set up by my colleague Kohsuke Kawaguchi when he was at Sun Microsystems. We set up CloudBees very much around Jenkins. The whole of Jenkins as a project is open source. It’s highly extensible; if you’ve ever met Kohsuke, this is his absolute mantra. The proof of the pudding is we have over 1,000 plugins. I know it’s over 1,000 because I had the honor of writing the 1,000th, and that was some time ago. We get about 5 new ones a week. I’ll explain just why that’s so important in a second.

There’s a huge community. It’s very easy to extend and you can deploy it in all sorts of ways. You can deploy Jenkins on-premise; you can take the software and build out. We have people building out huge on-premise installations. You can deploy it in the cloud in all sorts of ways. You can use tools like vSphere. CloudBees provides a cloud based service as well. There are all sorts of hybrid models if you want to do on-premise deployment but cloud based testing. All of this is possible. We provide expert support from the core contributors there. I hope that gives you a feel for Jenkins. Just to explain why continuous integration is so important, why this was originally needed: I think the key thing is that as a developer you want to be able to carry on developing. You want to concentrate on your core task, which is building core software, but you want to make sure that all of the building, packaging, and testing is automated. It all happens the moment you do any kind of push.

Somebody asked in the chat window, “How do I tie this back so that if, for example, I’m using Git, I do a commit, I push that up to my repository, wherever that is, on-premise or in the cloud?” All the actions that we’re going to show you, everything we’re going to talk about, will happen automatically the moment you do that push. If there’s a problem, whether that problem is you broke the build and it won’t compile, there could be a problem where there are code quality issues, or you’ve pushed in code that isn’t properly tested and there’s no code coverage there, or you may find that you’ve caused a performance degradation. You need to know those results quickly, and you need to be able to trace that straight back to the commit that caused it. That traceability is the key to continuous integration, so we need to be able to monitor that over time. We need to make sure that quality is improving, and if there’s a problem, we need to be able to find it, find out what caused it, and fix it quickly.

One of the things that I was thinking of while Brad was talking is that, of course, the beauty of this huge community is you have plugins for a whole variety of different tools. Whether we’re talking about code coverage from things like Emma and Cobertura, there are many others; if you’re talking about testing mobile applications, there are plugins that connect to device clouds or do Android lint checks. I’m just picking examples out of my head, but the whole point is you can build up all these tests and you don’t have to be the expert in all of those things; nobody is. There are plugins in there where even Kohsuke doesn’t know what they do. The point is you can pick them up: if you want to integrate with Gerrit code review, that’s fine, there’s a plugin that does it. You can pull it in. The results are automatically sucked back into Jenkins in a standard form so they can be displayed in a sort of dashboard view. I’ll show you the beauty of that in just a minute.

Just remember, what we’re going to be showing you today, we’re using a cloud based platform to do our testing. For performance testing, sort of soak tests, long running tests, that’s really good, and we often talk about cloud bursting, where you do your standard build, and you may do parts of your testing and your functional testing using an on-premise deployment. When you want to do large scale performance tests, you need extra environments. You can spin those up in the cloud, run your tests, tear them down. That sort of on-demand elasticity is a really valuable way to go. What we’re showing you is cloud based testing, but nothing depends on that. You can do all this in many different ways, and of course, as I say, we can actually help you with that as well. Just to give you a feel for how we and many large customers set up these kinds of environments, and to show you how easy it is, what I’ve done is I’ve taken an application and I’ve used what we call a click start. A click start, at its core, is really just a JSON file that describes the structure of a complete application. There’s a little mechanism that reads that file and will instantiate the project. This is not sort of CloudBees deep magic; these are things that you can build for yourselves. Many customers do that.

The beauty of it is, for example, when we set up this webinar, Mike needed a test app and I had a test app. I could just give him this click start. He can click on it, he can build as many environments as he wants, he can change them, he can play around with them, and if he messes one up he can throw it away and build another one. It just becomes repeatable and super easy. The other nice thing is you can do it yourself, and I’ll explain that at the end. All the stuff we’re showing you, the examples are online. Make a note of that URL. You can set all this up, you can see everything we’ve done, you can copy it and so on. What I’m going to do now is share my desktop using the … Okay. The application that we’re using to test against is called BeesShop. We like bees, and we sponsor various bee hives to deal with colony collapse disorder and other grave problems of our time. This is an application; we don’t need to drill too much into the details of it.

If you’re interested, it’s all available online, just look for bees-shop-clickstart. You can Google it, and the app’s all open source; it’s on GitHub. All you really need to know for now is that this is an application, it runs on a Tomcat container, it’s a classic eCommerce shopping application, and it’s working against a back end MySQL database. If you’re interested, there are all sorts of variants, all sorts of things you can add to this where you integrate with AWS services, S3, CloudFront. All the details are in the readme for the application. The great thing is I really don’t have to know anything about that, and Mike, when he’s doing the performance testing, doesn’t have to worry about that either, because in here I’ve got this simple JSON descriptor which is just going to enable me to deploy that entire application, that entire environment, in the cloud in one click. Let me just show you what that would look like. This is CloudBees, this is my partner demo account. This is read only to the world. You can sign up for free and get your own account if you want to try this out. Just go to the click start menu item up there, and we’ve got lots of click starts that do all sorts of fun things.
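To give a feel for what such a descriptor contains, here is a heavily simplified, hypothetical sketch. The field names are invented for illustration and do not match the real ClickStart schema; see the actual bees-shop-clickstart repository on GitHub for the real thing.

```json
{
  "id": "bees-shop-demo",
  "source": { "type": "git", "url": "<your-fork-of-bees-shop>" },
  "build": { "type": "maven" },
  "runtime": { "container": "tomcat" },
  "resources": [ { "type": "database", "engine": "mysql" } ]
}
```

The idea is simply that one declarative file names the repository, the build, the container, and the backing services, so the whole environment can be instantiated and thrown away repeatably.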

One of these, about 4 lines down, is called the Bees Shop demo. All you have to do here is click on that, and this basically takes that descriptor, so I’m going to do something like this: Bees Shop click start. What this is going to do, as soon as I click that button, is create a cloud based Git repository hosted on CloudBees, which it will clone from the GitHub repository, so that’s in my repository. I can make a local copy, I can make whatever changes I want. Whenever I push those changes up, any change will trigger a Jenkins build, which will automatically be created for you using our cloud based servers, and then we’ll deploy the application and the database. All you have to do is click create app. In about a minute or less, you’re going to have that complete environment available for you to work with. If you want to follow on and try it out afterwards, this is how to do it. What I’m going to do in the meantime is go back to my main site and show you what we set up a little earlier. This is what you’re going to end up with once you run that click start. There’s a repository, which we cloned, and I’ve now got a Jenkins job that is running off that.

Somebody asked the question, and it’s a very good one: what happens when I make a code change? The answer is, as soon as I make a code change, it automatically runs this job. Everything that we’re going to talk about will happen every time anyone makes a change. If there’s an issue, I can trace it back and find out what caused it. I’m going to come down here. It’s a simple build. There’s all kinds of stuff I can stick in here; to keep it clean and focused today, I haven’t built in all the code coverage and metrics tests and all the other things I could do. What we’ve done is a simple job where we’re just going to build the application and deploy it. Somebody asked about a smoke test. A smoke test is going to be a very low touch test; maybe we test a few key strands of the app functionality, just to check that the thing is basically okay. I always like to think of testing as building up. Continuous integration builds up in layers, sort of like a pyramid. You start with lots of small, cheap, easy tests, and as you go up the pyramid you’re adding more and more value because you’re testing more and more interesting things. You can start off with a very simple test that just says, “Does this thing build? Did I break it?”

All the way up to long running performance tests that could run for days or weeks, where you’re really getting into that very high value and you’re really discovering what your application’s all about and what it’s capable of doing. We deploy the application and then we’re going to run a couple of projects hanging off that. One of the key principles of Jenkins is that you can extend this not just in terms of building out the job itself, but you can have jobs that trigger other jobs, and you build out these pipelines so that you can get more and more sophisticated and more and more targeted in your testing. All of those tests hang off that single event that you can trace back to, which is the code commit that triggered the whole thing. Any issues we can trace back to that commit. I can find the exact line of code and the committer who introduced the problem. With that as an introduction, what I’m going to do is take us back to the slides and hand it over to Mike, who’s going to explain a little bit about how we take this basic environment where we’re building the app, we’re deploying, we’ve got our simple tests, and now we want to layer in those richer performance tests, all in this continuous automated testing environment.


Okay, great. Thank you Mark for that overview and thank you Brad for describing the problem. What Mark has just shown us is an application built in Jenkins, in the CloudBees hosted Jenkins, and how you can quickly and automatically deploy it. Mark, as a developer, is a busy engineer, and he doesn’t necessarily have time, with each of these builds that he and his colleagues are producing, to go run performance tests and check the results. What we’re going to cover next is how we can take that build, which might be getting triggered several times a day as different developers check in code, and automatically run the performance test. Then, only in the case that the performance of the application degrades beyond a certain level, notify them that application performance is going to be affected so that they can take immediate action.

We’re going to start by showing you CloudTest and the SOASTA platform. Let me advance the slides here. The SOASTA platform has 3 main products on it. We have TouchTest for mobile functional testing of applications, CloudTest for load and performance testing, and mPulse for real user monitoring. We’re going to focus on the performance aspect of the SOASTA platform today, so we’re going to show you how, once that build completes, we can automatically have it kick off a load test. I’m going to share my desktop now, and let’s start by taking a look at SOASTA’s CloudTest. You should be seeing a browser on the screen right now. The browser is open to the CloudTest server. The CloudTest server is essentially a web server. Different users can access it by just logging in from a browser, and they’ll have access to all the assets within the CloudTest server. CloudTest can be used for testing either full applications, like we’re going to show today, or smaller pieces.

We have a load test that runs against the Bees Shop site, but if you’re testing just specific API calls, REST calls, or SOAP based calls, you don’t necessarily have to test the full application in its entirety. You can break off different sections, different APIs and REST calls, and load test those as well, so that you’ll have immediate visibility into changes in performance of those pieces. We can also do direct to database testing. For today’s demonstration, what we’re going to show you is a load test that runs against the Bees Shop application. I’m going to open up this Bees Shop test, and in this particular case, it’s a single script, which is a user going into that Bees Shop site that Mark showed you earlier. If I double click on this, what you can see is that it goes to a series of different pages. We go to the home page, click on the product page, grab a product and add it to the cart. Very simple load test, very quick to create this sort of script. We now have a way to emulate user activity as they go through the site.

I’m going to go back to this other tab, the Bees Shop composition, and we’ll talk a little bit about this composition. When we run this load test, it’s going to be kicked off potentially every time that a build is checked in for the Bees Shop application. We need the test to automatically run for a certain period of time and stop. The way we do that is, if you click down at the bottom here, there’s a properties tab for this load test. Under the track there’s a section down here for parallel repeat, minutes to ramp up, and minutes to ramp down, so there are a couple of things you can note here. First of all, this repeat-by method, this 100, also noted up here, this is the number of users. We’re going to simulate a load on this application today for just 100 users. Certainly with your CloudTest installation, you may decide to do performance tests of up to 1,000 users or more.

What we also have down here is the minutes to ramp up, minutes to ramp down, and maximum duration. The minutes to ramp up is set to a quarter minute, so we’re going to ramp up for 15 seconds. We’re going to run for 60,000 milliseconds, or 1 minute, and then we’re going to ramp down, so it’s a very quick load test. In actuality you’d probably configure a somewhat longer load test, and perhaps a higher volume, just so that you can normalize the data a bit more. Now that we have this load test, we can kick it off at any time by hitting the play button here. It’ll run a load test, basically going to the website.
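Just as a rough sketch of how those property values translate into a timeline, here is a tiny runnable illustration. The variable names are ours, not CloudTest’s actual field names; the values match what was just described in the demo.

```shell
#!/bin/sh
# Illustrative only: convert the composition properties described above
# into a human-readable timeline. Values match the demo configuration.
USERS=100
RAMP_UP_MINUTES=0.25        # a quarter minute = 15 seconds
DURATION_MS=60000           # steady-state duration in milliseconds

RAMP_UP_SECONDS=$(awk "BEGIN { print $RAMP_UP_MINUTES * 60 }")
DURATION_SECONDS=$((DURATION_MS / 1000))

echo "Ramp up to $USERS users over $RAMP_UP_SECONDS seconds"
echo "Hold the load for $DURATION_SECONDS seconds, then ramp down"
```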


It goes up to 100 users on that website and gives us performance data about how fast it is for that website to go through those different pages. This sort of data is available for the website itself, but also, if you have specific APIs that you’d like to test, you can run a load test against those APIs. Now the type of data we’re going to get back in CloudTest is things like average response time, send rate, and error rate. There’s a wide variety of these different metrics that are provided on the CloudTest platform. The goal here today is to basically take some of these key metrics, like the average response time and the 90th percentile response time, report them back into Jenkins, and alert if they go above a certain threshold, so that you’ll know that your performance has degraded and you need to take some action. If I open this collection up, you can see a variety of the metrics that we’re going to be passing back into Jenkins: things like the average duration, the minimum and maximum, the 90th percentile, and bytes sent and received. This is the load test itself. It should run for about a minute and then it’s going to stop.

Then we’re going to move next into Jenkins on CloudBees here, a hosted version of Jenkins, and show you how you can take this load test and have it trigger automatically and report back into Jenkins. You can see that our load test is ramping down now. This is a typical load test. What I’m going to do now is switch browser tabs and go over to the Jenkins tab on CloudBees here. We’re going to take a look at 2 things. First of all, this job up here, this Bees Shop job. This is the job that Mark described earlier. Whenever a developer makes a code change and checks in code, it’ll actually build the application, and with the ClickStart that he showed you, it lets you deploy it to an environment. We can then run our load test against that environment. What I’m going to show you next is that you can have a performance test which is going to trigger off of this. If this Bees Shop job compiles correctly and goes through, and if there are unit tests involved, we can actually have another job that is going to trigger off of that. This is the Bees Shop performance test here. Let’s go ahead and drill in on that and see what sort of data we’ll get with the Bees Shop performance test.

I clicked into the Bees Shop performance test, and off on the right you can see things like the test result trend. You can see every time that the application has passed or failed for each of these builds. As a developer, again, Mark doesn’t necessarily have time to be looking at all the performance data day to day, but he might want to understand the trend of the performance. As builds go in from him and the other developers working on the application, are we suffering bloat? Is the application starting to get a little bit more lethargic in terms of response? If I go over into the plot section of Jenkins, you can see that we’re now capturing performance data from every one of the load tests that we’ve run that were triggered from Jenkins. You can see the trend here, and you can see that recently we’ve had a big spike in the response time. We’ve passed this data in, so you can see that with this code change there was an issue. Something went wrong, and we’ve got to take a look at that.

We can actually go back into the data on CloudTest, and we’ll take a look at that in just a second, but we now have that visibility. As different builds are coming in, you can see every one of the builds that occurred over time. What was the page size change? What about the average response time? What about the 90th percentile time, where 90% of the users were faster than that value? We have comprehensive data now about the load test itself. Now again, Mark’s a pretty busy guy and he’s not necessarily going to have time to go in and look at these every time a build is checked in by him or other developers. The other thing we need to do, other than just providing this trending data, is to give alerting and automatically fail the build if the response time goes out of bounds. The way that works is, when we set up the job, we can put thresholds on it that will fail the build if it goes above a certain threshold for either the 90th percentile time or the average response time.

Let’s go ahead and take a look at the build data and dig in a little bit more to see what that test result looks like. On this particular test result, and I’ll dig into the performance for the test result, you can see all the failed tests at the top here. If I dig specifically into the performance, you can see every one of the thresholds that we put in for the average or 90th percentile response time. In this case, you can see that this job is going to alert if the 90th percentile is slower than 3 seconds, and if the average response time is slower than 2.5 seconds. Off on the right, you can see the actual durations, and you can see right away that this was a big failure. We were hoping to be faster than 3 seconds and we were at a minute and 16 seconds. Right away, he’s got immediate feedback that that build caused a performance problem.
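The pass/fail logic described here boils down to a simple comparison, which we can sketch in a few lines of shell. The numbers mirror the demo: a 3-second 90th-percentile goal against an actual time of a minute and 16 seconds (76 seconds). This is an illustration of the idea, not the actual script used in the demo.

```shell
#!/bin/sh
# Sketch of the threshold check: fail if the measured response time
# exceeds the goal. 76 seconds is the 1 minute 16 seconds from the demo.
THRESHOLD_SECONDS=3
ACTUAL_SECONDS=76

if [ "$ACTUAL_SECONDS" -gt "$THRESHOLD_SECONDS" ]; then
    RESULT="FAIL"
else
    RESULT="PASS"
fi
echo "90th percentile: ${ACTUAL_SECONDS}s against a ${THRESHOLD_SECONDS}s goal -> $RESULT"
```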

If he wants to go in and take a look at the detailed information about the build, here’s the link to the CloudTest results. You can take this link and copy it, I’ll go ahead and copy that there. We’ll put it into a browser tab up here, and this will take us back to the specific results for that job. You can see if this was just a spike at the start or if it persisted through the length of the load test, however long you ran it. Here we’ve got access to the full data for that particular load test. You can take a look at things like the send rates, the virtual user count, and the average response time. If you have monitoring set up on the server, you can actually see server side statistics as well, such as CPU and memory on the back end. All of the data that’s available for the load test is available for you to look at here to see what’s going on. Was it a particular page, was it a particular resource on the page? I can pull up something like the waterfall chart and take a look at the specific resources to see which one of those may have been the contributing resource.

The point here is that Mark doesn’t have to be continuously looking at this. We’re going to set the thresholds and alerts and let him know when it’s out of bounds, so that he can dig into the results at that point. Let’s go back into Jenkins and talk a little bit more about how this was set up. I’m going to go to the Jenkins build here, this Bees Shop performance test, and we’ll show how the build is configured. If I click on the configure link here, we’re going to go into the configuration setup for this. Now, the way that this works under the hood is that we provide with CloudTest a command line utility called SCommand that allows us to run compositions, or load tests, and capture results back.
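As a rough illustration, an SCommand call from a Jenkins “Execute shell” step looks something like the fragment below. Treat the flag names, the server URL, and the credential variable as assumptions based on this walkthrough; check the exact syntax against the SCommand documentation for your CloudTest version.

```shell
# Illustrative SCommand invocation; flag names and values are
# assumptions, not a verified reference for your CloudTest version.
scommand cmd=play \
    name="Bees Shop" \
    url=https://yourserver/concerto \
    username=demo_user \
    password="$CLOUDTEST_PASSWORD" \
    wait
```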

I’ve basically put a build step in here down below: a Perl script that’s going to use our SCommand utility to run those load tests, pull back the results, parse them, and push them in a way that is consumable by Jenkins. You can actually download this code and put it as a build step into one of your build jobs. We’re going to do this in just a bit, so that you have access to the script which will automatically allow you to kick off the load test. After you’ve put in the Git step which is going to pull this Perl script, the next thing you need to do is add a step that is going to run that Perl script. So here’s where we say we’re going to go ahead and run the Perl script. You have to pass in a few command line arguments: the name of the transaction, such as the home page; the metric you’re measuring against, in this case the average response time; and then the threshold, above what time should I alert? Here we’ve got several different thresholds or alerts going on. The Bees Shop home page average should always be less than 2 and a half seconds. The product page should also always be less than 2 and a half seconds.
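The shape of those threshold arguments, a transaction name, a metric, and a limit, can be sketched with a runnable stand-in. This is not the actual Perl script or its real argument syntax; the `transaction:metric:limit` encoding here is purely hypothetical, just to show the idea of one threshold entry per transaction.

```shell
#!/bin/sh
# Hypothetical stand-in for the threshold arguments described above:
# each entry is transaction:metric:limit-in-seconds.
THRESHOLDS="home_page:avg:2.5 product_page:avg:2.5"

# Pull apart the first entry to show the three fields.
FIRST=$(echo "$THRESHOLDS" | tr ' ' '\n' | head -1)
TRANSACTION=$(echo "$FIRST" | cut -d: -f1)
METRIC=$(echo "$FIRST" | cut -d: -f2)
LIMIT=$(echo "$FIRST" | cut -d: -f3)
echo "Alert if $TRANSACTION $METRIC exceeds ${LIMIT}s"
```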

You can also put a range of thresholds on things like the 90th percentile or the minimum or maximum response time. All of these are ways that you can automatically alert in case there’s a performance degradation over time. At the end here, you can see that you’ll have to put in the user credentials for that CloudTest server. The CloudTest server is a separate server, so Jenkins is going to reach out to that server through the SCommand utility that we provide, log in with this username, and then this is the name of the composition that you’re going to run. We’ll go ahead and execute this Perl script. What it does is run that composition and capture all the results back into XML files on the Jenkins server. Let me bounce back here. If I go into the workspace for this job, you’ll see that after we run this job, there’s a series of CSV files that we use for the plot data. There’s a series of XML files that you can review to look at the results, and then down below here is another file, which is the JUnit format results that will report pass/fail on all those transactions. These are the results of the Perl script.

The next steps on the build job are to plot those. There’s a plot plugin that you can use that will take any of these CSV files, I didn’t put them all in this example, but you can see there are a lot of available options here, and plot the data for all of these files. Then we’ll use a build step that will use the JUnit results to pull in the pass/fail status of the build. Let’s go back to the configuration on the build and see how we pulled all the data from the Perl script to display it directly in Jenkins. If I go back to configure here, we’re going to scroll down to the bottom. After we execute the shell step, all of those files are available, and then, again, Jenkins is very configurable; this is the Plot plugin, available in Jenkins. You can go to manage plugins to install the Plot plugin. All it asks you for is the name and location of a CSV file. This is one of those files here. For all of those plots you saw before, we’re just using the Plot plugin in Jenkins and pointing it at the files that were created by that Perl script.
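For reference, the Jenkins Plot plugin can read a simple CSV with a header row of series labels and a row of values; the files produced by the script follow that general shape. The file name and columns below are assumptions for illustration, not the actual files from the demo workspace.

```shell
#!/bin/sh
# Write a minimal CSV in the shape the Jenkins Plot plugin consumes:
# one header row of labels, one row of values. Columns are illustrative.
cat > avg_response_time.csv <<'EOF'
"Home Page","Product Page"
2.1,2.4
EOF
cat avg_response_time.csv
```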

Down at the very bottom here, after all of these plots, and there are a lot of them, we have this publish JUnit report step. This was another file that was created by the Perl script, and this is how you pass in the pass/fail data. That last file looked at the actual response time, compared it to the threshold that we’d set when we called the Perl script, and created this pass/fail result. This is how we’re going to automatically pass or fail the composition, based upon whether it met or exceeded those particular requirements. That’s how it works, and now we have that full automated loop. We’ve got the build that will run automatically when developers check in their code. After that, it can run the unit tests, etcetera. When that job is completed, we’re going to go ahead and trigger this job. I probably should’ve pointed that out at the very top: this Bees Shop performance test is triggered off of that build job. The performance test job that we’re looking at is a separate job from the one that Mark was showing you earlier that deploys the application. After that job completes successfully, then we’re going to go ahead and run this performance test.
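The JUnit-format results file would look broadly like this minimal sketch: one test case per transaction, with a failure element when a threshold is exceeded. The suite name, case names, and failure message are illustrative, not copied from the actual script output.

```shell
#!/bin/sh
# Minimal JUnit-style XML in the shape Jenkins' "Publish JUnit test
# result report" step expects. Names and the message are illustrative.
cat > cloudtest-results.xml <<'EOF'
<testsuite name="BeesShopPerformance" tests="2" failures="1">
  <testcase name="home_page_avg_response"/>
  <testcase name="product_page_90th_percentile">
    <failure message="3.0s threshold exceeded: actual 76.0s"/>
  </testcase>
</testsuite>
EOF
grep -c '<testcase' cloudtest-results.xml
```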


Mike, could I just add that the way that these jobs are chained together in a pipeline, of course, we’ve shown a fairly simple, linear progression from one job to another, but the beauty of it is that all this data Mike collected is now available in Jenkins, so you can make quite sophisticated decisions about how you chain these together. You could set thresholds and say, provided those tests meet 90% of the goal, then we’ll go on and do the next stage of the test. If we’ve regressed back to, say, 75%, we’re actually not going to continue with the pipeline, and it needs to be addressed.
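That gating idea, continuing the pipeline only if a certain percentage of performance goals are met, is easy to sketch. The 90% cutoff comes from the example above; the goal counts are made up for illustration.

```shell
#!/bin/sh
# Gate a downstream job on the fraction of performance goals met,
# using the 90% cutoff from the discussion. Counts are illustrative.
GOALS_MET=9
GOALS_TOTAL=10
CUTOFF_PERCENT=90

PERCENT=$((GOALS_MET * 100 / GOALS_TOTAL))
if [ "$PERCENT" -ge "$CUTOFF_PERCENT" ]; then
    DECISION="continue pipeline"
else
    DECISION="stop and investigate"
fi
echo "$PERCENT% of goals met -> $DECISION"
```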


Absolutely, and that’s a great point. This isn’t limited to the one performance test; you could have an API test and other tests, and not run the next one unless the previous one passes. We have that capability now to chain all of these together. What we also have is, if I go into this particular build that we ran here, if you look at the console output, you’ll be able to see details about every step that ran. We actually ran this Perl script. You can see that we picked up all those thresholds that you put in there. We’ll report back the full results; these are all the metrics from CloudTest. Then we’re going to automatically look through the metrics, compare them to what your expected values are, and put them into that pass/fail result that automatically goes into Jenkins. We’ve got comprehensive visibility now for the developers as they check in their code, so that they can see both the performance trend as well as automatic alerting and failure of the build if the performance goes out of bounds.

If we wanted to create this from scratch, and I think we’ve got a couple of minutes, let’s go ahead and do one from scratch. I’m going to create a new job and walk you through the entire process. If I click on new job here, we’ll give it a job name; we’ll call it the webinar test. I’m going to create this as a freestyle project and click the okay button here. The first thing we’ll want to do is download the Perl script into this build job, and anybody, by the way, can do this as quickly as I’m doing it right now. We’re going to go ahead and… Oops, I’ve got to go to the source control management section first with my mouse. There we go. We’ve got to download that Perl script; it’s posted on GitHub. This is the name of the repository, the Perl script essentially, that’s hosted there. Let’s go ahead and save this. I’m going to download it now by hitting the build button, and that’s going to reach out to GitHub and download that Perl script with all the assets and capability that we just saw.

Then we’ll show the next step, which is: okay, I’m going to run the Perl script that we just downloaded so that we can capture performance results back. We’ll give it a minute here. As we’re waiting for this, if you do have questions, go ahead and type them in the chat and we’ll be answering them as they come up. I was going to wait for the job to run, but while we wait, I wanted to show the Perl download. There it goes, so we’ll give it a minute and I’ll show you that the code has been downloaded with this step here. We’ve downloaded the code. If I go to my workspace, there we have it. We’ve got the Perl script that we were talking about running here. The next step is to actually execute that Perl script. I’m going to go back and configure the job here. Now that we have a Perl script that is going to capture results, let’s go ahead and run it and capture back the results from a load test.

To run it, we will use the execute shell step, and we just use Perl for this, so we’re going to type perl and then the name of the file, which parses the SOASTA results summary. Then we just have to pass in those command line arguments that we showed before: give it the name of the server, give it your username, give it the name of the composition that you want to run. Then put in the thresholds. When you run that composition, we’re going to capture back all those results, so what are the thresholds? I’ll just put in a couple here. Okay, so that’s it. I’ve set this up so it’s going to run the Perl script. The Perl script is going to go ahead and run that load test, and we’ve set thresholds here that the home page shouldn’t exceed 2 and a half seconds and the product page shouldn’t exceed 4 and a half seconds. I’ll save that and we’ll run this now. We’ll go ahead and do the build now. It’ll take a minute or 2 to run. I could’ve done this, by the way, all in 1 step; maybe I should’ve, rather than doing it in the middle.

I wanted to show that process: you download the Perl script, then you run that Perl script and pull back the data. The last step after this is, after we run the Perl script, we’ll have all those files, and then we just use the Plot plugin to plot any of those files. I think we had the 90th percentile, average, bytes sent, bytes received, and a variety of other metrics that we can display in graphs in Jenkins, as well as the alerting. Give it another 20 seconds or so to run here. There we go, the build’s running. As the build is running, let’s go ahead and take a look and make sure we’ve got everything. I didn’t make any typos there, I hope. It’s running the build job now. Nope, it doesn’t like my credentials. Yeah, I typed in the wrong credentials. Let me finish up the build job. I did make a typo there. It did give me the feedback here that I’ve got invalid credentials, so I’m going to go back and modify that. Okay, there we go. The reason why it’s helpful to do this first run is that after the workspace is populated, you’ll see all those files in the workspace.

Then I’m going to just grab the names of those files and drop them into the Plot plugin. I could type them in if I wanted, but at the expense of time… Maybe I shouldn’t have done this. It’ll save me some typing since I can just copy those file names, and I think that’ll be the easier way for you, if you implement this on your own, to go ahead and run it once. You’ll see the files in your workspace, and then you can copy the file names and put them into the Plot plugin to allow you to graphically display the results of the load test. I’ll go back into this build, and hopefully we won’t get the invalid credentials error message here. You can see that it’s run the Perl script. It’s picked up the thresholds for the 90th percentile for the product page and the average response time. It’s using SCommand to run the load test on the server, and then we should capture back the results of the load test itself.

We’ve got about 3 minutes until our official end time, so we’ll be wrapping up here in just a bit. Maybe, just to prep for that, a quick summary of what we’ve discussed here today: we’ve shown how with Jenkins hosted on CloudBees, or even an internal Jenkins installation, you have the capability to run builds that occur automatically, either on a regular schedule or whenever code is checked in. We also have the capability with CloudBees and ClickStart to automatically deploy an environment, so now we have something that we can load test against. After that, CloudTest allows you to run a load test at either the browser level or at the API level, so that you can confirm the performance of that application. And then with this Perl script and the SCommand utility provided with CloudTest, you can see that we can capture those results back in, show trending of the performance results, and also alert on or fail that build if the performance is outside of acceptable bounds. That’s it there, should we-


I think that’s a lot to cover in a very short period of time. Mark and I have both been answering questions feverishly, as we asked you to send your questions in. I think we can pull out a couple, and I just wanted to get to a point here where we could talk about where you can get some of these resources, for instance. Obviously we talked about the Jenkins plugin from SOASTA, which allows you to do a lot of what Mike has shown. Maybe, Mark, you want to kind of talk about how people could get started on the app that you showed and get started on Jenkins. We very deliberately called out that we are 2 partner companies completing this whole continuous integration, deploy, and load test scenario. Let’s take a minute to say which tools are being used for which piece, just to wrap it all up. Mark, you want to talk about what’s available-




From the CloudBees side, and then we’ll talk again about what the CloudTest pieces are.


From the CloudBees point of view, there are 2 ways you can do this if you want to replicate this environment; it’s very easy to do. If you want to work entirely on-premise, you just download open source Jenkins and you load up the SOASTA CloudTest plugin. It’s available through the update center in exactly the normal way. It’s a great open source plugin. You can download that, and basically you can just copy the examples that you can see there at that URL. If you don’t want to bother with all that, you can just go to the CloudBees site. There’ll be a big button there that lets you sign up for an account. It’s all free; you can just try it out. You don’t have to enter a credit card or anything. Just click on that button, it’ll take a couple of minutes, and you will have your own account. Instead of being called partnerdemo.ci, it’ll be named after your own account. As soon as you’ve created your account, you just go in, go to that menu item at the top left that says ClickStarts. Click on that and you’ll see that big panel. Look for the Bees Shop, look for the bee with the honey.

Click on that, and again you’re going to get a job which has that full application. It’ll be deployed to your URL, and now you’ve got an application that you can test against. You can create as many as you want. It will automatically pull the source code for that app into a CloudBees-hosted Git repository. From our site you can see how simple it is to clone that repository locally. If you want to change the code and actually push code changes in, you can do that. Then you can just follow Mike’s configuration there. I posted the link a couple of times to the Perl script. The job configuration is all there. You sign up for your SOASTA account and so on, and you can do everything we’ve shown you.


I guess one other thing to point out is that in the load test I showed in today’s demonstration, we just did a small load of up to 100 users. SOASTA has a product called CloudTest Lite, which is an entirely free product. It will allow you to do load testing up to 100 users. You can download it and install it on your laptop. Everything we showed you today, you can try out and run start to finish, and make sure it works, without purchasing any products.


I just signed myself up to do a follow up blog where I’ll make sure that I cover all this with the correct links and so on.


Yeah, on SOASTA’s side we’re going to try to do the same thing. Look for our blog post in the next day or 2. There are tons of questions about, “Do I get the slides?” We’ll send an email out with a link to the slides; we’ll put them on SlideShare, I believe, but at the least we’re going to provide a link to the slides themselves. I’ll just reiterate what Mike said: all of what we showed today, automating the hard parts of load testing for continuous integration, can be started for free. You can set this up. There was a question about, “Can I set this up privately?” Yes, of course you can. I guess I’ll wrap with a… There’s a theme across about 3 questions: “Can I use this with other development frameworks or management frameworks? Things like Visual Studio TFS, Eclipse, and even Salesforce, I guess.” The answer is yes.

What I want to put out there, and probably end with, is that the beauty of Jenkins, if you’re in the testing side of the house, is that it doesn’t really matter what the development teams are using. You take a build, you put it into your continuous integration framework, and you can do all the magic that we just showed you today regardless of what your management framework is. Of course, as Mark can probably attest, there are a lot of connections to these other frameworks as well. A lot of really good things are happening with Jenkins, with CloudTest, and with the ability to deploy in the cloud on CloudBees infrastructure and PaaS offerings. With that, thank you Mark, thank you Mike, and thank you everybody who stayed on. We look forward to virtually seeing you next time.