Exploratory Performance Testing

By Alex Podelko

Exploratory performance testing is starting to attract attention and is getting mentioned in a variety of places. I assume this is due to the growing popularity of [functional] exploratory testing. However, not much has been published on the topic – and even what has been published often refers to different things.

I have seen attempts to apply functional exploratory testing techniques directly to performance testing. SmartBear blog posts contrast exploratory performance testing with "static" traditional load testing: Why Your Application Needs Exploratory Load Testing Today by Ole Lensmar and Should Exploratory Load Testing Be Part of Your Process? by Dennis Guldstrand. My view is closer to Goranka Bjedov's understanding as she described it back in 2007 in her Performance Testing post.

I have written about the agile / exploratory approach to performance testing in traditional waterfall software development environments for CMG'08: Agile Performance Testing, paper and presentation. Working now in an agile development environment, I see other aspects of agile / exploratory performance testing, some of which I presented at the Performance and Capacity 2013 conference by CMG.

The words agile and exploratory are definitely not synonyms. They are used periodically and loosely in relation to performance testing – but it doesn't look like we have an accepted definition. However, both terms are, in a way, antonyms of traditional waterfall-like performance testing – so their meanings may somewhat overlap in certain contexts. I explained my view of using the word "agile" for performance testing in the previously referenced presentations. Now it is time to contemplate the word "exploratory" in the context of performance testing.

If we look at the definition of exploratory testing as "simultaneous learning, test design and test execution", we see that it makes even more sense for performance testing: learning here is more complicated, and good test design and execution depend heavily on a thorough understanding of the system.

If we speak about specific techniques used in functional exploratory testing, some may be mapped to performance testing – but they definitely shouldn't be copied blindly. Working with a completely new system, I found that I rather naturally organize my work around "sessions" – so session-related techniques from functional exploratory testing are probably applicable to performance testing. I wouldn't apply such details as session duration, for example – but the overall idea definitely makes sense. You decide what area of functionality you want to explore, figure out a way to do that (for example, a load testing script), and start to run tests to see how the system behaves. For example, if you want to investigate the creation of purchase orders, you may run tests for different numbers of concurrent users, check resource utilization, see how the system behaves under that kind of stress load, how response times and resource utilization respond to the number of purchase orders in the database, etc. The outcome would be at least three-fold: (1) [early] feedback to development about problems and concerns found; (2) understanding of the system's dynamics under that kind of workload – what load it can handle and what resources it requires; (3) input for other kinds of testing, such as automated regression or realistic performance testing to validate requirements. Then we move on to another session, exploring the performance of another area of functionality or another aspect of performance (for example, how performance depends on the number of items purchased in the order).
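
To make the session idea concrete, below is a minimal sketch of such a concurrency sweep in Python. Everything in it is illustrative rather than taken from any particular tool: create_purchase_order() stands in for one business transaction against the system under test, and the user counts and calls per user are arbitrary placeholders.

    # Exploratory session: sweep concurrency for the "create purchase order" workload
    # and record response-time statistics at each level. Python standard library only.
    import time
    import statistics
    from concurrent.futures import ThreadPoolExecutor

    def create_purchase_order():
        """Placeholder for one business transaction against the system under test."""
        time.sleep(0.05)  # stand-in for e.g. an HTTP POST to the purchase-order endpoint

    def timed_call(_):
        start = time.perf_counter()
        create_purchase_order()
        return time.perf_counter() - start

    def run_level(concurrent_users, calls_per_user=20):
        """Run one load level and return simple response-time statistics."""
        calls = concurrent_users * calls_per_user
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            timings = sorted(pool.map(timed_call, range(calls)))
        return {
            "users": concurrent_users,
            "median_s": statistics.median(timings),
            "p95_s": timings[int(len(timings) * 0.95)],
        }

    if __name__ == "__main__":
        for users in (1, 5, 10, 25, 50):  # exploratory sweep, adjusted as we learn
            print(run_level(users))

In a real session you would watch server-side resource utilization alongside these numbers and change the sweep – or abandon it for a different question – as soon as the results tell you something new.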

The approach feels quite natural to me, and it maximizes the amount of early feedback to development, which, in my opinion, is the most valuable outcome of [early] performance testing. However, when I try to share this approach, many do not agree. The objections mostly align with three notions, which, in my opinion, are rather idealistic and not quite applicable to performance testing of new systems:

- Creating a detailed project plan (with test design, time estimates, etc.) and adhering to it

- Fully automating performance testing

- Using scientific Design of Experiments (DOE) approaches

I mention all three objections here because (1) they are often referred to as alternatives to exploratory testing and (2) they are all idealistic for the same reason: we don't know much about a new system at the beginning, and every new test provides us with additional information. Often this additional information makes us modify the system. Somehow the point that the system is changing during the process of performance testing is often missed.

For example, if your bottleneck is the number of web server threads, it doesn't make much sense to continue testing the system once you realize it: as you tune the number of threads, the system's behavior will change drastically. And you wouldn't have known that from the beginning (granted, this is a simple example, and an experienced performance engineer may tune such obvious things from the very start – but, at least in my experience, you will always have something to tune or optimize).
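
As a hypothetical illustration of that point (the helper and its parameters are made up for this sketch, not part of any described tool), a session might stop its concurrency sweep as soon as throughput stops growing – the sign that a bottleneck such as a capped thread pool needs attention before further results are worth collecting:

    def sweep_until_plateau(measure_throughput, levels=(5, 10, 20, 40, 80), tolerance=0.05):
        """measure_throughput(users) -> requests/sec; stop once gains fall below tolerance."""
        results = []
        for users in levels:
            throughput = measure_throughput(users)
            if results and throughput < results[-1][1] * (1 + tolerance):
                # Throughput is flat: investigate and tune the bottleneck
                # (e.g. the web server thread pool) before testing further.
                break
            results.append((users, throughput))
        return results

Once the tuning change is in, the sweep starts over, because the earlier numbers describe a system that no longer exists.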

So I actually believe that you do exploratory testing of new systems one way or another, even if you do not admit it – and you would probably be less productive if you don't understand that. You will also feel bad facing multiple issues and needing to explain why your plans are changing all the time. The great post TDD is dead. Long live testing by David Heinemeier Hansson discusses, in particular, issues related to using such idealistic approaches.