Exploratory Testing FAQs

I’ve come across a number of Frequently Asked Questions about Exploratory Testing and I’ve got what I hope are pretty good answers.

Frequently Asked Questions about exploratory testing. Got a quick question? Get a quick answer.

Can tools be used in Exploratory Testing?

Yes, there are many examples where people have used tools to enable and enhance their exploratory testing.

Is all testing exploratory?

No. Not unless you change the definition of testing to specifically exclude testing done by machines.

What is Exploratory Testing?

ET is an approach (or style) to testing that emphasizes the individual tester focusing on the value of their work through continuous learning and design.

Is Exploratory Testing used in Agile teams?

Yes. ET is about optimizing the value of your work given your context and so it’s a natural fit in agile projects and agile teams.

What is the definition of Exploratory Testing?

Exploratory Testing is a style (approach) of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution and test result interpretation as mutually supportive activities that run in parallel throughout the project.

Is Exploratory Testing a test design technique?

No. You can design tests in an exploratory or scripted way (to a degree each way). This is why it’s called an approach. But ET itself is not a technique (a way to group, design and interpret results of similar kinds of tests).

Does Exploratory Testing require charters?

No, but charters can certainly be helpful.

Does Exploratory Testing require a timebox?

No, but a timebox can help you create similarly sized sessions for Session Based Test Management.

Over time I hope to add to these.

Last Updated: 03/07/2022

Exploratory Testing Charters

An exploratory testing charter is a mission statement for your testing. It helps provide structure and guidance so that you can focus your work and record what you find in a constructive way.

How to Write an Exploratory Charter

My favorite way to structure exploratory testing charters is to base them on “A simple charter template” from Elisabeth Hendrickson’s awesome book Explore It!, Chapter 2, page 67 of 502 (ebook).

Explore (target) With (resources) To discover (information)

  • Target: What are you exploring? It could be a feature, a requirement, or a module.
  • Resources: What resources will you bring with you? Resources can be anything: a tool, a data set, a technique, a configuration, or perhaps an interdependent feature.
  • Information: What kind of information are you hoping to find? Are you characterizing the security, performance, reliability, capability, usability or some other aspect of the system? Are you looking for consistency of design or violations of a standard?

Examples of Charters

While this is my favorite way to structure exploratory testing charters (I think it’s a really straightforward template) it isn’t the only way. As a way to learn I’ve compiled a list of example charters you can look at, which can be found on my Guides Page.

A few examples include:

  • Explore input fields with JavaScript and SQL injection attacks to discover security vulnerabilities.
  • Check the UI against Apple interface guidelines.
  • Identify and test all claims in the marketing materials.

You can see some use the template and some don’t.

How do charters relate to Session Based Testing or Session Based Test Management?

Exploring can be an open-ended endeavor which is both good and bad. To help guide those explorations you can organize your effort with charters and then time-box those charters into “sessions”. You can use sessions to measure and report on the testing you do without interrupting your workflow; this is called Session Based Test Management.

You can use exploratory charters without using Session Based Test Management. I’ve seen many examples of people using Charters in JIRA stories as part of the testing criteria for sign off for testers, developers and product managers.

Good Tests vs Bad Tests and why you shouldn’t repeat them

A little rant on this concept of Good Tests vs Bad Tests and whether a good test (case) is a repeatable one.

Here’s the imperfect transcript:

Hi everyone, Chris Kenst here.

So my topic today is I want to talk about the difference between a good test and a bad test.

There are two reasons that I bring this up today:

  • One, I saw yet another article talking about how to create effective test cases, which went on to say they should be repeatable. That is, of course, wrong: your tests should not be repeatable.
  • Two I have been asking this question as I hire for my third software tester here at Promenade group.

I am hiring somebody that’s going to be a senior tester. And I think one thing that a senior-level person should be able to do is differentiate between what makes a good test good and a bad test bad.

But so far, of the 15, maybe 20 at this point, candidates I have asked this over email, very few have actually been able to describe what makes a good test and what makes a bad test. Most candidates seem to fall into the trap I saw in this article, which is just bad advice: every test is a scenario regardless of what it is that you’re doing.

And this strikes me as very odd because it deeply violates this understanding of providing value.

So if you were to hire somebody to do a job and, regardless of how the job changed over time, they were to do it the exact same way regardless of changing circumstances, you would think that’s a bad candidate, somebody you wouldn’t want to hire.

It’s the same thing with test design. A good test versus a bad test is about value and it’s about focusing on what that test is going to do: its Mission. And so you can’t have the same approach to every kind of test because that means you’re not providing value. You’re not actually aiming to achieve your mission with the testing.

So if you see or read advice about good tests versus bad tests and it says you should make sure that all your tests are repeatable, that’s just not the case.

For example, with boundary analysis and equivalence class partitioning, you don’t want repeatable tests because they’re not built to be repeatable. Sure, you could repeat them, but then the test is no longer very useful and it’s probably not worth running again.

So it’s really challenging.

I will probably write an article about this too, but if you’re ever thinking about how to write good tests versus bad tests, think about the value and think about the mission of the test itself. Then try to think of all the different attributes that might make something more valuable or less valuable.

And then focus on those things that deliver value, whether that’s their power, their repeatability, or how much easier they make coverage and understanding. Because the better we get as a community, the better we become at testing, and we’ll see that there are lots of different ways we can run tests.

And the more we understand that, the more skilled and knowledgeable about our craft we become. The other kind of cool thing is that you can really set yourself apart from other people who may be interviewing.

If you can easily tell someone what a good test is and what a bad test is. If you are looking at someone’s tests and going: these are all scenario tests, you built the same thing over and over again, how come you’re not varying anything?

All of a sudden you are in a much better position, out of all the other candidates because you can easily differentiate your work from their work and you can tie it to value.

So, that’s my rant today about good test versus bad test.

Have a great day, everyone.

Regression testing isn’t only about repetition

Often when I’m chatting with someone about their regression testing strategy there is an assumption that regression is all about repeating the same tests. This is a bit problematic because it ignores an important aspect which testers tend to be good at: focusing on risk. A better way to think of regression testing is that it can be applied in two different ways: procedurally and risk-focused.

Procedural Regression Tests

When I speak of procedural I mean a sequence of actions or steps followed in regular order. As I said above, this seems to be the primary way people think about regression testing: repetition of the same tests. This extends to the way we think about automating tests as well.

Procedural regression testing can be quite valuable (so far as any single technique can be). The most valuable procedural regression tests are unit tests added to our CI system and run regularly. In this way they become a predictable detector of change, which is often why we run regression tests in the first place. (Funny enough, automated UI tests are some of the most common procedural regression tests but aren’t the best detectors of change.)

The big problem with procedural regression tests is that once an application has passed a test, there is a very low probability of that test finding another bug.

Risk-Focused Regression Tests

When I speak of risk-focused I mean testing for the same risks (ways the application might fail) but changing up the individual tests we run. We might create new tests, combine previous tests, alter underlying data or infrastructure to yield new and interesting results.

To increase the probability of finding new bugs we start testing for side effects of the change(s) rather than going for repetition. The most valuable risk focused regression tests are typically done by the individual testers (or developers) who know how to alter their behavior with each pass through the system.  
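As a rough sketch of the contrast, consider a single risk: “the name field mishandles long input.” The field, the 255-character limit, and the probe characters below are all made up for illustration; the point is that the procedural version replays one input while the risk-focused version varies the input on every pass.

```ruby
# Procedural: the exact same input, every run.
PROCEDURAL_CASE = 'a' * 256

# Risk-focused: same risk, but vary the length around the boundary and
# the character used. The seed is recorded so failures stay reproducible.
def risk_focused_case(seed)
  rng    = Random.new(seed)
  length = [255, 256, 257, rng.rand(258..10_000)].sample(random: rng)
  char   = ['a', 'é', ' '].sample(random: rng)
  char * length
end
```

Running this with a fresh seed on each pass keeps the testing aimed at the same risk while changing the actual test, and logging the seed lets anyone replay an interesting failure.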

A Combined Approach

Thinking about regression testing in terms of procedural and risk focus allows us to see two complementary approaches that can yield value at different times in our projects. It also gives testers an escape from the burden that comes  with repetition while still allowing us to meet our goals.


BBST Domain Testing – An Experience Report

The Domain Testing Workbook

In late January of 2014, after the Workshop on Teaching Software Testing (WTST) at Florida Institute of Technology, Dr. Cem Kaner and Dr. Rebecca Fiedler put together a 5-day pilot course to beta test a new Black Box Software Testing (BBST) course called Domain Testing. I was one of ten participants to try it out.

Two series of BBST

  • The first BBST series (Foundations, Bug Advocacy and Test Design) came from research funded by the National Science Foundation and Dr. Kaner. Part of the agreement with the NSF was to make the materials open source while also creating a way to teach the materials online with high standards. The Association for Software Testing (AST) was the lab for the initial classes and they continue to teach them to this day.
  • BBST Domain Testing represents the next step in a second BBST series focusing on specific test design techniques (Domain, Scenario and Specification based testing). Unlike the first series, the NSF didn’t fund the development of these classes so the materials won’t be open sourced. They also won’t be taught through AST, instead they will come directly from Dr. Cem Kaner’s corporate training firm Kaner Fiedler & Associates (or KFA for short).

What is domain testing?

Although domain testing sounds like it relates to domain knowledge, it is an umbrella term for equivalence class analysis (partitioning) and boundary testing. Domain testing is a sampling strategy where possible values of a variable (anything that can change) are divided into a subset of values that are in some way equivalent (equivalence class analysis). Then tests are designed to only use one or two values from each subset, along with boundary and extreme values that increase the likelihood of exposing bugs.
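As a small sketch of that sampling strategy, assume a made-up quantity field documented to accept integers from 1 to 99. We divide the possible values into equivalence classes, then sample only each class’s boundaries plus one interior representative:

```ruby
# Equivalence classes for a hypothetical quantity field (valid range 1-99).
CLASSES = {
  too_low:  -10..0,    # invalid: below the lower boundary
  valid:    1..99,     # the documented valid partition
  too_high: 100..1000  # invalid: above the upper boundary
}.freeze

# Sample the boundary values of each class, plus one mid-class value.
def domain_samples
  samples = CLASSES.flat_map { |name, range| [[name, range.min], [name, range.max]] }
  samples << [:valid, 50] # a single interior representative
end
```

Seven tests instead of roughly a thousand candidate values, each chosen because it has a distinct chance of exposing a bug.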

Why use domain testing?

Designing tests is about making choices. Choices such as:

  • How many tests do we want to run (from an impossibly large set of potential tests)?
  • How powerful are the tests we have chosen?
  • What do we hope to learn from the tests we’ll run?

When used appropriately, domain testing can help us increase our efficiency by helping us run fewer redundant tests, and increase our effectiveness by helping us find bugs thanks to powerful tests.

The Workbook

The Domain Testing Workbook (available on Amazon or directly from Context Driven Press) was published for those interested in self-study but also with the intention of using it in a class. The book contains, among other things, a schema (a list of ideas or a step-by-step sequence of questions to answer) and examples ranging from easy, classic problems to increasingly more difficult ones. During class I used the book frequently for background context, explanations of concepts and examples, details of the Schema, and for the many ideas sitting in the book’s multiple test catalogs.

The Class

Prior to the class, my experience applying domain testing consisted of trying to overflow input and output fields. The problem with the way I was doing things, besides being ad hoc, was I didn’t utilize the power of domain testing as a sampling technique. That’s what inspired me to take the class.

The pilot class was in-person for five full days at Florida Institute of Technology’s campus. Dr. Kaner delivered lectures in the morning and afternoon followed by exercises after the lectures and concluded the day with homework for additional practice. For example, on our first day, we learned how to characterize variables and worked through classic examples that originally appeared in the book Testing Computer Software.

Our first assignment was to demonstrate we were comfortable doing a variable tour (variables are the basis for domain testing) of our sample application, FM Starting Point (a FileMaker Pro template), using Xmind. After the tour it was time to classify a smaller set of variables (a dozen) based on their data types, determine their primary dimensions, and decide what benefit, if any, would come from doing domain testing. Finally that information went into a classical table.

On our second day we pushed further. We continued to classify variables and build classical tables, only with more complexity than before. Continuing with the Schema, we turned to creating a risk equivalence table. Risk equivalence tables are essentially risk-based versions of the classical table, but you explicitly talk about the risks (or failures) your tests hope to expose and therefore describe why your test designs are powerful. The challenging and time-consuming aspect of creating risk or classical tables is coming up with failure modes for tests; luckily the workbook has catalogs you can use for inspiration.

Our third day continued our practice of variable analysis and creating classical and risk-based equivalence tables, but added in pair testing of independent variables using the ACTS tool (Microsoft’s PICT tool, which we used in Test Design, was also an option). Our final days in the class focused on putting together the individual things we learned from the Schema, culminating in a capstone project.

The capstone project allowed us to put the skill and understanding we gained from the class into action by choosing a piece of an application and working through each step of the eighteen-step schema. It was a difficult assignment but certainly the most fun part of the class. I made a lot of mistakes as I went through the steps without the assignments as a guide, but the capstone allowed me to figure out what I like and don’t like and what order to address aspects of the schema in (I didn’t like going in logical order), so going forward I’ll be more efficient and effective.

Future Classes

The pilot was a bit different from how BBST Domain Testing courses will be handled in the future. Besides the obvious in-person versus online transition, there will likely be a wide range of example applications to practice domain testing on.

At the time of the pilot (January of 2014) the class had two finished sets of real world example videos – QuickBooks (accounting software) and Electric Quilt 7 (design software). There were plans to add more real world examples including a sewing machine (not your grandmother’s sewing machine but a modern one, to serve as an example of an embedded device), a video game (for a glassbox approach), an investing application and a database application.

Taking the class changed the way I look at testing

I barely remembered the introduction of domain testing in BBST Test Design after that class was over; I just didn’t get it. Things went by so fast, at such a high level, that it was difficult to understand how to address things properly. After the class was over I couldn’t apply it to my work. This time things are different. After BBST Domain Testing, using the Schema and workbook, I get domain testing.

When I look at an application today, I have a strong sense of where the strategy can be applied and where it shouldn’t. That makes me more confident in my abilities (not over confident, I’m still a novice), gives me lots of new and interesting ideas and a place to start practicing. More importantly I can actually see myself applying the technique to the software I test on a regular basis.

When Becky and Cem asked what I thought of the class I said:

it wasn’t as easy as I thought, but it was fun!


  1. Kaner, Padmanabhan & Hoffman. The Domain Testing Workbook. Context-Driven Press, 2013.
  2. Kaner, Cem. Teaching Domain Testing: A Status Report.

This article was originally published in the February 2014 edition of Testing Circus Magazine. This was such a great learning opportunity I wanted to highlight it by slightly updating and reposting it.

How To Run Your Selenium Tests with Headless Chrome

The Problem

If you want to run your tests headlessly on a Continuous Integration (CI) server you’ll quickly realize that you can’t with an out-of-the-box setup since there is no display output for the browser to launch in. You could use a third party library like Xvfb (tip 38) or PhantomJS (tip 46) but those can be hard to install and aren’t guaranteed to be supported in the long run (like PhantomJS).

A Solution

Enter Headless Chrome. Headless is simply a mode you can put Chrome into that allows it to run without displaying on screen but still gets you the same great results. This is a better option than running Chrome inside something like a Docker container, where the container actually uses Xvfb.

Starting with Chrome 59 (Chrome 60 for Windows) we can simply pass Chrome a few configuration options to enable headless mode.

An Example in Ruby

Before we start make sure you’ve at least got the latest version of Chrome installed along with the latest version of ChromeDriver.

Let’s create a simple Selenium script (the example is posted below).

  1. We will pull in the requisite libraries and then create our setup method, where we will pass Chrome our headless options as command line arguments. The first add_argument of '--headless' allows us to run Chrome in headless mode. The second argument is, according to Google, temporarily required to work around a few known bugs. The third argument is optional but gives us the ability to debug our application in another browser if we need to (using localhost:9222).
  2. Now let’s finish our test by creating our teardown and run methods:

Here we are loading a page, asserting on the title (to make sure we are in the right place), and taking a screenshot to make sure our headless setup is working correctly. Here’s what our screenshot looks like:


Expected Behavior

When we save our file and run it (e.g. ruby headless_chrome.rb) here is what will happen:

  1. A headless Chrome instance launches (no browser window appears)
  2. Test runs and captures a screenshot
  3. Browser closes


Hopefully this tip has helped you get your tests running smoothly locally or on your CI Server. Happy Testing!

Additional References

This was originally written for and posted on the Elemental Selenium newsletter. I liked it so much I decided to cross post it with some updates.


Good and Bad UI Test Automation explained – Inspired by Richard Bradshaw’s Tweets

Generally speaking there’s a scary trend with the influx of people interested in test automation where (seemingly) everyone wants to automate at the UI level. For example, the phrase “test automation” seems to be synonymous with UI automation, which in turn seems to mean using Selenium. To be fair there are numerous reasons for this. First, if you primarily focus on functional testing through the UI, this application of automation is the most relevant and straightforward to understand. Second, there are a lot of resources out there (books, blogs, classes) for Selenium. Third, it is much easier to write some blackbox automation at the UI level than to develop competence at writing automation at the API or unit level. (You should still learn to automate at these lower levels.)

So Richard took to Twitter to explain his thoughts. Let’s take a look:

Before we go very far, let’s look at what Richard means by targeted tests.

Tests are “targeted” if they are designed to reveal a specific risk or failure (we might call this risk-based). If you are doing risk-based testing, your tests should ALWAYS be designed this way, regardless of how you intend to execute the tests (automated or not).

Seams are how you use the system’s implementation to help you build more streamlined and reliable automated tests. Seams can be APIs, mocks, stubs, or faked dependencies of your real system.

Want some examples of seams? Check out this blog post by Angie Jones.

Yes, most times we can use better designed tests! Want to test the various ways to get to a particular feature? Use PATH testing. Want to test the feature itself? Use functional or domain testing, etc. Some test techniques will be better at surfacing one type of problem and bad at surfacing another. It’s not always necessary to design your UI tests as long feature rich Scenarios.

When we talk of interfaces and automating at the lowest level it is usually a good time to mention this is what the Test Automation Pyramid tries to describe. There are lots of versions of this but I recommend Alan Page’s Test Automation Snowman.

If you can find the failure by writing a test lower in the stack (at the API level instead of the UI level, or at the unit level instead of the API level), the test runs faster and you get feedback faster. If you do have to write the test at a higher level like the UI, can you find some way to decrease the setup time / data of the test? (See Angie Jones’s blog post above.) If you aren’t sure, maybe thinking about whether you are testing the UI or testing through the UI will help.

Yes! Adding testability like IDs, classes, data-test attributes, setting up test data, mocking, etc. should be done as quickly and dependably as possible. If you can’t do this on your own (or you have a different process for making changes) bring it up during sprint planning in order to get the development team’s help building a solution.

This IS a thing. Selenium is great, it’s open source and the WebDriver protocol is the soon-to-be de facto standard for interacting with browsers. However there are lots of tools, some will use Selenium under the hood and others that are completely different (think Cypress.io). Depending on the problem you need to solve, you may find a specific tool much better than Selenium. GitHub is a really good resource for finding open source tools and when you find something that might work, give it a try.

If you want to write good UI test automation your tests should be ideally suited for the UI (and not some other layer), targeted, reliable, use seams where possible and cover a specific risk. If you aren’t there yet, work to refactor your test code until you can get there.

Did I miss anything about Richard’s tweet storm that was important?

Opting out of A/B Tests while Running your Automated Tests

At Laurel & Wolf, we extensively introduce & test new features as part of A/B or split testing. Basically we can split traffic coming to a particular page into different groups and then show those groups different variations of a page or object to see if those changes lead to different behavior. Most commonly this is used to test if small (or large) changes can drive some statistically significant positive change in conversion of package or eCommerce sales. The challenge when running automated UI tests and A/B tests together is that your tests are no longer deterministic (having one dependable state).

Different sites conduct A/B tests differently depending on their systems. With our system we’re able to bucket users into groups once they land on the page, which then sets the A/B test and its variant options either in localStorage or as a cookie. Here are a few ways we successfully dealt with the non-determinism of A/B tests.

Opting out of A/B Tests (by taking the control variant):

  1. Opt into the Control variant by appending query params to the route (e.g. www.kenst.com?nameOfExperiment=NameOfControlVariant). Most frameworks offer the ability to check for a query param to see if it should force a variant for an experiment, and ours was no exception. For the tests we knew were impacted by A/B tests we could simply append the query params for the experiment name and the control variant to the base_url to avoid changing anything. This method didn’t require us to care about where the experiment and variant were set (either as a cookie or in localStorage) but really only worked for a single A/B test per test.
  2. Opt into the Control variant by setting localStorage. We often accomplished this by simply running some JS on the page that would set our localStorage (e.g. @driver.execute_script("localStorage.setItem('NameOfExperiment', 'NameofControlVariant')")). Depending on where the A/B test sat within a test and/or if there was more than one, this was often the easiest way to opt out, assuming the A/B test was set in localStorage.
  3. Opt into the Control variant by setting a cookie. Like the localStorage example, we often accomplished setting the cookie in the same way using JS (e.g. @driver.manage.add_cookie(name: 'NameOfExperiment', value: 'NameofControlVariant')). Again this was another simple way of allowing us to opt out of an experiment, or even multiple experiments when necessary.
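The first approach boils down to plain string handling. A minimal sketch, using the made-up experiment and variant names from the example above:

```ruby
# Build a URL that forces the control variant of a hypothetical experiment.
def with_control_variant(base_url, experiment, variant)
  separator = base_url.include?('?') ? '&' : '?' # append safely if params already exist
  "#{base_url}#{separator}#{experiment}=#{variant}"
end

with_control_variant('https://www.kenst.com', 'nameOfExperiment', 'NameOfControlVariant')
# => "https://www.kenst.com?nameOfExperiment=NameOfControlVariant"
```

Pointing the test at the returned URL opts it into the control variant before any bucketing happens.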

I know there are a few other ways to approach this. Alister Scott mentions a few ways here and Dave Haeffner mentions how to opt out of an Optimizely A/B test by setting a cookie here. How do you deal with your A/B tests? Do you opt-out and/or go for the control route as well?


What are Quicktests and when are they used?

What are Quicktests?

Tests that don’t cost much to design, are based on some estimated idea for how the system could fail (risk-based) and don’t take much prior knowledge in order to apply are often called quicktests (sometimes stylized as quick tests or even called attacks).

When are Quicktests used?

When I’m about to test a new application the first part of my strategy is often to start with a few quicktests. I might choose to test around a particular interface, around error handling or even around boundaries cases. Similarly, when I’m about to test a new feature I can take a strategy where I look for common places where bugs might exist based on past experience (keeping an internal list of common bugs is a very good idea).

Boundaries are a good quicktest example: if there’s an input field on a billing or contact form we might decide to try testing some boundaries. Try to figure out what the upper limit is, the lower limit, enter no information, try some weird characters, etc. It turns out you don’t really need to know much about the program to be effective with this approach, and there are some handy tools that can help you.
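A boundary quicktest like that amounts to a handful of cheap probe values. In this sketch the 255-character limit is just a guess, which is exactly the point of a quicktest:

```ruby
# Cheap boundary probes for a text field, needing only a guessed max length.
def boundary_probes(guessed_max = 255)
  {
    empty:       '',                      # enter no information
    one_char:    'a',                     # smallest non-empty input
    at_max:      'a' * guessed_max,       # at the guessed upper limit
    over_max:    'a' * (guessed_max + 1), # just past the guessed limit
    weird_chars: "'\";<script>ñ🙂",       # quoting, markup, non-ASCII
    whitespace:  "  \t  "                 # nothing but whitespace
  }
end
```

Feed each probe into the field and watch for surprises; no deeper product knowledge required.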

The vast majority of us (developers, testers) use quicktests on a daily basis. And why wouldn’t we? They’re great if they work, and if they don’t you can switch to something else. It’s not until these tests run out of steam that we’ll either need to switch focus to a new failure type or start testing the product in a deeper way. Hopefully by then we’ve gained some knowledge about the product and built a strategy around where we think additional valuable failures might be, so we can make better decisions about where and what to test.

More Quicktest Examples:

  • Interface tests
  • Boundaries
  • Initial States
  • Error Handling
  • Happy paths
  • Variable tours
  • Blink Tests
  • And many more… I’ve collected a few dozen examples and put them on this GitHub list.

When should Quicktests NOT be used?

Not every test is a quicktest, nor should it be. While boundaries are a good quicktest example, applying domain tests (specifically equivalence class partitioning) is not. In order to partition our fields (sometimes called variables) to develop our equivalence classes we need to know the underlying data types of our variables (we might not know for sure, but we can make educated guesses). We need to know a variable’s primary purpose and what other variables might be dependent upon it. All of these things take time and effort to understand and apply. Such tests are still risk-based, but they require more knowledge of the underlying system and are more costly to design.

It’s important to understand this concept of quicktests for a few reasons:

  • Test Strategy. Depending on the context of our work our test strategy should probably consist of quick and deeper tests.
  • Tool Selection. Really great tools like BugMagnet help with quicktests but not deeper boundary + equivalence class tests.
  • Creating Examples. It’s hard to find good examples of deeply applied test techniques. Most are only quicktests.

As I look around the web for the available resources on teaching test design many of the examples we have of particular test types or techniques revolve around showing these quicktests. Hey put in some boundaries. You are done! Yay. (facepalm) Starting with inexpensive tests optimized for a common type of bug is a great start but not a great ending. Here’s to better endings!



Selecting a few Platform Configuration Tests

I’ve been developing a GUI acceptance test suite to increase the speed of specific types of feedback about our software releases. In addition to my local environment I’ve been using Sauce Labs to extend our platform coverage (mostly by browsers and operating systems) and to speed up our tests by running more tests in parallel.

This is pretty similar to what I consider traditional configuration testing: making sure your software can run in various configurations with variables such as RAM, OS, video card, etc. Except on the web the variables are a little different, and I am more concerned with browser differences than, say, operating system differences. Nevertheless, with over 700 browser and OS platforms at Sauce Labs I still need to decide what configurations I start with and what configurations I add over time in the hope of finding failures.


I figured the best place to start was with our current users, and since the only “hard” data we had came from Google Analytics, I pulled up the details of two variables (OS and Browser). Using TestingConference.org as a replacement for my company’s data, our most commonly used platform configurations include:

  • Browsers:
    • Chrome 47 & 48,
    • Firefox 43 & 44,
    • Safari 8 & 9, and
    • Internet Explorer 10 & 11
  • Operating Systems:
    • Windows 7,
    • Windows 8.1,
    • Windows 10,
    • Mac OSX 10.11 and
    • Mac OSX 10.10

Excluding a few constraints, like IE only runs on Windows and Safari only runs on Mac, testing browsers and operating systems in combination could potentially yield up to 40 different configurations (8 browsers x 5 operating systems). Also, we designed our application to be responsive, and at some point we probably want to test for a few browser resolutions. If we’ve got 40 initial configurations and 3 different browser resolutions, that could potentially yield up to 120 different configurations. Realistically, even for an automated suite running one or two functional tests per configuration, that’s too many tests to run.
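To make those constraints concrete, a quick sketch (browser and OS lists as above) shows how much they trim the raw cross product before any pairwise reduction:

```ruby
browsers = ['Chrome 47', 'Chrome 48', 'Firefox 43', 'Firefox 44',
            'Safari 8', 'Safari 9', 'IE 10', 'IE 11']
oses     = ['Windows 7', 'Windows 8.1', 'Windows 10',
            'OS X 10.10', 'OS X 10.11']

# Drop combinations that can't exist: IE only runs on Windows and
# Safari only runs on Mac.
valid = browsers.product(oses).reject do |browser, os|
  (browser.start_with?('IE') && !os.start_with?('Windows')) ||
    (browser.start_with?('Safari') && !os.start_with?('OS X'))
end

puts "#{browsers.size * oses.size} raw combinations, #{valid.size} are valid"
# => 40 raw combinations, 30 are valid
```

Constraints alone get us from 40 combinations down to 30; a pairwise tool like ACTS is what shrinks the set further.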

Reduce the number of tests

We can start by adding in those constraints I mentioned above and then focusing on just the variables we want to ensure we have coverage of. To get a better picture of the number of possible configuration tests I used the ACTS (Automated Combinatorial Testing for Software) tool and based my constraints on what Sauce Labs has available today for configurations. After I added OS and Browsers it looked something like this:

ACTS Window

If I wanted every browser and operating system in combination to be covered (all-pairs) then according to ACTS there aren’t 40 different configurations, just 24 configuration options. That’s more manageable but still too many to start with. If I focus on my original concerns of covering just the browsers, I get back to a more manageable set of configuration options:

Test  OS            Browser
1     Windows 8.1   Chrome 47
2     Windows 10    Chrome 48
3     OS X 10.11    Firefox 43
4     Windows 7     Firefox 44
5     OS X 10.10    Safari 8
6     OS X 10.11    Safari 9
7     Windows 10    IE 10
8     Windows 7     IE 11

8 configuration options are way more manageable and less time consuming than 24 options, considering each one of those configurations will run a few dozen functional tests as well.

A Good Start

Selecting the configurations is probably the easiest part of configuration testing (and the only thing I’ve shown) but I’ve found it’s worth thinking through. (The harder part is designing the automated acceptance tests to produce useful failures.) Using ACTS at this point may seem like overkill when we could have just selected the browsers from the beginning, but it didn’t take much time and should make it easier in the future when we want to add more variables or change the values of our existing variables.