Good and Bad UI Test Automation explained – Inspired by Richard Bradshaw’s Tweets

Generally speaking, there’s a scary trend with the influx of people interested in test automation: (seemingly) everyone wants to automate at the UI level. The phrase “test automation” has become synonymous with UI automation, which in turn seems to mean using Selenium. To be fair, there are numerous reasons for this. First, if you primarily focus on functional testing through the UI, this application of automation is the most relevant and straightforward to understand. Second, there are a lot of resources out there (books, blogs, classes) for Selenium. Third, it is much easier to write some blackbox automation at the UI level than to develop competence at writing automation at the API or unit level. (You should still learn to automate at those lower levels.)

So Richard took to Twitter to explain his thoughts. Let’s take a look:

Before we go very far, let’s look at what Richard means by targeted tests.

Tests are “targeted” if they are designed to reveal a specific risk or failure (we might call this risk-based). If you are doing risk-based testing, your tests should ALWAYS be designed this way, regardless of how you intend to execute the tests (automated or not).

Seams are how you use the system’s implementation to help you build more streamlined and reliable automated tests. Seams can be APIs, mocks, stubs, or faked dependencies of your real system.

Want some examples of seams? Check out this blog post by Angie Jones.

Yes, most of the time we can use better-designed tests! Want to test the various ways to get to a particular feature? Use path testing. Want to test the feature itself? Use functional or domain testing, etc. Some test techniques will be better at surfacing one type of problem and worse at surfacing another. It’s not always necessary to design your UI tests as long, feature-rich scenarios.

When we talk of interfaces and automating at the lowest level, it is usually a good time to mention that this is what the Test Automation Pyramid tries to describe. There are lots of versions of this, but I recommend Alan Page’s Test Automation Snowman.

If you can find the failure by writing a test lower in the stack (at the API level instead of the UI level, or at the unit level instead of the API level), the test runs faster and you get feedback sooner. If you do have to write the test at a higher level like the UI, can you find some way to decrease the setup time and data of the test? (See Angie Jones’s blog post above.) If you aren’t sure, thinking about whether you are testing the UI or testing through the UI may help.
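For example, here is a minimal Ruby sketch of cutting setup time by seeding test data through an API seam instead of clicking through the UI. The staging host, /api/users endpoint, and payload are hypothetical placeholders; substitute your application’s own API.

    require 'json'
    require 'net/http'
    require 'uri'

    # Create a user through the API instead of driving a multi-step signup
    # flow in the browser. Endpoint and payload are hypothetical.
    def create_test_user(name:, email:)
      uri = URI('https://staging.example.com/api/users')
      request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
      request.body = { name: name, email: email }.to_json
      response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
        http.request(request)
      end
      JSON.parse(response.body) # e.g. returns the new user's id
    end

    user = create_test_user(name: 'Test User', email: 'test@example.com')
    # The UI test can now start directly on the page under test.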

Yes! Adding testability hooks like IDs, classes, data-test attributes, test data setup, mocking, etc. should be done as quickly and dependably as possible. If you can’t do this on your own (or you have a different process for making changes), bring it up during sprint planning to get the development team’s help building a solution.
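To illustrate what those hooks buy you, here’s a minimal sketch using the Ruby Selenium bindings, assuming the team has added data-test attributes to the markup (the page URL and attribute values are hypothetical):

    require 'selenium-webdriver'

    driver = Selenium::WebDriver.for :chrome
    driver.get 'https://staging.example.com/checkout' # hypothetical page

    # Stable: a dedicated data-test hook survives copy and styling changes.
    driver.find_element(css: "[data-test='submit-order']").click

    # Brittle: breaks as soon as someone renames a class or reorders elements.
    # driver.find_element(css: '.btn.btn-primary:nth-child(3)').click

    driver.quit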

This IS a thing. Selenium is great: it’s open source, and the WebDriver protocol is on its way to becoming the official W3C standard for interacting with browsers. However, there are lots of tools; some use Selenium under the hood while others are completely different (think Cypress.io). Depending on the problem you need to solve, you may find a specific tool much better suited than Selenium. GitHub is a really good resource for finding open source tools, and when you find something that might work, give it a try.

If you want to write good UI test automation, your tests should be ideally suited to the UI (and not some other layer), targeted, reliable, use seams where possible, and cover a specific risk. If you aren’t there yet, work to refactor your test code until you can get there.

Did I miss anything about Richard’s tweet storm that was important?

Opting out of A/B Tests while Running your Automated Tests

At Laurel & Wolf, we extensively introduce and test new features through A/B or split testing. Basically, we can split the traffic coming to a particular page into different groups and then show those groups different variations of a page or object to see if those changes lead to different behavior. Most commonly this is used to test whether small (or large) changes can drive a statistically significant positive change in conversion of package or eCommerce sales. The challenge when running automated UI tests and A/B tests together is that your tests are no longer deterministic (they no longer have one dependable state).

Different sites conduct A/B tests differently depending on their systems. With our system, we’re able to bucket users into groups once they land on the page, which then sets the A/B test and its variant options either in localStorage or as a cookie. Here are a few ways we successfully dealt with the non-determinism of A/B tests.

Opting out of A/B Tests (by taking the control variant):

  1. Opt into the Control variant by appending query params to the route (e.g. www.kenst.com?nameOfExperiment=NameOfControlVariant). Most frameworks offer the ability to check for a query param to see if it should force a variant for an experiment, and ours was no exception. For the tests we knew were impacted by A/B tests, we could simply append the experiment name and the control variant as query params to the base_url to avoid changing anything. This method didn’t require us to care where the experiment and variant were set (cookie or localStorage), but it really only worked for a single A/B test in a given test.
  2. Opt into the Control variant by setting localStorage. We often accomplished this by running some JS on the page to set localStorage (e.g. @driver.execute_script("localStorage.setItem('NameOfExperiment', 'NameofControlVariant')")). Depending on where the A/B test sat within a test and/or whether there was more than one, this was often the easiest way to opt out, assuming the A/B test was set in localStorage.
  3. Opt into the Control variant by setting a cookie. Like the localStorage example, we set the cookie directly from the test code (e.g. @driver.manage.add_cookie(name: 'NameOfExperiment', value: 'NameofControlVariant')). Again, this was another simple way of opting out of an experiment, or even multiple experiments, when necessary. (All three techniques are shown together in the sketch after this list.)
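Putting it all together, here’s a minimal Ruby sketch of the three techniques using the Selenium bindings, with the same placeholder experiment and variant names from the examples above:

    require 'selenium-webdriver'

    driver = Selenium::WebDriver.for :chrome
    base_url = 'https://www.kenst.com'

    # 1. Force the control variant via a query param (framework-dependent).
    driver.get "#{base_url}?nameOfExperiment=NameOfControlVariant"

    # 2. Force the control variant via localStorage. localStorage is scoped
    # per origin, so a page on the domain must already be loaded.
    driver.execute_script(
      "localStorage.setItem('NameOfExperiment', 'NameofControlVariant')"
    )

    # 3. Force the control variant via a cookie. Cookies can likewise only
    # be added once the browser is on the relevant domain.
    driver.manage.add_cookie(name: 'NameOfExperiment', value: 'NameofControlVariant')

    # Reload so the page picks up the forced variant before assertions run.
    driver.navigate.refresh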

I know there are a few other ways to approach this: Alister Scott mentions a few here, and Dave Haeffner shows how to opt out of an Optimizely A/B test by setting a cookie here. How do you deal with your A/B tests? Do you opt out and/or go the control-variant route as well?

Oh, and if this article worked for you, please consider sharing it or buying me a coffee!