Five for Friday – April 17, 2020

Welcome to Friday; here are five points worth sharing. A few of these are “work” related, but not all. Here’s to finding a balance between working from home, home time, and personal time:

  • I’m finishing my second week of One Hundred Push Ups. It’s hard but fun and takes less than 10 minutes per day. I’ve hit a few walls (because I’m out of shape), but by the next workout (I do three a week) I usually break through the wall and continue on.
  • Arnold Schwarzenegger has been busy on social media helping people focus on what they can control. He also shared his old no-gym, bodyweight-only home workout routine. It’s kind of legit. Men’s Health has it. I’ll try it after I can do 100 push-ups!
  • For parents looking for ways to plan their children’s education while remote: we just signed up for ABCmouse.com for our 2.5-year-old. Between this and the iOS app Endless Learning Academy, I think it’s going well.
    • We’ve been actively experimenting with learning plans to keep the kid developing. He doesn’t go to preschool yet, but then again, it seems like no kids currently do.
  • Tea-Time with Testers magazine has a new issue after more than two years on hiatus. It includes an article that hit home with me, How To Read A Difficult Book by Klára Jánová and James Bach. I too have tried reading An Introduction to General Systems Thinking several times and haven’t gotten what I wanted from it. Perhaps I need to try what Klára did.
  • There are some good bits of advice in the a16z podcast Moving to Remote Development (and work). One is about the kinds of bad behaviors that get amplified when moving to remote work (especially for managers). Another is a discussion around communication: using video, being expressive, and generally sending the right signals.

And a friendly reminder that with Amazon delivering slower than normal for non-essential things, eBay sellers are shipping faster than ever!

Upgrading to WebDriverIO 5

A few weeks ago I finished upgrading our implementation of WebDriverIO from version 4 to version 5. The impetus for the upgrade was an announcement from the WebDriverIO Twitter account of a new beta version 6, to be quickly followed by a finished release (it’s already here). One thing was clear: you have to be on v5 to go to v6, and each subsequent version would only be supported for a year. Time to upgrade!

I’d been on version 4 since I originally deployed WebDriverIO in mid-2018. I knew version 5 was out, but I had no immediate plans to upgrade given all of the warnings around breaking changes.

Preparation

This isn’t to say I wasn’t preparing myself. I created a JIRA ticket to outline what the work might look like. I was going through the TestAutomationU course on WebDriverIO, which uses v5, and of course practicing in my own repo. With version 4’s end of life confirmed, it was time to move on.

I scoped out more of the work in JIRA (and also made a story to upgrade to v6). I bookmarked a few important articles, including the WebDriverIO blog post announcing version 5, which highlights, among other things, how to upgrade. Then I made note of the methods that were changing or being replaced, for easier reference.

Upgrading to WebDriverIO 5

Finally it was time for the upgrade itself. The first question I needed to answer was logistical: how do I approach making the changes? Create a whole new repo, install the new packages, and then move my tests over? Or just upgrade in place? Despite the daunting feeling I had, I figured it would be easier to upgrade in place and deal with the test failures as they came. This would introduce fewer changes than trying to move everything over. Then it was time to follow the recommendations in the blog post:

  1. Remove node_modules
  2. Remove all wdio-* packages from package.json
  3. Install the latest version of webdriverio: $ npm install webdriverio@latest
    1. Note: if you did this today you’d want to pin the install to v5 ($ npm install webdriverio@5); however, I’d recommend going straight to v6 instead of going to v5 and then v6.
  4. Install the new wdio testrunner: $ npm install @wdio/cli --save-dev
  5. I have multiple configuration files, so instead of backing them up I created a new one as part of the WebdriverIO configuration wizard, then eventually migrated / pruned the original ones until they were what I wanted.
  6. Rerun the configuration wizard: $ npx wdio config (a minimal sketch of the kind of config it generates follows this list)
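
For reference, here’s a rough sketch of the kind of wdio.conf.js the v5 wizard produces. The spec path, baseUrl, and service choice below are placeholders rather than my actual setup; the wizard also installs the matching @wdio/* packages (runner, framework, reporter) as devDependencies based on your answers.

    // wdio.conf.js -- minimal v5-style config (values are illustrative)
    exports.config = {
      runner: 'local',                          // v5 splits execution into runner packages
      specs: ['./test/specs/**/*.js'],          // placeholder spec location
      maxInstances: 10,
      capabilities: [{ browserName: 'chrome' }],
      logLevel: 'info',
      baseUrl: 'http://localhost:8080',         // placeholder base URL
      framework: 'mocha',                       // whichever framework you picked in the wizard
      reporters: ['spec'],
      services: ['selenium-standalone'],        // if you chose the selenium-standalone service
      mochaOpts: { ui: 'bdd', timeout: 60000 },
    };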

All the Broken Things

Bam! WebDriverIO 5 deployed. Kind of. The easy part was done; next up was running each test one at a time, starting with the easiest / shortest tests. (I usually start with low-hanging fruit so that I build momentum.) When a test failed, I’d find out where and why (due to a rename or deprecation) and make a change. Rinse and repeat for the whole test suite.

If this sounds simple, that’s because it was. All it took was time to remember where things were and why something was done a certain way, and then make the change. 90% of the time this worked fine.
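
A typical change looked something like this (selectors and values here are made up for illustration): in v4 most commands take a selector and hang off the global browser object, while in v5 you fetch an element with $() and call the command on it.

    // Before (v4): selector-based commands on the browser object
    // browser.setValue('#username', 'tester');
    // browser.click('#login');
    // browser.waitForVisible('.dashboard', 5000);
    // const heading = browser.getText('h1');

    // After (v5): element-based commands via $(), plus some renames
    // (e.g. waitForVisible became waitForDisplayed)
    $('#username').setValue('tester');
    $('#login').click();
    $('.dashboard').waitForDisplayed(5000);
    const heading = $('h1').getText();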

Other times it was more complicated. In at least one instance I ran into a breaking change: functionality that was there before didn’t exist in the new version. I commented out two tests and moved on. (Speaking of which, I need to open a bug report about this.)

Benefits

While it’s never fun to take something that’s working and break it just to upgrade, doing so has some side benefits. I fixed a few remaining bad ternary statements. I cleaned up some unused libraries / plugins that I installed at some point for reasons I can no longer remember. I also refactored a bit of my code to make things simpler. And since the TestAutomationU course showed me how the author structured her tests, I now have a new task to break apart my larger tests into smaller, more defined tests with fewer assertions.
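
The ternary cleanup, as a hypothetical illustration (not code from my actual suite), is usually as small as this:

    // Before: a redundant ternary that just restates a boolean
    // const isLoggedIn = $('#logout').isDisplayed() ? true : false;

    // After: the expression is already a boolean, so use it directly
    const isLoggedIn = $('#logout').isDisplayed();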

All of this is to say, once you start making changes to improve one thing, it can snowball and lead to even more improvements. 


Regression testing isn’t only about repetition

Often when I’m chatting with someone about their regression testing strategy, there’s an assumption that regression testing is all about repeating the same tests. This is a bit problematic because it ignores an important thing testers tend to be good at: focusing on risk. A better way to think of regression testing is that it can be applied in two different ways: procedurally and risk-focused.

Procedural Regression Tests

When I speak of procedural, I mean a sequence of actions or steps followed in regular order. As I said above, this seems to be the primary way people think about regression testing: repetition of the same tests. It extends to the way we think about automating tests as well.

Procedural regression testing can be quite valuable (as far as any single technique can be). The most valuable procedural regression tests are unit tests wired into our CI system and run regularly. In this way they become a predictable detector of change, which is often why we run regression tests in the first place. (Funnily enough, automated UI tests are some of the most common procedural regression tests, but they aren’t the best detectors of change.)

The big problem with procedural regression tests is that once an application has passed a test, there is a very low probability of that test finding another bug.
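
To make “predictable detector of change” concrete, here’s a minimal sketch of a procedural regression check using a made-up formatPrice function: the input and expected output never change, so the check only fails when the behavior does.

    // format-price.check.js -- same input, same expected output, every CI run
    const assert = require('assert');

    // Hypothetical function under test
    function formatPrice(cents) {
      return '$' + (cents / 100).toFixed(2);
    }

    assert.strictEqual(formatPrice(1999), '$19.99');
    assert.strictEqual(formatPrice(0), '$0.00');
    console.log('formatPrice regression checks passed');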

Risk-Focused Regression Tests

When I speak of risk-focused, I mean testing for the same risks (ways the application might fail) but changing up the individual tests we run. We might create new tests, combine previous tests, or alter underlying data or infrastructure to yield new and interesting results.

To increase the probability of finding new bugs, we test for side effects of the change(s) rather than going for repetition. The most valuable risk-focused regression tests are typically done by individual testers (or developers) who know how to alter their behavior with each pass through the system.
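
One way to picture the difference is a sketch that keeps the same risk in view (“price formatting breaks for unusual amounts,” reusing the made-up formatPrice idea from above) but picks different inputs on each pass instead of replaying one fixed value:

    // Same risk, different tests each pass: sample fresh values from
    // interesting classes instead of repeating one fixed input.
    const assert = require('assert');

    // Hypothetical function under test
    function formatPrice(cents) {
      return '$' + (cents / 100).toFixed(2);
    }

    // Equivalence classes we care about; pick a new value from each per run
    const classes = {
      zero: () => 0,
      underADollar: () => Math.floor(Math.random() * 100),
      largeTotal: () => 1000000 + Math.floor(Math.random() * 1000000),
    };

    for (const [name, pick] of Object.entries(classes)) {
      const cents = pick();
      const result = formatPrice(cents);
      // The risk check stays the same: dollars with exactly two decimals
      assert.ok(/^\$\d+\.\d{2}$/.test(result), name + ': unexpected format for ' + cents);
      console.log(name + ': formatPrice(' + cents + ') -> ' + result);
    }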

A Combined Approach

Thinking about regression testing in terms of procedural and risk-focused approaches lets us see two complementary techniques that can yield value at different times in our projects. It also gives testers an escape from the burden that comes with repetition while still allowing us to meet our goals.
