Five for Friday – April 17, 2020

Welcome to Friday! Here are five points worth sharing. A few of these are “work” related, but not all. Here’s to finding a balance between working from home, home time, and personal time:

  • I’m finishing my second week of One Hundred Pushups. It’s hard but fun and takes less than 10 minutes per day. I’ve hit a few walls (because I’m out of shape), but by the next workout (I do three a week) I usually break through the wall and continue on.
  • Arnold Schwarzenegger has been busy on social media helping people focus on what they can control. Then he shared his old no gym / home workout routine using just bodyweight. It’s kind of legit. Men’s Health has it. I’ll try it after I can do 100 push ups!
  • For those parents looking for ways to plan their children’s education while remote, we just signed up for ABCmouse.com for our 2.5-year-old. Between this and the iOS app Endless Learning Academy, I think it’s going well.
    • We’ve actively been trying to experiment with learning plans to keep the kid developing. He doesn’t go to preschool yet but then again it seems like no kids currently do.
  • Tea-Time with Testers magazine has a new issue after more than two years on hiatus. It includes an article that struck home with me called How To Read A Difficult Book by Klára Jánová and James Bach. I’ve also tried reading An Introduction to General Systems Thinking several times and have not gotten what I wanted from it. Perhaps I need to try what Klára did.
  • There are some good bits of advice in the a16z podcast: Moving to Remote Development (and work). One is about the kinds of bad behaviors that get emphasized when moving to remote work (especially for managers). Another is a discussion around communication, using video, how to be expressive and generally send the right signals.

And a friendly reminder that with Amazon delivering slower than normal for non-essential things, eBay sellers are shipping faster than ever!

Quality Equals Profit – Understanding Quality Helps Us Test

A coworker sent me an email about a conversation he had with one of our executives about risk and how quality played into her decision making:

“She [the executive] was giving me some insight…

As we were discussing the balance of quality policies (as we experienced them in the beta program) and balancing them against expediency she mentioned that quality equals profits. Ultimately this is because quality means a greater degree of predictability, and that greater certainty influences commercial policy to take risk out of competitive pricing, which means more deals won, and also means revenue won’t be eroded by fixing product issues once deployed (and the negative marketing spin they can create).”

How fascinating! This executive thinks quality is about predictability and reducing the risk of selling! I’ve worked at a few large companies where teams of programmers, analysts, testers, support people, professional services, and others are brought together from various functional areas for a particular project. I’ve also worked at small startups where I sit next to all my engineers. In each case, regardless of the organization’s size, the testing process mostly works the same way.

As things are scoped (through planning or building), the information that flows out from the project helps us testers start to understand the product under test, how much we’ll have to test (scope), what our requirements are (both written and unwritten), who makes up the team, and so on. But it’s been my experience that with all these processes going on, we end up forgetting or missing the most important thing for understanding our project:


Our mission as understood by our stakeholders! For testers this mission will come in the form of an information objective (or multiple objectives). Our information objective is very important because it dictates the problem or problems we are trying to solve for our customers and/or stakeholders (whom I’ll refer to collectively as stakeholders from now on). To make matters more complex, it changes with each project, even if we’re working on the same product or with the same team.

To help understand our testing objectives and our overall mission, we should be asking what our customers expect from this project. Just because some written requirement states something is supposed to work one way, does that mean our stakeholders really want it? Would they in fact accept something else that addresses their problem in a slightly different way? Matt Heusser uses the term “desirements” instead of requirements because when you really understand your stakeholders, you realize they are willing to accept changes as long as the end goal is met.

As we begin to understand our testing objectives we then need to understand from those same stakeholders what quality means to them. If you are like me, you’ve probably worked on many teams and/or projects where the assumption was “whatever I deem to be quality is quality”.  For example, in the quote at the beginning it sounds like this executive is interested in quality as it relates to controlling costs and reducing risks (at least the risk of spending more money to fix things later). This could lead to more questions like what information do they need (information objective) that would help them make better decisions about controlling costs and/or spending more money to fix problems later? Is it possible that testing can help uncover this information?

Perhaps by understanding what problems (or types of problems) exist in the system now we can make fixing problems in the future more predictable.

If this was the consensus across our stakeholders, and/or if this stakeholder was the stakeholder that mattered, this information objective would drive our mission and our test strategy. If this wasn’t the stakeholder who mattered most but just one of several, we’d also want to know what the other stakeholders think is important. What is quality to them? What information do they need from testing?


Quality is subjective, and we should expect to get a different answer from each of our stakeholders, but that’s OK. Knowing what quality is, what each group of stakeholders expects, and to whom certain things are important can lead to a better understanding of the priorities of the test effort. At the very least it will influence how bugs are fixed.

Influence how bugs are fixed? Yes. Let me give you an example.

Say we have a stakeholder who thinks quality means the branding of the system (logos, trademarks, text with the company name) conforms to the latest design guidelines, and we know this because we asked their group what quality meant and what information they needed to see to sign off on the release. If we (the testers) file a bug that says something like “logo isn’t the appropriate size per the design guidelines,” it’s entirely possible a programmer would consider it low priority and skip it. If this stakeholder is important to the project team, we can cite who would be affected and the impact of NOT fixing the bug. Assuming the stakeholder matters, this might be enough evidence for the programmer to rate the bug a higher priority.

The opposite could also be said. By knowing what our stakeholders expect from the testing mission we can also identify biases certain groups will have.

Going back to the example at the beginning: if we know our executive thinks quality is profit and wants testing to provide information to help control costs, we know they may be unlikely to get behind fixes or features that don’t fall into this category, no matter how beneficial those changes are to another stakeholder or group of stakeholders. So we may need to provide additional information in bugs or change requests to address that bias.

It comes down to this: the more I know about my stakeholders, the greater the effect on my mission and my testing objectives, and eventually the easier it becomes to appeal to and influence the person deciding whether to fix a bug or implement a change.

Testing is too large a task to wander through randomly without thinking about or trying to understand the underlying mission and objectives of testing; instead it requires sampling. Asking what quality means to your stakeholders is one way to sample. Language influences the way we think about and deal with problems.

It may sound odd, as testers, to equate quality to profits (maybe you think quality is reputation, credibility, no UI bugs, no Easter eggs, etc.) but if we don’t ask these basic questions we may misunderstand our testing mission.

However if we do ask questions like who are the people that matter, what do they think quality is and what do they need to understand from testing, then we end up dealing with the right problems for the right people.

These days I make it a point to understand my stakeholders and talk directly to them, even the obvious ones I might be sitting next to, because so much of what I do is affected by the information they give me. I ask them what quality means and what information is important for them to understand, which leads me to develop my information objectives, my mission, and eventually my test strategy.

This article was originally published in the December 2013 edition of Testing Circus Magazine. It has been updated and reposted because it highlights an important concept I was grappling with in 2013 and which, in 2019, I’d state this way: customers are the only ones capable of judging and evaluating the quality of our product. Enjoy!

Opting out of A/B Tests while Running your Automated Tests

At Laurel & Wolf, we extensively introduce & test new features as part of A/B or split testing. Basically we can split traffic coming to a particular page into different groups and then show those groups different variations of a page or object to see if those changes lead to different behavior. Most commonly this is used to test if small (or large) changes can drive some statistically significant positive change in conversion of package or eCommerce sales. The challenge when running automated UI tests and A/B tests together is that your tests are no longer deterministic (having one dependable state).

Different sites conduct A/B tests differently depending on their systems. With our system we’re able to bucket users into groups once they land on the page, which then sets the A/B test and its variant options either in localStorage or as a cookie. Here are a few ways we successfully dealt with the non-determinism of A/B tests.

Opting out of A/B Tests (by taking the control variant):

  1. Opt into the Control variant by appending query params to the route (e.g. www.kenst.com?nameOfExperiment=NameOfControlVariant). Most frameworks offer the ability to check for a query param to see if it should force a variant for an experiment, and ours was no exception. For the tests we knew were impacted by A/B tests, we could simply append the experiment name and the control variant as query params to the base_url. This method didn’t require us to care where the experiment and variant were set (cookie or localStorage), but it really only worked for a single A/B test per automated test.
  2. Opt into the Control variant by setting localStorage. We often accomplished this by running some JS on the page to set localStorage (e.g. @driver.execute_script("localStorage.setItem('NameOfExperiment', 'NameofControlVariant')")). Depending on where the A/B test sat within a test, and/or if there was more than one, this was often the easiest way to opt out, assuming the A/B test was set in localStorage.
  3. Opt into the Control variant by setting a cookie. Similar to the localStorage example, we set the cookie, in this case through the WebDriver API rather than JS (e.g. @driver.manage.add_cookie(name: 'NameOfExperiment', value: 'NameofControlVariant')). Again, this was another simple way of opting out of an experiment, or even multiple experiments, when necessary.
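To make the three approaches above concrete, here’s a minimal sketch in Ruby (the language of the snippets above). The helper names, experiment name, and variant name are all hypothetical; the URL helper is plain Ruby, while the other two assume a Selenium WebDriver instance:

```ruby
require 'uri'

# 1. Build a URL that forces the control variant via query params.
#    Pure string manipulation, so it works no matter where the
#    experiment framework stores its bucket.
def control_variant_url(base_url, experiment, variant)
  uri = URI(base_url)
  params = uri.query ? URI.decode_www_form(uri.query) : []
  params << [experiment, variant]
  uri.query = URI.encode_www_form(params)
  uri.to_s
end

# 2. Force the control variant via localStorage. Run this after the
#    page has loaded but before the experiment code reads its bucket.
def opt_out_via_local_storage(driver, experiment, variant)
  driver.execute_script(
    'localStorage.setItem(arguments[0], arguments[1])', experiment, variant
  )
end

# 3. Force the control variant via a cookie. The driver must already
#    be on the matching domain for the cookie to apply.
def opt_out_via_cookie(driver, experiment, variant)
  driver.manage.add_cookie(name: experiment, value: variant)
end
```

An impacted test would then call whichever helper matches where your framework stores its bucket, e.g. `driver.get(control_variant_url(base_url, 'nameOfExperiment', 'NameOfControlVariant'))`.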

I know there are a few other ways to approach this. Alister Scott mentions a few ways here and Dave Haeffner mentions how to opt out of an Optimizely A/B test by setting a cookie here. How do you deal with your A/B tests? Do you opt-out and/or go for the control route as well?

Oh and if this article worked for you please consider sharing it or buying me coffee!

Coding Without a Net

Yahoo! has been in the news quite a bit over the last few years as its primary business of placing display ads slowly dies and it searches for new ways to grow and/or remain relevant. It’s hired new executives, lost new executives, and made acquisitions. Plenty of people still use Yahoo! products like finance and email. According to Yahoo!’s advertising page, every day some 43 million people visit its homepage alone.

The Article

In December, Spectrum magazine posted an article about Yahoo! eliminating its testing department with the tag line “What happens when you eliminate test and QA? Fewer errors and faster development…”

Yahoo isn’t the first big tech company to move away from (presumably) dedicated testing teams. Google has never had them, relying instead on SETs (software engineers in test) to build out automation infrastructure and TEs (test engineers) to understand testing and build out tools (or something to this effect). That said, it’s easy to question how much Google really cares about quality, given its daunting task of dealing with such huge scale and demand and how often its services seem to go down or remain difficult to use. Microsoft did something similar a few years ago by moving toward a combined engineering approach, with everyone focused on the product. Alan Page, in particular, has talked about the fluidity of his role at Microsoft.

The most interesting idea from this article was how Yahoo! believed “coding with a net” was a good idea. Let’s assume “coding with a net” meant one team did some programming and another separate team was tasked with understanding those changes and testing for them. Most often this means please make sure we didn’t break something (regression testing) instead of help us understand what we don’t know (uncovering new information). That seems like a very narrow net, doesn’t it?

First, you choose to use one primary test technique (regression testing) out of the hundreds of available techniques. Second, those most responsible for building and ensuring quality, the programmers, now have to wait until the code is shipped to a different group before they can start getting feedback about what works and what doesn’t. That’s a very long time. Testers shouldn’t be used as gatekeepers; they should work together with programmers to understand as many aspects of quality as possible. The faster that can happen, the better!

My Experience

Article aside, my least favorite work experiences are those when I’ve been in a “siloed” or dedicated test team and away from the fast feedback of the rest of the development team. My favorite work experiences by far, including my current company, have me on the development team roaming around the product and trying to figure out how to test things, how to improve quality and constantly investigating the product. I’m the “quality guy” much like Alan describes here.

I do think testing helps, and often you want a test specialist or a group of testers to help understand the product and pay attention to the many different aspects of quality your product might need to have. Eliminating testing or QA shouldn’t result in faster development (or much faster development) if the teams and roles are in alignment. My personal goal is to be an effective technical investigator: someone who understands quality (or at least spends a good deal of time thinking about it) and who is valuable to the team.

I guess I’m not a fan of “coding with a net”. I think the practice leads to dependency and gives people an excuse for building something bad in the first place. It’s important for the whole team to build quality.