Humans and Machines: Getting The Model Wrong

One of the more prominent and perpetual debates within the software testing community is the delineation between what the computer and the human can and should do. Stated another way: which parts of testing fall to the human to design, run, and evaluate, and which parts fall to the computer? My experience suggests the debate comes from the overuse and misuse of the term Test Automation (which in turn has given rise to the testing vs. checking distinction). Yet if we think about it, this debate is not confined to the specialty of software testing; it’s a question the whole software industry (and, to an even greater extent, the entire economy) constantly faces about the value humans and machines provide. While the concerns driving this debate may be valid, whenever we hear this rhetoric we need to challenge its premise.

In his book Zero to One, Peter Thiel, a prominent investor and entrepreneur who co-founded PayPal and Palantir Technologies, argues that most of the software industry (and Silicon Valley in particular) has gotten this model wrong. Computers don’t replace humans, they extend us, allowing us to do things faster; combined with the intelligence and intuition of a human mind, that creates an awesome hybrid.

Peter Thiel and Elon Musk at PayPal

He shares an example from PayPal: [1]

Early into the business, PayPal had to combat fraudulent charges that were seriously affecting the company’s profitability (and reputation). They were losing millions of dollars per month. His co-founder Max Levchin assembled a team of mathematicians to study the fraudulent transfers and write complex software to identify and cancel bogus transactions.

But it quickly became clear that this approach wouldn’t work either: after an hour or two, the thieves would catch on and change their tactics. We were dealing with an adaptive enemy, and our software couldn’t adapt in response.

The fraudsters’ adaptive evasions fooled our automatic detection algorithms, but we found that they didn’t fool our human analysts as easily. So Max and his engineers rewrote the software to take a hybrid approach: the computer would flag the most suspicious transactions on a well-designed user interface, and human operators would make the final judgment as to their legitimacy.
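A minimal sketch of what such a hybrid pipeline might look like, assuming a risk score has already been computed upstream; the names, threshold, and review flow here are hypothetical illustrations, not PayPal’s actual system:

```python
# Hypothetical sketch of the hybrid approach described above: software
# scores every transaction, auto-clears the obviously fine ones, and
# routes only the most suspicious to a human analyst for judgment.

from dataclasses import dataclass


@dataclass
class Transaction:
    id: str
    amount: float
    risk_score: float  # 0.0 (benign) to 1.0 (almost certainly fraud)


REVIEW_THRESHOLD = 0.8  # arbitrary cutoff, chosen for this sketch


def triage(transactions):
    """Split transactions into auto-cleared and needs-human-review."""
    cleared, flagged = [], []
    for tx in transactions:
        (flagged if tx.risk_score >= REVIEW_THRESHOLD else cleared).append(tx)
    return cleared, flagged


def human_review(tx: Transaction) -> bool:
    """Stand-in for the analyst's final judgment via a well-designed UI."""
    print(f"Review {tx.id}: ${tx.amount:.2f} (score {tx.risk_score:.2f})")
    return input("Legitimate? [y/n] ").strip().lower() == "y"


if __name__ == "__main__":
    batch = [
        Transaction("tx-001", 25.00, 0.10),
        Transaction("tx-002", 4900.00, 0.93),
    ]
    cleared, flagged = triage(batch)
    decisions = {tx.id: human_review(tx) for tx in flagged}
    print(f"Auto-cleared: {len(cleared)}, human decisions: {decisions}")
```

The point of the design is the split: the machine does the tireless scanning at scale, and the human supplies the judgment call the machine can’t reliably make.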

Thiel says he eventually realized the premise that computers are substitutes for humans was wrong. People can substitute for one another – that’s what globalization is all about. People compete for the same resources, like jobs and money, but computers are not rivals, they are tools. (In fact, long-term research on the impact of robots on labor and productivity seems to agree.) A machine will never want the next great gadget or a beachfront villa on its next vacation – just more electricity (and it isn’t even smart enough to know it wants that). People are good at making plans and decisions but bad at dealing with enormous sets of data. Computers struggle to make basic decisions that are easy for humans, but they can churn through big data sets quickly.

Substitution seems to be the first thing people (writers, reporters, developers, managers) focus on. Depending on where you sit in an organization, substitution is either the thing you’d like to see (reduced costs, whether as time savings or headcount reduction) or the thing you dread the most (being replaced entirely, or your work being diminished). Technology articles consistently focus on substitution: how to automate this or that, or how cars are learning to drive themselves so that soon we’ll no longer need taxi or truck drivers.

Why then do so many people miss the distinction between substitution and complementarity, including so many in our field?

Thiel says our education trains us this way. Software engineers, for example, work on projects that replace human efforts because that’s what they’re trained to do: reduce human capabilities into specialized tasks that computers can then be programmed to perform. The trendiest fields in Computer Science, like machine learning, evoke the replacement of humans. We exoticize technology and are impressed by small feats done by computers alone, but we ignore big achievements from complementarity because the human contribution makes them less uncanny. In other words, we are biased against humans.

In the software testing community this same bias centers on the idea of Test Automation as a substitute for humans who might already be testing. Instead of extending the human tester’s natural abilities, or making it easier for them to make decisions (and to deal with large amounts, or different kinds, of data), automation becomes about cost savings or raw speed. Luckily, part of the testing community I associate with has long challenged this premise, and based purely on my own observations I think this distinction is gaining clarity and momentum as people focus on the value of complementarity.
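As one minimal, hypothetical sketch of automation in that complementary spirit (the data, names, and threshold below are invented for illustration): rather than asserting a binary pass/fail, the tool scans a data set no human would want to eyeball and surfaces only the surprises for the tester’s judgment.

```python
# Hypothetical sketch of "automation as an extension" for testing:
# the machine crunches a large result set and flags outliers; the
# human tester decides what, if anything, those outliers mean.

import statistics


def flag_outliers(samples, label, sigmas=2.0):
    """Return samples more than `sigmas` standard deviations from the mean."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return [(label, s) for s in samples if abs(s - mean) > sigmas * stdev]


# Imagine thousands of response times gathered automatically...
response_times_ms = [102, 98, 110, 95, 101, 930, 99, 104]

for label, value in flag_outliers(response_times_ms, "checkout-latency"):
    # The machine did the scanning; the tester supplies the judgment.
    print(f"Worth a human look: {label} = {value} ms")
```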

Don’t get me wrong; I love the idea of software eating more of the world. I just wonder how this potential predisposition for bias against humans (treating human approaches to problems as inferior) will play out on a bigger scale. Will some areas or communities be better at challenging bad premises than others? Will that lead to an imbalance in the application or advancement of certain knowledge? Are we doomed to perpetuate the debate about the delineation between what the computer and the human can and should do?

While I don’t want to make too many generalizations about the future, I do believe people who are able to use technology to empower their work are more likely to be effective in the long run. This means that if you want to be effective, it’s only going to become more important to understand the advances that are coming and to test the assumptions and claims being made. For those of us who do test, who do question software, this should be a much more natural step.

References:

1. Thiel, Peter (with Blake Masters). Zero to One: Notes on Startups, or How to Build the Future. Crown Business, 2014.
