What are Quicktests and when are they used?

What are Quicktests?

Tests that don’t cost much to design, are based on some educated guess about how the system could fail (risk-based), and don’t require much prior knowledge to apply are often called quicktests (sometimes stylized as quick tests, or even called attacks).

When are Quicktests used?

When I’m about to test a new application, the first part of my strategy is often to start with a few quicktests. I might choose to test around a particular interface, around error handling, or even around boundary cases. Similarly, when I’m about to test a new feature I can take a strategy where I look for common places where bugs might exist based on past experience (keeping an internal list of common bugs is a very good idea).

Boundaries are a good quicktest example: if there’s an input field on a billing or contact form, we might decide to try testing some boundaries. Try to figure out the upper limit, the lower limit, enter no information, try some weird characters, etc. It turns out you don’t need to know much about the program to be effective with this approach, and there are some handy tools that can help you.
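As a sketch of that idea, here’s what a minimal boundary probe might look like in Python. The field, its limits, and the validator are hypothetical stand-ins for whatever the real form enforces:

```python
def boundary_probes(lo, hi):
    """Classic boundary-value probes: just below, at, and just above each limit."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Hypothetical stand-in for the form field's validation logic;
# assume the field accepts integer quantities from 1 to 100.
def accepts_quantity(value):
    return isinstance(value, int) and 1 <= value <= 100

results = {v: accepts_quantity(v) for v in boundary_probes(1, 100)}
# We'd expect rejections at exactly 0 and 101.
```

Add the non-numeric probes (empty input, whitespace, very long strings, weird characters) the same way; the point is that none of this required knowing how the field is implemented.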

The vast majority of us (developers, testers) use quicktests on a daily basis. And why wouldn’t we? They’re great if they work and if they don’t you can switch to something else. It’s not until these tests run out of steam that we’ll either need to switch focus to a new failure type or start testing the product in a deeper way. Hopefully by then we’ve gained some knowledge about the product and built a strategy around where we think additional valuable failures are so we can make better decisions about where / what to test.

More Quicktest Examples:

  • Interface tests
  • Boundaries
  • Initial States
  • Error Handling
  • Happy paths
  • Variable tours
  • Blink Tests
  • And many more… I’ve collected a few dozen examples and put them on this GitHub list.

When should Quicktests NOT be used?

Not every test is a quicktest, nor should it be. While boundaries are a good quicktest example, applying domain tests (specifically equivalence class partitioning) is not. In order to partition our fields (sometimes called variables) and develop our equivalence classes, we need to know the underlying data type of each variable (we might not know for sure, but we can make educated guesses). We need to know its primary purpose and what other variables might depend on it. All of these things take time and effort to understand and apply. Domain tests are still risk-based, but they require more knowledge of the underlying system and are more costly to design.
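To make the contrast concrete, here’s a hedged sketch of that extra design work. The field, its assumed data type, and its assumed valid range are all hypothetical:

```python
# Equivalence class partitioning for a hypothetical integer "age" field,
# assuming (after some investigation) a valid range of 0-130.
EQUIVALENCE_CLASSES = {
    "below_range": range(-10, 0),    # invalid: negative ages
    "in_range": range(0, 131),       # valid ages
    "above_range": range(131, 200),  # invalid: implausibly large
}

def representatives(classes):
    """Pick one representative per class; any member should behave like the rest."""
    return {name: min(values) for name, values in classes.items()}
```

Notice how much had to be assumed up front (data type, valid range, plausibility) compared with simply probing around a boundary; that design cost is exactly what disqualifies this as a quicktest.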

It’s important to understand this concept of quicktests for a few reasons:

  • Test Strategy. Depending on the context of our work our test strategy should probably consist of quick and deeper tests.
  • Tool Selection. Really great tools like BugMagnet help with quicktests but not deeper boundary + equivalence class tests.
  • Creating Examples. It’s hard to find good examples of deeply applied test techniques. Most are only quicktests.

As I look around the web for available resources on teaching test design, many of the examples we have of particular test types or techniques revolve around showing these quicktests. Hey, put in some boundaries. You’re done! Yay. (facepalm) Starting with inexpensive tests optimized for a common type of bug is a great start but not a great ending. Here’s to better endings!



How Do I Test This?

Occasionally I’ll be looking at a bug report / kanban card / story, trying to understand it and its implications. Unable to make sense of what I’m reading, I’ll find the originator and ask them, “How do I test this?” The problem is I don’t mean this literally; it’s a crutch, and I need to stop saying it.

Instead of making sense of the artifact, I’m now implying to the receiver of the question that I would like them to tell me how to test (aka make the important testing decisions for me). That’s the best-case scenario. Worst case, the person thinks I can’t do the job I was hired to do. Having said that, if the receiver responds to the literal question, I won’t discard or discount the information. I’m constantly amazed at how open programmers are, how they will give suggestions about what they would test or what they are worried about. I can then add this valuable information to the test ideas I’m forming (but won’t rely solely on it).

Considering I’m often the testing expert (either on the team or within the company), if I let someone else make the testing decisions then I’m forsaking my (likely) greater skill and experience. I doubt anyone would prefer this. If I had said what I originally meant, “This isn’t clear; can you please tell me more about it so I can better understand its changes and implications?”, I could have avoided these problems entirely.

To help accomplish this, here’s a protocol I’ve been using lately:

  • When I come across some artifact I don’t understand I remind myself this is a good thing. It’s probably unclear to a few people.
  • I’ll find the person who wrote or worked on the artifact and tell them I’m having trouble understanding it. Be specific.
    • Try to describe what (if anything) I do understand. Often this highlights good + bad assumptions the other person will quickly point out or reference.
  • As the other person starts filling in gaps, I will then start modeling my testing and my information objective
  • At the end I’ll follow up with “How did you / would you test it?”
    • Since I’m already talking with the person I might as well see what they covered or what they might be worried about.

It’s hard to recognize when we might be conveying the wrong message. It took overhearing someone else’s confusion and use of this phrase before I realized what was being said (vs. implied). Although the implied confusion is understandable, the literal meaning is inconsistent with the message I try to convey, and I intend to stop using the phrase. I hope you do too.

Trends in Testing Terminology

There are lots of things to consider when trying to recruit or develop software testers, especially industry trends, both within the testing community and in the larger software engineering community. In a small community like ours those trends might include development practices, tools, techniques, and terminology (among others). As I was contemplating these trends I came across this fun graphic by the Ministry of Testing called “Words That Make Testers Feel Good” and thought it was worth sharing:


Aside from the obvious “feel good” connotation (and the corresponding icky words), one could look at this list as a set of positive trends moving through the industry. If I were looking to hire or train software testers (or were a recruiter), I’d look at both lists for the terminology we use and debate the trade-offs of adopting these trends.

Not all of the feel good words are useful (“analysis”) but some might be. I’ve written about a few of them often enough to tag them: Exploratory Testing, James Bach, Test Techniques, and Automated Testing (or checking).

The idea of a Professional Tester

As rough as traveling can be, one benefit is dedicated time to catch up on reading. I finally got around to a post from Uncle Bob on “Sapient Testing: The ‘Professionalism’ meme,” and it captured something I’ve been thinking about for some time: the label of professional(ism).

I’ve been using the term “professional” online to call attention to a potential difference between myself and everyone else: that I take my work seriously. Being a professional isn’t just about being paid to do something – there are many people paid to do jobs they couldn’t care less about (airports are a constant reminder of this). I mean professional in the sense of caring about the quality of my work, my skills, and my ability to stay relevant.

Uncle Bob wrote his post after attending a keynote by James Bach on the topic of professionalism, and I think he says it better:

A professional tester does not blindly follow a test plan. A professional tester does not simply write test plans that reflect the stated requirements. Rather a professional tester takes responsibility for interpreting the requirements with intelligence. He tests, not only the system, but also (and more importantly) the assumptions of the programmers, and specifiers.

Uncle Bob goes on to say:

I like this view. I like it a lot. I like the fact that testers are seeking professionalism in the same way that developers are. I like the fact that testing is becoming a craft, and that people like James are passionate about that craft. There may yet be hope for our industry!

I like this view as well; I like it a lot.

Humans and Machines: Getting The Model Wrong

It seems like one of the more prominent and perpetual debates within the software testing community is the delineation between what the computer and human can and should do. Stated another way, this question becomes “what parts of testing fall to the human to design, run and evaluate and what parts fall to the computer?” My experience suggests the debate comes from the overuse and misuse of the term Test Automation (which in turn has given rise to the testing vs. checking distinction). Yet if we think about it, this debate is not just one within the specialty of software testing, it’s a problem the whole software industry constantly faces (and to a greater extent the entire economy) about the value humans and machines provide. While the concerns causing this debate may be valid, whenever we hear this rhetoric we need to challenge its premise.

In his book Zero to One, Peter Thiel, a prominent investor and entrepreneur who co-founded PayPal and Palantir Technologies, argues most of the software industry (and in particular Silicon Valley) has gotten this model wrong. Computers don’t replace humans, they extend us, allowing us to do things faster which when combined with the intelligence and intuition of a human mind creates an awesome hybrid.

Peter Thiel and Elon Musk at PayPal

He shares an example from PayPal:

Early in the business, PayPal had to combat problems with fraudulent charges that were seriously affecting the company’s profitability (and reputation). They were losing millions of dollars per month. His co-founder Max Levchin assembled a team of mathematicians to study the fraudulent transfers and write some complex software to identify and cancel bogus transactions.

But it quickly became clear that this approach wouldn’t work either: after an hour or two, the thieves would catch on and change their tactics. We were dealing with an adaptive enemy, and our software couldn’t adapt in response.

The fraudsters’ adaptive evasions fooled our automatic detection algorithms, but we found that they didn’t fool our human analysts as easily. So Max and his engineers rewrote the software to take a hybrid approach: the computer would flag the most suspicious transactions on a well-designed user interface, and human operators would make the final judgment as to their legitimacy.

Thiel says he eventually realized the premise that computers are substitutes for humans was wrong. People can substitute for one another – that’s what globalization is all about. People compete for the same resources, like jobs and money, but computers are not rivals; they are tools. (In fact, long-term research on the impact of robots on labor and productivity seems to agree.) Machines will never want the next great gadget or the beachfront villa on their next vacation – just more electricity (and they’re not even smart enough to know it). People are good at making plans and decisions but bad at dealing with enormous sets of data. Computers struggle to make basic decisions that are easy for humans but can deal quickly with big sets of data.
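The hybrid division of labor Thiel describes (the machine filters at scale, the human judges the survivors) can be sketched roughly like this. The scoring rules and threshold are invented for illustration, not PayPal’s actual logic:

```python
def risk_score(txn):
    """Toy scoring rules (made up): cheap signals a machine can check at scale."""
    score = 0
    if txn["amount"] > 1000:        # large transfer
        score += 1
    if txn["account_age_days"] < 7:  # brand-new account
        score += 1
    return score

def queue_for_human_review(transactions, threshold=2):
    """The machine narrows millions of transactions down to a short suspicious
    list; a human analyst makes the final legitimacy call on each one."""
    return [t for t in transactions if risk_score(t) >= threshold]
```

The design choice is the one Thiel highlights: neither side works alone. The rules are easy for adversaries to evade, and the humans can’t read every transaction, but the combination scales.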

Substitution seems to be the first thing people (writers, reporters, developers, managers) focus on. Depending on where you sit in an organization, substitution is either the thing you’d like to see (reduced costs, whether time savings or headcount reduction) or the thing you dread most (being replaced entirely or having your work reduced). Technology articles consistently focus on substitution: how to automate this and that, or how cars are learning to drive themselves and soon we’ll no longer need taxi or truck drivers.

Why then do so many people miss the distinction between substitution and complementarity, including so many in our field?


The Domain Testing Workbook is available

Cem Kaner, Sowmya Padmanabhan and Doug Hoffman have a new book called The Domain Testing Workbook. I’d highly recommend picking up a copy or at least adding it to your reading list! This book is not just a deep dive into one test technique; it represents collective thinking about what software testing is today.

Domain Testing

BBST Test Design was my formal introduction to Domain Testing, aka boundary and equivalence class analysis. Domain Testing is often cited as the most popular (or one of the most popular) test techniques in use today. For its part, the Test Design course spends a whole week, a full lecture series, and at least one assignment introducing and practicing this technique. If you’d like an introduction, I recommend the first lecture from the fifth week of the Test Design series, in which Cem introduces Domain Testing:

(For more information see the Testing Education Website or YouTube.)

I got the chance to do some reviewing of the workbook and enjoyed how it elaborated on the material I was learning in Test Design. It goes further by offering layers of detailed examples to work through, and it’s set up to allow the reader to skip around to whatever section or chapter they find interesting.

In an email, Cem described the book:

My impression is that no author has explored their full process for using domain testing. They stop after describing what they see as the most important points. So everyone describes a different part of the elephant and we see a collection of incomplete and often contradictory analyses. Sowmya, Doug and I are trying to describe the whole elephant. (We aren’t describing The One True Elephant of Domain Testing, just the one WE ride when we hunt domain bugs.) That makes it possible for us to solve many types of problems using one coherent mental structure.

Cem Kaner

I’m excited for this book to be turned into a class on domain testing (and hopefully open sourced) with the rest of BBST. In the meantime, pick up the book from Amazon and let me know what you think!

I’m a Bug Advocate

I do advocate for bugs to be fixed, but the title comes from passing the Association for Software Testing’s (AST) Black Box Software Testing (BBST) Bug Advocacy course. The class officially ended in mid-July, and it marks the third and final BBST class for me. Together, Foundations, Bug Advocacy and Survey of Test Design make up a semester-long class that Cem Kaner teaches at Florida Institute of Technology called Software Testing 1. These lectures and classes are, as far as I know, the only college-level software testing courses available to anyone who wants to learn about testing – the material is free and the courses are pretty inexpensive.

As with the prior classes I agreed to have my name listed on AST’s graduates website and for fun my certificate of completion is below:

Despite this being the final BBST class, my journey with BBST doesn’t end here. At the beginning of the year, when I enrolled in Test Design, I also enrolled in the Instructor class for November. I figured the best way to become more familiar with the material, reinforce what I do know, and challenge what I don’t is to try to teach it. Worked for scuba diving!

Why take a Black Box Software Testing course?

I was recently telling a friend about the BBST Bug Advocacy course I was working on and he asked why I was taking a black box testing course. I think what he meant was why would I take a course on black box testing as opposed to glass box (or white-box) testing?

This is the more thoughtful answer I wish I had given.

I fell into the testing profession. The university I went to taught some technical skills as part of the degree (programming, database design, etc.), but I never learned anything I would consider fundamental to understanding software testing – nothing that helped me deal with testing problems. Since I work in software testing, I wanted to learn more about the domain, to better understand it and be able to differentiate between testing problems and other types of problems (technology, communication, requirements, etc.). It just so happens the courses that focus on the domain of software testing also focus on black box, system-level testing.

Black box and glass box testing approaches focus on different things. Black box testing is testing and test design without knowledge of the code; a black box tester approaches testing based on the stakeholders’ interactions with the system’s inputs and outputs. Contrast this with glass box (or white box) testing: testing and test design with knowledge of the internal code. The glass box tester approaches testing based on the interactions of the underlying code, as in “does this code do what the programmer intended it to do?”
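Here is a tiny illustration of the difference, using a made-up shipping function (the spec, the rates, and both tests are hypothetical):

```python
def shipping_cost(weight_kg):
    """Unit under test. Assumed spec: flat $10 up to 5 kg, then $2 per extra kg."""
    if weight_kg <= 5:
        return 10.0
    return 10.0 + (weight_kg - 5) * 2.0

def test_black_box():
    # Derived purely from the stated requirement; no peeking at the code.
    assert shipping_cost(3) == 10.0   # "small parcels cost a flat $10"
    assert shipping_cost(7) == 14.0   # "$2 per kg over 5 kg"

def test_glass_box():
    # Derived from reading the code: exercise both sides of the <= 5 branch.
    assert shipping_cost(5) == 10.0
    assert shipping_cost(6) == 12.0
```

Same function, two sources of test ideas: the black box tests come from the requirement, while the glass box tests come from the branch the code happens to contain.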

Testers using either approach can benefit from understanding the domain of software testing. Understanding what oracles are and how they are heuristics, the advantages and disadvantages of measurement, how to communicate and write bugs effectively, and even how to design appropriate tests affects both the black box and glass box approaches. That’s why someone would take a black box testing course (specifically the BBST series of courses); that’s why I did.

First principle reasoning

When I was young I remember wanting to be an awesome football player like Joe Montana, or an FBI agent working on the X-Files like Fox Mulder. These days I want to have the skills to identify and solve problems like Elon Musk.

Musk is an interesting person. He’s created and sold numerous companies, and with the profits he’s created a rocket-building / space exploration company that is now the first private company to make it to space. He’s also built an American electric car company. While all these things make Musk interesting on the surface, it’s his approach that makes him enviable.

In his TED talk, Musk credits his training in physics with his ability to see and understand difficult problems. He says physics is about how to discover new things that might seem counterintuitive, and that first principles in physics provide a good framework for thinking. The video is a good conversation between Elon Musk and TED curator Chris Anderson, and I recommend watching it. Musk mentions first-principles reasoning at about 19:37:

According to Wikipedia, a first principle in physics means you start directly at the lowest levels – at the laws. Musk provides a slightly easier framing, saying first-principles reasoning is reasoning from the ground up, as opposed to reasoning by analogy, which is copying what other people do with slight variations.

Musk further elaborates on first-principles reasoning in this video. To summarize, he says you look at the fundamentals of things, make sense of them, construct your reasoning and conclusion, and (if possible) compare that to the current understanding. Part of that process involves questioning conclusions, asking whether or not something could be true. It sounds like Musk is constantly modeling, learning, testing and re-modeling.

In thinking about my job, there seems to be more reasoning by analogy than perhaps there should be (or at least it’s obvious to someone new). Whenever one of my developers or I ask why something is the way it is, why some conclusion has been reached, the typical response is “that’s how it’s done”. If I ask my test team why they do something a certain way, it’s always “that’s how we’ve always done it”, and there seems to be no desire (at least I haven’t seen it yet) to know whether something makes sense or is based on a real understanding of the problem. Perhaps there should be more modeling, learning and testing?

We all do some reasoning by analogy; in many ways it’s a much simpler way to communicate and learn. But for many of us in the software engineering fields (testers and developers), perhaps we confuse the reasoning methods. So how do we determine when we need to use first principles and when it’s OK to reason by analogy? That’s the million-dollar question. I think we do like Musk: create a model, ask questions to help us learn, test, and when we aren’t satisfied with the answer, reason from the ground up.

Low and High Intensity Learning

In his essay Wealth, Paul Graham says startups are a way of compressing a whole working life into a few years: you work at a very high intensity for a short period of time (say four years) instead of the normal low intensity for a long period of time (say forty years). In other words, startups are a way of dramatically increasing your productivity. In my experience there is a correlation between high-intensity working and high-intensity learning.

The potential relationship between high-intensity work and learning has a lot of appeal because it offers a chance to leapfrog our understanding of several domains in a short period of time.

In his essay Startup = Growth, Graham defines a startup as “… a company designed to grow fast”. My last company was small; we considered ourselves a startup (although by Graham’s definition we were not), but we worked considerably faster (at higher intensity) than a larger company would have. I can say that with some certainty now because a 15,000-person company acquired our small (maybe 10-person) company, and the differences are pretty dramatic.

To be fair, there is a difference between the learning that occurs when a person works for a high-intensity company and when they do their own high-intensity work. At a high-intensity company, people learn whatever they have to in order to solve the problems in front of them, then they move on. A person working intensely on their own has the freedom to focus on what they want, but they run the risk of never finding focus.

I’m struggling with the second part. I’m back at a low-intensity company, but I don’t want to be pulled into a low-intensity learning situation. Part of me says that won’t happen because of my own internal drive, but another part of me worries the low-intensity rhythm of the company will make it hard to find focus. (I’m not saying my small company was a good example of a high-intensity work / learning environment, but as of right now it seems better than the corporate world.)

One solution is to create my own startup – something designed to grow fast, which would force fast learning. It would be amazing for many reasons, including getting back to a higher-intensity learning situation, but I don’t know where to start. Another solution is to find a person or people with the same interests or goals and work together to learn. Maybe a small team would push us all into a higher-intensity work and learning environment? Now where do I begin?