My Testing Philosophy


As I study software development, especially testing, I’ve begun to develop certain views and a general philosophy. In this article, when I say philosophy I mean:

[T]he critical study of the basic principles and concepts of a particular branch of knowledge, especially with a view to improving or reconstituting them.¹

Or simply:

[A] foundation for studying and improving my ideas on software testing.

I’ve generally been aware of this development (others have highlighted it during discussions), but when I saw Andy Tinkham go through the exercise of writing out elements of his philosophy, I wondered if I could do something similar.

Then I created my list.

I borrowed heavily from Andy’s, and I certainly haven’t covered everything, but it’s a good start. Some elements are bound to change with time (as both I and the industry learn more), but here’s what I believe about software testing (and to a greater extent, quality):

  • Software is eating more of the world.
    • Thus, quality is becoming more complex and important.
  • Quality is subjective. What one person values another might not.
  • Quality is everyone’s responsibility and can’t be added after the fact.
  • Software needs to solve the customer’s problems or it is useless.
  • Testing is a challenging intellectual process.
    • Thus, any testing activity that discourages thinking or questioning while performing it is potentially harmful. Executing tests from pre-written scripts is a common example.
  • There are no best practices, just good practices for a particular context. Blindly applying a practice to a situation because it worked somewhere else can cause more harm than help. Understanding the context any practice will be used in is crucial.
  • Complete testing is almost always impossible and therefore involves tradeoffs.
  • Testing is all about providing information to the team and stakeholders. If testing is failing to provide the information that is needed, it is wasting time and resources.
  • To provide that information, test reporting needs to clearly tell 3 stories: the story of the product’s quality, the story of the testing done and not done, and the story of the testing’s quality (why the testing done was or wasn’t sufficient).
  • A test is an experiment designed to reveal information (or answer a specific question) about a product or service (see the sketch after this list).
  • To be successful, testers need to approach a system under test (SUT) trying to disprove the existing understanding (for example, by assuming it has bugs and hunting for them) rather than trying to prove some functionality is correct.
    • Otherwise, we run the risk of falling victim to confirmation bias.
    • It’s all about empirical evidence.
  • Tests are only ever as good as their oracles.
  • Testers are not the gatekeepers of quality. They should have a voice, but not the sole voice, in release decisions.
  • The value of any action has 3 components: the benefit gained by completing the action, the costs incurred by performing the action, and the opportunity cost of the other actions that can no longer be performed. Choosing the right action at any point in time means striking a balance among these 3 elements.
  • Writing out detailed, explicitly defined tests is rarely the best use of time, and in many cases fully detailing them is impractical.
    • Letting the tester who executes the tests make decisions that don’t impact the test’s goal leads to better testing.
    • Writing down every expected result can consume a huge amount of work on one testing task, and often ignores what other testing tasks remain.
  • Writing tests at the start of a cycle of testing effort means creating them at the point in the cycle where we know the least about what we’re testing.
  • Communication is essential.
  • Variety in tests, techniques and approaches is essential.
  • Conventional software testing metrics are often misleading and can drive undesired behavior. However, they’re what we have to work with at the moment, so until something better comes along we need to use them to approximate the information we need – wisely, and conscious of the impacts and risks of doing so.
  • Bug reports (the tester’s primary work product) should be written in a clear, concise and persuasive way that convinces the reader to take some action.
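
To make a few of these elements concrete – the test as experiment, the oracle, and trying to disprove rather than confirm – here is a minimal sketch in Python. The function, the oracle and the inputs are all mine, purely for illustration:

```python
import math

def shaky_isqrt(n):
    # Hypothetical function under test: the integer square root of n,
    # computed via floating point (a realistic source of subtle bugs).
    return int(math.sqrt(n))

def oracle(n, result):
    # The oracle defines "correct" independently of the implementation:
    # result must be the largest integer whose square does not exceed n.
    return result * result <= n < (result + 1) * (result + 1)

# The experiment: probe inputs chosen to refute the belief that
# shaky_isqrt works, rather than to confirm it on friendly values.
for n in [0, 1, 2, 3, 2**52 - 1, 2**52, 4503599761588224]:
    result = shaky_isqrt(n)
    if not oracle(n, result):
        print(f"Falsified: shaky_isqrt({n}) = {result}")
```

On typical IEEE 754 platforms the last input comes back off by one, and the oracle catches it. Note how the test is only as good as its oracle: had we “checked” shaky_isqrt against math.sqrt itself, the bug would have been invisible.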

Thanks to Andy for the inspiration, and I highly recommend testers try this exercise. Think this was cool and looking for another exercise? Try thinking about your commitments to your programmers or your team and writing them out like I have.

The challenge for me will be keeping this list up to date and constantly evolving. Think you can help? Feel free to challenge one or more of these elements; you’ll only be helping!

(Last updated on 12/21/2015)

  1. Dictionary on Reference.com

  • andytinkham

    Thanks for the shoutout, Chris! I like your list – there are some things that probably should be on mine too. 🙂 I do disagree on one point – but it might just be that we have different definitions of “complete” in the context of complete testing being impossible. What definition of complete are you using there? I’m guessing something different than ‘exhaustive’ (running every possible test)…

    • Hi Andy,

      Thanks for commenting, and you are welcome for the shoutout! I have been meaning to update the bullet point you are referring to from “Complete testing is almost always impossible” to “Complete testing is almost always impossible and therefore involves tradeoffs”, and to highlight that testing involves many tasks and you only have time to do a small sample of the work.

      By ‘complete testing’ I mean testing all of the inputs, all the combinations of inputs, all the paths through the program and all of the other potential failures a program might have. I think that’s the same thing you are calling exhaustive testing (running every possible test). I call it complete testing because I often heard people ask “how long will it take you to completely test that?”; luckily I don’t hear that very often these days. In that case maybe I should be more precise?

      I say it’s almost always impossible because I know of only one example where it was possible to run through every possible test, and that was Doug Hoffman’s square root function.
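
      To sketch the shape of it (my illustration only, assuming an integer square root for simplicity – Doug’s harness ran on massively parallel hardware, where covering all four billion inputs was feasible):

      ```python
      import math

      def exhaustive_sqrt_test(sqrt_under_test):
          # Run every one of the 2**32 possible 32-bit inputs exactly once,
          # checking each result against an independent oracle.
          # (In plain Python this loop would take days; it's the idea, not a harness.)
          for n in range(2**32):
              if sqrt_under_test(n) != math.isqrt(n):
                  yield n  # a falsifying input
      ```

      Here math.isqrt stands in for whatever independent oracle you trust more than the implementation under test.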

      • andytinkham

        I’d say we agree on what complete testing is, then. I still disagree that Doug’s MasPar square root work is an example of complete testing, though. It is exhaustive of all the valid inputs, and presumably thus exhaustive of valid outputs, but that doesn’t mean it’s exhaustive of all possible tests.

        I wasn’t involved in the project, but I have no knowledge of these tests being run – they may have been, but it’s unlikely an exhaustive set of them was (since there’s a near-infinite number):
        • passing in all invalid values. We may assume that the function can’t be passed anything invalid, but that’s an assumption and we COULD be wrong.
        • checking the square root function in combination with other functions. What if some temporary register was left in an unexpected state and something later gave the wrong result because the user had used a square root before the registers got cleared? What if some other function left things in an unexpected state and the square root function didn’t take account of that?
        • what happens if a CPU fails while the system is running a square root on that CPU?

        In the end, we’re basically saying the same thing – there are tradeoffs we have to make. Many of them are easy – some tests we would never choose to run, no matter how much time we were given. Others are based on prioritization – we’ll do the most important things that fit in the time we have, up to the point where the business accepts the level of quality. There are always tradeoffs and assumptions, though – we can never completely test a feature of almost any level of complexity.

        • I think we agree on what complete testing is and that Doug’s MasPar example is not complete testing! After I posted my reply I started thinking about the article and decided to re-read it (for the nth time). Even though Doug and his team ran four billion tests he wasn’t doing complete testing. As the article says:

          “The good news was, we had been able to “exhaustively test” the 32-bit square root function (although we checked only the expected result, not other possible errors like leaving the wrong values in micro registers or not responding correctly to interrupts). ”

          Exhaustive testing of the 32-bit square root function but not complete testing (for the reasons you describe above). I’m so glad we had this conversation!