Appropriate Test Documentation & Formatting

The Question

Recently, in an online forum, a tester asked:

Does someone have a simple example of test case (excel sheet) format? I am the only one tester in my company and we are trying to arrange the test documentation. Any advice or example will be useful.

I wish someone had given me this advice when I first started out:

Be careful about using someone else’s formatting and/or templates. If you don’t know what you are doing, you might be inheriting their issues and misunderstandings. It’s usually best to understand what you want to accomplish and then find a solution for yourself, especially if it’s something simple like using Excel for documentation. Tables and matrices are great for organizing tests and don’t cost much money or time to make.
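
To make that concrete, here’s a made-up example of the kind of lightweight matrix I mean: test conditions down the side, environments across the top, results in the cells.

    Condition            Chrome    Firefox   Safari
    Valid login          Pass      Pass      Pass
    Wrong password       Pass      Pass      Not run
    Empty form submit    Pass      Fail      Not run

Something this simple can live in a spreadsheet or a wiki page and still show, at a glance, what has been covered and where the gaps are.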

Appropriate Test Documentation

The appropriate documentation for your tests (or test cases) will depend to a large degree on the test technique and approach you use. Test Case Management systems (and similar systems) are often designed for functional or scenario tests, where the format is similar and you can list out multiple steps. Other test techniques such as Path Analysis, Domain Testing and even Risk-Based Testing can look very different when they are well designed. The same can be said when you decide to take an Exploratory approach to your testing, where you are likely to write Exploratory Charters. Cramming all of these different techniques into one system usually isn’t the best idea.

The Smallest Amount

Focus on the smallest amount of documentation you need to do your job well. You have a finite amount of time, and if you spend too much of it creating documentation you will have less time to do the work. If you are writing tests for other humans to read and understand, they should be written in a way that communicates well, such as by using visuals (mind maps, diagrams, etc.) and summaries. Similarly, if you are writing tests for computers to read and execute, they can (and need to) be much more granular and step-by-step.

I’d wager most experienced engineers have gone through this process: trying to document as much as possible, only to find that by the time you’re done you haven’t done enough testing and the documentation isn’t finished either. Maybe you put it all into a test case management system that no one else cares to read because it’s in a weird format that probably only makes sense to you.

Staying Agile

Focusing on the appropriate documentation for your tests, and then creating the smallest amount you need to do your job well, also allows you to have agility. Agile is all about being able to adjust to changes as they come in. Quick changes in direction aren’t kind to large amounts of pre-scripted documentation and can result in lost work.

I wish this approach were more obvious, but a quick Google search will reveal it’s not the dominant theory. However, if you read this and take the advice, you’ll be learning from my mistakes. Focusing on the smallest amount of documentation you need to be successful, and making sure that documentation is appropriate for your test choices, will give you agility and future-proofing even as you scale your team and move into the future.

A typical day of Testing (circa 2018)

Recently I found myself repeatedly describing how I approach my testing role in a “typical day” and afterwards I thought it would be fun to capture some things I said to see how this might evolve over time:

Background

  • At Laurel & Wolf our engineering team works in 2 week sprints and we try to deploy to production on a daily basis.
  • We work within a GitHub workflow, which essentially means our changes happen in feature branches. When a branch is ready it becomes a Pull Request (PR); each PR automatically runs against our CI server, then gets code reviewed, and is then ready for final testing by me and our other QA.
  • All of our JIRA stories are attached to at least one PR depending on the repository / systems being updated.
  • Everyone on the engineering team has a fully loaded MacBook Pro with Docker containers running our development environment for ease of use.

Daily

  • I’ll come in and, if I’m not already working on something, take the highest-priority item in the QA column of our tracking system, JIRA. (I try to evenly share the responsibility and heavy lifting with my fellow tester, although I do love to jump into big risky changes.)
  • Review the GitHub PR to see how big the code changes are and what files were impacted. You never know when this will come into play and at the very least it’s another source of input for what I might test. I will also do some basic code review, looking for unexpected file changes or obvious challenges.
    • Check out the branch locally, pull in the latest changes, and get everything running.
  • If the story is large and/or complex, I’ll open up Xmind to visually model the system and changes mentioned in the story.
    • Import the acceptance criteria as requirements. (An important input to testing, but not the sole oracle.)
    • Pull in relevant mockups or other sources cited either in the story, subtasks or other supporting documentation.
    • Use a few testing mnemonics like SFDIPOT and MIDTESTD from the Heuristic Test Strategy Model to come up with test ideas.
    • Pull in relevant catalogs of data, cheat sheets and previous mindmaps that might contain useful test ideas I might mix in. (When I do this I often update old test strategies with new ideas!)
    • Brainstorm relevant test techniques to apply based on the collected information and outline those tests or functional coverage areas at a high level. Depending on the technique (or what I brainstormed using mnemonics) I might select the data I’m going to use, make a table or matrix, or functional outline but it depends on the situation. All of these can be included or linked to the map.
  • If the story is small or less complex, I’ll use a notebook / Moleskine for the same purpose but on a smaller scale. Frankly, I often use the notebook in conjunction with the mind map as well.
    • I’ll list out any relevant questions I want to answer during my exploration, and note any follow-up questions during my testing.
    • I don’t typically time-box sessions but I almost always have a charter or two written down in my notebook or mindmap.
  • Start exploring the system and the changes outlined in the story (or the outline in my mind map) while generating more test ideas and marking down which things matched up with the acceptance criteria and my modeled system. Anything that doesn’t match, I follow up on. Depending on the change I might:
    • Watch web requests in my browser and local logs to make sure requests are succeeding and the data or requests are what I expect them to be.
    • Inspect a page in the browser’s DevTools or in the React or Redux dev tools.
    • Reference the codebase, kick off workers to make something happen, manipulate or generate data, reference third party systems, etc.
    • Backend or API changes are usually tested in isolation, then with the features they are built to work with, and then as part of a larger system.
    • Look for testability improvements and hopefully address them during this time.
    • Add new JIRA stories for potential automation coverage
    • I repeat this process until I’m finding fewer bugs, have fewer questions, and/or am generating fewer and fewer test ideas that seem valuable. I may also stop testing after a certain amount of time, depending on the desired ship date (unless something is blocking) or a set time box.
  • When bugs are found, a few things can happen:
    • If they are small, I’ll probably try to fix them myself; otherwise
    • I’ll let the developer know directly (Slack message or PR comment) so they can begin working as I continue to test,
    • or I’ll pair with them so we can work through hard-to-reproduce issues.
    • If they are non-blocking issues, I’ll file a bug report to address later.
  • Run our e2e tests to try to catch known regressions and make sure we didn’t break any of the tests. (A sketch of what one of these tests might look like follows this list.)
    • Breakage doesn’t happen often; our tests are fairly low-maintenance.
  • Once changes are promoted to Staging, I’ll do a smaller amount of testing, including testing on mobile devices. We do some of this automatically with our e2e tests.
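
Since our stack is Ruby, here’s a minimal sketch of the shape of one of those e2e tests, written with RSpec and Capybara. The route, field labels and assertion text below are invented for illustration, not lifted from our actual suite:

    require 'capybara/rspec'

    # Assumes a Capybara driver and app host are already configured
    # (e.g. in spec_helper). The flow below is hypothetical.
    RSpec.describe 'Login', type: :feature do
      it 'lets a registered user sign in' do
        visit '/login'

        fill_in 'Email', with: 'designer@example.com'
        fill_in 'Password', with: 'correct-horse-battery'
        click_button 'Sign In'

        # Assert on something the user can actually see, not on internals.
        expect(page).to have_content('Welcome back')
      end
    end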

Semi-Daily

  • Pickup a JIRA story for automation coverage and start writing new tests.
  • Investigate or triage a bug coming from our design community (an internal team that works directly with our interior designers).

circa 2018

I’m probably forgetting a ton of things but this is roughly what a “typical day” looks like for me. I expect this to evolve over time as data analysis becomes more important and as our automation suites grow.

What does your typical day look like?

18 GitHub Projects for Testing

Aside from its many awesome lists, GitHub is a really good place for open source testing tools, libraries and frameworks (and their corresponding code). I’m pleasantly surprised by these new (and sometimes old) testing resources, so I’d like to highlight many of them in the hopes others might also find them useful.

The challenge with listing any GitHub projects is that there are really too many projects to list. I’ll set that problem aside and mention a few here anyway. Many of these projects are Ruby-based because that’s my language of choice. I’m sure there are similar projects for your favorite languages as well.

Tools

Many of these tools are easy to download or install without any knowledge of their underlying code. A few, like The Internet or a11y, will require a little more work.

  • BugMagnet. Both a Chrome and Firefox extension that contains lists of values you can use for boundary testing. Great exploratory testing companion.
  • Form Filler. Another Chrome extension, great for filling in standard forms (login pages, etc.).
  • The Internet. A test project for practicing Selenium automation, but it could probably be cloned and used for other testing / practice purposes.
  • AutoHotkey. A Windows desktop automation project for helping you speed up your workflow.
  • Android battery historian. Inspect battery related information and events on Android devices.
  • PICT. A pairwise / combinatorial tool for Windows from Microsoft.
  • a11y. An accessibility auditing tool.

Libraries

These offer extended functionality, generate data or generally make testing easier.

  • Faker. This great gem will help you generate fake data for testing. Need passwords, usernames, email addresses, city information? Faker has it. (See the sketch after this list.)
  • Parallel Tests. This gem is for running multiple tests at one time. Personally I use this for running concurrent automated tests locally and at Sauce Labs.
  • Mailosaur Ruby Client. This gem is the Ruby client for the Mailosaur service and, in theory, will help with end-to-end automated testing by mocking email services.
  • BrowserMob Proxy. A proxy for monitoring and manipulating traffic. ElementalSelenium has a few tips showing how to use BrowserMob in your test automation.
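
To give a flavor of what Faker looks like in use, here’s a minimal sketch (these generators are real Faker calls, though exact method names vary a little between versions):

    require 'faker'

    # Generate throwaway-but-realistic values for form or account testing.
    name  = Faker::Name.name        # e.g. "Tyrese Kirlin"
    email = Faker::Internet.email   # e.g. "sherman@wilkinson.test"
    city  = Faker::Address.city     # e.g. "East Devonte"

    puts [name, email, city].join(' | ')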

Frameworks

These are pretty well known across the testing landscape and for that reason are worth mentioning. Many of them have good wikis, documentation or other related resources you can learn from in addition to the underlying code and bug trackers. YMMV:

My goal with sharing these is to point out the many very good tools available for testing, and I hope I’ve accomplished this. Did I miss any projects that you find useful for testing? If so, leave a comment and tell me what you like about the particular projects!

9 GitHub Lists for Testing

I spend a lot of time on GitHub, and it can be a great place for finding open source libraries, tools, frameworks and pretty much anything else you might want to version control. This includes lists (and, more often than not, lists of lists). The challenge is finding just those lists that contain value, and not chasing each individual list of lists in a recursive, never-ending search.

Why Lists?

Over time, when looking for certain types of failures (or bugs), patterns emerge. Some of these patterns can be captured as data, like password dictionaries or image catalogs, or as collections of test ideas. Some authors have made lists of heuristics (Bach, Bolton & Hendrickson) while others have published lists of failures common to certain applications or languages (Kaner).

Here are 9 lists I’ve found to be good references when testing.

Input Fields:

Useful for boundary analysis and equivalence class partitioning, input field catalogs are basically collections of values you can use to try to trigger failures based on the data type of the input field. For this reason they are often broken into specific data types like Strings, Integers, Floating Points, etc.
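
As a tiny illustration of the idea, here’s a made-up string catalog sketched in Ruby; real catalogs are far larger, but the shape is the same:

    # A hypothetical mini-catalog of interesting string values for an input field.
    STRING_CATALOG = {
      empty:      '',
      whitespace: "  \t  ",
      very_long:  'x' * 10_000,
      unicode:    'Iñtërnâtiônàlizætiøn ☃',
      html_ish:   '<script>alert(1)</script>',
      sql_ish:    "Robert'); DROP TABLE Students;--"
    }.freeze

    # Feed each value into the field under test and watch what happens.
    STRING_CATALOG.each { |label, value| puts "#{label}: #{value.inspect}" }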

Security:

Turns out there are some good references, both tools and reading material, for learning more about security, including password libraries and other data:

More:

Other valuable lists that don’t easily fall into a single category:

  • My own catalog of images. Based on size and format you can use this catalog for testing image uploads. Searching Google I wasn’t able to find any specific collection for testing, so I made my own.
  • Awesome Test Automation. A curated list of test automation frameworks, tools, libraries, etc. The list is pretty good. I use Ruby and they had a good list of Ruby gems for generating test data.
  • TestingConferences.org. A simple list of software testing conferences and workshops published collaboratively with the testing community.
  • Free Software Testing Books. That’s right, a collection of free software testing books. Although some of these appear to be papers, guides and “demo” chapters, it’s still a good (cough, free) reference!

Did I miss any lists that you find useful for testing? If so, leave a comment and tell me what you like about the particular list!

Exploratory Charters in GitHub

[Image: GitHub Issues for TestingConferences.org]

Since CAST 2015 I’ve wanted to implement an interesting idea that could potentially give my testing greater visibility and greater scrutiny: putting exploratory testing charters into our project tracking tool.

At work we use GitHub to host our code, which means we use GitHub Issues as our bug tracker. We then use ZenHub on top of GitHub Issues as our project management / tracking tool. As the sole (omega) tester on the development team, I use GitHub Issues for a number of activities:

  • Filing and reviewing issues (bugs and enhancements)
  • Occasionally reviewing pushed code changes referenced in the issues I’m tagged on
  • Occasionally reviewing pull requests
  • Committing the rare change to our production code (I’ve done this once so far)
  • Committing code changes to our test automation repo

Do charters belong?


Feedback from a Developer (without knowing it)

Recently someone asked one of my developers if we created formal test plans. Since the conversation was in an email, my developer cc’d me on it and responded saying he wasn’t sure, but that he had seen me create test cases in our bug tracker, SpiraTeam. He wasn’t sure if that qualified as a formal test plan.

Upon reading the email, I responded asking what the questioner considered a formal test plan. Then I explained how we use Mind Maps to detail test designs, and that this works for us as a “test plan”. Yet I kept wondering when I had last written a test case, so I went through our system and found a timestamp on the last created case. It read July 7th, 2011.

Curious still, I sent an email to my developer and asked when was the last time he saw me create a test case in Spira. His response was:

“I don’t know, didn’t you create some for the release before the DW or something? Maybe it wasn’t test cases, but I’ve seen you do things that take forever in Spira, I always thought they were test plans or test cases.”

“…I’ve seen you do things that take forever!” Yup, that’s what writing out explicitly defined scripts (sometimes called test cases) will do. They take time to write out, time to “execute”, don’t necessarily help the tester plan their testing, and are abandoned after use. After numerous years, it was time to move on to something more effective.

This conversation was interesting for a few reasons:

  • First, my developer confused a test case with a test plan. Not a big deal and not unexpected, but interesting just the same. I’m sure many people in my organization would answer the question the same way.
  • Second, my developer seemed to remember me writing test cases from over a year ago but didn’t recall that a week prior I had sent him a Mind Map to review. I wonder if he remembers *seeing* me work on the Mind Map?

Perhaps writing test cases seemed so strange (or perhaps wasteful or tedious?) to him that he couldn’t help but remember it? Like when your parents remember something you did wrong and don’t remember things you did well.

Perhaps he didn’t connect the Mind Map with testing, or to test planning? If my developer’s response is the consensus across my organization then it says the visibility and/or understanding of my work isn’t where it needs to be.

Thanks for the feedback. =-)

5 Ways to Revolutionize your QA

I can’t remember where I originally found this post and the corresponding eBook, but the eBook is definitely worth taking a look at. Here is the former uTest blog post, now an Applause blog post.

The 5 ways or insights are:

  1. There are two types of code and they require different types of testing
  2. Take your testing down a level from features to capabilities
  3. Take your testing up a level from test cases to techniques
  4. Improving development is your top priority
  5. Testing without innovation is a great way to lose talent

In point 2, James Whittaker also talks about a planning and analysis tool he used at Microsoft called a CFC, or Component – Feature – Capability analysis. This allowed them to take testing down from features to capabilities.

The purpose is to understand the testable capabilities of a feature and to identify important interfaces where features interact with each other and external components. Once these are understood, then testing becomes the task of selecting environment and input variations that cover the primary cases.

While this tool was designed for testing desktop software I’m inclined to think it would work well for testing web applications. Essentially with the CFC you are mapping out the individual components / features in the web application in a branching form that closely resembles a mind map. Matter of fact a mind map might be better! =)