How to Write a Good Bug Report: Use RIMGEN

RIMGEN is an acronym and mnemonic (or memory aid) to help us remember the elements of a good bug report. It can help anyone write a better bug report, or review existing bug reports for possible improvements.

In general my preference with reporting bugs is to:

  1. Work with a developer to get them fixed, either by mentioning the bug in passing or by pairing with them. This is by far the most effective way to get bugs fixed.
  2. Fix it myself, although most of these are very simple bugs.
  3. Write a bug report. These are good instructions for that process.

How to Write a Good Bug Report

Writing a bug report seems simple on the surface. People in many roles (programmers, testers, internal and external customers, analysts) write them without much, if any, practice. Yet we’ve all seen those bug reports come in without much detail or context, and guess what? They don’t get fixed. Just because something is easy to do doesn’t mean it is easy to do well.

We’re writing a good bug report to get a problem fixed. Programmers are busy people, with lots of things going on and plenty of demands on their time. They don’t want to look at a bug report that doesn’t make sense, requires too much work on their part to understand, or is offensive. Let’s be honest: if a bug is fairly obvious, a programmer isn’t likely to need much information, motivation or influence to fix it. They might not even require a bug report to be written (mention it in passing or through Slack). Those aren’t the bugs we are interested in advocating for. We’re advocating for the bugs or problems we understand to be important (for some reason) but that might not be easily understood as such.

How do we advocate for bugs to be fixed? We develop reports that clearly communicate the problem and reflect it in a harsh but honest way, so that some stakeholder realizes the risk and wants the bug fixed. Just reproducing the problem might not always be enough. We want to make programmers want to fix the bug by influencing their decision; we’re selling them on fixing the bugs. To “sell” someone on fixing a bug we need to write a good report.

To help us understand the elements of a good report we can use the heuristic RIMGEN.


R – Replicate it

When writing a bug report we should include enough information to get the bug fixed, including the steps or conditions necessary to replicate the problem on another machine. The lecture recommends using at least 2 different machines: a powerhouse computer and a less powerful, resource-constrained machine. (Virtual environments might work as well.)

I – Isolate it

After we’ve replicated the problem, we want to identify the smallest number of critical steps necessary to replicate the problem by eliminating some of the variables. Clarity is the goal.

We also want to make sure to write up only one problem per report so we don’t get into a situation where one part of the bug is fixed and the other remains open. Example: OpenOffice Impress crashes after importing a file, but our reproduction is 25 steps long. We do some more testing that helps us eliminate some variables until we’re able to get the problem replicated in 6 steps.

M – Maximize it

Maximizing the problem means doing some follow-up testing to see if we can find a more serious problem than the one we were originally going to report. Follow-up testing can consist of varying our behavior, our environment, our inputs or even our system configuration. The more serious the problem, the more likely (or easier) it will be to convince someone to fix it. Example: We find a problem in OO Impress where it won’t import old PowerPoint files; the application just won’t respond. After some follow-up testing we find Impress actually crashes if we use an old PowerPoint file over a certain size.

Maximizing a bug (also known as follow-up testing) is one of the more interesting and time-consuming aspects of writing a bug report. There’s quite a bit of detail in just the four categories of variables: vary our behavior, vary our environment, vary our data, and vary our system configuration. For those interested, Markus Gärtner goes a little more in-depth on each in this post.

G – Generalize it

Generalizing the bug means seeing if the problem affects more people in more ways than we originally thought. If the bug was a corner case, maybe we can un-corner it and show that it occurs under less extreme conditions (perhaps by replicating it on a broader range of systems?). Example: Same problem as above in OO Impress, where it crashes when we try to import old PowerPoint files over a certain size. We vary the data (input) and find out Impress actually crashes when we try to import any file type over a certain size. Now the problem looks really bad and affects far more users than just older PowerPoint users.

E – Externalize it

Switch the focus of the report from the program failing to the stakeholders who will be affected by it and show how it will affect them. (Historical review?) Example: OO Impress crashing when importing any file type of a certain size will stop all users from importing comparable application files (like from PowerPoint). This could prevent new users from downloading and installing the application and even turn off existing users who rely on the compatibility.

N – Neutral Tone

Make sure the bug report is easy to understand and that its tone is neutral, so it doesn’t offend anyone reading it (perhaps the developer who created the bug) or discredit your reputation as the bug reporter (at least in the eyes of the reader).
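Putting the elements together, a complete report might look something like this. (A hypothetical sketch based on the Impress example above; the build number, size threshold, and platforms are invented for illustration.)

```
Title: Crashes when importing any file over ~10 MB (File > Import)

Steps to replicate (build 3.4.1; tried on Windows 7 and Ubuntu):
1. Open Impress
2. File > Import
3. Select any file larger than ~10 MB
Result: application crashes; any unsaved work is lost
Expected: file imports, or a readable error message appears

Notes: Originally seen only with old PowerPoint files. Follow-up
(Maximize/Generalize) testing showed the file type doesn't matter,
only the size.
Impact: blocks all users importing files from comparable
applications (like PowerPoint), and may turn away users who rely
on that compatibility.
```

Notice how the title follows “Fails, When”, the steps are isolated to the minimum, the notes show the maximized and generalized result, the impact line externalizes it, and the tone stays neutral throughout.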


To help us communicate (and “sell”) the problem clearly in bug titles and summaries we can use the “Fails, When” protocol: describe the failure first, then the conditions under which that failure happens.

For example, in Bug Advocacy we look at two (apparently) well-known problems in Microsoft Paint 95 (I think that’s the version) related to cutting and pasting from a zoomed section. When I wrote up my problem summary (before understanding the protocol) it came out as: “Cutting (Ctrl-X) a selected area while zoomed causes inconsistent behaviors.” This summary was too vague, dwelled on minor details, and although I had spotted two bugs, it tried to combine the two summaries into one. If I had used the “Fails, When” protocol, a more appropriate set of bug titles might have been: “Cut doesn’t cut (freehand, zoom)” or “Cuts the wrong area (freehand, zoom, resize window)”.

To avoid being vague when summarizing and communicating a problem, consider how much insight the title or summary gives into the nature, severity, or scope of the problem. Does it over-emphasize minor details (especially in the bug title)?

A Few Other Things

  • Looking for something a little more portable? Check out my Bug Reporting Guidelines document.
  • This is an update of my previous post “How to Write a Good Bug Report (and be a Bug Advocate)” with some slight alterations.
  • Design vs coding bugs are slightly different and have a different emphasis using RIMGEN. I hope to address this later in a separate article.
  • Advocating for a bug to be fixed is not an invitation to be sleazy or aggressive in your reporting. We are trying to influence a decision based on our research and understanding, not trying to ruin our credibility by pushing something the business has no desire to fix.
  • RIMGEN is the updated version of RIMGEA.

Exploratory Charters in GitHub


Since CAST 2015 I’ve wanted to implement an interesting idea that could potentially give my testing greater visibility and greater scrutiny: putting exploratory testing charters into our project tracking tool.

At work we use GitHub to host our code, which means we use GitHub Issues as our bug tracker. On top of GitHub Issues we use ZenHub as a project management / tracking tool. As the sole (omega) tester on the development team I use GitHub Issues for a number of activities:

  • Filing and reviewing issues (bugs and enhancements)
  • Occasionally reviewing pushed code changes referenced in the issues I’m tagged on
  • Occasionally reviewing pull requests
  • Committing the rare change to our production code (I’ve done this once so far)
  • Committing code changes to our test automation repo
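For context, an exploratory charter filed as a GitHub issue might look something like this. (A hypothetical sketch, loosely following the Session-Based Test Management charter format; the mission, labels, and timebox are invented for illustration.)

```
Title: Charter: Explore CSV import using malformed files

Mission: Explore the CSV import feature with malformed and
oversized files to discover data-handling problems.

Areas: import pipeline, validation, error messages
Timebox: 90 minutes
Labels: testing-charter, exploratory

Debrief notes (added after the session):
- issues filed: (link them here)
- follow-up charters: (list any new charters this session suggested)
```

Filed this way, the charter sits alongside the bugs and enhancements the rest of the team already watches, which is exactly the visibility and scrutiny I was after.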

Do charters belong?


NRG Global Test Competition Retrospective

Roughly two and a half weeks ago I competed in the first NRG Global Test Competition. The idea behind the competition was simple: get a bunch of people/teams together to test a few products, split the competition into two days (one for functional testing and another for performance testing), and based on the reports submitted the judges would award points and announce winners. The full details are available here and here.

This was the first online testing competition I’d tried, but thanks to my experiences with testing challenges and rapid testing online I knew I’d have fun once I got past the quirks. By quirks I mean it can take time to get comfortable with the discussion format, figure out how to ask questions, how best to communicate with my fellow team members, etc. The competition took place at 10 am Eastern, which sucks if you live on the west coast and have to wake up before 7 am like I did. It was all for fun anyway.

For how early in the morning it was, and how new the competition was, I think I did reasonably well. Not great, not even good, but reasonable. I think the best way to phrase it is: I’m not happy with my work. (I might be overly critical here, but still.) We only had 3 hours from the introduction of the products under test to learn the products, ask questions of the “owner”, test, ask more questions, file bugs and write a report. Yet when I think back on what we turned in, I’m not happy with it. Let me explain.

My team member and I barely communicated with one another. We were using Skype, but we didn’t do much planning ahead of time (not that we could have, because nothing was public), so when it came time for the competition it was a simple “hi”, “what are you working on” and “I’ll look at x”. That was it. We each went to different applications. Thinking back on it now, I believe we would have done much better working on the same application, talking to one another about what we were seeing. My experience has been that any collaboration, no matter how small, results in finding and learning amazing things.

At the time of the competition I considered using Bach’s HTSM to map out the application, but didn’t. I wish I had. Even though it’s a bit detailed and we were on a short deadline, I think the heuristics would have led me to think about and discover even more potential problems. At the very least I’d feel more confident in what I had tested.

It took me an hour or so to really get started looking at the products: deciding which one to test and with what equipment (iPad), browsers, etc. I’m still blaming the time of day. I started with some simple touring of this “home built” application that must have been built specifically for the competition, because it was really simple and full of problems. Even though I saw lots of problems initially, I took notes and kept searching until I felt I had covered the entire application as best I could. Then I circled back, asked a few questions of the “product owner” Matt Heusser, and began testing the problems I saw. By that time I had just enough time to get my bugs written up in the tracking system (I had maybe 5) and start working on our Test Report. I think we got our report in right at the deadline.

I wasn’t able to commit to the second part of the competition, the performance testing, and I’m not sure if my team member was able to either. I knew going in that I couldn’t commit time to it; however, I’m still holding on to hope that I’ll get a chance to play with the AppLoader tool. Despite my displeasure with my performance I’m glad I joined; in fact, I’m looking forward to getting feedback on how others think we did. =)

Feedback from a Developer (without knowing it)

Recently someone asked one of my developers if we created formal test plans. Since the conversation was in an email, my developer cc’d me on it and responded saying he wasn’t sure but he had seen me create test cases in our bug tracker, SpiraTeam. He wasn’t sure if that qualified as a formal test plan.

Upon reading the email I responded, asking what the questioner considered a formal test plan. Then I explained how we use Mind Maps to detail test designs, and that works for us as a “test plan”. Yet I kept wondering when I had last written a test case, so I went through our system and found a timestamp on the last created case. It read July 7th, 2011.

Curious still, I sent an email to my developer and asked when was the last time he saw me create a test case in Spira. His response was:

“I don’t know, didn’t you create some for the release before the DW or something? Maybe it wasn’t test cases, but I’ve seen you do things that take forever in Spira, I always thought they were test plans or test cases.”

“…I’ve seen you do things that take forever!” Yup, that’s what writing out explicitly defined scripts (sometimes called test cases) will do. They take time to write, take time to “execute”, don’t necessarily help the tester plan their testing, and are abandoned after their use. After numerous years it was time to move on to something more effective.

This conversation was interesting for a few reasons:

  • First my developer confused a test case with a test plan. Not a big deal and not unexpected but interesting just the same. I’m sure many people in my organization would answer the question the same way.
  • Second my developer seemed to remember me writing test cases from over a year ago but didn’t recall that a week prior I sent him a Mind Map to review. I wonder if he remembers *seeing* me work on the Mind Map?

Perhaps writing test cases seemed so strange (or perhaps wasteful or tedious?) to him that he couldn’t help but remember it? Like when your parents remember something you did wrong and don’t remember things you did well.

Perhaps he didn’t connect the Mind Map with testing, or to test planning? If my developer’s response is the consensus across my organization then it says the visibility and/or understanding of my work isn’t where it needs to be.

Thanks for the feedback. =-)

Rapid Testing Intensive Confirmed!

(Stolen from the Rapid Testing Intensive site)

It’s official: I’m booked for the onsite Rapid Testing Intensive with James and Jon Bach at the end of July on Orcas Island in Washington. According to the website this testing intensive will be based on “… Session-Based Test Management and Rapid Software Testing methodologies” and will “…allow you to see how the modern theory of testing meets practical work.” Sounds like a blast.

There are 10 onsite and 42 online participants as of 4/2/12, and one of those onsite participants is Robert Sabourin. I was in his “Using Visual Models for Test Case Design” class last year at StarWest, so it will be interesting to work side by side with him as well as a few of the other participants.

As I said in my prior post my goal is for: “Experience and feedback on modern testing methodologies!” Can’t wait.

5 Ways to Revolutionize your QA

I can’t remember where I originally found this post and the corresponding eBook, but the eBook is definitely worth a look. Here is the former uTest blog post, now an Applause blog post.

The 5 ways or insights are:

  1. There are two types of code and they require different types of testing
  2. Take your testing down a level from features to capabilities
  3. Take your testing up a level from test cases to techniques
  4. Improving development is your top priority
  5. Testing without innovation is a great way to lose talent

In point 2, James Whittaker also talks about a planning and analysis tool he used at Microsoft called a CFC or Component – Feature – Capability analysis. This allowed them to take testing down from features to capabilities.

The purpose is to understand the testable capabilities of a feature and to identify important interfaces where features interact with each other and external components. Once these are understood, then testing becomes the task of selecting environment and input variations that cover the primary cases.

While this tool was designed for testing desktop software, I’m inclined to think it would work well for testing web applications. Essentially, with the CFC you map out the individual components/features of the web application in a branching form that closely resembles a mind map. As a matter of fact, a mind map might be better! =)
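To make that concrete, here’s what a CFC-style breakdown might look like for a web application. (A hypothetical sketch; the store, its features, and capabilities are all invented for illustration.)

```
Checkout (component)
├── Cart (feature)
│   ├── add / remove items (capability)
│   ├── update quantities (capability)
│   └── cart persists across sessions (capability)
└── Payment (feature)
    ├── accepts credit card and PayPal (capability)
    ├── validates card number and expiry (capability)
    └── interface: external payment gateway (interaction to test)
```

Once the capabilities and interfaces are laid out like this, test design becomes a matter of choosing environment and input variations that cover them, which is exactly the purpose described above.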