What Testers Need to Learn

Sunday night I attended a live webinar by James Bach entitled “What Testers Need to Learn” that was put on by Tea time with Testers. It seemed like an interesting topic so I joined (it only cost $30).

The webinar got off to a slow start thanks to some technical issues with GoToMeeting but as soon as they were resolved James jumped into his talk: his personal vision of the skills testers need to have based on his many years of experience coaching testers.

James shared his recently updated tester’s syllabus (a free download from his site) and then walked through it explaining some of the areas. The syllabus he shared was actually a part of a specially created slide deck composed of existing materials but arranged for this talk. You can download the slide deck here. If you haven’t seen (or downloaded) the syllabus these are the main areas:

  1. General Systems
  2. Applied Epistemology
  3. Social and Cognitive Science
  4. Mathematics
  5. Testing Folklore
  6. Communication
  7. Technology
  8. Software Process Dynamics
  9. Self-Management

A synopsis (what I remember) of the walk-through:

General Systems theory involves understanding what makes systems complex. It's a fundamental testing skill: how to approach a system, break it down, and then understand what's there. James recommends An Introduction to General Systems Thinking by Weinberg.

Applied Epistemology. Epistemology is the study of how we know what we know. Scientific thinking helps us understand applied epistemology. Testing is the process of creating experiments and exploring them, so understanding how to design experiments is also very critical. Understanding written and unwritten requirements requires an understanding of epistemology.

Cognitive Science. The difference between how people should think (a factor in epistemology) and how people actually think is cognitive science. As testers we need to understand how people's perceptions and biases factor into their work. Human factors relate to how people use and misuse systems. Testers are constantly learning, so learning theory is huge.

Mathematics. Testers seem to be bad at or afraid of mathematics, and the result is a field where people misuse and abuse it. Counting test cases and other metrics are often at fault here. People who don't understand mathematics are too afraid to ask questions or challenge assumptions. The number one thing James uses is combinatorics or, as he describes it, "counting things in combinations". Graph theory is also big for identifying different pathways in testing. You don't need to be an expert in these things but you need to know enough to be comfortable learning more.
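James didn't show any code, but the "counting things in combinations" idea is easy to sketch in a few lines of Python. The browser/OS/locale parameters below are my own hypothetical example, not something from the talk:

```python
# How many test configurations exist for a product that supports
# 3 browsers, 4 operating systems, and 2 locales?
from itertools import product
from math import comb

browsers = ["Chrome", "Firefox", "IE"]
oses = ["WinXP", "Win7", "OSX", "Linux"]
locales = ["en-US", "de-DE"]

# Exhaustive testing means covering every combination of every parameter.
all_configs = list(product(browsers, oses, locales))
print(len(all_configs))  # 3 * 4 * 2 = 24 configurations

# comb(n, k) counts the ways to choose k items from n -- for example,
# the number of distinct parameter pairs among our 3 parameters:
print(comb(3, 2))  # 3 pairs: (browser, os), (browser, locale), (os, locale)
```

Even this much arithmetic makes it obvious why "test everything" rarely scales: add a few more parameters and the product of the counts explodes, which is exactly the kind of assumption combinatorics lets a tester question.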

Testing Folklore. Folklore dominates the testing field today. Testing folklore consists of ideas that are widely spread in the industry yet poorly grounded in scientific practice / not based on the scientific method. Examples include lists of testing techniques, poorly thought-out definitions or lists of words, testing certification, most things the ISTQB does, certain ideas about what testing is, etc. Communities of testers are found here, including the context-driven school that James belongs to. Highly educated testers need to understand this; if you don't want to be an expert you can ignore it. (Gotta love his attitude.)

Communication is important because testers need to write, make reports, and design documentation for the appropriate audience. Social Legibility involves presenting yourself and your work in a way people think they can understand.

Technology. The more you know about technology the better you’ll be as a tester. For example, programmers who don’t know anything about testing can be good in many respects because their knowledge of technology is great. This seems to be the area most testers understand the most. It’s helpful to know technology but great testers need to know about the other things as well.

Software Process Dynamics is important, though not as important as other things. (No mention of how it ranks among its peers.) Software process dynamics is about how projects go wrong, why it's good to find certain types of bugs at certain times, etc.

Self-Management. A lot of things are lumped in here because it's a big deal in testing. This area is entirely non-technical and is about being a grown-up: making plans and then doing things. Ethical issues, contracts, accounting, record-keeping, and being helpful are all lumped in here.

After the overview of the syllabus James answered some participant questions.

The first was asked by a gentleman who worked for or was affiliated with Tea time with Testers. I think the question was how testers should balance learning with time commitments and how effective someone can be at learning. James' response was something like:

You have to learn on the job, then work nights and weekends to be a great tester. Read a lot and try to experiment with a lot of things. Weekend Testing can help. He and others offer free coaching sessions via Skype. Other options include working with a peer group or other like-minded people; preferably don't go it alone. Try to find somebody to work with, and if you come up empty, ask James. Also build a step-by-step portfolio of your work – where a portfolio is a set of your work that you can show to other people to demonstrate what you do as a tester.

There were a few more questions but since this webinar took place from 10:20 PM PST until 11:45 PM PST and those questions didn’t interest me I didn’t pay attention. =)

James mentioned a few of his recommended books, found on page 6 of the slide deck. Like I mentioned above, I've posted a copy in my Dropbox folder here. All the people James respects as experts read and study books voraciously. They also have a support group that does the same.

The last part of the talk focused on skills he sees missing or needing improvement in the people he coaches – also found in the slide deck. Rapid Technical Self-Education, Test Framing, Test Factoring, etc. are some of the skills he focuses on when coaching. Student Syndromes are common problems he sees.

Lastly James shared and briefly discussed his Exploratory Testing Dynamics paper. Unfortunately, at this point I don't remember what was said about it. I was recording the whole broadcast (which hopefully someone will make available online) but somehow it managed to crash and I lost my recording.

My (testing) learning problems are related to asking for help, not within my team or company but with people I don’t know. I work alone the majority of time but really I need someone to work with that can help push me. I think it’s time for a coaching session from James.

1993 World Book definitions for Quality and Testing

According to the 1993 “The World Book Dictionary” the definition for Quality Control is

“[T]he inspection of manufactured products from the raw materials that go into them to their finished form to insure that they meet the standards of quality set by the manufacturer.” (pg. 1703.)

That same dictionary didn't have any definition for the term quality assurance but had many definitions for the word quality (seven, to be exact).

The most relevant definition for Tester was:

“[A] person or thing that tests.” (pg. 2167)

My parents still have their set of 1993 World Book encyclopedias, which came with a two-book set of dictionaries.

Throw someone else in to help QA it faster!

“Throw someone else in to help QA it faster!”

A former boss (or two) of mine

I've heard this statement many times in my career, but it happened again just recently and got me thinking. Aside from the poor choice of words (is "QAing" something really a verb?), why would someone say this?

This seems to happen when management realizes it will take longer to test something than they initially planned and/or some client demands the product sooner. The most recent occurrence came when management didn't communicate the expected release date and freaked out at the estimated test duration. My response was: you can have the product whenever you want, but let me tell you what won't be tested. This elicited the response, "no, we don't want to not test it, how about we… throw someone else in to help QA it faster." Clearly someone hasn't heard of Brooks's Law.

Adding manpower to a late software project makes it later.

Fred Brooks

Brooks's Law is a term coined by Fred Brooks in his book The Mythical Man-Month, which states "adding manpower to a late software project makes it later". It also appears the person saying this doesn't understand what Quality Assurance (or QA) means.

If the role of software testing is to help the team understand vital information about the product, and you bring in someone who doesn't understand how that is accomplished, the value both provide is diminished. You slow down the primary tester as they coordinate with the new person while work is divided up based on skill and comfort level. Configurations of different machines, problems local to the new user, and a whole host of other issues can crop up as well. In other words, it takes time for people to become productive on a project.

Anyone who does time-based work (has a deliverable) can tell you it's helpful to know when you need to be done. Crazy, right? Testing a product is a never-ending journey, but specific periods of testing can end – for example, when the product is needed in the field. There will always be more testing to do, but you don't always have the time, nor is it always a good use of resources. Dates can help. If this statement comes up often, either management or the team has problems communicating about when things need to be done. Setting dates isn't a surefire method since dates can change, but so can the decision about what still needs to be tested and what's acceptable risk for the field.

While it's possible for an additional person to add some incremental value to a project (they might have some unique perspectives that can help, especially if they are subject matter experts), it's going to take them a while. Don't assume "throwing someone else in" will do anything other than make the testing duration longer.

Performance Reviews

It's that time of year at my company when we meet with our respective bosses to discuss how well we did. Review time is probably the least fun time of the year, not because I am fearful of how I might do but because it's time to give my boss hell for our bad performance management system.

Our company has a standard form that was borrowed from some other company's inept performance management system (most companies I've worked for also had bad performance systems). This form has evaluation areas that don't apply to our roles and some areas that nobody knows how to evaluate. We start by filling out a "self review" and then send it to our respective bosses for their comments and grading – a scale from "didn't meet expectations" to "far exceeds expectations".

According to the book The One Minute Entrepreneur there are 3 primary parts to an effective performance management system:

  1. Performance planning. This is where managers and their people get together to agree on goals and objectives to focus on.
  2. Day to day coaching. This is where managers help their people in any way they can so they become successful.
    1. It doesn't necessarily mean you meet up or talk about how things are going every day. Instead managers should work to support their people, praising when things go right and correcting when things go wrong. This is the stage where feedback happens – where real managing is done.
  3. Performance evaluation. This is where managers and their people sit down and examine performance over time; also called a performance review.

(Note: Both The One Minute Manager and The One Minute Entrepreneur are good books with a simple message: clear goals make managing far easier and better for everyone involved.)

At the beginning of each of the last few years, my boss and I have attempted some performance planning. Then we get busy, lose sight of the planned goals, and the coaching happens sporadically or never. This year will be the same. We'll review each other's comments and grades, and I'll have to "adjust" his misperceptions (if I get anything below far exceeds) or point out that he never communicated any goal other than what I achieved. I make it a point to debate and remove any area on the form that is too vague or subjective, or that nobody knows how to evaluate. I don't like doing this but I think it's the only fair approach.

Looking back, I should have done this more often. Most of the companies I've worked for do #1 occasionally and #3 every year but almost never do #2. Day to day coaching is the hardest but it provides the most value. My boss usually has a few areas where they think I could have done better, to which my response is always, "why didn't you say something earlier?" If they had, I could have corrected my behavior and brought value to both of us sooner.

Some managers don't tell their employees what is expected of them; they just leave things alone and mark them down when they don't perform well. I would imagine some do this because they don't know any better. Others do it to look good: they believe and/or fear that ranking everyone at the highest level of the review scale will make it look like they can't discriminate between poor and good performance. I'm grateful not to have a manager like that.

If there's little planning and no coaching, then what is the point of the review? Perhaps it's worth skipping the review altogether, assuming a raise isn't impacted by not having it. However, I fear my future raises are tied to a poor review system, which is why I give my boss such a difficult time. He's well aware of the problems with the system but doesn't do anything to fix them. I want to avoid believing in infallible systems: it's broken, so let's fix it. Until then I don't have a choice except to give my boss hell.