My Tester’s Commitments

My job is to help programmers look good; to support them as they create quality; to ease that burden instead of adding to it. In that spirit, I make the following commitments:

  1. I provide a service. You are an important client of that service. I am not satisfied unless you are satisfied.
  2. I am not the gatekeeper of quality. I don’t “own” quality. Shipping a good product is a goal shared by all of us.
  3. I will test your code as soon as I can after you deliver it to me. I know that you need my test results quickly (especially for fixes and new features).
  4. I will strive to test in a way that allows you to be fully productive.
  5. I’ll make every reasonable effort to test, even if I have only partial information about the product.
  6. I will learn the product quickly, and make use of that knowledge to test more cleverly.
  7. I will test important things first, and try to find important problems. (I will also report things you might consider unimportant, just in case they turn out to be important after all, but I will strive to spend less time on those.)
  8. I will strive to test in the interests of everyone whose opinions matter, including you, so that you can make better decisions about the product.
  9. I will write clear, concise, thoughtful, and respectful problem reports. (I may make suggestions about design, but I will never presume to be the designer.)
  10. I will let you know how I’m testing, and invite your comments. And I will confer with you about little things you can do to make the product much easier to test.
  11. I invite your special requests, such as if you need me to spot check something for you, help you document something, or run a special kind of test.
  12. I will not carelessly waste your time.

Inspired by A Tester’s Commitments.

Becoming a Software Testing Expert

From a software tester’s point of view, a lecture entitled Becoming a Software Testing Expert is enticing; a lecture by James Bach, even more so. Bach, widely regarded as an expert in software testing, is also one of its most passionate advocates. As an expert, he’s in a good position to help others.

He makes the case that testers need to be professional skeptics. If testers are constantly skeptical about what they are supposed to test, ask lots of questions, and can back up their reasoning for the tests being performed, then they should do very well. A software tester’s best assets are the ability to rapidly learn a new system and to apply that learning to find gaps in the system. Some gaps will be based on written requirements, some on unwritten ones.

The lecture, presented at Google, is worth a watch:

If you want more information about the lecture, check out the slides on Bach’s site: http://www.satisfice.com/presentations/bste.pdf

It’s a rude awakening when you realize you can become an expert at your craft: you just need to know it’s possible, set a goal, and overcome the hubris that builds up from working on the same application for so long. When you start down the path toward becoming an expert, it stops being a day job and becomes more of an adventure.

I’m happy to say I’m skeptical of my skepticism towards my current testing approach. =)

StarWest 2011 Keynote Presentations

I’ve uploaded two keynote presentations from this year’s (2011) StarWest conference.

The first is James Whittaker’s Keynote entitled All That Testing is Getting in the Way of Quality:

The second is the lightning-round keynote featuring a number of testing luminaries, including Michael Bolton, Lee Copeland, Bob Galen, Dorothy Graham, Hans Buwalda, Dale Emery, Julie Gardiner, Jeff Payne, and Martin Pol:

Enjoy!

James Bach’s Open Lecture on Software Testing

I got to talk to James Bach last week at StarWest 2011 in Anaheim. I joined his Critical Thinking class for its final 2 hours on Tuesday after walking out on my boring afternoon half-day tutorial on Open Source tools.

I was surprised that I was able to catch up to and chat with him after the class. I asked about the books he had recommended that were on sale at the convention, at which point he gave me his copy of Captivating Lateral Thinking Puzzles, the one he’d shown in class. (Thank you, although my girlfriend finds it amusing to open the book and quiz me at random.) In our chat I told him I enjoyed this open lecture:

At some point during our conversation I asked when he would be doing another open lecture and where it would be (hoping it would be somewhere near SoCal). After detailing his itinerary he came to the realization that he does open lectures everywhere in the world except the US. Sad. (In this instance an open lecture is one where someone hires James to speak and then anyone who’s interested can join by purchasing a ticket.)

In this video James is giving an open lecture at the Estonian IT College. He uses some new and some familiar terminology, which I’ve listed below. I need to work on becoming a professional skeptic!

A quick summary of the testing terminology used (the first three coverage terms are sketched in code after the list):

  • Decision coverage
  • Predicate coverage
  • All-Path coverage
  • Click frenzy
  • Rumble strip heuristic
  • Error message hangover
  • Shoe test
  • Branching and backtracking
  • Follow-up testing
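
To make the first three coverage terms concrete, here’s a toy example of my own (not from the lecture). The function and the inputs are invented purely for illustration:

```python
# One decision ("if") built from two predicates (conditions).
def free_shipping(total, is_member):
    if total >= 50 or is_member:
        return True
    return False

# Decision coverage: the decision as a whole evaluates both True and False.
assert free_shipping(60, False) is True    # decision -> True
assert free_shipping(10, False) is False   # decision -> False

# Predicate (condition) coverage: across the suite, each individual
# predicate takes both truth values.
assert free_shipping(60, False) is True    # total >= 50 is True here
assert free_shipping(10, True) is True     # is_member is True here
assert free_shipping(10, False) is False   # both predicates are False

# All-path coverage: every distinct path through the code is executed.
# This tiny function has only two paths, so the cases above already
# cover them; in real code the number of paths explodes quickly.
```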

James Bach and Michael Bolton both use critical thinking puzzles in their lectures. The two puzzles in this video are the flow chart and the calculator. I think the calculator problem could be used in interviews to help identify someone’s thinking pattern.

5 Ways to Revolutionize your QA

I can’t remember where I originally found this post and the corresponding eBook, but the eBook is definitely worth taking a look at. Here is the former uTest blog post, now an Applause blog post.

The 5 ways or insights are:

  1. There are two types of code and they require different types of testing
  2. Take your testing down a level from features to capabilities
  3. Take your testing up a level from test cases to techniques
  4. Improving development is your top priority
  5. Testing without innovation is a great way to lose talent

In point 2, James Whittaker also talks about a planning and analysis tool he used at Microsoft called a CFC, or Component – Feature – Capability analysis. This allowed them to take testing down from features to capabilities.

The purpose is to understand the testable capabilities of a feature and to identify important interfaces where features interact with each other and external components. Once these are understood, then testing becomes the task of selecting environment and input variations that cover the primary cases.

While this tool was designed for testing desktop software, I’m inclined to think it would work well for testing web applications. Essentially, with the CFC you are mapping out the individual components and features of the web application in a branching form that closely resembles a mind map, as in the sketch below. Matter of fact, a mind map might be better! =)
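
To make the idea concrete, here’s a hypothetical sketch of a CFC-style breakdown as a simple tree in Python. The components, features, and capabilities are invented for illustration and aren’t taken from Whittaker’s eBook:

```python
# A hypothetical Component -> Feature -> Capability map for a web app,
# modeled as a nested dict (all names are made up).
cfc = {
    "Account": {                                   # component
        "Login": [                                 # feature
            "authenticate valid credentials",      # capabilities
            "reject invalid credentials",
            "lock account after repeated failures",
        ],
        "Password Reset": [
            "send reset email",
            "expire stale reset links",
        ],
    },
    "Checkout": {
        "Cart": [
            "add and remove items",
            "recalculate totals on change",
        ],
    },
}

# Walking the tree enumerates the testable capabilities -- the raw
# material for choosing environment and input variations to cover.
for component, features in cfc.items():
    for feature, capabilities in features.items():
        for capability in capabilities:
            print(f"{component} / {feature}: {capability}")
```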

STAR West 2011

It’s official: I’ve registered for STAR West 2011 (also known as Software Testing Analysis and Review for the west coast) in Anaheim, CA. I’m only going for Monday and Tuesday, the tutorial days, but I’m excited about the ones I’ve chosen:

Monday:
A Rapid Introduction to Rapid Software Testing with Michael Bolton. It’s a full-day course. Hopefully it’s interesting so I can stay awake the entire time! http://www.sqe.com/StarWest/Tutorials/Default.aspx?Date=10/3/2011

Tuesday:
The quality of the courses available on Tuesday is far below Monday’s, so I went with two half-day classes. In the morning I’m taking Using Visual Models for Test Case Design with Rob Sabourin. In the afternoon I’m taking Testing Web-based Applications: An Open Source Solution with Mukesh Mulchandani. I’m hoping it will broaden my understanding of automation, since the full-day automation tutorial from Monday isn’t available. http://www.sqe.com/StarWest/Tutorials/Default.aspx?Date=10/4/2011

James Whittaker from Google will be there Monday morning, as he mentions on Google’s Testing Blog (linked below). Google has two people presenting on Monday: James Whittaker in the morning, talking about How Google Tests Software, and Ankit Mehta on Testing Rich Internet AJAX-Based Applications.

If I had more time I’d check those two tutorials out but I don’t. Bummer. Hopefully Google’s Testing Blog will recap some of the things they covered.

http://googletesting.blogspot.com/2011/06/google-at-star-west-2011.html

GTAC 2011 and STARWEST 2011

The two big software testing events of the year, GTAC (the Google Test Automation Conference) and STARWEST, are both being held in October of this year. Big month for software testers!

According to the Google Testing Blog GTAC 2011 will be held in Mountain View, CA during the week of October 25th. STARWEST 2011 will be held in Anaheim, CA during the week of October 2nd.

The real question is how do I get my company to pay for both? Hopefully GTAC is reasonably inexpensive.

Summary: How Google Tests Software

As a software tester I try to learn as much as I can about how other companies test software. It just so happens that, through Google’s Testing Blog, James Whittaker has taken steps to outline just how Google does it.

If you’re interested in learning more, I’d recommend reading through the five-part series on the Google Testing Blog directly, but feel free to check out my summary and the things I found interesting:

Google’s organizational structure is such that there is no dedicated testing group inside each product team. Instead the company has more of a project-matrix structure: testers belong to a group called Engineering Productivity, where they report directly to a manager but are loaned out to individual product groups like Android, Gmail, etc. This lets them move among different groups inside the company from project to project and gain broader experience. Engineering Productivity also develops in-house tools and maintains knowledge disciplines, in addition to loaning out engineers.

Google has a saying: “you build it, you break it”. There are essentially three engineering roles: Software Engineers (SWEs), Software Engineers in Test (SETs), and Test Engineers (TEs). SWEs write code and design documentation and are responsible for the quality of anything they touch in isolation. SETs focus on testability: they write code that allows SWEs to test their features, refactor code, and write unit testing frameworks and automation. SETs are responsible for the quality of the features. TEs are the counterpart of SETs and are focused on user-facing testing. They write some code in the form of automation scripts and usage scenarios, and they coordinate and test with other TEs. These descriptions are a bit overgeneralized, but you get the idea.

It’s interesting to note that in all of the companies I’ve worked for, the SWEs and SETs are the same people and the TEs are usually focused on the low-hanging fruit. Google, by contrast, blends development and testing to prevent bugs and lapses in quality instead of trying to catch them later, when they are more expensive and harder to fix.

As a rule Google tries to ship products as soon as they provide some benefit to the user. Instead of bundling new updates and features into large releases, Google tries to release, get feedback, and iterate as fast as possible. This means less time in-house and more time getting responses from customers. Yet to reach production a build has to move through five channels: Canary, Dev, Test, Beta, and Production. The Canary channel holds experiments and code that isn’t ready to be released. The Dev channel is where the day-to-day work gets done, and the Test channel is used for internal dogfooding and potential beta candidates. The Beta and Production channels hold builds that will get external exposure, assuming they have passed the applicable testing and real-world exposure.
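
As a loose illustration of that pipeline (my own sketch, not Google’s actual tooling), you could model the channels as an ordered progression in which a build is only ever promoted one step at a time:

```python
from enum import IntEnum

# The five channels, ordered from least to most external exposure.
class Channel(IntEnum):
    CANARY = 0
    DEV = 1
    TEST = 2
    BETA = 3
    PRODUCTION = 4

def promote(current: Channel) -> Channel:
    """Move a build one channel closer to Production, never skipping."""
    if current is Channel.PRODUCTION:
        raise ValueError("already in Production")
    return Channel(current + 1)

assert promote(Channel.CANARY) is Channel.DEV
assert promote(Channel.BETA) is Channel.PRODUCTION
```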

Finally, Google breaks its testing down into three broad categories that include both manual and automated testing: Small Tests, Medium Tests, and Large Tests. Small tests are written by SWEs and SETs and usually focus on single functions or modules. Medium tests involve two or more features and cover the interactions between those features; SETs are mostly responsible for them. Large tests involve three or more features and represent real user scenarios as faithfully as they can be represented. The mix of manual and automated testing depends on what is being tested. James reiterates that it doesn’t matter how you label the tests, as long as everyone in the company is on the same page.
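
If you wanted to borrow the small/medium/large vocabulary on an ordinary Python project, one way (my own sketch, assuming pytest) is to tag tests with custom markers and filter on them; the test bodies below are placeholders:

```python
import pytest

# Size markers in the spirit of Google's small/medium/large tests.
# (Register them under "markers" in pytest.ini to avoid warnings.)

@pytest.mark.small
def test_parse_single_value():
    # Small: one function or module in isolation.
    assert int("42") == 42

@pytest.mark.medium
def test_cart_updates_total():
    # Medium: two features interacting (hypothetical cart + pricing).
    cart = {"apple": 2}
    total = sum(cart.values()) * 3  # stand-in for a pricing component
    assert total == 6

@pytest.mark.large
def test_checkout_end_to_end():
    # Large: a stand-in for a real user scenario; a real version would
    # drive the application through its UI or public API.
    assert True

# Run only the fast tier with:  pytest -m small
```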

And there you have it, roughly: how Google tests software. You can see they spend a great deal of time preventing bugs from ever coming up, so they can focus their Test Engineers on bigger potential problems and less on the low-hanging fruit, which completely makes sense. Now, how you and I apply these things to our own testing framework is the real challenge!