My job is to help programmers look good; to support them as they create quality; to ease that burden instead of adding to it. In that spirit, I make the following commitments:
I provide a service. You are an important client of that service. I am not satisfied unless you are satisfied.
I am not the gatekeeper of quality. I don’t “own” quality. Shipping a good product is a goal shared by all of us.
I will test your code as soon as I can after you deliver it to me. I know that you need my test results quickly (especially for fixes and new features).
I will strive to test in a way that allows you to be fully productive.
I’ll make every reasonable effort to test, even if I have only partial information about the product.
I will learn the product quickly, and make use of that knowledge to test more cleverly.
I will test important things first, and try to find important problems. (I will also report things you might consider unimportant, just in case they turn out to be important after all, but I will strive to spend less time on those.)
I will strive to test in the interests of everyone whose opinions matter, including you, so that you can make better decisions about the product.
I will write clear, concise, thoughtful, and respectful problem reports. (I may make suggestions about design, but I will never presume to be the designer.)
I will let you know how I’m testing, and invite your comments. And I will confer with you about little things you can do to make the product much easier to test.
I invite your special requests, such as if you need me to spot check something for you, help you document something, or run a special kind of test.
From a software tester’s point of view, a lecture entitled Becoming a Software Testing Expert is enticing; a lecture by James Bach even more so. Bach is widely considered an expert in software testing and is one of its most passionate advocates, which puts him in a good position to help others.
He makes the case that testers need to be professional skeptics. Testers who are constantly skeptical about what they are supposed to test, who ask lots of questions, and who can back up their reasoning for the tests they perform should do very well. A software tester’s best assets are the ability to rapidly learn a new system and to apply that learning to find gaps in it. Some gaps will be based on written requirements and some on unwritten ones.
It’s a rude awakening when you realize you can become an expert at your craft: you just need to know it’s possible, set a goal, and then overcome the hubris that builds up from working on the same application for so long. Once you start down the path toward becoming an expert, it stops being a day job and becomes more of an adventure.
I’m happy to say I’m skeptical of my skepticism towards my current testing approach. =)
I’ve uploaded two keynote presentations from this year’s (2011) StarWest conference.
The first is James Whittaker’s Keynote entitled All That Testing is Getting in the Way of Quality:
The second is the Lightning Round keynote featuring a number of testing luminaries, including Michael Bolton, Lee Copeland, Bob Galen, Dorothy Graham, Hans Buwalda, Dale Emery, Julie Gardiner, Jeff Payne, and Martin Pol:
I got to talk to James Bach last week at StarWest 2011 in Anaheim. I joined his Critical Thinking class for its final 2 hours on Tuesday after walking out on my boring afternoon half-day tutorial on Open Source tools.
I was surprised when I was able to catch up to and chat with him after the class. I asked about the books he’d recommended that were on sale at the convention, at which point he gave me his copy of Captivating Lateral Thinking Puzzles that he’d shown in class. (Thank you, although my girlfriend finds it amusing to open the book and quiz me at random.) In our chat I told him I enjoyed this open lecture:
At some point during our conversation I asked when he would be doing another open lecture and where it would be (hoping it would be somewhere near SoCal). After detailing his itinerary, he came to the realization that he does open lectures everywhere in the world except the US. Sad. (In this instance an open lecture is one where someone hires James to speak and then anyone who’s interested can join by purchasing a ticket.)
In this video James is giving an open lecture at the Estonian IT College. He uses some new and familiar terminology that I’ve listed below. I need to work on becoming a professional skeptic!
A quick summary of the testing terminology used:
Rumble strip heuristic
Error message hangover
Branching and backtracking
Follow up testing
James Bach and Michael Bolton both use critical thinking puzzles in their lectures. The two puzzles in this video are the flow chart and the calculator. I think the calculator problem could be used in an interview to help identify someone’s thinking pattern.
Working for a startup company, you go through a lot of problems, potential solutions, and more problems. I was reminded of my company by the Startup Lessons Learned article entitled Validated learning about customers. Eric Ries, who writes the Startup Lessons Learned blog, describes two scenarios with two fictional companies.
My company is like the first company in his post: the metrics of success change constantly and our product definition fluctuates regularly. Our development team is always busy, but those efforts don’t exactly add value to the product. We are pretty good at selling the one-time product, but we have to put a lot of effort into each sale, so the sales process isn’t scalable. Worse, it’s frustrating that management doesn’t see this.
At the end of the article Eric lists some solutions for companies stuck in this “stuck in the mud” situation, and I think the third solution is something my company should try: build tools that help the sales team reduce the time spent on each sale, and build the parts of our product that make the sales process faster or the investment afterwards smaller. (I added that last bit.) How good is your product if it requires customers to spend large amounts of time, energy, and money to make it usable? Shouldn’t the company make using your product as frictionless and automated as possible so it’s easy for customers?
After reading this article I’m interested in reading his full book: The Lean Startup.
Over the last few months I’ve completed a number of rounds of testing for uTest’s clients, mostly web applications tested from my iPhone. In fact, the majority of the work I’ve done since joining has been functional testing of mobile applications. It’s been fun: mobile testing isn’t my area of expertise, but it’s a nice break from my normal routine and I like learning new things.
uTest’s Business Model:
A few months ago I was talking with my boss about new options for helping me test our software. I work for a small company where I’m the only tester. Often the backlog for getting our releases out is me. My boss was talking about adding an offshore resource and I brought up the idea of uTest and their crowdsource model. He thought it was an interesting idea and so he contacted uTest to get more information.
A few weeks later we had a quote from uTest and a chat with one of the sales reps, which gave me an interesting perspective on their business model. uTest prefers to sell its services in packages, which generally include several rounds of testing (the time between rounds is up to you). The sales engineers try to get an understanding of your testing needs and then give you a flat price per round with a minimum number of rounds, plus a monthly Software as a Service (SaaS) charge for access to uTest’s application – a must-have for the testers to submit bugs, test cases, etc. I think our application was considered pretty big/complex, so for 3 rounds it was just north of $7,000.
That means we’d pay uTest $7k upfront plus the SaaS access charge each month. From there we’d work with a project manager and tell them how many testers we need, what types of backgrounds and tools they need, etc. Then the project manager builds the test process and plan with you. Essentially you’re hoping you get a good project manager; otherwise the money you drop and the test outcomes may not be worth it. The actual payments the testers receive (for test plans and bugs) come from uTest out of that flat fee.
It makes me wonder what the average payout to testers is per round of testing. Probably less than $1k, depending on the size. That means the majority of the money goes to pay your project manager and into uTest’s wallet.
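Out of curiosity, the rough economics can be sketched in a few lines of Python. Only the $7,000 package price for 3 rounds comes from our actual quote; the per-round payout and the monthly SaaS charge are my own guesses, so treat this as back-of-the-envelope only:

```python
# Back-of-the-envelope economics for a uTest package.
# Only package_price/rounds come from our quote; the rest are guesses.
def utest_margin(package_price, rounds, payout_per_round, saas_monthly, months):
    """Estimate how much of a package stays with uTest (and the PM)."""
    tester_payout = rounds * payout_per_round        # paid out to the crowd
    total_paid = package_price + saas_monthly * months
    retained = total_paid - tester_payout            # uTest + project manager
    return total_paid, tester_payout, retained

total, to_testers, retained = utest_margin(
    package_price=7000,     # quoted: just north of $7k for 3 rounds
    rounds=3,
    payout_per_round=1000,  # guess: "probably less than $1k" per round
    saas_monthly=500,       # guess: the SaaS charge wasn't itemized for us
    months=3,
)
print(total, to_testers, retained)  # 8500 3000 5500
```

Even with generous guesses for the tester payouts, well over half the money stays on uTest’s side of the table.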
This is an interesting blog post from Google Engineering about how 50% of their code changes every month and how important their continuous integration system is. It’s worth a read to know a little bit more about How Google Tests Software.
I can’t remember where I originally found this post and the corresponding eBook but the eBook is definitely worth taking a look at. Here is the former uTest blog post, now Applause blog post.
The 5 ways or insights are:
There are two types of code and they require different types of testing
Take your testing down a level from features to capabilities
Take your testing up a level from test cases to techniques
Improving development is your top priority
Testing without innovation is a great way to lose talent
In point 2, James Whittaker also talks about a planning and analysis tool he used at Microsoft called a CFC, or Component-Feature-Capability analysis. This allowed them to take testing down from features to capabilities.
The purpose is to understand the testable capabilities of a feature and to identify important interfaces where features interact with each other and external components. Once these are understood, then testing becomes the task of selecting environment and input variations that cover the primary cases.
While this tool was designed for testing desktop software, I’m inclined to think it would work well for testing web applications too. Essentially, with the CFC you map out the individual components/features of the web application in a branching form that closely resembles a mind map. As a matter of fact, a mind map might be better! =)
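To make the shape of that analysis concrete, here’s a minimal sketch of a CFC-style breakdown in Python. The components, features, and capabilities below are entirely invented for an imaginary web app; the point is just the component → feature → capability branching:

```python
# Hypothetical CFC (Component -> Feature -> Capability) breakdown
# for an imaginary web application; every name here is made up.
cfc = {
    "Accounts": {
        "Login": ["authenticate a valid user",
                  "reject a bad password",
                  "lock out after repeated failures"],
        "Password reset": ["send a reset email",
                           "expire stale reset links"],
    },
    "Checkout": {
        "Cart": ["add an item",
                 "remove an item",
                 "recalculate totals on change"],
    },
}

def capabilities(cfc, component, feature):
    """List the testable capabilities of one feature."""
    return cfc[component][feature]

for cap in capabilities(cfc, "Accounts", "Login"):
    print(cap)
```

Once the capabilities are enumerated like this, test design becomes picking the environments and input variations that cover each one, which is exactly the task the CFC is meant to set up.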
I recently finished reading James Bach’s book Secrets of a Buccaneer-Scholar. I purchased the book mistakenly thinking it was a book on software testing (I didn’t really read the synopsis before buying it) but was pleasantly surprised after having read it.
I’d heard Bach was an expert in software testing, checked out his blog and then found this book online:
At first I wasn’t sure what to make of this book, but it gets better as it goes on. Just as the title suggests, it’s about how the pursuit of self-education can lead to success, based on the author’s own experience doing just that in the field of software testing.
It’s official: I’ve registered for STAR West 2011 (also known as Software Testing Analysis and Review for the west coast) in Anaheim, CA. I’m only going for Monday and Tuesday, the tutorial days, but I’m excited about the ones I’ve chosen:
The quality of the courses available on Tuesday is far below Monday’s, so I went with two half-day classes. In the morning I’m taking Using Visual Models for Test Case Design with Rob Sabourin. In the afternoon I’m taking Testing Web-based Applications: An Open Source Solution with Mukesh Mulchandani. I’m hoping it will broaden my understanding of automation, since the full-day automation tutorial from Monday isn’t available. http://www.sqe.com/StarWest/Tutorials/Default.aspx?Date=10/4/2011
James Whittaker from Google will be there Monday morning as he mentions on Google’s Testing Blog here. Google has two people presenting on Monday: James Whittaker in the morning talking about How Google Tests Software and Ankit Mehta on Testing Rich Internet AJAX-Based Applications.
If I had more time I’d check those two tutorials out but I don’t. Bummer. Hopefully Google’s Testing Blog will recap some of the things they covered.