This is an interesting blog post from Google Engineering about how 50% of their code changes every month and how important their continuous integration system is. It’s worth a read to know a little bit more about How Google Tests Software.
I can’t remember where I originally found this post and the corresponding eBook, but the eBook is definitely worth taking a look at. Here is the former uTest blog post, now an Applause blog post.
The five insights are:
- There are two types of code and they require different types of testing
- Take your testing down a level from features to capabilities
- Take your testing up a level from test cases to techniques
- Improving development is your top priority
- Testing without innovation is a great way to lose talent
In point 2, James Whittaker also talks about a planning and analysis tool he used at Microsoft called CFC, or Component-Feature-Capability analysis, which allowed them to take testing down from features to capabilities.
The purpose is to understand the testable capabilities of a feature and to identify important interfaces where features interact with each other and external components. Once these are understood, then testing becomes the task of selecting environment and input variations that cover the primary cases.
While this tool was designed for testing desktop software, I’m inclined to think it would work well for testing web applications. Essentially, with the CFC you map out the individual components and features of the web application in a branching form that closely resembles a mind map. As a matter of fact, a mind map might be better! =)
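To make the idea concrete, here is a minimal sketch of a CFC map as a nested structure. The component, feature, and capability names are invented for illustration; the flattening helper just shows how the tree yields a list of testable capabilities to cover with environment and input variations.

```python
# Hypothetical CFC (Component-Feature-Capability) map for a web app.
# All names below are invented for illustration, not from any real product.
cfc = {
    "Checkout": {
        "Cart": [
            "add an item to the cart",
            "remove an item from the cart",
            "update item quantity",
        ],
        "Payment": [
            "charge a credit card",
            "apply a discount code",
        ],
    },
}

def list_capabilities(cfc_map):
    """Flatten the tree into (component, feature, capability) rows."""
    return [
        (component, feature, capability)
        for component, features in cfc_map.items()
        for feature, capabilities in features.items()
        for capability in capabilities
    ]

for row in list_capabilities(cfc):
    print(row)
```

Each row is one testable capability; from there, test selection becomes choosing the environments and inputs that exercise the primary cases for each row.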
It’s official: I’ve registered for STAR West 2011 (also known as Software Testing Analysis and Review for the west coast) in Anaheim, CA. I’m only going for Monday and Tuesday, the tutorial days, but I’m excited for the ones I’ve chosen:
A Rapid Introduction to Rapid Software Testing with Michael Bolton. It’s a full day course. Hopefully it’s interesting so I can stay awake the entire time! http://www.sqe.com/StarWest/Tutorials/Default.aspx?Date=10/3/2011
The quality of the courses available on Tuesday is far below Monday’s, so I went with two half-day classes. In the morning I’m taking Using Visual Models for Test Case Design with Rob Sabourin. In the afternoon I’m taking Testing Web-based Applications: An Open Source Solution with Mukesh Mulchandani. I’m hoping it will broaden my understanding of automation since the full-day automation tutorial from Monday isn’t available. http://www.sqe.com/StarWest/Tutorials/Default.aspx?Date=10/4/2011
James Whittaker from Google will be there Monday morning as he mentions on Google’s Testing Blog here. Google has two people presenting on Monday: James Whittaker in the morning talking about How Google Tests Software and Ankit Mehta on Testing Rich Internet AJAX-Based Applications.
If I had more time I’d check those two tutorials out but I don’t. Bummer. Hopefully Google’s Testing Blog will recap some of the things they covered.
The two big Software Testing events of the year, GTAC or the Google Test Automation Conference, and STARWEST are both being held in October of this year. Big month for software testers!
The real question is: how do I get my company to pay for both? Hopefully GTAC is reasonably inexpensive.
As a software tester I try to learn as much as I can about how other companies test software. It just so happens that through Google’s testing blog James Whittaker has taken steps to outline just how Google does it.
If you’re interested in learning more I’d recommend reading through the five part series by going to the Google Testing Blog directly but feel free to check out my summary and the things I found interesting:
Google’s organizational structure is such that they don’t have a dedicated testing group. Instead the company has more of a project-matrix structure where testers belong to a group called Engineering Productivity: they report directly to a manager there but are then shared out to individual product groups like Android, Gmail, etc. This lets them move between groups based on a particular project and gain broader experience. Engineering Productivity also develops in-house tools and maintains knowledge disciplines in addition to loaning out engineers.
Google has a saying: “you build it, you break it”. They have essentially three engineering roles: Software Engineers (SWEs), Software Engineers in Test (SETs), and Test Engineers (TEs). SWEs write code and design documentation, and are responsible for the quality of anything they touch in isolation. SETs focus on testability: they write code that allows SWEs to test their features, refactor code, and write unit testing frameworks and automation. SETs are responsible for the quality of the features. TEs are the opposite of SETs and are focused on user testing. They write some code in the form of automation scripts and usage scenarios, and coordinate and test with other TEs. These descriptions are a bit over-generalized, but you get the idea.
It’s interesting to note that in all of the companies I’ve worked for, the SWEs and SETs are the same people and TEs are usually focused on the low-hanging fruit. Google, by contrast, blends development and testing to prevent bugs and lapses in quality rather than trying to catch them later, when they are more expensive and harder to fix.
As a rule, Google tries to ship products as soon as they provide some benefit to the user. Instead of shipping new updates and features in large releases, Google tries to release, get feedback, and iterate as fast as possible. This means less time in-house and more time getting responses from their customers. Yet in order to get out to production, a build has to pass through five channels: Canary, Dev, Test, Beta, and Production. The Canary channel holds experiments and code that isn’t ready to be released. The Dev channel is where the day-to-day work gets done; the Test channel is used for internal dogfooding and potential beta candidates. The Beta and Production channels hold builds that will get external exposure, assuming they have passed applicable testing and real-world exposure.
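The channel progression described above can be sketched as a simple pipeline. The promotion logic and the gate it implies are my own illustration of the idea, not Google's actual release tooling.

```python
from enum import IntEnum

# Sketch of the five-channel pipeline; the promotion logic below is an
# invented illustration, not Google's release code.
class Channel(IntEnum):
    CANARY = 0      # experiments, code not ready for release
    DEV = 1         # day-to-day development builds
    TEST = 2        # internal dogfooding, potential beta candidates
    BETA = 3        # first external exposure
    PRODUCTION = 4  # fully released

def promote(channel: Channel) -> Channel:
    """Move a build one channel closer to Production."""
    if channel is Channel.PRODUCTION:
        raise ValueError("already in Production")
    return Channel(channel + 1)

# A build must pass through every channel in order.
build = Channel.CANARY
while build is not Channel.PRODUCTION:
    build = promote(build)
```

The point of the ordering is that each promotion implies a quality gate: a build only moves right once it has survived the exposure the current channel gives it.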
Finally, Google breaks their testing down into three broad categories that include both manual and automated testing: Small Tests, Medium Tests, and Large Tests. Small tests are written by SWEs and SETs and are usually focused on single functions or modules. Medium tests involve two or more features and cover the interactions between those features; SETs are mostly responsible for these. Large tests span three or more features and represent real user scenarios as faithfully as they can be represented. The mix of manual and automated testing depends on what is being tested. James reiterates that it’s not how you label the tests that matters, as long as everyone in the company is on the same page.
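To illustrate the difference between a small and a medium test, here is a toy sketch using Python's unittest. The two "features" (email normalization and user registration) are invented for the example; the point is the scope: the small test exercises one function in isolation, while the medium test covers the interaction between two features.

```python
import unittest

# Toy features invented for illustration, not from any real codebase.
def normalize_email(address: str) -> str:
    """Feature 1: canonicalize an email address."""
    return address.strip().lower()

def register_user(address: str, db: dict) -> bool:
    """Feature 2: registration, which depends on normalization."""
    key = normalize_email(address)
    if key in db:
        return False  # duplicate account
    db[key] = {"email": key}
    return True

class SmallTest(unittest.TestCase):
    # Small test: a single function, no dependencies.
    def test_normalize_lowercases_and_trims(self):
        self.assertEqual(normalize_email("  Bob@Example.COM "), "bob@example.com")

class MediumTest(unittest.TestCase):
    # Medium test: the interaction between registration and normalization.
    def test_duplicate_detected_across_case_variants(self):
        db = {}
        self.assertTrue(register_user("bob@example.com", db))
        self.assertFalse(register_user("BOB@EXAMPLE.COM", db))
```

A large test would go further still, driving a real user scenario (sign up, log in, do something) across three or more features, typically end to end.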
And there you have it, roughly: how Google tests software. You can see they spend a great deal of effort preventing bugs from ever coming up, so they can focus their Test Engineers on bigger potential problems and less on the low-hanging fruit, which completely makes sense. Now, how you and I apply these things to our own testing is the real challenge!
This quote is simple and yet remarkably accurate, especially when referring to computers:
“The nice thing about standards is that you have so many to choose from.” – Andrew S. Tanenbaum
This is why best practices don’t work: you need to choose the standard that fits the context.