Microsoft’s Web Application Stress tool is a simple, free load-generation tool that Microsoft no longer supports or provides download links for. It still seems to have a small community of dedicated users, and since Windows 7 ships with a virtualized version of XP, it could well remain usable into the future.
On a few occasions I’ve used it to generate load to test my company’s APIs. Its interface is pretty simple, albeit old. There are quite a few sites dedicated to helping users understand its functionality, which I’ve listed as resources at the bottom of the page.
I did a quick Google search and found a few sites that had the software, which I cite as my original source downloads. I consolidated the tool download with the required DLL (for Windows Vista and higher). The DLL needs to be placed in C:\Windows\System32 before installing the software. However, if you have Windows 7, I’d recommend just downloading Windows Virtual PC with Windows XP Mode and running the tool natively from there.
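To give a sense of what a load-generation tool like WAS does under the hood, here is a minimal sketch in Python: fire concurrent HTTP requests at an endpoint and tally the response codes. This is only an illustration of the concept, not how WAS itself works; the URL, request counts, and function names are all my own placeholders.

```python
# Minimal load-generation sketch: several threads hammer a URL and
# count the HTTP status codes they get back. Point it at a test
# server you own -- never at a production system you don't control.
import threading
import urllib.request
from collections import Counter

def generate_load(url, total_requests=50, threads=5):
    """Hit `url` from several threads and count status codes."""
    results = Counter()
    lock = threading.Lock()

    def worker(count):
        for _ in range(count):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    code = resp.status
            except Exception:
                code = "error"
            with lock:
                results[code] += 1

    per_thread = total_requests // threads
    pool = [threading.Thread(target=worker, args=(per_thread,))
            for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    return results
```

A real tool adds ramp-up schedules, think time, and latency percentiles on top of this, but the core loop is the same.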
Some of the first testing books I read were from James Whittaker’s How to Break Software series. Those books, like this one, are laid out in a practical manner, with each chapter focused on a specific attack or approach, making them easy to read, reference and apply. Perfect for learning. I picked up this book a few years ago when I started questioning the way I was testing. The material was new to me and made me ask: what is exploratory testing, and what does touring have to do with it?
According to Whittaker (pg. 16), exploratory testing (E.T.) is testing where scripts or rigidity have been removed (paraphrasing). Whittaker distinguishes “E.T. in the small”, decisions made where the scope of the testing is small, from “E.T. in the large”, decisions made when the scope of testing is large (small might be a single screen in an application, while large is the whole application). At the end of chapter 3 he mentions that E.T. can be done in a way that allows test planning and execution to be completed simultaneously, which is one of E.T.’s most important aspects and simplest definitions. Touring (as in a tour guide or sight-seeing) becomes a metaphor for, and a way to structure, E.T.
There are eight chapters in the book plus a number of appendices. In the first few chapters Whittaker discusses what he sees as the case for software quality (the context of the book), introduces E.T. and explains how he uses it, in the small and the large. The next four chapters cover tours he and others have come up with. The last chapter is about how Whittaker sees the future of testing or at least how he did at the time of publishing.
The first appendix, A, is one of the most important parts of the book: building a successful career in software testing. Whittaker talks about how he got into testing and gives some advice on “getting over the hump” to be a better tester. Appendix A is short but worth reading. The rest of the appendices are old blog posts from his Microsoft days.
As a beginner I found this book much more valuable than I do now, several years later. I understand E.T. is an approach to testing that can, but doesn’t necessarily, include tours or scripts. It isn’t just manual testing either. For reference, Michael Bolton (the testing expert) has some good posts on what E.T. is not: (notice how the first post is about touring?)
As you might not guess from the title, this book does not do a proper job of explaining E.T. in a way that one can use it, aside from following the tour metaphor. In fact, after reading it again, this book seems to say to the reader: these tours are the best, don’t you agree? It’s important to understand that exploratory testing is about the way you work, and the extent to which test design, test execution, and learning support and reinforce each other.
According to James Bach, the term “exploratory testing” was coined and first published by Cem Kaner and has been worked on by Bach, Whittaker and Kaner (among others) over the last decade. It seems a bit odd that, in a book about E.T., Whittaker never mentions their work and provides no references for the reader to follow up. Apparently Whittaker thinks the easiest way to explain E.T. is through testing tours (hence the book), while Bach has a more direct explanation of what constitutes exploratory testing. I found Bach’s post more informative, more applicable and, frankly, cheaper than Whittaker’s Exploratory Software Testing book.
Exploratory Software Testing (the book) offers a limited metaphor for understanding exploratory testing. It isn’t as practical as Whittaker’s previous books because you can’t apply the teachings very well without fully understanding what E.T. is and how tours fit in. If you only want ideas on how Microsoft’s testers used the touring metaphor to “perform” exploratory testing then you’ll get four chapters of information otherwise Exploratory Software Testing is worth skipping.
I wrote the same review on Amazon under the heading “Limited metaphor for exploratory testing”.
My job is to help programmers look good; to support them as they create quality; to ease that burden instead of adding to it. In that spirit, I make the following commitments:
I provide a service. You are an important client of that service. I am not satisfied unless you are satisfied.
I am not the gatekeeper of quality. I don’t “own” quality. Shipping a good product is a goal shared by all of us.
I will test your code as soon as I can after you deliver it to me. I know that you need my test results quickly (especially for fixes and new features).
I will strive to test in a way that allows you to be fully productive.
I’ll make every reasonable effort to test, even if I have only partial information about the product.
I will learn the product quickly, and make use of that knowledge to test more cleverly.
I will test important things first, and try to find important problems. (I will also report things you might consider unimportant, just in case they turn out to be important after all, but I will strive to spend less time on those.)
I will strive to test in the interests of everyone whose opinions matter, including you, so that you can make better decisions about the product.
I will write clear, concise, thoughtful, and respectful problem reports. (I may make suggestions about design, but I will never presume to be the designer.)
I will let you know how I’m testing, and invite your comments. And I will confer with you about little things you can do to make the product much easier to test.
I invite your special requests, such as if you need me to spot check something for you, help you document something, or run a special kind of test.
From a software tester’s point of view, a lecture entitled Becoming a Software Testing Expert is a bit enticing. A lecture by James Bach is even more so. Bach, widely considered an expert, is a passionate advocate of software testing, and as an expert he’s in a good position to help others.
He makes the case that testers need to be professional skeptics. If testers are constantly skeptical about what they are supposed to test, ask lots of questions and can back up their reasoning for the tests being performed, then they should do very well. A software tester’s best assets are the ability to rapidly learn about new systems and to apply that learning to find gaps in the system. Some gaps will be based on written requirements and some on unwritten requirements.
It’s a rude awakening when you realize you can become an expert at your craft: you just need to know it’s possible, set a goal, and then overcome the hubris that builds up from working on the same application for so long. Once you start down the path toward becoming an expert, it stops being a day job and becomes more of an adventure.
I’m happy to say I’m skeptical of my skepticism towards my current testing approach. =)
I’ve uploaded two keynote presentations from this year’s (2011) StarWest conference.
The first is James Whittaker’s Keynote entitled All That Testing is Getting in the Way of Quality:
The second is the Lightning round Keynote featuring a number of testing luminaries like Michael Bolton, Lee Copeland, Bob Galen, Dorothy Graham, Hans Buwalde, Dale Emery, Julie Gardiner, Jeff Payne and Martin Pol:
I got to talk to James Bach last week at StarWest 2011 in Anaheim. I joined his Critical Thinking class for its final 2 hours on Tuesday after walking out on my boring afternoon half-day tutorial on Open Source tools.
I was surprised when I was able to catch up to and chat with him after the class. I asked about the books he recommended that were on sale at the convention at which point he gave me his copy of Captivating Lateral Thinking Puzzles he’d shown in class. (Thank you, although my girlfriend finds it amusing to open the book and quiz me randomly.) In our chat I told him I enjoyed this Open Lecture:
At some point during our conversation I asked when he would be doing another open lecture and where it would be (hoping it would be somewhere near SoCal). After detailing his itinerary, he came to the realization that he does open lectures everywhere in the world except the US. Sad. (In this instance an open lecture is where someone hires James to speak and then anyone who’s interested can join by purchasing a ticket.)
In this video James is doing an open lecture at an Estonian IT college. He uses some new and familiar terminology that I’ve listed below. I need to work on becoming a professional skeptic!
A quick summary of the testing terminology used:
Rumble strip heuristic
Error message hangover
Branching and backtesting
Follow up testing
James Bach and Michael Bolton both use critical thinking puzzles in their lectures. The two puzzles in this video are the flow chart and the calculator. I think the calculator problem could be used in interviews to help identify someone’s thinking pattern.
Working for a startup company you go through a lot of problems, potential solutions and more problems. I was reminded of my company by the Startup Lessons Learned article entitled Validated learning about customers. Eric Ries, who writes the Startup Lessons Learned blog, describes two scenarios with two fictional companies.
My company is like the first company in his post: the metrics of success change constantly and our product definition fluctuates regularly. Our development team is always busy, but those efforts don’t exactly add value to the product. We are pretty good at selling the one-time product, but we have to put a lot of effort into each sale, so the sales process isn’t scalable. Worse, it’s frustrating that management doesn’t see this.
At the end of the article Eric lists some solutions for companies in this “stuck in the mud” situation, and I think the third solution is something my company should try: build tools to help the sales team reduce the time spent on each sale, and build parts of our product that make the sales process faster or the customer’s investment afterwards smaller (I added that last bit). How good is your product if it requires customers to spend large amounts of time, energy and money in order to make it usable? Shouldn’t the company make the use of your product as frictionless and automated as possible so it’s easy for customers?
After reading this article I’m interested in reading his full book: The Lean Startup.
The last few months I’ve completed a number of rounds of testing for uTest’s clients, mostly dealing with web applications on my iPhone. In fact, the majority of the work I’ve done since joining has been functional testing of mobile applications. It’s been fun because mobile testing isn’t my area of expertise, but it’s a nice break from my normal routine and I like learning new things.
uTest’s Business Model:
A few months ago I was talking with my boss about new options for helping me test our software. I work for a small company where I’m the only tester. Often the backlog for getting our releases out is me. My boss was talking about adding an offshore resource and I brought up the idea of uTest and their crowdsource model. He thought it was an interesting idea and so he contacted uTest to get more information.
A few weeks later we had a quote from uTest and a chat with one of the sales reps, which gave me an interesting perspective into their business model. uTest prefers to sell their services in packages which generally include several rounds of testing (the time between rounds is up to you). The sales engineers try to get an understanding of your testing needs and then give you a flat price per round with a minimum number of rounds, plus a monthly Software as a Service (SaaS) charge for access to uTest’s application – a must-have for the testers to submit bugs, test cases, etc. I think our application was considered pretty big / complex, so for 3 rounds it was just north of $7,000.
That means we’d pay uTest $7k upfront plus the SaaS access charge each month. From there we’d work with a project manager and tell them how many testers we need, what type of backgrounds and tools they need, etc. Then the project manager builds the test process and plan with you. Essentially you are hoping you get a good project manager; otherwise the money you drop and the test outcomes may not be worth it. The actual payments the testers receive (for test plans and bugs) come from uTest out of that flat fee.
Makes me wonder what the average payout to testers is per round of testing. Probably less than $1k, depending on the size. That means the majority of the money goes to pay for your project manager and into uTest’s wallet.
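Running the rough numbers from our quote makes the split concrete. The figures below are my guesses from this post, not uTest’s actual rates:

```python
# Back-of-the-envelope split of a testing package: how much of the
# flat fee reaches testers vs. stays with the vendor as overhead.
# All figures are assumptions based on our one quote.
def estimate_split(package_price, rounds, payout_per_round):
    """Return (total paid to testers, remainder kept by the vendor)."""
    tester_total = payout_per_round * rounds
    overhead = package_price - tester_total
    return tester_total, overhead

testers, overhead = estimate_split(package_price=7000, rounds=3,
                                   payout_per_round=1000)
# Even at a generous $1k/round to testers, $4k of the $7k package
# stays with the vendor for project management and margin.
```

So even under a generous assumption, well over half the package price never reaches the testers.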
This is an interesting blog post from Google Engineering about how 50% of their code changes every month and how important their continuous integration system is. It’s worth a read to know a little bit more about How Google Tests Software.
I can’t remember where I originally found this post and the corresponding eBook but the eBook is definitely worth taking a look at. Here is the former uTest blog post, now Applause blog post.
The 5 ways or insights are:
There are two types of code and they require different types of testing
Take your testing down a level from features to capabilities
Take your testing up a level from test cases to techniques
Improving development is your top priority
Testing without innovation is a great way to lose talent
In point 2, James Whittaker also talks about a planning and analysis tool he used at Microsoft called a CFC, or Component-Feature-Capability analysis. This allowed them to take testing down from features to capabilities.
The purpose is to understand the testable capabilities of a feature and to identify important interfaces where features interact with each other and external components. Once these are understood, then testing becomes the task of selecting environment and input variations that cover the primary cases.
While this tool was designed for testing desktop software, I’m inclined to think it would work well for testing web applications. Essentially, with the CFC you map out the individual components and features of the web application in a branching form that closely resembles a mind map. As a matter of fact, a mind map might be better! =)
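The branching structure is simple enough to sketch in code. Here is a hypothetical CFC breakdown modeled as nested dicts, with a helper that flattens it into concrete test targets; the component, feature and capability names are invented for illustration, not taken from Whittaker’s actual tool:

```python
# Hypothetical Component -> Feature -> Capability map, the same
# branching shape as a mind map. Names below are made up.
cfc = {
    "Accounts": {                      # component
        "Login": [                     # feature
            "accepts valid credentials",          # capabilities
            "locks out after repeated failures",
        ],
        "Password reset": [
            "emails a one-time reset link",
        ],
    },
}

def capabilities(cfc_map):
    """Flatten the map into (component, feature, capability) test targets."""
    return [
        (component, feature, cap)
        for component, features in cfc_map.items()
        for feature, caps in features.items()
        for cap in caps
    ]

targets = capabilities(cfc)
```

Once the capabilities are enumerated like this, testing becomes what the book describes: picking environment and input variations that cover each target.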