Contributing to GitHub is for Everyone at the Online Testing Conference

On June 13th Matt Heusser and I will be giving a talk at the Online Testing Conference on the value testers can get from using and contributing to GitHub. Actually, it will be more than just a talk: we’ll do a full demo of how to get started on GitHub by creating and contributing to a repository of your own. Then we’ll give you some ideas on how to use it going forward. (Hint: there are lots of uses besides code.)

Update 06/14/17:

Matt posted the slides on SlideShare and the video is now live:


For the past few years one of my professional goals has been to attend (at least) one testing conference or workshop per year, mostly because it’s such a great way to recharge and learn what other practitioners are doing. Unfortunately, I couldn’t find a good source of events aside from talking with others, so I pulled together a list on my own. The value of any events list is only as great as the number and quality of events listed, so I’ve open sourced the list and posted it online with the hope that it becomes driven by and representative of the community as a whole (as opposed to what I might like or prefer).

Introducing – a simple list of software testing conferences and workshops published collaboratively with the testing community.

I hope this site is or becomes a useful resource for the software community as a whole. I also hope others will help by contributing because that’s the only way it will become better or at least maintain its usefulness. It doesn’t matter if you are a vendor or just a fan of workshops and conferences, please add to it!

A little more detail

I’m not sure how I found my first conference, StarWest, back in 2011 but I did. Every other conference or workshop I’ve been to since has been a referral or recommendation by others, including:

  • The Rapid Testing Intensive workshop (RTI)
  • The Workshop on Teaching Software Testing (WTST 13)
  • CAST
  • STPcon
  • STARWest (I’ve been to this twice I think)

I say referral / recommendation because the primary way this stuff was and is communicated is through the people who’ve attended. Sure, you can Google “software testing conferences” or come across an advertisement in a testing publication, but those are only useful if you have an idea of what you are looking for. Even then you might only find the most advertised and most popular events, and not necessarily anything near you (especially if you are outside the US) or relevant to your particular tastes. To me, that’s sad.

Conferences and workshops are great tools for conferring, collaborating and learning. At least part of your decision to attend a conference or workshop comes down to knowing what events are available and where they are located. That’s where this list comes in: there has never been a single source of active conferences and workshops; especially with workshops, you had to be in-the-know or just stumble across one as it was occurring.

Now I’m asking for your help so we can publish this list collaboratively within the testing community.


Truth is I’ve been developing this site for a while and promoting it on Twitter as I added events. I suppose you could say this site has been in stealth mode and this is the official reveal.

Catching up on a few Conferences

One of the great things about modern testing conferences is that most either live stream or record their talks so the information can reach a wider audience. While you don’t get the interaction and conferring an in-person visit would give you, recordings make it easier for those who can’t travel to, or are priced out of, many conferences to pick up potentially useful insights.

After I went to CAST in August I wrote in-depth about my experiences. CAST was my one and only conference for the year, so when the Selenium Conference came up I couldn’t swing it. STARWEST is going on now, but again I can’t swing it. The good news is it’s easy to catch up on many (maybe not all) of the important conferences.

First, I’d highly recommend catching up on CAST 2015, if you haven’t already:

I’ve gone through all of the CAST videos because, despite being in attendance, there were so many sessions that I just didn’t get to go to. Even for the ones I attended, watching them again gave me some additional tidbits.

Second, I’d recommend catching up on the Selenium Conference 2015 (or SeConf 2015). I’m working on this one as I write:

Have I missed any other testing or testing-related conferences? I know STARWEST has its virtual conference but it doesn’t put anything online. Enjoy!

An In-depth look at CAST 2015

The Conference of the Association for Software Testing, or CAST 2015, was held in Grand Rapids, MI during the first week of August. Since then I’ve been trying to reflect on what I thought, learned, liked and didn’t like. Here is that reflection in roughly 3,000 words. To speed the reading process I’ve created a table of contents.

tl;dr – overall CAST was great and I walked away with a lot to think about and apply to my job.

Speaking Truth to Power: Delivering Difficult Messages by Fiona Charles


See you at CAST 2015

Thanks to my awesome company’s sponsorship and my wife’s love of travel I will be at CAST this year and attending the tutorial “Delivering Difficult Messages” by Fiona Charles. This is both my first time attending CAST and traveling to Grand Rapids, MI so I’m expecting to have a little fun and to learn a lot. As for the conference experience, I’m not sure what to expect.

Over the last four or five years I’ve been to a few different conferences and training events:

WTST 13 was my first-ever peer workshop and the last trip I made. In contrast to large conferences like STAR and STPCon, where several hundred or even a thousand people crisscross each other on their way to any number of tutorials or talks, the peer workshop format was limited to 20-25 participants in a single room. A few presenters spoke along a similar theme, but it was the participants who drove the discussions until everyone was satisfied. It was certainly a unique experience.

Comparing workshops to conferences may not be fair due to their different approaches but my goal for attending them is the same: to learn something new and useful (apply to my job or company) and interact with my peers.

My understanding of CAST’s format, even before going, is that it’s a small-ish conference attended by a few hundred conference-goers, most of whom are AST members. Although a few hundred participants is still a good-sized event, I’m hoping to be able to interact with a few (or a lot of) people. Of the few conferences I’ve attended, the biggest impact has always come from the interactions: meeting others, sharing problems, being challenged in my thinking, trying to explain something, getting reference recommendations, etc.

Aside from the one tutorial I’m signed up for and a welcome get-together each morning, I don’t know what else is planned. There’s no schedule available yet, and until there is I won’t be sure what to expect.

Enough about my expectations; who else is attending and what are you looking forward to?

What I’ve been up to lately

Things have been busy in the last month or so and I felt like sharing what I’ve been up to lately. Most of it revolves around software testing:

April saw the start of Dan Ariely’s A Beginner’s Guide to Irrational Behavior class on Coursera. I knew I had the BBST course coming up, so I didn’t commit much time to the class other than watching the video lectures and doing the video quizzes. There are many aspects of irrational behavior that affect what we do in software development and testing – I’d like to write a more in-depth article about that in the future.

On the 14th of April I started the BBST Test Design course and completed it on May 8th. For those who have never taken a BBST class: they are incredibly intense, month-long courses. Each calendar week is broken into two class weeks – one of 4 days and a shorter one of 3 days – and each class week requires about 10-15 hours of work for the readings, labs and exam. The class is done but I still don’t know if I’ve passed; regardless, I learned a lot.

On April 19th I joined the NRG Global Online test competition. My last post was a reflection on how well I thought I did and despite my low perception, my team ended up winning part of the competition!

I went to STPcon 2013 at the end of April in San Diego where I met up with a few Miagi-Do’ers, met some other testers I’d heard from in the twitter-verse or blog-o-sphere and learned a few things. I’m planning to write an experience report and post it either here or on the newly formed Miagi-Do blog. I think it might apply a little more here but I don’t know how it will turn out because I haven’t written it.

During the Test Design course I picked up on Test Design being the last of the 3 current BBST courses, with 3 more – domain testing, spec-based testing and scenario-based testing – listed in Cem Kaner’s diagram. I asked Cem about the domain testing course over Twitter and he kindly sent me an email with a draft domain testing workbook, which I plan to review – right after I email him back and tell him when.

May 8th through 10th I participated in the Rapid Testing Intensive Online #2 as a peer reviewer. It was fun to sit on the other side and provide feedback to the students on their work, although I would have been more effective had I been able to do the assignments as the students did – I just couldn’t take the time off work. Nevertheless, I found participating as a peer reviewer had its own unique challenges as I interacted with other testers and tried to answer their questions. In the RTIO there’s a ton of material and references coming at the students, so it helps to interact and help others.

May 16th I signed up for the BBST Bug Advocacy class that takes place in June. One of my year-end goals is to complete all 3 BBST courses and then pursue becoming a BBST instructor so I can help others. In fact, as I was writing this I signed up for the BBST Instructors course in October!

Lastly I’m looking for a cheap / free place to host a public Rapid Software Testing course with Paul Holland in the Los Angeles area. Anyone know of a place that can fit 20 people comfortably?

Wow I’ve got a lot to do…

Rapid Testing Intensive 2012: Day 5

The final day of the Rapid Testing Intensive #1:

RTI Group
The Group photo – taken on the 4th day (I’m in the 2nd row behind the #1)

9:00 AM – We all picked up our Certificates of Satisficity – saying we completed Rapid Testing Intensive #1

9:05 AM – Jon, as PM, is starting us off with the RTI Project Status, with background about getting started at eBay: he was forced to do metrics he didn’t like, got bogged down in a ton of meetings, and got the opportunity to train new hires, so he created the slide deck he is now showing. He’s going over highlights of the week with some screen images – the first bug filed was eBay Motors experiencing technical difficulties.

9:12 AM – Mark (part of team TRON) continued to get the “experiencing technical difficulties” problem up until yesterday – it was tied to his account. The Wheel Center had about 39% of the bugs, the Light Center 35% and the Tire Center 43%. Jon claims the My Vehicles section only had 1 issue, but that’s unlikely – which is a reason why metrics need a context and a story before they make sense.

9:15 AM – We had opening activities (preparing the coverage) like creating teams, setting up JIRA and Confluence, usability surveys, a test coverage outline and a risk list. Jon has a list of more usability questions and it sounds like we can do more testing later today. Then we had the coverage (test!) activities like photo upload, international compatibility, performance testing, severity 1 bug drill-down and combinatorial testing. Finally we had closing activities (test your testing!) where we made a punch list, followed up on bugs, etc.

9:19 AM – Dwayne, Mark and I are the top 3 bug reporters for the exercise and we are all on the same team. This metric doesn’t really matter, but it made Jon question why the top 3 would all be from the same group; Dwayne said it probably had something to do with internal competition. Now we are going to go through our bugs, read any comments and label them with categories. We can then nominate anything we’d like reviewed, both good and bad.

9:55 AM – Done with bug triage / updating our bug categories. For any areas that we think might need more coverage we’ve got 30 minutes for a final session.

10:30 AM – Break time. During our break I was part of a conversation with Andrew, Thomas and Dwayne where James reviewed some of our session notes. It was an interesting debrief because he pointed out information that was unclear: I had put in a line saying “that was weird” but didn’t explain what the weird thing was. Considering he was the audience for the report, the report was confusing to him. Good information.

10:48 AM – We are back and Jon is reviewing one of the bugs nominated in the chat room. Jon is going through and cleaning up the bug, and James recommends trying to anticipate what the developer is going to ask – in the case of eBay, include URLs and links to the particular auction items.

11:00 AM – There was a lot of information in a TCO James was working through; it was a mix of parts of the product that might be tested, requirements that might be tested against, and test ideas. It’s a good idea to keep these separate because it frees you to do more: the TCO isn’t the place to combine or brainstorm ideas – entries should be categories. If you have questions in the TCO you should be trying to answer them, and if you can’t, remove them.

11:10 AM – The Open Questions and Risks sections of this person’s document were good according to James; it means they had a lot of questions and were probably paying attention – as long as they didn’t copy it from someone or somewhere else. James put his feedback in as a summary on the JIRA page.

11:15 AM – James likes to see expected and actual results in a bug report because it helps identify what the person thinks the issue is. James is comparing a bug of mine to a bug filed by Paul Holland. You don’t always need to stick to a template, especially with steps to reproduce – if the steps are obvious.

11:20 AM – James is talking about the debate he and Andrew had, which I mentioned above at 10:30 AM. James says when Andrew gave his oral testing story yesterday he was very effective, but his written session notes didn’t tell the same story. The interesting part of the testing story (according to James) – when Andrew, Mark and Thomas followed up on an interesting artifact that turned out to be a privacy issue – was not recorded in the notes, which to James suggested the guys didn’t feel it was an interesting story. The debrief of the testing, talking to each other, was very important for the knowledge transfer.

11:35 AM – James and Jon are reviewing another bug that was nominated. So far most of the bugs have been written well, so they are trying to find something written badly. I think they finally found a bad TCO: it looks like the person just copied the eBay Motors homepage – they didn’t filter the information and relied only on what was visible.

11:50 AM – James and Jon are done evaluating the information for today, but they will have to continue as James prepares the report, because the bug list has to be fully scrubbed – problems found for each of eBay’s areas of concern. Don’t be afraid to try things and fail, because we get better and better; the learning happens all along.

11:53 AM – That’s a wrap.

There are no photos from Day 5. Check out the other days:

Rapid Testing Intensive 2012: Day 4

9:02 AM – James starts us off on Day 4. We are going to look at the status of the test project in terms of what we need to accomplish and look for the holes – a typical rapid testing management maneuver. James is showing a graph and reiterates he doesn’t believe in fake metrics. The pink bands represent off hours and the clear bands on hours. At the beginning there is a very big jump in the number of bugs, then it flattens out.

9:08 AM – Turns out Paul Holland is going slow with checking the bugs – he claims it’s because there are only 3 people checking the bugs while 100 are reporting them. James wants to go through the bugs and check the risk areas to get general impressions, so one of the activities together today or tomorrow might be to place risk measures on each bug. The graph may make it onto the front of the bug report. Dwayne says he isn’t sure of the value of the graph; James says he also isn’t sure, but he doesn’t need to know the value because he thinks it will provoke the interest of the reader – in this case eBay.

9:12 AM – In rapid testing we don’t put up graphs that merely give the impression we want to give, which is why James will filter out all duplicates and clean out the rest of the noise that could mislead readers. The graph could give a general impression of the industriousness of the group over the four days we were here; keep that skepticism in mind before showing metrics like this. You should always have someone doing bug triage, otherwise you get a lot of noise in your reports and nothing gets corrected – no pressure. If you don’t have a big team and can’t dedicate someone, you can do it one bug at a time at the time of reporting.

9:22 AM – If you don’t do bug triage you get a lot of complaints from developers and managers, even though they’ve never looked at the bugs. It takes time but it’s worth it. James says at Borland they could do 20 bugs an hour, and they determined that of 800 bugs about 400 were legit. You’ve got to maintain the quality of the list; after the first triage you get a much better feedback loop from that information. After the scrubbing we will want to see eBay’s final decisions about the bugs: how do they rate them, what do they think of them, how many do they end up fixing? That’s the big thing.

9:26 AM – To clarify test strategy, James starts making a test report. He pulls up the Test Report for eBay Motors that he and Jon are building for this Intensive. It’s going to be a professional and comprehensive test report because James is aware of the multiple constituencies for it: eBay Motors has a number of groups who may look at the report. Apparently Jon had to pay for his trip here, and maybe other groups in eBay will want to use this event next year and perhaps pay for Jon and his family’s trip. Jon wants to get away from test cases and automation as a first path at eBay. One constituency is eBay Motors, another is eBay’s other groups, another is us – because we will get a professional artifact to go on our CVs. James is going to have to edit extensively because there are so many people reporting artifacts and so many overlaps, but he’s going to get everyone who contributed into the report.

9:35 AM – Different sections of the report serve different constituencies, which makes it comprehensive – some parts will be comprehensive and others will not. If James forces himself to think “what do I need for a final report, and what holes do I have?”, that focuses him: it points out what is not getting done. It’s called the forward-backward method, from chapter 2 of the book How to Read and Do Proofs.

9:40 AM – Remember the three levels from yesterday? The same thing goes into the test report. It’s a challenge to identify all of the testing that has been done, especially from James’s point of view, because there are so many groups doing things outside of James and Jon’s view.

9:45 AM – Jon is showing his screen with a to-do list – he calls it a punch list; apparently it’s a home-building term. James is reading from the risk-area section of his report. Karen says she uses her low-tech testing dashboard and can use that information to contribute to the lean areas of the report.

9:52 AM – James took a poetry class to meet women when he was 24, and it turns out only middle-aged women take that class. (The entire class started laughing.) The upside was he learned poetry, and he learned that consensus can happen, which brings people together. The risk areas of the test report are phrased as questions for this specific list. James asks the class why he changed from a statement to a question: he is trying to name the problem he is interested in without asserting that it’s actually a problem. To make it less confusing, James turned the risk areas into questions.

10:15 AM – Looking at and talking about the test coverage outline of things we accomplished during our sessions, which James put together to include in the test report. James films all the testing he does professionally; his note taking creates time stamps so it’s easy to refer back. He also takes session notes, which are crude but can help you locate the relevant area when referring back to your videos.

10:24 AM – Both part 1 and part 2 of our reports are available on the internal site, which I’ll try to download and post online later. This is one of the reasons James went offline yesterday – so we could get feedback on our reports. You can use screen capture tools that take regular, automatic screenshots, or a video capture tool to watch yourself test.

10:30 AM – James is showing us a video of how he records his testing. He’s got a small tripod for his camera, the camera is placed under his left shoulder, the screen is zoomed in, and you can see he is using a log to record the inputs. With a detailed recording you can have confidence in what you tested. Scripts aren’t the answer because no one really follows scripts, which invalidates the script – they didn’t set up and follow an oracle.

10:40 AM – Rapid testing is based on skill, testing credibility and trust; without those, all these things are empty documents. You can’t stand behind a report you created using other people’s work unless you’ve done the testing work. This is why Jon and James have to examine the work we’ve done so far before they can include it in a testing report – most of us don’t have the reputation with James and Jon where they can accept the work without question.

10:48 AM – Break time. Jon and James are going to try to make their punch list a bit bigger.

11:06 AM – We are back and apparently are going to do some triage with Jon leading. I found the camera that James uses to record his meetings: a Samsung HMX-Q10 camcorder. We are looking at one of the bugs from JIRA and trying to figure out how many of the steps are relevant. Jon is editing the bug; it looks like this particular one is not actually a bug because it’s mis-categorized, but Jon is taking notes in a separate document.

11:30 AM – Still discussing this one particular bug.

11:40 AM – How can you make bug triage meetings go faster? We are still having a conversation like this one, but as you develop as a team and the project proceeds, conversations like this go away because people understand the process. Unfortunately, when you add or change team members you have to bring them up to speed. The culture can perpetuate itself as long as the project and people stay together. Process does not improve when someone writes a document; it improves when everyone adjusts what they say, or are about to say, based on something they learn.

11:47 AM – We have moved on to another bug – except now James is questioning why we moved on. James and Jon have reproduced the bug, and as Andrew points out, we may have a data consistency error in the criteria for vehicle compatibility in eBay Motors. James mentions black-flagging: a situation where reporting a bug is not enough, because if you just report it the developer will fix it and, as James puts it, we want its whole family hunted down. According to Jon, “black flag” is a racing term that can mean getting all the cars off the track because one car is causing damage that can affect the others. This is the type of bug you want to have a meeting about to understand its consequences.

12:02 PM – With eBay we should pay attention to the URL – checking, for searches, whether the URLs are similar or different, because the system could be passing different variables despite the same interface selections. James says he uses Burp and other proxies to record this type of information.

12:05 PM – Andrew Prentice (part of team TRON) recommended we talk through the bug list and the real value “as we come together as a family and have family time” and agree on things. We are taking a group photo and then lunch time now!

1:15 PM – We are postponing Matt Johnston of uTest, and James is talking about deep testing in rapid software testing. Apparently Michael Bolton’s “daughter” found a bug in this game, and James is going to talk about state-based testing in it. We get to think about what state-based testing might be – we don’t have to know what it is exactly.

1:23 PM – What is x-based testing? Put anything in for x – for example, state-based testing: any testing organized around an x model. You change the state of something when you test it, but that’s merely state-related testing; state-based testing means focusing on the states on purpose. We need to know what the states are. There will always be questions about what the states are, and you need to make a practical decision about them. State-based testing is deep coverage testing.
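As a rough sketch of what that can look like in practice (the game, its states and its events below are all hypothetical, not anything from the session): write the state model down explicitly, then drive every modeled transition and check the product lands where the model says.

```python
# Hypothetical state model for a small game: (state, event) -> next state.
# In real state-based testing, deciding what the states are is the hard part.
MODEL = {
    ("menu", "start"): "playing",
    ("playing", "pause"): "paused",
    ("paused", "resume"): "playing",
}

class Game:
    """Toy product under test; MODEL above serves as its oracle."""
    def __init__(self, state="menu"):
        self.state = state

    def step(self, event):
        transitions = {
            ("menu", "start"): "playing",
            ("playing", "pause"): "paused",
            ("paused", "resume"): "playing",
        }
        # Unknown events leave the state unchanged.
        self.state = transitions.get((self.state, event), self.state)

def check_all_transitions():
    """Drive every modeled transition and compare against the model."""
    results = []
    for (state, event), expected in MODEL.items():
        game = Game(state)   # put the product into the starting state
        game.step(event)     # apply the event
        results.append(game.state == expected)
    return results

results = check_all_transitions()
assert all(results), "product disagrees with the state model"
```

The point of the sketch is the separation: the model is the tester’s deliberate decision about what the states are, and the product is checked against it on purpose rather than incidentally.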

1:32 PM – (Combination testing slides.) What if you have a lot of variables, they interact, and you want to test them systematically? That’s combinatorial testing. The first step is to identify the variables that might interact in a way you need to worry about – remember, all testing is based on models. Actually the first step, or step 0, is to learn enough about the product to identify interacting variables: survey the product, interview people, do exploratory testing, etc.

1:38 PM – James says testers need to understand Cartesian products. Testing something that has no risk is called inexpensive testing, or free time, and you do it because your idea of a model or risk might be wrong. We talked about Ashby’s Law of Requisite Variety and galumphing, which fit into combinatorial testing by helping us pay attention to strange and subtle outputs. Combinatorial testing goes hand in hand with tools. In combinatorial testing you use test cases rather than test activities because the cases are mainly the same but slightly varied – this is one of the rare times you can count metrics, because in combinatorial testing the counts are comparable.
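To make the Cartesian-product point concrete (the variables and values here are invented for illustration): even three variables with three values each explode into 27 combinations, which is why tools matter.

```python
from itertools import product

# Hypothetical interacting variables for a search page.
browsers = ["Chrome", "Firefox", "IE"]
locales = ["en-US", "de-DE", "ja-JP"]
sorts = ["price", "distance", "ending-soonest"]

# The Cartesian product enumerates every combination of every value.
all_cases = list(product(browsers, locales, sorts))
print(len(all_cases))  # 3 * 3 * 3 = 27 combinations
```

Add a fourth variable with three values and the count triples again – the growth is multiplicative, which is exactly the problem the techniques below try to tame.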

1:52 PM – A derangement in combinatorics is a dis-ordering of a set so that no element remains in its natural position – I found it on WolframAlpha. James is talking about Gray code: arranging combinations so that only one thing changes between each test case. One of the participants, Leslie, pointed out that we do this in the dice game. It’s a focusing concept to reduce chaos.
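The Gray code idea can be sketched like this (the boolean settings are hypothetical): a reflected binary Gray code orders all combinations of flags so that each test case changes exactly one setting from the previous one.

```python
def gray_order(n_bits):
    """Reflected binary Gray code: consecutive values differ in one bit."""
    return [i ^ (i >> 1) for i in range(2 ** n_bits)]

flags = ["autosave", "spellcheck", "dark-mode"]  # hypothetical settings
cases = [
    [bool(code >> b & 1) for b in range(len(flags))]
    for code in gray_order(len(flags))
]

# The Gray property: adjacent test cases differ in exactly one flag,
# which is the "reduce chaos" focusing effect James describes.
for prev, cur in zip(cases, cases[1:]):
    assert sum(p != c for p, c in zip(prev, cur)) == 1
```

Changing one thing at a time between cases makes it much easier to attribute any new failure to the setting you just flipped.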

2:10 PM – A de Bruijn sequence packs combinations into a sequence; James is again showing a slide. You don’t use de Bruijn sequences or Gray code very often, but they are tools for combinatorial testing. James is now talking about pairwise testing: one slide shows 27 combinations reduced to 9 test cases. Another slide shows a Microsoft Word options panel with 12,288 possible tests, reduced to 10 with an ALLPAIRS tool – but those 10 tests may not include some important things like the defaults, all-on / all-off (the Christmas tree heuristic), popular settings, etc.
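To show how a 27-combination space collapses under pairwise coverage, here is a naive greedy sketch (the variables and values are invented, and a real tool like ALLPAIRS does this far better): keep picking the candidate case that covers the most not-yet-covered value pairs until every pair appears somewhere.

```python
from itertools import combinations, product

# Three hypothetical variables with three values each: 27 exhaustive
# combinations, but far fewer cases cover every *pair* of values.
params = {
    "browser": ["Chrome", "Firefox", "IE"],
    "locale": ["en-US", "de-DE", "ja-JP"],
    "sort": ["price", "distance", "ending"],
}

def pairs_of(case, names):
    """All (variable, value) pairs a single test case covers."""
    assigned = dict(zip(names, case))
    return {((a, assigned[a]), (b, assigned[b]))
            for a, b in combinations(names, 2)}

def pairwise(params):
    """Greedy: repeatedly pick the case covering the most uncovered pairs."""
    names = list(params)
    candidates = list(product(*params.values()))
    uncovered = set().union(*(pairs_of(c, names) for c in candidates))
    suite = []
    while uncovered:
        best = max(candidates,
                   key=lambda c: len(pairs_of(c, names) & uncovered))
        uncovered -= pairs_of(best, names)
        suite.append(best)
    return suite

suite = pairwise(params)
print(len(suite), "cases instead of 27")
```

As the slide warns, a suite like this guarantees only pair coverage – it may still miss the defaults, all-on / all-off, or the popular settings, so those get added by hand.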

2:20 PM – We have Matt Johnston from uTest on the phone, talking about the differences between beta testing and crowdsourcing. How do you know when people who say they’ve tested something have actually tested it? uTest will suspend users who falsify work – it can affect their rating, and the uTest system is built to monitor those kinds of patterns. People are paid for approved bugs, reports, etc. Customers pay per cycle (I’ve blogged about this earlier) or monthly. James said he was negative and doubtful about uTest and is now coming around – he likes that people with no experience can come and get experience. Matt mentions, and I think this is the best reason for uTest, that you get variety in the things you test.

2:40 PM – If someone wants to get into testing, they can sign up for uTest, fill out a tester profile, go to the forums, and get invited to the sandbox (which is unpaid) to try it out. James said he might sign up, except he’s worried his reputation would take a hit. He also says he can demonstrate to Europeans who want to get into testing that they don’t have to become certified: they can go home, join uTest and start testing.

2:45 PM – Break time.

3:00 PM – Finally a session! We are going to do searches on My Vehicles – some that return few results and a few that return a lot. This is an informal combinatorial testing session on the factors of the left-hand filter, and we are going to file our session report in JIRA.

4:18 PM – Session time is up even though I’m not quite done working on a problem. James is thanking everyone for coming because a few people are leaving before the final session tomorrow. It took a lot of James’s friends to bail him out and help get this Intensive done – Scott, you aren’t getting paid. It takes a lot of people to keep up with the onliners: we had James and Jon Bach, Karen Johnson, Paul Holland and Rob Sabourin, with Scott Barber and Michael Bolton, etc. online. Jon is very happy to have everyone. James wants feedback (for his wife to review) on whether we felt this was a good thing. An email will be sent out asking for that feedback.

4:25 PM – Debrief on our sessions by group. Jon and James came up with a combinatorial testing charter for the session we completed; it seemed like a good idea to them, and then through our onsite people it turned into something else. We did some combinatorial testing, but Andrew, Mark and Thomas switched into a privacy-testing mode off an inspiration from Andrew, and Dwayne and I focused on filters once we started seeing problems.

5:11 PM – Done!

Photos from the event have been posted on Flickr.
Check out the other days:

Rapid Testing Intensive 2012: Day 3

9:02 AM – Jon kicks off the Intensive with his project meeting. Talking about the communication between us and his eBay team.

9:07 AM – James is talking about the upcoming assignments, which will be split between onsite and online. Each table will get a 30-minute test session. Later today we will be working / testing with tools, since both James and Rob have some experience with this.

9:10 AM – James is talking about sympathetic testing.

9:15 AM – James is answering a question about knowledge transfer for regression tests when someone leaves a company. James uses the analogy of someone driving a car: if someone comes in and wants to drive his car, he doesn’t write down his driving procedures – he assumes the driver has driving skills. A tester should be good at rapid learning and skilled in testing, and since most testers are untrained, much of the documentation is of poor quality anyway. In Rapid Testing you create concise documentation and take test notes; you can take video, but skilled testers should be able to pick things up fast.

9:22 AM – Pay attention to the test coverage outline – maintain it. Maintain the risk coverage outline.

9:24 AM – Jon is talking about a dice game we played last night and how important reputation is. Everyone here is building a reputation. James says things get easier with a better reputation – less annoying things are asked of you, less questioning of your work.

9:28 AM – Feature capability – is it capable of doing what it’s supposed to do? Correct output? Feature consistency – do similar features do things similarly? Example: if you save a Word document as .rtf and then save a WordPad document as .rtf, will they both save it the same way?

9:30 AM – Are we going to do any bug reviews here? James wishes he had thought of that idea; he just has to figure out where we can fit it in.

9:36 AM – Rapid Software Testing and Agile fit together perfectly because RST all happens in your head. Agile has a strong emphasis on automated testing, and testers need to be wary of automated testing because it isn’t actually testing – it’s fact checking. Automated tests / checks are happy-path tests; you aren’t finding the bugs that could be found through more aggressive testing. Rapid testers love using tools if they make you a better tester.

9:45 AM – James talks about Cucumber being a waste of time. James uses his programming skills to improve his testing and build tools, but he sees a lot of Agile people who are tool happy. Andrew Prentice points out that those using Cucumber may do so not out of an obsession with tools but as a way to leverage or improve the quality of the communication they receive.

9:50 AM – James is sharing his experience on using video to communicate findings instead of documentation.

10:00 AM – James is recapping what we’ve done over the last 2 days: did some testing, worked on risk lists, did some specification reviews (some people ignored the specifications), ran demonstration test sessions. Going forward we will look at tools and maybe do some bug triage. We’ve blown eBay away with the number of bugs we produced.

10:02 AM – Online people will work with Scott Barber; they are offline until 1 PM. James is describing how much they’ve learned and how they are adjusting to this format of online and offline. Today we are doing a local activity: for the next 30 minutes we are doing a test session on the add-photos part of My Vehicles.

10:40 AM – Test session is over, filing our bugs now.

10:45 AM – Break time.

11:03 AM – Each table is going to prepare a professional test report, in written and oral form, on what we did during the most recent test session. The written part will consist of 2 flip-chart pages; we have 20 minutes. It must cover the entire work of the table, and then we will give the report to the entire room. James is basically interested in what we did – the story, specifically the conditions that we tested.

11:08 AM – Working with my team – TRON – on building our professional test report.

11:26 AM – We are still working on reports. James says he wants decent reports, and without practice you can’t produce one in only 20 minutes. He is going to add 10 more minutes so we can get a good report down.

11:36 AM – Time to review our reports, so we are putting them on the wall. James is videotaping this, so it might be available online at some point. He is introducing the video and we are up first.

11:56 AM – Took 20 minutes to give our report – the first stand-up test report I’ve ever given. (Apparently I came across as a little nervous – mostly about the material.) Some of the terminology I used was rather vague, which caused James to ask numerous questions to pin down what I actually meant. Anything we say that carries an assumed meaning gets immediately questioned by James, because the customer may not understand what it means.

12:00 PM – Another group (S-Table) is up. Simon is presenting for the S-Table group and he’s doing a pretty good job by speaking very clearly. (Everyone who goes after the first team should be better! hahaha) James has some questions and feedback for Thomas, but his board / post-it sheets are much clearer than ours; you’ll be able to see him and his group in the video, though James cuts them off because it’s time for lunch.

1:00 PM – James is giving a presentation on How to Give a Professional Test Report. (He’ll talk about tools in a bit.) Test reporting is the heart of testing; it’s helpful for managing yourself. Reports aren’t just facts – they’re a choice of which facts matter, and you are shading reality. No one who gives a useful report is simply revealing facts; they are always pruning, picking and choosing. Suppress silly metrics – it’s like counting how many unicorns will fit into your cubicle. We are back with the online people.

1:08 PM – In the absence of context, test case counts have no meaning – so don’t include them. Pass rate is a stupid metric without context. James is showing slides from his How to Give a Professional Test Report, but I’m not sure where the slides are coming from.

1:17 PM – You can’t give a form with test case counts, pass/fail rates, etc. to managers because they don’t know what any of that stuff means. You need to produce a report that tells them something. You can list your bugs, because the titles give them context and you can talk about them. Pie charts and graphs don’t tell anyone anything.

1:20 PM – Talking to people is an alternative to counting test cases! You can summarize the key ideas people need to know; an example of that is a Low-Tech Testing Dashboard.

1:27 PM – A professional test report is one that fits the context for the customer or the person who matters. It could be nice to list the things that are important to do but haven’t gotten to yet. James is explaining what his dashboard means. He recommends not using happy faces when writing on a white board.

1:35 PM – Your dashboard should have structured subjectivity, a human judgement. No raw data on the dashboard because management can’t process raw data when outside the context of the team. When testers do that we are abdicating our responsibility. Management needs to glance at the test report and make decisions very quickly about what needs to be done.
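The dashboard idea above can be made concrete with a few lines of code – a minimal text rendering in Python, where the area names, effort/coverage scales and quality assessments are all invented for illustration (a real low-tech dashboard uses whatever categories and judgements fit the team):

```python
# A minimal text rendering of a low-tech testing dashboard.
# Area names, ratings and comments below are illustrative only.
areas = [
    # (product area, current effort, coverage 0-3, subjective quality, comment)
    ("Search filters", "high", 2, "troubled", "filter bugs under investigation"),
    ("Photo upload",   "low",  1, "unknown",  "only sympathetic testing so far"),
    ("Checkout",       "none", 0, "unknown",  "blocked: no test account"),
]

def render_dashboard(areas):
    """Return the dashboard as aligned plain text – structured subjectivity,
    no raw data, glanceable by management."""
    header = f"{'AREA':<16}{'EFFORT':<8}{'COV':<5}{'QUALITY':<10}COMMENT"
    rows = [f"{a:<16}{e:<8}{c:<5}{q:<10}{note}" for a, e, c, q, note in areas]
    return "\n".join([header] + rows)

print(render_dashboard(areas))
```

The point of the sketch is the shape, not the tooling: a whiteboard does the same job.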

1:36 PM – A test report should have 3 levels: a story about the status of the Product, a story about how you Tested it, and a story about the Value of the testing. Apparently our test reports contained all 3 levels. Yay!

1:40 PM – Use safety language – phrasing that qualifies or otherwise softens statements of fact so as to avoid false confidence: I think, so far, apparently, I assumed, it appears, etc. When you are trying to communicate something dramatically (and not factually), be careful with your use of safety language. When you are pressed for time, need to give your message a rhetorical punch, or are speaking in a ritual setting (like at the altar), you probably want to avoid safety language.

1:48 PM – James is showing examples of reports. He shows the most expensive report he has ever constructed, at roughly $250,000 of labor charges, for a patent infringement lawsuit. It shows each of the claims in the patent and, through demos and exhibits, the proof that something violates the patent. This was the lawyer’s test case outline. The second report is an exploratory report that was filmed and contains details about what was done, what was asked, and what the people doing the test were thinking. The third report was one from back in 1994, based on an idea James had about “rapid testing”.

2:00 PM – Test reporting is fundamental. Practice this even though management is not going to force you to do this. Jon talks about how he does reports at eBay and he does several variations.

2:05 PM – We are going to talk about tools with Robert Sabourin. The typical automation formula: purchase an expensive tool, hand it to a tester (forced on them); testers are pushed into test-case-ism, which changes how the tester thinks, or forces the company to hire automation testers. This can work if your product is simple and doesn’t change much. Otherwise a tremendous amount of automation effort goes in, it keeps breaking, and you have to spend an increasingly larger amount of time fixing things.

2:15 PM – Snap out of that routine and use free tools – what James calls guerrilla test automation: quick and dirty tools that can help. If they don’t help, abandon them. Not all testers should be programmers; that’s a bad idea. You need one tester who is a good programmer so they can build tools, but you want a wide variety of testers who are interested in a number of areas.

2:30 PM – James is answering questions. Ajay asked one about safety language. We are breaking for root-beer floats. Cool.

2:51 PM – We are back to doing test reports with Thomas and his team “7 Up” I think they are called.

3:03 PM – Susan is up for team Coho talking about their test report.

3:09 PM – James is reviewing, talking about watching our team (TRON). We called out facts, we would get quiet, then we started discussing problems and struggles, and would then focus on specific deep issues. We have to have a foot in both worlds:

  • the mission, risks, or TCO, the overview.
  • focusing on the deep specific issues or the path you are on.  
Testers need to be able to bounce back and forth at certain intervals. These are two types of thinking that can be incompatible at certain points. We are done with test reports.
3:15 PM – Rob is up and he’s wandering around telling a story about coming to RTI and talking to James about bringing a tool. Rob has a picture with 4 elements (a quick and dirty framework) showing his system under test. Rob is showing and explaining how his Ruby tool works; he has an example Ruby framework on his website. James said if enough people are interested in doing combination testing he will do a webinar on it. James and Rob are talking about the results the program returned, which measured the time in milliseconds. Now Paul Holland is up, helping to figure out what the data means and re-plotting it. One of Rob’s undergraduates, a programmer and now his toolsmith, created this application to help us test eBay Motors. It performs massive searches with lots of combinations.
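I don’t have Rob’s actual tool, but the “massive searches with lots of combinations” idea is easy to sketch – here in Python rather than his Ruby, with parameter names and values invented for illustration:

```python
import itertools

# Illustrative search parameters; the real tool's inputs are unknown to me.
params = {
    "site":     ["ebay.com", "ebay.co.uk", "ebay.de"],
    "category": ["Tires", "Wheels", "Lighting"],
    "sort":     ["best_match", "price_asc", "price_desc"],
}

def all_combinations(params):
    """Yield every full combination of parameter values – the brute-force
    basis for the kind of massive combined search the tool performed."""
    names = list(params)
    for values in itertools.product(*(params[n] for n in names)):
        yield dict(zip(names, values))

combos = list(all_combinations(params))
print(len(combos))  # 3 x 3 x 3 = 27 searches to run and time
```

Each generated dict would then drive one timed search; with more parameters the count explodes, which is exactly why a tool (rather than a human) runs them.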

3:42 PM – We are going to install the tool and try it. We are using tools to supercharge the human mind and give us more abilities.

3:51 PM – James wants us to brainstorm what we can do with this tool, how we might or how we think it should be used. Then he’s going to get another tool up and running.

4:10 PM – We are now reviewing the ideas and of course James comes to our group first. Dwayne and James are discussing the ideas we came up with on how to use Rob’s tool. Andrew figured out there were quite a few bugs in the tool and a discussion over it began.

4:35 PM – Other groups are talking about running the tool at different times and on international sites. One participant found a rather interesting bug using a tool designed to check all listings in categories or filters and compare them to non-categorized listings.

4:43 PM – James is showing an example of a blink test of eBay using a tool called IECapt. There are many oracles for this: a juxtaposition blink test oracle, zoom blink oracle, speed blink oracle and a noise zoom oracle.
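A blink test is normally done by eye, flipping rapidly between two captures so that differences “blink”. A crude programmatic analogue of the juxtaposition oracle is to diff two same-sized screenshots byte for byte – the capture and decode steps (e.g. via a tool like IECapt) are assumed to happen elsewhere:

```python
# A crude programmatic stand-in for a juxtaposition blink test: given two
# same-sized screenshots as raw pixel bytes, report how much differs.
def diff_fraction(pixels_a, pixels_b):
    if len(pixels_a) != len(pixels_b):
        raise ValueError("blink comparison needs same-sized captures")
    if not pixels_a:
        return 0.0
    changed = sum(1 for a, b in zip(pixels_a, pixels_b) if a != b)
    return changed / len(pixels_a)

# Toy data standing in for two captures of the same page.
before = bytes([10, 10, 10, 200, 200, 200])
after  = bytes([10, 10, 10, 200, 250, 200])
print(diff_fraction(before, after))  # one byte of six differs -> 1/6
```

The human eye is still the better oracle for *what* changed; a diff like this only flags *that* something changed and roughly how much.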

4:50 PM – James shows a mind map of eBay’s hosts, which he made through a web crawl. James says it’s big like a city; Jon says he prefers the term “death star”.

4:55 PM – Done. We are going kayaking.


Rapid Testing Intensive 2012: Day 2

9:00 AM – Start of the day. James is doing some talking about what we did yesterday; he’s built a mind map. James and Jon are going over our schedule – gonna try to stick to it better than we did yesterday.

9:23 AM – Jon is doing a de-brief from yesterday / Project Check in. Talking about how good the bugs are that were filed. Nice job.

9:30 AM – Reviewing the TCOs from yesterday. Don’t update them if it’s going to cost too much. James is critical of one TCO that could be affected by visual bias – only testing the things you can see. This is why we use heuristic strategies.

9:40 AM – The Lighting and Wheel centers for eBay Motors have been added to the scope of My Vehicles and Tire Center. Session: survey the functionality until 10:45 AM and modify your TCOs.

10:45 AM – Break time!

11:00 AM – eBay has been making updates to our bugs, so make sure you take a look. eBay says they are seeing some cool bugs! The instructors will make comments on what people post in Jira or on the pages. You can start TCOs in a brainstormy kind of way and make edits later.

11:06 AM – You can assign risks to your TCO, rating your survey of the product by probability and impact. Make a list of the possible bugs and then group them together. Are they bad or not so bad?

11:10 AM – Risk brainstorming time. First make a list of the kinds of bugs you worry about, then summarize them into risk areas (James says 5-12). Then create a component risk analysis based on your TCO.
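The probability-and-impact idea can be made concrete with a toy component risk analysis – the component names and 1-5 ratings below are invented, not from the workshop:

```python
# Toy component risk analysis: score = probability x impact, highest first.
# Components and ratings are invented for illustration.
risks = {
    "search filters": (5, 4),  # (probability 1-5, impact 1-5)
    "photo upload":   (4, 3),
    "saved vehicles": (2, 2),
}

ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (p, i) in ranked:
    print(f"{name}: {p * i}")  # e.g. "search filters: 20"
```

The numbers only exist to force a ranking conversation; the summarized risk areas, not the arithmetic, are the real output of the brainstorm.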

11:45 AM – Debrief on the risk brainstorming. If you get stuck with a blank slate you can use the Heuristic Test Strategy Model’s Quality Criteria Categories to get un-stuck. Rob is describing how his team built their mind map and created their risk outline. The artifacts you come up with aren’t as important as the mental preparation you go through and how it gets you ready.

12:01 PM – Time for Lunch.

1:12 PM – James talking about models and the testing framework diagram he started on yesterday. “Deriving test cases from requirements” makes it sound as if no thinking is going on, as if there were a mathematical procedure, which there isn’t – we shouldn’t talk about testing that way. Different kinds of modeling and experiment design are conceived, which drives learning and new test cases. Confusion and struggling are a normal part of the learning experience, but James’ diagrams help him get out of the confusion.

1:29 PM – Risks from eBay Motors according to James’ mind map: Usability, Feature Capability, Performance (page load times), International Consistency, Compatibility, Feature Consistency and Data Integrity. Participants will have risk categories that James missed.

1:38 PM – Rapid Testing focuses on test activities – the types of things you do when testing. Those things could map to test cases as long as it’s something done by a tester and not a tool. A human using a tool is a test activity, but a tool by itself is not.

1:45 PM – James will talk about his and Michael Bolton’s new idea of what an oracle means – a medium. You interpret the product for the people whose opinion matters; you are an agent.

1:48 PM – James made a note that he needs to put up the most recent up to date slides for RST. They aren’t online yet.

2:03 PM – Jon is setting up a risk exercise: a search result within one of the centers – Wheel, Light, Tire – that is not relevant to the query. Maybe the seller has put it in the wrong category. Online participants get to check the UK and DE sites versus US and perform an international comparison.

2:45 PM – End of risk exercise and time for a break.

3:00 PM – Back from break and James is doing a brief on a participant’s search. The person who likes learning something new is going to be better the next time.

3:13 PM – Talking about oracles – most specifications will not contain oracles. Here comes the calculator question: “What do you expect from a calculator when someone enters 2 + 2?” There is a difference between expected and unexpected. You may expect the calculator to return the number 4, you may expect it to stay on long enough for you to read the answer, you may expect it not to blow up in your face, etc. Many expectations are inherent, and you aren’t aware of them until they are violated.

3:26 PM – 10 consistency heuristics from the RST slides. The purpose isn’t to teach someone to test but to help someone explain – so you can push back against the process bullies. There is no oracle guaranteed to always solve a problem; they can’t be perfect.

3:32 PM – Test activities in which we will use Oracles. Jon Bach and I are going to do a pair test effort.

4:18 PM – Done! As Jon says, it seemed like we were going for less than 10 minutes! Jon is debriefing our live paired exploratory test session.

4:21 PM – James says we are going to use these reports to document test sessions. SBTM packages exploratory testing into session units; because sessions have a fixed time, you have a meaningful way to compare test activities. Sessions are relatively stable compared to test cases – no orders-of-magnitude differences. Sessions should be logically uninterrupted rather than physically uninterrupted; Jon and James had a little bit of an argument about this, but if you get interrupted, get that time back and continue.

4:35 PM – If you get chronically interrupted you can’t do SBTM, but you can do thread-based test management, which is testing with checklists. TBTM includes a list of test activities arranged in a mind map, and you service the threads as you go. You can’t count threads, but together they define the testing story. Artifact-based test management is where you test based on counting test cases – something James tries to steer companies away from. You also have activity-based test management (which includes SBTM and TBTM) and people-based test management.

4:40 PM – We will have test charters and will work from them tomorrow.

4:43 PM – James talks about humility, specifically epistemic humility.

4:45 PM – James says the way you manage these sessions is by managing the charters. You can list the charters, mind map them, organize them by activity or risk, etc. Jon shows his and James’ set of charters – they’ve created a grid with details on the sessions.

4:50 PM – How do you ensure that the notes for sessions are readable? You train people. Tell the story of your testing briefly, in a reasonably sharp way. In the last 10 minutes you can focus on finishing up your notes; if you are taking longer than that, you are taking too many notes.

5:02 PM – Done. Dice game tonight!

7:00 PM – 9:30 PM – After hours, game night! James and Paul (Holland) started us off on the “mind reading” game, and between the 10 of us we figured it out in – according to James – “the quickest of any of his students”. It took maybe 20 minutes. Then we moved on to a series of dice games with escalating challenges, where we tried stumping Jon Bach and Paul Holland by creating our own!
