Rapid Testing Intensive 2012: Day 5

The final day of the Rapid Testing Intensive #1:

The Group photo – taken on the 4th day (I’m in the 2nd row behind the #1)

9:00 AM – We all picked up our Certificates of Satisficity, saying we completed Rapid Testing Intensive #1.

9:05 AM – Jon, as PM, is starting us off with the RTI Project Status, beginning with some background about getting started at eBay. He was forced to do metrics he didn’t like and got bogged down in a ton of meetings, but he also got the opportunity to train new hires, so he created a slide deck which he is showing. He’s going over highlights of the week with some screen images – the first bug filed was eBay Motors experiencing technical difficulties.

9:12 AM – Mark (part of team TRON) continued to see the “experiencing technical difficulties” problem up until yesterday – it was tied to his account. The Wheel Center had about 39% of the bugs, the Light Center had 35% and the Tire Center had 43%. Jon claims the MyVehicles section had only 1 issue, but that’s unlikely – perhaps it just received less testing attention – and that’s a reason why metrics need a context and a story before they make sense.

9:15 AM – We had opening activities (preparing the coverage) like creating teams, establishing JIRA and Confluence, usability surveys, a test coverage outline and a risk list. Jon has a list of more usability questions, and it sounds like we can do more testing later today. Then we had the coverage activities (test!) like photo upload, international compatibility, performance testing, severity 1 bug drill-down and combinatorial testing. Finally we had closing activities (test your testing!) where we made a punch list, followed up on bugs, etc.
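
As an aside, combinatorial testing is about exercising combinations of test parameters rather than varying one factor at a time. Here’s a minimal Python sketch of the idea; the factors and values are hypothetical illustrations, not the actual parameters the teams used against eBay Motors:

```python
# Minimal sketch of combinatorial test generation.
# The factors below are made-up examples, not the real RTI test parameters.
from itertools import product

factors = {
    "center": ["Wheel", "Light", "Tire"],
    "browser": ["Chrome", "Firefox", "IE"],
    "account": ["guest", "signed-in"],
}

# Full cartesian product: one value per factor in every combination.
names = list(factors)
for combo in product(*factors.values()):
    print(dict(zip(names, combo)))

# 3 * 3 * 2 = 18 combinations in total; pairwise (all-pairs) tools
# shrink this further while still covering every two-way interaction.
```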

9:19 AM – Dwayne, Mark and I are the top 3 bug reporters for the exercise, and we are all on the same team. This metric doesn’t really matter, but it made Jon question why the top 3 would all be from the same group; Dwayne said it probably had something to do with internal competition. Now we are going to go through our bugs, read any comments and label them with categories. We can then nominate anything we’d like reviewed, both good and bad.

9:55 AM – Done with bug triage and updating our bug categories. For any areas that we think might need more coverage, we’ve got 30 minutes for a final session.

10:30 AM – Break time. During our break I was part of a conversation with Andrew, Thomas and Dwayne where James reviewed some of our session notes. It was an interesting debrief because he pointed out information that was unclear: I had written the line “that was weird” but never explained what the weird thing was. Considering he was the audience for the report, a report that confused him wasn’t doing its job. Good information.

10:48 AM – We are back and Jon is reviewing one of the bugs nominated in the chat room. Jon is going through and cleaning up the bug, and James recommends trying to anticipate what the developer is going to ask – in the case of eBay, that means including URLs and links to the particular auction items.

11:00 AM – There was a lot of information in a TCO (test coverage outline) James was working through: it was a mix of parts of the product that might be tested, requirements that might be tested against, and test ideas. It’s a good idea to keep those separate because it frees you to do more things. A TCO isn’t the best place to combine or brainstorm test ideas; its entries should be categories. If you have questions in the TCO you should be trying to answer them, and if you can’t, remove them.

11:10 AM – The Open Questions and Risks sections of this person’s document were good according to James; it means they had a lot of questions and were probably paying attention – as long as they didn’t copy them from someone or somewhere else. James put his feedback in as a summary on the JIRA page.

11:15 AM – James likes to see expected and actual results in a bug report because they help identify what the person thinks the issue is. James is comparing a bug of mine to a bug filed by Paul Holland. You don’t always need to stick to a template, especially with steps to reproduce – if the steps are obvious, you can skip them.

11:20 AM – James is talking about the debate he and Andrew had, which I mentioned above at 10:30 AM. James says that when Andrew gave his oral testing story yesterday he was very effective in telling it, but his written session notes didn’t tell the same story. The interesting part of the testing story (according to James) was when Andrew, Mark and Thomas followed up on an interesting artifact that turned out to be a privacy issue, and that part wasn’t recorded in the notes – to James it meant the guys didn’t feel it was an interesting story. The debrief of the testing, talking to each other, was very important for the knowledge transfer.

11:35 AM – James and Jon are reviewing another bug that was nominated. So far most of the bugs have been written well, so they are trying to find something written badly. I think they finally found a bad TCO: it looks like the person just copied the eBay Motors homepage without filtering the information, relying only on what was visible.

11:50 AM – James and Jon are done evaluating the information for today, but they will have to continue doing so as the report is prepared, because the bug list has to be fully scrubbed and problems found for each of eBay’s areas of concern. Don’t be afraid to try things and fail, because we get better and better; the learning happens all along.

11:53 AM – That’s a wrap.

There are no photos from Day 5. Check out the posts from the other days.
