Highlights from CAST 2018

Last week I attended CAST in Cocoa Beach, Florida. It was my second time attending and my first since CAST in Grand Rapids back in 2015. It was a fun experience for a number of reasons, including giving my first workshop at CAST and being elected to the AST Board of Directors!!! Here are some highlights:

  • Dwayne Green and I hosted a workshop called A Quick Introduction to Domain Testing, about applying the test technique to a few sample applications. The workshop went well given the limited time we had and the challenge of teaching a complex topic with a hands-on approach. We’re working on a newer version, a half-day tutorial for next year, which we believe will be much better. The upside is we got roughly 35 people to do some hands-on testing and present their findings after each session! (A minimal sketch of the technique follows this list.)
  • Despite the tiring nature of the travel (I was only at the conference for two days), I walked away feeling energized: I met new people, came face to face with people I had known purely online (and now AFK), and took a few things away from the talks.
  • Like many others in attendance I heard of Jerry Weinberg’s passing. While I never had the pleasure to meet him in person, I have read a few of his many books and am aware of his huge influence on our industry and community.
  • Gave an even briefer lightning talk on the Modern Testing Principles. I was pleasantly surprised when I asked the packed room how many had heard of the principles and roughly 20% of the audience had! (Afterwards I had a few follow-up conversations about the principles as well.)
  • I was lucky enough to be elected to the Association for Software Testing’s Board of Directors along with a few fantastic people I already collaborate with. Thank you to everyone who voted! Our new terms start in October and I will be taking on the role of Treasurer! I also got to sit in on my first board meeting as a director-elect (I didn’t participate since my term hasn’t started yet).
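
For anyone unfamiliar with domain testing, here’s a minimal sketch in Python of the core moves: partition an input domain into equivalence classes, then test at the boundaries of each class, where off-by-one mistakes like to hide. The discount function and its rules are hypothetical, invented for illustration rather than taken from our workshop materials.

```python
# Domain testing sketch: partition the input domain into equivalence
# classes, then test at the boundaries of each class.
# The discount rules below are hypothetical, purely for illustration.
import pytest

def discount(age: int) -> float:
    """Return a ticket discount rate based on age (illustrative rules)."""
    if age < 0 or age > 120:
        raise ValueError("age out of range")
    if age <= 12:       # child class: 0..12
        return 0.50
    if age >= 65:       # senior class: 65..120
        return 0.30
    return 0.0          # adult class: 13..64

@pytest.mark.parametrize("age, expected", [
    (0, 0.50), (12, 0.50),    # edges of the child class
    (13, 0.0), (64, 0.0),     # edges of the adult class
    (65, 0.30), (120, 0.30),  # edges of the senior class
])
def test_discount_at_boundaries(age, expected):
    assert discount(age) == expected

@pytest.mark.parametrize("age", [-1, 121])
def test_discount_rejects_out_of_range(age):
    with pytest.raises(ValueError):
        discount(age)
```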


Coding Without a Net

Yahoo! has been in the news quite a bit over the last few years as its primary business of placing display ads slowly dies and it searches for new ways to grow and/or remain relevant. It has hired new executives, lost executives and made acquisitions. Plenty of people still use Yahoo! products like Finance and Mail. According to Yahoo!’s advertising page, some 43 million people visit its homepage alone every day.

The Article

In December, IEEE Spectrum posted an article about Yahoo! eliminating its testing department, with the tagline “What happens when you eliminate test and QA? Fewer errors and faster development…”

Yahoo! isn’t the first big tech company to move away from (presumably) dedicated testing teams. Google has never had them, relying instead on SETs (software engineers in test) to build out automation infrastructure and TEs (test engineers) to understand testing and build out tools (or something to this effect).1 That said, it’s easy to question how much Google really cares about quality, given the daunting task of dealing with such huge scale and demand and how often its services seem to go down or remain difficult to use. Microsoft did something similar a few years ago, moving toward a combined engineering approach with everyone focused on the product. Alan Page, in particular, has talked about the fluidity of his role at Microsoft.

The most interesting idea in this article was Yahoo!’s conclusion that “coding with a net” wasn’t such a good idea after all. Let’s assume “coding with a net” meant one team did some programming and another, separate team was tasked with understanding those changes and testing for them. Most often this means “please make sure we didn’t break something” (regression testing) rather than “help us understand what we don’t know” (uncovering new information). That seems like a very narrow net, doesn’t it?

First, you choose to use one primary test technique (regression testing) out of the hundreds available. Second, those most responsible for building and ensuring quality, the programmers, now have to wait until the code is shipped to a different group before they can start getting feedback about what works and what doesn’t. That’s a very long time. Testers shouldn’t be used as gatekeepers; they should work together with programmers to understand as many aspects of quality as possible. The faster that can happen, the better!
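
To make the “narrow net” concrete, here’s a hedged sketch of what a regression check usually looks like, in Python with pytest. The slugify function and its pinned expectations are hypothetical; the point is the shape: a check like this confirms yesterday’s behavior still holds, but it can’t uncover what nobody thought to pin down.

```python
# A regression check re-verifies behavior we already believe is correct.
# The slugify function and its pinned expectations are hypothetical,
# just to show the shape of a "did we break it?" test.
import re

def slugify(title: str) -> str:
    """Turn a post title into a URL slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_still_works():
    # Pinned expectations guard against breakage, but by design they
    # can't reveal problems with inputs nobody thought to pin down.
    assert slugify("Coding Without a Net") == "coding-without-a-net"
    assert slugify("  CAST 2018!  ") == "cast-2018"
```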

My Experience

Article aside, my least favorite work experiences have been those where I was on a “siloed” or dedicated test team, away from the fast feedback of the rest of the development team. My favorite work experiences by far, including at my current company, have me on the development team, roaming around the product, trying to figure out how to test things, how to improve quality and constantly investigating the product. I’m the “quality guy,” much like Alan describes here.

I do think testing helps, and often you want a test specialist or a group of testers to help understand the product and pay attention to the many different aspects of quality your product might need to have. Eliminating testing or QA shouldn’t result in faster development (or much faster development) if the teams or roles are in alignment. My personal goal is to be an effective technical investigator: someone who understands quality (or at least spends a good deal of time thinking about it) and who is valuable to the team.

I guess I’m not a fan of “coding with a net”. I think the practice leads to dependency and gives people an excuse for building something bad in the first place. It’s important for the whole team to build quality.

Feedback from a Developer (without knowing it)

Recently someone asked one of my developers if we created formal test plans. Since the conversation was over email, my developer cc’d me and responded saying he wasn’t sure, but that he had seen me create test cases in our bug tracker, SpiraTeam. He wasn’t sure if that qualified as a formal test plan.

Upon reading the email I responded, asking what the questioner considered a formal test plan. Then I explained how we use Mind Maps to detail test designs, and that works for us as a “test plan”. Yet I kept wondering when I had last written a test case, so I went through our system and found the timestamp on the last created case. It read July 7th, 2011.

Curious still, I sent an email to my developer and asked him when he had last seen me create a test case in Spira. His response was:

“I don’t know, didn’t you create some for the release before the DW or something? Maybe it wasn’t test cases, but I’ve seen you do things that take forever in Spira, I always thought they were test plans or test cases.”

“…I’ve seen you do things that take forever!” Yup, that’s what writing explicitly defined scripts (sometimes called test cases) will do. They take time to write, time to “execute”, don’t necessarily help the tester plan their testing, and are abandoned after use. After numerous years, it was time to move on to something more effective.

This conversation was interesting for a few reasons:

  • First, my developer confused a test case with a test plan. Not a big deal, and not unexpected, but interesting just the same. I’m sure many people in my organization would answer the question the same way.
  • Second, my developer remembered me writing test cases over a year ago but didn’t recall that a week earlier I had sent him a Mind Map to review. I wonder if he remembers *seeing* me work on the Mind Map?

Perhaps writing test cases seemed so strange (or wasteful, or tedious?) to him that he couldn’t help but remember it? Like when your parents remember something you did wrong but not the things you did well.

Perhaps he didn’t connect the Mind Map with testing, or with test planning? If my developer’s response is the consensus across my organization, then the visibility and/or understanding of my work isn’t where it needs to be.

Thanks for the feedback. =-)

Keith Klain’s 2012 Star East Keynote

Keith Klain, head of Barclays’ Global Test Centre, gave a keynote presentation at StarEast 2012 in Orlando, FL entitled Bridging the Gap: Leading Change in a Community of Testers, and it was really well received in the context-driven community.

The story is about how they positioned testing in the organization and how they bring new people in, a process Keith refers to as their induction process. Keith also talks about Barclays’ approach to testing and how they took mismanaged test teams and realigned them to produce great results and benefit the company. He places a lot of emphasis on knowing what you want from your team. I’d recommend managers take a look.

You’ve got a few options if you missed Keith’s presentation on transforming Barclays Capital’s independent Global Test Centre (GTC) into a well-recognized and effective business:

  • Zeger Van Hese (of Test Side Story) has written a nice summary of Keith’s talk here that’s worth reading.
  • If you’ve got the time, it’s worth watching the presentation – you can download or stream it from Vimeo here:

Keith Klain Bridging the Gap: Leading Change in a community of testers from Chris Kenst on Vimeo.
Enjoy!


Throw someone else in to help QA it faster!

“Throw someone else in to help QA it faster!”

I’ve heard this statement many times in my career, but it happened again just recently and it got me thinking. Aside from the poor choice of words (“QAing” something? is that really a verb?), why would someone say this?

This seems to happen when management realizes it will take longer to test something than they initially planned and/or a client demands the product sooner. The most recent occurrence came when management didn’t communicate the expected release date and freaked out at the estimated test duration. My response was: you can have the product whenever you want, but let me tell you what won’t be tested. This elicited the response, “no, we don’t want to not test it, how about we… throw someone else in to help QA it faster.” Clearly someone hasn’t heard of Brooks’ Law.

Brooks’ Law was coined by Fred Brooks in his book The Mythical Man-Month and states that “adding manpower to a late software project makes it later”. It also appears the person saying this doesn’t understand what Quality Assurance (or QA) means.

If the role of software testing is to help the team understand vital information about the product, and you bring in someone who doesn’t understand how this is accomplished, the value both will provide is diminished. You slow down the primary tester as they coordinate with the person being brought in and divide up work based on skill and comfort level. Configuring different machines, problems local to the users, and a whole host of other issues can crop up as well. In other words, it takes time for people to become productive on a project.
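
Brooks’ intercommunication formula makes that coordination cost concrete: n people have n(n-1)/2 potential communication channels, so overhead grows faster than headcount. Here’s a back-of-the-envelope sketch in Python; the formula is from The Mythical Man-Month, but the overhead and ramp-up numbers are invented purely for illustration.

```python
# Back-of-the-envelope sketch of Brooks' Law. The intercommunication
# formula n*(n-1)/2 is from The Mythical Man-Month; the overhead and
# ramp-up figures below are invented purely for illustration.

def channels(n: int) -> int:
    """Potential pairwise communication channels among n people."""
    return n * (n - 1) // 2

def effective_capacity(n: int, overhead_per_channel: float = 0.05,
                       ramp_up_penalty: float = 0.5) -> float:
    """Rough testing capacity in 'productive person' units.

    Each pairwise channel skims a fraction of total effort, and the
    newest member contributes only partially while ramping up.
    """
    raw = n - (ramp_up_penalty if n > 1 else 0.0)  # newcomer still ramping up
    coordination = channels(n) * overhead_per_channel
    return max(raw - coordination, 0.0)

for n in range(1, 6):
    print(f"{n} tester(s) -> {channels(n)} channels, "
          f"~{effective_capacity(n):.2f} effective capacity")
```

With these made-up numbers, each added tester contributes less than the one before, which matches the experience above: the testing duration doesn’t shrink just because headcount grows.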

Anyone who does time-based work (has a deliverable) can tell you it’s helpful to know when you need to be done. Crazy, right? Testing a product is a never-ending journey, but specific periods of testing can end, for example when the product is needed in the field. There will always be more testing to do, but you don’t always have the time, nor is it always a good use of resources. Dates can help. If this statement comes up often, either management or the team has problems communicating about when things need to be done. Setting dates isn’t a surefire method, since dates can change, but so can the decision about what still needs to be tested and what’s acceptable risk for the field.

While it’s possible for an additional person to add some incremental value to a project (they might have some unique perspectives that can help, especially if they’re a subject matter expert), it’s going to take them a while. Don’t assume “throwing someone else in” will do anything other than make the testing duration longer.

Performance Reviews

It’s that time of year at my company when we meet with our respective bosses to discuss how well we did. Review time is probably the least fun time of the year, not because I’m fearful of how I might do, but because it’s time to give my boss hell for our bad performance management system.

Our company has a standard form that was borrowed from some other company’s inept performance management system (most companies I’ve worked for also had bad performance systems). The form has evaluation areas that don’t apply to our roles and some areas nobody knows how to evaluate. We start by filling out a “self review” and then send it to our respective bosses for their comments and grading, on a scale from “didn’t meet expectations” to “far exceeds expectations”.

According to the book The One Minute Entrepreneur, there are three primary parts to an effective performance management system:

  1. Performance planning. This is where managers and their people get together to agree on goals and objectives to focus on.
  2. Day-to-day coaching. This is where managers help their people in any way they can so they become successful.
    1. It doesn’t necessarily mean you meet or talk about how things are going every day. Instead, managers should work to support their people, praising when things go right and correcting when things go wrong. This is the stage where feedback happens – where real managing is done.
  3. Performance evaluation. This is where managers and their people sit down and examine performance over time; also called a performance review.

(Note: Both The One Minute Manager and The One Minute Entrepreneur are good books with a simple message: clear goals make managing far easier and better for everyone involved.)

At the beginning of each of the last few years, my boss and I have attempted to do some performance planning. Then we get busy, lose sight of the planned goals, and the coaching happens sporadically or not at all. This year will be the same. We’ll review each other’s comments and grades, and I’ll have to “adjust” his misperceptions (if I get anything below far exceeds) or point out that he never communicated any goal other than what I achieved. I make it a point to debate and remove any area on the form that is too vague or subjective or that nobody knows how to evaluate. I don’t like doing this, but I think it’s the only fair approach.

Looking back, I should have done this more often. Most of the companies I’ve worked for do #1 occasionally and #3 every year, but almost never do #2. Day-to-day coaching is the hardest, but it provides the most value. My boss usually has a few areas where he thinks I could have done better, to which my response is always, “why didn’t you say something earlier?” If he had, I could have corrected my behavior and brought value to both of us sooner.

Some managers don’t tell their employees what is expected of them; they just leave things alone and mark them down when they don’t perform well. I imagine some do this because they don’t know any better. Others do it to look good: they believe and/or fear that ranking everyone at the highest level of the review scale will make it look like they can’t discriminate between poor and good performance. I’m grateful not to have a manager like that.

If there’s little planning and no coaching, then what is the point of the review? Perhaps it’s worth skipping the review altogether, assuming a raise isn’t impacted by not having it. However, I fear my future raises are tied to a poor review system, which is why I give my boss such a difficult time. He’s well aware of the problems with the system, but he doesn’t do anything to fix them. I want to avoid believing in infallible systems: it’s broken, so let’s fix it. Until then I don’t have a choice except to give my boss hell.

Rapid Testing Intensive Confirmed!

(Stolen from the Rapid Testing Intensive site)

It’s official: I’m booked for the onsite Rapid Testing Intensive with James and Jon Bach at the end of July on Orcas Island in Washington. According to the website, this testing intensive will be based on “… Session-Based Test Management and Rapid Software Testing methodologies” and will “…allow you to see how the modern theory of testing meets practical work.” Sounds like a blast.

There are 10 onsite and 42 online participants as of 4/2/12, and one of those onsite participants is Robert Sabourin. I was in his “Using Visual Models for Test Case Design” class last year at StarWest, so it will be interesting to work side by side with him as well as a few of the other participants.

As I said in my prior post, my goal is: “Experience and feedback on modern testing methodologies!” Can’t wait.

Learning about customers

Working for a startup company, you go through a lot of problems, potential solutions and more problems. I was reminded of my company by the Startup Lessons Learned article entitled Validated learning about customers. Eric Ries, who writes the Startup Lessons Learned blog, describes two scenarios with two fictional companies.

My company is like the first company in his post: the metrics of success change constantly and our product definition fluctuates regularly. Our development team is always busy, but those efforts don’t exactly add value to the product. We’re pretty good at selling the one-time product, but we have to put a lot of effort into each sale, so the sales process isn’t scalable. Worse, it’s frustrating that management doesn’t see this.

At the end of the article Eric lists some solutions for companies in this “stuck in the mud” situation, and I think the third solution is something my company should try: build tools to help the sales team reduce the time spent on each sale, and try building parts of our product that make the sales process faster or the investment afterwards smaller (I added that last bit). How good is your product if it requires customers to spend large amounts of time, energy and money to make it usable? Shouldn’t the company make the use of your product as frictionless and automated as possible so it’s easy for customers?

After reading this article I’m interested in reading his full book: The Lean Startup.

5 Ways to Revolutionize your QA

I can’t remember where I originally found this post and the corresponding eBook, but the eBook is definitely worth a look. Here is the former uTest blog post, now an Applause blog post.

The 5 ways or insights are:

  1. There are two types of code and they require different types of testing
  2. Take your testing down a level from features to capabilities
  3. Take your testing up a level from test cases to techniques
  4. Improving development is your top priority
  5. Testing without innovation is a great way to lose talent

In point 2, James Whittaker also talks about a planning and analysis tool he used at Microsoft called a CFC, or Component-Feature-Capability analysis. This allowed them to take testing down a level from features to capabilities.

The purpose is to understand the testable capabilities of a feature and to identify important interfaces where features interact with each other and external components. Once these are understood, then testing becomes the task of selecting environment and input variations that cover the primary cases.

While this tool was designed for testing desktop software, I’m inclined to think it would work well for testing web applications. Essentially, with the CFC you map out the individual components/features of the web application in a branching form that closely resembles a mind map. As a matter of fact, a mind map might be better! =)
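
I haven’t seen Whittaker’s actual CFC artifacts beyond the eBook, but the idea maps naturally onto a simple tree. Here’s a minimal sketch with a hypothetical web app, just to show the branching, mind-map-like shape:

```python
# Sketch of a Component -> Feature -> Capability (CFC) breakdown as a
# tree. The app, features, and capabilities are hypothetical; the point
# is the branching, mind-map-like structure, not the specific entries.
cfc = {
    "Checkout": {                                   # component
        "Cart": [                                   # feature
            "add an item",                          # capabilities...
            "remove an item",
            "persist across sessions",
        ],
        "Payment": [
            "charge a credit card",
            "apply a discount code",
            "interface: external payment gateway",  # interaction to probe
        ],
    },
}

def rows(tree: dict) -> list[tuple[str, str, str]]:
    """Flatten the tree into testable (component, feature, capability) rows."""
    return [(component, feature, capability)
            for component, features in tree.items()
            for feature, capabilities in features.items()
            for capability in capabilities]

for row in rows(cfc):
    print(" / ".join(row))
```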