For the last few months I’ve been using a no-code UI test automation service called Reflect.run to build out some UI tests (scenarios and such) with the goal of evaluating how well it works in terms of feedback (and value) as part of our build process. While this post won’t discuss what I think so far (or the odd feelings I have about not building my own), I did want to share a code example of how I got Reflect.run tests to run in our CI pipeline.
Our current CI tool is called CodeFresh. Reflect has an external API which, among other things, can be used to run tests by tag or suite. With this I was able to edit our existing CodeFresh pipeline and add a new post-deploy stage called “Run Regression Tests” that runs every test tagged “bvt”. (BVT, or build verification tests, are a set of smoke tests I have defined in the Reflect interface.)
In the example below I’m pulling a very small Linux image, installing curl and then “curling” Reflect’s API to run my tests. (The base image doesn’t include curl, so you have to install it before you can make the call.) Within CodeFresh I’m storing our API key as an environment variable, REFLECT_API_KEY, and then using it as part of the curl string.
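Here’s a sketch of what that stage can look like in a `codefresh.yml`. The step name is mine, and the Reflect endpoint and header shown (`tags/{tag}/executions`, `X-API-KEY`) are based on my reading of Reflect’s public API docs, so double-check them against the current API reference before copying:

```yaml
steps:
  run_regression_tests:
    title: "Run Regression Tests"
    stage: "post deploy"
    image: alpine:3           # a very small linux image
    commands:
      # curl isn't in the base alpine image, so install it first
      - apk add --no-cache curl
      # kick off every Reflect test tagged "bvt"; REFLECT_API_KEY is
      # stored as a CodeFresh environment variable
      - 'curl -X POST "https://api.reflect.run/v1/tags/bvt/executions" -H "X-API-KEY: ${REFLECT_API_KEY}"'
```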
That’s all there is to getting Reflect.run‘s tests to run as part of a CodeFresh build. Seeing as there wasn’t any documentation on how to do this before, now there is! (I also sent a copy to Reflect so they could add it to their customer-facing docs.)
I’m on an airplane for the first time in years, on my way to Atlanta, GA for CAST 2021. CAST is both my first in-person conference in years and the Association for Software Testing’s first since 2019. I’m pretty excited to confer safely at an in-person conference AND to see people from the community. While it’s always fun to see friends, I’m probably most excited to meet people in the Atlanta testing community.
As an Engineering Manager I spend a good deal of time on LinkedIn and Twitter talking about my team and what we do, and connecting with individuals I might want to hire. Honestly, it’s one of the few parts of being a manager I enjoy. Connecting with peers, talking about testing + quality problems and helping out where I can is important.
For weeks now I’ve taken a similar approach with CAST. I reached out to my few connections in Atlanta and asked about their interest in attending. I got a sense of who they knew (friends, co-workers or others) who might also be interested, until I was connecting with the Atlanta testing community organically. (Small, but organic.)
Those connections and conversations have had me pumped for this conference for weeks. Nothing against the CAST program, which looks quite good, but I am definitely looking forward to meeting those new people with whom I have some connection and backstory. Add on top of this the new board members, many of whom I haven’t met in person, and it’s going to be a good week!
When I look back on my nearly decade-long journey in the testing community, it all started with the Association for Software Testing. I came to the AST for their BBST courses, but I stayed for the supportive community of people I met both online and afk.
Once Upon a Time
In 2011, at StarWEST, I took an Introduction to Rapid Software Testing class. Somewhere during the class it was suggested I look into the Association for Software Testing and their BBST classes. Subsequent conversations and online research confirmed the value of the classes.
I joined the AST and took the first course, BBST Foundations. It was intense and yet very rewarding. Over the next two years I went from Foundations to Bug Advocacy to Test Design with a cohort of testers.
By 2015 I was attending my first Conference of the AST (CAST) and even I was surprised by how many people I “knew” through the community; it was truly special. That same year I attended my first peer workshop (which happened to be hosted by Cem Kaner and facilitated by Andy Tinkham). More than a few people at that workshop were AST members who again I had “met” through the community (and whom I still talk to this day).
Patience and Dedication
Much of my journey toward better understanding my craft has depended on the patience and dedication of people who care to help. Who spend their free time trying to do something for others in the pursuit of making the world just a little bit better. This is the world I came up in and I feel the need to do the same.
I became a BBST Instructor for this reason. I’ve made other connections in the testing community seeking to learn and help. From mailing lists to Skype groups to conferences, there were (and still are) many overlapping communities and sub-communities that offer a very similar “helping hand” to those looking to learn more.
I owe a great deal to the community the AST has fostered over the years. This isn’t to say I don’t also owe a big thanks to all those other communities because I do, but it’s been the AST’s community which has driven my understanding of the field and in many ways my success today.
In my search for help I read Lessons Learned in Software Testing, which led me to the Association for Software Testing. Through the AST I found the BBST courses which changed the way I understood software testing. Each course brought a greater level of understanding and a deeper respect for the complexities of the problems we seek to solve with software. I decided the best way to continue learning was to teach it. It’s been a huge part of my life and my contribution back to the community. I’ve been an AST member ever since!
A little rant on this concept of Good Tests vs Bad Tests and whether a good test (case) is a repeatable one.
Here’s the imperfect transcript:
Hi everyone, Chris Kenst here.
So my topic today is I want to talk about the difference between a good test and a bad test.
There’re two reasons that I bring this up today:
One, I saw yet another article talking about how to create effective test cases which went on to say they should be repeatable. That is, of course, wrong: your tests should not be repeatable.
Two, I have been asking this question as I hire for my third software tester here at Promenade Group.
I am hiring somebody that’s going to be a senior tester. And I think one thing that a senior-level person should be able to do is differentiate between what makes a good test good and a bad test bad.
But so far, of the 15, maybe 20 at this point, candidates I have asked this over email, very few have actually been able to describe what makes a good test and what makes a bad test. Most candidates fall into the same trap I saw in that article, which is just bad advice: every test is a scenario regardless of what you’re doing.
And this strikes me as very odd because it deeply violates this understanding of providing value.
So if you were to hire somebody to do a job and, regardless of how the job changed over time, they did it the exact same way despite the changing circumstances, you would think that’s a bad candidate. Somebody you wouldn’t want to hire.
It’s the same thing with test design. A good test versus a bad test is about value, and it’s about focusing on what that test is going to do: its mission. And so you can’t take the same approach to every kind of test, because that means you’re not providing value. You’re not actually aiming to achieve your mission with the testing.
So if you see or read advice about good tests versus bad tests that says you should make sure all your tests are repeatable, that’s just not the case.
For example, with boundary analysis and equivalence class partitioning, you don’t want repeatable tests because they’re not built to be repeated. Sure, you could repeat them, but that test is no longer very useful and probably not worth running again.
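To make those two techniques concrete, here’s a minimal sketch. The input field and its valid range are hypothetical; the point is that the technique derives a varied set of test inputs from the spec rather than replaying one fixed case:

```python
# Boundary value analysis: for an inclusive integer range [low, high],
# test just outside, on, and just inside each boundary.
def boundary_values(low, high):
    """Return the classic boundary test inputs for [low, high]."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Equivalence class partitioning for a hypothetical "quantity" field
# that accepts 1-99: one representative per class is enough, because
# any value in a class should behave like the others in that class.
equivalence_classes = {
    "invalid_too_low": 0,
    "valid": 50,
    "invalid_too_high": 100,
}

print(boundary_values(1, 99))  # → [0, 1, 2, 98, 99, 100]
```

Once those inputs have passed, rerunning the identical values tells you little; the value was in choosing them in the first place.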
So it’s really challenging.
I will probably write an article about this too, but if you’re ever thinking about how to write good tests versus bad tests, think about the value and the mission of the test itself. Then try to think of all the different attributes that might make something more valuable now, and something else less valuable.
And then focus on the things that deliver value, whether that’s power, repeatability, or ease of coverage and understanding. Because the better we become at testing as a community, the more we’ll see that there are lots of different ways we can run tests.
And the more we understand that, the more skilled and knowledgeable about our craft we become. The other kind of cool thing is that you can really set yourself apart from other people who may be interviewing.
If you can easily tell someone what a good test is and what a bad test is. If you can look at someone’s tests and say: these are all scenario tests, you built the same thing over and over again, how come you’re not varying anything?
All of a sudden you are in a much better position than all the other candidates, because you can easily differentiate your work from their work and you can tie it to value.
So, that’s my rant today about good test versus bad test.
I published an audio experience report about running my first Testing Community of Practice (CoP) at work. tl;dr it was a really good exercise that I intend to run regularly.
Here’s an imperfect transcript:
Hello everyone, Chris Kenst here, I wanted to talk about running my first community of practice.
So for a little context, I work for a company called Promenade Group.
We have 4 verticals, or businesses basically, that we support. And as a QA engineering manager, I have testers across two of those verticals.
So I have a tester on our vertical called BloomNation, which helps florists, individual small businesses, sell flowers online.
And then we have a vertical called Swigg where I also have a tester.
So we have built out this team, only two people, and I’m hiring my third. And as part of a growing team, what I wanted to do was bring my testers together to just talk about the things that we’re doing. Because as we scale up, as we add a third tester and perhaps a fourth, each person kind of works in their own isolated business, tackling the challenges for that particular business.
And so I really wanted to get this kind of cross-collaboration going.
So I thought a community of practice would be the right thing for it.
And frankly, after I set up a calendar slot for last Friday, I told my people about it two weeks in advance and asked each of them to present something.
So they would present something and I would present something. I set an agenda and got their feedback, and nobody really knew what to think about it because they had never done one. I had never done one either.
So we all kind of agreed to it, and then on Friday we had it. I set up two hours on the calendar, and we used that whole two hours for just three people.
And so this is kind of how it went:
The first thing we did was have each of my testers present what they were working on.
So what have they worked on recently?
What challenges have they had, what has gone well, and what hasn’t?
And even with only two testers, whom I of course have one-on-ones with, having them discuss the things they work on and the specific problems unique to the businesses they support was actually really eye-opening, even for me.
One of my testers works on our BloomNation business, and that’s a pretty stable, mature business. So she works on all these different things, spanning all these different features that don’t exist on our Swigg vertical. And so part of her discussion about what she’s working on was actually educating the other tester: hey, these are the unique things that we do over here.
And the other tester is like, no, we don’t have that. And then it was vice versa.
It was the tester I have on our Swigg vertical going: oh listen, you know, we’re changing our checkout flow, we’re cleaning this up.
We’re integrating with this new partner that can help us do deliveries. Just having the two distinct teams talk about what they worked on highlighted a whole lot of things I had taken for granted, because I have worked on both platforms. It allowed a bit more common sharing.
It was also really impressive to see them put together little presentations, some with slides, some without, but they were really on point. Everyone was able to talk to the things that went well.
It was a little bit harder to dig deep and understand the things that didn’t work well.
But I like the idea of at least getting them thinking about what isn’t working well, because are these things that we need to fix within the verticals themselves, or are there lessons that we can pick up and address ourselves?
So one of the things we can do ourselves that came out of this was: we just need more documentation around some of the things we test that span both platforms. That makes sense; we can create some tasks and write a bunch of documentation. That’s something I can do.
And so my feedback, my thoughts after going through it: clearly, it has value to have a small community of practice.
I would probably only run it every few months. Once a month would be too frequent, so probably every other month is good. There are clearly lessons and ideas to be shared.
One of my testers, I noticed, is doing much deeper work because they’re focused on, say, an epic at a time, while my other tester is doing much broader things. So they’re very different: even within the same company, on different teams, it’s just very different kinds of work.
You know, one of my testers is more likely to look into New Relic logs for the things she’s doing, and my other tester is more likely to look into third-party logs, because that makes sense. It’s just all kinds of very interesting things that came out of this.
The other thing I came away with is that, of course, I like to use Zoom because the video is better, you can see people better, but it just doesn’t work when you only have 40 minutes on the free plan and you have a two-hour meeting. So that was a takeaway for me.
Yeah, and so overall I think things worked really well. I hope to be able to run this again.
We’ll do something similar where we talk about the things that worked well and the things that didn’t. And I think as we grow the community of practice and get each other talking about the things we’re working on and struggling with, we will definitely see more benefits.
So, I will update you when I do my second one. Thank you.
Last month the Association for Software Testing (AST) announced a new partnership with Altom, the owner of BBST®, that enables the AST to refresh our curriculum lineup with the new BBST® Community Track and help fund the future growth of the materials. This partnership and refresh are a huge milestone for the AST.
For some perspective: Cem Kaner developed the BBST courses over a long period of time, starting with Hung Nguyen in the ’90s. In the 2000s, after Cem Kaner was recruited to the Florida Institute of Technology, he and Becky Fiedler received grants from the National Science Foundation to adapt the materials into online courses. Some time later Cem began collaborating with the AST to teach and develop the courses further. Those courses became known as AST-BBST to reflect the way they were developed and taught (by passionate volunteers). Eventually Cem formed his own company, developed BBST® further, and sold it to Altom after he retired.
BBST® classes are well known for their depth of core testing knowledge and focus on improving through peer review work. That’s great for students, but frankly it’s a maintenance challenge. Long before I started with the AST, the problem had been: how do we properly maintain and evolve the materials and classes?
A month ago someone on LinkedIn thanked a website and the person running it for helping them learn. They recommended others use the site. When people in my network commented on how the site wasn’t any good, I took notice. It reminded me of what Seth Godin said in ‘Not good enough’ is an easy place to hide:
The people who are paying attention are the ones who are trying. And shaming people who are trying because they’re not perfect is a terrific way to discourage them from trying. On the other hand, the core of every system is filled with the status quo, a status quo that isn’t even paying attention.
This is a really hard but important distinction to remember: it’s easy to criticize work in the name of peer review but end up on the “not good enough” bandwagon. (There’s a fine line between effective peer review and unwanted commentary.)
One major lesson I’ve learned from interviewing testers is most aren’t paying attention. They aren’t looking around at how to improve. They don’t read blogs, books or take classes. So while it’s tempting to criticize the work people are putting out, it’s more impactful to reach out to those who are doing nothing and encourage them to try.
In May of 2020, back when Promenade Group was still called BloomNation, I opened a job posting for a Software Test Engineer. This was to be the first of many test positions we would eventually hire for. After going through the whole process of hiring a software tester, I thought it would be useful to analyze the applicant data with the idea of learning something about how I hire and about the applicants who applied.
About the data
Some of this data was collected through our recruiting system and some was manually entered by me. I spent a good deal of time crunching through raw data in Excel, then coming up with new questions and going back to find more data. Some of the data wasn’t captured at all, so I made guesses and assumptions; specifically, I did this for the applicants’ location and gender. I don’t hire based on gender, but I was curious to see how it might have affected the final outcome. Despite having 142 submissions, I ended up pulling data on only 107 resumes.
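The tallying itself is simple once the records are in a consistent shape. Here’s a minimal sketch of the kind of count-by-field analysis described above; the field names and applicant records are hypothetical, not the actual recruiting-system export:

```python
from collections import Counter

# Hypothetical applicant records with manually filled-in "location";
# the real analysis was done on ~107 resumes in Excel.
applicants = [
    {"location": "Los Angeles", "source": "LinkedIn"},
    {"location": "Remote", "source": "Indeed"},
    {"location": "Los Angeles", "source": "Referral"},
]

# Count applicants per (guessed) location, most frequent first.
by_location = Counter(a["location"] for a in applicants)
print(by_location.most_common())  # → [('Los Angeles', 2), ('Remote', 1)]
```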
2020 was a year of starts and stops. Of more time but less mental energy. It was a year of developing patience and adapting to hard changes.
In early March my wife, an ICU nurse here in Los Angeles, saw the first signs of COVID coming in from travelers from Italy. Being ahead of the curve, with little ability to do anything other than brace and watch it unfold, forced us to develop some humility.
In the AST we watched and guessed at the trajectory of the virus as it spread country to country, derailing our in-person meeting. Then it shut down major conference after major conference. All we could do was wait and see what our options were for our own conference, CAST. By the time it was canceled no one was surprised.
I spent more time at home with my family and less time on my commute. Despite these benefits I didn’t find more mental energy—quite the contrary. I had plans to attend many virtual conferences and I made it to none of them. Neither free nor paid. I had plans to write for other publications but couldn’t.
Writing became more consistent as I took to putting my frustrations down on paper instead. Yet I hardly published to my blog. I couldn’t get my mind to make space beyond the everyday challenges. There were / are so many things to say but no space available.