This site has public articles dating back to March 2009. At some point in my blogging journey I moved to WordPress as a platform and inherited a URL structure with dates in it. I recently got rid of this structure and simplified it to just the article name.
If you had come to this site a month ago, a typical blog post URL would have been:
I’ve come across a number of Frequently Asked Questions about Exploratory Testing and I’ve got what I hope are pretty good answers.
Exploratory Testing FAQs
Frequently Asked Questions about exploratory testing. Got a quick question? Get a quick answer.
Can tools be used in Exploratory Testing?
Yes, there are many examples where people have used tools to enable and enhance their exploratory testing.
Is all testing exploratory?
No. Not unless you change the definition of testing to specifically exclude testing done by machines.
What is Exploratory Testing?
ET is an approach (or style) to testing that emphasizes the individual tester focusing on the value of their work through continuous learning and design.
Is Exploratory Testing used in Agile teams?
Yes. ET is about optimizing the value of your work given your context and so it’s a natural fit in agile projects and agile teams.
What is the definition of Exploratory Testing?
Exploratory Testing is a style (approach) of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution and test result interpretation as mutually supportive activities that run in parallel throughout the project.
Is Exploratory Testing a test design technique?
No. You can design tests in an exploratory or scripted way (to a degree each way). This is why it’s called an approach. But ET itself is not a technique (a way to group, design and interpret results of similar kinds of tests).
Does Exploratory Testing require charters?
No, but charters can certainly be helpful.
Does Exploratory Testing require a timebox?
No, but a timebox can help you create similarly sized sessions for Session Based Test Management.
An exploratory testing charter is a mission statement for your testing. It helps provide structure and guidance so that you can focus your work and record what you find in a constructive way.
How to Write an Exploratory Charter
My favorite way to structure exploratory testing charters is to base them on “A simple charter template” from Elisabeth Hendrickson’s awesome book Explore It!, Chapter 2, page 67 of 502 (ebook).
Explore (target) With (resources) To discover (information)
Target: What are you exploring? It could be a feature, a requirement, or a module.
Resources: What resources will you bring with you? Resources can be anything: a tool, a data set, a technique, a configuration, or perhaps an interdependent feature.
Information: What kind of information are you hoping to find? Are you characterizing the security, performance, reliability, capability, usability or some other aspect of the system? Are you looking for consistency of design or violations of a standard?
Examples of Charters
While this is my favorite way to structure exploratory testing charters (I think it's a really straightforward template), it isn't the only way. As a way to learn, I've compiled a list of example charters you can look at on my Guides Page.
A few examples include:
Check the UI against Apple interface guidelines.
Identify and test all claims in the marketing materials.
You can see some use the template and some don’t.
How do charters relate to Session Based Testing or Session Based Test Management?
Exploring can be an open-ended endeavor which is both good and bad. To help guide those explorations you can organize your effort with charters and then time-box those charters into “sessions”. You can use sessions to measure and report on the testing you do without interrupting your workflow; this is called Session Based Test Management.
You can use exploratory charters without using Session Based Test Management. I've seen many examples of people using charters in JIRA stories as part of the sign-off criteria for testers, developers and product managers.
Blogging is an excuse to write more. Writing is a great way to think clearly about a subject. Running a website on a (small) Linux droplet is a starting point to learn more about the ops side of the world.
Shortly after publishing this I got to deal with the less fun side of the ops journey.
I use Cloudflare as a CDN and caching layer over Kenst.com. This means DNS is configured through Cloudflare rather than say my hosting provider. Occasionally there have been issues where Cloudflare can’t talk to the droplet (virtual machine) and the site becomes inaccessible.
Right after the December post went live, that same problem came up and took down the site. Eventually I think Cloudflare sent an email alerting me, but I found out more quickly from someone on Twitter.
Most of the time things hum along quite nicely and my ops work is minimal. However, when things do go wrong, they can be quite time consuming to debug and fix. In the past, while updating Linux packages (as a DIY ops person you patch your own security holes), I managed to install the wrong versions of PHP and then somehow couldn't get Apache running again. That wasn't fun.
Now I take regular snapshots (in addition to backups) to make it faster to restore when I mess up. DigitalOcean offers API access, which I've set up in Postman for easy access when something isn't working.
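The same snapshot action I trigger from Postman can be sketched in a few lines. This is only a sketch based on DigitalOcean's public v2 API (the droplet ID and token are placeholders, and `build_snapshot_request` is my own helper name, not an official client function); it builds the request rather than sending it:

```python
# Sketch: construct a DigitalOcean v2 "snapshot" droplet action request.
# 12345 and "YOUR_API_TOKEN" are placeholders; pass the request pieces to
# any HTTP client (requests, httpx, curl) to actually fire it.

def build_snapshot_request(droplet_id: int, snapshot_name: str, token: str) -> dict:
    """Return the URL, headers, and JSON body of a droplet-snapshot call."""
    return {
        "method": "POST",
        "url": f"https://api.digitalocean.com/v2/droplets/{droplet_id}/actions",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "json": {"type": "snapshot", "name": snapshot_name},
    }

req = build_snapshot_request(12345, "pre-upgrade-snapshot", "YOUR_API_TOKEN")
```

Having this one call scripted (or saved in Postman) means a pre-upgrade snapshot is a single action away before touching packages on the droplet.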
Back to my original problems, I managed to solve both by:
Installing the Cloudflare WordPress plugin. I can’t say why this had a positive impact but I haven’t had the issue since.
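When the CDN layer misbehaves, another lever worth having scripted is a cache purge. Here's a minimal sketch of Cloudflare's v4 purge-cache call, again only building the request (the zone ID and token are placeholders, and `build_purge_request` is a hypothetical helper name of mine):

```python
# Sketch: construct a Cloudflare v4 "purge everything" cache request.
# "ZONE_ID" and "YOUR_API_TOKEN" are placeholders for your zone and token.

def build_purge_request(zone_id: str, token: str) -> dict:
    """Return the URL, headers, and JSON body of a purge-everything call."""
    return {
        "method": "POST",
        "url": f"https://api.cloudflare.com/client/v4/zones/{zone_id}/purge_cache",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "json": {"purge_everything": True},
    }

req = build_purge_request("ZONE_ID", "YOUR_API_TOKEN")
```

Purging everything is a blunt instrument, but for a small personal site it's a quick way to rule out stale cache when the site looks broken behind Cloudflare.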
Working in technology is a forcing function to understand conceptually how systems work together (like CDNs, DNS, and virtual machines). Applying them to my own projects turns that conceptual understanding into hard-fought knowledge of how systems can work and fail. All it takes is a lot of time, patience and an ability to deal with failure! Self hosted blogs are not for the faint of heart!
I started my previous post by saying “2021 was an improvement over the previous year”. This was due, in large part, to the growth and new challenges at work. Here are some more reflections and a few fun stats from 2021 on those new challenges:
Growth at Promenade came as we hired people all over the engineering team from Testers to DevOps, VPs, Directors, and many other roles. Scanning my calendar I see some fascinating stats:
Took part in 61 interviews. 45 of which were in the first half of the year (we slowed hiring in the 2nd half).
Helped hire a number of executives including: VP of Engineering, VP of Product and Director of TechOps.
Our engineering team grew from 9 people in January to 26 people by the end of December. (There are even more now.)
Hiring is challenging in and of itself. Even more so when you source and screen all your own hires with little to no recruitment help (like I tend to do).
For the first time I got an official engineering manager title. I already had people reporting to me, but getting the title to recognize the position was a nice step. The challenges then became how to manage people across different business units and how much coaching and 1:1 time to give versus spend on my own work.
2 remote community of practice events
1 remote team building event
For the first time(s) this year I ran a few events virtually. With teams spread out over different businesses, it's important to me to look at my team as a community of practice rather than as a separate group.
Metrics became a thing. How do we think about assessing and reporting on quality and test systems? Do we do this for people as well? This seems to be a very interesting topic in and of itself.
Professional development. I informally started building a career ladder for software testers and a professional development plan. This has always been a goal I’ve had in the back of my mind. I give each person on my team goals with classes to take and pass. Now I’m going to share it with others. Scary!
Community work included:
1 talk given at the Odyssey Conference
1 BBST course taught on our new Platform
2 (or was it 3) hosted AST events
1 in-person conference organized (I helped in a very small way)
2 (or was it 3) Twitter spaces held
Simplifying things like this is fun but it also makes it seem easy and last year was anything but easy. It certainly was fun.
2021 was an improvement over the previous year in a number of ways: more mental energy, more growth at work and a safe return to in-person conferences at the AST.
Growth and the challenges at work have become inspiration for sharing in short form on LinkedIn and Twitter. They’ve also given me more desire to write long form (the second half of 2021 was better than the first). Ultimately in 2021 I published 14 articles, recorded 2 small podcasts (rackets) and recorded a single interview all of which became articles on this site.
Today, as per tradition, I summarize the most popular and important articles I’ve written over the past year. You can find previous years in review here: 2020 | 2019 | 2018 | 2017 | 2016 | 2015
On to 2021 in Review:
The Five Most-Viewed Articles
The five most viewed articles according to page views:
How to Export Environments from Postman – A simple problem I had with not being able to remember how to do something. It is far and away the most popular article I wrote last year but also the least important.
Hiring a software tester, an analysis – By far my most time consuming post given I had to collect and analyze data and then write a report on it. I am pretty happy it got a lot of traction.
Better Tester Training Material – Reflecting on the effort Simon Peter and I put together to create better testing materials. In the end we didn’t use them but those materials are free for anyone to use.
Not Good Enough – I get the desire to criticize others when the work they produce doesn’t measure up. I do it too often. But I also recognize those who are trying shouldn’t be shamed or discouraged from getting better. It’s a delicate balance.
A few months ago I was chatting with Evgeny Kim about some of the reservations I had while exploring a new codeless test automation tool. He was also exploring some codeless tool options and so he invited me onto his podcast to talk about it. We chatted about a wide range of things such as challenges faced by software test engineers, the role of codeless tools and hiring problems.
It was a fun podcast with some interesting topics, so I had the video transcribed. Watch the video or read the transcript, whichever you prefer. (Sorry in advance for any mistakes.)
EVGENY KIM: All right, guys. Thank you for joining for today’s podcast session. We have a guest, Chris Kenst. He’s a QA Engineering Manager at a company, Promenade Group. Also, he’s a president of Association for Software Testing. And we’re going to talk today about the future of software test engineers. Thank you, Chris. Hi.
There are two types of schedule, which I’ll call the manager’s schedule and the maker’s schedule. The manager’s schedule is for bosses. It’s embodied in the traditional appointment book, with each day cut into one hour intervals. You can block off several hours for a single task if you need to, but by default you change what you’re doing every hour.
When you use time that way, it’s merely a practical problem to meet with someone. Find an open slot in your schedule, book them, and you’re done.
I’ve built out a team of three testers (plus myself) who each work embedded on an engineering team for a given business unit. I’m responsible for helping my team understand their respective businesses, contribute meaningfully to the team and their professional development (to name a few things). Then I help maintain and build out automated tests across three teams in two business units. In short, I’m a hands on manager with individual contributor responsibilities dealing with what feels like a split personality.
Back when I wrote Building an Awesome Home Office I was six months into remote work and although optimistic about my chances of returning to the office, determined to use all the hardware I had at my disposal to make my home work conditions better. As time went on I grew tired of my two 27″ ASUS monitors and began my search for THE Ultimate Curved widescreen monitor to power my MacBook Pro.
Specifically, I grew tired of two drawbacks of the dual 27″ ASUS setup:
The closer you work to the center of those two screens the more you stare at the bezel.
The further away you work from the center of those screens, the more uncomfortable the distance is between you and the screen.
It was time for an upgrade to THE Ultimate Curved monitor. Ultimate is a useful adjective when describing a piece of hardware, but it's also relative: ultimate for now. I fully expect curved monitors to drop in price and expand in functionality over time. Regardless, I started my search with a few important criteria:
USB-C downstream charging. I have a MacBook Pro 16″ and no longer wanted to use the provided charger.
38″ or above in size. My desk is 60″ in length and dual 27″ took up a ton of space (although they floated above the desk). I was fine with downsizing a little.
Removable stand. I don't want a monitor stand on the desk; I love a clean-looking desk with few things on it.
As few cables as possible: one, maybe two tops.
Research time. I created a Google Doc and started comparing different options:
I was so excited (and nervous) to be at an in-person conference I recorded video of the trip and made a short 2 minute overview of the conference and the AST’s board meeting afterward.
CAST2021 was meant to be a test of the community's readiness to attend in-person events. As such it was made to be small and safe.
Small meaning no more than 50 people. Safe meaning open only to fully vaccinated individuals (with proof checked upon entry) and designed to be partially open (hence the baseball park). We ended up with 40 excited and energized people learning and conferring over 2 days in Atlanta, GA. If that wasn't enough, I probably met 30 of the attendees between conversations, food, a tour of the stadium and, of course, games, games, games.
Tariq King’s tutorial on Testing AI and Machine Learning
The first day was Tariq King's tutorial. There was a ton of information to absorb as we walked through foundational concepts in AI and ML while also trying out a number of hands-on exercises using publicly available tools such as Teachable Machine, TensorFlow Playground and Google Cloud Vision. Tariq used GitBook to list out all of the tools in the tutorial, which you can play with here. (Although by now it looks like it has been rebranded from CAST2021.)
I'm still digesting what I learned, but one of my biggest questions about the AI tooling space in software testing was addressed by Tariq. The primary advantage I've seen advertised is around better locators, which ideally translates into less maintenance. While useful, this also seems rather bland and perhaps not very compelling for my context.
Day 2 – Conference Talks
Day 2 of CAST offered the full conference experience with 6 speakers. CAST is famous for its use of K-cards and facilitated discussions. I'm always amazed at how a few small questions can lead to tangents and additional clarity during an interactive discussion. I get way more food for thought from this exercise than I do online or in a Slack channel (most of the time).
James Thomas (a fellow AST board member and friend) did a great job writing up each of the talks at CAST. I highly recommend checking out each of his articles:
Per usual, James did a great job coming up with funny (pun-worthy) names. He also did some sketchnoting, which you can find in his articles and on Twitter using #CAST2021.
Ultimately I'm super happy to have gone to CAST. I met a lot of folks from the Atlanta testing community and I learned quite a bit from the talks and tutorials. Two weeks have passed since the conference and I still feel energized from the learning and bonding opportunities I was given. Hopefully I'll see those same people and a few more at CAST 2022!