During her talk last week at CAST, Natalie Bennett dropped a random tip that seems so straightforward, yet it’s not common practice anywhere I’ve worked.
The backstory: while we try to script and automate as many repetitive tasks as possible, there are any number of manual configuration tasks that are part of software development. Perhaps it’s the initial configuration of a server, or the creation of an account. Because these tasks are done manually, they’re prone to human error, and sometimes those errors aren’t easily detected.
The other backstory: we know of pair programming, where two developers sit together to work on a bit of code, putting two brains into the design and hopefully catching errors as they occur rather than later in the process.
The solution: combine the two ideas, and when there’s manual configuration work to be done, pair for the work.
Duh. So simple, yet so smart.
Q: Why are you such a nitpicky jerk about typos and grammar errors on the cover letters and resumes of developer candidates?
A: Because they’ve had all the resources in the world to make them perfect, and they’re applying for a job where having even a single character wrong can mean a significant difference in the correctness of their work.
My employer partners with a local university as part of an internship program; computer science students have an opportunity to participate in a series of six-month paid internships with local software development groups. As a result, we’re now about three weeks into working with our latest intern. We’ve had two previous testing interns.
It’s interesting to see how they begin testing. With each of them, I’ve started with an introduction to context-driven testing and the ideas of exploring the software and working with various heuristics to exercise the program.
It’s interesting to note if the new intern has an innate curiosity to explore.
Our current intern started at the beginning of the month. On her first day, as she began to explore one of our applications, she caught a bug that appeared when you altered a URL query string.
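Query-string tampering is a classic exploratory heuristic, and it’s easy to do systematically. As a minimal sketch (the URL and probe values here are hypothetical, not the actual application or bug from the story), here’s one way a tester might generate variants of a URL with each parameter replaced by a boundary-testing value:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def mutate_query(url, probes=("", "0", "-1", "'", "<script>")):
    """Yield variants of `url` with each query parameter, in turn,
    replaced by a boundary-testing probe value."""
    parts = urlsplit(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    for i, (key, _) in enumerate(params):
        for probe in probes:
            mutated = list(params)
            mutated[i] = (key, probe)  # swap one value, keep the rest intact
            yield urlunsplit(parts._replace(query=urlencode(mutated)))

# Hypothetical example: probe a report page with two parameters
for variant in mutate_query("https://example.test/report?id=42&page=1"):
    print(variant)  # each variant is a candidate exploratory test input
```

Each generated URL is a question to ask the application: what happens when `id` is blank, negative, or contains markup? The point isn’t automation so much as making curiosity repeatable.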
Curiosity: the lifeblood of a tester.
I’m looking forward to attending CAST next month in Vancouver, BC, which will be my first time at this particular conference. I’ve heard lots of good things from past attendees, and I look forward to what seems like a very practical lineup of folks talking about real-world testing issues in an interactive environment.
I’ll be arriving on Sunday and attending James Bach and Fiona Charles’ tutorials on Monday.
I may try to organize some sort of Sunday evening gathering for craft beer lovers – keep an eye on Twitter.
The notion of the software tester as quality gatekeeper is generally seen as outdated; Jason B. Ogayon recently shared We Are Not Gatekeepers, which does a great job of laying out the ideal scenario where the product owner is the one who makes the release decision and decides what level of quality is acceptable for the product.
In theory the team shares in the ownership of product quality; this isn’t a hard sell when things are going well. If the product is awesome, the team will generally own that and take pride in the quality, or as Jason noted:
We are not the authority about software quality, because the whole team is responsible for baking the quality into the product at every project phase.
Things get stickier when things aren’t great. If the product has a lot of defects, or is missing functionality that was previously expected, sharing the ownership for those shortcomings is often uncomfortable. It’s easy to blame the tester who raises the issues or reports on the poor quality.
But just as the whole team is responsible for baking quality into the product, the whole team, not just the testers, must take responsibility for flaws in the quality recipe; whoever sets the quality bar assumes the gatekeeper role and its responsibility.
A reminder that a passing check doesn’t mean things are necessarily correct:
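One way this happens is when the assertion is weaker than the behavior it’s meant to verify. Here’s a contrived sketch (the function and its bug are invented for illustration): the check goes green, yet the result is wrong.

```python
def apply_discount(price, percent):
    # Bug: the percentage is divided by 100 twice, so a 10% discount
    # barely changes the price (99.9 instead of 90.0 on a 100.0 price)
    return price * (1 - percent / 100 / 100)

def test_apply_discount():
    # Weak check: only asserts the result is smaller than the input,
    # so it passes even though the discounted price is incorrect
    assert apply_discount(100.0, 10) < 100.0

test_apply_discount()
print("check passed")  # green result, wrong behavior
```

The check passes, and a dashboard would show green. Only a human asking "is 99.9 actually the right answer here?" catches the problem, which is the difference between checking and testing.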
I’ll be returning to Anaheim next month to attend STARWEST. I’ve been a couple of times previously and have generally found it to be a useful, fun experience.
In looking at this year’s program, several topics/speakers stand out. I’ve found great value in the half- and full-day tutorials. One thing I like about STARWEST is that during the “regular” conference talks there are five or six tracks being offered, which generally means there will be something of value in each time slot.
Beyond the program (and this really applies to all conferences, not just this one) is the other key value in these events: the informal networking and conversations. Whether it’s over lunch at the event, in the hallway between sessions, or in the evening at a bar, I’ve made some great connections with folks from around the country who share similar (or dissimilar) testing experiences.
Looking towards Paradise Pier at Disney’s California Adventure
Also: it doesn’t hurt that I’m a Disneyland fan.
Will you be at STARWEST? Let’s connect.