When Quality Loses

Context: agile development with prioritization and release decisions being made by a product owner.

There’s a common misunderstanding of software quality (and of who is responsible for it) in our industry. That misunderstanding isn’t helped by the “Quality Assurance” job title. With modern development practices, it’s misleading to presume that software testers are responsible for the quality of the released software.

QA as a Quality Advocate

As software testers, we identify potential changes to the software. Sometimes it’s an obvious bug, where the software isn’t producing the response that’s clearly expected. Other times we find potential enhancements, such as new features or usability improvements. Either category provides an opportunity to improve the software. As a software testing and quality professional, I feel an obligation to point out that the software could always be better. When quality wins, users have a better experience, and data ends up in a quantifiably better state.

As a tester, I advocate for quality.

Testing != Release Decisions

While I advocate for quality in the software I test, the final decision on when to release (given whatever is known, or not known, about the quality of the software) belongs to someone else. In the agile world that’s usually the Product Owner; in other environments it might be a project manager, release manager, or similar role.

That person – the one making the release decision – is the one who ultimately decides what level of quality is acceptable for a software release. Testers can help inform, but testers can’t insist.

Sometimes, we’ll advocate and our voices will be heard and the quality threshold will be raised prior to release. Sometimes, our voices will fall on deaf ears, or be drowned out by other voices or pressures.

Parked Cars, San Bruno Gas Line Explosion, 2010

The Release Where Quality Loses

When the quality isn’t up to par but the software is released anyway, the likely and predictable repercussions include:

  • increased number of bugs-found-after-release
  • increased number of user support tickets
  • increased number of data or application hotfixes to resolve problems
  • PR or perception problems

Nobody in the development and product teams should be surprised by these results.  Sometimes there’s value in having the software released, even in a state of lessened quality, rather than holding it back to resolve more bugs.  The quality factor is one of many factors weighed in the release decision.  Sometimes quality loses.

As testers, we have to be okay with this, with the caveat that it’s not okay for the product team to blame the testers for the quality level of the product.  While many of us have the misnomer of “quality assurance” in our job titles, we can’t assure the quality when the release and budget decisions are out of our hands.

image via Thomas Hawk; used under Creative Commons licensing

Jonathan Coulton’s Still Alive as a Software Project Retrospective

Jonathan Coulton

Sure, it came from the video game Portal, but Still Alive seems like a hodgepodge of gems that could be used as we look back on a software project. The song opens by referencing a project that went well:

This was a triumph.
I’m making a note here:
HUGE SUCCESS.
It’s hard to overstate my satisfaction.

Sometimes things don’t go so well. Bugs happen:

But there’s no sense crying over every mistake.

I assume this couplet is about a burndown chart:

Now these points of data make a beautiful line.
And we’re out of beta, we’re releasing on time.

Cake is the promised reward in Portal, and hey, who hasn’t met a group of developers motivated by unhealthy baked goods?

Anyway, this cake is great.
It’s so delicious and moist.

And here’s the full song, along with the screen as it plays during the game’s final credits:

Image by Flickr user nickstone333, used under Creative Commons licensing

Coming Soon to a #Starwest Near You

Last year was my first time at Starwest, a conference for testers held in Anaheim.

Monorail Orange

As a famous Californaustrian once said, I’ll be back.

I’ll be in Anaheim from the 11th through the 16th, taking workshops from Michael Bolton, Rob Sabourin, and Bob Galen, followed by the main conference.

If you’ll be there, let’s connect!

Unrelated to Starwest, I’m leading a photowalk the evening of the 11th. Hit that link to find out more or register.

Great Expectations

As a software tester, I have great expectations.

  • I expect that, as I test a feature, the functionality will match my understanding of the user story and related discussions
  • I expect that the software will have an intuitive user interface
  • I expect that the software will be consistent with itself, with other similar applications we’ve developed, and with industry standards

Sometimes my expectations are met. Sometimes I find that the software behaves differently than I expected.

When behavior differs from expectations, have I found a bug? Perhaps. Or perhaps my expectations were wrong.

Conversation Starters

When software behavior differs from my expectation as a tester, more often than not it’s a conversation starter[1]. It might mean a conversation with the product owner, to see whether my expectations line up with his for the functionality of the system. Or it might mean a conversation with the developer, to figure out whether her expectations differed when she wrote the code that’s not behaving as I expected.

Some scenarios:

  • I expected the client list screen to be sorted by last name, because hey, that makes sense, right? But perhaps the product owner told the developer that they wanted it sorted by last activity date instead.
  • Perhaps the data field on the screen is allowing a different sort of input than was noted in the user story. Rather than assuming the developer is incompetent, I can ask whether the desired behavior changed after the (non-updated) user story was written.
  • Often, especially with non-standard use cases, I run into an error situation that’s handled in what seems like a strange way. Developers on my team have learned what’s usually coming when I start a conversation with “What would you expect to happen when…” and lay out the scenario. Frequently I’ve discovered a workflow or use case that hadn’t been foreseen, so my expectation was based on something the developer hadn’t even considered.

Sometimes my expectations are “correct.” Sometimes the desired behavior is different than my expectations.

Expectations lead to revelations which lead to conversations, which may or may not lead to work to change software behavior. News flash: I’m not always right.

Related reading: the Huh? Really? So? section of my notes from James Bach’s workshop at STARWEST last year.


  1. Yes, there will always be the “it’s blatantly broken” bugs, but these aren’t the ones that usually cause process or personality grief.

It Happened on June 72nd

File this one in the “who the hell would’ve ever thought this was the correct behavior?” category…

Our dev team is moving into the Bootstrap world, which means that we’re again learning how to manage date fields and date pickers.

Being the good tester that I am, I tried entering February 29, 2014 (not a leap year) as test data. And the date field automatically changed to March 1st, 2014. Hm.

February 30th led to the date field changing to March 2nd. Well, this is peculiar. It seems less than ideal that it would change the date without informing the user.

Let’s give it a really wacky date. What happens when I input 06/72/2014?

It changes it to 8/11/2014. Because, you know, August 11th is the proper way of representing June 72nd: June’s 30 days take you to day 30, July’s 31 take you to day 61, and the remaining 11 land on August 11th. Apparently we can count any arbitrary number of days from the beginning of a month.
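I can’t see the picker’s internals, but this matches plain JavaScript Date rollover semantics, which many date pickers inherit. A minimal sketch reproducing the behavior (not the picker’s actual code):

```typescript
// JavaScript's Date silently rolls day overflow into the following months.
// (Month arguments are 0-based, so 1 = February and 5 = June.)
const feb29 = new Date(2014, 1, 29); // 2014 isn't a leap year
console.log(feb29.toDateString());   // "Sat Mar 01 2014"

const jun72 = new Date(2014, 5, 72); // "June 72nd"
console.log(jun72.toDateString());   // "Mon Aug 11 2014"
```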

WTF?
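As an aside, if a picker wanted to reject impossible dates instead of silently rolling them over, a round-trip check is one common approach: build the Date, then verify the components survived. A sketch, with parseStrictDate as a hypothetical helper (not Bootstrap’s API):

```typescript
// Hypothetical strict parser: returns null instead of rolling over.
function parseStrictDate(year: number, month: number, day: number): Date | null {
  const d = new Date(year, month - 1, day); // JS months are 0-based
  const roundTrips =
    d.getFullYear() === year &&
    d.getMonth() === month - 1 &&
    d.getDate() === day;
  return roundTrips ? d : null; // null for 2/30/2014, 6/72/2014, etc.
}

parseStrictDate(2014, 6, 72); // null — no silent "August 11th"
parseStrictDate(2014, 6, 12); // a valid Date for June 12, 2014
```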

Management Shouldn’t Make Bug Count Jokes

A couple of years ago, a new senior manager began working at our organization (he was my boss’ boss). Shortly after his arrival, he came around to our group to introduce himself and meet the various members of the team.

He came into the room that houses my small dev team (5-6 people) on one side, with another similar team on the far side of the room. He met the other team, including their QA person. Then he met our team, and I was introduced as our QA guy. He then quipped:

So… do you guys keep score and see who has the least bugs?

Headdesk

Was it a joke? I’m not sure. He wasn’t laughing. And neither was I.

Yes, there is value in tracking some statistics, but what sort of impression does a tester get when the first interaction with a senior manager is that manager asking about bug stat competitions?

What are the odds that this person knows much about software testing? And if this person is going to evaluate software based on likely-bogus bug statistics, what other bad metrics is he going to use to make decisions?

Incidentally, said manager chose to leave the organization just a few weeks after being hired. Hopefully he found somewhere that’s a better fit.

Tip for management: testers are probably going to find your bug count jokes more scary than funny.