Testers Don’t Just Test the Code

Kate Falanga chimed in recently with some thoughts around titles for testers, QA folks, and the like in Exploring Testing Titles in Agile. She lays out a few good reasons why the term Quality Assurance is a bad one, mainly that the role can’t really assure quality. I believe this. Heck, in the tagline on this site I refer to “(The myth of) Quality Assurance.”

She then outlines why she doesn’t like the title of Tester, feeling that it’s not broad enough to reflect all of the work that we do, and that it’s reactive:

It gives a very reactive rather than proactive connotation. If someone is a Tester then it is assumed that something needs to be tested. If you don’t have anything to be tested then why do you need a Tester? Why include them in planning or the project at all until you have something for them to do?

Quality Assurance / Testers / Job Title Adventures

The bad assumption here is that code is the only thing being tested, and that testing is the only thing done by a tester. Sure, once there’s code being written, a tester will probably spend a majority of her time exercising that code, but the tester participates in testing activities before that code exists. Time spent in requirements discussions helps the team write better requirements or user stories. Time spent learning about the business environment or the end user’s use cases helps the tester get into the right mindset for testing activities.

These activities aren’t testing in the sense of exercising new code that’s been written, but they’re testing activities. If testing lets us learn information about the software being tested, and we use that information to improve product quality, then all methods of learning could be considered test activities, could they not?

Do we continue the search for a better title than Tester, or do we work to help the broader software industry understand that Tester doesn’t just mean exercising code changes?

Image by Ruth Hartnop, used under Creative Commons licensing

The Gatekeeper Must Own the Quality

The notion of the software tester as quality gatekeeper is generally seen as outdated; Jason B. Ogayon recently shared We Are Not Gatekeepers, which does a great job of laying out the ideal scenario: the product owner makes the release decision and decides what level of quality is acceptable for the product.

In theory the team shares in the ownership of product quality; this isn’t a hard sell when things are going well. If the product is awesome, the team will generally own that and take pride in the quality. As Jason noted:

We are not the authority about software quality, because the whole team is responsible for baking the quality into the product at every project phase.

Things get stickier when the product isn’t great. If it has a lot of defects, or is missing functionality that was previously expected, sharing ownership of those shortcomings is often uncomfortable. It’s easy to blame the tester who raises the issues or reports on the poor quality.

But, much like the whole team is responsible for baking quality into the product, the whole team, not just the testers, takes responsibility for flaws in the quality recipe, and the individual who sets the quality bar assumes the gatekeeper role and its responsibility.

Do You Ship a Steaming Pile of Turd if the Customer Doesn’t Argue?

Back in ye olden days of waterfalls, our requirements-gathering efforts would lead to reams of specifications. We’d account for each pixel, specifying the screen position of various labels. Our data entry forms would have a tab order laid out explicitly. The text of every error message would be wordsmithed to (alleged) perfection.

We all thought we knew what we wanted up front. We’d write a big pile of documentation to show just what we needed built, and how. Developers couldn’t be trusted to figure these things out on their own…

And then we all got a dose of reality, as we realized that the quantity of documentation and specifications didn’t really correlate with system quality. Despite our best efforts to make it difficult to change (hi there, mister change control board), change still occurred. And it was expensive, since the process was built around what we thought was stability.

Enter Agile

But hey, here comes the agile world, where we work iteratively with our product owners and end users to create something that meets their needs, even as those needs change throughout the duration of development.

We don’t write as many detailed specifications, because we value working software over comprehensive documentation[1], and some of that time spent documenting is probably better spent writing code. But as we cut out the documentation, we leave more decisions for developers to make about what gets built and how.

This is generally a good thing. Study after study shows that agile methods usually produce better software, and they often do it faster.

But… and there’s always a but… what about the little quality details?

Things a User Won’t Ask For

Product owners, business customers, and end users are pretty good at figuring out the big obvious functionality for a piece of software. Their domain knowledge drives the big features, and when we get it right as a software team, the result makes their lives better.

I’ve found that the product owner or customer often won’t ask for the little things that can increase the quality of a piece of software. When was the last time you heard a customer explicitly ask for:

  • the web application to use consistent <title> tags, so the application and the particular screen are identified the same way on every page
  • action buttons (Submit, Cancel) to always appear in the same place… for example, Cancel on the left and Submit on the right
  • consistent enforcement of the maximum length for data entered into text boxes, including cases where a user might circumvent the limit (copy and paste, perhaps? see the sketch after this list)
  • the Enter key to perform a default action, such as running a search
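
As an illustration of the third item, a small front-end guard can keep pasted text from blowing past a field’s limit. This is only a minimal sketch in TypeScript against the standard browser DOM; the field selector and the 500-character limit are made up for the example, not taken from any particular product.

    // Minimal sketch: enforce a maximum length even when text arrives via
    // paste, drag-and-drop, or autofill rather than keystrokes.
    // The "#comment" selector and the 500-character limit are hypothetical.
    const MAX_COMMENT_LENGTH = 500;

    function enforceMaxLength(field: HTMLInputElement | HTMLTextAreaElement, max: number): void {
      const truncate = () => {
        if (field.value.length > max) {
          field.value = field.value.slice(0, max);
        }
      };
      // The "input" event fires for typing, paste, drag-and-drop, and autofill alike,
      // so one handler covers the ways a user might sneak past the limit.
      field.addEventListener("input", truncate);
    }

    const comment = document.querySelector<HTMLTextAreaElement>("#comment");
    if (comment) {
      enforceMaxLength(comment, MAX_COMMENT_LENGTH);
    }

A customer will rarely ask for this guard by name, but a tester who pastes a novel into the comment box finds out very quickly whether it exists.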

You get the idea… I could list fifty other similar attributes. These are behaviors and patterns that are hallmarks of quality software. Some make it more intuitive to use. Some help prevent the user from losing data. Each of these things makes the software just a bit better, and makes a user less likely to use the name of your program as an expletive.

Customer Acceptance is not the Be-All, End-All

We recently had a lively discussion within our development team about setting a quality standard for our products. A couple of our developers argued for the position that if the customer accepts the work, then it’s good enough, and anything we did above and beyond the minimum needed for customer acceptance was “gold plating” and excessive.

I disagree. Just because a customer is willing to call something “good enough” doesn’t necessarily mean it is. Much as we trust the customer or product owner to come to us and contribute their domain knowledge of the problem we’re solving, as software professionals we bring our own domain knowledge to the table… the domain of software quality. Our experience, both as individuals and as an industry, has a lot to offer in terms of usability, reliability, stability, and other similar factors. Customers are often only able to articulate that they want the software to be “easy to use” or “user friendly.” Our expertise translates that into work tasks that can be verified by testing.

We ought to strive for the highest quality possible, even when the customer doesn’t explicitly ask for it. Establishing a set of quality standards for your software is a worthy effort and can help everyone on the product team have a clearer understanding of what is meant by quality software.