Valuing Consistency and Embracing Change

We know, both from our own experience and from articles about UI and UX, that consistency within and across our applications can provide great value for the user; that which is familiar becomes easy.

We also know from the Agile Manifesto that we value responding to change over following a plan.

Are we at odds with ourselves? Can we both embrace change with an open mind while also valuing the benefits of consistency?

Yes. With intention.

Valuing Intentional Consistency

Consistency within and across software applications can provide a positive user experience once a user has become familiar with the conventions in use. Consistency can exist at multiple levels.

From specific to broad, we can look at consistency at several levels:

  • Within a single application, we expect consistency for similar functions.
  • Within an organization and line of business, consistency could be expected across applications used by the same user groups.
  • Within an organization as a whole, consistency could be expected across applications built by or for that organization.
  • Finally, we value consistency with other similar applications in the industry. If your text-editing program offers a toolbar button to make text bold, it should probably be a big bold letter B on the button (assuming an English-language interface).

Conscious choices to be consistent can help users become comfortable faster as they use your program for the first time, or as they gain proficiency and apply previously learned skills and behaviors from other areas of their work.

That said… if we always do things the same way we’ve always done them, we’ll never change, right?

Embracing Intentional Change

We want positive change. If we never changed as software development teams, we’d all still be writing command-line applications in COBOL, right? Any new interface or functional paradigm is going to be inconsistent with the existing pattern simply by virtue of being new.

Much like we embrace intentional consistency, we can embrace intentional change. It’s okay to use a new design paradigm if it provides enough value to outweigh the inconsistency it introduces.

Change… with a purpose… is good.

Do Not Confuse Sloppiness for Change

Where we get into trouble, and create poor user experiences, is when the development team gets sloppy and introduces change without a purpose.

Perhaps a developer is new to an application that has handled validation errors by lighting the field up in red and displaying an error summary at the bottom of the page, but he instead makes field labels bold and introduces an error toast in the upper right.
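One way to guard against this kind of drift is to capture the existing convention in shared code. Here’s a minimal TypeScript sketch of that idea; the element IDs, CSS class name, and helper function are hypothetical, not taken from any real application:

```typescript
// Hypothetical shared helper that encodes the application's existing
// validation convention: light failing fields up in red and show an
// error summary at the bottom of the page.
interface ValidationError {
  fieldId: string; // id of the offending input element
  message: string; // text for the summary list
}

function showValidationErrors(errors: ValidationError[]): void {
  for (const error of errors) {
    // Convention 1: highlight the field via a shared CSS class.
    document.getElementById(error.fieldId)?.classList.add("field-error");
  }
  // Convention 2: render an error summary at the bottom of the page.
  const summary = document.getElementById("error-summary");
  if (summary) {
    summary.innerHTML = errors
      .map((e) => `<li>${e.message}</li>`)
      .join("");
  }
}
```

With a helper like this in place, a developer new to the codebase is nudged toward the established red-field-plus-summary pattern instead of inventing a competing one.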

If a designer is used to dates in MM/DD/YYYY format, she might not know that for this line of business, dates are always displayed as YYYY-MM-DD, and would create misleading design comps.
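The same defense applies here: route every user-facing date through one shared formatter. A minimal sketch, assuming this line of business standardizes on YYYY-MM-DD (the function name is hypothetical):

```typescript
// Hypothetical shared formatter: every date shown to users passes
// through here, so the YYYY-MM-DD convention can't be applied unevenly.
function formatDisplayDate(date: Date): string {
  const year = date.getFullYear();
  const month = String(date.getMonth() + 1).padStart(2, "0"); // months are 0-based
  const day = String(date.getDate()).padStart(2, "0");
  return `${year}-${month}-${day}`;
}

console.log(formatDisplayDate(new Date(2016, 2, 9))); // -> "2016-03-09"
```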

These are usually innocent mistakes, made not with malice but without knowledge of the existing consistency patterns, or without asking questions before diving into a bad pattern.

Build software with intention, and your change vs. consistency issues should work out well.

The Tester Must Understand the Stack

As testers, we test a piece of software against a variety of both written and unwritten functional and nonfunctional requirements.

One of the key tools in testing that software is an understanding of the environment in which that software will run. We need to understand the pieces of the stack. We don’t need to be experts on every bit of it, but we need to understand what components are involved, and how they work together. I’ve found that many folks will forget about parts of the stack, which can lead to red herrings or bad information when describing a situation or troubleshooting a problem.

Full Stack Testing, by Flickr user prettyinprint

For example, in my environment I’m usually testing web applications. For a given application, the following things are in place and need to be working correctly:

  • a web server (virtual) running on a host machine (physical)
  • a database server (virtual) running on a host machine (physical)
  • external (network) storage for the database
  • a working network between all of these servers and the client machine
  • DNS servers translating names into IP addresses to access the servers
  • a load balancer managing traffic between multiple web servers
  • IIS working correctly on the application server
  • zero or more APIs that are available and working
  • a web browser on the client machine that supports JavaScript

Even with this list, there’s a chance I’ve omitted something. Depending on the testing being performed or the issue being investigated, there’s a chance that any one of these components could be the culprit. Don’t forget about the entire stack.
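When a web application misbehaves, a quick scripted pass over the stack can rule layers in or out before anyone starts guessing. Below is a rough smoke-test sketch in TypeScript for Node 18+; the hostnames, health endpoint, and SQL Server port are placeholders, not real infrastructure:

```typescript
// A minimal stack smoke test: touch several layers of the stack
// before blaming the application code. All hosts/ports are placeholders.
import { promises as dns } from "node:dns";
import { Socket } from "node:net";

// Check whether a TCP port answers within a timeout.
async function checkTcp(host: string, port: number): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = new Socket();
    socket.setTimeout(3000);
    socket.once("connect", () => { socket.destroy(); resolve(true); });
    socket.once("timeout", () => { socket.destroy(); resolve(false); });
    socket.once("error", () => resolve(false));
    socket.connect(port, host);
  });
}

async function smokeTest(): Promise<void> {
  // DNS: can we resolve the web server's name at all?
  const addresses = await dns.resolve4("app.example.com");
  console.log("DNS resolved to:", addresses);

  // Network + load balancer + IIS: does the site answer over HTTP?
  const response = await fetch("https://app.example.com/health");
  console.log("Web server responded:", response.status);

  // Database server: is the SQL port even reachable from here?
  console.log("DB port open:", await checkTcp("db.example.com", 1433));
}

smokeTest().catch((err) => console.error("Stack check failed:", err));
```

A script like this won’t replace a real investigation, but it quickly separates “the network, DNS, or load balancer is broken” from “the application code is broken.”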

Image by prettyinprint, used under Creative Commons licensing

Testers Don’t Just Test the Code

Kate Falanga chimed in recently with some thoughts around titles for testers, QA folks, and the like in Exploring Testing Titles in Agile. She lays out a few good reasons why the term Quality Assurance is a bad one, mainly that the role can’t really assure quality. I believe this. Heck, in the tagline on this site I refer to “(The myth of) Quality Assurance.”

She then outlines why she doesn’t like the title of Tester, feeling that it’s not broad enough to reflect all of the work that we do, and that it’s reactive:

It gives a very reactive rather than proactive connotation. If someone is a Tester then it is assumed that something needs to be tested. If you don’t have anything to be tested then why do you need a Tester? Why include them in planning or the project at all until you have something for them to do?

Quality Assurance / Testers / Job Title Adventures

The bad assumption here is that code is the only thing being tested, and that testing is the only thing done by a tester. Sure, once there’s code being written, a tester will probably spend the majority of her time exercising that code, but the tester participates in testing activities before the code exists. Time spent in requirements discussions helps the team write better requirements or user stories. Time spent learning about the business environment or the end user’s use cases helps the tester get into the right mindset for testing activities.

These activities aren’t testing in the sense of exercising new code that’s been written, but they’re testing activities. If testing allows us to learn information about the software being tested, and we use that information to improve product quality, then all methods of learning could be considered test activities, could they not?

Do we continue the search for a better title than Tester, or do we work to help the broader software industry understand that Tester doesn’t just mean exercising code changes?

Image by Ruth Hartnop, used under Creative Commons licensing

Being Anal About Developer Cover Letters and Resumes

Q: Why are you such a nitpicky jerk about typos and grammar errors on the cover letters and resumes of developer candidates?

A: Because they’ve had all the resources in the world to make them perfect, and they’re applying for a job where having even a single character wrong can mean a significant difference in the correctness of their work.

The Gatekeeper Must Own the Quality

The notion of the software tester as quality gatekeeper is generally seen as outdated; Jason B. Ogayon recently shared We Are Not Gatekeepers, which does a great job of laying out the ideal scenario: the product owner is the one who makes the release decision and decides what level of quality is acceptable for the product.

In theory the team shares in the ownership of product quality; this isn’t a hard sell when things are going well. If the product is awesome, the team will generally own that and take pride in the quality, or as Jason noted:

We are not the authority about software quality, because the whole team is responsible for baking the quality into the product at every project phase.

Things get stickier when the product isn’t great. If the product has a lot of defects, or is missing functionality that was previously expected, sharing the ownership of those shortcomings is often uncomfortable. It’s easy to blame the tester who raises the issues or reports on the poor quality.

But, just as the whole team is responsible for baking the quality into the product, the whole team, not just the testers, takes responsibility for flaws in the quality recipe, and the individual who sets the quality bar assumes that gatekeeper role and its responsibility.

When Quality Loses

Context: agile development with prioritization and release decisions being made by a product owner.

There’s often a false understanding of software quality (and the responsibility for software quality) in our industry. This falsehood isn’t helped by the “Quality Assurance” job title. With modern development practices, it’s misleading to presume that software testers are responsible for the quality of the released software.

QA as a Quality Advocate

As software testers, we identify potential changes to the software. Sometimes it’s an obvious bug, where the software is not producing the response that’s clearly expected. Other times we find potential enhancements such as new features or usability improvements. Either category provides opportunities for improving the software. As a software testing and quality professional, I feel that I have an obligation to suggest that the software could always be better. When quality wins, users will have a better experience, and data will be in a quantifiably better state.

As a tester, I advocate for quality.

Testing != Release Decisions

While I advocate for quality in the software I test, the ultimate decision on when to release (given whatever is known, or not known, about the quality of the software) belongs to someone else. In the agile world that’s usually the Product Owner; in other environments it might be a project manager, release manager, or other similar role.

That person – the one making the release decision – is the one who ultimately decides what level of quality is acceptable for a software release. Testers can help inform, but testers can’t insist.

Sometimes, we’ll advocate and our voices will be heard and the quality threshold will be raised prior to release. Sometimes, our voices will fall on deaf ears, or be drowned out by other voices or pressures.

Parked Cars, San Bruno Gas Line Explosion, 2010

The Release Where Quality Loses

When the quality isn’t up to par but the software is released anyway, the predictable repercussions may include:

  • increased number of bugs-found-after-release
  • increased number of user support tickets
  • increased number of data or application hotfixes to resolve problems
  • PR or perception problems

Nobody on the development and product teams should be surprised by these results. Sometimes there’s value in releasing the software, even in a state of lessened quality, rather than holding it back to resolve more bugs. Quality is one of many factors weighed in the release decision. Sometimes quality loses.

As testers, we have to be okay with this, with the caveat that it’s not okay for the product team to blame the testers for the quality level of the product. While many of us have the misnomer of “quality assurance” in our job titles, we can’t assure the quality when the release and budget decisions are out of our hands.

image via Thomas Hawk; used under Creative Commons licensing