The Tester Must Understand the Stack

As testers, we test a piece of software against a variety of functional and nonfunctional requirements, both written and unwritten.

One of the key tools in testing that software is an understanding of the environment in which it will run. We need to understand the pieces of the stack. We don’t need to be experts on every bit of it, but we do need to understand what components are involved and how they work together. I’ve found that many folks forget about parts of the stack, which can lead to red herrings or bad information when describing a situation or troubleshooting a problem.


For example, in my environment I’m usually testing web applications. For a given application, the following things are in place and need to be working correctly:

  • a web server (virtual) running on a host machine (physical)
  • a database server (virtual) running on a host machine (physical)
  • external (network) storage for the database
  • a working network between all of these servers and the client machine
  • DNS servers translating names into IP addresses to access the servers
  • a load balancer managing traffic between multiple web servers
  • IIS working correctly on the application server
  • zero or more APIs that are available and working
  • a web browser on the client machine that supports JavaScript

Even with this list, there’s a chance I’ve omitted something. Depending on the testing being performed or the issue being investigated, there’s a chance that any one of these components could be the culprit. Don’t forget about the entire stack.
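
To make that concrete, here’s a minimal smoke-check sketch in Python. The hostnames, port, and health-check URL are hypothetical placeholders rather than a real configuration; the point is that a few cheap probes (DNS resolution, a TCP connection, an HTTP request) can quickly rule individual layers of the stack in or out before you start chasing red herrings.

```python
import socket
import urllib.request

# Hypothetical endpoints: substitute your own stack's names and ports.
WEB_URL = "https://app.example.com/health"
DB_HOST = "db.example.com"
DB_PORT = 1433  # e.g., SQL Server's default port


def check_dns(hostname):
    """Can we resolve the name? Rules DNS in or out as a suspect."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None


def check_tcp(host, port, timeout=3):
    """Can we open a socket? Verifies the network path and the listener."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def check_http(url, timeout=5):
    """Does the web tier answer? Exercises DNS, the load balancer, and IIS."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status
    except OSError:  # includes URLError and HTTPError
        return None


if __name__ == "__main__":
    print("DNS for DB host:", check_dns(DB_HOST))
    print("TCP to DB port:", check_tcp(DB_HOST, DB_PORT))
    print("HTTP health check:", check_http(WEB_URL))
```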

Image by prettyinprint, used under Creative Commons licensing

Testers Don’t Just Test the Code

Kate Falanga chimed in recently with some thoughts around titles for testers, QA folks, and the like in Exploring Testing Titles in Agile. She lays out a few good reasons why the term Quality Assurance is a bad one, mainly that the role can’t really assure quality. I believe this. Heck, in the tagline on this site I refer to “(The myth of) Quality Assurance.”

She then outlines why she doesn’t like the title of Tester, feeling that it’s not broad enough to reflect all of the work that we do, and that it’s reactive:

It gives a very reactive rather than proactive connotation. If someone is a Tester then it is assumed that something needs to be tested. If you don’t have anything to be tested then why do you need a Tester? Why include them in planning or the project at all until you have something for them to do?

Quality Assurance / Testers / Job Title Adventures

The bad assumption here is that code is the only thing being tested, and that testing is the only thing done by a tester. Sure, once code is being written, a tester will probably spend a majority of her time exercising that code, but the tester also participates in testing activities before the code exists. Time spent in requirements discussions helps the team write better requirements or user stories. Time spent learning about the business environment or the end user’s use cases helps the tester get into the right mindset for testing activities.

These activities aren’t testing in the sense of testing new code that’s been written, but they’re testing activities. If testing allows us to learn information about the software being tested, and we use that information to improve product quality, all methods of learning could be considered test activities, could they not?

Do we continue the search for a better title than Tester, or do we work to help the broader software industry understand that Tester doesn’t just mean exercising code changes?

Image by Ruth Hartnop, used under Creative Commons licensing

When Quality Loses

Context: agile development with prioritization and release decisions being made by a product owner.

There’s often a false understanding of software quality (and the responsibility for software quality) in our industry. This falsehood isn’t helped by the “Quality Assurance” job title. With modern development practices, it’s misleading to presume that software testers are responsible for the quality of the released software.

QA as a Quality Advocate

As software testers, we identify potential changes to the software. Sometimes it might be an obvious bug, where the software is not producing the response that’s clearly expected. Other times we might find potential enhancements, such as new features or usability improvements. Either of these categories provides opportunities for improving the software. As a software testing and quality professional, I feel that I have an obligation to suggest that the software could always be better. When quality wins, users will have a better experience, and data will be in a quantifiably better state.

As a tester, I advocate for quality.

Testing != Release Decisions

While I advocate for quality in the software I test, the ultimate decision on when to release (given whatever is known – or not known – about the quality of the software) belongs to someone else. In the agile world that’s usually the Product Owner; in other environments it might be a project manager, release manager, or another similar role.

That person – the one making the release decision – is the one who ultimately decides what level of quality is acceptable for a software release. Testers can help inform, but testers can’t insist.

Sometimes, we’ll advocate and our voices will be heard and the quality threshold will be raised prior to release. Sometimes, our voices will fall on deaf ears, or be drowned out by other voices or pressures.

Parked Cars, San Bruno Gas Line Explosion, 2010

The Release Where Quality Loses

When the quality isn’t up to par but the software is released anyway, the predictable repercussions include:

  • increased number of bugs found after release
  • increased number of user support tickets
  • increased number of data or application hotfixes to resolve problems
  • PR or perception problems

Nobody on the development and product teams should be surprised by these results. Sometimes there’s value in releasing the software, even in a state of lessened quality, rather than holding it back to resolve more bugs. Quality is one of many factors weighed in the release decision. Sometimes quality loses.

As testers, we have to be okay with this, with the caveat that it’s not okay for the product team to blame the testers for the quality level of the product. While many of us have the misnomer of “quality assurance” in our job titles, we can’t assure quality when the release and budget decisions are out of our hands.

Image via Thomas Hawk, used under Creative Commons licensing

Management Shouldn’t Make Bug Count Jokes

A couple of years ago, a new senior manager began working at our organization (he was my boss’ boss). Shortly after his arrival, he came around to our group to introduce himself and meet the various members of the team.

He came into the room that houses my small dev team (5-6 people) on one side, with another similar team on the far side of the room. He met the other team, including their QA person. Then he met our team, and I was introduced as our QA guy. He then quipped:

So… do you guys keep score and see who has the least bugs?

Headdesk

Was it a joke? I’m not sure. He wasn’t laughing. And neither was I.

Yes, there is value in tracking some statistics, but what sort of impression does a tester get when the first interaction with a senior manager is that manager asking about bug stat competitions?

What are the odds that this person knows much about software testing? And if this person is going to evaluate software based on likely-bogus bug statistics, what other bad metrics is he going to use to make decisions?

Incidentally, said manager chose to leave the organization just a few weeks after being hired. Hopefully he found somewhere that’s a better fit.

Tip for management: testers are probably going to find your bug count jokes more scary than funny.

Testing Like an Airline Pilot with James Bach at STARWest

Yesterday James Bach presented one of the day-long tutorials preceding the STARWest conference. I tweeted a bunch during his talk, but I also wanted to revisit my notes and identify some key points and thoughts. And because I’m all generous and shit, I’m sharing that here 🙂

Critical Thinking for Software Testers

The talk was about critical thinking as a tester. The natural opening to the talk was identifying what is meant by testing and being a tester.

What is Testing (vs. Checking)?

Checking is the process of noting whether something worked exactly as expected, following a checklist of repeatable procedures. This is where most so-called “automated testing” falls… it’s not really testing… it’s checking that the software behaved as expected based on a pre-written list of specific expectations.

Testing is the broader process and cannot be simply reduced to a set of instructions. Testing requires a human brain to analyze things and make decisions on the fly, reacting and adapting. In this respect, testing is much like being a doctor, lawyer, or airline pilot… while there are parts of these professions that require following a checklist for the expected scenario, it’s also required that a professional have the ability to think on their feet, adapting to the unexpected.
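
As a rough illustration of the difference (my example, not Bach’s): a “check” might look like the Python sketch below, with a hypothetical add() function standing in for real code under test. The check can only confirm the one expectation that was scripted in advance; noticing anything beyond it, like a confusing API or an unanticipated risk, takes a human doing actual testing.

```python
# A "check": a scripted, repeatable pass/fail expectation.
# add() is a hypothetical stand-in for real code under test.


def add(a, b):
    return a + b


def test_add_two_positive_integers():
    # This notices only what it was told to look for; it can't
    # judge anything outside this single pre-written expectation.
    assert add(2, 3) == 5


if __name__ == "__main__":
    test_add_two_positive_integers()
    print("check passed")
```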

Testers are agents for the user

Basic testing is straightforward and uses reflex thinking – faster and looser. Excellent testing requires slower, surer thinking. Excellent testing involves a difficult social and psychological process in addition to the technical aspect. It requires mixing technical observation with context in order to understand what’s important.

Modeling

A significant portion of the afternoon discussed modeling and observation; models are fundamental to critical thinking. The ones and zeros of a software program do not exist on their own. The application and elements within the application usually model other things.

What you see is not all there is.

Models link observation and inference… and a good tester must be able to distinguish the two.

Test Cases and Observations

If someone asks how many test cases are needed to test a program or function, well, that’s a really crappy question that will lead to a meaningless answer. A better question would be “how will you test this?” The number of test cases doesn’t mean much: just as we don’t measure program complexity or productivity by lines of code, we shouldn’t care much about the number of test cases.

Documents and statements are stories, not reality

Huh? Really? So?

Three easy questions when testing.

Huh? – gets at the meaning… implies that you may not understand what’s really going on

Really? – gets at the facts… perhaps what you understand may not be true

So? – gets at the risks… the truth may not matter, or it may matter more than you think

The overall theme of the day fit in line with Bach’s philosophies on context-driven and exploratory testing and analysis. Check out Lessons Learned in Software Testing: A Context-Driven Approach for more.