The Tester Must Understand the Stack

As testers, we test a piece of software against a variety of both written and unwritten functional and nonfunctional requirements.

One of the key tools in testing that software is understanding the environment in which it will run. We need to understand the pieces of the stack. We don’t need to be experts on every bit of it, but we need to understand what components are involved and how they work together. I’ve found that folks will often forget about parts of the stack, which can lead to red herrings or bad information when describing a situation or troubleshooting a problem.


For example, in my environment I’m usually testing web applications. For a given application, the following things are in place and need to be working correctly:

  • a web server (virtual) running on a host machine (physical)
  • a database server (virtual) running on a host machine (physical)
  • external (network) storage for the database
  • a working network between all of these servers and the client machine
  • DNS servers translating names into IP addresses to access the servers
  • a load balancer managing traffic between multiple web servers
  • IIS working correctly on the application server
  • zero or more APIs that are available and working
  • a web browser on the client machine that supports JavaScript

Even with this list, there’s a chance I’ve omitted something. Depending on the testing being performed or the issue being investigated, any one of these components could be the culprit. Don’t forget about the entire stack.
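To make the idea concrete, here’s a minimal smoke-check sketch, assuming a stack roughly like the one above. It’s Python using only the standard library, and every hostname, port, and URL in it is a hypothetical placeholder rather than anything from a real environment.

```python
# stack_smoke_check.py - coarse checks against a few layers of the stack.
# All hostnames, the port, and the URL below are hypothetical placeholders.
import socket
import urllib.request

WEB_HEALTH_URL = "https://app.example.com/health"   # hypothetical health page
HOSTS = ["app.example.com", "db.example.com"]        # hypothetical server names
DB_PORT = 1433                                       # SQL Server default; adjust as needed


def check_dns(host):
    """Resolve the name; a failure here points at DNS rather than the application."""
    try:
        return socket.gethostbyname(host)
    except socket.gaierror as err:
        return f"DNS lookup failed: {err}"


def check_tcp(host, port, timeout=3):
    """Open a raw TCP connection; covers the network path, firewalls, and the listener."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "port open"
    except OSError as err:
        return f"connection failed: {err}"


def check_http(url, timeout=5):
    """Request a page; exercises the load balancer, IIS, and the application together."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return f"HTTP {response.status}"
    except Exception as err:  # URLError, HTTPError, socket timeouts
        return f"request failed: {err}"


if __name__ == "__main__":
    for host in HOSTS:
        print(f"DNS  {host}: {check_dns(host)}")
    print(f"TCP  db.example.com:{DB_PORT}: {check_tcp('db.example.com', DB_PORT)}")
    print(f"HTTP {WEB_HEALTH_URL}: {check_http(WEB_HEALTH_URL)}")
```

A pass like this doesn’t replace real investigation, but it quickly suggests which layer to suspect (name resolution, the network path, or the web tier) before anyone starts blaming the application code.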


Effective Bug Reports with Pivotal Tracker

As a tester who was once an application developer, I’ve seen both sides of the bug report world: writing reports, and receiving them and having to act on the information. Here’s my take on what goes into a good bug report when using Pivotal Tracker as your system for work item tracking.

  • Story title: The title ought to give a quick, one-line indication of the issue. While many bugs require nuance and details, we need an easy way to reference this work item.
  • Story type: Bug. This one’s easy 🙂
  • Description: This is the meat of the bug. Let’s explore this a bit. Note that Pivotal Tracker supports Markdown, so you can add formatting if it helps clarify the bug report.

A bug work item will provide information for three audiences: the product owner who will prioritize the importance of this fix, the developers who will be tasked with doing the work to resolve the defect, and the tester who will eventually verify that the problem is resolved.

If this were a straightforward bug, we might not need much further explanation. But there’s probably some context to be shared.


First, clarify what happened and why this is a bug. What did you find, and why do you think it’s a bug? How did the behavior differ from your expectations? Is the behavior directly in conflict with the behavior outlined in the feature story you’re testing? That’s an easy one. Or maybe you’re finding a consistency problem; outline how the behavior is inconsistent. Perhaps it’s a usability issue; explain why it’s a problem.

Provide steps to reproduce the issue. What were the data conditions? What did you click on? What user role were you in? What job or process did you run? What browser were you using? What window size? Any of these things can be relevant for reproducing the issue. Use your best judgement as to how much detail to include.

It can also be helpful to include some severity analysis. How bad is the problem? How often will it occur, and what will be the implications if the bug isn’t fixed? While the product owner controls prioritization of the work items, we can provide information to help them make an informed decision. If the bug makes the program unusable or causes data loss, we ought to be clear about it (and perhaps flag it elsewhere, such as in Slack or a face-to-face conversation, so there’s no doubt a very severe defect needs to be addressed). If the bug only occurs in rare circumstances, we should note that as well. Not all bugs are showstoppers; accurate severity information helps the project team ensure we’re addressing the right work at the right time.

Include or attach supporting documentation as appropriate. For user interface issues, a screenshot is often helpful. If there’s a complex data situation involved, attaching a database test script might help.
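Pulling those pieces together, here’s a sketch of how a complete report might read, using the Markdown formatting Tracker supports. The application, steps, and severity details are invented purely for illustration.

```
Title: Saving a customer with an apostrophe in the last name returns a 500 error
Type: Bug

**What happened**
Saving a new customer whose last name contains an apostrophe (e.g. O'Brien)
returns a 500 error page and the record is not saved. The feature story for
adding customers says name fields should accept any printable characters, so
this conflicts directly with the expected behavior.

**Steps to reproduce**
1. Log in as a user in the customer administration role.
2. Open the screen for adding a customer.
3. Enter a last name containing an apostrophe and save.
Seen in both Chrome and Firefox at desktop window sizes; no special data setup required.

**Severity notes**
Data entry fails for a fairly common class of real names, and the user loses
what they typed. Not an outage or data-loss issue, but likely to generate
support tickets once real customer data starts flowing in.

**Attachments**
Screenshot of the error page; SQL script that creates a matching test customer.
```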

Finally, remember that a bug report is the first step in the conversation around a work item. We may gather additional information, learn more from the development team, or alter our perspective on a defect based on changing project conditions.


Pairing for Manual Configuration Tasks

During her talk last week at CAST, Natalie Bennett dropped a random tip that seems so straightforward, yet it’s not common practice anywhere I’ve worked.

The backstory: while we try to script and automate as many repetitive tasks as possible, there are any number of manual configuration tasks that are part of software development. Perhaps it’s the initial configuration of a server, or the creation of an account. The fact these tasks are done manually leaves them prone to human error, and sometimes these errors aren’t easily detected.

The other backstory: we know of pair programming, where two developers sit together to work on a bit of code, putting two brains into the design and hopefully catching errors as they occur rather than later in the process.

The solution: combine the two ideas, and when there’s manual configuration work to be done, pair for the work.

Duh. So simple, yet so smart.

Curiosity Killed the Cat, But It’s the Lifeblood of a Tester

My employer partners with a local university as part of an internship program; computer science students have an opportunity to participate in a series of six-month paid internships with local software development groups. As a result, we’re now about three weeks into working with our latest intern. We’ve had two previous testing interns.

It’s interesting to see how each of them begins testing. With each intern, I’ve started things off with an introduction to context-driven testing and the ideas of exploring the software and working with various heuristics to exercise the program.

What stands out is whether the new intern has an innate curiosity to explore.

Our current intern started at the beginning of the month. On her first day, as she began to explore one of our applications, she caught a bug that appeared when you altered a URL query string.

Curiosity: the lifeblood of a tester.

The Gatekeeper Must Own the Quality

The notion of the software tester as quality gatekeeper is generally seen as outdated; Jason B. Ogayon recently shared We Are Not Gatekeepers, which does a great job of laying out the ideal scenario, where the product owner is the one who makes the release decision and decides what level of quality is acceptable for the product.

In theory the team shares in the ownership of product quality; this isn’t a hard sell when things are going well. If the product is awesome, the team will generally own that and take pride in the quality, or as Jason noted:

We are not the authority about software quality, because the whole team is responsible for baking the quality into the product at every project phase.

Things get stickier when things aren’t great. If the product has a lot of defects, or is missing functionality that was previously expected, sharing the ownership for those shortcomings is often uncomfortable. It’s easy to blame the tester who raises the issues or reports on the poor quality.

But just as the whole team is responsible for baking quality into the product, the whole team, not just the testers, takes responsibility for flaws in the quality recipe, and the individual who sets the quality bar assumes the gatekeeper role and responsibility.

When Quality Loses

Context: agile development with prioritization and release decisions being made by a product owner.

There’s often a false understanding of software quality, and of who is responsible for it, in our industry. This falsehood isn’t helped by the “Quality Assurance” job title. With modern development practices, it’s misleading to presume that software testers are responsible for the quality of the released software.

QA as a Quality Advocate

As software testers, we identify potential changes to the software. Sometimes it might be an obvious bug, where the software is not producing the response that’s clearly expected. Other times we might find potential enhancements, such as new features or usability improvements. Either category provides an opportunity to improve the software. As a software testing and quality professional, I feel I have an obligation to suggest that the software could always be better. When quality wins, users will have a better experience, and data will be in a quantifiably better state.

As a tester, I advocate for quality.

Testing != Release Decisions

While I advocate for quality in the software I test, the ultimate decision on when to release (given whatever is known, or not known, about the quality of the software) belongs to someone else. In the agile world that’s usually the Product Owner; in other environments it might be a project manager, release manager, or another similar role.

That person – the one making the release decision – is the one who ultimately decides what level of quality is acceptable for a software release. Testers can help inform, but testers can’t insist.

Sometimes, we’ll advocate and our voices will be heard and the quality threshold will be raised prior to release. Sometimes, our voices will fall on deaf ears, or be drowned out by other voices or pressures.


The Release Where Quality Loses

When the quality isn’t up to par but the software is released anyway, the predictable repercussions include:

  • an increased number of bugs found after release
  • an increased number of user support tickets
  • an increased number of data or application hotfixes to resolve problems
  • PR or perception problems

Nobody on the development and product teams should be surprised by these results. Sometimes there’s value in releasing the software, even in a state of lessened quality, rather than holding it back to resolve more bugs. Quality is one of many factors weighed in the release decision. Sometimes quality loses.

As testers, we have to be okay with this, with the caveat that it’s not okay for the product team to blame the testers for the quality level of the product.  While many of us have the misnomer of “quality assurance” in our job titles, we can’t assure the quality when the release and budget decisions are out of our hands.
