A Range of Errors

Ah… data values within a range. So many possibilities for fun. Ran into one of those today while reviewing a proposed work item. As presented:

Add "Length of Incarceration" range

Select from a list
* Less than 1 year
* 1-3 years
* 3-5 years
* 5-10 years
* 10-20 years
* More than 20 years

Spot the problem? What happens if the length of incarceration was exactly 1, 3, 5, or 10 years? Each of those values appears in two of the ranges, so which one should be chosen?

Presumably we’re grouping this data into ranges for reporting purposes.

Consider an offender who spent exactly 5 years incarcerated: Elizabeth always chooses 3-5 years, Ross always chooses 5-10 years, and Eduardo chooses one or the other depending on the day. At what point does that inconsistency become significant enough to skew the reports?

In any system with user-entered data, garbage in, garbage out (GIGO) can apply, but let’s help the users by designing a system that makes it harder to input garbage.
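One common fix is to make the buckets half-open: each range includes its lower bound and excludes its upper bound, so every value lands in exactly one bucket. Here’s a minimal sketch in Python (the labels are mine, for illustration, not wording from the work item):

```python
def incarceration_bucket(years: float) -> str:
    """Map a length of incarceration to exactly one reporting bucket.

    The buckets are half-open intervals [low, high), so boundary values
    such as 1, 3, 5, and 10 years each belong to exactly one range.
    """
    if years < 1:
        return "Less than 1 year"
    elif years < 3:
        return "1 to less than 3 years"
    elif years < 5:
        return "3 to less than 5 years"
    elif years < 10:
        return "5 to less than 10 years"
    elif years < 20:
        return "10 to less than 20 years"
    else:
        return "20 years or more"

# The 5-year case is no longer ambiguous:
assert incarceration_bucket(5) == "5 to less than 10 years"
```

With boundaries defined this way, Elizabeth, Ross, and Eduardo all put the 5-year offender in the same bucket, and the reports stay comparable.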

Software Behaviors: When I Log In, the Browser Spawns a New Window

The longer I work as a tester, the more I realize that to provide the most value for my team, I need not only to report on what’s happening, but to report it in an intelligent fashion, synthesizing what we’ve seen in the software with supporting information to provide context.

Last week a fellow tester reported an interesting bug in a web application’s login form: after he entered his username and then his password, when he hit Enter to submit the form, the application opened in an entirely new browser window.

This wasn’t behavior built into the system design… when operating normally, the software should have presented him with the application’s landing page after login.  I wasn’t able to immediately reproduce the behavior, and none of the system’s users had reported the issue.  Yet this tester insisted that it happened nearly every time he accessed the application.

You Know This One

Even without knowing our application… you probably have enough information to solve this puzzle.

Once I figured out what was going on, I decided to see if I could lead folks to the same conclusion.

I asked the tester: “Is the last letter of your password a capital letter?”  He said no… but it was a symbol.  “A symbol accessed via the Shift key on your keyboard?”  Yep.

What happens when you click a link in Chrome while holding down the Shift key?

You get a new window.

So when you’re still holding down Shift from the last character of your password, then hit Enter, which activates the form submission… boom.  New window.
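Once you know the mechanism, the bug reproduces on demand, and you can even script it. Here’s a rough reproduction sketch using Selenium; the URL and field names are hypothetical, and the final shifted character of the password is typed while Shift is explicitly held down:

```python
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical login page

driver.find_element(By.NAME, "username").send_keys("tester")
driver.find_element(By.NAME, "password").send_keys("hunter")

# Type the final shifted symbol ("@"), then press Enter before releasing
# Shift, mimicking a user who submits the form with Shift still down.
(
    ActionChains(driver)
    .key_down(Keys.SHIFT)
    .send_keys("2")
    .send_keys(Keys.ENTER)
    .key_up(Keys.SHIFT)
    .perform()
)

# If the bug reproduces, Chrome opens the post-login page in a new window.
print(len(driver.window_handles))  # expect 2 when the behavior occurs
```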

Testing is Information with Context

As testers, we provide information.  In this case, we can provide more than “sometimes this thing happens.”  We can report, “This thing will happen every time, given this set of circumstances.”  That’s useful.

Valuing Consistency and Embracing Change

We know both from our own experiences and articles about UI and UX that consistency within and across our applications can provide great value for the user; that which is familiar becomes easy.

We also know from the Agile Manifesto that we value responding to change over following a plan.

Are we at odds with ourselves? Can we both embrace change with an open mind while also valuing the benefits of consistency?

Yes. With intention.

Valuing Intentional Consistency

Consistency within and across software applications can provide a positive user experience once a user has become familiar with the conventions in use.

From specific to broad, we can look at consistency at several levels:

  • Within a single application, we expect consistency for similar functions.
  • Within an organization and line of business, consistency could be expected across applications used by the same user groups.
  • Within an organization as a whole, consistency could be expected across applications built by or for that organization.
  • Finally, we value consistency with other similar applications in the industry. If your text-editing program offers a toolbar button to make text bold, it probably should be a big bold letter B on the button (assuming an English-language interface).

Conscious choices to be consistent can help users become comfortable faster as they use your program for the first time, or as they gain proficiency and apply previously learned skills and behaviors from other areas of their work.

That said… if we always do things the same way we’ve always done them, we’ll never change, right?

Embracing Intentional Change

We want positive change. If we as software development teams never changed, we’d all still be building command-line applications in COBOL, right? Any new interface or functional paradigm will be inconsistent with the existing pattern simply by virtue of being new.

Much like we embrace intentional consistency, we can embrace intentional change. It’s okay to use a new design paradigm if its benefits outweigh the cost of breaking consistency.

Change… with a purpose… is good.

Do Not Confuse Sloppiness for Change

Where we get in trouble, and create poor user experiences, is when the development team gets sloppy and introduces change without a purpose.

Perhaps a developer is new to an application that has always handled validation errors by lighting the field up in red and displaying an error summary at the bottom of the page, but he instead makes field labels bold and introduces an error toast in the upper right.

If a designer is used to dates in MM/DD/YYYY format, she might not know that for this line of business, dates are always displayed as YYYY-MM-DD, and would create misleading design comps.

These are usually innocent mistakes, made not with malice but without knowledge of the existing consistency patterns, or without asking questions before diving into a new pattern.
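One defense is to make the convention executable rather than tribal knowledge. As a sketch, imagine the line-of-business date rule captured in a single shared helper (the names here are hypothetical, not from any particular codebase):

```python
from datetime import date

# The line-of-business convention: dates are always displayed as YYYY-MM-DD.
DISPLAY_DATE_FORMAT = "%Y-%m-%d"

def format_display_date(value: date) -> str:
    """Format a date for display using the one agreed-upon convention.

    If every screen and report goes through this helper, a developer or
    designer new to the application can't quietly introduce MM/DD/YYYY
    in one corner of the product.
    """
    return value.strftime(DISPLAY_DATE_FORMAT)

print(format_display_date(date(2016, 8, 9)))  # 2016-08-09
```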

Build software with intention, and your change vs. consistency issues should work out well.

The Tester Must Understand the Stack

As testers, we test a piece of software against a variety of both written and unwritten functional and nonfunctional requirements.

One of the key tools in testing that software is an understanding of the environment in which it will run. We need to understand the pieces of the stack. We don’t need to be experts on every bit of it, but we do need to understand what components are involved and how they work together. I’ve found that many folks will forget about parts of the stack, which can lead to red herrings or bad information when describing a situation or troubleshooting a problem.

For example, in my environment I’m usually testing web applications. For a given application, the following things are in place and need to be working correctly:

  • a web server (virtual) running on a host machine (physical)
  • a database server (virtual) running on a host machine (physical)
  • external (network) storage for the database
  • a working network between all of these servers and the client machine
  • DNS servers translating names into IP addresses to access the servers
  • a load balancer managing traffic between multiple web servers
  • IIS working correctly on the application server
  • zero or more APIs that are available and working
  • a web browser on the client machine that supports JavaScript

Even with this list, there’s a chance I’ve omitted something. Depending on the testing being performed or the issue being investigated, there’s a chance that any one of these components could be the culprit. Don’t forget about the entire stack.
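When an issue does surface, a quick scripted pass over those layers can rule components in or out before anyone chases a red herring. A rough sketch using only the Python standard library; the hostnames and ports are hypothetical stand-ins for a real environment:

```python
import socket
import urllib.request

# One cheap probe per layer; a failure points at a component to investigate.
CHECKS = [
    ("DNS resolves the web server", lambda: socket.gethostbyname("app.example.com")),
    ("Web server answers HTTP", lambda: urllib.request.urlopen(
        "https://app.example.com/health", timeout=5).status),
    ("Database port is reachable", lambda: socket.create_connection(
        ("db.example.com", 1433), timeout=5).close()),
]

for name, check in CHECKS:
    try:
        check()
        print(f"OK    {name}")
    except Exception as exc:  # report the failing layer, keep checking the rest
        print(f"FAIL  {name}: {exc}")
```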

Image by prettyinprint, used under Creative Commons licensing

Testers Don’t Just Test the Code

Kate Falanga chimed in recently with some thoughts around titles for testers, QA folks, and the like in Exploring Testing Titles in Agile. She lays out a few good reasons why the term Quality Assurance is a bad one, mainly that the role can’t really assure quality. I believe this. Heck, in the tagline on this site I refer to “(The myth of) Quality Assurance.”

She then outlines why she doesn’t like the title of Tester, feeling that it’s not broad enough to reflect all of the work that we do, and that it’s reactive:

It gives a very reactive rather than proactive connotation. If someone is a Tester then it is assumed that something needs to be tested. If you don’t have anything to be tested then why do you need a Tester? Why include them in planning or the project at all until you have something for them to do?

The bad assumption here is that code is the only thing being tested, and that testing is the only thing done by a tester. Sure, once code is being written, a tester will probably spend a majority of her time exercising that code, but the tester participates in testing activities before any code exists. Time spent in requirements discussions helps the team write better requirements or user stories. Time spent learning about the business environment or the end user’s use cases helps the tester get into the right mindset for testing activities.

These activities aren’t testing in the sense of exercising new code that’s been written, but they’re testing activities. If testing allows us to learn information about the software being tested, and we use that information to improve product quality, all methods of learning could be considered test activities, could they not?

Do we continue the search for a better title than Tester, or do we work to help the broader software industry understand that Tester doesn’t just mean exercising code changes?

Image by Ruth Hartnop, used under Creative Commons licensing

Effective Bug Reports with Pivotal Tracker

As a tester who was once an application developer, I’ve seen both sides of the bug report world: writing them, and receiving them and having to act on the information. Here’s my take on what belongs in a good bug report when using Pivotal Tracker as your system for work item tracking.

  • Story title: The title ought to give a quick, one-line indication of the issue. While many bugs require nuance and details, we need an easy way to reference this work item.
  • Story type: Bug. This one’s easy 🙂
  • Description: This is the meat of the bug. Let’s explore this a bit. Note that Pivotal Tracker supports Markdown, so you can add formatting if it helps clarify the bug report.
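Those same fields map directly onto Pivotal Tracker’s v5 REST API, which is handy if a test harness or script ever needs to file a bug automatically. A minimal sketch using the third-party requests library; the project ID, token, and bug details are placeholders:

```python
import requests

PROJECT_ID = 1234567       # placeholder project ID
TOKEN = "your-api-token"   # placeholder; found on your Tracker profile page

story = {
    "story_type": "bug",
    "name": "Login opens the application in a new browser window",
    "description": (
        "**Steps to reproduce**\n"
        "1. Enter a username, and a password ending in a shifted symbol.\n"
        "2. Press Enter while Shift is still held down.\n\n"
        "**Expected:** landing page loads in the same window.\n"
        "**Actual:** application opens in a new window."
    ),
}

response = requests.post(
    f"https://www.pivotaltracker.com/services/v5/projects/{PROJECT_ID}/stories",
    headers={"X-TrackerToken": TOKEN},
    json=story,
)
response.raise_for_status()
print("Created story", response.json()["id"])
```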

A bug work item will provide information for three audiences: the product owner who will prioritize the importance of this fix, the developers who will be tasked with doing the work to resolve the defect, and the tester who will eventually verify that the problem is resolved.

If it’s a straightforward bug, we may not need much further explanation. But there’s probably some context to be shared.

First, clarify what happened and why this is a bug. What did you find, and why do you think it’s a bug? How did the behavior differ from your expectations? Is the behavior directly in conflict with the behavior outlined in the feature story you’re testing? That’s an easy one. Or maybe you’ve found a consistency problem. Outline how the behavior is inconsistent. Perhaps it’s a usability issue. Explain why it’s a problem.

Provide steps to reproduce the issue. What were the data conditions? What did you click on? What user role were you in? What job or process did you run? What browser were you using? What window size? Any of these things can be relevant for reproducing the issue. Use your best judgment as to how much detail to include.

It can also be helpful to include some severity analysis. How bad is the problem? How often will it occur, and what are the implications if the bug isn’t fixed? While the product owner controls prioritization of the work items, we can provide information to help them make an informed decision. If the bug makes the program unusable, or causes data loss, we ought to be clear about it (and perhaps note elsewhere, such as in Slack or a face-to-face conversation, that there’s a very severe defect to be addressed). If the bug only occurs in rare circumstances, we should note that as well. Not all bugs are showstoppers; accurate severity information helps the project team ensure we’re addressing the right work at the right time.

Include or attach supporting documentation as appropriate. For user interface issues, a screenshot is often helpful. If there’s a complex data situation involved, attaching a database test script might help.

Finally, remember that a bug report is the first step in the conversation around a work item. We may gather additional information, learn more from the development team, or alter our perspective on a defect based on changing project conditions.

Image by Flickr user emil_kabanov, used under Creative Commons licensing

Pairing for Manual Configuration Tasks

During her talk last week at CAST, Natalie Bennett dropped a random tip that seems so straightforward, yet it’s not common practice anywhere I’ve worked.

The backstory: while we try to script and automate as many repetitive tasks as possible, any number of manual configuration tasks remain part of software development. Perhaps it’s the initial configuration of a server, or the creation of an account. The fact that these tasks are done manually leaves them prone to human error, and sometimes those errors aren’t easily detected.

The other backstory: we know of pair programming, where two developers sit together to work on a bit of code, putting two brains into the design and hopefully catching errors as they occur rather than later in the process.

The solution: combine the two ideas, and when there’s manual configuration work to be done, pair for the work.

Duh. So simple, yet so smart.