Sketchnotes from CAST 2017 session by Andrea Connell – Testing Through Time and Space. You’re welcome to share, please credit Aaron Hockley with a link back to KwalityRules.com – thanks!
Sketchnotes from CAST 2017 keynote by David Snowden – Context is Everything.
Assigning blame is never constructive on a software project.
Asking “How did this happen?” can lead to helpful information that can be used in the future.
Ah… data values within a range. So many possibilities for fun. Ran into one of those today while reviewing a potential work item being proposed. As presented:
Add "Length of Incarceration" range
Select from a list
* Less than 1 year
* 1-3 years
* 3-5 years
* 5-10 years
* 10-20 years
* More than 20 years
Spot the problem? What happens if the length of incarceration was 1, 3, 5, or 10 years? Which of the two ranges that include these values should be chosen?
Presumably we’re grouping this data into ranges for reporting purposes.
Suppose an offender spent exactly 5 years incarcerated: Elizabeth always chooses 3-5 years, Ross always chooses 5-10 years, and Eduardo sometimes chooses one range and sometimes the other. At what point does that inconsistency become statistically significant in the reports?
In any system with user-entered data, GIGO can apply, but let’s help the users by designing a system which makes it harder to input garbage.
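One way to design the garbage out of this particular form is to make the ranges lower-inclusive and upper-exclusive, so every value maps to exactly one bucket. Here's a minimal sketch in Python; the boundary convention and labels are my assumptions for illustration, not the project's actual spec:

```python
# Non-overlapping buckets: each is lower-inclusive, upper-exclusive,
# so any length of incarceration maps to exactly one label.
# (Boundary choices and labels here are illustrative assumptions.)
BUCKETS = [
    (0, 1, "Less than 1 year"),
    (1, 3, "1 to less than 3 years"),
    (3, 5, "3 to less than 5 years"),
    (5, 10, "5 to less than 10 years"),
    (10, 20, "10 to less than 20 years"),
]

def incarceration_bucket(years: float) -> str:
    """Map a length of incarceration to exactly one range label."""
    if years < 0:
        raise ValueError("length of incarceration cannot be negative")
    for low, high, label in BUCKETS:
        if low <= years < high:
            return label
    return "20 years or more"
```

With this convention, Elizabeth, Ross, and Eduardo all get the same answer for a 5-year sentence: it lands in "5 to less than 10 years," and the boundary values 1, 3, 10, and 20 are equally unambiguous.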
The longer I work as a tester, the more I realize that to provide the most value for my team, I need not only to report what’s happening, but to report it intelligently, synthesizing what we’ve seen in the software with supporting information to provide context.
Last week a fellow tester reported an interesting bug in a web application’s login form: after he entered his username and then his password, when he hit Enter to submit the form, the application opened in an entirely new browser window.
This wasn’t behavior built into the system design… when operating normally, the software should have presented him with the application’s landing page after login. I wasn’t able to immediately reproduce the behavior, and none of the system’s users had reported the issue. Yet this tester insisted that it happened nearly every time he accessed the application.
You Know This One
Even without knowing our application… you probably have information to solve this puzzle.
Once I figured out what was going on, I decided to see if I could lead folks to the same conclusion.
I asked the tester: “Is the last letter of your password a capital letter?” He said no… but it was a symbol. “A symbol accessed via the Shift key on your keyboard?” Yep.
What happens when you click a link in Chrome while you hold down the shift key?
You get a new window.
So when you’re still holding down Shift from the last character of your password, then hit Enter which activates the form submission… boom. New window.
Testing is Information with Context
As testers, we provide information. In this case, we can go beyond “sometimes this thing happens” to “this thing will happen every time, under this set of circumstances.” That’s useful.
We know both from our own experiences and articles about UI and UX that consistency within and across our applications can provide great value for the user; that which is familiar becomes easy.
We also know from the Agile Manifesto that we value responding to change over following a plan.
Are we at odds with ourselves? Can we both embrace change with an open mind while also valuing the benefits of consistency?
Yes. With intention.
Valuing Intentional Consistency
Consistency within and across software applications can provide for a positive user experience once a user has become familiar with the conventions that are used. Consistency can exist at multiple levels.
From specific to broad, we can look at consistency on several layers:
- Within a single application, we expect consistency for similar functions.
- Within an organization and line of business, consistency could be expected across applications used by the same user groups.
- Within an organization as a whole, consistency could be expected across applications built by or for that organization.
- Finally, we value consistency with other similar applications in the industry. If your text-editing program offers a toolbar button to make text bold, it probably should be a big bold letter B on the button (assuming an English-language interface).
Conscious choices to be consistent can help users become comfortable faster as they use your program for the first time, or as they gain proficiency and can apply previously-learned skills and behaviors from other areas of their work.
That said… if we always do things the same way we’ve always done them, we’ll never change, right?
Embracing Intentional Change
We want positive change. If we never changed as software development teams, we’d all still be writing COBOL command-line applications, right? Any new interface or functional paradigm is going to be inconsistent with the existing pattern simply by virtue of being new.
Much like we embrace intentional consistency, we can embrace intentional change. It’s okay to use a new design paradigm if the benefits of the new design provide a lot of value.
Change… with a purpose… is good.
Do Not Confuse Sloppiness for Change
Where we get in trouble, and create poor user experiences, is when the development team gets sloppy and we introduce change without a purpose.
Perhaps a developer is new to an application that has handled validation errors by lighting the field up in red and displaying an error summary at the bottom of the page, but he instead makes field labels bold and introduces an error toast in the upper right.
If a designer is used to dates in MM/DD/YYYY format, she might not know that for this line of business, dates are always displayed as YYYY-MM-DD, and would create misleading design comps.
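One way teams guard against this kind of drift is to route all display formatting through a single shared helper rather than formatting dates ad hoc. A minimal sketch in Python, assuming the YYYY-MM-DD convention described above (the helper name is hypothetical):

```python
from datetime import date

# Centralize the line-of-business date convention in one place,
# so a newcomer can't accidentally introduce MM/DD/YYYY somewhere.
# (Helper name and constant are illustrative, not from the post.)
DISPLAY_DATE_FORMAT = "%Y-%m-%d"

def format_display_date(d: date) -> str:
    """Format a date the one way this line of business displays dates."""
    return d.strftime(DISPLAY_DATE_FORMAT)
```

A newcomer who calls the shared helper gets the house convention for free; a code review that spots a raw `strftime` call becomes a prompt to ask why.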
These are usually innocent mistakes, made not with malice but without knowledge of the existing consistency patterns, or without asking questions before diving into a bad pattern.
Build software with intention, and your change vs. consistency issues should work out well.
As testers, we test a piece of software against a variety of both written and unwritten functional and nonfunctional requirements.
One of the key tools in testing that software is having an understanding of the environment under which that software will run. We need to understand the pieces of the stack. We don’t need to be experts on every bit of it, but we need to understand what components are involved, and how they work together. I’ve found that many folks will often forget about parts of the stack, which can lead to red herrings or bad information when describing a situation or troubleshooting a problem.
For example, in my environment I’m usually testing web applications. For a given application, the following things are in place and need to be working correctly:
- a web server (virtual) running on a host machine (physical)
- a database server (virtual) running on a host machine (physical)
- external (network) storage for the database
- a working network between all of these servers and the client machine
- DNS servers translating names into IP addresses to access the servers
- a load balancer managing traffic between multiple web servers
- IIS working correctly on the application server
- zero or more APIs that are available and working
Even with this list, there’s a chance I’ve omitted something. Depending on the testing being performed or the issue being investigated, there’s a chance that any one of these components could be the culprit. Don’t forget about the entire stack.
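When an issue surfaces, a quick pass over the lower layers can rule out whole categories of culprits before anyone blames the application. Here's a rough sketch in Python of that kind of first-pass check; the host names and ports are placeholders, and the layers covered are only a subset of the list above:

```python
import socket

# First-pass stack troubleshooting sketch: confirm DNS and basic TCP
# reachability before digging into the application itself.
# (Host names and port numbers below are illustrative placeholders.)

def resolve_name(hostname: str):
    """Return the IPv4 address for a name, or None if DNS lookup fails."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

def check_tcp(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_stack(web_host: str, db_host: str) -> dict:
    """Walk the obvious layers: DNS for each server, then the service ports."""
    return {
        "web dns": resolve_name(web_host) is not None,
        "db dns": resolve_name(db_host) is not None,
        "web http": check_tcp(web_host, 80),
        "db port": check_tcp(db_host, 1433),  # assuming SQL Server; adjust
    }
```

A report like "web dns is fine but the web port doesn't answer" points at IIS or the load balancer rather than the database or the application code, which is exactly the kind of context-rich information a tester can hand to the team.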
Image by prettyinprint, used under Creative Commons licensing