Jonathan Coulton’s Still Alive as a Software Project Retrospective

Sure, it came from the video game Portal, but Still Alive is full of gems that could be used as we look back on a software project. We start with the beginning of the song, which references a project that went well:

This was a triumph.
I’m making a note here:
HUGE SUCCESS.
It’s hard to overstate my satisfaction.

Sometimes things don’t go so well. Bugs happen:

But there’s no sense crying over every mistake.

I assume this couplet is about a burndown chart:

Now these points of data make a beautiful line.
And we’re out of beta, we’re releasing on time.

Cake is the promised reward in Portal, and hey, who hasn’t met a group of developers motivated by unhealthy baked goods?

Anyway, this cake is great.
It’s so delicious and moist.

And here’s the full song, with the screen as it plays in the game’s final credits:


Coming Soon to a #Starwest Near You

Last year was my first time at Starwest, a conference for testers held in Anaheim.


As a famous Californaustrian once said, I’ll be back.

I’ll be in Anaheim from the 11th through the 16th, taking workshops from Michael Bolton, Rob Sabourin, and Bob Galen, followed by the main conference.

If you’ll be there, let’s connect!

Unrelated to Starwest, I’m leading a photowalk the evening of the 11th. Hit that link to find out more or register.

Great Expectations

As a software tester, I have great expectations.

  • I expect that as I test a feature, the functionality will match my understanding of the user story and related discussions
  • I expect that the software will have an intuitive user interface
  • I expect that the software will be consistent with itself, with other similar applications we’ve developed, and with industry standards

Sometimes my expectations are met. Sometimes I find that the software behaves differently than I expected.

When behavior differs from expectations, have I found a bug? Perhaps. Or perhaps my expectations were wrong.

Conversation Starters

When software behavior differs from my expectation as a tester, more often than not it’s a conversation starter[1]. It may be time for a conversation with the product owner, to see if my expectations are in line with his expectations for the functionality of the system. Or maybe it’s time for a conversation with the developer, to figure out whether her expectations differed when she wrote the code that’s not behaving as I expected.

Some scenarios:

  • I expected the client list screen to be sorted by last name, because hey, that makes sense, right? But perhaps the product owner told the developer that they wanted it sorted by last activity date instead.
  • Perhaps the data field on the screen allows a different sort of input than was noted in the user story. Rather than assuming the developer is incompetent, I can ask whether the desired behavior changed after the (never-updated) user story was written.
  • Often, especially with non-standard use cases, I run into an error situation that’s handled in what seems like a strange way. Developers on my team have learned what’s usually coming when I start a conversation with “What would you expect to happen when…” and lay out the scenario. More than once I’ve discovered a workflow or use case that hadn’t been foreseen, so my expectation was based on something the developer hadn’t even considered.

Sometimes my expectations are “correct.” Sometimes the desired behavior is different than my expectations.

Expectations lead to revelations which lead to conversations, which may or may not lead to work to change software behavior. News flash: I’m not always right.

Related reading: the Huh? Really? So? section of my notes from James Bach’s workshop at STARWEST last year.


  1. Yes, there will always be the “it’s blatantly broken” bugs, but these aren’t the ones that usually cause process or personality grief.  ↩

It Happened on June 72nd

File this one in the “who the hell would’ve ever thought this was the correct behavior?” category…

Our dev team is moving into the Bootstrap world, which means we’re again learning how to manage date fields and date pickers.

Being the good tester that I am, I tried entering February 29, 2014 as test data. 2014 isn’t a leap year, and the date field automatically changed the value to March 1st, 2014. Hm.

February 30th led to the date field changing to March 2nd. Well, this is peculiar. It seems less than ideal to change the date without informing the user.

Let’s give it a really wacky date. What happens when I input 06/72/2014?

It changes it to 8/11/2014. Because, you know, August 11th is the proper way of representing June 72nd. Apparently we can count any arbitrary number of days from the beginning of a month.

WTF?
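My guess at the culprit, and it’s only a guess since I haven’t read the date picker’s source: JavaScript’s Date constructor silently rolls out-of-range day values forward rather than rejecting them. A quick TypeScript sketch reproduces all three of the conversions above:

```typescript
// JavaScript Date months are zero-indexed: 1 = February, 5 = June.
// Out-of-range day values roll forward instead of raising an error.
const feb29 = new Date(2014, 1, 29); // 2014 is not a leap year
console.log(feb29.toDateString()); // "Sat Mar 01 2014"

const feb30 = new Date(2014, 1, 30);
console.log(feb30.toDateString()); // "Sun Mar 02 2014"

// "June 72nd": 30 days in June, 31 in July, 11 left over = August 11
const june72 = new Date(2014, 5, 72);
console.log(june72.toDateString()); // "Mon Aug 11 2014"
```

If that’s what’s happening, the picker is displaying whatever the constructor hands back, without ever validating the raw input first.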

Management Shouldn’t Make Bug Count Jokes

A couple of years ago, a new senior manager began working at our organization (he was my boss’ boss). Shortly after his arrival, he came around to our group to introduce himself and meet the various members of the team.

He came into the room that houses my small dev team (5-6 people) on one side, with another similar team on the far side of the room. He met the other team, including their QA person. Then he met our team, and I was introduced as our QA guy. He then quipped:

So… do you guys keep score and see who has the least bugs?

Headdesk

Was it a joke? I’m not sure. He wasn’t laughing. And neither was I.

Yes, there is value in tracking some statistics, but what sort of impression does a tester get when the first interaction with a senior manager is that manager asking about bug stat competitions?

What are the odds that this person knows much about software testing? And if this person is going to evaluate software based on likely-bogus bug statistics, what other bad metrics is he going to use to make decisions?

Incidentally, said manager chose to leave the organization just a few weeks after being hired. Hopefully he found somewhere that’s a better fit.

Tip for management: testers are probably going to find your bug count jokes more scary than funny.

Do You Ship a Steaming Pile of Turd if the Customer Doesn’t Argue?

Back in ye olden days of waterfalls, our requirements-gathering efforts would lead to reams of specifications. We’d account for each pixel, specifying the screen position of various labels. Our data entry forms would have a tab order laid out explicitly. The text of every error message would be wordsmithed to (alleged) perfection.

We all thought we knew what we wanted up front. We’d write a big pile of documentation to show just what we needed built, and how. Developers couldn’t be trusted to figure these things out on their own…

And then we all got a dose of reality, as we realized that the quantity of documentation and specifications didn’t really correlate with system quality. Despite our best efforts to make change difficult (hi there, mister change control board), change still occurred. And it was expensive, since the whole process was built around what we assumed was stability.

Enter Agile

But hey, here comes the agile world, where we work iteratively with our product owners and end users to create something that meets their needs, even as those needs change throughout the duration of development.

We don’t write as many detailed specifications, because we value working software over comprehensive documentation[1], and some of that time spent documenting is probably better spent writing code. But as we cut out the documentation, we leave more decisions for developers to evaluate and decide what to build.

This is generally a good thing. Study after study shows that agile methods usually produce better software, and they often do it faster.

But… and there’s always a but… what about the little quality details?

Things a User Won’t Ask For

Product owners, business customers, and end users are pretty good at figuring out the big obvious functionality for a piece of software. Their domain knowledge drives the big features, and when we get it right as a software team, the result makes their lives better.

I’ve found that the product owner or customer often won’t ask for little things that can increase the quality of a piece of software. When is the last time you heard a customer explicitly ask for:

  • the web application to use consistent <title> tags, so that the application and the particular screen are identified the same way on every page
  • action buttons (Submit, Cancel) to always appear in the same position… for example, Cancel on the left and Submit on the right
  • consistent maximum lengths for data entered into text boxes, with handling for the cases where a user might circumvent the limit (copy and paste, perhaps?)
  • the Enter key to perform a default action, such as running a search

You get the idea… I could list fifty other similar attributes. These behaviors and patterns are hallmarks of quality software. Some make it more intuitive to use. Some help prevent the user from losing data. Each one makes the software just a bit better, and makes it less likely that a user will use the name of your program as an expletive.
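To make the last two bullets concrete, here’s a small TypeScript sketch. The names (applyQualityDetails, MAX_NAME_LENGTH) are my own invention rather than anything from a real project; it just shows how a text-box length limit and an Enter-key default action might be wired up once, consistently:

```typescript
const MAX_NAME_LENGTH = 50; // hypothetical limit from a (fictional) team standard

// Wire up the little quality details a customer won't ask for:
// a length limit, plus an Enter key that triggers the screen's
// default action instead of doing nothing.
function applyQualityDetails(input: HTMLInputElement, onSearch: () => void): void {
  // The declarative attribute covers typing and ordinary pasting...
  input.maxLength = MAX_NAME_LENGTH;

  // ...while the input event is a belt-and-braces check for anything
  // the attribute misses, such as values set programmatically.
  input.addEventListener("input", () => {
    if (input.value.length > MAX_NAME_LENGTH) {
      input.value = input.value.slice(0, MAX_NAME_LENGTH);
    }
  });

  // Enter performs the default action for the screen.
  input.addEventListener("keydown", (event) => {
    if (event.key === "Enter") {
      event.preventDefault();
      onSearch();
    }
  });
}
```

The point isn’t this particular code; it’s that details like these can be captured once, as a team standard, instead of being renegotiated screen by screen.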

Customer Acceptance is not the Be-All, End-All

We recently had a lively discussion within our development team about setting a quality standard for our products. A couple of our developers argued for the position that if the customer accepts the work, then it’s good enough, and anything we did above and beyond the minimum needed for customer acceptance was “gold plating” and excessive.

I disagree. Just because a customer is willing to call something “good enough” doesn’t necessarily mean it’s good enough. Much as we trust the customer or product owner to contribute their domain knowledge for the problem we’re solving, as software professionals we bring domain knowledge of our own to the table… in the domain of software quality. Our experience, both as individuals and as an industry, informs usability, reliability, stability, and other similar factors. Customers are often only able to articulate that they want the software to be “easy to use” or “user friendly.” Our expertise translates that into work tasks that can be verified by testing.

We ought to strive for the highest quality possible, even when the customer doesn’t explicitly ask for it. Establishing a set of quality standards for your software is a worthy effort and can help everyone on the product team have a clearer understanding of what is meant by quality software.