Tuesday, August 6, 2013

Designing for testability

I'm going to assume you agree that writing tests is good and that 100% code coverage (or as close to it as possible) is a great ideal to strive for.

Testing stuff is hard. Any stuff. By anyone. (QA teams don't have it any easier.) This is true whether you have tests or not. And, sometimes, the tests you already have make it harder to write more tests.

The root problem is testability. I define testability as "the ease with which a system can be verified." (This is different from "how well someone can describe a testcase." The latter is a skill of the person, the former an attribute of the system.) The easier a system is to test, the greater its testability.

Testability affects and is affected by everything. Every decision made by anyone on the project can reduce the project's testability, often in ways that aren't obvious until months later. For example, the ops team adds a new service and it needs a configuration file. The person in charge of doing it is focused on getting the service up and running, so they hard-code the file's path into a module that's included in the application. They didn't know the dev team's process for adding a new configuration file - they're ops, not dev. But that's now a block to testability. Instead of creating a new configuration file with appropriate values for testing and pointing the code at it, the tester has to put the file in that exact spot. The spot might be in a directory that requires privileges to write to, meaning tests now have to run with elevated privileges. It's also a spot that might change later, intermittently breaking the test suite in hard-to-diagnose ways.
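To make that concrete, here's a minimal sketch in Python of the two shapes this can take (the paths, module, and environment-variable name are invented for the example):

    import json
    import os

    # Hard to test: the path is baked into the module, so a test has to put a
    # real file in that exact spot - possibly with elevated privileges.
    def load_config_hardcoded():
        with open("/etc/new_service/config.json") as fh:
            return json.load(fh)

    # Easier to test: the path can be overridden, so a test can point the code
    # at a throwaway file containing appropriate values for testing.
    def load_config(path=None):
        path = path or os.environ.get("NEW_SERVICE_CONFIG",
                                      "/etc/new_service/config.json")
        with open(path) as fh:
            return json.load(fh)

The second version behaves exactly like the first in production; the only difference is that a test can reach the seam.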

A lot of ink (digital and not) has been spent discussing ways of improving the application code within a system to make it easier to write unit tests. An incomplete list would include:
  • Decoupling
  • Interfaces
  • Mock objects
A nearly equal amount has gone into describing how to write integration tests, though with less prescription for how to make a system more testable (we'll see why in a later post). And, still further, people have talked about other ways of distinguishing this test from that test.
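To make the unit-testing side of that list concrete, here's one possible sketch in Python (the Checkout and PaymentGateway names are invented for the example):

    class PaymentGateway:
        """Interface: the application code depends on this, not on a vendor SDK."""
        def charge(self, amount_cents):
            raise NotImplementedError

    class Checkout:
        """Decoupling: the gateway is handed in, not constructed internally."""
        def __init__(self, gateway):
            self.gateway = gateway

        def purchase(self, amount_cents):
            if amount_cents <= 0:
                raise ValueError("amount must be positive")
            return self.gateway.charge(amount_cents)

    class MockGateway(PaymentGateway):
        """Mock object: records calls so a test can verify what happened."""
        def __init__(self):
            self.charges = []

        def charge(self, amount_cents):
            self.charges.append(amount_cents)
            return "ok"

    def test_purchase_charges_the_gateway():
        gateway = MockGateway()
        assert Checkout(gateway).purchase(500) == "ok"
        assert gateway.charges == [500]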

At its heart, testing any system is just this:

  1. Hook up an input stream with testing data
  2. Hook up monitors on an output stream
  3. Run the test
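In code, that skeleton might look something like this (a sketch only; word_count stands in for whatever system is under test):

    from io import StringIO

    def word_count(stream):
        """The system under test: count the words arriving on an input stream."""
        return sum(len(line.split()) for line in stream)

    def test_word_count():
        # 1. Hook up an input stream with testing data.
        test_input = StringIO("one two\nthree\n")

        # 2. Hook up a monitor on the output stream (here, just capture the result).
        result = word_count(test_input)

        # 3. Run the test: compare what came out against the expectation.
        assert result == 3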
This process works for everything, so let's look at it through the lens of a car. When I take my car in to the local oil change place, they test a whole bunch of components in my car, not just the oil. For example, to test the transmission fluid, they:
  1. (input) Extract a small amount of fluid from my transmission and put it on a card.
  2. (output) The card has a reference color on it.
  3. (run test) Compare the fluid color against the reference color using a Mark-1 eyeball.
That's a highly repeatable and very strong test. It's cheap to execute (in time, materials, and training) and it works. (Happily for me, they are able to do this - the transmission fluid in one of my older cars was filthy and would have caused the transmission to fail if it hadn't been changed. I wouldn't have known to do it otherwise.) They test the air filter, the transmission fluid, the lights, the wipers - pretty much every component in my car.

Well, not quite. They test every highly testable component in my car. They don't test the integrity of the engine mounts, the safety of the seat belts, or whether the airbags are charged. Why not? What's different about those components that makes tests for them so much harder?

Unlike the various fluids and filters, the airbags (for example) aren't designed to be tested. There may be very good reasons for that, but that's not the question. If there were a car whose airbags were designed in such a way that my oil change place could cheaply test their charge, they would jump all over it. Running several dozen cheap tests makes clueless drivers (like me!) want to use them, and the more they can test, the more they will find that (legitimately) needs to be replaced. (Likely by them, because why go somewhere else?)

The oil change experience also gives us another crucial point - unit tests and integration tests are the same thing. The mechanics use different inputs, outputs, and tests when examining different components. But the point of input, the point of output, and the expectation are all well-defined. There's no distinction between someone who is qualified to judge the transmission fluid and someone who is qualified to judge the performance of the car as a whole. Nor is there a distinction between the types of tests (or inspections, as they call them).
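A small sketch of that claim (the parse and total functions are invented for the example): both tests below have a well-defined point of input, point of output, and expectation; only the scope differs.

    from io import StringIO

    def parse(line):
        """One small component: parse a 'name=value' line."""
        name, value = line.strip().split("=")
        return name, int(value)

    def total(stream):
        """The composed system: sum the values across a whole input stream."""
        return sum(value for _, value in (parse(line) for line in stream))

    # Same shape, small scope: one component, one crafted input, one expectation.
    def test_parse_one_line():
        assert parse("oil=3\n") == ("oil", 3)

    # Same shape, larger scope: the whole pipeline, a crafted stream, one expectation.
    def test_total_over_a_stream():
        assert total(StringIO("oil=3\nfilter=2\n")) == 5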

More on this in part 2.
