Friday, August 9, 2013

Thoughts on component testing

In software testing, we can compare the entire system to a jigsaw puzzle, with each component being a piece. To verify that a piece has the proper shape, we can do one of (at least) three things:

a) check whether it fits in its place,
b) check whether it fits with the pieces around it,
or c) compare it to another piece we already know has the required shape.

Approach a) is exactly standard unit testing - we simulate the proper environment for the piece, using fakes and control flow, and check whether it performs as expected. The challenge is simulating the correct "hole" for the component to fill - it's an exercise in negative space.
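
To make the idea concrete, here's a minimal sketch in Python - the names (OrderProcessor, a payment gateway) are made up purely for illustration. We carve out the "hole" with a fake and check that the piece fills it:

import unittest
from unittest.mock import Mock

class OrderProcessor:
    """Hypothetical component under test."""
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def checkout(self, order_total):
        if order_total <= 0:
            raise ValueError("order total must be positive")
        return self.payment_gateway.charge(order_total)

class OrderProcessorTest(unittest.TestCase):
    def test_checkout_charges_the_gateway(self):
        # The fake gateway is the "hole" carved out for the piece:
        # it stands in for the component's real environment.
        fake_gateway = Mock()
        fake_gateway.charge.return_value = "receipt-1"

        processor = OrderProcessor(fake_gateway)
        receipt = processor.checkout(100)

        fake_gateway.charge.assert_called_once_with(100)
        self.assertEqual(receipt, "receipt-1")

if __name__ == "__main__":
    unittest.main()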

Option b) is integration testing. Theoretically speaking, much of it is not mandatory - as long as we've performed the proper unit tests, combinations of components should work together as expected, like the parts of a mathematical formula. When the number of components involved in a process grows beyond the trivial, we factor the process out into separate master components and unit test those at a higher level of abstraction - which, however, introduces more opportunity for design flaws and excessive architecture.
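
Roughly, that factoring-out might look like the sketch below - again with made-up names, a CheckoutFlow coordinating an inventory and a shipping piece. The master component is just another piece, and we unit test it against fakes of what it coordinates:

import unittest
from unittest.mock import Mock

class CheckoutFlow:
    """Hypothetical 'master' component coordinating smaller pieces."""
    def __init__(self, inventory, shipping):
        self.inventory = inventory
        self.shipping = shipping

    def place_order(self, item, address):
        if not self.inventory.reserve(item):
            return "out of stock"
        self.shipping.schedule(item, address)
        return "ordered"

class CheckoutFlowTest(unittest.TestCase):
    def test_in_stock_order_is_shipped(self):
        # The coordinated pieces are faked; only the coordinator is real.
        inventory = Mock()
        inventory.reserve.return_value = True
        shipping = Mock()

        flow = CheckoutFlow(inventory, shipping)
        result = flow.place_order("book", "some address")

        self.assertEqual(result, "ordered")
        shipping.schedule.assert_called_once_with("book", "some address")

if __name__ == "__main__":
    unittest.main()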

More importantly, we are imperfect and so are our tests. We are not always capable of creating a perfect representation of the expected environment for our tested components - which, unlike a jigsaw puzzle hole, is implicit and multi-dimensional. Even well-tested components should be assumed to have undetected rough edges, and as the separate pieces integrate into the larger whole, their entropy multiplies and we risk ending up with an unpredictable system.

In the end, many integration tests serve as sanity checks to make up for deficiencies in our unit-testing code base, as well as to avoid unnecessary design layers.
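
Such a sanity check might look roughly like the following - the same hypothetical CheckoutFlow as above, repeated so the sketch stands alone, but assembled from simple "real" implementations instead of fakes and exercised as a whole:

import unittest

class CheckoutFlow:
    """Hypothetical coordinator, as in the earlier sketch."""
    def __init__(self, inventory, shipping):
        self.inventory = inventory
        self.shipping = shipping

    def place_order(self, item, address):
        if not self.inventory.reserve(item):
            return "out of stock"
        self.shipping.schedule(item, address)
        return "ordered"

class InMemoryInventory:
    """Toy 'real' implementation, held in memory for the test."""
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, item):
        if self.stock.get(item, 0) > 0:
            self.stock[item] -= 1
            return True
        return False

class RecordingShipping:
    """Toy 'real' implementation that records scheduled shipments."""
    def __init__(self):
        self.scheduled = []

    def schedule(self, item, address):
        self.scheduled.append((item, address))

class CheckoutSanityTest(unittest.TestCase):
    def test_the_assembled_pieces_fit_together(self):
        # No fakes: the real pieces are wired up and exercised as a whole.
        shipping = RecordingShipping()
        flow = CheckoutFlow(InMemoryInventory({"book": 1}), shipping)

        self.assertEqual(flow.place_order("book", "home"), "ordered")
        self.assertEqual(flow.place_order("book", "home"), "out of stock")
        self.assertEqual(shipping.scheduled, [("book", "home")])

if __name__ == "__main__":
    unittest.main()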

Option c) would be testing by example, perhaps the topic of a future meditation. Stay tuned...
