Tuesday, 11 June 2013

The TAO of TDD

TDD - you need the right curves in the right places
I have identified a graph that will help maximise the benefits of test-driven development. It takes into account the natural phases that a software project typically goes through, as well as an often ignored but very significant piece of code metadata: code volatility. Code volatility matters because it is implicit in the very notion of TDD.

A closer look at the bandwagon
TDD (test-driven development) is all the rage right now. It means that you write your tests first, then develop your code, checking it against the tests as you go.

The idea is that by working out how you will test your code, you gain not only a robust set of tests that proves the code is correct, but also, by developing those tests first, a clear focus on the finer details of how the code will work. For enterprise-class development, this means the development process looks something like this:

  1. System requirements
  2. Problem analysis/solution identification
  3. Program specification
  4. Develop tests
  5. Write code, running the tests for each section of code as you go
  6. Verify code against the full test suite
  7. Higher-level human testing
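
To make steps 4 and 5 concrete, here is a minimal sketch in Python using the standard unittest module. The discount_price function and the pricing rules it implements are hypothetical, invented purely to show the test being written before the code it verifies.

```python
import unittest

# Step 4: the tests are written first, encoding the agreed behaviour
# (the discount rules here are hypothetical).
class TestDiscountPrice(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(discount_price(50.0), 50.0)

    def test_ten_percent_discount_at_or_above_threshold(self):
        self.assertAlmostEqual(discount_price(200.0), 180.0)

    def test_negative_price_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(-1.0)

# Step 5: only now is the code written, and it is run against the tests above.
def discount_price(price: float, threshold: float = 100.0, rate: float = 0.10) -> float:
    """Apply a 10% discount to orders at or above the threshold (hypothetical rule)."""
    if price < 0:
        raise ValueError("price must be non-negative")
    return price * (1 - rate) if price >= threshold else price

if __name__ == "__main__":
    unittest.main()
```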
The benefits are:
  • Better clarity of code before programming starts
  • Full test suite available for verifying subsequent code changes/maintenance.
  • Focussing on, and agreeing, tests up front is an excellent way to debug requirements and to identify poorly specified, misunderstood or ambiguous requirements.
  • Rapid testing: Automated testing is usually far quicker than human testing.
  • Full set of automated, repeatable tests ensures all tests reliably performed and at low cost.
  • By running comprehensive tests across the entire software suite, unexpected/unintended consequences of code changes/development can be identified that could otherwise be missed in the more "targeted" approach of human testing.
  • Human testing can be focussed on higher value, more intelligent, expert level testing. Better job satisfaction for humans.
  • By removing the cost of "mundane" testing, human expertise is freed to identify nuances such as user experience issues, colour/icon mismatches, porting problems and other such "higher level" verifications.
  • With human testing focussed on the "interesting" work, a higher quality of testing results. Bored humans make for poor testers.
  • The biggest advantage of all is that, quite simply, much more testing can be performed, much more regularly and much more comprehensively.
A bit of magic
However, there is a bit of a black art to TDD, one that is often overlooked: TDD is about delivering benefits in terms of cost and time.

TDD implies that you are writing tests, lots of them. This means:
  • Your tests need to cover a potentially exponential set of execution paths, combinations, boundaries and exceptions, component dependencies, flooding, stress testing, performance testing and data set-up (a small sketch of this combinatorial growth follows this list).
  • This implies a significant amount of test scripting, and that means software development. Fully testing an item of code often requires more lines of code than the code being tested.
  • Test scripts, like all software, are prone to error. Bugs in test scripts can waste significant time if they falsely identify code bugs. You start by attempting to resolve a problem that doesn't exist. Depending on the nature of the test, the number of dependencies in the code and the complexity of the code, this can be a significant misdirection and hence waste of time and resources.
  • A large package of test scripts and data becomes a significant factor when changing code. This is not limited to software maintenance. Software additions almost always require existing code to be modified in some way and that usually implies updating the testing software and data too. In software with high combinatorial paths or outcomes, this can imply very serious test script modification overheads.
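
To illustrate the combinatorial point above, here is a small sketch in Python. The parameter axes (user types, payment methods and so on) are invented purely for illustration.

```python
from itertools import product

# Hypothetical input axes for a single function under test: each axis is small,
# but the full combination matrix grows multiplicatively.
user_types = ["guest", "member", "admin"]
payment_methods = ["card", "invoice", "voucher"]
currencies = ["GBP", "EUR", "USD"]
boundary_amounts = [0.00, 0.01, 99.99, 100.00, 9999.99]

cases = list(product(user_types, payment_methods, currencies, boundary_amounts))
print(len(cases))  # 3 * 3 * 3 * 5 = 135 combinations for one function

# Every one of these cases needs an expected result defined and data set up,
# and, when requirements change, potentially revisiting - which is where the
# maintenance cost of a large test suite comes from.
```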
TDD is about benefits. These benefits include cost savings, software reliability, reduced cost of maintenance and faster delivery of code changes. One word is at the heart of these benefits: Automation.

However, automation implies repetition in this context. Scripting a test that will only ever be run once is unlikely to yield a cost-benefit in your favour. The downside of additional software development (test scripts/data), potential for bugs in the test scripts and other considerations mentioned above need to be looked at carefully as traditional human testing may well be better and possibly cheaper in these circumstances.
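
A rough break-even calculation illustrates the point. The figures below are invented purely for illustration; only the shape of the arithmetic matters.

```python
def breakeven_runs(script_cost: float, script_cost_per_run: float,
                   manual_cost_per_run: float) -> float:
    """Number of test runs at which scripting starts to pay for itself.

    Scripted cost = script_cost + n * script_cost_per_run
    Manual cost   = n * manual_cost_per_run
    Break-even where the two are equal.
    """
    saving_per_run = manual_cost_per_run - script_cost_per_run
    if saving_per_run <= 0:
        return float("inf")  # automation never pays back
    return script_cost / saving_per_run

# Illustrative figures only: 8 hours to write the script, 0.1 hours to maintain
# and run it each time, 1 hour for the equivalent manual test.
print(breakeven_runs(8.0, 0.1, 1.0))  # ~8.9 runs before automation wins
```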

Likewise, highly volatile software undergoing major, frequent changes may amplify the additional maintenance costs of the test suite past the point of usefulness. Code/functional volatility essentially draws the software closer to behaving like "once-off" code, and hence makes it much less likely to yield a benefit from scripted TDD.

When human is best: Meet the curve
Note that TDD as a methodology doesn't require scripting. It's perfectly feasible to implement your testing as human testing, although TDD as it's commonly practised implies scripted testing.

When the software volatility curve is such that it draws close to once-off code, then human testing begins to beat scripted testing.

A "typical" software lifetime follows a volatility curve, with high volatility in the early stages, slowly reducing over time. If you include the conceptualisation and requirements phase in that graph, then the volatility curve is even more dramatic as various ideas, scoping decisions and requirements inclusions/exclusions take place.

To begin writing your tests too early in the software cycle, during the highly volatile stages, is to dramatically increase the costs and also the likelihood of redundant or changing tests.

Writing your tests too late in the software's life, for example several generations in, when the software is stable, has few defects and its requirements are unlikely to change, means that you are developing tests to prove already-proven software. These tests are unlikely to be reused much, if at all, due to low software maintenance, so the majority of the test suite would be redundant.

Therefore, there is a "sweet spot" in most software developments where scripted TDD is appropriate. It begins at a certain point early in the software's life-cycle and ends at a certain point of stability and maturity. Within this sweet spot the return on investment of scripted TDD is positive and worth pursuing. Outside it, the return on investment is at best close to zero and often significantly negative, and scripted TDD is not worth investing in under those conditions.

This implies that there is a natural value curve to TDD across a project's life. That curve begins and ends either negative or close to zero, with a positive "sweet spot" in between yielding a good return on investment for TDD.
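
As a purely illustrative sketch of that curve, the toy model below assumes volatility decays exponentially over releases, and that both the re-testing benefit and the test-rework cost scale with it. Every constant is invented; only the shape of the output matters.

```python
import math

def tdd_roi_per_release(release: int, manual_cost: float = 10.0,
                        rework_cost: float = 12.0, upkeep: float = 0.5) -> float:
    """Very rough model of the scripted-TDD return for one release (all figures assumed)."""
    volatility = math.exp(-0.25 * release)       # high early, low late (assumed decay)
    reuse = min(1.0, 2.0 * volatility + 0.05)    # little re-testing once stable (assumed)
    saving = reuse * manual_cost                 # manual testing effort avoided
    cost = volatility * rework_cost + upkeep     # test rework plus fixed upkeep
    return saving - cost

# Negative at release 0, peaks a few releases in, then decays towards zero:
for release in range(0, 21, 2):
    print(release, round(tdd_roi_per_release(release), 2))
```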

Identifying this curve enables project planning that yields the most benefit from TDD, with well-timed allocation of resources maximising the return on investment.
