Monday, April 28, 2014

TDD and Me

It will come as no surprise to anyone who knows me, but I have a bit of a thing for Test Driven Development (TDD).  Recently @dhh stirred the developer waters with a provocative post, titled TDD is dead. Long live testing. That set off a firestorm on Twitter (what doesn’t these days?), which engendered a response from @unclebobmartin, titled Monogamous TDD. Leaving aside the rhetoric about fundamentalism from both sides, I’m more inclined to agree with the latter. I’ve found TDD to be a practice which does, as Uncle Bob says, help me:
  1. Spend less time debugging
  2. Document the system at the lowest level in an “accurate, precise, and unambiguous” way.
  3. Write higher-quality, decoupled code.
I’ve even, occasionally, gotten enough confidence from my unit tests to deploy without fear based on passing tests and to refactor with the confidence that Uncle Bob describes.

But Uncle Bob left out one, perhaps glaringly obvious and important, benefit that TDD provides that other testing strategies don’t. I suspect that’s because both DHH and Uncle Bob agree on the need for tests.

TDD, because it insists on writing the tests first, ensures that the tests are written at all.

My fear, based on my experience with other developers, is that despite DHH’s careful phrasing, most developers will only hear “TDD is dead.” To many, if not most, developers the question isn’t, “Do we write the tests first or last?” but “Do we write tests at all?” In my experience, TDD was the practice that gave me enough of a reason to write the tests.

Though I would agree that tests were necessary and led to higher quality code, writing tests afterwards was something I would rarely, if ever, do. Why? Because the code “obviously worked.” Never mind the fact that it actually probably didn’t, or at least didn’t always, work. Even now, when I’ve written some experimental code to figure out how something works, I find that writing the tests (first) for the production version reveals things in the experimental code that were broken all along.

Even within the last week I’ve had conversations with developers who insist that TDD slows you down and that “customers don’t pay us to write tests.” While true in part, these sorts of myths drive people to omit tests entirely, not to test last or to test using alternative methods. I fear that they will only be reinforced by DHH’s post.

To borrow a concept, Shu Ha Ri, from Alistair Cockburn, I feel like the “conversation” between DHH and Uncle Bob is taking place between two people at the Ri, or fluent, stage and will be misinterpreted by the majority of us in the Shu, or follower, stage. Because the practices at that stage are still being ingrained and take effort to master, they are easily abandoned when another “expert” weighs in and discredits them. In our sound-bite culture this is especially pernicious, because DHH isn’t advocating the abandonment of testing (which is what I think many will hear) but an alternative practice of testing.

My Practice

Or How I Make TDD Work for Me
First and foremost, I consider myself committed to high quality code. Secondarily, I consider myself a pragmatist. When I started doing TDD, I tested everything. I also found myself writing prescriptive, rather than descriptive, tests. I still tend to do this if I'm not careful.

What do I mean by that? I mean that the tests were describing how things got done rather than what was being done. I think this is one of the key pitfalls of poorly done TDD and leads to tests being brittle and hard to maintain (though it's true of all tests, it's just that - mostly - those other tests are never written). Since this is one of the common criticisms of TDD, I think it's important to note that it's also something that has to change if you're going to be successful at TDD. These days I try to write tests that constrain the code as little as possible and that define behavior, not implementation.
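To make that concrete, here's a minimal C# sketch of the difference, assuming NUnit and Moq; the OrderCalculator and ITaxService types are made up purely for illustration.

    using Moq;
    using NUnit.Framework;

    public interface ITaxService
    {
        decimal TaxFor(decimal subtotal);
    }

    public class OrderCalculator
    {
        private readonly ITaxService _taxService;

        public OrderCalculator(ITaxService taxService)
        {
            _taxService = taxService;
        }

        public decimal Total(decimal subtotal)
        {
            return subtotal + _taxService.TaxFor(subtotal);
        }
    }

    [TestFixture]
    public class OrderCalculatorTests
    {
        // Prescriptive: pins down *how* the total is computed (which collaborator
        // is called and how often), so it breaks whenever the implementation
        // changes, even when the result is still correct.
        [Test]
        public void Total_CallsTaxServiceExactlyOnce()
        {
            var taxService = new Mock<ITaxService>();

            new OrderCalculator(taxService.Object).Total(100m);

            taxService.Verify(s => s.TaxFor(100m), Times.Once());
        }

        // Descriptive: states only *what* the caller should observe, leaving the
        // implementation free to change.
        [Test]
        public void Total_IncludesTheTax()
        {
            var taxService = new Mock<ITaxService>();
            taxService.Setup(s => s.TaxFor(100m)).Returns(10m);

            var total = new OrderCalculator(taxService.Object).Total(100m);

            Assert.AreEqual(110m, total);
        }
    }

The first test has to change whenever the collaboration changes; the second only changes if the observable behavior does.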

I also try to only test meaningful code that I write. "Meaningful" is a tricky concept as sometimes I find myself going back and adding a test that I didn't initially think was meaningful...until it broke in QA. Logging is one of those things that I don't often test unless there are specific requirements around it. IOC container configuration is another. YMMV. "Code I write" is much easier. I avoid testing that the framework or language works. So, for example, no testing of C# properties (i.e., getter/setter methods) unless there is complex logic in them.
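As a rough (and entirely made-up) illustration of where I draw that line:

    using System;

    // Not worth a test: an auto-property is just the language working.
    public class Customer
    {
        public string Name { get; set; }
    }

    // Worth a test: this setter has logic of its own that can be wrong.
    public class Thermostat
    {
        private decimal _target;

        public decimal TargetTemperature
        {
            get { return _target; }
            set
            {
                if (value < 5m || value > 30m)
                    throw new ArgumentOutOfRangeException("value");
                _target = value;
            }
        }
    }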

The other thing that was crippling to me when I first started with TDD was copy-paste disease, or "DRY-rot". By that I mean that I often simply copy-pasted test set-up into each test rather than treating my test code as a first-class entity and applying the same craftsmanship principles to it that I did to the production code. It seems obvious in retrospect, but this was test code, not production code (or so my thinking went). Of course, when things changed, that led to a cascading chain of test changes. Again, this is one of the key complaints I hear about TDD (and, again, about tests in general).

Now I make sure to refactor my tests to keep them DRY and to apply good architectural principles to my test code as well as my code under test. In my mind it's all production code. Refactoring to DRY code, just like in your code under test, makes changes much less painful and the suite easier to maintain.
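Here's a minimal sketch of what that looks like, reusing the made-up OrderCalculator example from above (still assuming NUnit and Moq): the construction noise that used to be copy-pasted into every test moves into a [SetUp] method and a small helper, so when the constructor changes, one method changes instead of every test.

    using Moq;
    using NUnit.Framework;

    [TestFixture]
    public class OrderCalculatorTests
    {
        private Mock<ITaxService> _taxService;
        private OrderCalculator _calculator;

        // One place to change when construction changes.
        [SetUp]
        public void CreateCalculator()
        {
            _taxService = new Mock<ITaxService>();
            _calculator = new OrderCalculator(_taxService.Object);
        }

        // A small helper so each test states only what matters to it.
        private void GivenTaxOf(decimal tax)
        {
            _taxService.Setup(s => s.TaxFor(It.IsAny<decimal>())).Returns(tax);
        }

        [Test]
        public void Total_IncludesTheTax()
        {
            GivenTaxOf(10m);
            Assert.AreEqual(110m, _calculator.Total(100m));
        }

        [Test]
        public void Total_WithNoTax_IsJustTheSubtotal()
        {
            GivenTaxOf(0m);
            Assert.AreEqual(100m, _calculator.Total(100m));
        }
    }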

I'm also frequently confronted with code that hasn't been written with TDD or whose tests haven't been maintained. As a pragmatist, I don't find it valuable to go back and write tests for existing code, especially when I'm first coming to a project and don't have a complete understanding of how that code is supposed to work.

Aside: the lack of tests makes it harder to understand such code, since there isn't an "executable specification" that shows me how the code differs from the typically out-of-date (or non-existent) documentation. At best you can get an incomplete oral history from the previous developer, but even they may not be the original author and may have only a limited understanding.

If I'm implementing a new feature I will write tests around that feature and refactor as necessary to support testability in the feature. In many cases this means introducing either wrappers around hard-to-test (read: static) implementations or alternative constructors with suitable default implementations, which allow me to introduce dependency injection where it wasn't before. Less frequently, while changing features, I'll write enough tests around the feature to help ensure that I'm not breaking anything. This is easier when there are existing, albeit out-of-date, tests, as at least the architecture supports testability. As a last resort I do manual testing.
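Here's a hedged C# sketch of both techniques; every type name is invented for illustration. The wrapper isolates a static call (DateTime.Now here) behind an interface, and the extra constructor supplies a suitable default so existing callers keep working while tests inject a fake.

    using System;

    // The wrapper: a thin interface over the hard-to-test static call.
    public interface IClock
    {
        DateTime Now { get; }
    }

    public class SystemClock : IClock
    {
        // The static call is isolated here and nowhere else.
        public DateTime Now
        {
            get { return DateTime.Now; }
        }
    }

    // The alternative constructor: a suitable default keeps existing callers
    // working, while tests can inject a fake clock.
    public class InvoiceService
    {
        private readonly IClock _clock;

        public InvoiceService() : this(new SystemClock()) { }

        public InvoiceService(IClock clock)
        {
            _clock = clock;
        }

        public bool IsOverdue(DateTime dueDate)
        {
            return dueDate < _clock.Now.Date;
        }
    }

In a test, the second constructor takes a stub clock pinned to a known date; in production, the parameterless one behaves exactly as the code did before the refactoring.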

Almost invariably, if TDD hasn't been used, there are no, or only minimal, developer tests. I don't count QA scripts as tests; though they serve a valuable purpose, they don't give the developer feedback quickly enough to ensure safety during development. This is almost universal in my experience and leads to the fears I've detailed above.

The last thing I'll mention is that, as I alluded to before, I don't typically write unit tests for experiments or spikes. If I do end up using that code as the basis for production code, I try to follow the practice of commenting out pieces of it, writing the tests as if the code did not exist, and then uncommenting the relevant bit of code to make each test pass. Invariably I find that my experimental code was broken in some significant way when I do this, so I try to keep it to a minimum.

The Need for Speed

I want to say a word about one of the common complaints that I hear about TDD: that TDD slows development. I think that this is both true (and good) and false (and still good). First, yes, writing the tests in addition to writing the code, particularly before you write the code, can add effort to the task. If you otherwise wouldn't write the test at all, by doing TDD you've incurred the entire cost of writing the test. If you otherwise would write the test after, by doing TDD you've incurred the cost of whatever test refactoring you did while the code was taking shape.

So, yes, doing TDD can take more time per unit of code under test than either not testing or writing tests after. BUT... that time isn't necessarily wasted, because it also gives you more time, and the impetus, to think about the code you are going to write before you write it. In my experience, this leads to better code the first time and less re-writing, and it can actually save time even during initial development, though it typically does add to it somewhat.

Over the long term, I think the idea that TDD takes more time is patently false. There is a body of research into the topic showing that testing, and specifically TDD, reduces code defects and re-work significantly (cf. Is There Hard Evidence of the ROI of Unit Testing?).

Anecdotally, I've also found that on projects where I've used TDD and other developers haven't tested at all, we've spent a lot more effort around the parts of the code that weren't tested. While I'd like to take credit for my superior development skills, the reality is that, by using TDD, I simply prevented a lot of the defects that I otherwise would have made. When I'm not able to write tests, I make those same errors.

On projects that do have unit tests, I've found that when I come in later I'm able to get up to speed much more quickly and to develop with more confidence, and therefore more productively, because I have the safety of the test suite. Again, in my experience, if you don't do TDD, you probably also don't do testing, period. At the very least, if you wait and write the tests afterwards, you're still incurring, while the code is under development, the cost of finding and fixing defects that TDD could have saved you.

Always a Newb

Finally, while I feel like I've learned a lot about testing in general, and TDD in particular, over the years, by no means do I consider myself to know everything. I'm still discovering how to make it work better. I don't pretend that there aren't alternative ways that might work as well. Based on my experience, it's the best thing that I've found, and I heartily recommend it to anyone who asks. I will happily abandon TDD for any practice that works better. So far I haven't found one.

I will say, though, that I know enough to be able to safely say that if you're not writing tests, unit or otherwise, don't bother weighing in on whether TDD or some other means of testing is better...just start testing. If you are testing but haven't tried TDD, give it a chance. Study the pitfalls (maybe start with TDD, Where Did It All Go Wrong by Ian Cooper) and avoid them, but don't blindly dismiss TDD based on someone else's experience, because there are plenty of us who have had a different, markedly better one.

Edited to correct misspellings and grammar. Now, if I could only find a way of doing TDD on my blog posts.