We had a discussion at work after watching the excellent Kevlin Henney talk about good unit tests. It helped me clarify the many roles tests play, and why people get into the habit of complaining about their tests getting in the way. So here are the four categories a test is trying to balance:
- Requirements
- Regression
- Documentation
- Refactoring guide
Most tests today are written as requirements. That's a huge improvement over the past, when we'd write tests afterwards and chase code coverage; those 'later' tests are useless. Requirement tests appear in high-level acceptance testing and in low-level TDD alike. Your test defines a behaviour you want, and once it passes you don't go back to change it, beyond some obvious cleanups like removing duplication. These tests allow rapid development and reliable progress. But they can hinder refactoring, provide poor feedback on regression failures, and may be too implementation-focused to be readable.
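To make that concrete, here's a minimal sketch of a requirement-style test in py.test. The `ShoppingCart` and its discount rule are invented for illustration; the point is that the test states the behaviour we want up front, and once it passes we leave it alone:

```python
class ShoppingCart:
    """Tiny made-up implementation, included only so the example runs."""

    def __init__(self):
        self._prices = []

    def add(self, name, price):
        self._prices.append(price)

    def total(self):
        subtotal = sum(self._prices)
        # The requirement under test: orders over 50 get a 10% discount.
        return subtotal * 0.9 if subtotal > 50 else subtotal


def test_discount_applies_to_orders_over_fifty():
    cart = ShoppingCart()
    cart.add("book", price=30)
    cart.add("lamp", price=25)
    assert cart.total() == 49.5
```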
An easy correction is to make tests act as good documentation. Unit tests should explain a component's behaviour to other programmers. Think of them as the examples section of a book, the part any programmer flips to and reads first. Higher-level tests can use BDD, tables or other techniques to create highly readable descriptions that can be shared with your product owner or customer. One thing you'll notice if you try to make unit tests act as good documentation is that you'll want to avoid the London-style or mockist test, where every secondary object is mocked out. All those mocks heavily hinder the test being a document, because it stops being an example of how the code actually runs in the product.
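Table-style tests are one way to get that readability even at the unit level. Here's a sketch using an invented `title_case` function: the table of examples doubles as documentation, with no mocks in sight.

```python
import pytest


def title_case(text):
    """Hypothetical component under test, included so the example runs."""
    return " ".join(word.capitalize() for word in text.split(" "))


# Each row reads like a worked example a new team member could learn from.
@pytest.mark.parametrize("text, expected", [
    ("hello world", "Hello World"),
    ("HELLO world", "Hello World"),
    ("", ""),
])
def test_title_case_examples(text, expected):
    assert title_case(text) == expected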
You may think all those requirements-focused tests are all you need to cover the regression case. But good regression tests need highly informative error messages, and I find those get introduced later, when you start seeing failures. That might be when a bug is introduced, or when a new feature makes old tests inaccurate. That's a good time to go back through the assert statements and make sure they give enough information to know how to correct a failure; once improved, those asserts will benefit you ever after. There are also frameworks that can help, like the highly informative asserts from py.test.
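As a sketch of what I mean (the `Invoice` type here is made up), py.test's assertion rewriting already shows both sides of a failed comparison, and you can add a message carrying enough of the breakdown that a failure years from now diagnoses itself:

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    """Invented invoice type so the example is self-contained."""
    item_total: float
    shipping: float

    @property
    def total(self):
        return self.item_total + self.shipping


def test_invoice_total_includes_shipping():
    invoice = Invoice(item_total=95.0, shipping=7.5)
    # py.test rewrites this assert, so a failure shows both values;
    # the message adds the breakdown so a regression explains itself.
    assert invoice.total == 102.5, (
        f"items={invoice.item_total} shipping={invoice.shipping}; "
        "expected the total to include shipping"
    )
```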
Unit tests are your best guide for refactoring, so you want a complete suite that runs very fast. That way you can change a line of code and know within a few seconds whether the change is okay. Higher-level tests help too: you could replace an entire component (perhaps something custom with something off-the-shelf) and see that everything still works. Can you replace your database with confidence? That's the kind of feedback high-level tests can give to support refactoring.
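Here's a sketch of that component-swap confidence; both store classes are invented for illustration. The same behavioural tests run against two implementations, so replacing one with the other is a change the suite has already rehearsed:

```python
import sqlite3

import pytest


class DictStore:
    """Invented in-memory store, standing in for a custom component."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


class SqliteStore:
    """Invented SQLite-backed store, standing in for the replacement."""

    def __init__(self):
        self._db = sqlite3.connect(":memory:")
        self._db.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")

    def put(self, key, value):
        self._db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))

    def get(self, key):
        row = self._db.execute(
            "SELECT value FROM kv WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else None


# Run every test in this file against both implementations.
@pytest.fixture(params=[DictStore, SqliteStore])
def store(request):
    return request.param()


def test_round_trip(store):
    store.put("greeting", "hello")
    assert store.get("greeting") == "hello"
```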
So a single test can satisfy all four of these needs, but it doesn't have to, and it's better to figure out where the focus is for the test you are writing at the time. Some tests might only be there as documentation. It's fine for tests to evolve to support refactoring only when you need that refactoring. But don't assume that once a test is passing it automatically ticks all these boxes. At that point it's just a check.