Saturday, May 3, 2014
Unit testing the functional way
Recently I stumbled upon an interesting (though old) blog post by Christian Sunesson.
He has a point in saying that in OO programming, the good design practices that make testability possible are very close to some functional programming principles. Unit testing and TDD encourage cleanly separated operations, asserting on value objects, and reaching for the verification capabilities of mocks only as a last resort, or not at all. The purer a method is, the easier it is to test. John Sonmez also refers to this topic.
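As a minimal sketch of that last point (the function and its numbers are hypothetical, just for illustration): a pure function can be tested with nothing but input values and plain assertions, with no setup and no mocks.

```python
# A pure function: the result depends only on the arguments, no side effects.
def apply_discount(price: float, loyalty_years: int) -> float:
    """Price after a loyalty discount of 5% per year, capped at 25%."""
    discount = min(0.05 * loyalty_years, 0.25)
    return round(price * (1 - discount), 2)

# Testing it is just values in, assertions out -- nothing to mock or stub.
def test_apply_discount():
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 2) == 90.0
    assert apply_discount(100.0, 10) == 75.0  # capped at 25%
```

The test neither constructs collaborators nor verifies interactions; it asserts on returned values only.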
But how do we unit test with mostly pure functions and no mocks? How do we design a system out of small and large components while avoiding dependencies between them? After all, we need to test in isolation. This brings a picture like the one below:
We unit test only the blue parts of the system and bind them together using the large green one. That does not look very scalable. However, the green part, which is the application layer, might in turn be composed of a few subsystems (communicating with other such component groups), wrapping all the hard-to-unit-test logic, such as concurrency. Then only end-to-end tests could be used to verify it.
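A hypothetical sketch of that split (all names here are invented for illustration): the "blue" leaf components stay pure and unit-testable, while the "green" application layer owns the I/O and the wiring and is left to end-to-end tests.

```python
# "Blue" leaf components: pure functions, each trivially unit-testable.
def parse_order(line: str) -> tuple[str, int]:
    name, qty = line.split(";")
    return name.strip(), int(qty)

def total_items(orders: list[tuple[str, int]]) -> int:
    return sum(qty for _, qty in orders)

# "Green" application layer: side effects and glue live here only.
def run_report(lines: list[str]) -> None:
    orders = [parse_order(line) for line in lines]
    print(f"total items: {total_items(orders)}")

run_report(["apples; 3", "pears; 2"])  # prints "total items: 5"
```

Only the two pure functions need unit tests; the thin `run_report` wrapper is exercised through the system's end-to-end tests.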
Each of the non-dependent blue leaf components could consist of purely functional logic. But what about a leaf component that is overly complicated and should be refactored into several smaller ones? It is still a leaf from the outside point of view, but now it has dependencies inside.
To avoid mocks, the test isolation principle has to change: the root component is no longer tested in isolation -- it uses the actual smaller parts.
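A minimal sketch of this, with hypothetical components: the root is tested through its public interface, wired to its real sub-parts rather than to mocks, while the sub-parts keep their own direct tests.

```python
# Two small sub-parts extracted from an overly complicated leaf component.
def normalize(text: str) -> str:
    return " ".join(text.lower().split())

def tokenize(text: str) -> list[str]:
    return text.split(" ") if text else []

# The root component composes the real sub-parts -- no injected mocks.
def word_count(text: str) -> int:
    return len(tokenize(normalize(text)))

# The root's test exercises the actual composition...
def test_word_count():
    assert word_count("  Hello   World ") == 2

# ...while each sub-part still has its own direct test.
def test_normalize():
    assert normalize("  Hello   World ") == "hello world"
```

Nothing is stubbed out: `test_word_count` passes or fails based on the behavior of the genuine `normalize` and `tokenize`.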
But the isolation paradigm is there to let us quickly reason about failed tests. If a test run in isolation fails, we can be sure the faulty logic is the logic it invokes directly. That is not the case when testing dependent components without mocks: the fault might sit inside some sub-part invoked by the test of the compound object. It looks like finding the bug would be harder.
But let's assume that all the sub-parts are also covered by tests. Each component is tested on its own (but not in isolation), as part of a test "onion" -- every higher-level, more complex component test adds an extra layer to it. Then the failed tests will form a red cross section through this onion. Finding a bug in such a test suite is just a matter of getting to the core of the cross section -- locating the simplest failing component test.
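A toy sketch of the idea (components and the planted bug are hypothetical): a fault in the innermost component turns every test above it red, and the diagnosis is simply the innermost failing test.

```python
# Inner component with a planted bug: splits on commas, not whitespace.
def tokenize(text: str) -> list[str]:
    return text.split(",")

# Outer component built on top of the buggy inner one.
def word_count(text: str) -> int:
    return len(tokenize(text))

# Order the tests from the simplest component outward.
tests = [
    ("tokenize", lambda: tokenize("a b") == ["a", "b"]),
    ("word_count", lambda: word_count("a b") == 2),
]
failed = [name for name, check in tests if not check()]

# Both layers go red; the core of the cross section is the first failure.
print(failed)     # prints "['tokenize', 'word_count']"
print(failed[0])  # prints "tokenize" -- the component to fix
```

The outer failure is just a symptom; walking the red cross section down to its simplest member points straight at `tokenize`.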
This kind of approach clearly calls for tooling support. If the dependency graph of components could somehow be visualized hierarchically, locating such bugs would be simple, and many mocks could be avoided.