This morning I was discussing with a friend what our test code can tell us about the design of our solutions. It’s an interesting topic. The received wisdom is that big test set-ups mean the class or module being tested has too many dependencies, and is therefore almost certainly doing too much.
This is often the case, but not always. Let me illustrate with an example. Here’s an integration test for my Guitar Shack solution:
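The original listing isn’t reproduced here, so here’s a sketch of the shape of that test. The collaborator names (`Warehouse`, `SalesHistory`, `ReorderLevel`, `Alert`) are my stand-ins, not the actual Guitar Shack code — the point is the nested, multi-layer set-up, not the domain details:

```java
// Hypothetical sketch — collaborator names are illustrative stand-ins,
// not the original Guitar Shack listing.
import java.util.ArrayList;
import java.util.List;

interface Warehouse { int stockLevel(String productId); }
interface SalesHistory { double averageDailySales(String productId); }
interface ReorderLevel { int of(String productId); }
interface Alert { void lowStock(String productId); }

// One layer down: reorder level derived from sales rate and supplier lead time
class SalesBasedReorderLevel implements ReorderLevel {
    private final SalesHistory sales;
    private final int leadTimeDays;

    SalesBasedReorderLevel(SalesHistory sales, int leadTimeDays) {
        this.sales = sales;
        this.leadTimeDays = leadTimeDays;
    }

    public int of(String productId) {
        return (int) Math.ceil(sales.averageDailySales(productId) * leadTimeDays);
    }
}

class StockMonitor {
    private final Warehouse warehouse;
    private final ReorderLevel reorderLevel;
    private final Alert alert;

    StockMonitor(Warehouse warehouse, ReorderLevel reorderLevel, Alert alert) {
        this.warehouse = warehouse;
        this.reorderLevel = reorderLevel;
        this.alert = alert;
    }

    void productSold(String productId, int quantity) {
        if (warehouse.stockLevel(productId) <= reorderLevel.of(productId))
            alert.lowStock(productId);
    }
}

class StockMonitorIntegrationTest {
    // The big set-up: real objects constructed inside other objects' constructor calls
    static List<String> lowStockAlertsAfterSale() {
        List<String> alerts = new ArrayList<>();
        StockMonitor monitor = new StockMonitor(
            productId -> 10,                                  // stubbed warehouse at the edge
            new SalesBasedReorderLevel(productId -> 2.0, 7),  // real object, fed its own dependency
            alerts::add);                                     // capture outgoing alerts
        monitor.productSold("strat-001", 1);
        return alerts;
    }
}
```

Notice how much of the construction work is about wiring objects into other objects before `StockMonitor` itself is ever mentioned.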
The set-up for this test is pretty big. Does that mean my StockMonitor class has too many dependencies? Let’s take a look.
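Again, a sketch rather than the real listing — the interfaces are repeated here (with hypothetical names) so the snippet stands alone. The shape is what matters: one decision, three collaborators, all behind interfaces:

```java
import java.util.ArrayList;
import java.util.List;

interface Warehouse { int stockLevel(String productId); }
interface ReorderLevel { int of(String productId); }
interface Alert { void lowStock(String productId); }

// One job: decide, on each sale, whether a low-stock alert is needed.
class StockMonitor {
    private final Warehouse warehouse;
    private final ReorderLevel reorderLevel;
    private final Alert alert;

    StockMonitor(Warehouse warehouse, ReorderLevel reorderLevel, Alert alert) {
        this.warehouse = warehouse;
        this.reorderLevel = reorderLevel;
        this.alert = alert;
    }

    void productSold(String productId, int quantity) {
        if (warehouse.stockLevel(productId) <= reorderLevel.of(productId))
            alert.lowStock(productId);
    }
}

class Demo {
    static List<String> alertsWhenStockAtThreshold() {
        List<String> alerts = new ArrayList<>();
        new StockMonitor(id -> 5, id -> 5, alerts::add).productSold("tele-002", 1);
        return alerts;
    }
}
```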
That actually looks fine to me. StockMonitor essentially does one job, and collaborates with three other classes in my solution. The rest of the design is hidden behind those interfaces.
In fact, the design is like that all the way through. Each class only does one job. Each class hides its internal workings behind small, client-specific interfaces. Each dependency is swappable by dependency injection. This code is highly modular.
When we look at the unit test for StockMonitor, we see a much smaller set-up.
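A sketch of what that smaller set-up looks like (same hypothetical interface names as above, repeated so the snippet compiles on its own). Every direct collaborator is a one-line stub, and nothing below them exists at all:

```java
import java.util.ArrayList;
import java.util.List;

interface Warehouse { int stockLevel(String productId); }
interface ReorderLevel { int of(String productId); }
interface Alert { void lowStock(String productId); }

class StockMonitor {
    private final Warehouse warehouse;
    private final ReorderLevel reorderLevel;
    private final Alert alert;

    StockMonitor(Warehouse warehouse, ReorderLevel reorderLevel, Alert alert) {
        this.warehouse = warehouse;
        this.reorderLevel = reorderLevel;
        this.alert = alert;
    }

    void productSold(String productId, int quantity) {
        if (warehouse.stockLevel(productId) <= reorderLevel.of(productId))
            alert.lowStock(productId);
    }
}

class StockMonitorUnitTest {
    // Small set-up: only the direct collaborators, each a one-line stub
    static List<String> alertsWhenStockBelowReorderLevel() {
        List<String> alerts = new ArrayList<>();
        StockMonitor monitor = new StockMonitor(
            productId -> 3,   // stub: current stock level
            productId -> 4,   // stub: reorder level — no SalesHistory, no lead times
            alerts::add);     // record the alert, mock-style
        monitor.productSold("lp-003", 1);
        return alerts;
    }
}
```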
The nesting in the set-up for the integration test is a bit of a clue here.
This style of object construction is what I call “Russian dolls”. The objects at the bottom of the call stack are injected into the objects one level up, which are injected into objects another level up, and so on. Each object only sees its direct collaborators, and the lower layers are hidden behind their interfaces.
This is a natural consequence of the way I test-drove my solution: from the outside in, solving one problem at a time and using stubs and mocks as placeholders for sub-solutions.
So the big set-up in my integration test isn’t a sign that the class is doing too much, or of a lack of separation of concerns, because it’s a “Russian dolls” set-up. If it were a “flat” set-up, where every object is passed in as a direct parameter of StockMonitor’s constructor, then that would surely be a sign of StockMonitor doing too much.
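For contrast, here’s a sketch of that flat shape (hypothetical names again). With the sales history and lead time injected directly, `StockMonitor` has to do the reorder-level arithmetic itself — two jobs, not one:

```java
import java.util.ArrayList;
import java.util.List;

interface Warehouse { int stockLevel(String productId); }
interface SalesHistory { double averageDailySales(String productId); }
interface Alert { void lowStock(String productId); }

// Flat set-up smell: every dependency from every layer passed in directly,
// so the reorder-level calculation leaks into the monitoring logic.
class FlatStockMonitor {
    private final Warehouse warehouse;
    private final SalesHistory sales;
    private final int leadTimeDays;
    private final Alert alert;

    FlatStockMonitor(Warehouse warehouse, SalesHistory sales,
                     int leadTimeDays, Alert alert) {
        this.warehouse = warehouse;
        this.sales = sales;
        this.leadTimeDays = leadTimeDays;
        this.alert = alert;
    }

    void productSold(String productId, int quantity) {
        // a second responsibility that belonged behind an interface
        int reorderLevel = (int) Math.ceil(
            sales.averageDailySales(productId) * leadTimeDays);
        if (warehouse.stockLevel(productId) <= reorderLevel)
            alert.lowStock(productId);
    }
}

class FlatDemo {
    static List<String> run() {
        List<String> alerts = new ArrayList<>();
        new FlatStockMonitor(id -> 10, id -> 2.0, 7, alerts::add)
            .productSold("sg-004", 1);
        return alerts;
    }
}
```

The constructor parameter list here grows with every concern in the whole solution, which is exactly the warning sign.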
So, in some cases at least, big set-up != lack of modularity. What about the other way around? Does a small set-up always mean no problems in the solution design?
Before Christmas I refuctored my Guitar Shack solution to create some practice “legacy code” for students to stretch their refactoring skills on.
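A hedged sketch of that refuctored shape — not the actual exercise code. In the real exercise the stock and sales data came from external sources; in-memory maps stand in here so the sketch runs, but the key point survives: everything is hard-wired inside one class.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the "legacy" shape: data access, sales-rate maths, threshold
// logic and alerting all mashed into one class, none of it swappable.
class LegacyStockMonitor {
    private final Map<String, Integer> stockLevels = new HashMap<>();
    private final Map<String, Double> dailySales = new HashMap<>();
    private final int leadTimeDays = 7;
    final List<String> alertLog = new ArrayList<>();

    LegacyStockMonitor() {
        // hard-wired "data access" — no seam for a test double
        stockLevels.put("strat-001", 10);
        dailySales.put("strat-001", 2.0);
    }

    void productSold(String productId, int quantity) {
        int stock = stockLevels.getOrDefault(productId, 0) - quantity;
        stockLevels.put(productId, stock);
        int reorderLevel = (int) Math.ceil(
            dailySales.getOrDefault(productId, 0.0) * leadTimeDays);
        if (stock <= reorderLevel) {
            String alert = "LOW STOCK: " + productId;
            System.out.println(alert);
            alertLog.add(alert);
        }
    }
}
```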
I think it’s beyond any reasonable doubt that this class does too much. There’s almost no separation of concerns in this design.
Now, I didn’t write any unit tests for this (because “legacy code”), but I do have a command line program I can use to exercise StockMonitor for manual or shell script testing. Take a look at the set-up.
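Sketched below (the compressed legacy class is repeated so the snippet compiles on its own). The entire set-up is one no-args constructor call:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class LegacyStockMonitor {
    private final Map<String, Integer> stockLevels = new HashMap<>();
    private final Map<String, Double> dailySales = new HashMap<>();
    private final int leadTimeDays = 7;
    final List<String> alertLog = new ArrayList<>();

    LegacyStockMonitor() {
        stockLevels.put("strat-001", 10);  // hard-wired stand-in data
        dailySales.put("strat-001", 2.0);
    }

    void productSold(String productId, int quantity) {
        int stock = stockLevels.getOrDefault(productId, 0) - quantity;
        stockLevels.put(productId, stock);
        int reorderLevel = (int) Math.ceil(
            dailySales.getOrDefault(productId, 0.0) * leadTimeDays);
        if (stock <= reorderLevel) {
            String alert = "LOW STOCK: " + productId;
            System.out.println(alert);
            alertLog.add(alert);
        }
    }
}

// The whole "set-up": one line, because everything is hard-wired inside.
class Program {
    public static void main(String[] args) {
        new LegacyStockMonitor().productSold(args[0], Integer.parseInt(args[1]));
    }
}
```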
It’s pretty small. And that’s because StockMonitor’s dependencies are nearly all hard-wired inside it. Ironically, lack of separation of concerns in this case means a simple interface and a tiny set-up.
So, big set-ups don’t always point to a lack of modularity, and small set-ups don’t always mean that we have modularity in our design.
Of course, what the big set-up in our integration test does mean is that this test could fail for many reasons, in many layers of our call stack. So if all our tests have big set-ups, that in itself could spell trouble.