Parameterized tests (sometimes called “data-driven tests”) can be a useful technique for removing duplication from test code, as well as potentially buying teams much greater test assurance with surprisingly little extra code.
But they can come at the price of readability. So if we’re going to use them, we need to invest some care in making sure it’s easy to understand what the parameter data means, and that the messages we get when tests fail are meaningful.
Some testing frameworks make this harder than others; I’m going to illustrate using some Mocha tests in JavaScript.
Consider this test code for a Mars Rover:
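(To keep the example self-contained, I’m using a hypothetical `turnRight` function that cycles the rover through the four compass points – a stand-in for the real rover logic.)

```javascript
const assert = require('assert');

// Hypothetical rover logic, purely for illustration: turning right
// cycles the rover through the compass points N -> E -> S -> W -> N.
function turnRight(facing) {
  const compass = ['N', 'E', 'S', 'W'];
  return compass[(compass.indexOf(facing) + 1) % compass.length];
}

describe('Mars Rover', () => {
  it('turns right from N to E', () => {
    assert.strictEqual(turnRight('N'), 'E');
  });

  it('turns right from E to S', () => {
    assert.strictEqual(turnRight('E'), 'S');
  });

  it('turns right from S to W', () => {
    assert.strictEqual(turnRight('S'), 'W');
  });

  it('turns right from W to N', () => {
    assert.strictEqual(turnRight('W'), 'N');
  });
});
```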
These four tests are different examples of the same behaviour, and there’s a lot of duplication (I should know – I copied and pasted them myself!).
We can consolidate them into a single parameterized test:
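(Same `assert` import and hypothetical `turnRight` as before – a sketch of how the consolidated version might look.)

```javascript
describe('Mars Rover', () => {
  [{ input: 'N', expected: 'E' }, { input: 'E', expected: 'S' },
   { input: 'S', expected: 'W' }, { input: 'W', expected: 'N' }].forEach((testCase) => {
    // Mocha happily registers multiple tests with the same name
    it('turns right', () => {
      assert.strictEqual(turnRight(testCase.input), testCase.expected);
    });
  });
});
```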
While we’ve removed a fair amount of duplicate test code, arguably this single parameterized test is harder to follow – both at read-time and at run-time.
Let’s start with the parameter names. Can we make it more obvious what roles these data items play in the test, instead of just using generic names like “input” and “expected”?
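Maybe something like `startsFacing` and `endsUpFacing` – names that at least say what each value means to the rover:

```javascript
describe('Mars Rover', () => {
  [{ startsFacing: 'N', endsUpFacing: 'E' }, { startsFacing: 'E', endsUpFacing: 'S' },
   { startsFacing: 'S', endsUpFacing: 'W' }, { startsFacing: 'W', endsUpFacing: 'N' }].forEach((testCase) => {
    it('turns right', () => {
      assert.strictEqual(turnRight(testCase.startsFacing), testCase.endsUpFacing);
    });
  });
});
```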
And how about we format the list of test cases so they’re easier to distinguish?
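One test case per line makes each example much easier to pick out:

```javascript
describe('Mars Rover', () => {
  [
    { startsFacing: 'N', endsUpFacing: 'E' },
    { startsFacing: 'E', endsUpFacing: 'S' },
    { startsFacing: 'S', endsUpFacing: 'W' },
    { startsFacing: 'W', endsUpFacing: 'N' }
  ].forEach((testCase) => {
    it('turns right', () => {
      assert.strictEqual(turnRight(testCase.startsFacing), testCase.endsUpFacing);
    });
  });
});
```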
And how about we declutter the body of the test a little by destructuring the testCase object?
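Destructuring in the `forEach` callback gets rid of the repeated `testCase.` prefix:

```javascript
describe('Mars Rover', () => {
  [
    { startsFacing: 'N', endsUpFacing: 'E' },
    { startsFacing: 'E', endsUpFacing: 'S' },
    { startsFacing: 'S', endsUpFacing: 'W' },
    { startsFacing: 'W', endsUpFacing: 'N' }
  ].forEach(({ startsFacing, endsUpFacing }) => {
    it('turns right', () => {
      assert.strictEqual(turnRight(startsFacing), endsUpFacing);
    });
  });
});
```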
Okay, hopefully this is much easier to follow. But what happens when we run these tests?
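With Mocha’s default spec reporter, the output looks something like this (exact formatting will vary with your Mocha version and reporter):

```
  Mars Rover
    ✓ turns right
    ✓ turns right
    ✓ turns right
    ✓ turns right

  4 passing (5ms)
```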

It’s not at all clear which test case is which. So let’s embed some identifying data inside the test name.
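A template literal in the test name does the trick:

```javascript
describe('Mars Rover', () => {
  [
    { startsFacing: 'N', endsUpFacing: 'E' },
    { startsFacing: 'E', endsUpFacing: 'S' },
    { startsFacing: 'S', endsUpFacing: 'W' },
    { startsFacing: 'W', endsUpFacing: 'N' }
  ].forEach(({ startsFacing, endsUpFacing }) => {
    // The test name now identifies exactly which example is running
    it(`turns right from ${startsFacing} to ${endsUpFacing}`, () => {
      assert.strictEqual(turnRight(startsFacing), endsUpFacing);
    });
  });
});
```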
Now when we run the tests, we can easily identify which test case is which.
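Again with the default spec reporter, we get something like:

```
  Mars Rover
    ✓ turns right from N to E
    ✓ turns right from E to S
    ✓ turns right from S to W
    ✓ turns right from W to N

  4 passing (5ms)
```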

With a bit of extra care, it’s possible with most unit testing tools – not all, sadly – to have our cake and eat it with readable parameterized tests.