I’ve seen dev teams constantly tout the importance of having as many unit tests in a codebase as possible.
For one such group, this metric became so important that the leadership overseeing the team started a competition among the developers to track the “leaders” who had the most unit tests.
Engineering VPs began frequently sending out a chart listing the top unit test producers for each sprint. Each chart would be broken down like this:
| Name | # of unit tests committed during sprint |
| --- | --- |
| Person A | 12 |
| Person B | 5 |
| Person C | 2 |
| Person D | 2 |
They would stress in their communications that everyone should keep up the good work and keep adding more “good” unit tests. “We need more and more good unit tests,” they would repeat in nearly every email and meeting about improving the quality of the codebase.
Here’s the problem with this type of practice: it doesn’t promote the production of more good, high-quality unit tests; it promotes the production of more frivolous ones.
Hyper-focusing on the sheer volume of unit tests goes against one of the main reasons they exist: to increase the testability, maintainability, extensibility, and clarity of code. If you’re throwing unit tests out there just to raise the test count, you’re hindering your ability to achieve those goals, not helping it. Ironically, you end up adding one of the biggest things you aim to eliminate: wasteful, glitchy code overhead that serves no purpose. Keep in mind, too, that unit tests have to execute in order to perform the validation they were written for. Tests that a developer wrote just to hit an inconsequential goal and claim a top spot on a leaderboard are going to drag down the runtime of build pipelines (among other things) and slow down that developer’s own local testing, which can delay the rollout of bug fixes and new features.
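To make the distinction concrete, here is a minimal sketch in Python. The function and test names are hypothetical illustrations, not taken from any team described above; they contrast a count-padding test with one that actually pins down behavior:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical example: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# A "leaderboard" test: it runs, it passes, it bumps the test count,
# but it would still pass against almost any broken implementation.
def test_discount_returns_something():
    assert apply_discount(100, 10) is not None

# A useful test: it pins down the contract (exact values, boundaries,
# and the error path), so a regression would actually be caught.
def test_discount_contract():
    assert apply_discount(200, 25) == 150.0
    assert apply_discount(100, 0) == 100.0
    assert apply_discount(100, 100) == 0.0
    try:
        apply_discount(100, 150)
        assert False, "expected ValueError for an out-of-range percent"
    except ValueError:
        pass
```

Both tests count the same on a leaderboard chart, but only the second one guards against regressions.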
With any mature codebase, there’s often a great deal of bloat and oversight inherent in the code, the result of past development decisions that prioritized speed over quality and thoroughness. This “Tech Debt” makes it harder for developers to test, debug, and extend the surrounding code because of its bloat and verbosity. Unfortunately, addressing tech debt isn’t as “flashy” an endeavor as adding new features to the product. At the same time, stakeholders understand that this bloat makes the system harder to maintain and support, and impairs the organization’s ability to ship new functionality and bug fixes to customers.
This conflict, recognizing that tech debt hinders the ability to deliver quality results to clients quickly while not wanting to allocate the time necessary to clean it up, is what produces these “more unit tests” campaigns from leadership. They view the redundant, slow, and disorganized spaghetti code baked into the software as a sunk cost. They find it hard to understand why any time should be spent rewriting it, and instead believe everyone should move on and pour their attention and energy into writing unit tests for future code commits. This view is compounded by the fact that new features are what keep customers engaged with a product: customers don’t care about your tech debt or past development oversights; they just want new features built and current features made faster.
As such, it’s often tricky to get product managers and leadership on board with dedicating an entire sprint (or more) to tackling tech debt when a backlog of customer-requested features is waiting to be implemented. That’s why the push for tech debt removal needs to be sold skillfully. Whenever possible, point out that removing the convoluted code in Module X would cut the development time for Feature Y in half, or make a report 200% faster for a client to run. Hold routine meetings with PMs, management, and support staff to review recent escalations and bugs. Show how an issue could have been mitigated faster if getting to its root cause hadn’t involved hours of untangling spaghetti code. It’s easier to get buy-in when everyone realizes that messy code resulted in a customer not getting a fix in time and escalating a complaint to the president of the company.
The Bottom Line
Unit tests should absolutely be included in code wherever possible to strengthen the quality of the codebase. However, they should be added strategically, not just thrown around for the sake of adding more. Gamifying such metrics is pointless in any case, especially when so many other actions need to be taken to foster a stable system. Unit testing is not the sole magic solution to improving code integrity.
Moreover, adding new unit tests alongside new code does not retroactively fix existing tech debt or alleviate the overhead that debt imposes on the maintainability of a system. If the debt isn’t proactively taken care of, someone will eventually have to clean it up just so the surrounding code can support unit tests at all; past tech debt is never avoidable for very long.
Refactoring efforts and initiatives to optimize current modules and remove existing debt should be prioritized. This will strip existing overhead from the code and make it easier to write higher-quality unit tests in the future. A strong foundation of clean, organized, well-structured code paves the way for better unit tests down the line.
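As a small sketch of what that refactoring buys you (the report module here is hypothetical), extracting pure logic out of an I/O-heavy legacy function makes the resulting unit test trivial to write, with no files, no clock, and no mocks required:

```python
from datetime import date

# Before (hypothetical): formatting logic buried inside an I/O-heavy
# function. Testing it requires a real file system and today's date.
def write_report_legacy(rows, path):
    with open(path, "w") as f:
        f.write(f"Report {date.today().isoformat()}\n")
        for name, total in rows:
            f.write(f"{name}: {total:.2f}\n")

# After: the pure formatting logic is extracted, so a unit test can
# cover it directly by passing in rows and a fixed date.
def format_report(rows, report_date: date) -> str:
    lines = [f"Report {report_date.isoformat()}"]
    lines += [f"{name}: {total:.2f}" for name, total in rows]
    return "\n".join(lines) + "\n"

# The thin I/O wrapper that remains is barely worth unit testing.
def write_report(rows, path):
    with open(path, "w") as f:
        f.write(format_report(rows, date.today()))
```

A test of `format_report` can now assert on the exact output string, which is exactly the kind of high-quality unit test a cleaner foundation enables.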