The Case Against 100% Code Coverage
Going for Gold Can Mean Going for Broke
Code coverage is a measurement of how much of your codebase is exercised by your test suite. Having 100% coverage means that every line of code is invoked when your tests are run. This is often held up as an ideal target to aim for in testing. However, 100% code coverage can be misleading and counterproductive.
Here are some key reasons why 100% coverage should not be the ultimate goal:
100% coverage does not equal bug-free code. You can have fully covered code that still contains defects your tests did not catch. High coverage numbers often make teams feel safe and confident while bugs are still lurking underneath. A false sense of security is one of the biggest pitfalls of relying too heavily on coverage stats.
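As a minimal sketch of this pitfall (the `apply_discount` function and its test are invented for illustration, not taken from any real codebase):

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    # Defect: a discount over 100% yields a negative price,
    # but nothing here guards against that input.
    return price - price * (percent / 100)

def test_apply_discount():
    # This single test executes every line of apply_discount,
    # so a coverage report shows 100% -- yet the negative-price
    # defect above goes completely undetected.
    assert apply_discount(100.0, 10) == 90.0
```

Run under a coverage tool, this suite reports full coverage, even though `apply_discount(100.0, 150)` silently returns a negative price.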
Chasing the last 10–20% of coverage requires significant effort for minimal gain. Getting from 80–90% coverage to 100% typically needs intricate, hard-to-maintain tests to exercise all corner cases and uncommon flows through the code. The time spent getting to 100% could be much better spent adding value elsewhere, whether optimizing existing tests, pursuing automation opportunities, or developing new features. The closer you get to 100%, the more work for less meaningful return.
To hit 100% coverage, teams sacrifice good software design and test quality. In order to touch every single line and branch, tests often get bloated with duplication, tightly coupled to implementation details, and difficult to understand. It is an anti-pattern to add sleeps and waits purely to cover a complex section of code. Test quality and maintainability suffer in pursuit of the coverage target.
Some code paths should not be tested. Legacy code, generated code, and uncommon error-handling paths are often covered just to inflate the number. Those tests provide negligible value while incurring significant maintenance costs down the road. Thoughtfully excluding some code from testing can be prudent if it does not represent application logic.
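For Python teams using coverage.py, such exclusions can be declared explicitly rather than papering over them with low-value tests. A sketch of the configuration, with placeholder paths you would adjust to your own project:

```ini
# .coveragerc -- illustrative exclusions; adjust paths to your project
[run]
# Skip generated code and schema migrations entirely
omit =
    */generated/*
    */migrations/*

[report]
# Lines matching these patterns are excluded from the report
exclude_lines =
    pragma: no cover
    if TYPE_CHECKING:
    raise NotImplementedError
```

Declaring exclusions in config keeps the decision visible and reviewable, instead of buried in tests nobody wanted to write.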
More tests don’t equal better product. A high quality, well-tested product with 80% coverage is far superior to a poorly tested one with 100% coverage. The quality, scope, and thoughtfulness of your tests matter much more to your customers than hitting arbitrary coverage targets. A single end-to-end test can provide more value than dozens of unit tests.
Coverage metrics can be misleading. Lacking configuration details and context, the coverage percentage does not actually tell you how much application logic is tested. Two teams with 95% coverage could have wildly different test strategies in terms of value and rigor. Always pair coverage with manual reviews to get the full picture.
Gaming and cheating coverage is common. Devs often take “shortcuts” to inflate coverage stats, such as writing tests that merely execute code without asserting anything meaningful about its behavior. The metric gets gamed, while actual test value does not improve.
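A sketch of how this gaming looks in practice (the function and test names are hypothetical):

```python
def parse_port(value: str) -> int:
    """Parse a TCP port number from a string."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {value}")
    return port

def test_parse_port():
    # The call executes the happy path, so every touched line
    # counts as "covered" -- but with no assertion, this test
    # can never fail, no matter what parse_port returns.
    parse_port("8080")
```

Coverage climbs, but the suite’s ability to catch a wrong return value from `parse_port` is exactly zero.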
Exceptions and instrumentation skew results. Coverage tools miss code executed before instrumentation starts, such as early initialization. Conversely, a test that throws an exception before reaching its assertions still marks the executed lines as covered, over-reporting how much behavior was actually verified. The coverage percentage is an imperfect estimate.
Coverage expectations vary by code type. Testability and importance vary radically between front-end UI, API logic, the data access layer, and so on. A blanket expectation of 100% coverage ignores the unique challenges and value proposition of testing each type of code.
Timeliness matters. 100% coverage early on provides little insight into whether tests stay relevant. The more telling metric is coverage sustained over time and across refactors. Chasing coverage too early often results in throwaway tests.
So what should teams focus on instead? Here are some tips:
Maximize test value, not percentages. Expand tests for high-risk areas, major flows, and key use cases. Prune wasteful tests that no longer provide value.
Assess coverage with context. Review coverage against application functionality to spot gaps, not just statistics. Spot check low coverage areas for needed tests.
Set risk-based coverage goals. Set realistic expectations based on code quality and testability — e.g. 50% for UI, 80% for core logic. Increase coverage driven by risk analysis, not arbitrary targets.
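One way to encode differentiated targets, assuming a Python project using pytest with the pytest-cov plugin (the package paths are illustrative; the thresholds mirror the example above):

```shell
# Enforce a looser bar for hard-to-test UI code and a stricter
# one for core business logic, as separate CI steps.
pytest tests/ui   --cov=app/ui   --cov-fail-under=50
pytest tests/core --cov=app/core --cov-fail-under=80
```

Splitting the thresholds this way lets CI fail on genuinely under-tested core logic without forcing contortions in the UI layer.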
Reward reducing complexity. Promote good design and refactoring to simplify code and remove hard-to-test paths, reducing reliance on coverage vanity metrics.
Continuously refine tests. Improve test readability, reliability, speed, and duplication. Optimize your test suite for maintainability and bug-finding ability.
Add integration and end-to-end tests. Compensate for lower unit test coverage with higher-level tests that often provide more value.
Review test relevance after major changes. Consider coverage over time and across releases, not just at single points in time.
So don’t get distracted chasing the supposed perfection of 100% coverage. Think critically about what to test, and do it well. By focusing on risk and delivering quality test value over quantity, your users will get software they can depend on.