A team lead approached me asking what they should define as the minimum unit test coverage percentage and code complexity metrics. I had to think about my answer for a bit and came up with the following, which I think is worth sharing with others as well.
Unit-test Coverage %
The more good unit tests there are covering the code, the easier it will be to read, understand and change the code later on. An excellent answer to the question of what the coverage should be can be found on StackOverflow: http://stackoverflow.com/questions/90002/what-is-a-reasonable-code-coverage-for-unit-tests-and-why
If you still insist on having a target percentage, I’d personally recommend aiming for 100%, which is achievable if you use strict TDD. By strict I mean following the Red-Green-Refactor cycle with discipline. There’s a good presentation about this on InfoQ: http://www.infoq.com/presentations/TDD-as-if-You-Meant-It.
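As a minimal sketch of what one Red-Green-Refactor cycle looks like (the `is_leap_year` function and its test are hypothetical examples, with plain asserts standing in for a real test framework):

```python
# Red: write a failing test first -- at this point is_leap_year does
# not exist yet, so running the test fails.
def test_is_leap_year():
    assert is_leap_year(2024)        # an ordinary leap year
    assert not is_leap_year(1900)    # century years are not leap...
    assert is_leap_year(2000)        # ...unless divisible by 400

# Green: write just enough production code to make the test pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Refactor: with the test green, clean up names and structure,
# re-running the test after every small change.
test_is_leap_year()
```

Repeated in small steps, this discipline is what drives coverage toward 100% without coverage ever being the goal itself.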
To be more reasonable, especially if you already have an existing code base, you might set the target to 70–80%. Keep in mind, though, not to stare at the numbers alone! Bad tests also give you high coverage but not much confidence or easier refactoring. On this topic, too, there’s an excellent article available: http://www.sustainabletdd.com/2011/12/lies-damned-lies-and-code-coverage.html
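To see why coverage alone can mislead, here is a hypothetical example: both tests below give 100% line coverage of `discount`, but only the second one would actually catch a bug.

```python
def discount(price, percent):
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# A bad test: it executes every line of discount(), so a coverage tool
# reports 100%, but it asserts nothing about the results.
def test_discount_bad():
    discount(100, 20)
    try:
        discount(100, 150)
    except ValueError:
        pass

# A good test: same coverage, but it pins down the expected behaviour.
def test_discount_good():
    assert discount(100, 20) == 80.0
    try:
        discount(100, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_discount_bad()
test_discount_good()
```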
Code Complexity
The code complexity figure tells quite well how easy it is to read and change the code. It also affects unit testability, as smaller methods are generally easier to unit test. I’d aim for a cyclomatic complexity of less than 10 per method. More complex code usually contains more bugs as well, so if you find some parts of the code to be more complex, focus your testing efforts on that area.
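Cyclomatic complexity is essentially 1 plus the number of independent branch points in a method. Real analysis tools measure this far more precisely, but as a rough sketch it can be approximated in Python with the standard `ast` module:

```python
import ast

def cyclomatic_complexity(source):
    """Rough estimate: 1 + one point per branching construct."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        # Each if/elif, loop and exception handler adds a branch.
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
            complexity += 1
        # Each extra operand of and/or adds a branch as well.
        elif isinstance(node, ast.BoolOp):
            complexity += len(node.values) - 1
    return complexity

code = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # 3: one base path plus two branches
```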
We have one project where we have set up our Jenkins job so that it becomes unstable if the number of complex methods grows. This project now has an average method complexity of 5.25. Still, there is one method with high complexity (23, in fact) and about 50 methods with a medium level of complexity (10–20).
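The job itself works as a ratchet: it tolerates the complexity that is already there but flags the build when new complex methods appear. A sketch of that check (hypothetical function names; the actual job wires a complexity report into Jenkins rather than running a script like this):

```python
def complexity_ratchet(method_complexities, baseline, threshold=10):
    """Return True if the number of methods at or above the complexity
    threshold has not grown past the recorded baseline."""
    complex_count = sum(1 for c in method_complexities if c >= threshold)
    if complex_count > baseline:
        print(f"UNSTABLE: {complex_count} complex methods (baseline {baseline})")
        return False
    return True

# Existing debt (three methods at complexity >= 10) passes a baseline of 3...
print(complexity_ratchet([23, 15, 12, 5, 4], baseline=3))
# ...but introducing a fourth complex method fails the check.
print(complexity_ratchet([23, 15, 12, 11, 5], baseline=3))
```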
There might be a good reason for a high complexity number, e.g. an event handler with a long switch clause. While this might be avoided with polymorphism (see http://sourcemaking.com/refactoring/replace-conditional-with-polymorphism), doing so might also make the code more difficult to maintain.
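A small sketch of that trade-off, with hypothetical event types: the conditional version keeps all the logic in one place, while the polymorphic version lets you add a new event type without touching a growing conditional.

```python
# Conditional version: a long if/elif "switch" over the event type.
def handle_event(event_type, payload):
    if event_type == "click":
        return f"clicked {payload}"
    elif event_type == "key":
        return f"pressed {payload}"
    raise ValueError(f"unknown event: {event_type}")

# Polymorphic version: one handler class per event type; new types are
# added as new classes instead of new branches.
class ClickHandler:
    def handle(self, payload):
        return f"clicked {payload}"

class KeyHandler:
    def handle(self, payload):
        return f"pressed {payload}"

HANDLERS = {"click": ClickHandler(), "key": KeyHandler()}

def handle_event_polymorphic(event_type, payload):
    return HANDLERS[event_type].handle(payload)
```

Whether the second version is clearer depends on the case; for a short, stable switch the conditional may well be the easier code to maintain.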
It’s good to think early enough about code maintainability and the speed of development in the future. Unit test coverage and code complexity are factors which affect those a lot. Other things worth investing in are code reviews as a standard practice and pair programming. Both work as fast feedback cycles to catch not only defects but also code which is difficult to maintain.
In the end it’s up to you and your Product Owner to decide how much you invest in code quality. Code which is simple, produced with TDD and covered with good unit tests is cheaper to maintain in the future. On the other hand, if these factors are not taken into account, it’s like taking a loan from the bank. You’ll manage for a long time by just paying the interest, or part of it, but eventually you need to start paying back the debt as well. Or take another loan.
It’s more likely that you will produce Clean Code when you have proper measures in place to detect early when code is going bad. Also worth noting is that you should always aim to write as little code as possible. A nice article on this subject: http://mikegrouchy.com/blog/2012/06/write-less-code.html