Friday, 14 February 2014

Why I don't trust code coverage

I had a discussion with a colleague about code coverage the other day where I suggested that 100% code coverage means very little: you may have covered every line of code, but not necessarily every path through it. After the conversation ended, I thought I'd better check I wasn't talking utter nonsense.

Here is a contrived method and two unit tests.
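The original sample isn't reproduced here, but a minimal Python sketch in the same spirit might look like this: a function whose lines are all trivially coverable, with a hidden divide-by-zero on one untested branch combination (the function and test names are illustrative, not from the original post).

```python
def contrived(a, b):
    """Every line is easy to cover, but the path a=False, b=True
    divides by zero."""
    x = 0
    if a:
        x = 1
    if b:
        return 10 / x  # blows up when a is False
    return x

# Two unit tests that together execute every line of contrived()...
def test_both_true():
    assert contrived(True, True) == 10.0

def test_both_false():
    assert contrived(False, False) == 0

test_both_true()
test_both_false()
# ...yet the path a=False, b=True is never exercised, and calling
# contrived(False, True) would raise ZeroDivisionError.
```

Run these two tests under a line-coverage tool and every line of `contrived` is reported as covered, even though one of the four branch combinations crashes.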

Here are the code coverage results (screenshot omitted): the tool reports 100% coverage for the method.

As you can see, 100% code coverage is reported, but not all paths through the code have been tested. Yes, the code is dreadful, but I think it makes my point: just because every line of your code has been executed by a test, it doesn't mean your code has been thoroughly tested.
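The gap between line coverage and path coverage grows quickly: with n independent branches, a couple of tests can execute every line while the number of distinct paths is 2^n. A small sketch (the function is hypothetical, purely to count paths):

```python
from itertools import product

def f(a, b, c):
    """Three independent branches: 2**3 = 8 distinct paths."""
    x = 0
    if a:
        x += 1
    if b:
        x += 2
    if c:
        x += 4
    return x

# A single all-True test already executes every line of f(),
# yet each of the 8 input combinations takes a different path:
outcomes = {f(*args) for args in product([True, False], repeat=3)}
print(len(outcomes))  # 8
```

So a report of "100% line coverage" here could be earned by one test, while seven of the eight paths go unexamined.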

To make matters worse, suppose your tool reports 90% code coverage: there's no indication of how important the covered code is. What if the 10% with minimal coverage is the most critical part of your application?

Don't get me wrong, I'm not saying code coverage is a complete waste of time. As a general indication that an area of code needs a few more tests, it's great. But beyond that, not so much.
