Code review and code metrics are two facts of life that a good developer cannot (or should not) escape. Code metrics quantify traits of your source code and give developers feedback about its quality. They are usually generated by an automated process such as continuous integration (see Continuous Integration: Going Beyond Builds). Setting up a process to generate code metrics and collect them at intervals is quite easy. However, code metrics are a slippery slope: they also hand you tools to point a finger squarely at your developers and tell them how bad their code is, when in reality nothing may be wrong with it. It is very important to interpret the metrics in the context of your application and your developers. So let's have a look at a few of those metrics.
One of the most frequently used metrics is LoC (Lines of Code) or KLoC (Kilo Lines of Code). It is either the most liked metric (mostly by managers) or the most hated (mostly by developers). Simply put, it is the number of lines of code your developers hammer out, not counting comments, blank lines, and auto-generated code. Some people equate this to productivity, which may or may not be true depending on the context. More often than not this metric is not very useful by itself, but it is still worth measuring because it provides the base for other useful metrics such as LoC per class or function and bug density (the number of bugs per lines of code), both of which should always be low.
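To make the "not counting comments and blank lines" rule concrete, here is a minimal sketch of an LoC counter. The function name, the `#` comment prefix, and the sample snippet are all illustrative assumptions, not any particular tool's implementation:

```python
def count_loc(source: str, comment_prefix: str = "#") -> int:
    """Count lines that are neither blank nor full-line comments."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        # Skip blank lines and lines that are only a comment.
        if stripped and not stripped.startswith(comment_prefix):
            count += 1
    return count

sample = """
# a comment
x = 1

y = 2  # a trailing comment still counts as code
"""
print(count_loc(sample))  # 2
```

Real tools (cloc, SLOCCount, etc.) also handle block comments and generated files, but the core idea is this simple.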
Another metric that gets diligently measured is code coverage. Code coverage measures how much of the code written by a developer is actually exercised by the team's testing framework. Developers are usually handed a magical number to hit (85%?) and strive to reach that amount of coverage. However, it is quite possible to have high code coverage and not test anything at all. Chasing coverage purely to meet a number is not a good practice: when developers have to meet a certain threshold, you will often find coverage hovering at exactly that number and never rising above it. That does not guarantee high-quality code, although it is still better than low coverage or none at all. A better practice is to use Test Driven Development and use code coverage to identify edge cases that are not getting tested.
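The "high coverage, nothing tested" trap is easy to demonstrate. In this hypothetical sketch, the first test executes every line of `discount` (100% line coverage under a tool like coverage.py) yet asserts nothing, so a bug in the formula would sail through; the second actually verifies behavior:

```python
def discount(price: float, percent: float) -> float:
    """Apply a percentage discount, clamping the percentage at 100."""
    if percent > 100:
        percent = 100
    return price * (1 - percent / 100)

def test_discount_covers_but_checks_nothing():
    # Both branches run, so line coverage is 100%...
    discount(100.0, 10)
    discount(100.0, 150)
    # ...but with no assertions, confidence is roughly zero.

def test_discount_actually_tests():
    assert discount(100.0, 10) == 90.0   # normal path
    assert discount(100.0, 150) == 0.0   # clamping branch
```

Both tests report the same coverage number; only the second would catch a regression.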
Cyclomatic complexity is another favorite metric, possibly ranking above code coverage. It counts the number of unique paths, or decisions, taken through a unit of code, and in general indicates the readability and maintainability of that code and its potential for bugs. Lower values are considered better. At a basic level it is a count of all the control statements in your code, such as if, else, while, and case. The more of these you have, the more your code branches and the more complex it becomes for someone to test and debug.
Besides these, there are a lot of other metrics that get tracked and can prove useful:
- Cohesion – Cohesion measures how much the methods of a class or module belong together. If the methods are similar and perform related tasks toward a common goal, the module is said to be highly cohesive. A cohesive module is very reusable.
- Coupling – Coupling is the counterpart of cohesion. It measures how interdependent the modules in a system are. Low coupling is a sign of good system design that avoids a tangle of interdependencies.
- Function Point – A Function Point is a measure of the business functionality delivered by software to its users. Although it sounds subjective, there are objective ways of measuring it. See here for details. However, this measure is a little outdated and correlates heavily with LoC, so it comes with all the criticism that LoC does.
- There are many more metrics that people choose to track which I have not mentioned here.
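The cohesion/coupling trade-off can be sketched in a few lines. In this hypothetical example (all class names are invented for illustration), `Report` depends on a `Storage` protocol rather than a concrete database class, which keeps the modules loosely coupled and each one cohesive around a single job:

```python
from typing import Protocol

class Storage(Protocol):
    """Abstract interface: Report only knows about this, not any concrete store."""
    def save(self, name: str, data: str) -> None: ...

class InMemoryStorage:
    """A cohesive module: every method is about storing and retrieving data."""
    def __init__(self) -> None:
        self.files: dict[str, str] = {}

    def save(self, name: str, data: str) -> None:
        self.files[name] = data

class Report:
    def __init__(self, storage: Storage) -> None:
        # Coupled to an interface, not an implementation: swapping in a
        # database- or file-backed store requires no change to Report.
        self.storage = storage

    def publish(self, title: str, body: str) -> None:
        self.storage.save(title, body)

store = InMemoryStorage()
Report(store).publish("metrics", "coverage: 85%")
print(store.files["metrics"])  # coverage: 85%
```

Tools that report coupling metrics (afferent/efferent coupling, instability) essentially count how many such dependencies cross module boundaries.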
Ultimately every project is different and chooses which metrics to track or ignore. All the metrics in the world will not help you if you lack insight into your developers' development styles, your project's goals, and the context needed to interpret the numbers. Getting developer feedback on any thresholds you set is also very important, as some of those thresholds (85%!) can be a source of considerable stress for developers.
Lastly the best measure of code quality I have ever come across can be found here.
So what metrics do you track? Let's discuss.
Note: Originally published at http://www.nimkar.net/index.php/9-release-management/6-code-metrics-how-good-is-your-code