
On cyclomatic complexity

Not all of us are lucky enough to work with greenfield projects; the majority of us end up working with codebases that we inherit. These codebases have had their share of modifications that moved them away from the original elegant design: bug fixes, feature additions, and enhancements done in a hurry under delivery pressure, without regard for code hygiene or future maintainability. The code then feels heavy and complex, hard to understand, and painful to maintain.

There are a lot of ways to characterize code complexity, but the one that jumps to mind is "cyclomatic complexity," described in McCabe's 1976 paper (http://www.literateprogramming.com/mccabe.pdf). The paper describes how to measure the cyclomatic complexity of a program from its control flow graph--essentially the number of linearly independent paths through the code--and recommends a bound of 10 for manageable code complexity. NIST settles on the same number (http://hissa.nist.gov/HHRFdata/Artifacts/ITLdoc/235/chapter2.htm). The numbers, despite being somewhat arbitrary, provide a guideline for when to get out the scalpel and start cleaning up the code.
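To make the number concrete, the usual way to count it is one plus the number of decision points in a function: each if, loop, and case label adds one (some tools also count short-circuit && and || operators). Here is a minimal, made-up Java example and how it tallies up:

// Made-up example: complexity = 1 (base path)
//   + 1 (if) + 1 (&&) + 1 (for) + 2 (two case labels) = 6
public class ShippingCost {

    public static double cost(double weightKg, String mode, boolean express) {
        double total = 0.0;
        if (weightKg > 20.0 && !express) {    // +1 for the if, +1 for the &&
            total += 5.0;                     // heavy, non-express surcharge
        }
        for (int leg = 0; leg < 3; leg++) {   // +1 for the loop
            total += 0.5;                     // per-leg handling fee
        }
        switch (mode) {
            case "air": total += 10.0; break; // +1
            case "sea": total += 2.0;  break; // +1
            default:    total += 4.0;
        }
        return total;
    }
}

A method with a handful of branches like this is harmless; the metric starts to bite when dozens of such decision points pile up in a single function, which is exactly what the bound of 10 is trying to flag.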

I was curious about how our code fares, and luckily I found a lot of open source tools that help with that. For C/C++ programs, "cccc" is good. If you're using the ports system,

sudo port install cccc

does the trick. For Java code there are a lot of plugins for various IDEs; for NetBeans, Simple Code Metrics is good. The plugin page shows that it is old and unmaintained, but it works perfectly in NetBeans 7.1. You choose a file or a package, click the SCM icon, and it spits out metrics about the code, including the cyclomatic complexity.
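To run cccc itself, you just point it at the source files, something like

cccc src/*.cpp src/*.h

and, if I remember correctly, it drops an HTML report with per-function complexity numbers into a .cccc directory next to where you ran it.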

When I ran the tools on our code, the results were an eye-opener. The good news is that most of the code has a complexity below 15, with the occasional function high up in the 30s. Not alarming, but it definitely needs attention to keep the code hygiene high. The surprise came when I included some of the open source libraries that we use, and the numbers shot up as high as 206. When I talked to colleagues at other companies, the experiences were similar--most of the normal code hovers around 20, with the occasional function around 50, but no alarmingly high numbers like that 206.
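For the functions that do cross the line, the scalpel work is usually unglamorous: replace nested conditionals with early returns, and pull cohesive branches out into small, well-named methods so that no single method accumulates too many decision points. A rough sketch of the idea (the types and thresholds here are made up for illustration):

// Guard clauses plus an extracted helper keep each method's
// cyclomatic complexity small, even though the overall logic is unchanged.
interface Customer {
    boolean isActive();
    int yearsActive();
}

public class DiscountPolicy {

    public double discount(Customer customer, double orderTotal) {
        if (customer == null) {            // guard clause: +1
            return 0.0;
        }
        if (!customer.isActive()) {        // guard clause: +1
            return 0.0;
        }
        return loyaltyDiscount(customer, orderTotal);
    }

    private double loyaltyDiscount(Customer customer, double orderTotal) {
        if (customer.yearsActive() > 5) {  // +1
            return orderTotal * 0.10;
        }
        if (orderTotal > 1000.0) {         // +1
            return orderTotal * 0.05;
        }
        return 0.0;
    }
}

Each method stays around a complexity of 3, which keeps both the metric and the reader happy.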



