Modularity

Exploring the importance of modularity and how to measure it
Published on 2024/02/28

Right off the bat there are a few things I feel conflicted about when it comes to modularity. I would definitely not argue against the benefits of modularity as an organizing principle. Software systems developed with close attention to coupling, cohesion, and related measurements tend to be resilient. Good modularity (let me have this freebie without defining exactly what good means yet) can apply at several levels; system and code level are just two examples. In my mind, a module simply means a well-organized, semantically and logically cohesive group of related code. Structuring your system so that modules are loosely coupled allows for reusability and robustness. By robustness I mean that if a module needs to be replaced for whatever reason, the area of effect of that change is self-contained. If you need to make changes to its internals, again, the area of effect is limited.

But this is not where my conflict comes from; it is related to measuring. There are several ways to measure the modularity of your system, and when we talk about code we have cohesion, coupling, and connascence. I think there's a reason I haven't been part of a project or a company where these metrics were checked regularly, or even used to reject a specific design. In a way this is similar to how arbitrary code coverage targets can be harmful: they need interpretation and logical analysis that most of these tools can't provide.

Cohesion, for example, measures how closely the parts of a module belong together. But (and I was happy to find support for this from Mark Richards and Neal Ford) it can reveal a lack of structural cohesion, not whether two pieces logically fit together. Coupling takes it a step further by measuring afferent and efferent coupling (i.e. how many modules depend on my module vs. how many modules my module depends on). No matter how valuable this metric is, I find it difficult to justify blocking changes because they don't meet a certain modularity threshold. I haven't seen a compelling business case for this (though I suspect there are niche cases). An example of a metric that can block a release is performance: if certain changes degrade performance, it is much simpler to justify a "no-go" since it directly affects customers and the business.
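To make afferent and efferent coupling concrete, here is a small sketch that computes both, plus Robert C. Martin's instability metric (I = Ce / (Ca + Ce)), from a dependency map. The module names and their dependencies are made up for illustration.

```python
# Hypothetical module dependency map: module -> set of modules it depends on.
deps = {
    "orders":    {"billing", "inventory"},
    "billing":   {"inventory"},
    "inventory": set(),
    "reports":   {"orders", "billing"},
}

def efferent(module):
    """Ce: how many modules this module depends on."""
    return len(deps[module])

def afferent(module):
    """Ca: how many modules depend on this module."""
    return sum(module in targets for targets in deps.values())

def instability(module):
    """I = Ce / (Ca + Ce): 0 = maximally stable, 1 = maximally unstable."""
    ca, ce = afferent(module), efferent(module)
    return ce / (ca + ce) if (ca + ce) else 0.0

for m in deps:
    print(f"{m}: Ca={afferent(m)} Ce={efferent(m)} I={instability(m):.2f}")
```

In this toy graph, "inventory" scores I = 0 (everything depends on it, it depends on nothing) while "reports" scores I = 1. The numbers are easy to produce; deciding whether an unstable module is a problem is exactly the interpretation step the tooling can't do for you.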

Last but not least is connascence, which I can't pronounce right. I'll quote Meilir Page-Jones for this.

"Two components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system."

I'd recommend exploring more about connascence, and I think you'll find you already do some of this mentally if you're trying to design your system well. I like this metric because it's very pragmatic, but unlike the others it doesn't offer a formula you can measure easily. It is mostly about how strong the level of connascence is, with the goal of moving toward its weaker forms. So what's my beef with all this? I'm not a fan of arbitrary measures, and I believe they can do more harm than good. While modularity helps produce better designs, I think there's a reason specific measuring tools haven't made it into common software development practice.
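To illustrate moving toward a weaker form, here is a small sketch (the function and field names are hypothetical) of refactoring connascence of position into connascence of name:

```python
# Connascence of position: every caller must know the argument order.
# Reordering these parameters would silently break call sites.
def create_user_positional(name, email, admin):
    return {"name": name, "email": email, "admin": admin}

u1 = create_user_positional("Ada", "ada@example.com", False)

# Connascence of name (weaker): keyword-only arguments mean call sites
# only depend on the parameter *names*, and a rename fails loudly
# instead of silently swapping values.
def create_user(*, name, email, admin=False):
    return {"name": name, "email": email, "admin": admin}

u2 = create_user(name="Ada", email="ada@example.com")
```

Both calls build the same record, but the second one no longer breaks if someone reorders the parameters. That kind of judgment call, trading a stronger form for a weaker one, is easy to reason about and hard to capture in an automated score.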

Thoughts

Metrics for software quality have to carry meaningful measurements. Following formulas to adhere to a self-imposed standard might do more harm than good and hamper development. If you stumble across something along these lines, make sure to dig into the why. Some problems carry necessary complexity, which has to be distinguished from accidental complexity. Following sound design principles should be a must, but I don't think we are at a point where basing this on automated formulas is good enough. Will AI change this? Not sure, but I'm ready to be surprised!
