-
A piece of software can “feel” complex, but humans are biased
-
it’s important to have actual metrics to make investment decisions (i.e., being 📊 Data Driven)
-
The goal of measuring complexity is:
- Find complex parts to refactor and make them easier to maintain
- Prevent future bugs from being written (this is very difficult to assess)
-
Humans aren’t good at maintaining an accurate mental model of complex codebases (Cognitive Load Theory)
-
The article cites a research paper showing that most bugs originate in a small number of modules (Pareto Principle)
-
Measuring is not necessarily about providing certainty; it can be about reducing uncertainty (pointing to How To Measure Anything as a source)
-
Providing certainty is often not realistic in the real world. The solution is to rely on heuristics instead of perfect formulas
-
Experimentation works very well to reduce uncertainty
Examples of low-level metrics:
- Halstead Metrics (1977)
- Cyclomatic Complexity (1976)
- Counting Lines of Code
- Counting Code Indentation
- All of these metrics are overly simplistic on their own
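The two cheapest metrics above (lines of code and indentation) can be computed in a few lines. A minimal sketch (function and field names are my own, and treating 4 spaces as one indentation level is an assumption):

```python
def complexity_proxies(source: str) -> dict:
    """Two cheap complexity proxies for a chunk of code:
    non-blank line count and maximum indentation depth (4 spaces = 1 level)."""
    lines = [l for l in source.splitlines() if l.strip()]
    max_indent = 0
    for line in lines:
        # Indentation depth approximates nesting, a crude stand-in
        # for cyclomatic complexity without parsing the code
        indent = (len(line) - len(line.lstrip(" "))) // 4
        max_indent = max(max_indent, indent)
    return {"loc": len(lines), "max_indent": max_indent}

snippet = """
def f(x):
    if x > 0:
        for i in range(x):
            print(i)
"""
print(complexity_proxies(snippet))  # {'loc': 4, 'max_indent': 3}
```

Despite their simplicity, proxies like these are easy to track over time per module, which is often more useful than a one-off precise measurement.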
From a higher-level point of view, Coupling and Cohesion can be considered. Four categories of coupling can be defined (source)
- Structural Coupling: can be found by static code analysis
- Dynamic Coupling: coupling happening at runtime for example through dynamic binding or polymorphism
- Logical Coupling: code that changes together without necessarily having any structural coupling
- code-maat is a tool to analyse logical coupling
- Semantic Coupling: Not super clear from the article what this means
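The core idea behind logical-coupling analysis is counting how often pairs of files change in the same commit. This is not code-maat's actual implementation, just a toy sketch of that idea (the commit history and file names are hypothetical):

```python
from collections import Counter
from itertools import combinations

def logical_coupling(commits: list[list[str]]) -> Counter:
    """Count how often each pair of files changes in the same commit.
    High co-change counts suggest logical coupling even when there is
    no structural dependency between the files."""
    pairs = Counter()
    for files in commits:
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical history: each inner list is the files touched by one commit,
# e.g. extracted from `git log --name-only`
history = [
    ["billing.py", "invoice.py"],
    ["billing.py", "invoice.py", "utils.py"],
    ["utils.py"],
]
print(logical_coupling(history).most_common(1))
# [(('billing.py', 'invoice.py'), 2)]
```

Files that keep appearing together at the top of this ranking are candidates for either merging or for introducing an explicit, visible dependency.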