A system that assesses the quality of a codebase should rely on automated metrics rather than data gathered manually from engineers. This approach minimises manual effort and leverages existing systems to provide accurate, repeatable insights.
One key area of focus is runtime data, much of which can be derived from instrumentation that is already in place. For instance, we can analyse how often individual lines of code are executed and how frequently particular branches are taken, and automated test runs offer a convenient starting point for this analysis. However, it’s essential to note that metrics gathered from tests may not be representative of how the software system is used in the real world.
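As a minimal sketch of what line-execution counting involves, the snippet below uses Python’s `sys.settrace` hook to record how often each line runs while a function executes. The names `count_line_hits` and `classify` are illustrative rather than taken from any particular tool, and a dedicated tracer such as coverage.py collects this kind of data far more efficiently.

```python
import sys
from collections import Counter

def count_line_hits(target, *args, **kwargs):
    """Run `target` and count how often each (file, line) pair executes.

    A rough sketch: this traces every pure-Python frame, which is slow
    but enough to illustrate the idea.
    """
    counts = Counter()

    def tracer(frame, event, arg):
        if event == "line":
            counts[(frame.f_code.co_filename, frame.f_lineno)] += 1
        return tracer  # keep tracing nested frames

    sys.settrace(tracer)
    try:
        target(*args, **kwargs)
    finally:
        sys.settrace(None)  # always detach, even if target raises
    return counts

def classify(values):
    # Toy workload: one branch is taken far more often than the other.
    return ["big" if v > 10 else "small" for v in values]

if __name__ == "__main__":
    hits = count_line_hits(classify, range(100))
    for (path, lineno), n in hits.most_common(5):
        print(f"{path}:{lineno} executed {n} times")
```

Running this under a test suite instead of a toy workload gives exactly the starting point described above, with the same caveat: the counts reflect what the tests exercise, not what production traffic does.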
Another crucial aspect is version control data, which records how the codebase’s structure and behaviour evolve over time. We can identify hotspots by tracking how often each file changes, and then relate those hotspots to automated test coverage: a frequently changed file with little coverage is a clear risk. Monitoring which tests change after a code change is also revealing; if a small source edit routinely forces a large number of tests to change, that churn points to poorly structured code or brittle tests. Sketches of both analyses follow.
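First, hotspot detection. This hedged sketch counts how many commits touched each file by parsing the output of `git log --name-only`; the function name `change_frequency` is illustrative, not part of any git API.

```python
import subprocess
from collections import Counter

def change_frequency(repo_path, limit=20):
    """Rank files in a git repository by how many commits touched them.

    Frequently changed files are candidate hotspots worth a closer look.
    """
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    # With an empty commit format, the output is just the changed file
    # names, one per line, with blank lines between commits.
    counts = Counter(line for line in log.splitlines() if line.strip())
    return counts.most_common(limit)

if __name__ == "__main__":
    for path, n in change_frequency("."):
        print(f"{n:5d}  {path}")
```

Joining this ranking against per-file coverage data is then a simple lookup: the hotspots with the lowest coverage are the ones to worry about first.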
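Second, the test-churn signal. The sketch below walks recent commits and reports the share of changed files that are tests. It assumes tests live under a `tests/` directory, which is purely a convention of this example; the sentinel string and the `test_churn` name are likewise illustrative.

```python
import subprocess

# Marker used to split per-commit blocks; assumes no file path
# contains this string.
SENTINEL = "@@commit@@"

def test_churn(repo_path, test_prefix="tests/", max_commits=200):
    """For recent commits, return (sha, fraction of changed files that
    are tests). A persistently high fraction on small source edits can
    indicate tests coupled too tightly to implementation details.
    """
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"-n{max_commits}",
         "--name-only", f"--pretty=format:{SENTINEL} %H"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = []
    for block in log.split(SENTINEL)[1:]:
        lines = [l for l in block.splitlines() if l.strip()]
        sha, files = lines[0].strip(), lines[1:]
        if not files:  # e.g. merge commits list no files by default
            continue
        tests = sum(1 for f in files if f.startswith(test_prefix))
        report.append((sha, tests / len(files)))
    return report

if __name__ == "__main__":
    for sha, frac in test_churn("."):
        if frac > 0.5:  # arbitrary threshold, for illustration only
            print(f"{sha[:8]}: {frac:.0%} of changed files were tests")
```

The threshold and the prefix match are deliberately crude; the point is that the raw signal is cheap to compute from history the team already has.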