Saturday, 11 April 2015

Fuzzy Software Quality

When it comes to Software Quality there are several tools that try to measure it: test coverage, cyclomatic complexity, static analysis, technical debt, etc. We try to make these numbers look good, but bugs still get delivered, users get annoyed and engineers get frustrated. So, in the end, it was all just for the sake of having good-looking numbers.

I think Software Quality isn't something you can imprison in a single number. It doesn't deserve the precision or the strictness of numbers. It's more a questionnaire-kind-of-thing, where you ask a general question to the right stakeholder (e.g. the developer, the tester, the user) about the aspects of Software Quality you consider worth "measuring". The answer must be one of: strongly disagree, disagree, agree, strongly agree.
To me, the following would be the questionnaire-kind-of-questions worth asking ourselves and our customers or users:
  • Code Maintainability: As a developer, I am happy to make the next change. The code is in good shape and the time I spent on the last code-change was reasonably proportional to the complexity of the behavioral-change.
  • Code Quality: As a developer, when I need to make a code-change in an area I'm not an expert in, the time I spend reverse-engineering it is reasonably proportional to the behavioral-complexity of that area.
  • Product Quality: As a user, I am overall satisfied with the software's stability, performance, ease of use and correctness.
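
Purely for illustration, here is a minimal sketch of how those questions and the four allowed answers could be recorded. The names and structure are just my own sketch, not a real tool:

```python
from enum import IntEnum


class Answer(IntEnum):
    """The four allowed answers; deliberately no neutral option."""
    STRONGLY_DISAGREE = 1
    DISAGREE = 2
    AGREE = 3
    STRONGLY_AGREE = 4


# One question per aspect of Software Quality, each aimed at the right stakeholder.
QUESTIONNAIRE = {
    "Code Maintainability": ("developer", "I am happy to make the next change."),
    "Code Quality": ("developer", "Reverse-engineering unfamiliar code takes time "
                                  "proportional to its behavioral-complexity."),
    "Product Quality": ("user", "I am satisfied with stability, performance, "
                                "ease of use and correctness."),
}
```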
There is something that probably needs a bit of clarification. I referred to code-change and behavioral-change. These aren't commonly used terms, but I believe they're pretty easy to understand. Code-change means the actual change to the source code, whereas behavioral-change is the feature itself, from the user's point of view.

For example, since we are in 2015, chances are that sending an e-mail to the user will be considered a trivial behavioral-change. If implementing this feature required a lot of code-changes and took two days, then the answer to the Code Maintainability question is likely to be disagree. To add insult to injury, it took another couple of days just to reverse-engineer how the users list is handled, so strongly disagree is the answer to Code Quality. Nevertheless, the users are happy with our product so far and don't seem to complain too much about the time it takes to add features, so they answer agree to Product Quality. The following is how our "perception of Software Quality" would then look on a graph.
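
Since the graph itself isn't reproduced here, a crude, purely illustrative way to picture those three answers on the four-point scale (self-contained, names are just placeholders):

```python
# Likert positions, from worst to best.
SCALE = ["strongly disagree", "disagree", "agree", "strongly agree"]

# The answers from the e-mail feature example above.
perception = {
    "Code Maintainability": "disagree",
    "Code Quality": "strongly disagree",
    "Product Quality": "agree",
}

for aspect, answer in perception.items():
    # One '#' per step on the scale: a rough stand-in for the original graph.
    bar = "#" * (SCALE.index(answer) + 1)
    print(f"{aspect:22} {bar:4} {answer}")
```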
So what about all those fancy tools that measure test and branch coverage, calculate cyclomatic complexity, do static analysis and a lot more? They're useful. Definitely worth using. But not to measure Software Quality directly; rather, to build our own confidence that we're writing good code. If the test coverage is bad, the cyclomatic complexity is skyrocketing and the compiler spits out tons of warnings, then I would answer strongly disagree to Code Quality, without even asking myself how long it takes to reverse-engineer a bit of code.

But I'm not suggesting this as yet another Software Quality measuring tool. Software Quality is really hard to measure precisely. There won't be silver bullets and there won't be magic questions or magic answers. Just ask the stakeholders what they think and build your own confidence on it.
