What impact does software quality have on your business?
Authors: Jürgen Burger & Ralf Trapp
published on: 19.04.2024
Companies are constantly faced with the task of evaluating the quality of software systems - both when selecting new systems and when re-evaluating existing systems. The technical perspective usually plays a particularly important role in this evaluation - and therefore aspects such as the functional scope and user-friendliness of the system under consideration. Other aspects such as performance, portability, code quality, technology stack, customisability, etc. are often neglected - presumably because they are not known to the people involved or are difficult to grasp and evaluate. In the worst case scenario, this leads to a software system being incorrectly evaluated, which can be very expensive or even business-critical given the increasing relevance of software. In this article, we therefore want to shed light on the various aspects of software quality and thus create a basis for a more comprehensive and objective evaluation of software systems.
A comprehensive look at the topic of software quality is particularly worthwhile because it can prevent many problems later on and maximise potential benefits. The benefits of high software quality are high user acceptance (and therefore intensive use of the software), low downtimes and maintenance costs and therefore ultimately optimum support for the associated business objectives.
Our Definition of "Software Quality"
When different groups of people are asked what they understand by software quality, they give very different answers. A software developer at the manufacturer attaches importance to good opportunities to further develop the software without hurdles, continuously and without errors. A software developer at the system integrator, on the other hand, appreciates being able to adapt the standard behaviour of the system without intensive training and as easily as possible, i.e. to carry out customising. IT colleagues in data centre operations focus on good installability, updatability and conservation of resources. Software users want error-free and easy-to-use software that does what they expect. And the business would like the same software to achieve its business goals.
All stakeholders have their own personal focus on what exactly software quality means to them. This is what makes it so difficult to define a standardised language and the same understanding of software quality across all stakeholder groups.
In our business world, software is never an end in itself, but always serves - in the sense of a tool - a business objective. Every aspect of software quality must therefore be subordinate to this business objective. Software is also always written for users. In other words, people who use this software to do a job that is also subordinate to the business objective. It is therefore logical to align the criteria for software quality with the business objectives. Business goals correspond to clearly formulated expectations. That's why we say:
The less unexpectedly a software behaves, the higher its quality.
In this context, we prefer the term unexpected behaviour or software anomaly instead of the common word bug. This expresses much better that users expect a certain behaviour, but the software behaves differently. This changes the perspective from a technical-orientated one to an application-orientated one. Furthermore, a behaviour that is technically correct but undesired or unexpected by the user is classified as faulty.
An example: It has become common practice for the reject option of a dialogue to always be on the left: CANCEL on the left, OK on the right. From a purely technical point of view, there is no reason why these two buttons should not be swapped. However, most users would regard this as unexpected and therefore as a mistake. Until they get used to it, the user error rate will increase and the working speed will decrease. This unexpected behaviour therefore runs counter to the business objectives: software quality has been reduced.
Any unexpected behaviour therefore reduces software quality.
The various aspects of software quality
In the following, we take a closer look at the various aspects of software quality. Ultimately, software quality is defined by the sum of these (individually weighted) aspects.
(1) Functional scope
The most obvious aspect for most people is certainly the functional scope of software. For a long time, this was usually understood to mean the maximum possible range of functions - accordingly, the selection procedures were also characterised by maximum lists. In the meantime, more and more people have come to realise that the "jack of all trades" is never needed and that an unnecessarily wide range of functions can also have a negative impact on other aspects of software quality (such as usability, performance and maintainability). It therefore makes much more sense to determine the functionalities actually required today and in the foreseeable future on the basis of relevant use cases and to prioritise these according to their respective relevance. This provides a good basis for evaluating the functional scope of software in concrete terms.
(2) Performance
The aspect of performance is almost self-evident – but nevertheless important to mention. No user likes to wait, so the software should be performant enough to carry out the usual tasks quickly enough to avoid waiting times. Performance should be considered both for the current quantity structure and with regard to a possible growth path. It is important that the desired performance can be realised with reasonable resources and does not require an (expensive) expansion of the required hardware.
Jakob Nielsen stated back in 2010 that the response times of websites should not exceed 1 second. Nowadays, users expect programmes to respond within 200 ms or less. A longer response time is, quite literally, more than users will wait for.
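Such a response-time budget can be guarded by an automated check. The following is a minimal sketch, assuming a hypothetical `handle_request` function standing in for a real operation; the 200 ms budget is the figure from the text, not a universal constant.

```python
import time

# Hypothetical operation standing in for a real request handler.
def handle_request() -> str:
    return "ok"

def measure_response_ms(fn, runs: int = 100) -> float:
    """Return the average response time of fn in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) * 1000 / runs

avg_ms = measure_response_ms(handle_request)
# Flag the operation as too slow against the 200 ms budget.
assert avg_ms < 200, f"response took {avg_ms:.1f} ms, budget is 200 ms"
```

Run as part of the regular test suite, a check like this turns the performance expectation into something that fails loudly instead of degrading unnoticed.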
(3) User-friendliness
The user-friendliness of software is an increasingly important aspect - especially if this software is not only used by a few experts, but also by so-called occasional users who only need the software rarely or only for a limited use case. If the software does not enable intuitive use, the hurdle for this (increasingly large) group of users is often too high. In order to enable intuitive use, it should therefore be possible to customise the interface to the respective requirements or use case so as not to offer unnecessary functionalities. It is also helpful if the design of the interfaces is based on standard design criteria and no "design experiments" are carried out. This means that the software meets the expectations that users have based on common design criteria and thus avoids unexpected behaviour.
(4) Portability
The portability of software refers to its ability to switch easily from one environment to another. Portable software can run smoothly on different systems or platforms, be it on different operating systems, hardware configurations or cloud services. This quality feature makes it possible to reduce the costs and effort involved in adapting to different environments.
Well-portable software is flexible and independent of specific technological restrictions. It facilitates smooth implementation and maintenance, which increases the company's agility and ensures the long-term adaptability of the software.
If the software is to run on several platforms, it should not behave unexpectedly, i.e. differently, on any of them.
(5) Code quality
The quality of the programme code has a major impact on the quality and life cycle of the software. Code quality lays the foundation for freedom from errors, testability and maintainability.
The typical life cycle of software is as follows: It is created or designed once in the implementation phase and then continuously maintained, enhanced, given new features, populated with data for many, many years until it is finally decommissioned or replaced by other software. As a result, much more effort is invested in maintenance and further development over the entire life of the software than in its initial creation.
Software manufacturers are therefore well advised to ensure that this further development can take place under good conditions and with low risk.
Regression errors
When the further development of software produces errors in existing code, we speak of "regression errors". Regression errors are errors that occur after a change to the code, even though they did not occur before. They typically occur in places that are not directly affected by the adaptation of the code.
A fictitious example: A new field is inserted into a dialogue in a piece of software. The underlying data structures are adapted for this. However, other parts of the software cannot handle the adapted data structures and suddenly behave unexpectedly.
To avoid such errors, it is important to carry out regression tests. This ensures that the existing functions of the system continue to work as expected even after further development.
Strictly speaking, the entire software must be retested each time a line of code is developed or changed. This is the only way to ensure that no regression errors have crept in.
In practice, this is often neglected. This is because it is far too time-consuming and expensive to test the entire software manually every time a small change is made. As a rule, only areas that are directly affected by the adaptation are tested, and in the best case, neighbouring areas are also tested.
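A regression test in the sense described above can be very small. The following sketch uses a hypothetical `net_price` function as the existing behaviour to be protected; the point is that the tests pin down today's expected behaviour so that later changes elsewhere in the code cannot silently alter it.

```python
# Hypothetical domain function whose existing behaviour must not regress.
def net_price(gross: float, vat_rate: float = 0.19) -> float:
    """Net price derived from a gross price, rounded to cents."""
    return round(gross / (1 + vat_rate), 2)

# Regression tests: they document and freeze the expected behaviour.
def test_default_vat():
    assert net_price(119.0) == 100.0

def test_reduced_vat():
    assert net_price(107.0, vat_rate=0.07) == 100.0

test_default_vat()
test_reduced_vat()
```

If a later change to the underlying data structures breaks this behaviour, the test fails immediately, long before a user encounters the unexpected behaviour.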
Automated tests
The effort required to avoid regression errors increases exponentially with the number of software features and functions. The testing effort for software that is developed over a long period of time therefore increases immensely.
In the medium and long term, this testing effort can only be reduced with automated software tests. Human testing must be reduced to a minimum.
In order for software to be tested automatically at all, the components must be decoupled and isolated as far as possible. It must be defined how the components communicate with each other and the code should be kept up to date through regular refactoring.
Decoupling and encapsulation
Software consists of many individual components that should be able to work independently of each other. They should be decoupled from each other and encapsulated within themselves.
If they are not, then the more tightly the components are networked and interlinked, the more strongly they influence each other. This increases the probability that a change to one component will affect another component.
The following applies here: code that is written for execution on behalf of several actors violates the principle of clear responsibility. If this principle is violated, the code will degenerate over time into a "big ball of mud".
What sounds fun at first has a serious background. Code like this is no longer maintainable. Any further development, such as the addition of new features, will almost always have undesirable side effects. This means that defects also arise with every new feature. These usually occur in places that are not direct neighbours of the adapted code.
If there are no automated tests, or only manual ones, then these defects remain undetected, the software is shipped with them, and they are only discovered by the customer. The software thus becomes banana software - it ripens at the customer's site.
Software manufacturers are therefore well advised to continuously and meticulously ensure that the innards of the software are cleanly separated from each other and tested automatically throughout.
In other words: individual components should retain their behaviour, even if they are used in a larger context. They should not change their behaviour unexpectedly.
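Decoupling of this kind can be sketched with an explicitly defined contract between two components. The following is a minimal illustration, not a prescribed design; all names (`Notifier`, `OrderService`, `RecordingNotifier`) are hypothetical.

```python
from typing import Protocol

class Notifier(Protocol):
    """Narrow, explicitly defined contract between two components."""
    def send(self, message: str) -> None: ...

class OrderService:
    """Depends only on the Notifier contract, not on a concrete
    implementation - so the two components remain decoupled."""
    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier

    def place_order(self, item: str) -> str:
        order_id = f"order-{item}"
        self._notifier.send(f"placed {order_id}")
        return order_id

class RecordingNotifier:
    """Test double: allows OrderService to be tested in isolation."""
    def __init__(self) -> None:
        self.messages: list[str] = []
    def send(self, message: str) -> None:
        self.messages.append(message)

notifier = RecordingNotifier()
service = OrderService(notifier)
assert service.place_order("book") == "order-book"
assert notifier.messages == ["placed order-book"]
```

Because `OrderService` only knows the contract, it retains its behaviour regardless of which concrete notifier is plugged in, and it can be tested automatically without the rest of the system.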
Refactoring
Refactoring is the term used to describe changing the source code of programs without affecting the observable behaviour of the software.
Why should something that works be changed without changing anything?
The question is similar to the question: Why should I clean my flat? It's my flat before I clean it, it's my flat afterwards. It's just cleaner afterwards than it was before. The function remains exactly the same.
The question of the purpose of cleaning does not normally arise. It seems logical that we bring our property up to date from time to time and even renovate it from time to time. Our vehicles are also regularly serviced and wearing parts replaced.
We do this so that we can continue to enjoy them in the future and avoid breakdowns.
This is also a form of "refactoring": we improve the current condition of the items without changing their function.
The situation is very similar for software: due to further developments of the software, other parts of the software age in comparison to the further developments. This calls for a renovation, a sprucing up of these old software parts - a refactoring.
The dangerous thing: initially, it makes little difference whether refactoring takes place or not. But the longer refactoring is postponed, the greater the technical debt, the longer functional enhancements take, and the more error-prone future development becomes.
The more complex the software has become, the more time-consuming the tests become. If there are no automated tests, refactoring is often postponed indefinitely. True to the motto "Never change a running system", it is only carried out when there is no other option.
As a result, the software becomes increasingly difficult to adapt and eventually gets stuck in the status quo. At this point, the only thing that often helps is a complete rewrite.
Existing and regularly executed tests and regular refactoring are therefore important criteria for good and continuous software quality.
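A small sketch of what such a refactoring looks like in practice, with hypothetical shipping-cost functions: the structure of the code improves, while an automated test guards the definition of refactoring given above - the observable behaviour must be identical before and after.

```python
# Before: redundant, duplicated conditions (hypothetical example).
def shipping_cost_v1(weight_kg: float) -> float:
    if weight_kg <= 1:
        return 3.90
    if weight_kg > 1 and weight_kg <= 5:
        return 5.40
    return 8.00

# After refactoring: same observable behaviour, clearer structure
# that is easier to extend with further tiers.
_TIERS = ((1, 3.90), (5, 5.40))

def shipping_cost_v2(weight_kg: float) -> float:
    for limit, cost in _TIERS:
        if weight_kg <= limit:
            return cost
    return 8.00

# The existing test suite makes the refactoring safe: any change in
# observable behaviour would fail here.
for w in (0.5, 1.0, 3.0, 5.0, 12.0):
    assert shipping_cost_v1(w) == shipping_cost_v2(w)
```

This is exactly why the previous sections belong together: without such automated tests, even a harmless-looking clean-up carries the risk of regression errors, and refactoring gets postponed.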
(6) Technology stack
Under certain circumstances, it can make sense to use a widespread technology stack; this at least has the advantage that there are a large number of experts who can handle it, so that you are not dependent on just a few. If a customer also intends to customise and/or extend the software themselves, they will also attach importance to ensuring that the technology stack matches the expertise of their own specialists.
(7) Customisability
A decisive quality criterion for the selection and use of software is how well, quickly, conveniently and cheaply this software can be adapted to your own business case. More importantly, how well can the software be adapted to ongoing changes in your own business case?
In today's world, good customer service is becoming a competitive advantage because customer expectations are so high. If companies have to forego improving their customer service because adapting the necessary software is too time-consuming or expensive, they will fall behind. This is because the competition may be able to adapt customer-related processes and thus gain a competitive advantage.
In today's VUCA world, it is also becoming increasingly difficult to predict which measures will be accepted by software users or customers. This means that it must be possible to try out adjustments and measures to see how they are received and whether the target group rewards them. Quick and cost-effective customisation of the software used is also essential for this.
For this reason, it is necessary to customise the software without the need for specialist or expert knowledge. Only in this way can an appropriate number of people make changes quickly and cheaply.
And how can software quality be checked?
The lower the willingness to bear risk, the higher the quality bar applied. If the costs of a potential risk are high, or there are other reasons to reduce the probability of unexpected behaviour, then it is worth investing in higher quality.
The interesting thing is that it is not the manufacturers who decide how much quality is required. Only the users, the purpose of the application and the consequences of unexpected behaviour determine how high the quality bar should be.
(1) Unexpected behaviour
We understand software quality as a function of unexpected behaviour. In other words: The more often the software behaves unexpectedly, the poorer the software quality. Consequently, the number of unexpected behaviours is a possible measurement criterion for software. To ensure that this measurement criterion does not deteriorate if it is used more often or runs for longer, it makes sense to normalise this criterion (e.g. "Unexpected behaviour per user" or "Unexpected behaviour per hour of runtime").
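Such a normalised metric can be computed trivially; the value of normalising it shows up as soon as two releases with different usage volumes are compared. All figures below are hypothetical.

```python
# Hypothetical figures for two releases of the same product.
reports = {
    "release 1.4": {"anomalies": 18, "runtime_hours": 12_000},
    "release 1.5": {"anomalies": 11, "runtime_hours": 4_000},
}

def anomalies_per_1000h(anomalies: int, runtime_hours: float) -> float:
    """Normalised quality metric: unexpected behaviours per 1000 h."""
    return round(anomalies / runtime_hours * 1000, 2)

for name, r in reports.items():
    rate = anomalies_per_1000h(r["anomalies"], r["runtime_hours"])
    print(f"{name}: {rate} anomalies per 1000 runtime hours")
```

Note how the normalisation changes the picture: release 1.5 has fewer anomalies in absolute terms (11 versus 18), yet at 2.75 versus 1.5 anomalies per 1000 runtime hours it is the poorer release - precisely the distortion that raw counts would hide.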
Experience shows that software manufacturers rarely calculate these figures. And if they do, potential customers are unlikely to find out. Nevertheless, asking questions during the software selection process costs nothing.
There are also a number of other indirect indicators that allow conclusions to be drawn about software quality.
(2) Releases
How quickly could a new release be made available?
The fact that a release is technically ready and when or whether it will be published are two different events that do not have to be coupled.
The question of the minimum time in which a release could be produced is a possible indication of software quality. If the time is in the range of minutes or hours, this indicates a high degree of automation in testing and building the software.
If we are talking about days, weeks or more, this indicates many manual steps.
Rapid deployment is particularly necessary for security-relevant corrections.
Another indicator of good software quality: does it take little effort to install a new release? Can this be done within seconds or minutes and "at the touch of a button", as in modern smartphones? Or does it require time-consuming work, possibly even from integration partners? The easier it is to carry out updates, the better.
(3) Support
How many customers do support staff look after on average?
The size of the support team in relation to the customer base is another indicator of software quality. As a general rule, the lower the quality, the more support is required. If the support team is too large, scepticism is advisable.
(4) Employee fluctuation
How high is employee turnover at the software manufacturer?
The more often employees change, the lower the level of software expertise in the company. As that knowledge drains away, the rate of programming errors rises.
Portals such as Kununu or Glassdoor could provide useful information or warning indicators here.
(5) Self Service
Is there a support self-service area?
A good self-service area not only relieves the burden on support staff, but can also be used as an indicator of the simplicity of the software. The more complicated the software is to operate, maintain or adapt, the more difficult it will be to set up a self-service area that provides users with sufficient assistance in the event of unexpected behaviour.
The less unexpectedly the software behaves, the larger the community and the more help users can give each other. Support is therefore relieved.
(6) Plug-Ins
Plug-ins are additional modules that can be seamlessly integrated into the existing software to add specific functions or extensions. These extensions make it possible to customise the basic functionality of your software as required without changing the core structure. This ensures flexible adaptation to business requirements.
The art of a good modularisation concept is to allow adjustments to the software without hindering updates.
At the same time, the software must be adaptable to all required business cases.
If a manufacturer can credibly demonstrate both, this is a strong indication of good software quality.
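The idea behind such a modularisation concept can be sketched as a simple plug-in registry: extensions hook into the core without the core structure being changed. The registry, the `export_order` function and the formats below are all hypothetical.

```python
from typing import Callable

# Registry of extensions; the core only ever talks to this table.
PLUGINS: dict[str, Callable[[dict], str]] = {}

def plugin(name: str):
    """Decorator that registers an extension under a format name."""
    def register(fn):
        PLUGINS[name] = fn
        return fn
    return register

def export_order(order: dict, fmt: str) -> str:
    # Core code knows only the registry, never the concrete plug-ins,
    # so updates to the core do not touch customer-specific extensions.
    if fmt not in PLUGINS:
        raise ValueError(f"no plug-in for format {fmt!r}")
    return PLUGINS[fmt](order)

@plugin("csv")
def to_csv(order: dict) -> str:
    return ",".join(f"{k}={v}" for k, v in order.items())

@plugin("text")
def to_text(order: dict) -> str:
    return "; ".join(f"{k}: {v}" for k, v in order.items())

assert export_order({"id": 7}, "csv") == "id=7"
assert export_order({"id": 7}, "text") == "id: 7"
```

Because the extensions live entirely outside the core, an update can replace the core without breaking the adaptations - the balance between customisability and updatability that the text describes.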
(7) API
Is there an API and how complete is it and the associated documentation?
In today's world, software islands are becoming increasingly rare. Data must be exchanged, processed, refined, forwarded and returned between systems. APIs are the means of choice for this. If the API does not map all the functionalities of the software or if there are gaps in the documentation, this is a strong indication that the software is outdated.
If the API documentation is generated automatically using standards such as Swagger/OpenAPI, this is an indication of modern software of good quality. A focus on the MACH principles (microservices, API-first, cloud-native, headless) also tends to speak in favour of modern, high-quality, scalable and easily maintainable software.
The Quintessence
There is a whole range of aspects that influence the quality of software. And since software quality has a major influence on both the follow-up costs and the achievable benefits, a comprehensive consideration of these aspects in connection with the evaluation of software systems is strongly recommended.
Depending on the respective framework conditions, each company must determine for itself the weighting of the individual aspects in the software evaluation.
Interested in an exchange?
Would you like to find out more about the topic described above? Then get in touch with us today.