The EAGLES work takes as its starting point an existing Standard, ISO 9126 (ISO91a), which is concerned primarily with the definition of quality characteristics to be used in the evaluation of software products. ISO 9126 sets out six quality characteristics, which are intended to be exhaustive. From this it follows that each quality characteristic is very broad. We shall not recapitulate all six here, but will give two as illustrative examples. The first is functionality:
A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.
- This set of attributes characterises what the software does to fulfill needs, whereas the other sets mainly characterise when and how it does so.
- For the stated and implied needs in this characteristic, the note to the definition of quality applies (see 3.6).
Since this note (3.6) will prove critical to later argumentation, we reproduce it here:
NOTE: In a contractual environment, needs are specified, whereas in other environments, implied needs should be identified and defined (ISO 8402:1986, note 1).
A second quality characteristic that will be important in what follows is usability:
A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
- ``Users'' may be interpreted as most directly meaning the users of interactive software. Users may include operators, end users and indirect users who are under the influence of or dependent on the use of the software. Usability must address all of the different user environments that the software may affect, which may include preparation for usage and evaluation of results.
- Usability defined in this International Standard as a specific set of attributes of a software product differs from the definition from an ergonomic point of view, where other characteristics such as efficiency and effectiveness are also seen as constituents of usability.
A key point here is that quality characteristics are the top level of a hierarchical organisation of attributes: each characteristic may be broken down into quality sub-characteristics, which may themselves be further broken down. Specific evaluations or specific views of software quality may imply that some attributes are considered to be more important than others. ISO mentions the views of the user, the developer and the manager. The manager's view is quoted here in illustration.
A manager may be more interested in the overall quality rather than in a specific quality characteristic, and for this reason will need to assign weights, reflecting business requirements, to the individual characteristics.
The manager may also need to balance the quality improvement with management criteria such as schedule delay or cost overrun, because he wishes to optimise quality within limited cost, human resources and time-frame.
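The manager's weighting of individual characteristics amounts to a simple weighted aggregation over the hierarchy's top level. A minimal sketch follows; the characteristic names are those of ISO 9126, but every score and weight below is invented purely for illustration, since the Standard itself prescribes no values.

```python
# Hypothetical weighted overall-quality score from a manager's viewpoint.
# Characteristic names follow ISO 9126; all scores and weights are invented.

# Each characteristic scored on a 0-1 scale (example values only).
scores = {
    "functionality":   0.80,
    "reliability":     0.70,
    "usability":       0.60,
    "efficiency":      0.90,
    "maintainability": 0.50,
    "portability":     0.75,
}

# Weights reflecting (hypothetical) business requirements; they sum to 1.
weights = {
    "functionality":   0.30,
    "reliability":     0.25,
    "usability":       0.20,
    "efficiency":      0.10,
    "maintainability": 0.10,
    "portability":     0.05,
}

# The manager's overall quality is the weighted sum of the characteristics.
overall = sum(scores[c] * weights[c] for c in scores)
print(f"Overall quality: {overall:.2f}")
```

In a fuller model each top-level score would itself be aggregated from sub-characteristic scores, mirroring the hierarchical breakdown described above.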
The quality characteristics are accompanied by guidelines for their use. As we shall see, each attribute is associated with one or more metrics, which allow a value for that attribute to be determined for a particular system. As ISO 9126 points out:
Currently only a few generally accepted metrics exist for the characteristics described in this International Standard. Standards groups or organisations may establish their own evaluation process models and methods for creating and validating metrics associated with these characteristics to cover different areas of application and lifecycle stages. In those cases where appropriate metrics are unavailable and cannot be developed, verbal descriptions or ``rule of thumb'' may sometimes be used.
The guidelines nonetheless suggest an evaluation process model, which breaks down into three stages.
First comes the quality requirements definition, which takes as input a set of stated or implied needs, relevant technical documentation and the ISO Standard itself and produces a quality requirement specification.
The second stage is that of evaluation preparation, which involves the selection of appropriate metrics, a rating level definition and the definition of assessment criteria. Metrics, in ISO 9126, typically give rise to quantifiable measures mapped on to scales. The rating levels definition determines what ranges of values on those scales count as satisfactory or unsatisfactory. Since quality refers to given needs, which vary from one evaluation to another, no general levels for rating are possible: they must be defined for each specific evaluation. Similarly, the assessment criteria definition involves preparing a procedure for summarising the results of the evaluation and takes, as input, management criteria specific to a particular environment which may influence the relative importance of different quality characteristics and sub-characteristics. This definition, too, is therefore specific to the particular evaluation.
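The rating levels definition described above amounts to partitioning each metric's scale into ranges and naming each range. A minimal sketch, assuming a metric mapped onto a 0–1 scale; the threshold values and level names are invented, since, as the text notes, they must be fixed afresh for each specific evaluation:

```python
# Hypothetical rating-level definition for one metric on a 0-1 scale.
# Thresholds and level names are invented for illustration only.

def rating_level(measured_value, boundaries=(0.4, 0.7, 0.9)):
    """Map a measured value to a rating level.

    boundaries: ascending thresholds separating four invented levels.
    """
    levels = ["unacceptable", "minimally acceptable",
              "satisfactory", "exceeds requirements"]
    for threshold, level in zip(boundaries, levels):
        if measured_value < threshold:
            return level
    return levels[-1]

print(rating_level(0.55))  # falls in the 0.4-0.7 range
print(rating_level(0.95))  # above the highest threshold
```

Because quality refers to given needs, a different evaluation would supply different boundaries (and possibly different level names) for the same metric.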
The final stage is the evaluation procedure, which is refined into three steps: measurement, rating and assessment. The first two are intuitively straightforward: in measurement, the selected metrics are applied to the software product and values on the scales of the metrics are obtained. Subsequently, for each measured value, the rating level is determined. Assessment is the final step of the software evaluation process, where the set of rated levels is summarised. The result is a summary of the quality of the software product. The summarised quality is then compared with other aspects such as time and cost, and the final managerial decision is taken, based on managerial criteria.
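The three steps of the evaluation procedure can be sketched as a small pipeline. Everything concrete here is an assumption made for illustration: the metric functions, the threshold values standing in for a rating-level definition, and the summary rule (``pass only if every rating is satisfactory'') are all invented, not prescribed by ISO 9126.

```python
# Hypothetical sketch of the evaluation procedure:
# measurement -> rating -> assessment.
# Metrics, thresholds and the summary rule are invented for illustration.

def measure(product, metrics):
    """Measurement: apply each selected metric to the software product."""
    return {name: metric(product) for name, metric in metrics.items()}

def rate(values, levels):
    """Rating: mark each measured value satisfactory (True) or not,
    against a rating-level definition (here a single threshold each)."""
    return {name: value >= levels[name] for name, value in values.items()}

def assess(ratings):
    """Assessment: summarise the rated levels into one result
    (invented rule: pass only if every rating is satisfactory)."""
    return all(ratings.values())

# Invented example metrics operating on a toy "product" record.
metrics = {
    "defect_density": lambda p: 1.0 - p["defects"] / p["kloc"],
    "task_success":   lambda p: p["tasks_ok"] / p["tasks_total"],
}
levels = {"defect_density": 0.9, "task_success": 0.8}

product = {"defects": 3, "kloc": 50, "tasks_ok": 17, "tasks_total": 20}
values = measure(product, metrics)
ratings = rate(values, levels)
print(assess(ratings))
```

The comparison of the summarised quality with time and cost, and the final managerial decision, lie outside such a pipeline: they depend on the management criteria fixed during evaluation preparation.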