First, some context is given for the place of requirements analysis in evaluation.
In most accounts of the software engineering life cycle, evaluation is carried out in terms of the requirements elaborated in the first phase of the software development process: requirements analysis. Requirements are taken here to be equivalent to `stated or implied needs'. More detailed, more committed descriptions of the system, which fall under the heading of design, are used to test software modules. Figure C.1 shows how the outputs of the various design stages are typically fed into the evaluation process.
Figure C.1: `V-diagram' showing the place of evaluation in software development.
Requirements engineering is a growing field of enquiry in software engineering, and this work will draw on the products of research in requirements engineering and in the general software engineering field insofar as they are useful. However, language engineering as an application area, and the particular purposes of the different kinds of evaluation distinguished by EAGLES, have special characteristics and require the development of special-purpose versions of requirements procedures.
The different levels of analysis and design can be thought of as different descriptions of a problem and its possible solutions.
The relevant space of descriptions has been described (Jackson95) as covering two intersecting sets of attributes. The set D is a set of attributes of the problem domain, stated in terms of that domain without reference to system design decisions; the requirements statement proper is a set of relations on D or constraints involving terms from D.
The set M is a set of machine attributes; these are expressed in terms of entities which are parts of the actual system and not accessible to users of the system. The intersection of D and M, called S, is, in Jackson's terms, the area of specifications, where attributes exist in both the domain and the system design and where constraints derived from domain requirements are expressed in terms of machine requirements. In terms of the consumer report paradigm of our framework, specifications are constraints on the values of reportable attributes. Figure 2.2 illustrates the intersection.
Figure 2.2: Domain and machine attributes and specifications.
For instance, a problem level requirement of spelling checkers for some users might be that customisations should be readily sharable between end-users. At the specification level, this might be expressed as the requirement that a personal dictionary must be batch copyable -- that is, that there should be commands to perform the task of copying a user dictionary. Specifications can typically be thought of as relating to system functions that are directly accessible to the user. At the machine level, this will be expressed by constraints on a number of different software modules which are not directly accessible to the user.
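Jackson's three description levels can be made concrete with a small sketch in Python, reusing the spelling-checker example. All the attribute strings below are invented illustrations; the point is only that the specification level is literally the intersection of the domain and machine descriptions:

```python
# D: problem-domain attributes, stated without any design commitment.
D = {"customisations sharable between end-users",
     "personal dictionary batch copyable"}

# M: machine attributes of one candidate system, including internal
# entities not accessible to the end-user.
M = {"personal dictionary batch copyable",
     "user dictionary file format",
     "copy routine in the I/O module"}

# S: the specifications, where domain and machine descriptions meet.
S = D & M

print(S)  # the attributes constrained at the specification level
```

The domain-only attribute (sharability) and the machine-only attributes (file formats, internal routines) fall outside S; only the batch-copyability requirement, visible both to users and in the design, survives as a specification.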
Restricting the requirements statement to deal purely in terms of problem domain attributes, without prior commitment to any particular system's means of solving the problem, is particularly useful as a starting point for an evaluation which will be applicable to multiple software systems. This is typically the case for adequacy evaluations according to the consumer report paradigm, since it forms a baseline against which different ways of fulfilling these requirements might be compared and evaluated. However, there remains the problem of how to translate these domain-level requirements into a form in which it is possible to test them against all the relevant systems under test (i.e., to transform them into specifications, in Jackson's terms, or reportable attributes, in our terms). This involves not just top-down development of requirements, but is affected by the nature of the systems under test.
A reportable attribute called something like `sharability of user dictionaries' might have a nominal value for each of the ways that existing or envisageable systems satisfy the requirement, including values corresponding to various kinds of failure to satisfy the requirement at a given level.
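Such a nominally-valued reportable attribute can be sketched as an enumeration. The particular values below are hypothetical, chosen only to show how satisfaction and partial or complete failure can sit on the same nominal scale:

```python
from enum import Enum

# Hypothetical nominal values for the reportable attribute
# 'sharability of user dictionaries'.
class DictionarySharability(Enum):
    BATCH_COPYABLE = "dictionary can be copied by a single command"
    MANUALLY_EXPORTABLE = "sharing requires manual export and re-import"
    CENTRAL_ONLY = "only a site-wide central dictionary can be shared"
    NOT_SHARABLE = "no mechanism for sharing customisations"

# Each system under test is assigned one nominal value per attribute.
system_report = {
    "sharability of user dictionaries": DictionarySharability.MANUALLY_EXPORTABLE,
}
```

A consumer report then compares systems by the values they take on each such attribute, rather than by a single pass/fail judgement.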
In general, requirements are partitioned into functional requirements and non-functional requirements. Functional requirements are associated with specific functions, tasks or behaviours the system must support, while non-functional requirements are constraints on various attributes of these functions or tasks. In terms of the ISO quality characteristics for evaluation, the functional requirements address the quality characteristic of functionality while the other quality characteristics are concerned with various kinds of non-functional requirements. Because non-functional requirements tend to be stated in terms of constraints on the results of tasks which are given as functional requirements (e.g., constraints on the speed or efficiency of a given task), a task-based functional requirements statement is a useful skeleton upon which to construct a complete requirements statement. That is the approach taken in this work. It can be helpful to think of non-functional requirements as adverbially related to tasks or functional requirements: how fast, how efficiently, how safely, etc., is a particular task carried out by a particular system?
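The task-based skeleton described above can be sketched as a simple data structure in which each functional requirement (a task) carries its `adverbial' non-functional constraints. The task names and constraint figures are invented for illustration:

```python
# Functional requirements are tasks; non-functional requirements are
# adverbial constraints on those tasks (how fast? how safely? ...).
requirements = {
    "check spelling of a document": {           # functional requirement
        "speed": "under 5 seconds per 1000 words",   # how fast?
        "reliability": "no crashes on malformed input",  # how safely?
    },
    "copy a user dictionary": {
        "usability": "achievable with a single command",  # how easily?
    },
}

functional = set(requirements)              # the tasks themselves
non_functional = {(task, kind)              # constraints hung on tasks
                  for task, constraints in requirements.items()
                  for kind in constraints}
```

The nesting makes the dependency explicit: a non-functional requirement has no free-standing existence here, but is always attached to some task in the functional skeleton.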
User profiles behave like parameterisations of requirements statements, capturing regular variation in requirements for similar types of system.
Different users may have different functional requirements, and so require different subsets of functionality to be evaluated, or they may have different non-functional constraints on functions.
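The idea of user profiles as parameterisations can be sketched as follows; the profile names, task subsets, and constraints are hypothetical, serving only to show how one requirements statement is instantiated differently per profile:

```python
# A shared pool of tasks from the task-based requirements skeleton.
base_tasks = {"check spelling", "copy user dictionary", "add to dictionary"}

# Each profile selects a subset of tasks and supplies its own
# non-functional constraints on them.
profiles = {
    "translator": {
        "tasks": {"check spelling", "add to dictionary"},
        "constraints": {"speed": "interactive response"},
    },
    "site administrator": {
        "tasks": {"copy user dictionary"},
        "constraints": {"usability": "batch operation"},
    },
}

def requirements_for(profile_name):
    """Instantiate the requirements statement for one user profile."""
    profile = profiles[profile_name]
    return {task: profile["constraints"]
            for task in base_tasks & profile["tasks"]}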
The terms user and user profile will not be used so generally in what follows. That is the terminology used in our efforts towards formalisation; however, the factors that affect and parameterise software requirements derive not only from end-users of the software, nor indeed only from human elements in the software's intended environment. The term that will be used for the overall context of use is setup, following (Galliers93); this will be returned to later. The actual operator of software will be called the end-user; the person or organisation to whom the evaluation is addressed will be called the customer of the evaluation.