
Problem domain requirements

The first stage addresses the highest level (or widest scope) of the requirements analysis. It is in terms of this level that other levels of analysis, down to the reportable attributes which actually express measurements of systems, must be validated.

The basic tasks the system is required to address constitute its functional requirements, which at the top level of description should relate as closely as possible to valid and measurable user requirements. For spelling checkers this is relatively straightforward, since the main functional requirement can be expressed in terms of the difference between the spelling errors in a document before checking and those after.

The top level task description is usefully supported by constructing a process model of the basic situation of use (or setup). Such a model represents the data flows and the rôles of data-transforming processes in the setup; Figure C.3 is an example for spelling checkers at this top level of analysis.

Figure C.3: S1: Top level data flows and agents in the spell-checking task.
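
The model in Figure C.3 can equally be written down as a small data-flow structure, which makes the later decomposition steps easier to follow. The sketch below is ours, not part of the original analysis; the representation and the names are purely illustrative.

    # Illustrative encoding of the S1 process model: rôles (agents)
    # mapped to the data types they consume and produce.
    process_model = {
        "writer": {"consumes": [],                 "produces": ["unproofed text"]},
        "editor": {"consumes": ["unproofed text"], "produces": ["proofed text"]},
        "reader": {"consumes": ["proofed text"],   "produces": []},
    }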

Each data type and process identified in this model will be available to be decomposed and analysed further in the course of the procedure. The process can be paraphrased as something like

(S1) The unproofed text produced by the writer is revised by the editor to give proofed text suitable for the reader.
An immediate decomposition of this statement can be made, to bring it closer to direct description of the functionality we seek: we can say that
(S2) The editor corrects the spelling errors in the unproofed text produced by the writer to give proofed text suitable for the reader.
This is based on a model of unproofed text which sees it essentially as a set of spelling errors. A spelling error is minimally a pair consisting of the text actually present and the text intended.
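
This minimal model can be made concrete in a few lines; the following sketch is ours and purely illustrative, with invented example words.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SpellingError:
        """Minimal model of a spelling error: the text actually
        present paired with the text intended."""
        actual: str
        intended: str

    # Under this model, an unproofed text is characterised, for
    # evaluation purposes, by the set of errors it contains.
    unproofed_errors = {
        SpellingError(actual="recieve", intended="receive"),
        SpellingError(actual="seperate", intended="separate"),
    }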

Note that this level of analysis presupposes nothing about the way the checking is to be accomplished. At this level of abstraction, we can define some quality requirements on the task at the domain level. These will form the basis of more detailed requirements at the reportable attribute level.

Functional requirements can often be defined in terms of classic recall and precision measurements: does the system do all, and only, what it should? If we take the first task description, (S1) above, the quality requirement is simply that the process in the editor rôle must transform the unproofed text into proofed text that is suitable for the reader -- a quality requirement expressed in terms of overall text quality. Clearly this has to be decomposed before it says anything about spelling, as in (S2) above.

However, for any such decomposition we have to think about its validity. In a case like this one there may not be a problem: we are likely to assume that spelling quality ought to be independent of the rest of text quality. Nevertheless, keeping in mind the assumptions made at each decomposition is useful. For instance, it might be that although a spelling checker improves spelling quality by its suggestions, the change it promotes in the proofing activity is such that overall text quality is reduced, because careful hand-proofing is no longer performed. It may not be possible to validate all such decompositions, but knowing that there is a gap prevents unwarranted assumptions, and may suggest areas where warnings or further work are needed. The introduction of a software system inevitably changes the tasks that preceded it, so although analysis of tasks before the introduction of a computer system is very valuable, it cannot be taken as the final word on the overall task.

To return to the more detailed task description and its associated quality requirements: for error-checking systems, classic recall and precision requirements are based on a comparison of the counts of errors in the before and after texts. (The editor rôle in this process can be understood either as a human editor in the situation before the introduction of any computational tool, or as the combined rôle of the checking phase carried out by human and software.)
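
One way to make this before/after comparison concrete is to treat the two texts' error sets as comparable and to compute recall and precision by set difference. This operationalisation is an assumption of ours, not prescribed by the text; it reuses the SpellingError pairs sketched earlier.

    def recall_precision(errors_before: set, errors_after: set):
        # Errors removed by the checking phase (correct corrections),
        # and errors newly introduced by it.
        corrected = errors_before - errors_after
        introduced = errors_after - errors_before

        # Recall: what proportion of the original errors were fixed?
        recall = len(corrected) / len(errors_before) if errors_before else 1.0

        # Precision: of the changes made, what proportion were fixes?
        changes = len(corrected) + len(introduced)
        precision = len(corrected) / changes if changes else 1.0

        return recall, precision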

Non-functional requirements at this level might include factors like the volume of text to be checked, time constraints, and so on.

More detailed functional and non-functional requirements are to be found at the next level of analysis.

The next step is to construct a set of relevant setups, identifying situational and environmental variables that affect the requirements on the task under consideration. This includes the gathering of possibly disjoint sets of requirements from different sources, as the Consumer Report Paradigm allows. Setup identification is an outward-looking activity, based on finding real-world situations in which the product under test might be used.

Questions that are relevant to the analysis of a setup include the upstream and downstream paths of information to and from the document types that form the scope of the top-level task evaluation. For example, if the text passes through an Optical Character Recognition (OCR) device after being written, this should be noted and a new rôle node inserted into the process diagram, since OCR will affect the kinds of spelling error that appear in the input document. Similarly, if the immediate downstream consumer of the document is not a human reader but some automatic process such as a parser or a grammar checker, this will affect the definition of spelling error that we need to develop.
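
In the illustrative encoding used earlier, inserting an OCR rôle amounts to adding a node and rerouting the data flow. The sketch below restates the model with the new node, and is again ours rather than part of the original analysis.

    # S1 model extended with an OCR rôle between writer and editor:
    # the editor now consumes OCR output, so the error profile of
    # its input changes (e.g. character-confusion errors).
    process_model = {
        "writer": {"consumes": [],                "produces": ["written text"]},
        "ocr":    {"consumes": ["written text"],  "produces": ["unproofed text"]},
        "editor": {"consumes": ["unproofed text"],"produces": ["proofed text"]},
        "reader": {"consumes": ["proofed text"],  "produces": []},
    }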

Nodes in the process model are used to structure the process of identifying variables that are relevant to task performance, and thus facilitate the modularisation of requirements. For instance, the errors present in the text before proofing are likely to be affected independently by variables associated with the writer rôle (e.g., first language and language of the text), and variables associated with the OCR rôle. Other relevant elements of the setup include the computational and organisational context.

These variables are parameterisations of the rôles in the process model. Just as different processes in the setup can constitute independent sources of variation, so can different functions within a rôle. For instance, it might be that rates of accidental typing mistakes can be treated as independent of errors of intention such as language transfer errors in second language users, though both are associated with the writer rôle.
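
Such parameterisations can be recorded against the rôle nodes themselves; the structure and the example values below are hypothetical, chosen only to illustrate variables named in the discussion.

    # Hypothetical parameterisation of two rôles. Accidental typing
    # mistakes and errors of intention (e.g. language transfer) are
    # kept as separate variables, since they may vary independently.
    role_parameters = {
        "writer": {
            "first_language": "Italian",    # invented example value
            "text_language": "English",
            "typing_error_rate": 0.02,      # accidental mistakes
            "transfer_error_rate": 0.01,    # errors of intention
        },
        "ocr": {
            "character_error_rate": 0.005,  # invented example value
        },
    }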

The validity of subsequent evaluation processes depends on the validity of the methods used to analyse requirements. Sources of representative texts need to be found to characterise the before and after texts in various setups; reliable experts need to be identified to analyse them; or reliable and applicable prior research must be found that characterises these two document types. The development of well-documented corpora of representative and realistic text for a wide range of requirements is necessary.

