We have already seen that the various parts of the task model are interdependent: each can be defined and obtained in terms of various combinations of the others. This has implications for how we fill in and validate the model, and for how we obtain the detail required for those parts of the model that we need in order to proceed with our testing and reporting.
The error taxonomy and the proofed text model together are, in a sense, primary with respect to our evaluation methods. All the other taxonomies have aspects that are fixed only relative to the errors, whether inherently or as a result of the practicalities of collecting information, and the errors are fixed only relative to the proofed text model. Thus, we find the following:
However, as we have already discussed, the error taxonomy itself is derived from examination of unproofed texts, from implicit knowledge of proofed text, and from shrewd but unfounded ideas about error sources. Are we going round in circles? Not quite; the appendix Requirements Analysis for Linguistic Engineering Evaluation contains some preliminary discussion of the issue of validation for these structures, but the problem does require more work.
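The question of circularity can be made concrete: if we record which component of the model is fixed relative to which, we can check mechanically whether the chain of dependencies ever closes on itself. The sketch below is purely illustrative; the component names and the edges between them are assumptions made for the example, not part of the evaluation framework itself.

```python
# Illustrative only: hypothetical dependency edges among model components.
# An edge A -> B means "A is fixed only relative to B".
dependencies = {
    "other_taxonomies": ["error_taxonomy"],
    "error_taxonomy": ["proofed_text_model"],
    # The error taxonomy is *derived from* unproofed texts and intuitions
    # about error sources, but it is not *fixed relative to* them, so no
    # back-edge arises and the structure stays acyclic.
    "proofed_text_model": [],
}

def find_cycle(deps):
    """Return a list of nodes forming a cycle, or None if deps is acyclic."""
    WHITE, GREY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    colour = {node: WHITE for node in deps}

    def visit(node, path):
        colour[node] = GREY
        path.append(node)
        for nxt in deps.get(node, []):
            if colour.get(nxt, WHITE) == GREY:
                # Found a back-edge: report the closed loop of nodes.
                return path[path.index(nxt):] + [nxt]
            if colour.get(nxt, WHITE) == WHITE:
                cycle = visit(nxt, path)
                if cycle:
                    return cycle
        path.pop()
        colour[node] = BLACK
        return None

    for node in deps:
        if colour[node] == WHITE:
            cycle = visit(node, [])
            if cycle:
                return cycle
    return None

print(find_cycle(dependencies))  # None: we are not going round in circles
```

Running the check on the hypothetical graph above reports no cycle, which mirrors the "not quite" answer in the text: the apparent circularity comes from how the taxonomy is derived, not from mutual fixing relations.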
In our evaluation method we use this linked structure of the model in a number of different ways; different parts of the structure must be fully realised to support the different purposes of the model, which are similar to the purposes given for the error taxonomy: