Many of the comments made in discussing the evaluation of translators' work benches apply equally in the context of evaluating local terminology management systems, especially the questions of validation and the emphasis on good interfaces. It was pointed out in particular that the interfaces for terminology management systems need to be quite sophisticated because of the number of languages to be dealt with.
Local terminology management systems were essentially being used in two ways in the evaluation.
First, they were used as an intermediary between the terminologist or translator and the central resources of EURODICAUTOM. When a text was received, the user would search the central resources for the appropriate terminology and download it into the local system. This was done with each document as it arrived: the local base was not kept in continuous use. The possibility of sending a whole list of terms to EURODICAUTOM for search was said to be very useful in this context.
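The workflow just described can be sketched as a small caching layer over a central base. This is a purely illustrative sketch: the names `CENTRAL_BASE` and `LocalStore` are hypothetical and do not correspond to any real EURODICAUTOM interface.

```python
# Hypothetical sketch of the workflow described above: a local store that
# pulls entries from a central base on demand, per document, rather than
# holding everything locally all the time. All names are illustrative.

CENTRAL_BASE = {
    "aquifer": {"fr": "aquifère", "de": "Grundwasserleiter"},
    "subsidy": {"fr": "subvention", "de": "Subvention"},
}

class LocalStore:
    def __init__(self, central):
        self.central = central   # stand-in for the central resource
        self.entries = {}        # the local terminology base

    def download(self, term):
        """Fetch one term from the central base into the local store."""
        entry = self.central.get(term)
        if entry is not None:
            self.entries[term] = entry
        return entry

    def download_list(self, terms):
        """Search a whole list of terms at once, as the evaluators
        found useful, returning what was (or was not) found."""
        return {t: self.download(t) for t in terms}

store = LocalStore(CENTRAL_BASE)
found = store.download_list(["aquifer", "subsidy", "unknown term"])
```

After the batch search, `store.entries` holds only the terms actually found centrally; a missing term comes back as `None`, signalling that local terminology work is needed.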
Secondly, local management systems were used to create local terminology, for example requester-oriented terminology, which was of special interest to the unit but not necessarily of central interest. Sometimes the terminology correspondent of the group was responsible for coordinating this work, and perhaps for entering material in batch mode after validation within the local group. In this application, the group suggested that it would be useful if the requesting service could also have access to the local terminology base, especially if the terms were integrated into a document such as a working programme, where consistent terminology was important.
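The validate-then-batch-enter step above can be sketched as follows. Again this is a hypothetical illustration: the draft records, the `validated` flag, and `batch_enter` are assumptions, not features of any actual system in the evaluation.

```python
# Hypothetical sketch of local terminology creation: drafts proposed by the
# group are entered in batch only after validation by the terminology
# correspondent; unvalidated drafts stay pending. All names are illustrative.

local_base = {}
pending = [
    {"term": "cohesion fund", "fr": "fonds de cohésion", "validated": True},
    {"term": "working programme", "fr": "programme de travail", "validated": False},
]

def batch_enter(base, drafts):
    """Enter validated drafts into the local base in one batch;
    return the drafts still awaiting validation."""
    remaining = []
    for draft in drafts:
        if draft["validated"]:
            base[draft["term"]] = draft["fr"]
        else:
            remaining.append(draft)
    return remaining

pending = batch_enter(local_base, pending)
```

Separating validation from entry in this way mirrors the coordinating role the report assigns to the terminology correspondent: nothing reaches the shared local base without passing through that checkpoint.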
The importance of integration with other tools was also emphasized in this context, as was the need to be able to use the local management tool without being distracted whilst translating.
Two worries were expressed by those involved in the evaluation. The first was that calling the system a local database management system might lead to unrealistic expectations on the part of some users, who would expect to find a system complete with data rather than a software skeleton onto which the data had to be grafted. The same concern surfaced in the testing of translation memories: users' expectations can be unrealistically high simply because they do not distinguish clearly between an interface and the data that lies behind it.
The second worry, already mentioned and very widely expressed, was that the availability of local systems would lead to a proliferation of scattered small terminology bases: a computerized equivalent of the time when every translator kept a private collection of translation equivalents whose contents were unknown to colleagues and therefore not easily shareable. The result then was that much work was done several times over, and consistency across translators was much harder to achieve than it is when a central resource creates a de facto standard for terminology.