Glass box testing has traditionally been divided into static and dynamic analysis (Hausen82, 119, 122).
The only generally acknowledged, and therefore most important, characteristic of static analysis techniques is that the testing as such does not necessitate the execution of the program (Hausen84, 325). ``Essential functions of static analysis are checking whether representations and descriptions of software are consistent, noncontradictory or unambiguous'' (Hausen84, 325). Static analysis aims at correct descriptions, specifications and representations of software systems and is therefore a precondition to any further testing exercise. It covers the lexical analysis of the program syntax and investigates and checks the structure and usage of the individual statements (Sneed87, 10.3-3). There are principally three different possibilities of program testing (Sneed87, 10.3-3), i.e.\
While some software engineers consider it characteristic of static analysis techniques that they can be performed automatically, i.e.\ with the aid of specific tools such as parsers, data flow analysers, syntax analysers and the like (Hausen82, 126), (Miller84, 260) and (Osterweil84, 77), others also include manual techniques for testing that do not require an execution of the program (Sneed87, 10.3-3). Figure B.1 is an attempt to structure the most important static testing techniques as they are presented in the SE literature between 1975 and 1994.
Figure B.1: Static Analysis Techniques
Syntax parsers, which split the program/document text into individual statements, are the elementary automatic static analysis tools. When a program/document is checked internally, the consistency of its statements can be evaluated.
When parsing is performed with two texts on different semantic levels, i.e.\ a program against its specification, the completeness and correctness of the program can be evaluated (Sneed87, 10.3-6) and (Hausen84, 328). This technique, which aims at detecting problems in the translation between specification and program realisation, is called static verification (Sneed87, 10.3-3) and (Hausen87, 126). Verification requires formal specifications and formal definitions of the specification and programming languages used, as well as a method of algorithmic proving that is adapted to these description means (Miller84, 263) and (Hausen87, 126). Static verification compares the actual values provided by the program with the target values pre-defined in the specification document. It does not, however, provide any means to check whether the program actually solves the given problems, i.e.\ whether the specification as such is correct (Hausen87, 126). The result of automatic static verification procedures is described in boolean terms, i.e.\ a statement is either true or false (Hausen87, 127). The obvious advantage of static verification is that, being based on formal methods, it leads to objective and correct results. However, since it is both very difficult and time-consuming to elaborate the formal specifications needed for static verification, it is mostly performed only for software that needs to be highly reliable.

Another technique which is normally subsumed under static analysis is symbolic execution (Hausen84, 327), (Miller84, 263), (Hausen82, 117) and (Hausen87, 127). It analyses, in symbolic terms, what a program does along a given path (Miller84, 263). ``By symbolic execution, we mean the process of computing the values of a program's variables as functions which represent the sequence of operations carried out as execution is traced along a specific path through the program'' (Osterweil84, 79).
Symbolic execution is most appropriate for the analysis of mathematical algorithms. Since only symbolic values are used, whole classes of values can be represented by a single interpretation, which leads to a very high coverage of test cases (Hausen82, 117). The development of programs for symbolic execution is, however, very expensive, so the technique is mainly used for testing numerical programs, where the cost/benefit relation is acceptable.
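The idea of tracing variable values as functions of the inputs along one path can be illustrated with a minimal sketch. The following toy interpreter is purely hypothetical (it is not one of the tools cited above) and uses naive textual substitution, which would break for overlapping variable names in a real tool:

```python
def symbolic_execute(path):
    """Trace assignments and branch assumptions along one fixed path,
    keeping variable values as symbolic expressions (plain strings here)."""
    env = {}        # variable name -> symbolic expression in terms of the inputs
    condition = []  # path condition collected at the branches taken
    for kind, *args in path:
        if kind == "assign":
            var, expr = args
            # substitute already-known symbolic values (naive textual substitution)
            for name, val in env.items():
                expr = expr.replace(name, f"({val})")
            env[var] = expr
        elif kind == "assume":
            (pred,) = args
            for name, val in env.items():
                pred = pred.replace(name, f"({val})")
            condition.append(pred)
    return env, condition

# One path through: y = x + 1; if y > 0: z = y * 2
env, cond = symbolic_execute([
    ("assign", "y", "x + 1"),
    ("assume", "y > 0"),
    ("assign", "z", "y * 2"),
])
# z is now expressed as a function of the input x, and the path condition
# records which inputs actually drive execution down this path.
```

A single symbolic result such as `z = (x + 1) * 2` under the condition `(x + 1) > 0` stands for the whole class of concrete inputs that take this path, which is exactly the coverage advantage noted above.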
The most important manual technique which allows testing the program without running it is software inspection (Thaller94, 36), (Ackerman84, 14) and (Hausen87, 126). The method of inspection goes back to Fagan (Fagan76), who saw the practical necessity of implementing procedures to improve software quality at several stages during the software life-cycle. In short, a software inspection can be described as follows: ``A software inspection is a group review process that is used to detect and correct defects in a software workproduct. It is a formal, technical activity that is performed by the workproduct author and a small peer group on a limited amount of material. It produces a formal, quantified report on the resources expended and the results achieved'' (Ackerman84, 14), (Thaller94, 36), (Hausen87, 126) and (Hausen84, 324).
During inspection either the code or the design of a workproduct is compared to a set of pre-established inspection rules (Miller84, 260) and (Thaller94, 37). Inspections are mostly performed along checklists which cover typical aspects of software behaviour (Thaller94, 37) and (Hausen87, 126). ``Inspection of software means examining by reading, explaining, getting explanations and understanding of system descriptions, software specifications and programs'' (Hausen84, 324). Some software engineers consider inspection adequate for any kind of document, e.g.\ specifications, test plans etc. (Thaller94, 37). While most testing techniques are intimately related to the system attribute whose value they are designed to measure, and thus offer no information about other attributes, a major advantage of inspection processes is that any kind of problem can be detected, so results can be delivered with respect to every software quality factor (Thaller94, 37) and (Hausen87, 126).
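Although inspection is a manual group process, the checklist-driven comparison of a workproduct against pre-established rules can be sketched in code. The rules below are invented for illustration only; real inspection checklists are organisation- and project-specific:

```python
# Hypothetical inspection checklist: each rule pairs a description
# with a simple line-level check (illustrative, not a real standard).
CHECKLIST = [
    ("line exceeds 80 characters", lambda line: len(line) > 80),
    ("TODO left in code",          lambda line: "TODO" in line),
    ("tab used for indentation",   lambda line: line.startswith("\t")),
]

def inspect(source):
    """Compare a source text against the checklist and return
    (line number, rule description) pairs for every finding."""
    findings = []
    for no, line in enumerate(source.splitlines(), start=1):
        for desc, check in CHECKLIST:
            if check(line):
                findings.append((no, desc))
    return findings

findings = inspect("x = 1\n\tresult = 2  # TODO check")
# Each finding would feed the formal, quantified inspection report.
```

The output corresponds to the defect list a human inspection meeting would produce, which is then quantified in the inspection report mentioned above.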
Walkthroughs are similar peer review processes that involve the author of the program, the tester, a secretary and a moderator (Thaller94, 43). The participants of a walkthrough create a small number of test cases by ``simulating'' the computer. The objective of a walkthrough is to question the logic and basic assumptions behind the source code, particularly of program interfaces in embedded systems (Thaller94, 44).
While static analysis techniques do not necessitate the execution of the software, dynamic analysis is what is generally considered as ``testing'', i.e.\ it involves running the system. ``The analysis of the behaviour of a software system before, during and after its execution in an artificial or real applicational environment characterises dynamic analysis'' (Hausen84, 326). Dynamic analysis techniques involve formally running the program under controlled circumstances and with specific expected results (Miller84, 260). Dynamic analysis shows whether or not a system is correct in the system states under examination (Hausen84, 327).
Among the most important dynamic analysis techniques are path and branch testing. Path testing involves executing the program in such a way that as many logical paths of the program as possible are exercised (Miller84, 260) and (Howden80, 163). The major quality attribute measured by path testing is program complexity (Howden80, 163) and (Sneed87, 10.3-4). Branch testing requires that tests be constructed in such a way that every branch in a program is traversed at least once (Howden80, 163). Branches that cause problems when exercised point to a higher probability of later program defects.
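The branch testing requirement can be made concrete with a small sketch. The function and the branch labels below are invented for illustration; real tools instrument the program automatically rather than by hand:

```python
# Hand-instrumented sketch of branch testing: record which branch outcomes
# of a toy function are exercised by a given test set.
covered = set()

def classify(n):
    if n < 0:                       # first decision: true/false outcomes
        covered.add("n<0:true")
        return "negative"
    covered.add("n<0:false")
    if n == 0:                      # second decision: true/false outcomes
        covered.add("n==0:true")
        return "zero"
    covered.add("n==0:false")
    return "positive"

# Branch testing demands a test set that drives every branch at least once;
# here three inputs suffice to cover all four branch outcomes.
for case in (-1, 0, 5):
    classify(case)

all_branches = {"n<0:true", "n<0:false", "n==0:true", "n==0:false"}
assert covered == all_branches
```

Dropping any of the three inputs would leave at least one branch outcome uncovered, which is exactly what a coverage analyser would report.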
Today there are a number of dynamic analysers that are used during the software development process. The most important tools are presented in the table below (Thaller94, 177):
|Type of Dynamic Analyser|Functionality of Tool|
|test coverage analysis|measures to which extent the code is exercised by glass box techniques|
|tracing|follows all paths used during program execution and provides, e.g., the values of all variables|
|tuning|measures the resources used during program execution|
|simulator|simulates parts of systems, e.g.\ if the actual code or hardware is not available|
|assertion checking|tests whether certain conditions hold in complex logical constructs|
The selection and generation of test data in glass box tests is an important discipline. The most basic approach to test data generation is random testing, in which a number of input values are generated automatically without being based on any structural or functional assumption (Sneed87, 10.3-4) and (Bukowski87, 370).

There are also two more sophisticated approaches to test data generation, i.e.\ structural testing and functional testing. ``Structural testing is an approach to testing in which the internal control structure of a program is used to guide the selection of test data. It is an attempt to take the internal functional properties of a program into account during test data generation and to avoid the limitations of black box functional testing'' (Howden80, 162). Functional testing as described by (Howden80) takes into account both the functional requirements of a system and important functional properties that are part of its design or implementation and are not described in the requirements (Howden80, 162). ``In functional testing, a program is considered to be a function and is thought of in terms of input values and corresponding output values'' (Howden80, 162).

There are tools for test data generation on the market that can be used in combination with specific programming languages. Tools for test data generation are particularly useful for embedded systems, since they can simulate a larger system environment by providing input data for every possible system interface (Thaller94, 178). In other words, if a system is not fully implemented or not linked to all relevant data sources, not all system interfaces can be tested, because no input values are available for non-implemented functions. Test data generation tools provide input values for all available system interfaces as if a real module were linked to them.
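Random testing, the most basic approach described above, can be sketched in a few lines. The program under test and its oracle are stand-ins invented for illustration; the essential point is that inputs are drawn without any structural or functional assumption:

```python
import random

def program_under_test(x):
    return abs(x)   # hypothetical stand-in for the real program

def random_test(trials=1000, seed=0):
    """Run the program on unguided random inputs and check each output
    against a simple oracle for the expected behaviour."""
    rng = random.Random(seed)            # fixed seed keeps the run reproducible
    for _ in range(trials):
        x = rng.randint(-10**6, 10**6)   # no structural or functional assumption
        y = program_under_test(x)
        # oracle: the output must equal the magnitude of the input
        assert y >= 0 and y == (x if x >= 0 else -x)
    return trials

random_test()
```

Structural and functional testing would replace the unguided `randint` call with selection guided by the program's control structure or by its specified input/output function, respectively.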