PROGRAM ERRORS AND TESTING REVIEW


1. CAUSES OF ERROR

Broadly we can identify 3 causes of error:

  1. Compile time errors, e.g. missing semicolons, mistyped words (typos), incorrect program structure.
  2. Run time errors, e.g. "divide by zero", where the software compiles correctly but fails during execution.
  3. Logic errors, where the software compiles and runs OK but the wrong result is produced.

Consider the following program, in which a "divide by zero" run time error may occur.

    with ADA.TEXT_IO; use ADA.TEXT_IO;
    with ADA.INTEGER_TEXT_IO; use ADA.INTEGER_TEXT_IO;

    procedure ADA_ERROR_EXAMPLE is
            X_VALUE, Y_VALUE, Z_VALUE: INTEGER;
    begin
            GET(X_VALUE);
            GET(Y_VALUE);
            -- Fails at run time if Y_VALUE is 0
            Z_VALUE := X_VALUE/Y_VALUE;
            PUT("Result is ");
            PUT(Z_VALUE);
            NEW_LINE;
    end ADA_ERROR_EXAMPLE;
    

    If the variable Y_VALUE is set to 0, a "divide by zero" error will occur.
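
    In Ada a "divide by zero" raises the predefined exception CONSTRAINT_ERROR. The following sketch shows one way in which such an error may be trapped with an exception handler (ADA_HANDLED_EXAMPLE is a hypothetical reworking of the example above):

    with ADA.TEXT_IO; use ADA.TEXT_IO;
    with ADA.INTEGER_TEXT_IO; use ADA.INTEGER_TEXT_IO;

    procedure ADA_HANDLED_EXAMPLE is
            X_VALUE, Y_VALUE, Z_VALUE: INTEGER;
    begin
            GET(X_VALUE);
            GET(Y_VALUE);
            Z_VALUE := X_VALUE/Y_VALUE;
            PUT("Result is ");
            PUT(Z_VALUE);
            NEW_LINE;
    exception
            -- Trap the "divide by zero" instead of halting
            when CONSTRAINT_ERROR =>
                    PUT_LINE("Error: Y_VALUE must not be zero");
    end ADA_HANDLED_EXAMPLE;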


2. CLASSIFICATION OF ERRORS

There are various methods whereby commonly experienced errors can be classified.


3. SOFTWARE TESTING

Software testing is an essential element of the software development process. The objectives of software testing may be stated as follows (Myers 1979):

1. Testing is a process of executing a program with the intent of finding an error.
2. A good test is one that has a high probability of finding an "as-yet" undiscovered error.
3. A successful test is one that uncovers an "as-yet" undiscovered error.

Remember that if no errors are discovered we cannot say that "the software in question is error free"; all we can say is that "no errors have been discovered".


4. VERIFICATION AND VALIDATION (V&V)

We can identify two distinct elements of software testing:

1. Verification, and
2. Validation

Verification refers to the set of activities that ensures that software is correctly implemented as a set of procedures/functions. Validation refers to a different set of activities that ensures that the software built is traceable to customer requirements. The distinction is sometimes described as follows (Boehm 1981):

    Verification"Are we building the product right?"
    Validation"Are we building the right product?"

5. TEST CASES (V&V)

Both parts of V&V can be carried out using appropriate test cases. Over the past two decades a rich variety of test case design methods have evolved for software. These methods provide the developer with a systematic approach to testing. More importantly, they provide a mechanism that can help to ensure the completeness of tests and provide the highest likelihood of uncovering errors in software. We can identify two types of testing:

1. Black box testing
2. White box testing

5.1. Black box testing

Black box testing is concerned with inputs to and outputs from software. Black box test cases are designed to demonstrate that:

1. Input is properly accepted
2. The appropriate output is produced
3. Arithmetic expressions operate as expected

Black box testing is not concerned with the internal operation of a software system, only with its interface activities.
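
As an illustration, the following sketch tests a hypothetical function DIVIDE purely through its inputs and outputs; how DIVIDE is implemented is of no concern to the test:

    with ADA.TEXT_IO; use ADA.TEXT_IO;

    procedure BLACK_BOX_TEST is

            -- Unit under test (hypothetical); only its input/output
            -- behaviour is of interest to a black box test
            function DIVIDE(X, Y: INTEGER) return INTEGER is
            begin
                    return X/Y;
            end DIVIDE;

    begin
            -- Test case: input (10, 2), expected output 5
            if DIVIDE(10, 2) = 5 then
                    PUT_LINE("Test passed");
            else
                    PUT_LINE("Test failed");
            end if;
    end BLACK_BOX_TEST;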


5.2. White box testing

White box testing is concerned with the internal operation of software systems. Logical paths through the software are tested by providing test cases that exercise specific sets of conditions and/or loops. In addition, the "status of the program" may be examined at various points to determine if the expected or asserted status corresponds to the actual status. The latter is normally carried out during the implementation process rather than subsequent to it. Typically test cases are generated to:

1. Execute all independent paths through the software.
2. Exercise all selections.
3. Execute all loops.

6. BLACK BOX TESTING TECHNIQUES


Limit testing

Given an input variable of some form we should derive test cases that include the lowest and highest possible values for the variable in question.
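
In Ada the lowest and highest possible values of a type are available as the attributes 'FIRST and 'LAST, so limit test values for an INTEGER input can be derived directly, as in the following sketch:

    with ADA.TEXT_IO; use ADA.TEXT_IO;
    with ADA.INTEGER_TEXT_IO; use ADA.INTEGER_TEXT_IO;

    procedure LIMIT_TEST is
    begin
            -- The lowest and highest possible INTEGER values
            PUT("Lowest test value:  ");
            PUT(INTEGER'FIRST);
            NEW_LINE;
            PUT("Highest test value: ");
            PUT(INTEGER'LAST);
            NEW_LINE;
    end LIMIT_TEST;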


Boundary Value Analysis (BVA) testing

For reasons that are not completely clear, a greater number of errors tend to occur at the boundaries of the input domain rather than in the centre. It is for this reason that Boundary Value Analysis (BVA) has been developed as a testing technique. BVA leads to a selection of test cases that exercise bounding values. Guidelines for BVA are as follows:

1. If an input condition specifies a range bounded by values a and b, test cases should be designed with the values a and b themselves, and with values just above and below a and b (see the sketch after this list).
2. If an input condition specifies a number of values (an enumerated type), test cases should be developed that exercise the minimum and maximum values. (Values just above and below the minimum and maximum should also be tested.)
3. If an output condition specifies a range or a number of values then attempt to apply guidelines 1 and 2 to the output.
4. If internal program structures have prescribed boundaries then attempt to apply guidelines 1 and 2 to these data items.
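
As a sketch of guideline 1, assume a hypothetical input condition specifying the range 1 .. 100; the corresponding BVA test set exercises values just below, at and just above each bound:

    with ADA.TEXT_IO; use ADA.TEXT_IO;
    with ADA.INTEGER_TEXT_IO; use ADA.INTEGER_TEXT_IO;

    procedure BVA_TEST_SET is
            -- Hypothetical input condition: a range bounded by 1 and 100
            subtype INPUT_RANGE is INTEGER range 1 .. 100;
            -- BVA values: just below, at and just above each bound
            TEST_VALUES: constant array (1 .. 6) of INTEGER :=
                    (0, 1, 2, 99, 100, 101);
    begin
            for I in TEST_VALUES'RANGE loop
                    PUT(TEST_VALUES(I));
                    if TEST_VALUES(I) in INPUT_RANGE then
                            PUT_LINE(" - valid, expect normal processing");
                    else
                            PUT_LINE(" - invalid, expect rejection");
                    end if;
            end loop;
    end BVA_TEST_SET;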

Arithmetic testing

Arithmetic testing is only applicable to statements that incorporate arithmetic expressions. In this case test cases should be constructed whereby each variable in the expression is tested with a zero, negative and positive sample value (if the variable's definition permits this) against each other variable (also with zero, negative and positive sample values).
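
As a sketch, for the expression X/Y this gives nine test cases, pairing zero, negative and positive samples for each variable (the cases with Y set to 0 should be expected to fail with a "divide by zero"):

    with ADA.TEXT_IO; use ADA.TEXT_IO;
    with ADA.INTEGER_TEXT_IO; use ADA.INTEGER_TEXT_IO;

    procedure ARITHMETIC_TEST_SET is
            -- Zero, negative and positive sample values
            SAMPLES: constant array (1 .. 3) of INTEGER := (0, -5, 5);
    begin
            -- Pair every sample for X with every sample for Y
            for I in SAMPLES'RANGE loop
                    for J in SAMPLES'RANGE loop
                            PUT("Test case: X = ");
                            PUT(SAMPLES(I));
                            PUT("  Y = ");
                            PUT(SAMPLES(J));
                            NEW_LINE;
                    end loop;
            end loop;
    end ARITHMETIC_TEST_SET;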


7. WHITE BOX TESTING TECHNIQUES


Path Testing

This is concerned with the derivation of a set of test cases to exercise every statement in a program at least once during testing. Before appropriate test cases can be derived we must know how control may flow through our program. There are specialised notations (flow graphs) which may be used for this. Alternatively we can refer to our design specification (e.g. Nassi-Shneiderman charts).
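
For example, the following hypothetical procedure contains two independent paths, so at least two test cases are required to exercise every statement:

    procedure CLASSIFY(X_VALUE: in INTEGER; SIGN: out INTEGER) is
    begin
            if X_VALUE < 0 then
                    -- Path 1: exercised by any test case with X_VALUE < 0
                    SIGN := -1;
            else
                    -- Path 2: exercised by any test case with X_VALUE >= 0
                    SIGN := 1;
            end if;
    end CLASSIFY;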


Loop testing

Loop testing is a white box testing technique that focuses exclusively on the validity of loop constructs (suggested by Beizer amongst others in the early 1980s - Beizer 1983). In addition to testing each path within a loop, a specialised set of additional tests is recommended depending on the type of loop (simple, nested or concatenated). For a simple loop with a maximum of n passes, for example, typical test cases are: skip the loop entirely; one pass; two passes; m passes where m < n; and n-1, n and n+1 passes.
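
As a sketch, for a hypothetical loop whose number of passes is controlled by an input PASSES (with some maximum n), the test cases above translate into chosen values of that input:

    procedure LOOP_UNDER_TEST(PASSES: in NATURAL) is
    begin
            -- Suggested test cases: PASSES = 0 (skip the loop),
            -- 1, 2, some m < n, and n-1, n
            for I in 1 .. PASSES loop
                    null;  -- body of the loop under test
            end loop;
    end LOOP_UNDER_TEST;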


8. DEBUGGING

Debugging occurs as a consequence of successful testing, i.e. when an error is discovered. Debugging is a four stage process:

1. Error location
2. Design repair
3. Implement repair
4. Test repair

Note that the last three stages are analogous to the stages in the software development process. Error location can be the most arduous stage in this process. The simplest method of locating the source of errors is through code inspections. If this fails, output statements may be embedded in the code whereby values for key variables or messages are output (the latter indicating that specific stages in the processing have been reached). In this manner, through a process of elimination, sources of errors can be isolated.
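
For example, diagnostic output statements (hypothetical, and removed again once the error has been located) might be embedded as follows:

    with ADA.TEXT_IO; use ADA.TEXT_IO;
    with ADA.INTEGER_TEXT_IO; use ADA.INTEGER_TEXT_IO;

    procedure DEBUG_EXAMPLE is
            X_VALUE, Y_VALUE: INTEGER;
    begin
            GET(X_VALUE);
            GET(Y_VALUE);
            -- Diagnostic: confirms this stage has been reached and
            -- outputs the value of a key variable
            PUT("DEBUG: after input, Y_VALUE = ");
            PUT(Y_VALUE);
            NEW_LINE;
            PUT(X_VALUE/Y_VALUE);
            NEW_LINE;
    end DEBUG_EXAMPLE;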


REFERENCES

1. Beizer, B. (1983). Software Testing Techniques. Van Nostrand Reinhold.
2. Boehm, B. (1981). Software Engineering Economics. Prentice-Hall.
3. Myers, G. (1979). The Art of Software Testing. Wiley.



Created and maintained by Frans Coenen. Last updated 11 October 1999