INTRODUCTION TO PROGRAMMING IN JAVA: PROGRAM ERRORS ("BUGS") AND TESTING REVIEW

NOTE: This set of www pages is not the set of www pages for the current version of COMP101. The pages are from a previous version that, at the request of students, I have kept on line.


CONTENTS

1. Causes of errors
2. Classification of errors
3. Software testing
4. Verification and Validation (V&V)
5. Test cases
5.1. Black box testing
 
5.2. White box testing
6. Black box testing techniques
7. White box testing techniques
7.1. Loop testing example
8. Data validation
9. Unit and system testing
10. Debugging



1. CAUSES OF ERROR

Broadly, we can identify three causes of error:

  1. Compile time errors, e.g. missing semicolons, mistyped words (typos), incorrect program structure.
  2. Run time errors due to:
    Domain/data errors: an operation is called with data that cannot be handled by that operation, e.g. divide by zero, integer overflow, wrong type for input variable.
    Resource exhaustion: memory errors.
    Loss of facilities: network failure, power failure, etc.
  3. Logic errors - software compiles and runs OK but the wrong result is produced.

Consider the following program fragment where a "divide by zero" may be caused to occur.

public int division(int xValue, int yValue) {
    int zValue;

    zValue = xValue / yValue;   // throws ArithmeticException if yValue is 0

    return zValue;
}

If the variable yValue is set to 0, a "divide by zero" error will occur.
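In Java this run time error surfaces as an ArithmeticException, which can be caught using the exception handling mechanism. A minimal sketch (the surrounding class and calling code are illustrative only):

public class DivisionDemo {

    public static int division(int xValue, int yValue) {
        return xValue / yValue;   // throws ArithmeticException if yValue is 0
    }

    public static void main(String[] args) {
        try {
            System.out.println(division(10, 0));
        } catch (ArithmeticException e) {
            System.out.println("Run time error caught: " + e.getMessage());
        }
    }
}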




2. CLASSIFICATION OF ERRORS

There are various methods whereby commonly experienced errors can be classified. These can be summarised as follows.

Synchronous/Asynchronous (Internal/External):
  1. Synchronous (Internal) - Integer overflow, running out of memory.
  2. Asynchronous (External) - User interrupts, power failure.
 
Reproducing/Nonreproducing:
  1. Reproducing - the same error will occur on each invocation, e.g. integer overflow.
  2. Nonreproducing - external errors, e.g. running out of memory.
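As a minimal sketch of a reproducing (synchronous, internal) error, the following fragment uses Math.addExact, which, unlike the + operator (which silently wraps on overflow), reports integer overflow; the same error occurs on every invocation:

public class OverflowDemo {

    public static void main(String[] args) {
        try {
            // Math.addExact throws an ArithmeticException on int overflow
            int result = Math.addExact(Integer.MAX_VALUE, 1);
            System.out.println("Result = " + result);
        } catch (ArithmeticException e) {
            System.out.println("Reproducible error: " + e.getMessage());
        }
    }
}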



3. SOFTWARE TESTING

Software testing is an essential element of the software development process. The objectives of software testing may be stated as follows (Myers 1979):

  1. Testing is a process of executing a program with the intent of finding an error.
  2. A good test is one that has a high probability of finding an "as-yet" undiscovered error.
  3. A successful test is one that uncovers an "as-yet" undiscovered error.

Remember that if no errors are discovered we cannot say that "the software in question is error free"; all we can say is that "no errors have been discovered".




4. VERIFICATION AND VALIDATION (V&V)

We can identify two distinct elements of software testing:

  1. Verification, and
  2. Validation

Verification refers to the set of activities that ensures that software is correctly implemented as a set of methods.

 

Validation refers to a different set of activities that ensures that the software built is traceable to customer requirements. The distinction is sometimes described as follows (Boehm 1981):

Verification"Are we building the product right?"
Validation"Are we building the right product?"



5. TEST CASES

Both parts of V&V can be carried out using appropriate test cases. Over the past two decades (sinse 2002) a rich variety of test case design methods have evolved for software. These methods provide the developer with a systematic approach to testing.

 

More importantly, they provide a mechanism that can help to ensure the completeness of tests and provide the highest likelihood for uncovering errors in software. We can identify two types of testing:

  1. Black box testing
  2. White box testing


5.1. Black box testing

Black box testing is concerned with inputs to, and outputs from, software. Black box test cases are designed to demonstrate that:

  1. Input is properly accepted
  2. The appropriate output is produced
  3. Arithmetic expressions operate as expected

Black box testing is not concerned with the internal operation of a software system, only its interface activities.



5.2. White box testing

White box testing is concerned with the internal operation of software systems. Logical paths through the software are tested by providing test cases that exercise specific sets of conditions and/or loops. In addition, the "status of the program" may be examined at various points to determine whether the expected or asserted status corresponds to the actual status; the latter is normally carried out during the implementation process rather than subsequent to it (a small sketch of such status checking follows the list below).

 

Typically, test cases are generated to:

  1. Execute all independent paths through the software.
  2. Exercise all selections.
  3. Execute all loops.
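Where the "status of the program" is to be checked in this way, Java's assert statement offers one mechanism. The following is a minimal sketch (the class and the absolute method are hypothetical); note that assertions are only checked when the program is run with the -ea flag:

public class AssertDemo {

    // Hypothetical method under test
    public static int absolute(int value) {
        int result = (value < 0) ? -value : value;
        // Assert the expected program status at this point:
        // result should never be negative
        assert result >= 0 : "absolute() produced a negative result";
        return result;
    }

    public static void main(String[] args) {
        // Run with "java -ea AssertDemo" to enable assertion checking
        System.out.println(absolute(-7));
    }
}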



6. BLACK BOX TESTING TECHNIQUES


Limit testing

Given an input variable of some form, we should derive test cases that include the lowest and highest possible values for the variable in question.
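For example, if the input variable is a Java int, the lowest and highest possible values are Integer.MIN_VALUE and Integer.MAX_VALUE. The following minimal sketch (the negate method is a hypothetical operation under test) shows how limit test cases can uncover an error at one extreme:

public class LimitTestDemo {

    // Hypothetical operation under test
    public static int negate(int value) {
        return -value;
    }

    public static void main(String[] args) {
        // Limit test cases: the lowest and highest possible int values
        System.out.println(negate(Integer.MAX_VALUE)); // -2147483647, as expected
        System.out.println(negate(Integer.MIN_VALUE)); // -2147483648: the negation
                // overflows because -Integer.MIN_VALUE cannot be held in an int
    }
}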


Boundary Value Analysis (BVA) testing

For reasons that are not completely clear, a greater number of errors tend to occur at the boundaries of the input domain rather than in the centre. It is for this reason that Boundary Value Analysis (BVA) has been developed as a testing technique. BVA leads to a selection of test cases that exercise bounding values. Guidelines for BVA are as follows (a short sketch follows the list):

 
  1. If an input condition specifies a range bounded by a value a and a value b, test cases should be designed with values just above and below a and b respectively.
  2. If an input condition is not specified but some default range is assumed then attempt to apply guideline 1.
  3. If internal program structures have prescribed boundaries then attempt to apply guidelines 1 and 2 to these structures.
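As a minimal sketch of guideline 1, assume a hypothetical method inRange that should accept only values in the range 1 to 10 inclusive; BVA suggests exercising values at, just below and just above each boundary:

public class BvaTestDemo {

    // Hypothetical method under test: accepts values in the range [1, 10]
    public static boolean inRange(int value) {
        return value >= 1 && value <= 10;
    }

    public static void main(String[] args) {
        // Boundary value test cases around a = 1 and b = 10
        int[] testCases = {0, 1, 2, 9, 10, 11};
        for (int testCase : testCases) {
            System.out.println(testCase + " -> " + inRange(testCase));
        }
    }
}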

Arithmetic testing

Arithmetic testing is only applicable for statements that incorporate arithmetic expressions. In this case test cases should be constructed whereby each variable in the expression is tested with a zero, a negative and a positive sample value (if the variable's definition permits this) against each other variable (also with zero, negative and positive sample values).
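As a minimal sketch, the division method from Section 1 could be exercised with zero, negative and positive sample values for each variable in every combination (noting that any case with yValue set to 0 must be expected to fail):

public class ArithmeticTestDemo {

    public static int division(int xValue, int yValue) {
        return xValue / yValue;
    }

    public static void main(String[] args) {
        int[] samples = {0, -4, 4};   // zero, negative and positive sample values
        for (int x : samples) {
            for (int y : samples) {
                try {
                    System.out.println(x + " / " + y + " = " + division(x, y));
                } catch (ArithmeticException e) {
                    System.out.println(x + " / " + y + " -> " + e.getMessage());
                }
            }
        }
    }
}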




7. WHITE BOX TESTING TECHNIQUES

White box (glass box) testing is concerned with the internal operation of software.


Path Testing

This is concerned with the derivation of a set of test cases to exercise every statement in a program at least once during testing. Before appropriate test cases can be derived we must know how control may flow through our program. There are specialised notations which may be used for this. We have been using Activity Diagrams (ADs).
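As a minimal sketch (the classify method and its pass mark are hypothetical), a method containing a single selection has two independent paths, so at least two test cases are needed, one forcing each path:

public class PathTestDemo {

    // Hypothetical method containing two independent paths
    public static String classify(int mark) {
        if (mark >= 40) {
            return "pass";   // path 1
        } else {
            return "fail";   // path 2
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(55));   // test case exercising path 1
        System.out.println(classify(25));   // test case exercising path 2
    }
}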


Loop testing

Loop testing is a white box testing technique that focuses exclusively on the validity of loop constructs (suggested by Beizer amongst others in the early 1980s - Beizer 1983). In addition to testing each path within a loop, a specialised set of additional tests is recommended depending on the type of loop:

Simple loops: The following set of tests should be applied to simple loops, where N is the maximum number of allowable passes through the loop:
 
  1. Skip loop entirely
  2. Only one pass through the loop
  3. Two passes through the loop
  4. M passes through the loop where M < N
  5. N-1, N, N+1 passes through the loop
Nested loops: If we were to extend the test approach for simple loops to nested loops, the number of possible tests would grow geometrically as the level of nesting increases. This would result in an impractical number of tests. Beizer suggests the following:
  1. Start at the innermost loop. Set all other loops to minimum values.
  2. Conduct simple loop tests (see above) for innermost loop.
  3. Work outward, conducting tests for the next loop, but keeping all other outer loops at a minimum.
  4. Continue until all loops have been tested.
Concatenated loops: Concatenated loops should be tested using the approach described for simple loops above provided that each loop is independent of the other. If two loops are not independent (e.g. the final loop count for the first becomes the initial loop count for the second) Beizer recommends the approach applied to nested loops (see above), i.e. test the second loop first with the first loop counter set to minimum.

7.1. Loop testing example

An example variable count loop is given in Table 1. Note that the number of iterations for this loop is influenced by the value of the input data item. However, N (the maximum number of passes through the loop) is 10. A suitable set of loop test cases for this code is given below.

 
TEST CASE (input)    EXPECTED RESULT (number of iterations)
11                   0  (Skip loop entirely)
10                   1  (Only one pass through the loop)
9                    2  (Two passes through the loop)
5                    6  (M passes through the loop where M < N)
2                    9  (N-1 passes through the loop)
1                    10 (N passes through the loop)
0                    0  (N+1 passes through the loop)

 
// LOOP TESTING EXAMPLE PROGRAM
// Frans Coenen
// Wednesday 20 August 2003
// The University of Liverpool, UK

import java.io.*;

class LoopTestExampleApp {

    // ------------------ FIELDS ------------------------

    public static BufferedReader keyboardInput =
            new BufferedReader(new InputStreamReader(System.in));
    private static final int MINIMUM = 1;
    private static final int MAXIMUM = 10;

    // ------------------ METHODS ------------------------

    /* Main method */

    public static void main(String[] args) throws IOException {
        System.out.println("Input an integer value:");
        int input = Integer.parseInt(keyboardInput.readLine());

        // Count the passes through the loop; the loop runs while
        // index remains within the range [MINIMUM, MAXIMUM]
        int numberOfIterations = 0;
        for (int index = input; index >= MINIMUM && index <= MAXIMUM; index++) {
            numberOfIterations++;
        }

        // Output and end
        System.out.println("Number of iterations = " + numberOfIterations);
    }
}

Table 1: Variable count loop (number of iterations depends on input value)




8. DATA VALIDATION

Data validation is concerned with the input of spurious data, to ensure that our program can successfully handle such input. The exception handling mechanism that comes with Java is very good at "catching" such erroneous input, so we do not normally have to include any additional code to handle this.
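As a minimal sketch (the class and prompt are illustrative, reusing the keyboard input style from Table 1), spurious non-numeric input causes Integer.parseInt to throw a NumberFormatException, which can be caught:

import java.io.*;

public class ValidationDemo {

    public static void main(String[] args) throws IOException {
        BufferedReader keyboardInput =
                new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Input an integer value:");
        try {
            int input = Integer.parseInt(keyboardInput.readLine());
            System.out.println("Valid input: " + input);
        } catch (NumberFormatException e) {
            // Spurious (non-numeric) input is caught here
            System.out.println("Spurious input rejected: " + e.getMessage());
        }
    }
}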




9. UNIT AND SYSTEM TESTING

When writing significant software systems (in terms of size) it is often not practical (or desirable) to leave all testing until the entire software system is complete --- this is not considered to be good practice. Instead we should test individual methods, or small groups of methods that are intended to perform some particular function, as the implementation proceeds. We refer to this type of testing as Unit Testing. Once the software system is complete we will still have to test the operation of the entire system; but, because we have carried out our unit testing, we will know that the individual components are working correctly. This testing of the entire system on completion is referred to as System Testing.
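As a minimal sketch of a unit test (written here in plain Java rather than with a testing framework, and reusing the division method from Section 1), an individual method can be exercised in isolation before the rest of the system exists:

public class DivisionUnitTest {

    public static int division(int xValue, int yValue) {
        return xValue / yValue;
    }

    // Simple test harness: fail loudly if a test case does not hold
    private static void check(boolean condition, String message) {
        if (!condition) {
            throw new RuntimeException("Unit test failed: " + message);
        }
    }

    public static void main(String[] args) {
        check(division(10, 2) == 5, "10 / 2 should be 5");
        check(division(7, 2) == 3, "7 / 2 should truncate to 3");
        check(division(-9, 3) == -3, "-9 / 3 should be -3");
        System.out.println("All unit tests passed");
    }
}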

Note that Top down development can easily be combined with unit testing.




10. DEBUGGING

Debugging occurs as a consequence of successful testing, i.e. when an error is discovered. Debugging is a four-stage process:

  1. Error location
  2. Design repair
  3. Implement repair
  4. Test repair
 

Note that the last three stages are analogous to the stages in the software development process. Error location can be the most arduous stage in this process. The most straightforward method of locating the source of errors is through code inspections. If this fails, output statements may be embedded in the code whereby values for key variables or messages are output (the latter indicating that specific stages in the processing have been reached). In this manner, through a process of elimination, sources of errors can be isolated. Tools (called debuggers) exist to help software developers to debug software.
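As a minimal sketch (the input value and the DEBUG messages are illustrative), output statements embedded in the loop from Table 1 might look as follows:

public class DebugDemo {

    public static void main(String[] args) {
        int input = 8;   // hypothetical value under investigation
        int numberOfIterations = 0;
        for (int index = input; index >= 1 && index <= 10; index++) {
            numberOfIterations++;
            // Embedded output statement: report key variable values on each pass
            System.out.println("DEBUG: index = " + index +
                    ", numberOfIterations = " + numberOfIterations);
        }
        // Message indicating that this stage in the processing has been reached
        System.out.println("DEBUG: loop complete");
    }
}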



REFERENCES

  1. Beizer, B. (1983). Software Testing Techniques. Van Nostrand Reinhold.
  2. Boehm, B. (1981). Software Engineering Economics. Prentice-Hall.
  3. Myers, G. (1979). The Art of Software Testing. Wiley.



Created and maintained by Frans Coenen. Last updated 10 February 2015