DISCLAIMER!: The following description of the CBA algorithm (Liu et al 1998) is that implemented by the author of this WWW page, i.e. it is not identical to that first produced by Bing Liu, Wynne Hsu and Yiming Ma, but certainly displays all the "salient" features of the CBA algorithm.
THANKS: Thanks to Cheng Zhou for the method to include a default rule where no rules are generated, which is now (17 August 2015) included in the AprioriTFP_CBA class.
1. INTRODUCTION |
CBA (Classification Based on Associations) is a Classification Association Rule Mining (CARM) algorithm developed by Bing Liu, Wynne Hsu and Yiming Ma (Liu et al. 1998). CBA operates using a two stage approach to generating a classifier: first a set of Classification Association Rules (CARs) is generated, and then a subset of these rules is selected to form the classifier.
In Liu et al.'s implementation the first stage is implemented using CBA-RG (CBA - Rule Generator). CBA-RG is a multi-pass Apriori style algorithm. Rules, once generated, are pruned using the "pessimistic error rate based pruning method" described by Quinlan with respect to C4.5 (Quinlan 1992), and placed in a rule list R. The second stage is implemented using CBA-CB (CBA - Classifier Builder). The operation of CBA is described in more detail in Section 4 below.
The two stage approach is a fairly common approach used by many CARM algorithms, for example the CMAR (Classification based on Multiple Association Rules) algorithm (Li et al. 2001).
The LUCS-KDD implementation of CBA operates in exactly the same manner as described by Liu et al. except that the CARs are generated using the Apriori-TFP algorithm (Coenen et al. 2001, Coenen et al. 2004 and Coenen and Leng 2004) and placed directly into the rule list R.
2. DOWNLOADING THE SOFTWARE |
The LUCS KDD CBA software comprises nine source files. These are provided from this WWW page together with three application classes. The source files are as follows:
The PtreeNode, PtreeNodeTop and TtreeNode classes are separate from the remaining classes, which are arranged in a class hierarchy of the form presented in Figure 1.
Figure 1: Class Hierarchy
The Apriori-TFPC CBA application classes included here are as follows:
There is also a "tar ball" cba.tgz that can be downloaded that includes all of the above source and application class files. It can be unpacked using tar -zxf cba.tgz.
The LUCS-KDD CBA software has been implemented in Java using the Java2 SDK (Software Development Kit) Version 1.4.0, which should therefore make it highly portable. The code does not require any special packages and thus can be compiled using a standard Java compiler:
javac *.java
The code can be documented using Java Doc. First create a directory Documentation in which to place the resulting HTML pages and then type:
javadoc -d Documentation/ *.java
This will produce a hierarchy of WWW pages contained in the Documentation directory.
3. RUNNING THE SOFTWARE |
When compiled the software can be invoked in the normal manner using the Java interpreter:
java APPLICATION_CLASS_FILE_NAME -F FILE_NAME -N NUMBER_OF_CLASSES
The -F flag is used to input the name of the data file to be used, and the -N flag to input the number of classes represented within the input data.
If you are planning to process a very large data set it is a good idea to allocate some extra memory. For example:
java -Xms600m -Xmx600m APPLICATION_CLASS_FILE_NAME -F FILE_NAME -N NUMBER_OF_CLASSES
The input to the software, in all cases, is a (space separated) binary valued data set T. The set T comprises a set of N records such that each record (t), in turn, comprises a set of attributes. Thus:

T = {t | t ⊆ A}
Where A is the set of available attributes. The value D is then defined as:
D = |A|
We then say that a particular data set has D columns and N rows. A small example data set might be as follows:
1 2 3 6
1 4 5 7
1 3 4 6
1 2 6
1 2 3 4 5 7
where, in this case, A = {1, 2, 3, 4, 5, 6, 7} of which 6 and 7 represent the possible classes. Note that attribute numbers are ordered sequentially commencing with the number 1 (the value 0 has a special meaning). This includes the class attributes, which should follow on from the last attribute number, as in the above example. It is not a good idea to assign some very high value to the class attributes simply to guarantee that they are numerically after the last attribute number, as this introduces inefficiencies into the code. For example, if in the above case the class numbers were 1001 and 1002 (instead of 6 and 7 respectively), the algorithm would assume that there are 1002 attributes in total, in which case there would be 2^1002 - 1 candidate frequent item sets (instead of only 2^7 - 1 candidate sets).
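By way of illustration of this input format, the following minimal Java sketch (not part of the LUCS-KDD distribution, and written against a modern JDK rather than the Java 1.4 original; the RecordParser class is purely illustrative) parses space separated record lines, such as those in the example data set above, into sets of attribute numbers:

import java.util.Set;
import java.util.TreeSet;

// Illustrative sketch only: NOT part of the LUCS-KDD distribution. It simply
// shows how a space separated record such as "1 2 3 6" maps to a set of
// attribute numbers (numbered from 1, with the class attributes last).
public class RecordParser {

    /** Parses a single record line, e.g. "1 2 3 6", into a sorted attribute set. */
    public static Set<Integer> parseRecord(String line) {
        Set<Integer> record = new TreeSet<Integer>();
        for (String token : line.trim().split("\\s+")) {
            record.add(Integer.parseInt(token));
        }
        return record;
    }

    public static void main(String[] args) {
        // The five records of the small example data set given above.
        String[] data = {"1 2 3 6", "1 4 5 7", "1 3 4 6", "1 2 6", "1 2 3 4 5 7"};
        for (String line : data) {
            System.out.println(parseRecord(line));
        }
    }
}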
The program assumes support and confidence default threshold values of 20% and 80% respectively. However the user may enter their own thresholds using the -S and -C flags. Thresholds are expressed as percentages.
Some example invocations, using a discretized/normalised version of the Pima Indians data set (also available from this site; the raw data can be obtained from the UCI Machine Learning Repository (Blake and Merz 1998)) and the two application classes provided by this WWW site, are given below:
java ClassCBA_App -Fpima.D38.N768.C2.num -N2 -S1 -C50

java ClassCBA_App10 -Fpima.D38.N768.C2.num -N2
(note that the ordering of flags is not significant). The output from each application is a set of CRs (ordered according to the prioritisation given in Sub-section 1.1) plus some diagnostic information (run time, number of rules generated etc.).
To enhance the efficiency of the CBA algorithm the attributes are ordered according to frequency. This means that the original set of input attributes is renumbered (for example, the original attribute 1 may no longer be attribute 1, while some other attribute N may have been redesignated as attribute 1). To avoid this reordering users can comment out the lines:
newClassification.idInputDataOrdering();
newClassification.recastInputData();
in the application source code files (ClassCBA_App.java and ClassCBA_App10.java).
A number of output options are available (listed below) that may be included in the ClassCBA_App.java file. Inspection of this file will indicate that many of these output options are already there. For example:
newClassification.outputNumFreqSets();
Note that it would not make sense to include them in ClassCBA_App10.java because a different classifier will be generated for each tenth of the data!
Method Summary
public static void | outputDataArraySize() Outputs size (number of records and number of elements) of stored input data set read from input data file. |
public static void | outputConversionArrays() Outputs conversion array (used to renumber columns for input data in terms of frequency of single attributes --- reordering will enhance performance for some ARM algorithms). |
public static void | outputSettings() Outputs command line values provided by user. |
public static void | outputRulesWithDefault() Outputs contents of rule linked list (if any), with reconversion, such that last rule is the default rule. |
public static void | outputCBArules() Outputs contents of CBA rule linked list (if any), includes details of generation process. |
public static void | outputNumCBArules() Outputs number of generated rules. |
public static void | outputFrequentSets() Commences the process of outputting the frequent sets contained in the T-tree. |
public static void | outputNumFreqSets() Commences the process of counting and outputting the number of supported nodes in the T-tree. A supported set is assumed to be a non-null node in the T-tree. |
public static void | outputTestSet() Outputs test set once it has been split off from the input data. Include in application class after createTrainingAndTestDataSets() method. |
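By way of illustration, a minimal application class skeleton, combining the attribute reordering calls described above with some of the output options from the method summary, might look as follows. This is a sketch only: the constructor signature shown is an assumption, the call that actually builds and tests the classifier is omitted, and ClassCBA_App.java and ClassCBA_App10.java in the distribution remain the definitive examples.

// Illustrative skeleton only; see ClassCBA_App.java in the distribution for
// the definitive version. Only the method names below are taken from this
// page; the constructor signature is an assumption, and the call(s) that
// actually generate the classifier are omitted.
public class MyCBA_App {

    public static void main(String[] args) {
        // Pass the command line arguments (-F, -N, -S, -C) through to the CBA
        // class (assumed constructor signature).
        AprioriTFP_CBA newClassification = new AprioriTFP_CBA(args);

        // Attribute reordering according to frequency; comment out these two
        // lines to retain the original attribute numbering.
        newClassification.idInputDataOrdering();
        newClassification.recastInputData();

        // Split the input data and (optionally) output the test set.
        newClassification.createTrainingAndTestDataSets();
        newClassification.outputTestSet();

        // ... classifier generation omitted; see the distributed application classes ...

        // Optional diagnostic output methods from the summary above.
        newClassification.outputSettings();
        newClassification.outputNumFreqSets();
        newClassification.outputRulesWithDefault();
    }
}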
4. THE CBA ALGORITHM IN MORE DETAIL |
The CBA algorithm commences by generating a list of CARs (potential CRs in the target classifier) ordered according to the following schema: given two rules r1 and r2, r1 precedes r2 if (i) the confidence of r1 is greater than that of r2, (ii) the confidences are the same but the support of r1 is greater than that of r2, or (iii) the confidence and support are the same but r1 has fewer attributes in its antecedent than r2.
In Liu et al.'s original paper the third rule is actually expressed as "(if) both the confidence and support of r1 and r2 are the same, but r1 is generated earlier than r2 (then select r1)". However, the effect of this is that rules with lower antecedent cardinality are selected before rules with higher cardinality, because of CBA's Apriori style CAR generation algorithm (CBA-RG) where low cardinality rules are generated first. In other words CBA operates using the same ordering schema as that used in the CMAR algorithm (Li et al. 2001).
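This ordering can be expressed as a comparator. The following Java sketch (using a hypothetical Rule class that is not the rule representation used in the LUCS-KDD source, and a modern JDK) implements the confidence / support / antecedent cardinality ordering described above; the extra fields on Rule (isAcRule, isAstrongCrule and the local distribution array) are included here because they are reused by the stage sketches later on this page:

import java.util.Comparator;

// Hypothetical rule representation used by the sketches on this page (it is
// not the class used in the LUCS-KDD source).
class Rule {
    int[] antecedent;          // attribute numbers making up the rule antecedent
    int consequent;            // class attribute number
    double confidence;         // rule confidence (a percentage)
    double support;            // rule support (a number of records)
    boolean isAcRule;          // true if the rule is a cRule for at least one record
    boolean isAstrongCrule;    // true if the rule is a "strong" cRule
    int[] localDist;           // local class distribution array L (one element per class)
}

// The rule ordering described above: confidence first, then support, then
// antecedent cardinality (shorter, i.e. earlier generated, rules first).
class CBARuleOrdering implements Comparator<Rule> {
    public int compare(Rule r1, Rule r2) {
        if (r1.confidence != r2.confidence)
            return Double.compare(r2.confidence, r1.confidence);   // higher confidence first
        if (r1.support != r2.support)
            return Double.compare(r2.support, r1.support);         // then higher support
        return Integer.compare(r1.antecedent.length, r2.antecedent.length);  // then shorter antecedent
    }
}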
This rule list (R) is then processed, using a variation of the "cover" principle, to produce a classifier. A very much simplified version of the CBA algorithm is given in Figure 2.
Figure 2: Simplified version of the CBA algorithm
On completion of the above the first rule r in R with the lowest totalError value is identified. All rules after r are discarded and a default rule with the class r.DefaultClass is added to the end of the classifier. An example of how the above works is given in sub-section 4.1 below.
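The following Java sketch summarises this simplified classifier builder (in the spirit of Figure 2 and of Liu et al.'s CBA-CB M1 procedure). It reuses the hypothetical Rule class from the ordering sketch above; the exact cover test and bookkeeping may differ from the LUCS-KDD code.

import java.util.*;

// Sketch of the simplified classifier builder described above. Reuses the
// hypothetical Rule class from the ordering sketch; details may differ from
// the LUCS-KDD implementation.
public class SimpleCBA_CB {

    /** True if the rule antecedent is contained in the record. */
    static boolean covers(Rule r, Set<Integer> record) {
        for (int attr : r.antecedent) {
            if (!record.contains(attr)) return false;
        }
        return true;
    }

    /**
     * Processes the ordered rule list R against the training records and returns
     * the index of the first rule with the lowest totalError value; rules after
     * that index are discarded and a default rule is appended.
     */
    static int process(List<Rule> R, List<Set<Integer>> records, List<Integer> classes) {
        boolean[] covered = new boolean[records.size()];
        int ruleErrors = 0;                          // errors made by the rules so far
        int bestIndex = -1, bestTotal = Integer.MAX_VALUE;

        for (int i = 0; i < R.size(); i++) {
            Rule r = R.get(i);
            // Mark the so far uncovered records that this rule satisfies and count
            // how many of them it classifies wrongly.
            for (int t = 0; t < records.size(); t++) {
                if (covered[t] || !covers(r, records.get(t))) continue;
                covered[t] = true;
                if (classes.get(t) != r.consequent) ruleErrors++;
            }
            // DefaultClass = majority class among the still uncovered records
            // (a full version would also record this class as the rule's DefaultClass).
            Map<Integer, Integer> dist = new HashMap<>();
            int remaining = 0;
            for (int t = 0; t < records.size(); t++) {
                if (covered[t]) continue;
                dist.merge(classes.get(t), 1, Integer::sum);
                remaining++;
            }
            int defaultCount = 0;
            for (int count : dist.values()) defaultCount = Math.max(defaultCount, count);
            int totalError = ruleErrors + (remaining - defaultCount);   // Terr for this rule
            if (totalError < bestTotal) { bestTotal = totalError; bestIndex = i; }
            if (remaining == 0) break;               // no more records to consider
        }
        return bestIndex;
    }
}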
Given the data set:

a b e
a e
b e
c d f
c f
d f
b c e
a d f

where e and f are class attributes, this would give a distribution array of [4,4] (the first element representing the class e and the second the class f). Using this data set the following ordered rule list (with confidence and support values given in parentheses) may be generated:

b -> e (100%, 3)
d -> f (100%, 3)
b c -> e (100%, 1)
a d -> f (100%, 1)
a -> e (67%, 2)
c -> f (67%, 2)
c -> e (33%, 1)
a -> f (33%, 1)

Note that the list (R) is ordered according to the above schema.
The rule list is then processed to identify the DefaultClass and Terr (total error) value for each rule. This processing is illustrated in the following table (processing is stopped after rule 4 because there are no more records to consider):
The first rule with the lowest total error rate (Terr) is b c -> e, so the final classifier becomes:

b -> e
d -> f
b c -> e
default f

The problem with the above basic algorithm is that it requires |R| passes through the dataset --- clearly undesirable. The actual algorithm described by Liu et al. significantly elaborates on the above with the aim of reducing the number of passes through the data set by determining, in advance, the values for the local distribution array L associated with each CAR. This is described in more detail in Sub-section 4.2 below.
In this sub-section the CBA algorithm is considered in further detail. It is difficult to contrive a simple example (such as that presented in Sub-section 4.1) that serves to demonstrate all the features of CBA, so the example given here uses a discretized/normalised version of the Pima Indians data set (also available from this site; the raw data can be obtained from the UCI Machine Learning Repository (Blake and Merz 1998)). The data was discretized/normalised using the LUCS-KDD DN software. For the example a confidence threshold of 75% and a support threshold of 1% were used. The data was split so that the first half was used as the training set and the second half as the test set. A maximum antecedent size of 5 attributes was also used. In the example 708 CARs (Classification Association Rules) are generated by Apriori-TFP before the antecedent size limit is reached. The ordered list of CARs is presented in Table 1. The process of generating the local distribution arrays and producing the final classifier is carried out in three stages:

1. Processing of the training set to identify a cRule and a wRule (where these exist) for each record, and to build the local class distribution array for each cRule.
2. Processing of the set A of records whose wRule has higher precedence than their cRule.
3. Processing of the resulting CBA rule list to determine default classes and total error counts, and hence the final classifier.
Each stage is described in greater detail below. |
Table 1: Initial CBA rule list with confidence presented as a percentage and support as a number of records
During Stage 1 the training set is processed and the appropriate cRule and wRule identified for each record. For every identified cRule the appropriate element in the local distribution array is incremented, so that the sum of all the local distribution arrays will be equivalent to the number of records in the training set (384 in this case). For those records where the wRule has higher precedence than the associated cRule, the record is stored in a structure (SetA) whose fields (see Table 3) comprise the record number (TID), the record's class label, and the associated cRule and wRule.
At the same time every rule (r) which is a cRule for at least one record is marked (r.isAcRule <-- true), and every rule (r) which is a "strong" cRule is marked (r.isAstrongCrule <-- true). Thus for each record if:
On completion of Stage 1: (i) all records which are wrongly classified (but for which there is a corresponding cRule) will have been collated into a linked list of SetA structures ready for further consideration, (ii) all rules which are "strong" cRules with respect to at least one record will have been identified, and (iii) for each cRule the number of records classified correctly will be known. Some sections of the rule linked list, with cRules and strong cRules marked, as of the end of Stage 1, are presented in Table 2. The sum of the distribution array elements ("class cases discovered") should equal the number of records for which a cRule was identified --- 326 in this case.

The last field shown in Table 2 represents the local distribution array for the rule (L). Thus rule 1 satisfies 0 records with class 37 and 6 records with class 38. Some rules, such as rule 1, are strong cRules; others, such as rule 330, are simply cRules because there exists a corresponding wRule, for a given record, which has higher precedence (in the case of rule 330 the higher precedence rule is rule 249 with respect to record 166). This can be confirmed from the set A (containing details of all misclassified records, such as record number 166), some sections of which are given in Table 3. Note also that many rules are not cRules with respect to any record. From Table 3 it can be seen that record number 166, which has the class label 37, is wrongly classified by the wRule {1 2 5 6 9} -> {38} (rule number 249), which has higher precedence than the corresponding cRule {1 4 5 10} -> {37} (rule number 330) --- and so on.
Table 3: Misclassified records (the set A), ordered in reverse

It may be the case that there are some records which are not satisfied by any rule, or which have only a cRule or a wRule. In the example given here there are 134 records that do not have an associated cRule (i.e. they cannot be correctly classified by any rule in the rule list). To increase the likelihood of every record having a cRule a low support threshold is recommended; however, this will result in the generation of many more CARs.
(1) {9 15} -> {38} (100.0%, 6.0) * STRONG cRule * [0,6]
(2) {6 9 13} -> {38} (100.0%, 6.0) * STRONG cRule * [0,5]
(3) {2 9 15} -> {38} (100.0%, 6.0) [0,0]
(4) {3 9 15} -> {38} (100.0%, 6.0) [0,0]
(5) {6 9 15} -> {38} (100.0%, 6.0) [0,0]
.................
(248) {2 4 6 8 10} -> {37} (84.61%, 11.0) [0,0]
(249) {1 2 5 6 9} -> {38} (84.09%, 37.0) * STRONG cRule * [0,29]
(250) {1 2 6 9} -> {38} (83.67%, 41.0) * STRONG cRule * [0,1]
(251) {8 17} -> {37} (83.33%, 5.0) * STRONG cRule * [3,0]
(252) {6 21} -> {38} (83.33%, 5.0) [0,0]
(253) {1 8 17} -> {37} (83.33%, 5.0) [0,0]
.................
(329) {2 4 5 6 9} -> {38} (80.95%, 34.0) [0,0]
(330) {1 4 5 10} -> {37} (80.95%, 17.0) * cRule * [2,0]
(331) {1 5 6 10} -> {37} (80.95%, 17.0) [0,0]
.................
(343) {23} -> {37} (80.0%, 4.0) * STRONG cRule * [4,0]
(344) {34} -> {38} (80.0%, 4.0) * STRONG cRule * [0,1]
(345) {7 17} -> {37} (80.0%, 4.0) * cRule * [1,0]
.................
(462) {1 4 5 6 8} -> {37} (79.37%, 177.0) [0,0]
(463) {2 3 9} -> {38} (79.36%, 50.0) * STRONG cRule * [0,2]
(464) {5 6 7 8} -> {37} (79.35%, 173.0) [0,0]
.................
(525) {3 5 8} -> {37} (77.77%, 203.0)
(526) {5 7 8} -> {37} (77.77%, 189.0)
(527) {1 5 7 8} -> {37} (77.77%, 189.0)
.................
(533) {8 18} -> {37} (77.77%, 7.0) [0,0]
(534) {1 16} -> {38} (77.77%, 7.0) [0,0]
(535) {6 16} -> {38} (77.77%, 7.0) [0,0]
.................
(564) {2 6 8} -> {37} (77.51%, 193.0) [0,0]
(565) {1 2 3 4 8} -> {37} (77.51%, 193.0) * cRule * [1,0]
(566) {2 4 6 8} -> {37} (77.48%, 179.0) [0,0]
.................
(643) {4 7 9} -> {38} (75.0%, 33.0) [0,0]
(644) {3 4 7 9} -> {38} (75.0%, 30.0) [0,0]
(645) {4 5 7 9} -> {38} (75.0%, 30.0) [0,0]
.................
(664) {6 22} -> {37} (75.0%, 6.0) * cRule * [1,0]
(665) {4 20} -> {38} (75.0%, 6.0) * STRONG cRule * [0,2]
(666) {4 21} -> {38} (75.0%, 6.0) [0,0]
.................
(704) {1 6 7 8 15} -> {37} (75.0%, 6.0) [0,0]
(705) {3 6 7 8 15} -> {37} (75.0%, 6.0) [0,0]
(706) {1 2 7 8 18} -> {37} (75.0%, 6.0) [0,0]
(707) {1 4 6 7 14} -> {38} (75.0%, 6.0) [0,0]
(708) {2 3 4 5 21} -> {38} (75.0%, 6.0) [0,0]
Table 2: Rule list at the end of Stage 1
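The following Java sketch summarises Stage 1 as described above. It reuses the hypothetical Rule class and rule ordering from the sketches in Section 4; the class, field and parameter names (SetARecord, firstClassAttr and so on) are illustrative rather than the identifiers used in the LUCS-KDD source.

import java.util.*;

// Hypothetical SetA entry: a record whose wRule has higher precedence than its cRule.
class SetARecord {
    int tid;          // record number (TID)
    int classLabel;   // the record's class attribute
    Rule cRule;       // highest precedence rule that correctly classifies the record
    Rule wRule;       // higher precedence rule that wrongly classifies it
}

public class CBAStage1 {

    static boolean covers(Rule r, Set<Integer> record) {
        for (int a : r.antecedent) if (!record.contains(a)) return false;
        return true;
    }

    /** Stage 1: returns the set A of records whose wRule outranks their cRule.
     *  Assumes each rule's localDist array has already been allocated;
     *  firstClassAttr is the number of the first class attribute (37 in the
     *  Pima example), so class 37 maps to index 0, class 38 to index 1. */
    static List<SetARecord> stage1(List<Rule> R, List<Set<Integer>> records,
                                   List<Integer> classes, Comparator<Rule> order,
                                   int firstClassAttr) {
        List<SetARecord> setA = new ArrayList<>();
        for (int t = 0; t < records.size(); t++) {
            int classLabel = classes.get(t);
            Rule cRule = null, wRule = null;
            for (Rule r : R) {                        // R is already precedence ordered
                if (!covers(r, records.get(t))) continue;
                if (r.consequent == classLabel) { if (cRule == null) cRule = r; }
                else                            { if (wRule == null) wRule = r; }
                if (cRule != null && wRule != null) break;
            }
            if (cRule == null) continue;              // no rule classifies this record correctly
            cRule.isAcRule = true;
            cRule.localDist[classLabel - firstClassAttr]++;    // update local distribution array
            if (wRule == null || order.compare(cRule, wRule) <= 0) {
                cRule.isAstrongCrule = true;          // the cRule has the higher precedence
            } else {                                  // wRule outranks cRule: remember for Stage 2
                SetARecord a = new SetARecord();
                a.tid = t; a.classLabel = classLabel; a.cRule = cRule; a.wRule = wRule;
                setA.add(a);
            }
        }
        return setA;
    }
}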
In Stage 2 we process the set A, the set of records that are not classified correctly. Note that for each "unreachable" cRule referenced in the set A the associated local distribution array has already been incremented accordingly. For each record in A there are two possibilities:
At the end of Stage 2 the sum of the counts in the local distribution arrays, taken over all the rules, should be equivalent to the number of records in the training set. Note that the intervening rules, if identified, are stored in an Overrides structure which has the following four fields:
Some sections of the rule linked list as of the end of Stage 2 are presented in Table 4 below. Note that some rules have a list of rules that they override. For example rule 463 overrides rule 664 with respect to TID 219. Note also that "strong" cRules are the only rules that may potentially be included in the final classifier. For example rule 565, which is a cRule but not a "strong" cRule, will not be included.
(1) {9 15} -> {38} (100.0%, 6.0) * STRONG cRule * [0,6]
(2) {6 9 13} -> {38} (100.0%, 6.0) * STRONG cRule * [0,5]
(3) {2 9 15} -> {38} (100.0%, 6.0) [0,0]
(4) {3 9 15} -> {38} (100.0%, 6.0) [0,0]
(5) {6 9 15} -> {38} (100.0%, 6.0) [0,0]
.................
(248) {2 4 6 8 10} -> {37} (84.61%, 11.0) [0,0]
(249) {1 2 5 6 9} -> {38} (84.09%, 37.0) * STRONG cRule * [1,29]
(250) {1 2 6 9} -> {38} (83.67%, 41.0) * STRONG cRule * [1,1]
(251) {8 17} -> {37} (83.33%, 5.0) * STRONG cRule * [3,0]
(252) {6 21} -> {38} (83.33%, 5.0) [0,0]
(253) {1 8 17} -> {37} (83.33%, 5.0) [0,0]
.................
(329) {2 4 5 6 9} -> {38} (80.95%, 34.0) [0,0]
(330) {1 4 5 10} -> {37} (80.95%, 17.0) * cRule * [0,0]
(331) {1 5 6 10} -> {37} (80.95%, 17.0) [0,0]
.................
(343) {23} -> {37} (80.0%, 4.0) * STRONG cRule * [4,0]
(344) {34} -> {38} (80.0%, 4.0) * STRONG cRule * [1,1]
(345) {7 17} -> {37} (80.0%, 4.0) * cRule * [0,0]
.................
(462) {1 4 5 6 8} -> {37} (79.37%, 177.0) [0,0]
(463) {2 3 9} -> {38} (79.36%, 50.0) * STRONG cRule * [1,2]
      Overrides linked list: {6 22} -> {37}, TID = 219, classLabel = 37
(464) {5 6 7 8} -> {37} (79.35%, 173.0) [0,0]
.................
(525) {3 5 8} -> {37} (77.77%, 203.0) [0,0]
(526) {5 7 8} -> {37} (77.77%, 189.0) [0,0]
(527) {1 5 7 8} -> {37} (77.77%, 189.0) [0,0]
.................
(533) {8 18} -> {37} (77.77%, 7.0) [0,0]
(534) {1 16} -> {38} (77.77%, 7.0) [0,0]
(535) {6 16} -> {38} (77.77%, 7.0) [0,0]
.................
(564) {2 6 8} -> {37} (77.51%, 193.0) [0,0]
(565) {1 2 3 4 8} -> {37} (77.51%, 193.0) * cRule * [1,0]
(566) {2 4 6 8} -> {37} (77.48%, 179.0) [0,0]
.................
(643) {4 7 9} -> {38} (75.0%, 33.0) [0,0]
(644) {3 4 7 9} -> {38} (75.0%, 30.0) [0,0]
(645) {4 5 7 9} -> {38} (75.0%, 30.0) [0,0]
.................
(664) {6 22} -> {37} (75.0%, 6.0) * cRule * [1,0]
(665) {4 20} -> {38} (75.0%, 6.0) * STRONG cRule * [0,2]
(666) {4 21} -> {38} (75.0%, 6.0) [0,0]
.................
(704) {1 6 7 8 15} -> {37} (75.0%, 6.0) [0,0]
(705) {3 6 7 8 15} -> {37} (75.0%, 6.0) [0,0]
(706) {1 2 7 8 18} -> {37} (75.0%, 6.0) [0,0]
(707) {1 4 6 7 14} -> {38} (75.0%, 6.0) [0,0]
(708) {2 3 4 5 21} -> {38} (75.0%, 6.0) [0,0]
Table 4: Rule list at the end of Stage 2
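The following Java sketch gives one reading of Stage 2, using the hypothetical Rule and SetARecord classes from the earlier sketches. Since the details of the two possibilities are not reproduced above, the two cases below follow Liu et al.'s description of how the set A is processed and may differ in detail from the LUCS-KDD code; the OverridesEntry class stands in for the Overrides structure.

import java.util.*;

// Hypothetical stand-in for the Overrides structure referred to above.
class OverridesEntry {
    Rule overriddenCRule;    // the lower precedence cRule being overridden
    int tid;                 // the record (TID) concerned
    int classLabel;          // that record's class
}

public class CBAStage2 {

    /** Stage 2: processes the set A and returns, for each rule, the list of
     *  cRules it overrides (the "Overrides linked list" of Table 4). */
    static Map<Rule, List<OverridesEntry>> stage2(List<SetARecord> setA, List<Rule> R,
                                                  List<Set<Integer>> records,
                                                  int firstClassAttr) {
        Map<Rule, List<OverridesEntry>> overrides = new HashMap<>();
        for (SetARecord a : setA) {
            int idx = a.classLabel - firstClassAttr;
            if (a.wRule.isAcRule) {
                // Case 1: the wRule is itself a cRule for some other record, so it is a
                // classifier candidate anyway; hand this record over to it.
                a.cRule.localDist[idx]--;
                a.wRule.localDist[idx]++;
            } else {
                // Case 2: every higher precedence rule that wrongly covers the record may
                // "intervene"; note this against each such rule for use in Stage 3.
                for (Rule r : R) {
                    if (r == a.cRule) break;           // stop once the cRule itself is reached
                    if (r.consequent != a.classLabel && CBAStage1.covers(r, records.get(a.tid))) {
                        OverridesEntry o = new OverridesEntry();
                        o.overriddenCRule = a.cRule; o.tid = a.tid; o.classLabel = a.classLabel;
                        overrides.computeIfAbsent(r, k -> new ArrayList<>()).add(o);
                    }
                }
            }
        }
        return overrides;
    }
}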
During Stage 3 the final list of classification rules is generated by processing the CBA rule list produced so far. Note that the default rule for the classifier is the last rule selected to be included. For each "strong" cRule in the list that correctly classifies at least one record (as a result of Stage 2 a strong cRule may no longer correctly classify any records) the rule is processed as follows:
The process commences with the generation of a global class distribution array which contains the number of training cases for each class: 250 records for class 37 and 134 for class 38 in this case. A number of sections from the rule list after Stage 3 of processing are given in Table 5 below. Note that the total number of errors decreases as the number of potential rules in the classifier increases.
The final classifier (Table 6) is then constructed by finding the first strong cRule (that satisfies at least one record) with the lowest total error in the CBA rule list. In the example rule 552 is the first rule with the minimum total error of 34. The final classifier then comprises all the rules up to and including the identified rule, plus a default rule that produces the default class associated with the identified rule. The given classifier produces an overall accuracy of 73.44%.
Table 6: The final classifier |
Figure 3: CBA stage 3 algorithm
(1) {9 15} -> {38} (100.0%, 6.0) * STRONG cRule * [0,6], D-Class = 37, T-errors = 128
(2) {6 9 13} -> {38} (100.0%, 6.0) * STRONG cRule * [0,5], D-Class = 37, T-errors = 123
(3) {2 9 15} -> {38} (100.0%, 6.0) [0,0]
(4) {3 9 15} -> {38} (100.0%, 6.0) [0,0]
(5) {6 9 15} -> {38} (100.0%, 6.0) [0,0]
.............
(248) {2 4 6 8 10} -> {37} (84.61%, 11.0) [0,0]
(249) {1 2 5 6 9} -> {38} (84.09%, 37.0) * STRONG cRule * [1,29], D-Class = 37, T-errors = 85
(250) {1 2 6 9} -> {38} (83.67%, 41.0) * STRONG cRule * [1,1], D-Class = 37, T-errors = 85
(251) {8 17} -> {37} (83.33%, 5.0) * STRONG cRule * [3,0], D-Class = 37, T-errors = 85
(252) {6 21} -> {38} (83.33%, 5.0) [0,0]
(253) {1 8 17} -> {37} (83.33%, 5.0) [0,0]
.............
(329) {2 4 5 6 9} -> {38} (80.95%, 34.0) [0,0]
(330) {1 4 5 10} -> {37} (80.95%, 17.0) * cRule * [0,0]
(331) {1 5 6 10} -> {37} (80.95%, 17.0) [0,0]
.............
(343) {23} -> {37} (80.0%, 4.0) * STRONG cRule * [4,0], D-Class = 37, T-errors = 81
(344) {34} -> {38} (80.0%, 4.0) * STRONG cRule * [1,1], D-Class = 37, T-errors = 81
(345) {7 17} -> {37} (80.0%, 4.0) * cRule * [0,0]
.............
(462) {1 4 5 6 8} -> {37} (79.37%, 177.0) [0,0]
(463) {2 3 9} -> {38} (79.36%, 50.0) * STRONG cRule * [0,2], D-Class = 38, T-errors = 44
      Overrides linked list: {6 22} -> {37}, TID = 219, classLabel = 37
(464) {5 6 7 8} -> {37} (79.35%, 173.0) [0,0]
.............
(551) {2 6 7 8} -> {37} (77.67%, 174.0) [0,0]
(552) {1 3 6 8} -> {37} (77.64%, 198.0) * STRONG cRule * [1,0], D-Class = 38, T-errors = 34
(553) {1 4 5 8} -> {37} (77.64%, 191.0) [0,0]
.............
(525) {3 5 8} -> {37} (77.77%, 203.0) [0,0]
(526) {5 7 8} -> {37} (77.77%, 189.0) [0,0]
(527) {1 5 7 8} -> {37} (77.77%, 189.0) [0,0]
.............
(533) {8 18} -> {37} (77.77%, 7.0) [0,0]
(534) {1 16} -> {38} (77.77%, 7.0) [0,0]
(535) {6 16} -> {38} (77.77%, 7.0) [0,0]
.............
(564) {2 6 8} -> {37} (77.51%, 193.0) [0,0]
(565) {1 2 3 4 8} -> {37} (77.51%, 193.0) * cRule * [1,0]
(566) {2 4 6 8} -> {37} (77.48%, 179.0) [0,0]
.............
(643) {4 7 9} -> {38} (75.0%, 33.0) [0,0]
(644) {3 4 7 9} -> {38} (75.0%, 30.0) [0,0]
(645) {4 5 7 9} -> {38} (75.0%, 30.0) [0,0]
.............
(664) {6 22} -> {37} (75.0%, 6.0) * cRule * [1,0]
(665) {4 20} -> {38} (75.0%, 6.0) * STRONG cRule * [0,2], D-Class = 38, T-errors = 34
(666) {4 21} -> {38} (75.0%, 6.0) [0,0]
.............
(704) {1 6 7 8 15} -> {37} (75.0%, 6.0) [0,0]
(705) {3 6 7 8 15} -> {37} (75.0%, 6.0) [0,0]
(706) {1 2 7 8 18} -> {37} (75.0%, 6.0) [0,0]
(707) {1 4 6 7 14} -> {38} (75.0%, 6.0) [0,0]
(708) {2 3 4 5 21} -> {38} (75.0%, 6.0) [0,0]
Table 5: Rule list at the end of Stage 3
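The following Java sketch summarises Stage 3 (the algorithm of Figure 3), reusing the hypothetical Rule and OverridesEntry classes and the overrides map from the earlier sketches. The D-Class / T-errors bookkeeping mirrors Table 5; the handling of the overrides entries follows Liu et al.'s description and may differ in detail from the LUCS-KDD code.

import java.util.*;

public class CBAStage3 {

    /** Stage 3: returns the index of the last rule to keep in the final
     *  classifier; a default rule predicting that rule's D-Class is appended.
     *  globalDist holds the number of training cases per class, e.g. [250, 134]. */
    static int stage3(List<Rule> R, Map<Rule, List<OverridesEntry>> overrides,
                      int[] globalDist, int firstClassAttr) {
        int[] remaining = globalDist.clone();      // training cases not yet covered
        int ruleErrors = 0;                        // errors made by the rules kept so far
        int bestIndex = -1, bestTotal = Integer.MAX_VALUE;

        for (int i = 0; i < R.size(); i++) {
            Rule r = R.get(i);
            if (!r.isAstrongCrule) continue;
            int own = r.consequent - firstClassAttr;
            // Claim back any cases this rule overrides from lower precedence cRules.
            for (OverridesEntry o : overrides.getOrDefault(r, Collections.emptyList())) {
                int idx = o.classLabel - firstClassAttr;
                if (o.overriddenCRule.localDist[idx] > 0) {    // case still credited to its cRule
                    o.overriddenCRule.localDist[idx]--;
                    r.localDist[idx]++;
                }
            }
            if (r.localDist[own] == 0) continue;   // no longer correctly classifies any record
            // Errors this rule makes = cases it satisfies that belong to other classes.
            for (int c = 0; c < r.localDist.length; c++) {
                remaining[c] -= r.localDist[c];    // these cases are now covered
                if (c != own) ruleErrors += r.localDist[c];
            }
            // D-Class = majority class among the remaining (uncovered) cases.
            int dClass = 0;
            for (int c = 1; c < remaining.length; c++) if (remaining[c] > remaining[dClass]) dClass = c;
            int defaultErrors = 0;
            for (int c = 0; c < remaining.length; c++) if (c != dClass) defaultErrors += remaining[c];
            int totalErrors = ruleErrors + defaultErrors;      // the T-errors value of Table 5
            if (totalErrors < bestTotal) { bestTotal = totalErrors; bestIndex = i; }
        }
        return bestIndex;
    }
}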
5. CONCLUSIONS |
The LUCS-KDD implementation of the CBA algorithm described here has been used successfully by the LUCS-KDD research group to contrast and compare a variety of classification algorithms and techniques. The software is available for free; however, the author would appreciate appropriate acknowledgment. The following reference format for referring to the LUCS-KDD implementation of CBA, available from this WWW page, is suggested:
Should you discover any "bugs" or other problems within the software (or this documentation), do not hesitate to contact the author.
REFERENCES |
Created and maintained by Frans Coenen. Last updated 5 May 2004