COMP101 - INTRODUCTION TO PROGRAMMING IN JAVA

ASSESSMENT CRITERIA FOR COMP101 PRACTICALS

(Notes for Tutors)

May 2005

1. Overview
2. Analysis and design
3. Implementation
4. Testing
5. Report



1. OVERVIEW

Each practical should be awarded a mark between 0 and 100 inclusive, allocated according to the following headings:

  1. Analysis and design (30%)
  2. Implementation (30%)
  3. Testing (30%)
  4. Report (10%)

The "requirements" for each practical, as presented to the students, will include reference to the design and testing techniques that are expected to be used. An average COMP101 student should expect to get between 57% and 63%.

Some guidance on what students are expected to hand in for COMP101 practicals can be found on the COMP101 tutor's WWW page. Please also refer to the notes on the "COMP101 OO Development Methodology" available at:

http://www.csc.liv.ac.uk/~frans/COMP101/guidanceNotes.html

All submissions are electronic; if a student does not submit electronically then he/she should be awarded a mark of 0 (unless there are some very special extenuating circumstances).

Markers are encouraged to use the full range of marks. It should be entirely possible for a student to get 100%, especially on the first exercise. As a guideline, where a student receives 20 for (say) the design, the assessor should be able to explain to the student what else he/she would have had to do, given the time available for the exercise, to receive the full allocation of marks if given a second chance.




2. ASSESSMENT OF ANALYSIS AND DESIGN

With practitioners and academics still arguing over the "finer points" of what constitutes good design, it is of course difficult to provide definitive guidelines. However, in the context of COMP101, the analysis/design should commence with a high-level description of the classes, and of the attributes and methods of those classes, presented in the form of one or more UML-style class diagrams. The high-level design should demonstrate a sensible allocation of attributes and methods that reflects the requirements with which the students are presented. Names for classes, attributes and methods should be descriptive, i.e. convey meaning (h for height is probably inappropriate, as is myMethod or method1).

The detail of each field and method should then be presented in tabular form, grouped according to the class they belong to. COMP101 students are encouraged to present class details in a tabular manner similar to that adopted by Sun for their J2SE 1.5 API documentation. Each field and method should thus be accompanied by a one or two sentence summary of its functionality (i.e. what it does/is for).
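
For example (the class, fields and methods here are invented for illustration, not taken from any particular practical), the details of a simple Counter class might be tabulated as follows:

Class: Counter

Field      Summary
---------  -------------------------------------------------
count      The current value of the counter.

Method     Summary
---------  -------------------------------------------------
increment  Increases the counter by one.
reset      Sets the counter back to zero.
getCount   Returns the current value of the counter.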

Detailed method design should be in the form of Nassi-Shneiderman charts. (This is an imperative technique, but is simple and encourages students to think about the structure of their methods.)
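
By way of a rough ASCII approximation (a real chart would of course be drawn as nested boxes, and this example is invented), a simple totalling loop might be charted along the following lines:

+----------------------------------+
| total = 0                        |
+----------------------------------+
| WHILE more marks to read         |
|   +------------------------------+
|   | read next mark               |
|   +------------------------------+
|   | add mark to total            |
+---+------------------------------+
| print total                      |
+----------------------------------+

The nesting of the boxes makes the structure of the method (sequence, loop body, etc.) immediately visible.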

Note that methods should ideally perform a single function (it is better to have a number of readily understandable small methods than one "rambling" long method). A good indication that a method is "too big" is if all the details of a method cannot be clearly fitted on to a single Nassi-Shneiderman chart. A chart that extends over one page is clearly too big (in the same manner that methods that require more than one "page" of implementation are too big).
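
As a hypothetical illustration (the class and method names here are invented, not taken from any practical), a single long method that reads marks, computes their average and prints a report is better broken down into three small, single-purpose methods:

  /** A hypothetical sketch: three small, readily understandable
      methods rather than one long "rambling" method. */
  public class MarksReport {

      private static final int NUM_MARKS = 5;

      public static void main(String[] args) {
          int[] marks = readMarks();
          double average = computeAverage(marks);
          printReport(average);
      }

      /** Reads NUM_MARKS marks from standard input. */
      private static int[] readMarks() {
          java.util.Scanner scanner = new java.util.Scanner(System.in);
          int[] marks = new int[NUM_MARKS];
          for (int i = 0; i < NUM_MARKS; i++) {
              marks[i] = scanner.nextInt();
          }
          return marks;
      }

      /** Computes the mean of the given marks. */
      private static double computeAverage(int[] marks) {
          int total = 0;
          for (int mark : marks) {
              total = total + mark;
          }
          return (double) total / marks.length;
      }

      /** Prints the average in a simple report format. */
      private static void printReport(double average) {
          System.out.println("Average mark: " + average);
      }
  }

Each of the three methods fits comfortably on a single Nassi-Shneiderman chart.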

For more complex programs a UML Activity Diagram will also be expected.

When assessing a design ask yourself the question "given this design, is there sufficient information to allow me to implement it in an OO programming language of my choice?" If the answer is "No" then there is clearly something amiss.

Design Assessment Descriptors

25-30  Excellent      Complete design that accurately reflects the requirements, displays all the features of good design outlined above and uses correct notation (some minor omissions are acceptable). Given the design, the assessor should (in theory) have no difficulty implementing it.
20-24  Good           The design should be more or less complete, reflect the requirements and display many of the features of good design outlined above. Some errors in notation are acceptable as long as the intention is clear. The assessor should have a clear understanding of the student's intentions (i.e. given the design, the assessor should be able to produce a working program).
15-19  Moderate       Design may include some omissions and/or inaccuracies, or may simply be wrong with respect to some part of the requirements; however, the general gist of the design should still be understandable. The assessor should, given the design and a little guesswork, be able to implement it. The design may also infringe some of the features of good design outlined above.
10-14  Poor           Design only partially complete, or not fully implementable for some other reason. In addition the design will probably display only a few of the features of good design outlined above.
1-9    Failed         Design incomprehensible, although the student has clearly attempted to convey some notion of a design and has made attempts at class diagrams, Nassi-Shneiderman charts, etc. In the worst case the student will have made some effort to convey some ideas, possibly in the form of: (1) some descriptive text, (2) a list of class and/or method names, (3) one or more rudimentary class diagrams, or (4) something that might be construed as either a Nassi-Shneiderman chart or an Activity Diagram.
0      No submission  Student has not handed in any design.



3. ASSESSMENT OF IMPLEMENTATION

The implementation should follow on from the design. The students should be clear that design and implementation are two parts of the software engineering process and that the first should flow into the second. If the implementation bears no resemblance to the design, something is clearly wrong.

Overall the implementation should be well laid out, "readable" and consequently understandable. The students have been told that "readability enhances understandability, and understandability enhances maintainability". To this end the implementation should:

  1. Use sensible indentation in such a way as to aid clarity (and consequently readability).
  2. Make the start and end of methods and blocks obvious (i.e. the assessor should not have to resort to counting closing brackets, etc.).
  3. Introduce each method with a comment; the program itself and program blocks contained within methods (i.e. the starts of selection statements or loops) should likewise be commented.
  4. Use named constants rather than "magic numbers" (see the sketch after the guideline below).

As a guideline: "if a program looks good it probably is good".
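
As a hypothetical sketch of points 3 and 4 (the class name, constant and values are invented), compare the named constant PASS_MARK below with the alternative of scattering the magic number 40 through the code:

  /** Hypothetical example: commented blocks and a named constant
      in place of a "magic number". */
  public class PassChecker {

      /** The pass mark for the module. Using a named constant means
          the value 40 appears (and can be changed) in one place only. */
      private static final int PASS_MARK = 40;

      public static void main(String[] args) {
          int mark = 57;

          // Selection statement introduced by a comment: report
          // whether the mark is a pass or a fail.
          if (mark >= PASS_MARK) {
              System.out.println("Pass (" + mark + ")");
          } else {
              System.out.println("Fail (" + mark + ")");
          }
      }
  }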

Implementation Assessment Descriptors

25-30  Excellent      Fully working program that flows from the design and displays all the features of good implementation outlined above (some minor omissions acceptable).
20-24  Good           Working program that more or less follows on from the design and displays many of the features of good implementation outlined above.
15-19  Moderate       A working program, but one that does not entirely reflect the design (i.e. different/additional methods and/or constructs). In addition the implementation may include significant infringements of the features of good implementation outlined above.
10-14  Poor           Non-working program (does not compile) or partially working program (i.e. only works given a particular kind of input). The implementation will probably also include many infringements of the features of good implementation outlined above.
1-9    Failed         Unfinished implementation. Evidence that the student has made some attempt to write some code, possibly only a class definition or one or more methods. In the worst case the student may have only written a few comments or other lines of code.
0      No submission  Student has not done any implementation.

Note: if a student insists on embedding their source code in their Word document (i.e. in such a way that it makes it difficult for you to compile and run the code should you wish to do so) then on the "first offence" you should reduce the mark for the implementation by 50% and issue a warning to the student; on further "offences" award the student 0 for the implementation component of the overall mark, i.e. as if no implementation had been submitted.



4. ASSESSMENT OF TESTING

This is the proof that the program works as specified by the requirements. To demonstrate this, students should first specify a set of test cases that exercise all parts of the program and test the limits of input values, loop counters, etc. Remember that a good test case is one that is likely to unearth an error, not one that is guaranteed to work!
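
For instance (a hypothetical sketch; the method under test and the chosen limits are invented), a student testing a method that classifies a percentage mark might choose boundary cases just outside, on and just inside each limit, and run them from a small test driver:

  /** Hypothetical test driver exercising the limits of the input
      values (boundaries at 0, the pass mark and 100). */
  public class TestClassify {

      private static final int PASS_MARK = 40;

      /** Classifies a percentage mark (invented for illustration). */
      static String classify(int mark) {
          if (mark < 0 || mark > 100) {
              return "invalid";
          } else if (mark < PASS_MARK) {
              return "fail";
          } else {
              return "pass";
          }
      }

      public static void main(String[] args) {
          // Boundary test cases: -1 and 101 should be rejected;
          // 0, 39, 40 and 100 should be classified correctly.
          int[] testCases = {-1, 0, 39, 40, 100, 101};
          for (int mark : testCases) {
              System.out.println(mark + " -> " + classify(mark));
          }
      }
  }

The output of such a run is what should then be handed in as evidence.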

Secondly, the student should produce the results of running the test cases, i.e. program output. Occasionally students attempt to falsify their output, either because the program does not compile or work as expected, or for some other reason such as plagiarism. If, from inspection of the implementation, you believe that the output could clearly not have been produced by the presented code then no credit should be given for the results.

Note that if the output (as presented to the assessor) is not the same as the expected output (due to some unforeseen programming error) the student should give some explanation or conjecture as to the reason for this anomaly.

Testing Assessment Descriptors

25-30  Excellent      Complete set of test cases with corresponding generated output (not applicable if the output could not have been generated by the given code).
20-24  Good           Most aspects of the implementation tested, with corresponding output (not applicable if the output could not have been generated by the given code).
15-19  Moderate       Some testing with corresponding output, but a number of significant test cases absent (not applicable if the output could not have been generated by the given code).
10-14  Poor           Either: (a) only very limited testing (2 or 3 test cases) carried out with little thought given to the construction of "good" test cases; or (b) a complete set of test cases but no corresponding output, or output that could not have been generated by the given code.
1-9    Failed         Evidence that at least some thought has been given to testing, for example some written text or a suggestion of one or two test cases.
0      No submission  No evidence of testing.



5. ASSESSMENT OF REPORT

The report should be presented in a logical manner: design, implementation and testing. Nassi-Shneiderman charts, and class and activity diagrams, do not have to be produced using a graphics package (students have enough to do as it is). The test cases, together with the test results, should be included in the report (code is submitted separately). Each submission should be accompanied by a "Declaration on Plagiarism and Collusion" (this is University policy) completed by the student.

Give a mark between 1 and 10 for the clarity and layout of the report; an average report should be awarded a 7.




Created and maintained by Frans Coenen. Last updated 15 December 2006