G51SWT Feedback 2003

Attendance at the weekly lab sessions was drastically lower than last year (when there was an assessed coursework each week), and the average coursework marks were also well below last year's (possibly as a result).
1st Coursework
Most people who attempted this managed reasonably well, and the average mark was around 50%.
2nd Coursework
The average mark for this was around 60%, partly because an element of group work was involved and most groups got nearly full marks for that aspect.
3rd Coursework
The average mark was around 40%, which was disappointing. This was partly because the 3rd exercise required the use of recursion, which prevented several people from progressing to the 4th exercise (worth half the marks). Another contributory factor was clearly that, for many students, this was the first time they had attempted to write a Perl program, despite having been set weekly non-assessed exercises throughout the module and being warned that this coursework would be difficult if those exercises were not attempted.
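For illustration only (this is not the actual coursework exercise), a minimal sketch of the sort of recursive Perl subroutine the weekly exercises were building towards:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Illustrative sketch only, not the coursework exercise:
    # compute the length of a list recursively.
    sub list_length {
        my @items = @_;
        return 0 if @items == 0;         # base case: empty list
        shift @items;                    # remove one element ...
        return 1 + list_length(@items);  # ... and recurse on the rest
    }

    print list_length(qw(a b c d)), "\n";   # prints 4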

The Exam

Performance in the exam was disappointing and suggested, among other things, a basic lack of exam technique, with many students unable to reproduce basic definitions given in the lectures and module notes.

Roughly equal numbers of students attempted questions 2, 3 and 4, with slightly fewer (around 70, as opposed to around 100) attempting question 5.

Question 1
This was compulsory, with an average mark of around 37%. The students had been warned in advance that it would be a comprehension question based on the sample solution to one of the non-assessed courseworks. It was therefore disappointing that only about three students managed to get full marks for the first part (describe in your own words what this program does).
Question 2
This was a question on structural testing. The average mark was around 35%. The biggest problem was that many students did not know the terminology and so were unable to produce test sets for statement and edge coverage of the given code. There were also a number of students who did not know how to represent loops in control-flow graphs, or who were confused by the layout of the code into thinking it started with some sort of three-way condition.
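As a point of reference only (this is not the fragment from the exam paper), a small Perl example showing how test sets for the two coverage criteria differ:

    # Illustrative sketch only, not the code from the exam paper:
    # count the non-negative entries in a list.
    sub count_non_negative {
        my @nums  = @_;
        my $count = 0;
        foreach my $n (@nums) {   # loop: edges into the body and out to the exit
            if ($n >= 0) {        # branch: true and false edges
                $count++;
            }
        }
        return $count;
    }

    # Statement coverage: every statement executed at least once,
    # e.g. the single test count_non_negative(1).
    # Edge coverage: every edge of the control-flow graph taken at
    # least once, which additionally requires the false branch of the
    # if, e.g. count_non_negative(1, -1).
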
Question 3
A design question. This was probably the hardest on the paper and certainly the one with the least clearly defined answers. The average mark was again around 35%. Most students were let down by a lack of revision: they did not know what a Blackboard architecture was, nor were they able to suggest using a client-server architecture for a standard web application, even though both were covered in the notes.
Question 4
eXtreme Programming. This was the least technical question, which is perhaps reflected in its slightly higher average of 40%. Again, the biggest problem was a lack of knowledge of the basic terms and concepts involved in XP.
Question 5
This was the most technical question and was attempted by correspondingly fewer students. However, in general it was the more able students who tackled it, resulting in an average of 60%. It was a basic Perl programming exercise with, on the whole, pleasing answers.

The exam revealed a need for clearer notes/guidance on program architectures and a greater emphasis on textbook exercises in this area. However, it also revealed a fundamental lack of knowledge of basic terminology, which prevented possibly otherwise able students from demonstrating that they could use the techniques and methodologies involved. It is hard to know how this can be redressed.


Louise Dennis
Last modified: Thu Jun 5 10:12:52 BST 2003