Louise Dennis: Teaching Portfolio

G51SWT

I have taught G51SWT, Software Methods and Tools, in every year I have worked at Nottingham and have approached it slightly differently in each of those years. In some ways I regret this, since it makes it hard to use the module as a guide to any improvement in my actual lecturing or other support abilities; on the other hand, the changes I have made year on year have generally been prompted by critical reflection on the previous one, and this sort of fluidity is perhaps to be expected of a novice lecturer approaching a new module.

Module Planning

Software Methods and Tools is a first year module whose learning outcomes remain somewhat opaque, even to me. I have interpreted them as competence in a couple of programming languages, the ability to use certain software tools, the ability to apply some technical software engineering methods and an understanding of some limited software engineering theory. However, I remain uncertain about whether this is really the best role the module can play within the context of the school as a whole.

When I was appointed I inherited a set of slides from the previous module convener. These involved teaching three programming languages, Perl, Tcl/Tk and Visual Basic, in a Windows environment; the syllabus described a module which taught basic UNIX tools and programming; and the module textbook was a general Software Engineering textbook. It was quite difficult to reconcile these diverse materials. However, the previous convener said that module feedback suggested the students felt overloaded by the number of programming languages, and I felt it important to deliver a module more in line with the published syllabus. As a result I pruned away Tcl/Tk and Visual Basic and used the space to reinstate some of the UNIX material, updating it in places. Prompted by the nature of the module textbook and my observations about the problems second years were having in structuring their group Software Engineering projects, I also included some general Software Engineering information.

Individual Lectures

When I first started creating the module I was fortunate to find, on the Internet, a module very similar to the one I had in mind, offered by Kenneth Anderson at the University of Colorado at Boulder and also titled Software Methods and Tools; he was very happy for me to take and use his material. This formed the basis of the lecture material delivered in my first year. As with G53DDB I placed a student-centred exercise in the middle of each lecture. As there was no exam I used instead a "Quick Quiz" with answers provided on the module website after the lecture. Students were encouraged to write down their answers and hand them in, and I gave a prize at the end of the module to the three students who had consistently performed best on these exercises. I was very pleased with these in my first year; however, in the second year the number of students participating dropped off dramatically (for no reason I could satisfactorily ascertain) and I scrapped them half-way through the module, as they appeared to be causing discipline problems in the lectures, in terms of talking and general activity while I lectured, without delivering any real gains. One student commented that he felt it was a pity as he had personally found them useful. I may reinstate them in future but at present feel they require too much effort to administer for relatively little gain and carry with them a risk that students come to treat the lectures as a general opportunity for chat.

In the second year I left the lecture material largely untouched, though I made some minor changes in response to problems I had observed the previous year. However, prompted by concerns about plagiarism, I introduced an exam; I had also been concerned that the previous courseworks had not greatly assessed the software engineering content of the module. Performance in this exam was very poor. In all it was a fairly sobering experience. The following year I attempted to tie the lecture material much more closely to a textbook (and switched textbooks to one I felt more suitable for this) so that the students had an alternative source of revision material to the published lecture slides. These slides tended to contain only outline points, relying on my own discussion in the lectures to fill in the real meat of the information.

I have also experimented with lectures which adopt a worked-example paradigm (backed up by a recommendation that students read the relevant background material in the module textbook) rather than a distribution of information. This is partly in order to motivate use of the textbook and partly as an ideological exercise in the purpose I felt a lecture should serve. In these lectures students see me attempting sample exercises. These have been particularly successful for programming exercises, where the actual process of producing a program is not obvious from examples of completed programs. The important thing has been to be "honest", i.e. to report mistakes and wrong turnings in equal measure to correct solutions. I prepare the lectures by setting myself and then attempting the exercises, and write this up as a handout which is distributed in the lecture but not made available from the module website (this is partly a device to encourage lecture attendance, and partly because I am unwilling for such material to be used unthinkingly as revision material without students having actually witnessed what I am doing with it). I find it harder to prepare such lectures effectively for non-programming material, since the final arbiter of whether an answer is right or wrong is my own judgement, not the programming language compiler, and so mistakes and wrong turns are, in general, fewer and further between. I hoped that these lectures would encourage textbook use and accustom students to seeing the relevant techniques actually used rather than simply described.

SET/SEM and verbal module feedback indicate that this approach is popular, although exam performance suggests it doesn't translate well into knowledge (or at least examinable knowledge). The areas that appear to translate particularly badly are those in which similar exercises cannot easily be attempted by the students (or, perhaps more importantly, on which it is not easy for them to get feedback prior to the exam). I must confess I am not clear in my mind about the best way to proceed here, but for next year I am backing up the examples lectures in these areas with a more traditional fact-delivery lecture on the same topic. I am also hoping to attract some third year project students to work on online self-test facilities for this material.

An obvious way to back up this material would be through the provision of tutorials (in CSiT this is a group class which works through exercises on paper, rather than on a computer, either in advance of the tutorial or in the tutorial itself with the guidance of a tutor). Unfortunately there is not enough appropriate technical material in this module to justify weekly tutorials, and experience suggests that less frequent tutorials exacerbate the problem of student non-attendance because students forget in which weeks they are supposed to be present.

Exam performance was slightly improved on the previous year and, taken with the coursework performance, showed an average over 10% higher for the whole module, which was pleasing. I still feel the delivery of the software engineering content needs to be improved and will be rewriting some of the lectures more radically, providing some ongoing examples through the module and adding some more straightforward lectures to back up the example-based ones in order to facilitate this.

e-Learning Resources

The module is backed up by an e-Learning resource described more fully in Developing Learning Environments. This was originally created for me by Jasdeep Kalsi, a third year project student. As well as the normal content you would expect for a module website (lecture slides, previous exam papers, etc.) it contains a noticeboard for communicating with the students, facilities for the automatic submission of coursework and a feedback mechanism.

The noticeboard is useful for answering student queries in a manner in which the information becomes, and remains, accessible to all students. Students primarily use it to ask questions about summative courseworks and it has the added advantage that it allows the more able students to participate in teaching the less able, thus reinforcing their own learning. I also believe, although I have no direct evidence of this, that seeing messages about the next coursework appearing on the noticeboard prompts some students to start working on it sooner than they might if left entirely to their own devices.

All the coursework for the module is programming based and is run on automatic test rigs. In this situation a mechanism for its automatic submission makes good sense. In particular the submission mechanism allows the students to view the output of their program running through a version of the test rig. This lets them detect trivial mistakes that mean their program only runs in their personal environment and, at the same time, prevents them from being able to use the excuse "it was running when I submitted it".
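The essence of such a rig is simple. The Python sketch below conveys the general flavour only; the test cases, file name and invocation of Perl are illustrative assumptions on my part here, not a reproduction of the actual rig used for the module.

    import subprocess

    # Hypothetical test cases (stdin fed to the program, expected stdout).
    # A real rig's cases are, of course, exercise-specific.
    TEST_CASES = [
        ("3 4\n", "7\n"),
        ("10 -2\n", "8\n"),
    ]

    def run_rig(program="submission.pl"):
        """Run a submitted program against each test case and report the outcome."""
        passed = 0
        for i, (stdin_text, expected) in enumerate(TEST_CASES, start=1):
            try:
                result = subprocess.run(
                    ["perl", program],      # the module's exercises were Perl based
                    input=stdin_text,
                    capture_output=True,
                    text=True,
                    timeout=10,             # guard against non-terminating submissions
                )
            except subprocess.TimeoutExpired:
                print(f"Test {i}: FAILED (timed out)")
                continue
            if result.stdout == expected:
                passed += 1
                print(f"Test {i}: passed")
            else:
                # Showing the actual output lets students spot mistakes that
                # only manifest outside their personal environment.
                print(f"Test {i}: FAILED expected {expected!r} got {result.stdout!r}")
        print(f"{passed}/{len(TEST_CASES)} tests passed")

Letting students see the output of essentially this process at submission time is what removes the "it was running when I submitted it" excuse.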

Lastly, the tutor back end of the system allows me to view the output from the programs, and the programs themselves, online, preventing the printing out of large swathes of code and test results. Coupled with very explicit mark schemes this makes it straightforward for me to mark each program by filling in check boxes and comment boxes in an online form; the system then calculates the marks (removing the chance of trivial arithmetic errors) and returns the completed form, including any comments I've made, to the students' section of the website. This provides a simple mechanism by which the students get relatively detailed feedback on how their marks were arrived at while creating little extra work for the marker. This contrasts strongly with the mechanism I used for feedback in my first year as module convener, in which I emailed feedback individually to students on request by copying it from the notes I had made on print-outs of their programs. That process was exceptionally time-consuming.

Labs

The module is supported by weekly labs: for this module these are large group sessions where students work at computers on programming exercises while demonstrators are available for trouble-shooting. I have provided formative "lab exercises" and there are also summative "coursework exercises", either of which may be attempted in labs. In every year I have been present at the majority of lab sessions. These seem to be valuable to the students who do attend, but the number actually working on the formative exercises appears very small before revision week (the formative exercises are examined in order to provide an extra incentive to attempt them). If appropriate computer-mediated exercises for the non-programming technical aspects of the module could be provided then the labs would provide an ideal environment for the students to attempt and get feedback on those aspects of the module, though as this would again be a formative exercise it might benefit only a small number of students. SET/SEM feedback this year indicated dissatisfaction with the lab provision, in particular a desire to be "taught something" in the labs. I find it hard to know how to react to this since the labs are designed explicitly as a student-centred experience in which students are not "taught" but have an opportunity to access help with their own "learning". In future I intend to make this clearer in the introductory lecture for the module.

Weekly Exercises

The original module syllabus stated that G51SWT was to be assessed by coursework only. Possibly unwisely, I decided to do this through weekly programming exercises, reasoning that a steady drip of work designed in an incremental fashion would be more useful and more approachable for first years than one or two large assignments. The students worked well on the assessments, initially at least, although application dropped off towards the end, especially among students who knew from earlier feedback that they had already passed the module. Basic plagiarism detection techniques also highlighted widespread collusion, leading to a fair amount of concern over whether students really deserved the marks they were achieving and eating up a great deal of my time in meetings with students where I queried them about their coursework submissions. I had also provided formative exercises and designed these to precede the weekly assessed exercises (they were not prerequisites, but my intention was that the material covered in the formative exercises would help with the assessed exercises). I observed that many students did not attempt the formative exercises and that even the brightest students were doing the assessed exercise first and only then attempting the formative one. I concluded that the module was over-assessed and that, in particular, the problem of plagiarism meant that it was important to introduce an exam to guarantee that a substantial proportion of the marks were gained individually. I also felt an exam would be useful in assessing the more software engineering oriented aspects of the module, which could not easily be assessed by programming exercises.
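For the curious, "basic techniques" here means roughly the following: normalise away superficial differences and compare submissions pairwise. The Python sketch below is an illustrative reconstruction of that idea; the normalisation rules, shingle size and threshold are my assumptions, not the actual checks used on the module.

    import re
    from itertools import combinations

    def normalise(source):
        """Strip comments, string literals and whitespace so that trivially
        disguised copies still look alike."""
        source = re.sub(r"#.*", "", source)        # drop Perl/shell-style comments
        source = re.sub(r'"[^"]*"', '""', source)  # blank out string literals
        return re.sub(r"\s+", " ", source).strip().lower()

    def similarity(a, b):
        """Jaccard similarity over 5-character shingles of normalised text."""
        shingles = lambda s: {s[i:i + 5] for i in range(max(len(s) - 4, 1))}
        sa, sb = shingles(a), shingles(b)
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    def flag_pairs(submissions, threshold=0.8):
        """Return pairs of students whose submissions look suspiciously alike.

        `submissions` maps student id to program source. Flagged pairs are a
        starting point for a conversation, not proof of collusion."""
        normed = {who: normalise(text) for who, text in submissions.items()}
        return [(x, y, round(similarity(normed[x], normed[y]), 2))
                for x, y in combinations(sorted(normed), 2)
                if similarity(normed[x], normed[y]) >= threshold]

It is precisely because such checks only flag candidates that each flagged pair consumed time in follow-up meetings.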

The shift to fewer summative exercises resulted in a dramatic drop in lab attendance (except in the labs immediately preceding a deadline) and performance in the three courseworks that remained was poor. Although performance in the summative coursework was disappointing, I felt this was largely due to the last coursework and that its primary problem lay in the order in which the tasks were set. As a result the courseworks were relatively unchanged this year, but the minor tweaks appeared to result in a much better performance from the students. Another innovation, giving explicit guidance on time management with the exercises, also improved the lab attendance, with evidence that students were starting work on the exercises at a more sensible point in time. It is still clear, however, that few attempt the formative exercises in their appointed weeks, although several do appear prompted to attempt them when faced with a specific, related problem in a summative coursework, and the better time management skills on display mean they now have the luxury of time in which to do this.

Feedback on Exercises

For reasons outlined in the PGCHE group project I use a mixture of automated test rigs and human inspection to assess the summative exercises. I have gradually moved towards the use of extremely transparent marking schemes for these. Originally I reserved several marks for "good style" but found I was getting into arguments with students over what constituted "good style"; for instance, there were frequently disagreements over the respective values of code readability and efficiency. I also reasoned that while style might be an important quality to be developed later in the course, in the first year we were more interested in basic ability. As a result I took to publishing the mark scheme with the exercise. This has the added advantage that if I wish the students to demonstrate the ability to use some particular aspect of a programming language I can specify its use in the mark scheme without necessarily having to think up an exercise which requires its use. These clear mark schemes have also allowed me to develop an online marksheet which I can fill in for each student and which then automatically calculates their mark and generates feedback for the student on their progress. This system is discussed in Developing Effective Learning Environments and Learner Support Systems. While more tailored, individualised feedback would be desirable, this mechanism provides a practical route for providing a breakdown of performance to each student and there is scope within it for providing individual comments where appropriate.
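The arithmetic behind such a marksheet is deliberately trivial. The Python sketch below conveys the idea; the criteria and weights are invented for illustration and do not reproduce any actual published mark scheme from the module.

    # Hypothetical mark scheme: each criterion carries a weight in marks.
    MARK_SCHEME = {
        "compiles and runs": 2,
        "correct output on the test rig": 4,
        "uses a subroutine as specified": 2,
        "handles malformed input": 2,
    }

    def mark_submission(ticked, comments=""):
        """Total the marks for ticked criteria and generate a feedback breakdown.

        `ticked` maps each criterion to True/False as checked on the online
        form; summing mechanically removes trivial arithmetic errors."""
        total = sum(w for crit, w in MARK_SCHEME.items() if ticked[crit])
        lines = [f"[{'x' if ticked[c] else ' '}] {c}: "
                 f"{w if ticked[c] else 0}/{w}"
                 for c, w in MARK_SCHEME.items()]
        lines.append(f"Total: {total}/{sum(MARK_SCHEME.values())}")
        if comments:
            lines.append(f"Comments: {comments}")
        return total, "\n".join(lines)

It is the completed breakdown, rather than a bare number, that is returned to the student's section of the website.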

Examination

Initially G51SWT had no exam associated with it. However, my concerns over the level of plagiarism, and the fact that the summative coursework did not adequately assess the software engineering aspects of the module, led to the introduction of an exam in my second year as module convener.

I designed the exam to have two sections. The first was a compulsory section and took the form of a comprehension question based on the sample answer to one of the formative exercises used during the module. This served two purposes: firstly, it was an added incentive to the students to attempt the formative exercises and, secondly, it was intended to act as a question for which they could easily revise and on which most students should be able to perform well. In reality they did very poorly on this question in the first year the exam was used, but rather better in the second year. This may be because exam revision is targeted more at the contents of previous exams (of which there were none in the first year) than at the content stated in lectures, or because the formative exercise chosen as the basis of the question was from an earlier (and therefore easier) segment of the module.

The remainder of the exam offers students a choice of four questions, of which they must attempt two for full marks. I attempt to maintain a fairly even split between "programming" based questions and more general software engineering questions in this section. While I have become more generous in the mark schemes I employ for these questions, I am still often surprised by which facets the students find easy and which they find hard.