Louise Dennis: Teaching Portfolio

Evaluating your practice and personal development

My practice is evaluated through a number of processes.

SET/SEM

Summaries of my SET/SEM performance can be found here. It is difficult to draw hard conclusions from the scores since the module assessed in 2003 is of a more mathematical nature than that assessed in 2004, and such modules tend to be unpopular with students. However, the two modules are both compulsory first-year, second-semester modules, so, with a few exceptions for Joint Honours students and the like, the respondents were exposed to my teaching on both modules at the same time. With caution, therefore, the scores can be used to draw some tentative conclusions about the development of my teaching practice.

There is an improvement in nearly all categories from year to year, which is encouraging, though not unexpected given the differing natures of the two modules. One continuing area of concern is my ability to engage and retain students' interest. I am currently working on bringing more discussion of the wider applications of the techniques into my lectures, in the hope that this will help students appreciate their importance and so engage their interest.

There is also an issue surrounding library resources. I was surprised to learn that responses to this question are not passed back to the library, which makes its inclusion on the form seem a little pointless. As far as I am aware, the issues students have with library resources stem almost entirely from the formula the library uses to determine how many textbooks to order for a module.

Module Feedback

Alongside SET and SEM, the School solicits verbal module feedback via a separate member of staff, who collects it while the students fill in their SET and SEM forms and then reports back to the module convener. In general this adds little to the data from the forms, although students have particularly praised the level of feedback given for G53DDB and the e-Learning environment used for G51SWT. On the whole, I think the students are insufficiently at ease with members of staff to give really candid feedback in this fashion.

Peer Review

The School operates a policy of peer review whereby members of staff sit in on and observe one another's teaching on an annual basis. Feedback received in this way has generally been positive. The process often picks up minor issues, which can be useful, although these tend to be quite specific to the module being observed.

Through the PGCHE I have also had the opportunity for further peer review, both by colleagues in my Learning Set and by external academics. So far I have received one set of external feedback in this fashion, for the G53DDB module, which was selected in consultation with my assessor because of my unhappiness with the way the module was progressing. On the whole this was a positive experience: while the assessor confirmed many of my concerns about the module and its content, he also reassured me that the problems were not as severe as I had thought.

Student Contact

Students periodically contact me about my teaching, whether in person, by email, or through module noticeboards. While it is difficult to draw general conclusions from such contact, it can be very useful for detecting and correcting immediate problems, such as the font size used when programming during lectures.

SSCC

In theory the Staff Student Consultative Committee would be an ideal forum for gaining detailed feedback from students. In practice, however, I have found the issues raised through this committee of little use. I have received a number of contacts about my modules from the committee, but the issues have always concerned either problems of which I was already aware or complaints that were provably incorrect. My only available response in either case has been to try to improve the documentation and communication channels associated with my modules. My general impression, however, is that the issues brought to this meeting concern a minority of students who do not really pay attention to such information, so I am relatively pessimistic about the effectiveness of these measures.

Exam Performance

I find exam performance a very useful mechanism for evaluating practice, not so much in terms of the overall marks students achieve (though obviously that is a consideration) as in observing which parts of the exam they perform well on and which badly. This helps me focus attention on particular aspects of modules for revision, and to evaluate any experiments with teaching methods. For instance, the poor performance of G51SWT students on those parts of the module that had been taught by example only convinced me that this method (which was popular with students, judging by SET/SEM comments and module feedback) needed to be backed up with more traditional delivery of material.

Coursework Performance

As with exam performance, students' performance on coursework is useful. In this case I find its primary usefulness is in detecting assumptions I may have made about students' previous experience. For instance, it is easy to forget that Computer Science students gain relatively little experience of technical writing during their degree; they therefore need help when writing project dissertations for G52GRP and G53IDS, or they may fall into an inappropriately conversational (often diary-like) style.