LONDON SCHOOL OF ECONOMICS AND POLITICAL SCIENCE
TEACHING QUALITY ASSURANCE AND REVIEW OFFICE

Guidance notes for interpreting the results of the surveys of students' opinions of courses and teaching since 2009-2010

1 Background

The Teaching Quality Assurance and Review Office (TQARO) conducts surveys in both Michaelmas and Lent Terms to assess students' opinions of teaching. Two different types of teachers are included in the survey process:

- Permanent teachers
- Graduate teaching assistants (GTAs): non-permanent teachers, mainly hourly-paid graduate teachers but also including some temporary or visiting teachers.

Two different surveys are carried out in both Michaelmas and Lent Terms:

- Class/seminar survey: covers class/seminar teaching and various aspects of the course overall.
- Lecture survey: assesses lectures and is for permanent teachers only.

The Michaelmas Term survey covers GTAs who take classes in that term. It also covers permanent teachers who lecture and/or teach in that term (i.e. either on half-unit courses or on the first half of full-unit courses).

The Lent Term survey mainly covers permanent teachers who lecture and/or teach in that term, i.e. either on the second half of full-unit courses or on Lent Term half-unit courses. It also covers GTAs who teach only on Lent Term half-units, together with some GTAs who teach on full-unit courses, i.e. those who scored poorly in the Michaelmas Term survey.

All the surveys are paper-based. The procedure is the same for both Michaelmas and Lent Term surveys, with the main survey runs taking place in weeks 8-9 of the term.

2 Distribution of Results

Individual Teachers

Each GTA/teacher will receive a report that includes statistical results on various aspects of their teaching (and of the course, in the case of the class surveys).
These reports are normally emailed to individual teachers within a week or two of their completed questionnaires being returned to TQARO. The reports include written comments from students. These comments are not reviewed before being forwarded to the teacher: any teacher who receives comments that cause distress should contact the Head of TQARO (Mark Thomson, m.t.thomson@lse.ac.uk).

After all surveys are returned and scanned, each teacher will receive a second copy of the report. This second report is identical to the first, with one addition: it includes the average teaching scores for the department.

Heads of Departments

Quantitative results for every teacher are sent to the teacher's Head of Department. The written comments remain confidential to each teacher. A set of aggregate results, combining the quantitative scores of all courses within a department together with an aggregate for the School, is produced and sent to Heads of Departments.

School officers and committees

Quantitative results are available to the Director of the Teaching and Learning Centre and the Pro-Director for Teaching and Learning. They also receive aggregate results, showing quantitative scores for all courses within a department and an aggregate for the School.

Aggregate results across a rolling three-year period are considered annually by the Teaching, Learning and Assessment Committee (TLAC). Courses with consistently poor scores over a number of years are taken to TLAC for potential review and/or to assess whether any support is needed to improve them. Quantitative survey results are also considered by the Promotions Committee when making decisions in individual promotion review cases.

The following sections provide brief guidance notes for individual teachers, Heads of Departments and other senior members of the School who receive survey results.
A note on the timing of results: the reports to Departments and to Officers of the School involve aggregated results, so they are compiled after the results for all courses have been completed. These results are normally distributed within two weeks of the start of the term that follows the surveys.

3 Guidelines for Individual Teachers

The reports to individual teachers have different formats for the two surveys (the lecture survey and the class/seminar/course survey). The differences in format are described briefly below, followed by a section on frequently asked questions. If you have any queries, please refer to the frequently asked questions in the first instance. If your query is not answered there, please contact Richard Leppington on extension 6266 (email R.F.Leppington@lse.ac.uk). In all correspondence, please mention your name, department and any relevant course codes.

3.1 Format of Class/Seminar/Course Survey Reports for GTAs and Permanent Class Teachers

Each teacher receives an individual report about the students' assessments of her/his course and teaching. The layout is as follows:

Overall Indicators

Survey answers are rated on a scale of 1-5, with 1 as the "best" response and 5 as the "worst". The survey questions are grouped into two broad categories: "class/seminar teacher evaluation" (about the teacher) and "course evaluation" (about the course). The overall indicators in the first three lines of the report give an average score for each of these categories and a combined average:

a) Global Index: an (unweighted) average score across both the class/seminar teaching questions (Q1.1 – Q1.9) and the course evaluation questions (Q2.1 – Q2.8);
b) Teaching Evaluation: a mean score, averaged over all student responses and over all the teaching questions (Q1.1 – Q1.9);
c) Course Evaluation: a mean score across all students and across the course questions (Q2.1 – Q2.8).
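To make the three overall indicators concrete, the following is a minimal illustrative sketch of how they could be computed from raw responses. The data layout (one list of 1-5 responses per question, abstentions already removed) and the function names are assumptions for illustration; these notes do not describe TQARO's actual processing software, and the indicators here average the per-question means, which matches an "unweighted" average across questions.

```python
# Illustrative only: a hypothetical layout for survey responses.
# Scores run 1 (best) to 5 (worst); abstentions are assumed removed.

def mean(values):
    """Average of a list of numeric responses."""
    return sum(values) / len(values)

def overall_indicators(teaching, course):
    """teaching: dict mapping Q1.1-Q1.9 to lists of responses;
    course: dict mapping Q2.1-Q2.8 to lists of responses."""
    teaching_means = [mean(v) for v in teaching.values()]
    course_means = [mean(v) for v in course.values()]
    return {
        # Unweighted average over all question means, both categories
        "global_index": mean(teaching_means + course_means),
        "teaching_evaluation": mean(teaching_means),
        "course_evaluation": mean(course_means),
    }
```

For example, two teaching questions with means 1.5 and 3.0 plus one course question with mean 2.0 would give a Teaching Evaluation of 2.25, a Course Evaluation of 2.0, and a Global Index of about 2.17.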
Legend

The legend illustrates the graphics used in the report to indicate the range of marks, and shows how the averages (av), standard deviations (dev) and histograms are displayed, with information about the number (n) of responses and the number (ab) of abstentions.

Class/seminar teacher evaluation questions, Q1.1 – Q1.9

These questions lead to scores (averaged across all the responses) for particular attributes such as "spoken English", "clarity in presenting course material" and "helpful feedback". All questions are measured on a scale of 1-5, with 1 as the "best" score and 5 as the "worst".

November 2011

Course evaluation questions, Q2.1 – Q2.9

Almost all of these questions (except Q2.2 and Q2.9) show mean scores measured on a scale of 1-5, with 1 as the "best" score and 5 as the "worst". Questions Q2.2 (on hours of independent study) and Q2.9 ("would you recommend this course …?") display the results as histograms giving percentages for the various responses.

Your information (that is, information about the respondents): questions Q3.1 – Q3.2

Questions Q3.1-Q3.2 provide statistical data about (Q3.1) the categories of students attending the class/seminar (e.g. 3rd-year UG or Masters student) and (Q3.2) the reasons for taking the course. In both cases the results are given as percentages.

Profile line

The profile line at the end of the report displays the results of the teacher evaluation questions and the course evaluation questions in a single diagram. The second version of the report (sent out after the survey has closed and all Departmental results have been processed) shows your scores alongside the comparative scores for the department overall.

Written comments

Written comments (if any) are shown at the end of the teacher's individual report and are confidential to the teacher concerned.
3.2 Format of Lecture Survey Reports for Permanent Teachers

Each (permanent) teacher receives an individual report about the students' assessments of her/his lecturing. The layout is as follows.

Legend

The legend illustrates the graphics used in the report to indicate the range of marks, and shows how the averages (av), standard deviations (dev) and histograms are displayed, with information about the number (n) of responses and the number (ab) of abstentions.

Lecture questions Q1.1 – Q1.11

These questions lead to scores (averaged across all the responses) for various lecturing attributes, such as "audibility", "structure and organisation of lectures" and "overall, how would you rate the performance of this lecturer?" Almost all of the questions (except Q1.7) are measured on a scale of 1-5, with 1 as the "best" score and 5 as the "worst". Question Q1.7 (on the pace of the lectures) is on a scale of 1-3 and has "best" score 2 (with score 1 meaning "too fast" and 3 meaning "too slow").

Your information (that is, information about the respondents): questions Q2.1 and Q2.3

Questions Q2.1 and Q2.3 provide statistical data about (Q2.1) the categories of students attending the lecture (e.g. 3rd-year UG or Masters student) and (Q2.3) the reasons for taking the course. In both cases the results are given as percentages.

Profile line

This simply displays the results of the lecture evaluation questions (Q1.1-Q1.11) in a single diagram. The second version of the report shows your scores alongside the comparative scores for the department overall.

Individual comments

Qualitative comments (if any) are shown at the end of the teacher's individual report and are confidential to the teacher concerned.
4 Guidelines for Heads of Departments

These guidelines can be used by Heads of Departments and other relevant senior officers of the School (such as the Director of the Teaching and Learning Centre and the Pro-Director for Teaching and Learning) who have access to some of the outcomes of the surveys.

At its meeting in June 2010, the Academic Board agreed a number of changes to how survey information is used. These changes placed greater responsibility on Heads of Departments to put development and support arrangements in place where the teaching performance of permanent staff is poor, and also to recognise good teaching. The proposals to which the Academic Board agreed were:

- There has to be some recognition of good teaching and/or improved teaching scores. Heads of Department could write to members of their departments with high scores to congratulate and thank them, noting in particular staff whose scores have improved and high scores on 'difficult' courses, such as compulsory courses involving students from different disciplines and backgrounds. Good and improved teaching can be put forward as part of the case for promotion and for increments.
- There will have to be a 'score' to identify poor teaching. This would be pitched to identify c. 50 staff with poor scores in any given year [i.e. across the School]. We would have to make it clear to staff that this score, or number, did not mean anything in itself but was simply being used to flag a potential problem which might need to be looked into in more detail.
- An action plan would be agreed between the Head of Department and the Pro-Director (T&L), depending on the seniority and particular circumstances of the staff member. This should include some support and training, which could be one-to-one support from the TLC, language training, or mentoring from someone in the department.
It may also include coordinated action by the Head of Department and the Pro-Director (T&L), but that would depend on individual circumstances and the extent to which the Head of Department felt that this could be handled within the department.

- If the scores do not improve over the period of the action plan, or if the member of staff is unwilling to accept support or guidance, then she or he cannot be put forward for major review, for promotion or for any increment until there are signs of improvement.

It is also proposed that the teaching scores (i.e. the results from the section of the class/seminar questionnaire on 'teaching') should be made available to the mentors of permanent staff and to the Chairs of Departmental Teaching Committees where the Chair is not the Head of Department.

If you have queries, please refer to the Frequently Asked Questions in the first instance. If you have further questions, please contact Richard Leppington on extension 6266 (email R.F.Leppington@lse.ac.uk). In all correspondence, please mention your name, department and any relevant course codes.

Data provided to Heads of Departments

There are two surveys in each of the Michaelmas and Lent Terms:

- Lecture Survey: for permanent teachers only
- Class/Seminar and Courses Survey: for GTAs and permanent teachers.

Soon after the completion of each of these survey runs, Heads of Department are sent (via email) the following files:

- Spreadsheets containing key scores by teacher and by course
- PDF reports showing overall scores/averages for your Department
- PDF reports showing overall scores/averages for the School.

The pdf files give aggregated results for every question on the questionnaires. The standard spreadsheets give summary results for a limited set of key questions, with scores for each teacher and for each course. The spreadsheets also contain the complete raw data, allowing departments to easily produce scores for all the other survey questions.
More details about the spreadsheets and pdf files are given below.

HoDs also have access to individual pdf reports (excluding written comments) for each teacher in their department, in a restricted-access public folder. You can navigate to these reports in the public folders via: All Public Folders – Admin – Academic Registrar – Student Surveys – <your department>. In the folder you will find an email for each teacher, listed by name, with the individual's reports attached to the email. Please note that the folders also contain some past survey results: check the email subject for the teacher and term you require. Because of the confidential nature of the reports, the folders are restricted to Heads of Department and TQARO staff only. If you require administrative staff (e.g. a Departmental Manager) to be given access, please request this by email.

Explanatory notes on the data files sent to HoDs

1. The Lecture survey spreadsheet

The Lecture survey spreadsheet gives the detailed results for three key questions in the "Lecture Survey". There are three separate worksheets (accessed as usual by the tabs at the bottom of the spreadsheet):

(i) Worksheet "Results-By-Teacher": results for each teacher (with separate lines for those who give more than one course);
(ii) Worksheet "Results-By-Course": results for each course;
(iii) Worksheet "Amended Data": the raw data with "non-applicable" responses removed. This allows the user to modify the various pivot tables to get results for any of the questions on the questionnaire, in a variety of formats.

2. The Class survey spreadsheet

The Class survey spreadsheet gives the detailed results for several key questions in the "Class/Seminar and Course Survey".
There are three worksheets (accessed by the named tabs at the bottom of the spreadsheet):

(i) Worksheet "Course Evaluation": results for five key questions on each course;
(ii) Worksheet "Class Teacher Evaluation": results for three key questions for each teacher (with separate lines for those who give more than one course);
(iii) Worksheet "Amended Data": the raw data with "non-applicable" responses removed.

Note that in worksheets (i) and (ii) you can use a drop-down menu button to see results for GTAs or permanent teachers separately, or for both combined; again, worksheet (iii) provides the raw data that allows the user to modify the various pivot tables to get results for any of the questions. Please also note that explanatory notes are given in the first few lines of each worksheet: these describe the particular sets of questions for which the scores are presented.

3. The pdf reports

There are four pdf reports that give detailed aggregated results for the whole department (not for each course or each teacher), showing the results for all the questions; the formats are the same as those of the reports sent to individual teachers:

- One report for the "Lecture Survey"
- Three reports for the "Class/Seminar Survey": one for GTAs, one for permanent teachers, and one for both teaching types combined.

For comparison purposes, there are four similar reports, with the same formats, that show the corresponding aggregated results for the whole School. All aggregated reports are available on the TQARO website.

Layout of pdf report on lecture survey

Each (permanent) teacher receives an individual report about the students' assessments of her/his course and teaching. A related report, described here, provides this information averaged over all the (permanent) teachers in a department (or in the School overall). The layout is as follows.
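The raw-data worksheet described above supports pivot-table analysis within the spreadsheet itself. As a purely illustrative sketch of the same kind of aggregation (not TQARO's actual tooling), the following shows how raw response rows could be averaged per teacher using pandas; the column names and data are invented for the example.

```python
# Hypothetical example: averaging raw survey rows per teacher, mirroring
# the kind of pivot table the "Amended Data" worksheet supports.
# Column names and values are illustrative only.
import pandas as pd

raw = pd.DataFrame(
    {
        "teacher": ["Smith", "Smith", "Jones", "Jones"],
        "course": ["EC101", "EC101", "EC102", "EC102"],
        "Q1.1": [1, 2, 3, 1],  # 1 = best, 5 = worst
        "Q1.2": [2, 2, 4, 2],
    }
)

# Equivalent of a "Results-By-Teacher" pivot: mean score per question
by_teacher = raw.pivot_table(index="teacher", values=["Q1.1", "Q1.2"], aggfunc="mean")
print(by_teacher)
```

Changing `index="teacher"` to `index="course"` gives the per-course view; filtering the rows first (e.g. by teacher type) reproduces the drop-down behaviour described above.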
Legend

This simply illustrates the graphics used in the report to indicate the range of marks, and shows how the averages (av), standard deviations (dev) and histograms are displayed, with information about the number (n) of responses and the number (ab) of abstentions.

Lecture questions Q1.1 – Q1.11

These questions lead to scores (averaged across the whole department) for particular course attributes. Almost all of the questions (except Q1.7) are measured on a scale of 1-5, with 1 as the "best" score and 5 as the "worst". Question Q1.7 (on the pace of the lectures) is on a scale of 1-3 and has "best" score 2 (with score 1 meaning "too fast" and 3 meaning "too slow").

Student information questions Q2.1 – Q2.2

Questions Q2.1-Q2.2 provide statistical data about (Q2.1) the category of students attending the lecture (e.g. 3rd-year UG or Masters student) and (Q2.2) the reasons for taking the course. In both cases the results are given as percentages.

Profile

This simply displays the results of the lecture questions (Q1.1-Q1.11) in a single diagram.

Layout of pdf report on class/seminar survey

Each teacher receives an individual report about the students' assessments of her/his course and teaching. A related report, described here, provides this information averaged over all the teachers in a department (or in the School overall). The layout is as follows.

Legend

This simply illustrates the graphics used in the report to indicate the range of marks, and shows how the averages (av), standard deviations (dev) and histograms are displayed, with information about the number (n) of responses and the number (ab) of abstentions.

Class/seminar teacher evaluation questions Q1.1 – Q1.9

These questions lead to scores (averaged across the whole department) for particular course attributes. All questions are measured on a scale of 1-5, with 1 as the "best" score and 5 as the "worst".
Course evaluation questions Q2.1 – Q2.9

Almost all of these questions (except Q2.2 and Q2.9) show mean scores measured on a scale of 1-5, with 1 as the "best" score and 5 as the "worst". Questions Q2.2 (on hours of independent study) and Q2.9 ("would you recommend this course …?") display the results as histograms showing percentages for the various responses.

Student information questions Q3.1 – Q3.2

Questions Q3.1-Q3.2 provide statistical data about (Q3.1) the category of students attending the class/seminar (e.g. 3rd-year UG or Masters student) and (Q3.2) the reasons for taking the course. In both cases the results are given as percentages.

Profile

This simply displays the results of the teacher evaluation questions and the course evaluation questions in a single diagram.