‘Routledge (58) argues…’ A quasi-experimental evaluation of different formats to teach students how to reference

Katja Sarmiento-Mirwaldt
Department of Politics, History and the Brunel Law School
Brunel University London
Kingston Lane, Uxbridge, Middlesex UB8 3PH, United Kingdom
Email: katja.sarmiento-mirwaldt@brunel.ac.uk
Twitter: @SarmientoMir

Keywords: referencing, citation, quasi-experiment, teaching methods

Abstract

Referencing academic and other sources is a key study skill, but very little research has been dedicated to the effectiveness of different methods of teaching students how to reference. This research seeks to determine which teaching methods are most effective in helping politics students to learn how to reference correctly. The research design is quasi-experimental: students’ ability to cite academic sources is mapped onto the same students’ attendance at different learning activities dedicated to referencing. The analysis indicates that lecture-based and online methods, as well as group work in seminars, have no impact on students’ ability to reference. Whilst one has to bear in mind the limited internal validity of quasi-experiments, this research suggests that attending a one-to-one tutorial on referencing has a positive impact on students’ ability to reference.

‘Routledge (58) argues…’ is a reference taken from a student assignment. The student referenced the publisher instead of the author and the page number instead of the year of publication. This may be a particularly glaring example, but many others could be named that indicate just how much students struggle to reference the academic, and indeed any, literature. Many undergraduates omit important pieces of information (like journal titles or page numbers), include irrelevant ones (like the URL of an electronic journal), and confuse the Harvard and footnote systems of referencing.
As an area of particular concern, students sometimes come dangerously close to plagiarising when citing other people’s words and ideas. Most students first encounter the need for engaging with the academic literature, and for referencing it properly, at university. Referencing is a vital academic skill for any student at undergraduate or postgraduate level. In order to use academic sources in their essays and assignments, students need to be able to place references in the text or footnotes in the appropriate places and to list their sources correctly in a bibliography. More broadly, a sound referencing routine is indicative of a deeper understanding of the collaborative nature of academic knowledge creation (Hendricks and Quinn, 2000). This research project arose out of the recognition that it is not enough to simply give students a set of referencing guidelines to follow. Underlying the research was an effort to integrate citation of the academic literature into a compulsory first-year Politics course. The aim was to equip students with the referencing skills that they would need throughout their academic careers. Integrating different teaching formats devoted to referencing into a compulsory course afforded an opportunity to compare the effectiveness of these formats in helping students to achieve the learning outcome, or their performance on a scale that measures their ability to reference sources correctly.

Formats for teaching students how to reference

Several studies on students’ citation behaviour have focused on academic honesty (e.g. Pecorari, 2003; Park et al., 2011) or the broader reasons for helping students to acknowledge other people’s ideas (Hendricks and Quinn, 2000). Others have measured students’ accuracy in the simpler task of referencing (Clarke and Oppenheim, 2006; Gadd et al., 2010). However, there have so far been few empirical investigations of methods specifically devoted to teaching students how to reference.
A general distinction is commonly made between several different teaching formats. Lectures are the first such format. They are often criticised as a mere ‘transmit-receive occasion’ that does little to help students to engage with a topic (Race, 2007: 121). Indeed, research has consistently shown that lectures are one of the least effective teaching methods in helping students to reach the desired learning outcomes (Gibbs, 1995). And yet, they are often a necessary component in teaching classes with 50+ students. Moreover, some scholars have argued that well-delivered lectures can be useful in promoting student interest and improving memorisation of certain fundamental aspects of a topic (Frederick, 1986; Bligh, 2000; Omelicheva and Avdeyeva, 2008). Lectures stand in contrast with approaches that encourage ‘active learning’. These are characterised by high student involvement through discussion, debate, simulations, writing research projects and other activities that promote an active and critical engagement with the substantive content of a course (Ishiyama, 2013). For this reason, small groups are sometimes seen as a better forum to promote active learning than large lectures (e.g. Pollock et al., 2011). Thus, students tend to process information better if they discuss it in small groups such as seminars that offer a chance to engage students in purposeful activities with each other (Newble and Cannon, 2000; Exley and Dennick, 2004; Race, 2007). One-to-one tutorials are another time-honoured teaching format. Though this ‘Oxbridge model’ is not used widely at most universities (Biggs, 1999), some students do seek advice from their teachers during office hours, often related to specific questions about an aspect of their topic or of an assignment. This can help teachers to tailor learning activities around students’ varying needs (Duckett et al., 2006). Finally, online teaching is an increasingly common format.
Students can use online methods in their own time and control the speed of an activity themselves (Lockyer et al., 1999; Wilson et al., 2006). A range of web-based activities are available for teachers to choose from, and these are easily integrated with virtual learning platforms such as Blackboard Learn (Lynch, 2002). Online quizzes are a particularly useful tool to allow students to check on their own progress (Klass and Crothers, 2000). If, upon completing such a quiz, they are subsequently given detailed feedback, they will be able to identify any possible problem areas that they still need to work on (Kear, 2011).

Research design and methods

In this research project, the effect of lectures, seminars, one-to-one tutorials and an online quiz on Politics students’ ability to reference was tested. This was done in 2013 as part of a course on Political Science Methods that is compulsory for all first-year Politics students at Brunel University. The course has several learning outcomes, including understanding the uses of empirical enquiry and the difference between qualitative and quantitative methods of data collection and analysis. However, given that none of the students claimed to have any prior knowledge of referencing at the start of the course, and in the absence of a dedicated study skills session, particular emphasis was placed on using – and referencing – the academic literature in Term 1. The following elements were dedicated to teaching students how to reference:

1) A traditional lecture on the literature review: the lecture covered the concept of ‘academic literature’ and the purposes and features of a literature review. The lecture also included some advice on when references are necessary and when they are not. Finally, a substantial part of the lecture was devoted to the question of how to reference academic sources correctly and attribute other people’s ideas and words to their source.
2) An additional guest lecture given by the subject librarian: this lecture introduced tools such as the library catalogue or Google Scholar and suggested ways of phrasing and linking search terms. The lecture also offered advice on selecting the most reliable and useful sources. Finally, it contained a short reminder of the correct way to reference academic sources.

3) An interactive seminar: in this seminar, students were given a short journal article that described the results of an empirical study on women MPs. Students were divided into small groups and discussed among themselves the function of the literature review in the article and how academic sources were cited in it.

4) Tutorials: throughout the course, one-to-one sessions with the course leader or with the departmental academic excellence tutor (whose job it is to teach special study skills sessions on request) were also offered to students, and several of them made use of the opportunity to attend a tutorial on how to reference.

5) A quiz: an online test on when and how to reference academic sources and on how to avoid accidental plagiarism was made available to students on Blackboard Learn (BBL). Students could take the test as many times as they wanted. The test also gave them feedback on their answers.

To be sure, it is not just the teaching format that is likely to have an impact on student learning but also the question of how well the different elements are taught. The fact that – with the exception of the guest lecture and some of the tutorials – all the elements were taught by the author should ensure a degree of consistency. More generally, staff at Brunel University are constantly involved in efforts to improve their teaching practice through training and exchanges of best practice. In this particular course, too, every effort was made to ensure high-quality teaching for each of the different formats.
A quasi-experiment was chosen as the most appropriate way of answering the question of which teaching methods are most effective in helping students to learn how to reference correctly. Quasi-experiments, like traditional experiments and natural experiments, test the effect of different interventions on a treatment group as compared to a control group. Unlike in traditional experiments, assignment to these groups is non-random (Rossi et al., 2004; Dunning, 2012). Instead, student attendance at different teaching events is used as the mechanism for selection. Attendance at teaching events at many universities is optional but recommended, and it is a well-known fact that not all students attend all teaching events (e.g. Stanca, 2006). Thus, attendance at the three classes or making use of a tutorial or the online test was defined as the experimental treatment. The information was collated in a spreadsheet, where each student (identified by student number to ensure anonymity) was assigned a 1 if he or she attended a particular event and a 0 if he or she did not attend. There are both practical and ethical reasons why a quasi-experiment was chosen over a traditional experiment, even though traditional experiments have greater internal validity (Thyer, 2012). One reason is practical: it is simply not feasible to randomly assign students to different treatments and tell those that were not selected for the lecture or the quiz, for example, that they cannot make use of these resources that are available to other students. There may also be ethical problems with random assignment to a control group and an experimental group: if certain teaching formats or methods are expected to better promote student learning, depriving a group of students of these would violate ethical research principles.
Assigning students randomly to a control group would potentially deprive willing students of the best opportunity to acquire a key academic skill, which could also affect their performance in future assignments. There are some limitations to quasi-experiments that weaken the internal validity of findings. In particular, they pose the problem of self-selection bias: it seems probable that highly motivated students or high achievers are more likely to attend more teaching events and that these students are also more likely to perform well in their assignments. However, self-selection bias can be mitigated to some extent by controlling for general academic performance in the analysis. Here, students’ university admissions results in UCAS tariff points were included as such a control variable.i The outcome variable required a measure of students’ ability to reference correctly or broadly correctly. Different institutions, and indeed different members of the same department, follow different styles, but there is a basic consensus on some fundamentals, such as the need to include page numbers for direct quotations or to include information such as the city and publisher when referencing books. For the purposes of this research, the dependent variable was constructed on the basis of the first assignment that students on the course had to complete. This included a literature review on one of three topics: age and vote, class and vote, or sex and vote in British elections. Students had to locate appropriate academic literature, identify commonalities and differences between different sources and reference these sources correctly. Each literature review was assigned a score from 0 (no evidence that the student is able to reference the academic literature) to 25 (perfect ability to reference).
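In code, the design described above amounts to regressing each student’s 0–25 referencing score on the five attendance dummies plus the admissions control. The following is a purely illustrative sketch with fabricated records; none of the numbers or variable names come from the study’s actual data set:

```python
import numpy as np

# Fabricated illustrative records, one per student. Columns: lecture, guest
# lecture, seminar, tutorial, BBL test (1 = attended/used, 0 = did not),
# UCAS tariff points, and the 0-25 referencing score.
rows = [
    (1, 0, 1, 0, 0, 280, 14),
    (1, 1, 1, 0, 1, 320, 18),
    (0, 0, 0, 0, 0, 240,  3),
    (1, 0, 1, 1, 0, 260, 21),
    (1, 1, 1, 0, 0, 300, 15),
    (0, 1, 0, 0, 1, 290, 10),
    (1, 1, 1, 1, 1, 340, 24),
    (1, 0, 0, 0, 0, 220,  8),
    (0, 0, 1, 0, 1, 310, 12),
    (1, 1, 1, 0, 0, 265, 17),
]
data = np.array(rows, dtype=float)

# Design matrix: intercept column plus the five dummies and the UCAS control.
X = np.column_stack([np.ones(len(data)), data[:, :6]])
y = data[:, 6]

# Ordinary least squares:
# score = b0 + b1*lecture + ... + b5*bbl_test + b6*ucas
coefs, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(
    ["constant", "lecture", "guest", "seminar", "tutorial", "bbl_test", "ucas"],
    np.round(coefs, 2),
)))
```

With the real data the same layout yields the coefficients reported in Table 2; the sketch only shows how the dummy coding and the control variable enter the model.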
Five dimensions of referencing skills were assessed, with five points available for each: range of reading; bibliography; references in the text; consistency of referencing; and when to include a reference (see the appendix for details). Students’ choice of referencing system was not taken into account, nor was their choice of how to format the bibliography, as long as it contained all the required information. Scores were assigned to each student by student number. To prevent any possible marker bias arising from this process, scores were assigned after, and independently of, the actual marking process of the entire batch of assignments.

Findings

Students’ scores on the 25-point scale were regressed on their attendance at different events or their use of the online test, controlling for prior academic achievement. First of all, it is important to note that, at 74, the number of first-year Politics students was small. Table 1 indicates that attendance was strong only at the lecture and seminar on the literature review, with respectively 80% and 74% of students attending these. Conversely, less than half the students attended the guest lecture or took the online test. Six students sought the advice of their teacher or the academic excellence tutor.

Table 1 about here

There was a small but statistically significant correlation between attendance at the seminar and the lecture (0.24) as well as the seminar and the guest lecture (0.44). In other words, those students who missed the seminar were also more likely to have missed the lecture and, above all, the guest lecture. Chart 1 displays the distribution of scores on the 25-point scale that measures students’ ability to reference. The chart shows that no student achieved a perfect score, and nine students received no points at all because their assignments contained no references.
However, 15 students received fairly respectable scores of over 20, and the mean was just under 15, with a standard deviation of 6.5.

Chart 1 about here

Table 2 presents the results of the regression analysis. The results are somewhat counterintuitive. Above all, the table indicates that attending the lecture had a negative effect on students’ ability to reference. However, this result, like most of the others, is not statistically significant at the 5% level. In fact, the only significant independent variable is ‘tutorial’. In other words, students who sought the help of their teacher or of the academic excellence tutor would on average score 5.7 points more on the ‘Ability to reference’ scale than students who did not.

Table 2 about here

The positive effect of the tutorials is particularly noteworthy because, in five out of the six cases, these tutorials were given by the departmental academic excellence tutor. Funding for this post has since run out, but these results suggest, at the very least, that such a tutor is a valuable resource to several students.

Discussion

The data presented here shows that one-to-one tutorials were effective in teaching students how to reference, while lectures, student interaction in seminars or an online test were not. This again raises the question of self-selection. Only six students sought the help of their teacher or academic excellence tutor, which may suggest that these were also particularly motivated students. However, as prior academic performance has been included as a control variable, it is safe to say that these students were not overachievers compared to the rest. To shed more light on this puzzle, those students who had made use of a one-to-one tutorial – and several of the students who did not – were asked about their experience during the feedback sessions for student assignments.
Most of them found the advice they were given ‘quite useful’ or ‘very/highly useful.’ In particular, they appreciated the fact that, unlike lectures or seminars, a tutorial could be arranged for when they needed the advice, that is to say, when they were actually working on their assignment. Moreover, they valued the responses they received to their specific questions. For example, one student appreciated being taught about the use of ‘Ibid.’ and ‘how to maintain consistency on referencing as well as writing an appropriate bibliography.’ Another appreciated an explanation of the difference between Endnote and Harvard styles of referencing. At the same time, several students who did not make use of a tutorial – and one who did – commented on referencing being ‘a lot to take in’ or ‘difficult to digest’. These students expressed the desire to spend even more time on referencing in class. One student had found the experience of going through his own bibliography with his teacher citation-by-citation very useful, as his teacher was able to point out what information was missing and what was superfluous. It took this student several attempts to identify the type of reference (article, chapter, book) and to extrapolate to other citations what information was required, but in the end he managed it quite well. This trial-and-error approach also raises the question of how much students learned from the written comments on their textual references and bibliographies. It is possible that they need to go through a few iterations before they get it right. The question of how feedback influences student learning of referencing would make for an interesting subject of further investigation. On a more general note, given all the effort put into teaching first-year Politics students how to reference, the results are mildly encouraging. The mean of 14.5 on the 25-point scale that measures students’ ability to reference lies above the middle score of 12.5. 
Moreover, if one leaves out the students who did not even attempt to reference in their assignment, and who consequently received a score of 0, the mean rises to 16.5. Over 20% of students received 20 points or more, and over 70% received more than the middle score on the 25-point referencing scale. If nothing else, this suggests that a majority of students, who had no prior knowledge of referencing, managed to at least learn the basics. Such a basic understanding is a good starting point from which to continue to polish their referencing practice in their second year at university.

About the author

Katja Sarmiento-Mirwaldt is a Lecturer in Politics in the Department of Politics, History and the Brunel Law School at Brunel University. Her research interests include political geography, corruption perceptions, and teaching and learning in the discipline of politics. Email: katja.sarmiento-mirwaldt@brunel.ac.uk; Twitter: @SarmientoMir

References

Biggs, J.B. (1999) Teaching for Quality Learning at University, Buckingham, Open University Press.
Bligh, D.A. (2000) What’s the use of lectures?, San Francisco, Jossey-Bass.
Clarke, M.E. and Oppenheim, C. (2006) ‘Citation Behaviour of Information Science Students II: Postgraduate Students’, Education for Information 24(1), pp. 1-30.
Duckett, I. and Jones, C. with Hardman, J. and O’Toole, G. (2006) Personalised Learning: Meeting Individual Learner Needs, London, Learning and Skills Network.
Dunning, T. (2012) Natural Experiments in the Social Sciences: A Design-Based Approach, Cambridge, Cambridge University Press.
Exley, K. and Dennick, R. (2004) Small Group Teaching: Tutorials, Seminars and Beyond, London, Routledge.
Frederick, P.J. (1986) ‘The lively lecture—8 variations’, College Teaching 34(2), pp. 43-50.
Gadd, E., Baldwin, A. and Norris, M. (2010) ‘The citation behaviour of Civil Engineering students’, Journal of Information Literacy 4(2), pp. 37-49.
Gibbs, G. (1995) ‘Research into Student Learning’, in B. Smith and S.
Brown (eds.) Research, Teaching and Learning in Higher Education, London, Kogan Page, pp. 19-29.
Hendricks, M. and Quinn, L. (2000) ‘Teaching Referencing as an Introduction to Epistemological Empowerment’, Teaching in Higher Education 5(4), pp. 447-457.
Ishiyama, J. (2013) ‘Frequently used Active Learning Techniques and Their Impact: a Critical Review of Existing Journal Literature in the United States’, European Political Science 12(1), pp. 116-126.
Kear, K. (2011) Online and Social Networking Communities: A Best Practice Guide for Educators, Abingdon, Routledge.
Klass, G. and Crothers, L. (2000) ‘An experimental evaluation of Web-based tutorial quizzes’, Social Science Computer Review 18(4), pp. 508-515.
Lockyer, L., Patterson, J. and Harper, B. (1999) ‘Measuring effectiveness of health education in a web-based learning environment: A preliminary report’, Higher Education Research and Development 18(2), pp. 233-246.
Lynch, M.M. (2002) The online educator: A guide to creating the virtual classroom, London, Routledge.
Newble, D. and Cannon, R. (2000) A Handbook for Teachers in Universities & Colleges, 4th ed., New York, Kogan Page.
Omelicheva, M.Y. and Avdeyeva, O. (2008) ‘Teaching with lecture or debate? Testing the effectiveness of traditional versus active learning methods of instruction’, PS: Political Science & Politics 41(3), pp. 603-607.
Park, S., Mardis, L.A. and Ury, C.J. (2011) ‘I’ve lost my identity – oh, there it is… in a style manual: teaching citation styles and academic honesty’, Reference Services Review 39(1), pp. 42-57.
Pecorari, D. (2003) ‘Good and original: Plagiarism and patchwriting in academic second-language writing’, Journal of Second Language Writing 12(4), pp. 317-345.
Pollock, P.H., Hamann, K. and Wilson, B.M. (2011) ‘Learning through discussions: Comparing the benefits of small-group and large-class settings’, Journal of Political Science Education 7(1), pp. 48-64.
Race, P.
(2007) The Lecturer’s Toolkit: A Practical Guide to Assessment, Learning and Teaching, 3rd ed., London, Routledge.
Rossi, P.H., Lipsey, M.W. and Freeman, H.E. (2004) Evaluation: A Systematic Approach, 7th ed., Thousand Oaks, Sage.
Stanca, L. (2006) ‘The Effects of Attendance on Academic Performance: Panel Data Evidence for Introductory Microeconomics’, The Journal of Economic Education 37(3), pp. 251-266.
Thyer, B.A. (2012) Quasi-Experimental Research Designs, Oxford, Oxford University Press.
Wilson, B.M., Pollock, P.H. and Hamann, K. (2006) ‘Partial Online Instruction and Gender-based Differences in Learning: A Quasi-Experimental Study of American Government’, PS: Political Science & Politics 39(2), pp. 335-339.

Chart 1 source: Own ‘Ability to reference’ data set; own calculation.

Table 1: Frequency of attendance at different events

                    Students attended (%)    Did not attend (%)
Lecture             59 (80)                  15 (20)
Guest lecture       31 (42)                  43 (58)
Seminar             55 (74)                  19 (26)
Tutorial            6 (8)                    68 (92)
BBL Test (ii)       21 (29)                  52 (71)

Source: Own ‘Ability to reference’ data set; own calculation.

Table 2: Influences on students’ ability to cite

                    B         SE
C                   10.38     7.10
A-Level             0.01      0.02
Lecture             -1.45     1.97
Guest lecture       2.39      1.71
Seminar             0.40      1.95
Tutorial            5.71*     2.78
BBL Test (iii)      0.25      2.16
Adj. R²             0.03
N                   74

Source: Own ‘Ability to reference’ data set; own calculation. Note: * = p < 0.05.

Appendix

Score sheet to permit consistent scoring of student assignments.

1. Range of reading
Students were instructed that they needed to review 4-5 sources of academic literature as part of their assignment. Five points were given if they referenced, and showed evidence of having read, four academic sources or more. Points were deducted for fewer sources or if more non-academic sources (web pages, newspapers, blogs etc.) than academic sources were used.

2.
Bibliography
Five points were given for bibliographies that contained all the required information (author, year, title, edition if 2nd or later, city of publication, publisher for books; author, year, title, journal title, volume, issue, page numbers for articles; etc.) for most of the references. Points were deducted if superfluous information was included (such as the URL or ISBN number, the publisher for journals or the page numbers for books). Points were also deducted if the bibliography was not alphabetised.

3. References in the text
Five points were given if references in the text contained all the necessary information. Points were deducted if information was missing (such as the year of publication for the Harvard system or any bibliographic information for the footnote system). Points were likewise deducted if footnotes contained superfluous information such as the ISBN number, or if space was wasted by repeating all the information in footnotes for repeat citations. Finally, points were deducted if references were cited in the text but missing from the bibliography.

4. Consistency
Five points were given if the references in the bibliography and the text were consistent, even if this meant that they were consistently missing information or including superfluous information. Points were deducted if there were inconsistencies in the references in the text or in the bibliography. Moreover, students lost points if they mixed the footnote and Harvard systems of referencing.

5. When to reference
Five points were given if students referenced all ideas and words they had taken from academic sources correctly and thus avoided deliberate or accidental plagiarism. Points were deducted if they did not provide page numbers for direct quotes. Furthermore, students lost points for including too many quotes and noticeably too many or too few citations.

For each of these areas, students could get a maximum of 5 points, so a total of 25 was possible.
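The rubric above can be expressed, purely for illustration, as a small scoring helper. All names here are assumed; the actual scoring was done by hand on the score sheet:

```python
# Illustrative helper for the five-dimension rubric described above.
DIMENSIONS = (
    "range of reading",
    "bibliography",
    "references in the text",
    "consistency",
    "when to reference",
)

def total_score(dimension_scores):
    """Sum five 0-5 dimension scores into the overall 0-25 referencing score."""
    if len(dimension_scores) != len(DIMENSIONS):
        raise ValueError("expected one score per dimension")
    if any(not 0 <= s <= 5 for s in dimension_scores):
        raise ValueError("each dimension is scored from 0 to 5")
    return sum(dimension_scores)

# e.g. a missing bibliography scores 0 on that dimension:
print(total_score([4, 0, 3, 5, 2]))  # 14
```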
In general, the worse a student performed on any one measure, the more points he or she would lose. In some extreme cases, e.g. where a bibliography was missing altogether, a student could get no points on that particular measure.

Notes

i The resulting scale ranges from 180 to 400. Where students had qualifications other than A-Levels that could not easily be converted into UCAS tariff points, these were set at the mean of 265.
ii This group was made up of ten students who completed the test plus eleven who attempted the test without completing it.
iii A separate regression was run with ‘BBL test completed’ as opposed to ‘BBL test completed or attempted’, but this made no difference to the results.