Criteria for Good Research with Emphasis on Qualitative Research

This document is based on an exploration of the question, “What makes research good?” by students in a research methods course similar to the one you are taking. I have added some criteria that were missing from your notes and filled in details. You can use the document for:
- Evaluating an example of research
- Designing your own research (i.e., a research proposal)

1) Problem statement

This is extremely important. The first question about any piece of research is: What is the problem? The problem logically comes before the research questions. A good paper/proposal has a problem statement that does the following:
- Explains what the problem is
- Explains why the problem is important to research. This includes the novelty of the problem (has it been investigated before? Is there previous research, but there still are important aspects that need research?)
- Explains how the study contributes to knowledge about the problem

All this should be in very simple language. Try explaining your problem statement to someone in the canteen, or a family member who is not an expert in education. If you can’t get them interested in your research, the problem statement is not good enough. The problem statement should help to develop a focus, so that the research does not try to cover a general topic.

2) Research Questions

The research question is also extremely important and must be researchable. A question is researchable when it is possible to design a study that can answer it; some questions cannot be answered empirically, or do not require empirical research. You also mentioned that studies have hypotheses, but this is less common in qualitative research. Here are some other important issues with research questions:
- Have a small number of research questions, or else the paper/proposal becomes too unfocused and it becomes difficult to see what the new contribution of the research is
- The research questions should be related to the research problem and suggest specific ways to study the problem
- The research methods must be capable of answering the research questions. Can you really come to a conclusion about the conceptual change of a student based on a 200-word transcript of an interview?
- The research questions must be answered during the study, at least to some extent

3) Objectives

Any one study will not be able to fully solve a difficult problem, so it is important that researchers spell out what they hope to accomplish with the study via objectives. For example, “The objective of this study is to gain a better understanding of how successful teachers encourage student-to-student talk, with a view to designing professional development.” Objectives can clarify:
- What it will be possible to do with the study results
- The kind of knowledge expected (e.g., practical knowledge in action research versus more generalizable knowledge)
- The boundaries of the study. What must be studied to achieve the objectives, and what is better left to a future study? This is especially important for case studies.
- Feasibility: Are the resources and time available sufficient for achieving the objectives? A project can be too ambitious for the resources available, but it also is important to be aware that a study can be so small-scale that nothing worthwhile can be learned from it.

4) Literature review

The literature review is important for framing the study and demonstrating how the study adds to existing knowledge.
The literature review develops the argument for doing the study: why it would be useful. Note that just the fact that research has not been done before is not necessarily a reason for doing it now. The literature review should:
- Discuss what is known about the problem already. If research already exists, it helps to demonstrate that the field is interested in the problem.
- Discuss what is not known about the problem. There may be gaps in the literature (aspects that have not been studied at all, or have not been studied well) that can help you to identify a “niche,” some research questions that would be worth studying. For example, “Almost all studies in this literature examine teacher learning, but no study was found that also examines the impact on student learning. Some authors (references) argue that professional development should have an impact on classroom learning.”
- Introduce the major concepts that will be used to analyze the data, i.e., the conceptual framework. For example, if you are interviewing teachers about classroom management, what kinds of theories and concepts will be used to analyze the teachers’ responses? Here it is important to define terms well. There are different ways to think about a problem, so it is important that researchers anticipate ideas other than the ones that underlie the conceptual framework (i.e., criticisms). It is particularly important to note that researchers may be biased or make unstated assumptions that can be challenged.

Literature reviews vary greatly. Some journals require short articles, so the literature review can be very focused; other journals require a broader exploration of the problem. Without an adequate literature review it is very difficult to evaluate whether a study makes a contribution to knowledge. It is important that literature reviews are based on good literature (e.g., not conference papers when journal articles or books on the subject are available) and are reasonably comprehensive (e.g., also works from before 2000, so that major areas of work are not missed, or also the social psychology of learning in groups and not just cognitive psychology). And a good literature review does not just summarize studies (“Chan said xxx, Postman said yy”) but provides analysis (your commentary), so that the case for doing the new piece of research gradually becomes clear. There is a tendency for readers to lose their way in the details of a literature review and miss the main points.

5) Description of data collection methods

The methods section of a paper/proposal is very important. Readers need to be able to understand what the researchers did, and need to be able to evaluate whether the methods used are of sufficient quality to make answers to the research questions possible. The first job of the methods section is to justify the overall approach chosen. Why action research and not a case study? Why not an experiment? Researchers need to demonstrate that they are aware that the research questions can be studied in different ways, and that there are debates in the field about how certain kinds of research questions are best investigated.

There also are many issues with sampling and recruitment:
- Is the “case” chosen the best one to study the problem or research questions? E.g., if you want to study the difficulties of implementing assessment for learning, why would Queens College make a good school to study?
- Are the phenomena sampled enough? Is solving problems in one area of the curriculum (e.g., trigonometry) enough for meeting the objectives of the study?
- Do you have enough participants? What kinds of participants are necessary (e.g., a teacher and his/her students, but not friends or the principal)? Sample size also is important for inferential statistics (t-test, ANOVA, etc.), in which you are trying to generalize to a population. Besides sample size, representativeness of the sample is also important, especially in quantitative research. For example, does the proportion of participants in your sample who are women match that in the population or society? (A brief illustrative sketch of this kind of check is given at the end of this document.)
- How are the participants recruited? Are there any criteria for excluding participants from the study?
- How were the participants informed about what would be involved in participating (ethics)?

Description of procedures:
- What was the nature of the intervention, if there was one? E.g., in action research, what was the action undertaken, why, and how? How long did the intervention last?
- How well was the intervention carried out? For example, if it was important to have mixed-ability groups, were these used, or did students choose their friends to be in their group?
- What special materials were used, and how? If students did online discussions, did they do them in a computer lab or after school? What instructions were students given? Were these discussions assessed?
- Ethics: Did the procedures follow or fail to follow ethical principles (e.g., a long experiment that gives a treatment to one class but not another, the use of deception, payment of participants)?

The basic idea here is to provide as full a description as possible of what happened and of what the students and teacher did.

Description of data collection:
- Was a pilot study completed to clarify the research questions or the various data collection methods? E.g., did the researchers try out their interviewing procedures before doing the main study?
- Do the interview questions lead the interviewee in a certain direction, or are they more neutral? Is the interview carried out in a way that produces the best information possible? Were interviews carried out in English or Cantonese?
- How often did interviews take place? What kinds? How long did they last? Were they recorded?
- How often did observations take place? What was their focus? How were they recorded?
- What kinds of materials were collected? Why?
- What tests were used? What do we know about their validity? What process was used to develop the test (if a new one)?
- Was more than one data source collected? How does each data source contribute to answering the research questions?
- Is the data collection plan logically adequate for answering the research questions? For example, if a study lacks a pre-test, can you still evaluate learning? Is a comparison group needed? The researcher needs to make decisions regarding these issues before the data collection.
- Was the study long enough to answer the research questions?

Description of data analysis:
- Are procedures for processing the data described? E.g., levels of transcription, checking of transcription by participants.
- Are questionnaires and materials that were developed in English translated into Chinese and back-translated (to check the accuracy of the translation)?
- Are the procedures by which a coding scheme is developed explained in detail (e.g., different kinds of coding)? What was the source of ideas for coding?
- Are any judgments of a subjective kind (e.g., ratings, coding) repeated by an independent rater? Or would it make sense not to expect close agreement? (A brief sketch of one common agreement check is also given at the end of this document.)
- Are the theories or perspectives that underlie the analysis described (e.g., grounded theory, conversation analysis, hermeneutics)?
- Are inferences made from some of the data tested on the remaining data? This can mean different things, e.g., testing inferences from interviews against the observation data (triangulation), or testing inferences from six cases on the remaining two cases (in grounded theory, emerging theory should be tested in this way).
- Are questionnaire data cleaned? E.g., removal of incomplete questionnaires or obviously bogus responses.
- Is the coding audited, so that biases could be detected?
- Are both confirming and disconfirming data reported, if both exist? Is an effect that happened just once over-emphasized?
- Is it explained how the quotes shown in a paper were chosen? Why do the researchers focus on this quote and not the many other possible ones that might lead to a different conclusion?
- Have the researchers done anything special to investigate questions that arise along the way? E.g., if 4-5 students were absent from a pre-test and were excluded from the study, did the researchers investigate whether the sample was still representative?
- Is the data analysis superficial or in-depth? Are the analytical methods appropriate for the data? That is, are the assumptions about the data that underlie the analysis valid?
- Ethics: Are the results “too good to be true”? Are results likely to be fabricated or altered? Are learning assessments sufficiently distant from the intervention?

6) Reporting of findings, discussion, and conclusion

You won’t have to deal with this in the proposal, but it is important in any research report. Some issues:

The Results section:
- Are the research questions answered? Do the researchers explain what the data reveal about the research questions? Or is the presentation data-driven, with the researchers summarizing the results in an unfocused way, without returning to the research questions?
- Are assertions based on the data analysis, or do they seem like opinions on the topic that are not related to the evidence presented?
- Was a participant check carried out to see how the participants reacted to the interpretations and descriptions in the paper?
- Are alternate explanations considered? For example, rather than the intervention, maturation of the participants could explain the results.
- Do the researchers clearly delineate the boundaries around their claims? “The results show that xx but not that yyy.”
- Does the Results section stay focused on the results, or does it wander off into long discussions?
- Does the study include clear comparisons that facilitate interpretation?
- Does the study have clear findings that are likely to have impact? Or are the findings so commonplace that they do not seem to require research?

The Discussion/conclusions section: This should not report additional findings. That’s what the Results section is for!
- Do the researchers review whether their objectives were met?
- Do the researchers discuss the implications of the study by returning to the problem statement and the literature?
- Do the researchers point out the limitations of the study to qualify their conclusions?
- Do they suggest next steps? Are these steps based on the findings of the study? E.g., if the researchers recommend more attention to professional development, is that a recommendation based on the study, or is it their favorite recommendation, which they make after every study of theirs?
- Does the paper have a conclusion? Are the conclusions based on analysis of the results/data?
- Is the focus of the study preserved in the discussion, or does the framing wander away from earlier sections?
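To make the earlier point about sample representativeness (section 5) more concrete, here is a minimal sketch, in Python, of one way to compare the gender balance of a sample with the population it is meant to represent, using a chi-square goodness-of-fit test. The counts and the assumed population share are hypothetical and only for illustration; your own study would use its actual figures.

```python
from scipy.stats import chisquare

# Hypothetical counts: 34 women and 16 men in a sample of 50 teachers.
observed = [34, 16]
n = sum(observed)

# Assumed (hypothetical) share of women in the population of interest.
population_share_women = 0.54
expected = [n * population_share_women, n * (1 - population_share_women)]

# Goodness-of-fit test: does the sample's gender balance differ
# noticeably from the population's?
result = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# A small p-value would suggest the sample is not representative with
# respect to gender; a large one gives no evidence of a mismatch.
```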
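Similarly, for the question about repeating subjective coding with an independent rater (section 5, description of data analysis), here is a minimal sketch of Cohen's kappa, one common chance-corrected measure of agreement between two coders. The segment codes below are hypothetical and stand in for whatever categories your coding scheme uses.

```python
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders' labels for the same segments."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    # Observed agreement: proportion of segments both coders labelled the same way.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement if each coder assigned labels independently,
    # keeping their own label frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two raters to ten interview segments.
rater_1 = ["management", "motivation", "management", "other", "motivation",
           "management", "other", "motivation", "management", "management"]
rater_2 = ["management", "motivation", "other", "other", "motivation",
           "management", "other", "management", "management", "management"]
print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")
# Values near 1 indicate strong agreement; values near 0 indicate agreement
# no better than chance. How high is "high enough" remains a judgment call.
```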