Ho, C. H., & Swan, K. (2007). Evaluating online conversation in an asynchronous learning environment: An application of Grice's cooperative principle. The Internet and Higher Education, 10(1), 3-14.

Much research regarding distance education has been criticized on several points. Critics claim that true or original research in the field is lacking, and they assert that the validity and reliability of the instruments used in that research have been questionable. Lastly, some critics cite the lack of a theoretical or conceptual framework underpinning much of the research dedicated to distance education. The journal is a peer-reviewed, quarterly journal dedicated to the scholarly presentation and dissemination of theoretical and applied information and research as it relates to the use of Internet and information technologies in higher education. The authors are researchers at SUNY-Albany and Kent State University. They address these criticisms in their case study (evaluation) of asynchronous communication in an online learning environment, examining four hypotheses. The instrument they used to test their four hypotheses was originally developed to measure oral discourse in a face-to-face learning environment, is based on the theoretical foundation of Grice’s Cooperative Principle as it relates to classroom discussion, and had previously been shown to be reliable and valid. The authors found support for three of their four hypotheses, drawing positive relationships between increased Gricean ratings and direct responses to online postings as assigned by their instructors, and also demonstrating a relationship between Gricean scores and measured actual learning. The authors acknowledge multiple limitations of their study, namely a small sample size and course-specific subject matter, as factors that limit the generalizability of their findings. The statistical methods they used to analyze their findings were stringent and objective (ANOVA, Pearson correlation, Tukey's test, etc.) and lend strength to their conclusions. The results open the door for further studies of online students' discussions and their relationship to learning success and outcomes.

Garrison, D. R., Cleveland-Innes, M., & Fung, T. S. (2010). Exploring causal relationships among teaching, cognitive and social presence: Student perceptions of the community of inquiry framework. The Internet and Higher Education, 13(1), 31-36.

The Community of Inquiry (CoI) framework is a philosophical and theoretical framework that has been applied specifically to the assessment of higher education and learning, and it has recently been used to evaluate three core elements theorized to affect learning in distance education: teaching, cognitive, and social presence. The journal is a peer-reviewed, quarterly journal dedicated to the scholarly presentation and dissemination of theoretical and applied information and research as it relates to the use of Internet and information technologies in higher education. The authors are all researchers, and the primary investigator is a prolific writer in the field of education research, much of it aimed at distance education.
The authors use the CoI survey instrument, previously established as a valid and reliable tool, and confirm a correlation between teaching presence, defined as “the design, facilitation and direction of cognitive and social processes for the purpose of realizing personally meaningful and educationally worthwhile learning outcomes,” and the creation and maintenance of social and cognitive presence. They assert that such a causal relationship supports the belief that teaching presence plays a vital and primary role in “sustaining an online learning environment and realizing intended learning outcomes.” Their results are strengthened not only by the reliability and validity of the assessment tool, but also by the breadth of the sample: 287 respondents drawn from two educational programs and fourteen individual courses, which helps eliminate within-program and subject bias from the survey. These multiple layers of stratification also add a possible level of generalizability to other institutions, programs, and courses.

Swan, K., Day, S. L., Bogle, L. R., & Matthews, D. B. (2014). A collaborative, design-based approach to improving an online program. The Internet and Higher Education, 21, 74-81.

Recently, it was revealed that almost one-third of college students have taken at least one online course. Online learning clearly differs from classroom or face-to-face learning, not only in content but also in approaches to instruction. Much has been made in the past of whether one form of learning is superior to the other, and multiple media-comparison studies over the last few decades have tried to address that question. Research, however, now seems to be moving away from such comparisons; an emerging trend in education research is the study of rubrics and assessment tools that online educators and course developers use to improve the quality of course design and implementation in an effort to improve learning outcomes. The journal is a peer-reviewed, quarterly journal dedicated to the scholarly presentation and dissemination of theoretical and applied information and research as it relates to the use of Internet and information technologies in higher education. The primary investigator/author is a prolific writer in the field of education research, as well as the Stukel Distinguished Professor for Educational Leadership (University of Illinois). The authors take a design-based approach grounded in a theoretical framework discussed in a previous entry (the Community of Inquiry) and the Quality Matters (QM) assessment tool, a peer-reviewed instrument that applies design principles to help faculty address and adjust course design in order to maximize “well-specified outcomes, objectives and assessments.” The goal was to assess four college courses over time across semesters: an initial offering, followed by QM-driven changes in the second semester and CoI-driven improvements in subsequent semesters, and then to correlate the findings. Using these tools, the authors attempted to answer three questions about the impact of course design revisions on student learning outcomes. They studied four courses, with 214 students as subjects, at one university in a fully online graduate program.
Their findings were mixed; despite the authors’ assertion that the results generally supported their hypotheses, the data paint a somewhat different picture. Two of the courses showed no significant difference in outcomes after the CoI and/or QM improvements. The subjects and the single study site also proved to be limiting factors in the generalizability of the findings. While the authors acknowledge this lack of generalizability, the remaining data suggest that a more valid assessment tool, or a return to the drawing board in the form of a pilot study to more adequately field-test improvements and their effects on learners under the CoI and QM frameworks, would be prudent.