LEIZELLE MAE E. UBOD
2018-2659

Synthesis and Reflection on Quantitative Research Methods

Research journals are fundamentally about the study of evidence and validity. Research has become the backbone of our advancement, and although some may assume that research is only for the sciences, this is untrue. Since the emergence of rational inquiry in society, particularly in the study of environmental processes, astronomical observations, quantum mechanics, and the like, it has been required that any claim be supported by evidence. Even when resources were lacking, studies were written in a fashion that provided strong arguments and observations. The establishment of research designs and their standards may have developed only later, but the concern for quality has always been present.

Today, analysing any part of a study is a matter of discerning its reliability and openness. Regardless of its type, focus, or discipline, the qualities demanded by society can be considered must-haves for any paper to be acknowledged and cited. Articulation, strategy, and data organization are vital, but these are only the external characteristics that convey trust to readers and fellow researchers.

There are many kinds of research, categorized by purpose, depth of scope, type of data used, degree of manipulation of variables, type of inference, timeframe, and sources of information; in other words, the methodological framework determines the discipline of a research paper. Regardless of category, a study commits to whatever objectives and information it lays out. The constant feature of past and current papers is that they pose problems and generate new ideas to answer them through tests, experiments, observations, or surveys. In this light, the execution of the methodological framework is the crucial intervention in the overall synthesis of a research paper, and every step within it carries a threat to validity. Therefore, for a study to contribute meaningfully to society, it must adhere to the standards fashioned by experts. In this writing, the author will focus on the type of research that is defined by the kind of data it depends on: quantitative research.

Quantitative research involves the process of objectively collecting and analysing numerical data to describe, predict, or control variables of interest. The goals of quantitative research are to test causal relationships between variables, make predictions, and generalize results to wider populations [2]. It uses computational, statistical, and mathematical tools to derive results [2]. Because it rests on a numerical and statistical approach, strict implementation is required. Quantitative data can be either discrete or continuous, depending on what the study requires in order to prove a point or deliver the needed information. Discrete data refers to information that can only take certain values that cannot be divided, by the nature of what they are, while continuous data can take any value, such as height, weight, temperature, and length. Continuous data can be divided as finely as needed and measured to many decimal places.
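As a minimal sketch of this distinction (the variable names and values here are invented for illustration), discrete values are counted while continuous values are measured:

```python
# Hypothetical illustration of discrete vs. continuous quantitative data.

# Discrete data: only certain values are possible (counts cannot be split).
children_per_household = [0, 1, 2, 2, 3, 5]           # whole numbers only

# Continuous data: any value within a range is possible, to arbitrary precision.
heights_cm = [152.4, 160.02, 171.75, 168.333, 180.1]  # measured, divisible

print(sum(children_per_household))        # a total of counts stays a whole number
print(sum(heights_cm) / len(heights_cm))  # a mean height may fall between observed values
```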
In addition, although quantitative research relies on numerical values, it does not use them arbitrarily. These data are further categorized in terms of their measurement properties: nominal, ordinal, interval, and ratio. These types of data and measurement scales help classify data correctly, which in turn allows accurate interpretation and the choice of appropriate statistical tools.

A nominal variable is another name for a categorical variable. These are variables with no numeric value, such as occupation or political party affiliation, and they are used for labelling without any quantitative meaning. For the purposes of statistics, something cannot be labelled as belonging to two of the categories at once; it belongs to only one category of the main classification. For example, a respondent cannot be recorded as both a man and a woman. Using the mean for such variables would be faulty, but the mode can be useful.

Ordinal data, on the other hand, is a type of measurement scale that deals with ordered variables or ranks. Likert scales (strongly agree, agree, disagree, and so on) are a common way of collecting this data, especially in surveys. Ordinal scales typically measure non-numeric concepts such as satisfaction, happiness, and discomfort.

For quantitatively categorized data in a continuous format, there are two numeric scale types. The first is the interval scale, which conveys both the order of values and the exact differences between them, although ratios between the values are not meaningful. "Interval" itself means "space in between": interval scales tell us not only about order but also about the value between each item, even though they have no true zero. A data value can hypothetically fall anywhere on a number line within the range of a given data set. Examples of interval-level data include temperature, aptitude scores, and intelligence quotients.

The last quantitative scale is the ratio scale. Ratio data tells us about order and the exact value between units, and it also has an absolute zero, which allows a wide range of both descriptive and inferential statistics to be applied, including measures of central tendency and standard deviation. Ratio data is very similar to interval data, except that zero means none, so negative values are not possible. For instance, height is ratio data: it is not possible to have negative height, and if an object's height is zero, there is no object. This is different from something like temperature, where both 0 degrees and -5 degrees are completely valid and meaningful values. Ratio data allows us to establish a true ratio between different points on a scale, and this added degree of precision lets ratio scales measure data more exactly than any of the previously mentioned scales.

The difference between interval and ratio scales, then, comes down to their ability to dip below zero. Interval scales hold no true zero and can represent values below zero; for example, temperature can be measured below 0 degrees Celsius, such as -10 degrees. Ratio variables, on the other hand, never fall below zero: height and weight are measured from 0 upward and never below it. An interval scale allows you to measure all quantitative attributes; any interval measurement can be ranked, counted, subtracted, or added, and equal intervals separate each number on the scale, but these measurements provide no sense of ratio between one another. A ratio scale has the same properties as an interval scale, so it can also be used to add, subtract, or count measurements; ratio scales differ by having a character of origin, which is the starting or zero point of the scale.
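To make the four scales concrete, the short sketch below (with invented example values) pairs each scale with a summary statistic that is appropriate for it: the mode for nominal labels, the median for ordinal ranks, and the mean for interval and ratio data, with a true ratio only meaningful on the ratio scale.

```python
from statistics import mode, median, mean

# Nominal: labels with no order; only the mode is meaningful.
party = ["A", "B", "A", "C", "A"]
print("nominal mode:", mode(party))

# Ordinal: ordered ranks (e.g., Likert responses coded 1-5); the median respects order.
satisfaction = [1, 2, 2, 3, 4, 5, 5]
print("ordinal median:", median(satisfaction))

# Interval: equal spacing but no true zero (e.g., temperature in Celsius);
# means and differences are meaningful, but ratios are not.
temps_c = [-5.0, 0.0, 12.5, 21.0]
print("interval mean:", mean(temps_c))

# Ratio: equal spacing with a true zero (e.g., height in cm);
# ratios such as "1.2 times as tall" are meaningful.
heights_cm = [150.0, 160.0, 175.0, 180.0]
print("ratio mean:", mean(heights_cm), "| ratio of max to min:", max(heights_cm) / min(heights_cm))
```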
Having covered the different types of measurement scales, we can now discuss the designs in which these scales should be appropriately used. A constructed research design helps describe the groups you will collect data from, how often you will collect data, and at what point the data will be analysed [3]. It helps the researcher prepare to carry out the research in a proper and systematic way, with proper planning of resources and their procurement at the right time. A good design reduces inaccuracy; achieves maximum efficiency and reliability; eliminates bias and marginal errors; minimizes wasted time; is helpful for collecting research materials and for testing hypotheses; gives an idea of the resources required in terms of money, manpower, time, and effort; provides an overview for other experts; guides the research in the right direction; and describes how the population is identified, the manner in which the sample will be selected, when the data will be collected, and so on. There are many types of research designs, but this discussion will organize them into a reduced classification with extended forms: the four major quantitative research designs are survey research, correlational research, causal-comparative research, and experimental research.

Survey research investigates and reports on the current status of a population based on numeric data you have collected (Fink, 2003; Fowler, 2013). Choosing a descriptive research approach starts with the Problem Statement, the Purpose Statement, and the Research Questions, but does not include a hypothesis. First, the variables of interest are measured using self-reports: in essence, survey researchers ask their participants (often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviours. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because these provide the most accurate estimates of what is true in the population. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers. Although survey data are often analysed using statistics, many questions also lend themselves to more qualitative analysis.

Additionally, hypotheses are excluded in this design: it does not examine the relationship of the variables or test for cause and effect. Instead, research problems are answered by descriptive statistics, such as the mean or range of values in the dataset; by graphical descriptive tools, such as a bar chart showing the number of occurrences of a value in the data; and by inferential statistics, tools such as t-tests, ANOVAs, and regression analysis that allow us to make decisions about the data we have collected.
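As a small sketch with made-up survey responses, the descriptive tools mentioned above can be computed directly: the mean and range summarize the data, and a frequency count corresponds to what a bar chart would display.

```python
from statistics import mean
from collections import Counter

# Hypothetical survey responses on a 1-5 agreement scale.
responses = [3, 4, 4, 5, 2, 4, 3, 5, 1, 4]

# Descriptive statistics: mean and range of the collected values.
print("mean:", mean(responses))
print("range:", max(responses) - min(responses))

# A frequency table of each response value -- the numbers a bar chart would plot.
print("frequencies:", dict(sorted(Counter(responses).items())))
```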
However, to utilize these statistics, data must first be collected from people, and there is a systematic way of choosing the intended participants from a large population. The population is all the individuals in whom the study is interested. Sometimes it may be geographical areas, such as all cities with populations of 100,000 or more, or we may be interested in all households in a particular area. A sample is the subset of the population involved in a study; in other words, a sample is part of the population. The process of selecting the sample is called sampling, and the idea of sampling is to select part of the population to represent the entire population.

There are two types of sampling: probability and nonprobability. A probability sample is one in which each individual in the population has a known, nonzero chance of being selected for the sample. The most basic type is the simple random sample, in which every individual has the same chance of being selected. This is the equivalent of writing each person's name on a piece of paper, putting the papers in plastic balls, placing all the balls in a big bowl, mixing them thoroughly, and selecting some predetermined number of balls from the bowl. When it is not practical to list the entire population in this way, a multistage cluster sample would be used instead. For instance, a sample of larger geographical areas could be divided into smaller ones such as blocks, and a sample of blocks would be selected; a list of all households would then be constructed for only those blocks in the sample; finally, one member of each of those households would be randomly selected for the sample. Once the household and the member of that household have been selected, substitution is not allowed.

A nonprobability sample is one in which each individual in the population does not have a known chance of selection. There are several types of nonprobability samples. For example, magazines often include questionnaires for readers to fill out and return; this is a volunteer sample, since respondents self-select themselves into the sample. Another type is the quota sample, in which survey researchers assign quotas to interviewers. For example, interviewers might be told that half of their respondents must be female and the other half male; this is a quota on sex.

Probability samples are preferable to nonprobability samples. First, they avoid the dangers of what survey researchers call "systematic selection biases," which are inherent in nonprobability samples. In a volunteer sample, for example, particular types of people might be more likely to volunteer; perhaps highly educated individuals are more likely to volunteer to be in the sample, and this would produce a systematic selection bias in favor of the highly educated. In a probability sample, the selection of the actual cases is left to chance. Second, a probability sample allows us to estimate the amount of sampling error: there will always be a certain amount of error as a result of selecting a sample from the population, and this sampling error can be estimated in a probability sample but not in a nonprobability sample. Nonsampling error, by contrast, includes such things as the effects of biased questions, the tendency of respondents to systematically underestimate things such as their age, the exclusion of certain types of people from the sample, or the tendency of some respondents to agree with statements regardless of their content. In some studies, the amount of nonsampling error might be far greater than the amount of sampling error. Notice that sampling error is random in nature, while nonsampling error may be nonrandom, producing systematic biases.
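A minimal sketch of a simple random sample, assuming a hypothetical numbered population: random.sample gives every individual the same chance of selection, which is what the bowl-of-balls analogy describes, and repeating the draw shows how sample estimates scatter around the population value, which is the sampling error discussed above.

```python
import random
from statistics import mean

random.seed(42)  # for a reproducible illustration

# Hypothetical population: 10,000 individuals with some numeric attribute (e.g., age).
population = [random.randint(18, 90) for _ in range(10_000)]
print("population mean:", mean(population))

# Simple random sample: every individual has the same chance of being selected.
sample = random.sample(population, k=200)
print("sample mean:", mean(sample))

# Drawing several samples shows sampling error: estimates vary around the true mean.
estimates = [mean(random.sample(population, k=200)) for _ in range(5)]
print("five sample means:", [round(e, 1) for e in estimates])
```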
Eliminating sampling error entirely is impossible, and it is unrealistic to expect that we could ever eliminate nonsampling error; good research practice is to be diligent in seeking out sources of nonsampling error and to attempt to minimize them.

A survey consists of many questions or statements to which participants respond; the instrument is sometimes called a scale, and the questions or statements in the survey are often called items. Three types of questions can be included in a survey: open-ended items, to which participants respond in their own words, with no limitations; partially open-ended items, which give participants a few restricted answer options plus a final option that allows them to respond in their own words in case the restricted options do not fit their answer; and restricted items, the most commonly used type, which do not give participants the option to respond in their own words but restrict them to a finite number of options provided by the researcher.

Correlational research is a type of nonexperimental research in which the researcher measures two variables and assesses the statistical relationship between them in an identifiable pattern. It determines the extent to which two factors are related, not the extent to which one factor causes change in another. When one value (e.g., temperature) goes down and the other (e.g., ice cream consumption) goes down with it, the data values move together in the same direction, and this is called a positive correlation. There is also negative correlation, in which one value gets larger as the other gets smaller. In other words, when a hypothesis is stated, the study examines how an independent variable (the presumed cause) relates to a dependent variable (the presumed effect); when measured, the strength of this relationship may be reported as an effect size, so hypotheses can be tested, unlike in the survey design.

The relationship between the variables is computed through a statistical measure called the correlation coefficient, which measures the strength and direction of the linear relationship, or correlation, between two factors. The value of r ranges from -1.0 (the values of the two factors change in opposite directions) to +1.0 (the values of the two factors change in the same direction); the direction of the relationship is described as positive or negative, and values closer to ±1.0 indicate a stronger relationship. This can be shown with a regression line, which indicates how far the data points fall from the line when plotted on a graph: data points are fit to a regression line to determine the extent to which changes in one factor are related to changes in a second factor. In a positive correlation, as values of one factor increase, values of the second factor also increase, and as values of one factor decrease, values of the second factor also decrease; when two factors have values that change in the same direction, the correlation can be graphed with a straight line. In a negative correlation, as values of one factor increase, values of the second factor decrease; when two factors have values that change in opposite directions, the correlation can likewise be graphed with a straight line. A zero correlation (r = 0) means that there is no linear pattern or relationship between the two factors, and the closer a correlation coefficient is to r = 0, the weaker the correlation and the less likely the two factors are related. The most common formula used for computing r is the Pearson correlation coefficient, which determines the strength and direction of the relationship between two factors measured on an interval or a ratio scale.
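As a small sketch with invented paired measurements, the Pearson correlation coefficient introduced above can be computed directly: a value near +1 indicates that the two factors increase together, a value near -1 that one increases as the other decreases, and a value near 0 that no linear pattern is present.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired data: temperature and ice-cream consumption move together.
temperature = [15, 18, 21, 24, 30, 33]
ice_cream   = [2, 3, 4, 6, 9, 10]
print("positive correlation:", round(pearson_r(temperature, ice_cream), 2))

# Hypothetical paired data: price and units sold move in opposite directions.
price      = [1, 2, 3, 4, 5, 6]
units_sold = [90, 75, 60, 48, 30, 20]
print("negative correlation:", round(pearson_r(price, units_sold), 2))
```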
Another type of design is causal-comparative research, also known as "ex post facto" research. In this type of research, investigators attempt to determine the cause or consequences of differences that already exist between or among groups of individuals, and to identify a causative relationship between an independent variable and a dependent variable. The relationship between the independent variable and the dependent variable is usually a suggested relationship (not a proven one), because the researcher does not have complete control over the independent variable. The researcher's goal is to determine whether the independent variable affected the outcome, the dependent variable, by comparing two or more groups of individuals. The steps can be: 1. identify the pre-existing groups and state your hypotheses; 2. collect data representing the variables you want to investigate; 3. use statistical software to analyse the data; and 4. test the hypotheses based on the data analysis.

In formulating the problem, choosing the sample, and preparing the instrumentation (achievement tests, questionnaires, interviews, observational devices, attitudinal measures, and so on), the validity of the research is threatened. There are two types of validity: internal validity, which refers to the degree of confidence that the causal relationship being tested is trustworthy and not influenced by other factors or variables, and external validity, the extent to which results from a study can be applied (generalized) to other situations, groups, or events. Threats to these are, respectively, anything that might affect the accuracy of our results and issues that may affect the generalizability of our results.

To be specific, these are some of the threats to internal validity: i.) History, which may influence the outcome of studies that occur over a period of time, such as a change in political leadership or a natural disaster that influences how study participants feel and act; for instance, the fateful events of 9/11 changed Americans' lives forever, and it would be unrealistic to think that the results of any study conducted with military members at that point in time would be meaningful. ii.) Maturation, which describes the impact of time as a variable in a study; if a study takes place over a period in which participants may naturally have changed in some way (grown older, become tired), it may be impossible to rule out whether effects seen in the study were simply due to the passage of time. iii.) Testing, where repeatedly testing participants using the same measures influences outcomes; if given the same test three times, participants are likely to do better as they learn the test or become used to the testing process, so that they answer differently. iv.) Instrumentation: it is possible to "prime" participants in a study in certain ways with the measures used, causing them to react differently than they otherwise would; for most instruments, however, item and sampling validity are the two biggest concerns, and problems with either can negatively affect the internal validity of the study.
v.) Statistical regression to the mean, which means that when something is measured twice, extreme scores on the second attempt will tend to be closer to the average score of the group than the extreme scores on the first attempt. vi.) Differential selection of participants, a threat that arises when the groups selected are different to begin with, so that this difference may affect the dependent variable. A related threat is diffusion of treatment, in which the treatment in a study spreads from the treatment group to the control group through the groups interacting, talking with, or observing one another; this can also lead to resentful demoralization, in which a control group tries less hard because its members feel resentful about the group they are in. And vii.) Mortality, meaning participants dropping out or leaving a study, so that the results are based on a biased sample of only the people who did not choose to leave (who possibly all have something in common, such as higher motivation).

Threats to generalizability tend to be caused by the actions of participants involved in the study, by problems with the sample or how it was created, or by issues beyond the control of the person conducting the study. Just as there is external validity, there are factors that threaten its integrity: i.) selection-treatment interaction, where the nonrandom or volunteer selection of participants limits the generalizability of the study; ii.) pre-test-treatment interaction, where the pre-test sensitizes participants to aspects of the treatment and thus influences post-test scores; iii.) multiple-treatment interference, where participants receive more than one treatment and the effect of a prior treatment can affect or interact with later treatments, limiting generalizability; iv.) treatment diffusion, where treatment groups communicate and adopt pieces of each other's treatment, altering the initial basis of the treatments' comparison; v.) experimenter effects, where conscious or unconscious actions of the researcher affect participants' performance and responses; and vi.) specificity of variables, where poorly operationalized variables make it difficult to identify the settings and procedures to which the variables can be generalized. This only shows that there are many factors that can compromise a study's validity if the methodology is not executed properly.

Moving on, the fourth design is the experimental design. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. Experimental research is best suited for explanatory research (rather than descriptive or exploratory research), where the goal of the study is to examine cause-and-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments tend to be high in internal validity, but this comes at the cost of low external validity (generalizability), because the artificial laboratory setting in which the study is conducted may not reflect the real world. Field experiments, conducted in field settings such as a real organization, can be high in both internal and external validity, but such experiments are relatively rare because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.
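To illustrate the core of an experimental design with a hypothetical example (all names and scores below are invented), the sketch randomly assigns participants to a treatment group and a control group and then compares the mean outcome of each group; randomization is what allows an observed difference to be attributed to the treatment rather than to pre-existing group differences.

```python
import random
from statistics import mean

random.seed(7)  # reproducible illustration

# Hypothetical participant pool.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Random assignment: shuffle, then split into treatment and control groups.
random.shuffle(participants)
treatment_group = participants[:10]
control_group = participants[10:]

# Hypothetical post-test scores (O) after the treatment group receives the treatment (X).
treatment_scores = [random.gauss(75, 5) + 8 for _ in treatment_group]  # +8 simulates a treatment effect
control_scores = [random.gauss(75, 5) for _ in control_group]

# Compare group means; an inferential test (e.g., a t-test) would then judge whether
# the observed difference is larger than chance alone would produce.
print("treatment mean:", round(mean(treatment_scores), 1))
print("control mean:  ", round(mean(control_scores), 1))
print("difference:    ", round(mean(treatment_scores) - mean(control_scores), 1))
```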
The procedure can then follow these steps: 1.) stating your hypothesis; 2.) identifying appropriate data collection instruments; 3.) identifying your population and sample selection procedures; 4.) determining the design you will use to test your hypothesis; and 5.) developing a detailed set of procedures you will follow while conducting your study.

There are various types of experimental designs, each with its own sub-types. I.) Pre-experimental designs, in which a single group is observed subsequent to some agent or treatment presumed to cause change: i.) the one-shot case study, where a single group is studied at a single point in time after some treatment that is presumed to have caused change; the carefully studied single instance is compared to general expectations of what the case would have looked like had the treatment not occurred and to other events casually observed, and no control or comparison group is employed (X → O); ii.) the one-group pre-test post-test design, where a single group is observed at two time points, one before the treatment and one after it; changes in the outcome of interest are presumed to be the result of the intervention or treatment, and no control or comparison group is employed (O → X → O); and iii.) the static group comparison, where a group that has experienced some treatment is compared with one that has not, and observed differences between the two groups are assumed to be a result of the treatment (X1 → O; X2 → O).

The second type is II.) the quasi-experimental design, which aims to establish a cause-and-effect relationship between an independent and a dependent variable but does not rely on random assignment; instead, subjects are assigned to groups based on non-random criteria. This can be a useful tool in situations where true experiments cannot be used for ethical or practical reasons: i.) the non-equivalent group design, in which the researcher chooses existing groups that appear similar, but only one of the groups experiences the treatment; because the groups are not random, they may differ in other ways, making them nonequivalent, so researchers try to account for any confounding variables by controlling for them in the analysis or by choosing groups that are as similar as possible (O → X1 → O; O → X2 → O); ii.) the time-series design, where a series of periodic measurements is taken from one group of test units, followed by a treatment and then another series of measurements (O O O O O X O O O O); and iii.) the counterbalanced design, used when there are two possible conditions, A and B; as with the standard repeated-measures design, the researchers want to test every subject under both conditions, so they divide the subjects into two groups, one group receiving condition A followed by condition B and the other receiving condition B followed by condition A (X1 → O → X2 → O; X2 → O → X1 → O).

Lastly, III.) the true experimental design, in which the researcher randomly assigns test units and treatments to the experimental groups, and all the important factors that might affect the phenomena of interest are completely controlled: i.) the pre-test post-test control group design, where test units are randomly allocated to an experimental group and a control group.
Both groups are measured before and after the experimental group is exposed to a treatment (R O → X1 → O; R O → X2 → O); randomization has taken place, meaning participants have been randomly assigned to one group or the other. ii.) The post-test-only control group design, where test units are randomly allocated to an experimental group and a control group, the experimental group is exposed to a treatment, and both groups are measured afterwards (R X1 → O; R X2 → O); this could be used, for example, to investigate the effect of short-term therapy versus cognitive-behavioural therapy on levels of client hope. And iii.) the Solomon four-group design, which attempts to take into account the influence of pre-testing on subsequent post-test results, to control for the threats to validity arising from the pre-test post-test design, and to address the mortality issue caused by the post-test-only design (R O → X1 → O; R O → X2 → O; R → X3 → O; R → X4 → O). [Here R means that membership in a group is randomized; O represents a point where data are collected (a pretest, posttest, or survey representing the dependent variable); and X indicates an independent variable. When there is more than one level, the levels are numbered; for example, an independent variable with two levels would be shown as X1 and X2.] As seen, there are multiple ways to execute a study according to the data wanted, required, or needed.

REFLECTION

Going through the finer details of quantitative research designs emphasizes how critical the process can be. Regardless of the type of design chosen, the data needed, or the instrumentation appropriate for surveys, observations, or statistical computations, a study will always be vulnerable to factors that can affect its validity. Most commonly, these threats are too subtle to notice; otherwise, the study is compromised. Errors cannot be completely eliminated, and so attentiveness is required. From the formulation of problems and hypotheses, the classification of variables, the articulation of survey items, and the choice of the sample population, to the timeframe for data collection and the consideration of location, participant circumstances, and the subject being tested, extra work is needed to lessen inaccuracies. However, just as there are threats to validity, there are also ways to improve it, even if they are simpler than the complex threats they address. While errors can happen at any time, defences can be built in during data collection. Additionally, even under the same terms and design there are further sub-types, particularly in experimental design, that can guide the researcher in correctly implementing data collection, whether this concerns the times at which observations or tests are conducted, the groups involved, or randomization. In addition, statistical computation is a must, especially when designing quantitative research. There are numerous ways of using numerical data and instruments to prove or disprove hypothesized claims. This is vital: even when one has no interest in statistics, it is a requirement for a quality paper or publication. Advancing oneself to the level of research carries a great responsibility for transparency, accuracy, reliability, and validity. It requires a lot of work, patience, and attention for the output to contribute to and be acknowledged by society.

References:

Jonker, J., & Pennink, B. (2010). The Essence of Research Methodology. Heidelberg: Springer. DOI 10.1007/978-3-540-71659-4

Ary, D., Jacobs, L., & Sorensen, C. (2010).
Introduction to Research in Education (8th ed.). USA: Wadsworth Cengage Learning. ISBN-13: 978-0-495-60122-7; ISBN-10: 0-495-60122-5

Fraenkel, J., Wallen, N., & Hyun, H. (2012). How to Design and Evaluate Research in Education (8th ed.). New York, USA: McGraw-Hill. ISBN: 978-0-07-809785-0; MHID: 0-07-809785-1

Ellison, C. (2010). Writing Research Papers. New York: McGraw-Hill. MHID: 0-07-162990-4

Retrieved from:

https://www.discoverphds.com/blog/types-of-research
https://www.simplypsychology.org/qualitative-quantitative.html
https://www.sisinternational.com/what-is-quantitative-research/
https://www.statisticshowto.com/probability-and-statistics/statistics-definitions/nominal-ordinal-interval-ratio/
https://www.freecodecamp.org/news/types-of-data-in-statistics-nominal-ordinal-interval-and-ratio-data-types-explained-with-examples/
https://www.open.edu/openlearn/ocw/mod/oucontent/view.php?id=85587&section=1
https://www.mymarketresearchmethods.com/types-of-data-nominal-ordinal-interval-ratio/
https://www.questionpro.com/blog/ratio-scale-vs-interval-scale/
https://relivingmbadays.wordpress.com/2013/05/15/research-design-meaning-and-importance/
https://ssric.org/trd/modules/cowi/chapter3
https://opentextbc.ca/researchmethods/chapter/overview-of-survey-research/
https://www.verywellmind.com/internal-and-external-validity-4584479
https://courses.lumenlearning.com/suny-hccc-research-methods/chapter/chapter-10-experimental-research/
https://www.researchconnections.org/childcare/datamethods/preexperimental.jsp
https://explorable.com/counterbalanced-measures-design
https://www.researchconnections.org/childcare/datamethods/experimentsquasi.jsp
https://www.insightsassociation.org/issues-policies/glossary/pre-test-post-test-control-group-design
https://www.insightsassociation.org/issues-policies/glossary/post-test-only-control-group-design
https://methods.sagepub.com/reference/the-sage-encyclopedia-of-communication-research-methods/i13736.xml