
Research Study Two

Outline
1) Research Design
2) Population of the Study
3) Sample and Sampling Technique
4) Instrumentation
5) Reliability of the Research Instrument
6) Validity of the Research Instrument
7) Data collection procedure
8) Method of Data Analysis
9) Ethical consideration
Research Design
What is Research Design?
Research design is the framework of research methods and techniques chosen by a researcher to
conduct a study. The design allows researchers to sharpen the research methods suitable for the
subject matter and set up their studies for success.
Types of Research Design
A researcher must clearly understand the various research design types to select which model to
implement for a study. Like research itself, the design of your analysis can be broadly classified
into quantitative and qualitative.

Qualitative research
Qualitative research is a process of naturalistic inquiry that seeks an in-depth understanding of
social phenomena within their natural setting. Qualitative research relies on data obtained by the
researcher from first-hand observation, interviews, questionnaires (on which participants write descriptively), focus groups, participant observation, recordings made in natural settings, documents, case studies, and artifacts. The data are generally non-numerical. Qualitative
methods include:
1) Ethnography research: This is used to understand the culture of a group, community or organization through participation and close observation.
2) Grounded Theory research: This is a qualitative method that enables you to study a particular phenomenon or process and discover new theories that are based on the collection and analysis of real-world data.
3) Discourse Analysis: Also called discourse studies, this is an approach to the analysis of written, vocal, or sign language use, or any significant semiotic event. Discourse analysis is a research method for studying written or spoken language in relation to its social context. It aims to understand how language is used in real-life situations. When you do discourse analysis, you might focus on the purposes and effects of different types of language.
4) Interpretative Phenomenological Analysis: This is a qualitative approach which aims to provide detailed examinations of personal lived experience. It is used to explore in detail how participants make sense of their personal and social world.
5) Descriptive phenomenological approach: This approach aims to describe participants' lived experiences as they are; it is better suited, for example, to examining the experiences of family caregivers of patients with advanced head and neck cancer.

Quantitative research
Quantitative research is a research strategy that focuses on quantifying the collection and analysis of data. It is used where statistical conclusions are needed to draw actionable insights. Numbers provide a better perspective for making critical business decisions.
Quantitative research methods are necessary for the growth of any organization. Insights drawn
from complex numerical data and analysis prove to be highly effective when making decisions
about the business’s future.
You can further break down the types of research design into categories which are:
1. Descriptive: In a descriptive design, a researcher is solely interested in describing the situation or case under their research study. It is a theory-based design method created by
gathering, analyzing, and presenting collected data. This allows a researcher to provide insights
into the why and how of research. Descriptive design helps others better understand the need for
the research. If the problem statement is not clear, you can conduct exploratory research.
2. Experimental: Experimental research establishes a relationship between the cause and effect
of a situation. It is a causal design where one observes the impact caused by the independent
variable on the dependent variable. For example, one monitors the influence of an independent
variable such as a price on a dependent variable such as customer satisfaction or brand loyalty. It
is an efficient research method as it contributes to solving a problem.
The independent variables are manipulated to monitor the change they produce in the dependent variable. Social sciences often use it to observe human behavior by analyzing two groups.
Researchers can have participants change their actions and study how the people around them
react to understand social psychology better.
3. Correlational research: Correlational research is a non-experimental research technique. It
helps researchers establish a relationship between two closely connected variables. No assumptions are made while evaluating the relationship between the two variables, and statistical analysis techniques calculate the relationship between them. This type of research requires two different
groups.
A correlation coefficient determines the correlation between two variables whose values range
between -1 and +1. If the correlation coefficient is towards +1, it indicates a positive relationship
between the variables, and -1 means a negative relationship between the two variables.
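To make the sign convention concrete, here is a minimal Python sketch (the two variables and all of their values are hypothetical, chosen only for illustration) that computes a Pearson correlation coefficient and reads off whether the relationship is positive or negative:

import math

def pearson_r(x, y):
    # Pearson correlation coefficient; the result always lies between -1 and +1
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical paired observations on two closely connected variables
hours_studied = [2, 4, 6, 8, 10]
exam_scores = [52, 60, 68, 74, 83]

r = pearson_r(hours_studied, exam_scores)
print(round(r, 3))  # a value near +1 indicates a positive relationship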
4. Diagnostic research: In diagnostic design, the researcher is looking to evaluate the underlying
cause of a specific topic or phenomenon. This method helps one learn more about the factors that
create troublesome situations.
5. Explanatory research: Explanatory design uses a researcher’s ideas and thoughts on a subject
to further explore their theories. The study explains unexplored aspects of a subject and details
the research questions’ what, how, and why.
Sample and Sampling Technique
Sampling is concerned with the selection of a subset of individuals from within a population to estimate the characteristics of the whole population; i.e., sampling is a process in statistical analysis where researchers take a predetermined number of observations from a larger population.
Sampling Techniques
The following points need to be considered in the selection of individuals:
a. Investigations may be carried out on an entire group or a representative taken out from the
group.
b. Whenever a sample is selected it should be a random sample.
c. While selecting the samples, the heterogeneity within the group should be kept in mind and a proper sampling technique should be applied.
Some common sample designs described in the literature include purposive sampling, random
sampling, and quota sampling (Cochran 1963, Rao 1985, Sudman 1976).
Random sampling can also be of different types.
Purposive Sampling
In this technique, sampling units are selected according to the purpose. Purposive sampling provides a biased estimate and is not statistically recognized. This technique can be used only for some specific purposes.
Random Sampling
In this method of sampling, each unit included in the sample has a certain pre-assigned chance of inclusion in the sample. This sampling provides a better estimate of parameters than purposive sampling. Every single individual in the sampling frame has a known and non-zero chance of being selected into the sample. It is the ideal and recognized single-stage random sampling.
Lottery Method of Sampling
There are several different ways to draw a simple random sample. The most common way is the
lottery method. Here, each member or item of the population at hand is assigned a unique
number. The numbers are then thoroughly mixed, as if they were put in a bowl or jar and shaken. Then, without looking, the researcher selects n numbers. The population members or items
that are assigned that number are then included in the sample.
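The lottery method can be mimicked with a computer's random number generator. The Python sketch below is a hypothetical illustration (a population of 20 members and a sample of 5): each member is assigned a unique number and n of them are drawn without replacement.

import random

population = list(range(1, 21))  # each member is assigned a unique number, 1 to 20
n = 5                            # number of members to draw

random.seed(42)                        # optional: makes the draw reproducible
sample = random.sample(population, n)  # draw n numbers without replacement
print(sample)  # the members holding these numbers form the sample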
By Using Random Number Table
Most statistics books and many research methods books contain a table of random numbers as a
part of the appendices. A random number table typically contains 10,000 random digits between
0 and 9 that are arranged in groups of 5 and given in rows. In the table, all digits are equally
probable and the probability of any given digit is unaffected by the digits that precede it.
Simple Random Sampling
In the simple random sampling method, each unit included in the sample has an equal chance of inclusion in the sample. This technique provides an unbiased and better estimate of the parameters if the population is homogeneous.
Stratified Random Sampling
Stratified random sampling is a useful method for data collection if the population is heterogeneous. In this method, the entire heterogeneous population is divided into a number of homogeneous groups, usually known as strata; each of these groups is homogeneous within itself, and units are then sampled at random from each stratum. The sample size in each stratum varies according to the relative importance of the stratum in the population. The technique of drawing this stratified sample is known as stratified sampling.
In other words, stratification is the technique by which the population is divided into subgroups or strata, and sampling is then conducted separately in each stratum. Strata or subgroups are chosen because evidence is available that they are related to the outcome, and the selection of strata will vary by area and local conditions. In a stratified sample, the sampling error depends on the population variance within the strata but not between the strata. Stratified random sampling can also be defined as follows: where the population embraces a number of distinct categories, the frame can be organized by these categories into separate "strata." Each stratum is then sampled as an independent sub-population, out of which individual elements can be randomly selected.
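As a hedged illustration of proportional stratified sampling, the Python sketch below (the strata names and sizes are invented) draws a separate random sample from each stratum, with each stratum's sample size proportional to its share of the population:

import random

# Hypothetical sampling frame already divided into homogeneous strata
strata = {
    "urban": ["urban_%d" % i for i in range(600)],
    "rural": ["rural_%d" % i for i in range(300)],
    "nomadic": ["nomadic_%d" % i for i in range(100)],
}
total_sample = 50
N = sum(len(units) for units in strata.values())  # whole population size

random.seed(1)
sample = []
for name, units in strata.items():
    n_stratum = round(total_sample * len(units) / N)  # proportional allocation
    sample.extend(random.sample(units, n_stratum))    # sample separately in each stratum

print(len(sample))  # 50 units in total: 30 urban, 15 rural, 5 nomadic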
Cluster Sampling
Cluster sampling is a sampling method where the entire population is divided into groups, or clusters, and a random sample of these clusters is selected. All observations in the selected clusters are included in the sample. Cluster sampling is a sampling technique used when
"natural" but relatively homogeneous groupings are evident in a statistical population. Cluster
sampling is generally used when the researcher cannot get a complete list of the units of a
population they wish to study but can get a complete list of groups or 'clusters' of the population.
This sampling method may well be more practical and economical than simple random sampling
or stratified sampling. Compared to simple random sampling and stratified sampling, cluster
sampling has advantages and disadvantages. For example, given equal sample sizes, cluster
sampling usually provides less precision than either simple random sampling or stratified
sampling. On the other hand, if contact costs between clusters are high, cluster sampling may be
more cost effective than the other methods.
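A minimal Python sketch of cluster sampling follows (the cluster and unit names are hypothetical): a random sample of clusters is drawn, and every observation inside the selected clusters enters the sample.

import random

# Hypothetical clusters (e.g. schools), each listing all of its units (e.g. pupils)
clusters = {
    "school_A": ["A1", "A2", "A3"],
    "school_B": ["B1", "B2"],
    "school_C": ["C1", "C2", "C3", "C4"],
    "school_D": ["D1", "D2", "D3"],
}

random.seed(7)
chosen = random.sample(list(clusters), 2)  # randomly select 2 whole clusters
sample = [unit for c in chosen for unit in clusters[c]]  # keep every unit inside them
print(chosen)
print(sample)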
Systematic Random Sampling
In this method of sampling, the first unit of the sample is selected at random and the subsequent units are selected in a systematic way. If there are N units in the population and n units are to be selected, then R = N/n (R is known as the sampling interval). The first number is selected at random from within the first sampling interval, and each subsequent unit is obtained by adding R (the sampling interval) to the previously selected number.
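The procedure can be sketched in a few lines of Python (the population of N = 100 units and the sample size n = 10 are invented for illustration):

import random

population = list(range(1, 101))  # N = 100 hypothetical units
n = 10                            # units to be selected
R = len(population) // n          # sampling interval R = N / n = 10

random.seed(3)
start = random.randrange(R)       # first unit chosen at random within the first interval
sample = population[start::R]     # every R-th unit after the random start
print(R, sample)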
Multistage Random Sampling
In multistage random sampling, units are selected at various stages. The sampling designs may be either the same or different at each stage. The multistage sampling technique is also referred to as cluster sampling; it involves the use of samples that are to some extent clustered. The principal advantage of this sampling technique is that it permits the available resources to be concentrated on a limited number of units of the frame, but in this sampling technique the sampling error will be increased.
Quota sampling
In quota sampling, the population is first segmented into mutually exclusive sub-groups, just as in stratified sampling. Then judgment is used to select the subjects or units from each segment based on a specified proportion. It is this second step which makes the technique one of non-probability sampling. In quota sampling, the selection of the sample is non-random: for example, interviewers might be tempted to interview those who look most helpful. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is its greatest weakness.
Spatial Sampling
Spatial sampling is an area of survey sampling associated with sampling in two or more
dimensions.
Independent Sampling
Independent samples are samples selected from the same population, or from different populations, which have no effect on one another; that is, no correlation exists between the samples.
Sample size determination
Sample size determination is the act of choosing the number of observations or replicates to
include in a statistical sample.
How do you determine the sample size?
Five steps to finding your sample size
1. Define population size or number of people.
2. Designate your margin of error.
3. Determine your confidence level.
4. Predict expected variance.
5. Finalize your sample size.
There are different methods used for sample size determination, which include:
1) Taro Yamane's formula, which is as follows:
n = N / (1 + N(e)^2)
In the formula above:
n is the required sample size from the population under study
N is the whole population that is under study
e is the precision or sampling error, which is usually 0.10, 0.05 or 0.01
Example:
Using Taro Yamane's statistical formula to determine the adequate sample size for a population of, say, 300 respondents under study, with e = 0.1, this would be:
n = N / (1 + N(e)^2)
N = 300; e = 0.1; e^2 = 0.01
n = 300 / (1 + 300(0.1)^2) = 300 / 4
n = 75.
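The worked example can be checked with a short Python sketch that applies the formula exactly as stated above (the function name is ours, chosen only for illustration):

def yamane_sample_size(N, e):
    # Taro Yamane's formula: n = N / (1 + N * e**2)
    return N / (1 + N * e ** 2)

print(yamane_sample_size(N=300, e=0.1))  # 75.0, so a sample of 75 respondents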
2. Slovin's Formula
Slovin's formula is used to calculate the sample size (n) given the population size (N) and a margin of error (e).
- It is a random sampling formula used to estimate a sample size.
- It is computed as n = N / (1 + Ne^2),
where:
n = number of samples
N = total population
e = margin of error
Note: There is practically no difference between Slovin's and Taro Yamane's formulas for calculating sample size.
3. Cochran’s Sample Size Formula
Cochran’s formula is considered especially appropriate in situations with large populations. A
sample of any given size provides more information about a smaller population than a larger one,
so there’s a ‘correction’ through which the number given by Cochran’s formula can be reduced if
the whole population is relatively small.
The Cochran formula is:
n0 = (Z^2 p q) / e^2
where:
e is the desired level of precision (i.e. the margin of error),
p is the (estimated) proportion of the population which has the attribute in question,
q = 1 - p, and
Z is the z-value, which is found in a Z table.
Cochran’s Formula Example
Suppose we are doing a study on the inhabitants of a large town, and want to find out how many
households serve breakfast in the mornings. We don’t have much information on the subject to
begin with, so we’re going to assume that half of the families serve breakfast: this gives us
maximum variability. So p = 0.5. Now let’s say we want 95% confidence, and at least 5
percent (plus or minus) precision. A 95% confidence level gives us a Z value of 1.96, per the normal tables, so we get:
n0 = ((1.96)^2 (0.5) (0.5)) / (0.05)^2 = 384.16 ≈ 385.
So a random sample of 385 households in our target population should be enough to give us the confidence level we need.
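The same calculation can be reproduced with a short Python sketch (the function name is ours; the z-value of 1.96 is still read from a normal table rather than computed):

import math

def cochran_sample_size(z, p, e):
    # Cochran's formula: n0 = (z**2 * p * q) / e**2, with q = 1 - p
    q = 1 - p
    return (z ** 2) * p * q / (e ** 2)

n0 = cochran_sample_size(z=1.96, p=0.5, e=0.05)
print(n0, math.ceil(n0))  # 384.16, rounded up to 385 households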
Instrumentation in research
Instrumentation is the process of constructing research instruments that can be used appropriately to gather data for the study.
VALIDITY OF TEST ITEM
Validity refers to how accurately a method measures what it is intended to measure. If research has high validity, that means it produces results that correspond to real properties, characteristics, and variations in the physical or social world.
High reliability is one indicator that a measurement is valid. If a method is not reliable, it probably is not valid.
Test Validity
Test validity is an indicator of how much meaning can be placed upon a set of test results.
1. Criterion Validity: This assesses whether a test reflects a certain set of abilities.
i) Concurrent validity measures the test against a benchmark test, and high correlation indicates that the test has strong criterion validity.
ii) Predictive validity is a measure of how well a test predicts abilities. It involves testing a group of subjects for a certain construct and then comparing them with results obtained at some point in the future.
2. Content Validity: This is the estimate of how much a measure represents every single
element of a construct.
3. Construct Validity: defines how well a test or experiment measures up to its claims. A
test designed to measure depression must only measure that particular construct, not
closely related ideals such as anxiety or stress.
i) Convergent validity tests that constructs that are expected to be related are, in fact, related.
ii) Discriminant validity tests that constructs that should have no relationship do, in fact, not have any relationship. (Also referred to as divergent validity.)
4.) Face Validity: This is a measure of how representative a research project is ‘at face value,'
and whether it appears to be a good project.
RELIABILITY IN RESEARCH.
Reliability refers to how consistently a method measures something. If the same result can be
consistently achieved by using the same methods under the same circumstances, the
measurement is considered reliable.
Reliability and validity are closely related, but they mean different things. A measurement can be
reliable without being valid. However, if a measurement is valid, it is usually also reliable.
Reliability, like validity, is a way of assessing the quality of the measurement procedure used to
collect data in a dissertation. In order for the results from a study to be considered valid, the
measurement procedure must first be reliable.
Types of Reliability
1. Test-retest reliability is a measure of reliability obtained by administering the same test
twice over a period of time to a group of individuals. The scores from Time 1 and Time
2 can then be correlated in order to evaluate the test for stability over time.
Example: A test designed to assess student learning in psychology could be given to a group of
students twice, with the second administration perhaps coming a week after the first. The
obtained correlation coefficient would indicate the stability of the scores.
2. Parallel forms reliability is a measure of reliability obtained by administering different
versions of an assessment tool (both versions must contain items that probe the same
construct, skill, knowledge base, etc.) to the same group of individuals. The scores from
the two versions can then be correlated in order to evaluate the consistency of results
across alternate versions.
Example: If you wanted to evaluate the reliability of a critical thinking assessment, you might
create a large set of items that all pertain to critical thinking and then randomly split the
questions up into two sets, which would represent the parallel forms.
3. Inter-rater reliability is a measure of reliability used to assess the degree to which
different judges or raters agree in their assessment decisions. Inter-rater reliability is
useful because human observers will not necessarily interpret answers the same way;
raters may disagree as to how well certain responses or material demonstrate knowledge
of the construct or skill being assessed.
Example: Inter-rater reliability might be employed when different judges are evaluating the
degree to which art portfolios meet certain standards. Inter-rater reliability is especially useful
when judgments can be considered relatively subjective. Thus, the use of this type of reliability
would probably be more likely when evaluating artwork as opposed to math problems.
4. Internal consistency reliability is a measure of reliability used to evaluate the degree to
which different test items that probe the same construct produce similar results.
Average inter-item correlation is a subtype of internal consistency reliability. It is obtained by
taking all of the items on a test that probe the same construct (e.g., reading comprehension),
determining the correlation coefficient for each pair of items, and finally taking the average of all
of these correlation coefficients. This final step yields the average inter-item correlation.
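Example (a hypothetical Python sketch; the three items and the six examinees' scores are invented): every pair of items probing the same construct is correlated, and the coefficients are then averaged.

import math
from itertools import combinations

def pearson_r(x, y):
    # Pearson correlation coefficient between two lists of item scores
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# Hypothetical scores of six examinees on three items probing the same construct
items = {
    "item1": [4, 5, 3, 2, 5, 4],
    "item2": [3, 5, 4, 2, 4, 4],
    "item3": [4, 4, 3, 1, 5, 5],
}

# Correlate every pair of items, then average the coefficients
pair_rs = [pearson_r(items[a], items[b]) for a, b in combinations(items, 2)]
print(sum(pair_rs) / len(pair_rs))  # the average inter-item correlation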
5. Split-half reliability is another subtype of internal consistency reliability. The process of
obtaining split-half reliability is begun by “splitting in half” all items of a test that are
intended to probe the same area of knowledge (e.g., World War II) in order to form two
“sets” of items. The entire test is administered to a group of individuals, the total score
for each “set” is computed, and finally the split-half reliability is obtained by determining
the correlation between the two total “set” scores.
Note: Reliability alone is not sufficient; for a measurement to be useful, it also needs to be valid. For example, if your scale is off by 5 lbs, it reads your weight every day with an excess of 5 lbs. The scale is reliable because it consistently reports the same weight every day, but it is not valid because it adds 5 lbs to your true weight. It is not a valid measure of your weight.
Test-Retest Reliability coefficient formula
Test-retest reliability refers to the degree to which test results are consistent over time.
Note: In order to measure test-retest reliability, we must first give the same test to the same
individuals on two occasions and correlate the scores.
Consider the following hypothetical scenario: You give your students a vocabulary test on
February 26 and a retest on March 5. If there are no significant changes in your students'
abilities, a reliable test given at these two different times should yield similar results. To find the
test-retest reliability coefficient, we need to find out the correlation between the test and the
retest. In this case, we can use the formula for the correlation coefficient, such as Pearson's
correlation coefficient:
r = (N Σxy − (Σx)(Σy)) / sqrt([N Σx^2 − (Σx)^2][N Σy^2 − (Σy)^2])
where N is the total number of pairs of test and retest scores.
For example, if 50 students took the test and retest, then N would be 50. The Greek symbol sigma (Σ) means "the sum of". Σxy means we multiply each test score x by its paired retest score y and then sum these products, while (Σx)(Σy) means we sum all 50 test scores (x) and multiply that total by the sum of the retest scores (y).
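As an illustration, the Python sketch below applies the raw-score formula above to invented test and retest scores for five students:

import math

def pearson_r(x, y):
    # Raw-score form of Pearson's correlation coefficient, as in the formula above
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(a * b for a, b in zip(x, y))
    sum_x2 = sum(a ** 2 for a in x)
    sum_y2 = sum(b ** 2 for b in y)
    numerator = n * sum_xy - sum_x * sum_y
    denominator = math.sqrt((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
    return numerator / denominator

test = [18, 15, 12, 20, 16]    # hypothetical vocabulary test scores (February 26)
retest = [17, 14, 13, 19, 17]  # hypothetical retest scores (March 5)
print(round(pearson_r(test, retest), 2))  # the test-retest reliability coefficient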
Split-half reliability
This is a statistical method used to measure the consistency of the scores of a test. Split-half reliability is typically calculated in the following steps:
I. Divide whatever test you are analyzing into two halves and score them separately (usually the odd-numbered items are scored separately from the even-numbered items).
II. Calculate a Pearson product-moment correlation coefficient between the students' scores on the even-numbered items and their scores on the odd-numbered items. The resulting coefficient is an estimate of the half-test reliability of your test (i.e., the reliability of the odd-numbered items, or the even-numbered items, but not both combined).
III. Apply the Spearman-Brown prophecy formula to adjust the half-test reliability to full-test reliability. We know that, all other factors being held constant, a longer test will probably be more reliable than a shorter test. The Spearman-Brown prophecy formula was developed to estimate the change in reliability for different numbers of items. The Spearman-Brown formula that is often applied in the split-half adjustment is as follows:
full-test reliability = (2 × half-test reliability) / (1 + half-test reliability)
For example, if the half-test correlation (for a 30-item test) between the 15 odd-numbered and 15 even-numbered items on a test turned out to be .50, the full-test (30-item) reliability would be 0.67, as follows: (2 × 0.50) / (1 + 0.50) = 1.00 / 1.50 ≈ 0.67.
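Both steps can be sketched in Python (the per-student odd-item and even-item totals are invented; the final line reproduces the worked example above):

import math

def pearson_r(x, y):
    # Pearson correlation between the two half-test score totals
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

def spearman_brown(r_half):
    # Adjust a half-test correlation to an estimated full-test reliability
    return (2 * r_half) / (1 + r_half)

# Hypothetical per-student totals on the odd-numbered and even-numbered halves
odd_totals = [12, 9, 14, 7, 11, 13]
even_totals = [11, 10, 13, 8, 10, 12]

r_half = pearson_r(odd_totals, even_totals)
print(round(r_half, 2), round(spearman_brown(r_half), 2))
print(round(spearman_brown(0.50), 2))  # the worked example above: 0.67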
Ethical consideration
1. Principle of beneficence
2. Benefit
3. Principle of non-maleficence
4. Freedom from harm
5. Freedom from exploitation
6. Principle of respect for human dignity
7. Right to self-determination or autonomy
8. Informed consent
9. Right to full disclosure
10. Principle of justice
11. Right to fair treatment
12. Right to privacy