Test of Reliability of Research Questionnaire

VALIDITY AND RELIABILITY OF RESEARCH QUESTIONNAIRE

VALIDITY VS RELIABILITY
Validity expresses the degree to which a measurement measures what it intends to measure. Validity can refer to any of the following:
• Face validity
• Construct validity
• Content validity
• Criterion validity

Face Validity
• Also called logical validity or surface validity, as it is a subjective assessment of whether the measurement procedure is a valid measure of a given variable or construct.
• This is the most commonly used form of validity in research because it is the easiest to apply, but most experts consider it the weakest form of validity.
• It is superficial in nature because the instrument merely appears to measure the construct it is believed to measure.

Construct Validity
• Does the tool measure what we intend, or are interested, to measure? That is, are the constructs in the questionnaire attributes of the variables we intend to measure?
• To achieve construct validity, ensure that your indicators and measurements are carefully developed based on relevant existing knowledge.
For example, if we want to measure job satisfaction, which cannot be measured directly, the literature suggests that wages, benefits, working conditions, etc. are indicators that we can measure.

Content Validity
• Evaluates whether the questionnaire, that is, all of its items/questions, reflects all aspects of the construct: are all the items representative of the indicators intended to measure our variable of interest?
For example, suppose job satisfaction has 4 indicators and each indicator has 5 questions, for a total of 20 questions. Are the questions under each indicator relevant, or are there missing items that threaten validity?

Criterion Validity
• Evaluates how the result of the test is associated with the results of other tests.
• A method of test validation that examines the extent to which scores on an inventory or scale correlate with external, non-test criteria (Cohen & Swerdlik, 2005).
• To evaluate criterion validity, calculate the correlation between the results of your measurement and the results of the criterion measurement. A high correlation is a good indication that your test is measuring what it intends to measure (see the sketch below).
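As an illustration, a minimal sketch of this correlation check in Python; the questionnaire scores and the external criterion here are invented for illustration, not data from this presentation.

```python
# Hypothetical example: correlate questionnaire scores with an external,
# non-test criterion (e.g., supervisor ratings). All numbers are made up.
from scipy.stats import pearsonr

questionnaire_scores = [72, 85, 64, 90, 78, 69, 88, 75]  # our instrument
criterion_scores     = [70, 82, 60, 93, 75, 65, 85, 77]  # external criterion

r, p_value = pearsonr(questionnaire_scores, criterion_scores)
print(f"Criterion validity: r = {r:.2f} (p = {p_value:.3f})")
```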



RELIABILITY
The consistency of your measurement, or the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects. A measure is considered reliable if a person's score on the same test given twice is similar, or, shall we say, highly correlated.
Reliability contributes to the validity of the questionnaire but is not sufficient to indicate validity; that is, we cannot say that a reliable test is necessarily valid.
How to estimate reliability?

1. Test-Retest Reliability (Stability)
• This involves administering the questionnaire to a group of respondents and repeating the administration after some period of time with the same group.
• We test the correlation between the results of the first and second administrations; a high correlation indicates high reliability (a sketch follows below).
• Cohen's Kappa – categorical data
• Correlation r – continuous data
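A minimal sketch of both checks in Python, assuming the same respondents answered twice; the responses below are invented for illustration.

```python
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Continuous scores: correlate the first and second administrations.
first_admin  = [34, 28, 41, 37, 30, 45, 39]
second_admin = [33, 30, 40, 38, 28, 44, 41]
r, _ = pearsonr(first_admin, second_admin)
print(f"Test-retest correlation r = {r:.2f}")

# Categorical responses: Cohen's kappa between the two administrations.
first_cat  = ["yes", "no", "yes", "yes", "no", "yes"]
second_cat = ["yes", "no", "yes", "no", "no", "yes"]
print(f"Cohen's kappa = {cohen_kappa_score(first_cat, second_cat):.2f}")
```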

2. Internal Consistency
• Estimates reliability by grouping questions in a questionnaire that measure the same concept.
• It measures the extent to which the questions in the survey all measure the same underlying construct.
• Internal consistency is usually estimated using the split-half reliability index (Spearman-Brown formula), the coefficient alpha index (Cronbach's alpha), McDonald's omega, or sometimes the Kuder-Richardson 20 (KR-20); a sketch of the first two follows below.
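A minimal sketch of Cronbach's alpha and split-half (Spearman-Brown) reliability computed from scratch with NumPy; the 5-item response matrix is invented for illustration.

```python
import numpy as np

# Rows = respondents, columns = items intended to measure the same construct.
X = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 3, 4, 4],
    [3, 2, 3, 3, 3],
])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)
k = X.shape[1]
item_vars = X.var(axis=0, ddof=1)
total_var = X.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")

# Split-half: correlate odd vs. even items, then step up with Spearman-Brown.
odd_half  = X[:, 0::2].sum(axis=1)
even_half = X[:, 1::2].sum(axis=1)
r = np.corrcoef(odd_half, even_half)[0, 1]
print(f"Split-half (Spearman-Brown) reliability = {2 * r / (1 + r):.2f}")
```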

3. Parallel-Form Reliability (Equivalence)
• Involves developing two equivalent, parallel forms of the survey, say form A and form B, both measuring the same underlying construct but with different questions in each. Respondents are asked to complete both surveys: some take form A followed by form B, others take form B first and then form A. The correlation between scores on the two forms estimates reliability (see the sketch below).
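A minimal sketch, assuming each respondent completed both forms with the order counterbalanced; the scores are invented for illustration.

```python
from scipy.stats import pearsonr

# Total scores of the same respondents on the two parallel forms.
form_a = [55, 48, 62, 51, 59, 44, 66]
form_b = [53, 50, 60, 49, 61, 46, 64]

r, _ = pearsonr(form_a, form_b)
print(f"Parallel-form (equivalence) reliability r = {r:.2f}")
```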
Interpreting Reliability Statistics
Cohen's Kappa – interrater reliability

Value of Kappa   Level of Agreement   % of Data that are Reliable
0–.20            None                 0–4%
.21–.39          Minimal              4–15%
.40–.59          Weak                 15–35%
.60–.79          Moderate             35–63%
.80–.90          Strong               64–81%
Above .90        Almost Perfect       82–100%
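For reference, Cohen's kappa corrects the observed agreement between two raters for the agreement expected by chance; this is the standard definition, with notation chosen here rather than taken from the slides:

```latex
% p_o = observed proportion of agreement, p_e = agreement expected by chance
\kappa = \frac{p_o - p_e}{1 - p_e}
```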
Interpreting Reliability Statistics
• Guttman λ-2 coefficient – variance of true scores
• Spearman-Brown coefficient
• McDonald's ω
• Cronbach's alpha
A value ≥ 0.7 on these coefficients is conventionally taken to indicate acceptable reliability.
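For reference, the standard textbook forms of the split-half and coefficient-alpha indices listed above (notation chosen here, not from the slides):

```latex
% Spearman-Brown step-up of a split-half correlation r_half
r_{SB} = \frac{2\, r_{\text{half}}}{1 + r_{\text{half}}}

% Cronbach's alpha: k items, item variances s_i^2, total-score variance s_T^2
\alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} s_i^2}{s_T^2}\right)
```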
REFERENCES
Altares, P. (2012). Elementary Statistics with Computer Applications (2nd ed.). Manila, PH: Rex Bookstore.
Anderson, D. R., & Sweeney, D. J. (2018). Statistics for Business and Economics. Boston, MA: Cengage Learning.
Bluman, A. (2013). Elementary Statistics (6th ed., Vol. 1). Singapore: McGraw-Hill Education.
Cuesta, H. (2016). Practical Data Analysis. Birmingham: Packt Publishing.
Dando, P. (2014). Say It with Data: A Concise Guide to Making Your Case and Getting Results (ALA ed.). Chicago.
https://select-statistics.co.uk/blog/assessing-questionnaire-reliability/
https://www.npmj.org/article.asp?issn=1117-1936;year=2015
http://dissertation.laerd.com/reliability-in-research.php
http://dissertation.laerd.com/internal-validity.php
THANK YOU
&
GOD BLESS