
Research Methodology - III

1
EDUCATIONAL RESEARCH
Unit Structure
1.0 Objectives
1.1 Introduction
1.2 Sources of Acquiring Knowledge
1.3 Meaning, Steps and Scope of Educational Research
1.4 Scientific Method, aims and characteristics of research as a scientific activity
1.5 Ethical considerations in Educational Research
1.6 Paradigms of Educational Research
1.7 Types of Research
1.7.a Fundamental Research
1.7.b Applied Research
1.7.c Action Research
1.0 OBJECTIVES :
After reading this unit, you will be able to:
To explain the concept of Educational Research
To describe the scope of Educational Research
To state the purpose of Educational Research
To explain what scientific enquiry is
To explain the importance of theory development
To explain the relationship among science, education and educational research
To identify fundamental research
To identify applied research
To identify action research
To differentiate between fundamental, applied, and action research
To identify different paradigms of research
1.1 INTRODUCTION :
Research purifies human life. It improves its quality. It is a search for knowledge. It shows how to solve any problem scientifically. It is a careful enquiry in search of any kind of knowledge. It is a journey from the known to the unknown. It is a systematic effort to gain new knowledge in any discipline. When it seeks a solution to any educational problem, it leads to educational research.
Curiosity and inquisitiveness are natural gifts possessed by man. They inspire him to quest and increase his thirst for knowledge and truth. After trial and error, he works systematically in the direction of the desired goal. His adjustment to and coping with the situation make him successful in his task. Thereby he learns something, becomes wise and prepares his own scientific procedure while performing the same task a second time. So is there any relationship among science, education and educational research?
"Research is the voyage of discovery". It is the quest for answers to unsolved problems.
Research is required in any field to come up with new theories or to modify, accept or nullify existing theories. From time immemorial, so many discoveries and inventions have taken place through research, and the world has got so many new theories which help human beings to solve their problems. Graham Bell, Thomas Edison, J. C. Bose, John Dewey, Skinner and Piaget have given us theories which may lead to educational progress. Research needs expertise.
1.2 SOURCES OF ACQUIRING KNOWLEDGE :
Between the time we were born and the present day, each one of us has accumulated a body of knowledge. Curiosity, the desire to learn about one's environment and the desire to improve one's life through problem-solving are natural to all human beings. For this purpose, human beings depend on several methods / sources of acquiring knowledge, as follows:
1. Learned Authority : Human beings refer to an authority such as
a teacher, a parent or the boss or an expert or consultant and seek
his / her advice. Such an authority may be based on knowledge or
experience or both. For example, if a child has difficulty in
learning a particular subject, he / she may consult a teacher.
Learned authority could also be a book / dictionary /
encyclopaedia / journal / web-site on internet.
2. Tradition : Human beings easily accept many of the traditions of
their culture or forefathers. For example, in matters of food,
dress, communications, religion, home remedies for minor
ailments, the way a friend will react to an invitation, one relies on
family traditions. On the other hand, students, in case of
admission criteria and procedures, examination patterns and
procedures, methods of maintaining discipline, co-curricular
activities, acceptable manner of greeting teachers and peers rely
on school traditions. Long established customs or practices are
popular sources of acquiring knowledge. This is also known as
tenacity which implies holding on to a perspective without any
consideration of alternatives.
3. Experience : Our own prior personal experiences in matters of problem-solving or understanding educational phenomena are the most common, familiar and fundamental source of knowledge.
4. Scientific Method : In order to comprehend and accept learning
acquired through these sources, we use certain approaches which
are as follows:
(a) Empiricism : It implies relying on what our senses tell us.
Through a combination of hearing and seeing we come to
know the sound of a train. i.e. through these two senses, we
learn to associate specific sounds with specific objects. Our
senses also enable us to compare objects / phenomena /
events. They provide us with the means for studying and
understanding relationships between various concepts (eg.
level of education and income).
(b) Rationalism : It includes mental reflection. It places emphasis on ideas rather than material substances. If we see logical interconnectedness between two or more things, we accept those things. For example, we may reason that a conducive school / college environment is expected to lead to better teacher performance.
(c) Fideism : It implies the use of our beliefs, emotions or gut reactions, including religion. We believe in God because our parents told us, though we have not sensed God, seen or heard him, nor concluded that his existence is logically proved.
1.3 MEANING, STEPS AND SCOPE OF EDUCATIONAL
RESEARCH :
MEANING OF EDUCATIONAL RESEARCH :
Educational Research is nothing but the cleansing of the educational process. Many experts define Educational Research as under:
According to Mouly, "Educational Research is the systematic application of the scientific method for solving educational problems."
Travers thinks, "Educational Research is the activity for developing a science of behaviour in educational situations. It allows the educator to achieve his goals effectively."
According to Whitney, "Educational Research aims at finding out solutions to educational problems by using scientific philosophical methods."
Thus, Educational Research is to solve educational problems in a systematic and scientific manner; it is to understand, explain, predict and control human behaviour.
Educational Research is characterized as follows :
- It is highly purposeful.
- It deals with educational problems regarding students and teachers as well.
- It is a precise, objective, scientific and systematic process of investigation.
- It attempts to organize data quantitatively and qualitatively to arrive at statistical inferences.
- It discovers new facts in a new perspective, i.e. it generates new knowledge.
- It is based on some philosophic theory.
- It depends on the researcher's ability, ingenuity and experience for its interpretation and conclusions.
- It needs an interdisciplinary approach for solving educational problems.
- It demands subjective interpretation and deductive reasoning in some cases.
- It uses classrooms, schools, colleges and departments of education as the laboratory for conducting research.
STEPS OF RESEARCH :
The various steps involved in the research process can be
summarised as follows :
Step 1 : Identifying the Gap in Knowledge
The researcher, on the basis of experience and observation
realises that some students in the class do not perform well in the
examination. So he / she poses an unanswered question : "Which factors are associated with students' academic performance?"
Step 2 : Identifying the Antecedent / Causes
On the basis of experience, observation and a review of
related literature, he / she realises that students who are either very
anxious or not at all anxious do not perform well in the examination.
Thus he / she identifies anxiety as one of the factors that could be
associated with students' academic performance.
Step 3 : Stating the Goals
The researcher now states the goals of the study :
1. To ascertain the relationship of anxiety with academic
performance of students.
2. To ascertain the gender differences in the anxiety and
academic performance of students.
3. To ascertain the gender difference in the relationship of
anxiety with academic performance of students.
Step 4 : Formulating Hypotheses
The researcher may state his / her hypotheses as follows:
1. There is a significant relationship between anxiety and
academic performance of students.
2. There is a significant gender difference in the anxiety and
academic performance of students.
3. There is a significant gender difference in the relationship of
anxiety with academic performance of students.
Step 5 : Collecting Relevant Information
The researcher uses appropriate tools and techniques to
measure anxiety and academic performance of students, selects a
sample of students and collects data from them.
Step 6 : Testing the Hypotheses
He / she now uses appropriate statistical techniques to verify
and test the hypotheses of the study stated in Step 4.
Step 7 : Interpreting the Findings
He / she interprets the findings in terms of whether the
relationship between anxiety and academic performance is positive
or negative, linear or curvilinear. He / she finds that this relationship is curvilinear, i.e. when a student's anxiety is either very low or very high, his / her academic performance is found to be low. But when a student's anxiety is moderate, his / her academic performance is found to be high.
He / she now tries to explain this finding based on logic and
creativity.
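To make Steps 5 to 7 concrete, the short sketch below shows one way a researcher might check, with standard statistical tools, whether the anxiety-performance relationship is linear or curvilinear. It is only an illustrative sketch: the scores are invented for demonstration, and the quadratic fit is just one of several techniques a researcher could choose.

```python
# Illustrative sketch only: hypothetical anxiety and performance scores
# used to examine whether the relationship is linear or curvilinear.
import numpy as np
from scipy import stats

anxiety = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)
performance = np.array([45, 55, 65, 75, 80, 74, 66, 54, 44], dtype=float)

# Step 6: test the hypothesis of a linear relationship with Pearson's r.
r, p_value = stats.pearsonr(anxiety, performance)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")  # r near zero: little linear trend

# Step 7: fit a quadratic model; a negative coefficient on the squared term
# indicates an inverted-U (curvilinear) pattern, i.e. performance peaks
# at a moderate level of anxiety.
a, b, c = np.polyfit(anxiety, performance, deg=2)
print(f"quadratic fit: performance = {a:.3f}*anxiety^2 + {b:.2f}*anxiety + {c:.1f}")
```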
Step 8 : Comparing the Findings with Prior researchers’
Findings
At this step, the researcher tries to find out whether or not his / her conclusions match those of prior researchers. If not, then the researcher attempts to find out why the conclusions do not match those of other researchers by analysing prior studies further.
Step 9 : Modifying Theory
On the basis of steps 7 and 8, the researcher speculates that
anxiety alone cannot influence academic performance of students.
There could be a third factor which influences the relationship
between anxiety and academic performance of students. This third
factor could be the study habits of students. For instance, students who have a very low level of anxiety may have neglected their studies throughout the year and hence their academic performance is poor.
On the other hand, students who have very high level of anxiety may
not be able to remember what they have learnt or cannot concentrate
on studies due to stress or may fall sick very often and hence cannot
study properly. Hence their academic performance is poor. However,
students with a moderate level of anxiety are motivated enough to
study regularly and systematically all through the year and hence
their academic performance is high.
Thus, the loosely structured theory on students' academic
performance needs to incorporate one more variable, namely, study
habits of students. In other words, it needs to be modified.
Step 10 : Asking New Questions
Do study habits and anxiety interact with each other and influence the academic performance of students? That is, we can now start with a fresh topic of research involving three variables rather than two.
Check your Progress (1)
1. What is the aim of Educational Research?
2. Name the method which is mainly applicable in Educational Research.
3. Which approach is adopted in Educational Research?
4. Name the places which can act as a laboratory for conducting Educational Research.
SCOPE OF EDUCATIONAL RESEARCH :
The nature of Educational Research changes with the gradual development that occurs with respect to knowledge and technology, so Educational Research needs to extend its horizon. Being a scientific study of the educational process, it involves :
- individuals (students, teachers, educational managers, parents)
- institutions (schools, colleges, research institutes)
It discovers facts and relationships in order to make the educational process more effective. It relates to social sciences like education.
It includes processes like investigation, planning (design), collecting data, processing of data, their analysis, interpretation and drawing inferences.
It covers areas from formal education and nonformal education as well.
Check your Progress
1. Name the disciplines on which education depends.
2. How is education an art?
3. How is education a science?
4. Name the areas of Educational Research in addition to formal education.
5. Name the aspects in which Educational Research can seek improvement.
6. Name the human elements involved in Educational Research.
7. Name the institutions involved in Educational Research.
8. Name the essential qualities of Educational Researchers.
9. What is the goal of Educational Research with respect to new knowledge?
LET US SUM UP :
- Educational Research is the systematic application of the scientific method for solving educational problems, regarding students and teachers as well.
- Results of democratic education are slow and sometimes defective. So Educational Research is needed to solve educational problems.
- Educational Research involves individuals like teachers / students and educational institutions. It covers areas from formal education to nonformal education.
- Educational Research solves educational problems, purifies the educative process and generates new knowledge.
UNIT END EXERCISE :
(1) What is meant by Educational Research?
(2) What is the need of Educational Research?
(3) What is its scope?
(4) State the purpose of Educational Research.
Question-1
(i) to solve educational problems
(ii) scientific method
(iii) interdisciplinary approach
(iv) classroom, school, college, department of education.
Question-2
(v) Philosophy, Psychology, Sociology, History, Economics.
(vi) Since it imparts knowledge
(vii) It explains working of human mind/growth, educational
program
(viii) Nonformal education, educational technology.
(ix) Curriculum, textbooks, teaching methods.
(x) Teachers, Students, Educational managers, Parents.
(xi) School, college, research-institutes.
(xii) Updated knowledge, imagination, insight, scientific attitude.
(xiii) to generate new knowledge.
References / Suggested readings :
1) Best, J. W. and Kahn, J. V. (1995). Research in Education. New Delhi : Prentice Hall of India.
2) Kaul, L. (1984). Methodology of Educational Research. New Delhi : Vikas Publishing House.
1.4
SCIENTIFIC METHOD, AIMS AND
CHARACTERISTICS OF RESEARCH AS A
SCIENTIFIC ACTIVITY :
RELATIONSHIP AMONG SCIENCE, EDUCATION AND
EDUCATIONAL RESEARCH :
Science helps to find out the truth behind a phenomenon. It is an approach to the gathering of knowledge rather than mere subject matter. It has the following two main functions :
- to develop a theory.
- to deduce hypotheses from that theory.
A scientist uses an empirical approach for data collection and a rational approach for the development of theory.
Research shows a way to solve life-problems scientifically. It is a reliable tool for the progress of knowledge. Being systematic and methodical, it is treated as a science. It also helps to derive the truth behind knowledge. It offers methods of improving the quality of the process and the product as well. Ultimately, science and research go hand in hand to find a solution to the problem.
Since philosophy offers a sound basis to education, education is considered an art. However, scientific progress makes education incline towards a science rather than an art.
Science is characterized by precision and exactness; it hardly suffers from any variables. But education, as a social science, suffers from many variables and so moves away from exactness. Educational Research tries to make the educative process more scientific. But education suffers from multiple variables, so it cannot be as exact as the physical sciences. If a study is systematically designed to achieve educational goals, it will be an educational research. Let us summarise this discussion with Good's thought :
"If we wish wisdom, we must expect science. If we wish an increase in wisdom, we must expect research."
Knowledge is the educator's need. Curiosity and a thirst for search make him follow the scientific way wisely. Indirectly, he plays the role of an educational researcher. Ultimately he is able to solve the educational problem and generate new knowledge. All three aspects (science, education and educational research) have truth as a common basis. More or less, they need exactness and precision while solving a problem.
AIMS AND CHARACTERISTICS :
An enquiry is a natural technique for a search. But when it is used systematically and scientifically, it takes the form of a method. So scientific enquiry is also known as the scientific method.
Bacon's inductive method contributed to human knowledge. However, it is difficult to solve many problems by either the inductive or the deductive method alone. So Charles Darwin sought a happy blending of the inductive and deductive methods in his scientific method. In this method, the knowledge initially gained from previous knowledge, experience, reflective thinking and observation is unorganized. Later on it proceeds inductively from part to whole and from particular to general, and ultimately to a meaningful hypothesis. Thereafter, it proceeds deductively from whole to part, from general to particular, and from hypothesis to logical conclusion.
This method is different from methods of knowledge generation like trial and error, experience, authority and intuition. It parallels Dewey's reflective thinking, because the researcher himself is engrossed in reflective thinking while conducting research.
Scientific method follows five steps, as under :
Identification and definition of the problem : The researcher states the identified problem in such a manner that it can be solved through experimentation or observation.
Formulation of hypothesis : It allows the researcher to make an intelligent guess at the solution of the problem.
Implication of the hypothesis through deductive reasoning : Here, the researcher deduces the implications of the suggested hypothesis, which may be true.
Collection and analysis of evidence : The researcher is expected here to test the deduced implications of the hypothesis by collecting evidence related to them through experimentation and observation.
Verification of the hypothesis : Later on, the researcher verifies whether the evidence supports the hypothesis. If it does, the hypothesis is accepted; if it doesn't, the hypothesis is not accepted and is later modified if necessary.
A peculiar feature of this method is not to prove the hypothesis as an absolute truth but to conclude that the evidence does or doesn't support the hypothesis.
LET US SUM UP :
- Scientific enquiry / the scientific method is a happy blending of the inductive and deductive methods. Initially it proceeds from part to whole to state a meaningful hypothesis. Later on it proceeds from whole to part and from hypothesis to logical conclusion.
- A theory is a general explanation of a phenomenon. It can be refined and modified on the basis of factual knowledge. It is developed on the basis of the results of the scientific method.
Unit End Exercise
i) What are the steps of scientific enquiry / the scientific method?
ii) Explain how the scientific method relates to theory development.
iii) What is a theory? What are the principles to be considered while stating a theory?
iv) Explain the relationship among Science, Education and Educational Research.
Suggested Readings / References :
1) Best, J. W. and Kahn, J. V. (1995). Research in Education. New Delhi : Prentice Hall of India.
2) Ebel, R. L. (1969). Encyclopedia of Educational Research. London : The Macmillan Co.
3) Kaul, L. (1984). Methodology of Educational Research. New Delhi : Vikas Publishing House.
1.5 ETHICAL CONSIDERATIONS OF RESEARCH :
Research exerts a significant influence over educational
systems. Hence a researcher needs to adhere to an ethical code of
conduct. These ethical considerations are as follows :
While a researcher may have some obligations to his / her
client in case of sponsored research where the sponsoring
agency has given him / her financial aid for conducting the
research, he / she has obligations to the users, the larger
society, the subjects (sample / respondents) and professional
colleagues. He / she should not discard data that can lead to
unfavourable conclusions and interpretations for the sponsoring
agency.
The researcher should maintain strict confidentiality about the information obtained from the respondents. No information about the personal details of the respondents should be revealed in any of the records, reports or to other individuals without the respondents' permission.
The researcher should not make use of hidden cameras, microphones, tape-recorders or observers without the respondents' permission. Similarly, private correspondence should not be used without the concerned respondent's permission.
In an experimental study, when volunteers are used as subjects,
the researcher should explain the procedures completely (eg.
the experiment will go on for six months) along with the risks
involved and the demands that he / she would make upon the
participants of the study (such as the subjects will be required
to stay back for one hour after school hours etc.). If possible,
the subjects should be informed about the purpose of the
experiment / research. While dealing with school children
(minors) or mentally challenged students, parents' or guardians' consent should be obtained. This phenomenon is known as 'informed consent'.
The researcher should accept the fact that the subjects have the
freedom to decline to participate or to withdraw from the
experiment.
In order to ensure the subjects' inclusion and continuation in the experiment, the researcher should never try to make undue efforts such as giving favourable treatment after the experiment, more (additional) marks in a school subject, money and so on.
In an experimental research which may have a temporary or
permanent effect on the subjects, the researcher must take all
precautions to protect the subjects from mental and physical
harm, danger and stress.
The researcher should make his / her data available to peers for
scrutiny.
The respondents / subjects / participants should be provided
with the reasons for the experimental procedures as well as the
findings of the study if they so demand.
The researcher should give due credit to all those who have
helped him / her in the research procedure, tool construction,
data collection, data analysis or preparation of the research
report.
If at all the researcher has made some promise to the
participants, it must be honoured and fulfilled.
1.6 PARADIGMS OF EDUCATIONAL RESEARCH :
The idea of social construction of rationality can be pursued
by considering Kuhn‘s idea of scientific paradigm. Thomas Kuhn,
himself a historian of science, contributed to a fruitful development
in the philosophy of science with his book "The Structure of Scientific Revolutions", published in 1962. It brought into focus two
streams of thinking about what could be regarded as ‗scientific‘, the
Aristotelian tradition with its teleological approach and the Galilean
with its causal and mechanistic approach. It introduced the concept
of ‗paradigm‘ into the philosophical debate.
Definition and Meaning of Paradigm of Research :
"Paradigm" derives from the Greek verb for "exhibiting side by side". In lexica it is given with the translations "example" or "table of changes in form and differences in form". Thus, paradigms are ways of organizing information so that fundamental, abstract relationships can be clearly understood.
The idea of paradigm directs attention to science as having
recognized patterns of commitments, questions, methods, and
procedures that underlie and give direction to scientific work. Kuhn
focuses upon the paradigmatic elements of research when he
suggests that science has emotional and political as well as cognitive
elements. We can distinguish the underlying assumptions of a
paradigm by viewing its discourse as having different layers of
abstractions. The layers exist simultaneously and are superimposed
upon one another.
The concept of paradigm provides a way to consider the
divergence in vision, custom, and tradition. It enables us to consider
science as having different sets of assumptions, commitments,
procedures and theories of social affairs.
A paradigm determines the criteria according to which one
selects and defines problems for inquiry and how one approaches
them theoretically and methodologically.
A paradigm could be regarded as a cultural, man-made object, reflecting the dominant notions about scientific behaviour in a particular scientific community, be it national or international, and at a particular point in time. Paradigms determine scientific approaches and procedures which stand out as exemplary to the new generation of scientists – as long as they do not oppose them.
A "revolution" in the world of scientific paradigms occurs when one or several researchers at a given time encounter anomalies or differences, for instance make observations which in a striking way do not fit the prevailing paradigm. Such anomalies can give rise
to a crisis after which the universe under study is perceived in an
entirely new light. Previous theories and facts become subject to
thorough rethinking and revaluation.
History of Paradigms of Research :
Educational research faces a particular problem, since education is not a well-defined, unitary discipline but a practical art. Research into educational problems is conducted by scholars with many disciplinary affiliations. Most of them have a background in psychology or other behavioural sciences, but quite a few of them have a humanistic background in philosophy and history. Thus, there cannot be any prevailing paradigm or 'normal science' in the very multifaceted field of educational research. However, when empirical research conducted by behavioural scientists, particularly in the Anglo-Saxon countries, in the 1960s and early 1970s began to be accused of dominating research with a positivist, quantitatively oriented paradigm that prevented other paradigms of a humanistic or dialectical nature from being employed, the accusations were directed at those with a behavioural science background.
During the twentieth century two main paradigms were employed in researching educational problems. One is modelled on the natural sciences, with an emphasis on empirical, quantifiable observations which lend themselves to analysis by means of mathematical tools. The task of research is to establish causal relationships, to explain. The other paradigm is derived from the humanities, with an emphasis on holistic and qualitative information and interpretive approaches.
The two paradigms in educational research developed historically as follows. By the mid-nineteenth century, Auguste Comte (1798-1857) had developed positivism in sociology and John Stuart Mill (1806-1873) empiricism in psychology. They came to serve as models and their prevailing paradigm was taken over by social scientists, particularly in the Anglo-Saxon countries. On the European Continent there was another tradition, stemming from German idealism and Hegelianism. The "Galilean" mechanistic conception became the dominant one, particularly with mathematical physics as the methodological ideal.
There are three strands for the other main paradigm in
educational research. According to the first strand, Wilhelm Dilthey
(1833-1911) maintained that the humanities had their own logic of
research and pointed out that the difference between natural sciences
and humanities was that the former tried to explain, whereas the
latter tried to understand the unique individual in his or her entire,
concrete setting.
The second strand was represented by the phenomenological
philosophy developed by Edmund Husserl in Germany. It
emphasized the importance of taking a widened perspective and of
trying to "get to the roots" of human activity. The third strand in the humanistic paradigm consists of the critical philosophy, which developed with a certain amount of neo-Marxism.
The paradigm determines how a problem is formulated and
methodologically handled. According to the traditional positivist conception, problems related, for example, to classroom behaviour should be investigated primarily in terms of the individual actor, either the pupils, who might be neurotic, or the teacher, who might be ill-prepared for his or her job. The other conception is to formulate the
problem in terms of the larger setting, that of the school, or rather
that of the society at large. By means of such mechanisms as testing,
observation and the like, one does not try to find out why the pupil
or the teacher deviates from the normal. Rather an attempt is made to
study the particular individual as a goal directed human being with
particular and unique motives.
Interdependence of the Paradigms :
One can distinguish between two main paradigms in educational research planning, with different bases of knowledge. On the one hand there is the functional-structural, objective-rational, goal-directed, manipulative, hierarchical, and technocratic approach. On the other hand, there is the interpretivist, humanistic, consensual, subjective, and collegial one.
The first approach is derived from classical positivism. The second one, more popular now, is partly derived from the critical theory of the Frankfurt school, particularly from Habermas's theory of communicative action. The first approach is "linear" and consists of straightforward rational action toward a preconceived problem. The second approach leaves room for reinterpretation and reshaping of the problem during the process of dialogue prior to action and even during action.
Keeves (1988) argues that the various research paradigms
employed in education, the empirical-positivist, the hermeneutic or
phenomenological, and the ethnographic-anthropological are
complementary to each other. He talks about the "unity of educational research," makes a distinction between paradigms and
approaches, and contends that there is, in the final analysis, only one
paradigm but many approaches.
For example, the teaching-learning process can be observed
and /or video recorded. The observations can be quantified and the
data analyzed by means of advanced statistical method. Content can
be studied in the light of national traditions, and the philosophy
underlying curriculum constructions. Both the teaching-learning
process and its outcomes can be studied in a comparative, cross-national perspective.
Depending upon the objective of a particular research project, emphasis is laid more on the one or on the other paradigm. Thus qualitative and quantitative paradigms are more often than not complementing each other. For example, it is not possible to arrive at any valid information about a school or national system concerning the level of competence achieved in, for instance, science by visiting a number of classrooms and thereby trying to collect impressions. Sample surveys like the ones conducted by the IEA (International Association for the Evaluation of Educational Achievement) would be an important tool. But such surveys are not of much use when it comes to accounting for the factors behind the differences between school systems. Here qualitative information of different kinds is required.
Policymakers, planners, and administrators want generalizations and rules which apply to a wide variety of institutions with children of rather diverse backgrounds. The policymaker and planner are more interested in the collectivity than in the individual child. They operate from the perspective of the whole system. The classroom practitioners, on the other hand, are not very much helped by generalizations which apply "on the whole" or "by and large", because they are concerned with the particular child here and now.
Need for contemporary approaches :
The behavioural sciences have equipped educational researchers with a store of research tools, such as observational methods and tests, which help them to systematize observations which would otherwise not have been considered in the more holistic and intuitive attempts to make, for instance, informal observations or to conduct personal interviews.
Those who turn to social science research in order to find the "best" pedagogy or the most "efficient" methods of teaching are in a way victims of traditional science which claimed to be able to arrive
at generalizations applicable in practically every context. But,
through critical philosophy researchers have become increasingly
aware that education does not take place in a social vacuum.
Educational researchers have also begun to realize that educational
practices are not independent of the cultural and social context in
which they operate. Nor are they neutral to educational policies.
Thus the two main paradigms are not exclusive, but complementary
to each other.
Check your progress :
Answer the following questions.
Q.1 Define the term 'paradigm'.
Q.2 List the two paradigms of research.
LET US SUM UP :
Fundamental research is basic research which is for the sake of knowledge. Applied research aims to solve an immediate practical problem. Action research seeks an effective way to solve a problem of the concerned area without being confined to a particular methodology / paradigm. A paradigm of research is a way to select, define and solve the problem methodically.
Unit End Exercise :
1. Why do we need to conduct fundamental research?
2. In which situations can applied research be conducted?
3. How is action research different from the other types?
4. What benefits can teachers get from action research?
5. How are the paradigms dependent on each other? Illustrate.
References :
Thomas H. Popkewitz, Paradigms and Ideologies of Research in
Education.
John W. Best and James V. Kahn (1996) Research in Education
L. R. Gay (1990) Research in Education
1.7 TYPES OF RESEARCH :
1.7.a FUNDAMENTAL RESEARCH :
It is a basic approach which is for the sake of knowledge. Fundamental research is usually carried on in a laboratory or other sterile environment, sometimes with animals. This type of research, which has no immediate or planned application, may later result in further research of an applied nature. Basic research involves the development of theory. It is not concerned with practical applicability and most closely resembles the laboratory conditions and controls usually associated with scientific research. It is concerned with establishing general principles of learning.
For example, much basic research has been conducted with animals to determine principles of reinforcement and their effect on learning; Skinner's experiments with rats and pigeons, for instance, gave the principles of operant conditioning and reinforcement.
According to Travers, basic research is designed to add to an
organized body of scientific knowledge and does not necessarily produce
results of immediate practical value. Basic research is primarily
concerned with the formulation of the theory or a contribution to the
existing body of knowledge. Its major aim is to obtain and use the
empirical data to formulate, expand or evaluate theory. This type of
research draws its pattern and spirit from the physical sciences. It
represents a rigorous and structured type of analysis. It employs careful
sampling procedures in order to extend the findings beyond the group or
situations and thus develops theories by discovering proved
generalizations or principles. The main aim of basic research is the
discovery of knowledge solely for the sake of knowledge.
Another system of classification is sometimes used for research dealing with these two types of questions. This classification is based on the goal or objective of the research. The first type of research, which has as its aim obtaining the empirical data that can be used to formulate, expand or evaluate theory, is called basic research. This type of study is not oriented in design or purpose towards the solution of practical problems.
Its essential aim is to expand the frontiers of knowledge
without regard to practical application. Of course, the findings may
eventually apply to practical problems that have social value.
For example, advances in the practice of medicine are
dependent upon basic research in biochemistry and microbiology.
Likewise, progress in educational practices has been related to
progress in the discovery of general laws through psychological,
educational, sociological research.
Check your progress – 1 :
Answer the following questions :
Q.1 What is fundamental research?
Q.2 Where can basic research be conducted?
1.7.b APPLIED RESEARCH :
The second type of research, which aims to solve an immediate practical problem, is referred to as applied research.
According to Travers, "applied research is undertaken to solve an immediate practical problem and the goal of adding to scientific knowledge is secondary."
It is research performed in relation to actual problems and
under the conditions in which they are found in practice. Through
applied research, educators are often able to solve their problems at
the appropriate level of complexity, that is, in the classroom teaching
learning situations. We may depend upon basic research for the
discovery of more general laws of learning, but much applied research is conducted in order to determine how these laws operate
in the classroom. This approach is essential if scientific changes in
teaching practice are to be effected. Unless educators undertake to
solve their own practical problems of this type no one else will. It
should be pointed out that applied research also uses the scientific
method of enquiry. We find that there is not always a sharp line of
demarcation between basic and applied research.
Certainly
applications are made from theory to help in the solution of practical
problems. We attempt to apply the theories of learning in the
classroom. On the other hand, basic research may depend upon the
findings of the applied research to complete its theoretical
formulations. A classroom learning experiment can throw some
light on the learning theory. Furthermore, observations in the
practical situations serve to test theories and may lead to the
formulation of new theories.
Most educational research studies are classified at the applied
end of the continuum; they are more concerned with "what" works best than with "why". For example, applied research tests the principles of reinforcement to determine their effectiveness in
improving learning (e.g. programmed instruction) and behaviour
(e.g. behaviour modification). Applied research has most of the
characteristics of fundamental research, including the use of
sampling techniques and the subsequent inferences about the target
population. Its purpose, however, is improving a product or a
process – testing theoretical concepts in actual problem situations.
Most educational research is applied research, for it attempts to
develop generalizations about teaching – learning processes and
instructional materials.
The applied researcher may also be employed at a university or research institute, or may be found in private industry or working for a government agency. In the field of education such a person might be employed by a curriculum publishing company, a state department of education, or a college of education at a university. Applied researchers are also found in settings in which the application or practitioner's role is primary. This is where teachers, clinical psychologists, school psychologists, social workers, physicians, civil engineers, managers, advertising specialists and so on are found. Many of these people receive training in doing research, and they use this knowledge for two purposes:
(1) To help practitioners understand, evaluate, and use the research produced by basic and applied researchers in their own fields, and
(2) To develop a systematic way of addressing the practical problems and questions that arise as they practice their professions.
For example, a teacher who notices that a segment of the
class is not adequately motivated in science might look at the
research literature on teaching science and then systematically try
some of the findings suggested by the research.
Some of the recent foci of applied educational research have been grading practices, collective bargaining for school personnel, curriculum content, instructional procedures, educational technology, and assessment of achievement. These topics have been investigated through applied research because the questions raised in these areas generally involve limited or no concrete theory that we can draw upon directly to aid in decision making.
Check your progress – 2 :
Answer the following questions :
Q.1 What do you mean by applied research?
Q.2 Where can applied research be conducted?
1.7.c ACTION RESEARCH :
Research designed to uncover effective ways of dealing with
problems in the real world can be referred to as action research.
This kind of research is not confined to a particular methodology or paradigm. Consider, for example, a study of the effectiveness of training teenage parents to care for their infants. The study was based on statistical and other evidence that infants of teenage mothers seemed to be exposed to more risks than other infants. The mothers and children were recruited for participation in the study while the children were still in the neonate period. Mothers were trained at home or in an infant nursery. A control group received no training. The mothers trained at home were visited at 2-week intervals over a 12-month period. Those trained in the nursery setting attended 3 days per week for 6 months, were paid the minimum wage, and assisted as staff in the centre. Results of the study suggested that the children of both groups of trained mothers benefited more in terms of their health and cognitive measures than did the control children. Generally, greater benefits were realized by the children of the mothers trained in the nursery than by the children of the mothers trained at home.
Thus the study shows that such research has direct application to real world problems. Second, elements of both quantitative and qualitative approaches can be found in the study. For example, quantitative measures of weight, height, and cognitive skills were obtained in this study. However, even at the start, from personal impressions and observations and without the benefit of systematic quantitative data, the researchers were able to say that the mothers in the nursery centre showed some unexpected vocational aspirations to become nurses. Third, the treatments and methods that are investigated are flexible and might change during the study in response to the results as they are obtained. Thus, action research is more systematic and empirical than some other approaches to innovation and change, but it does not lead to carefully controlled scientific experiments that are generalizable to a wide variety of situations and settings.
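If the outcome measures in an action research study like this one are quantitative (for example, infant cognitive scores), the comparison between a trained group and the control group can be made with a simple significance test. The sketch below is purely illustrative; the scores are invented and do not come from the study described above.

```python
# Illustrative sketch only: comparing hypothetical cognitive scores of
# infants of trained mothers with those of a control group.
import numpy as np
from scipy import stats

trained = np.array([102, 98, 110, 105, 99, 108, 104, 101], dtype=float)
control = np.array([95, 92, 100, 97, 94, 96, 93, 98], dtype=float)

# Welch's independent-samples t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(trained, control, equal_var=False)
print(f"mean difference = {trained.mean() - control.mean():.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```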
The purpose of action research is to solve classroom problems
through the application of scientific methods. It is concerned with a
local problem and is conducted in a local setting. It is not concerned
with whether the results are generalizable to any other setting and is
not characterized by the same kind of control evident in other categories of research. The primary goal of action research is the solution of a given problem, not a contribution to science. Whether the research is conducted in one classroom or many classrooms, the teacher is very much a part of the process. The more research training the teachers involved have had, the more likely it is that the research will produce valid, if not generalizable, results.
The value of action research is confined primarily to those who are conducting it. Despite its shortcomings, it does represent a scientific approach to problem solving that is considerably better than change based on the alleged effectiveness of untried procedures, and infinitely better than no change at all. It is a means by which concerned school personnel can attempt to improve the educational process, at least within their environment. Of course, the value of action research to true scientific progress is limited. True progress requires the development of sound theories having implications for many classrooms, not just one or two. One sound theory that includes ten principles of learning may eliminate the need for hundreds of would-be action research studies. Given the current status of educational theory, however, action research provides immediate answers to problems that cannot wait for theoretical solutions.
As John Best puts it, action research is focused on immediate applications. Its purpose is to improve school practices and, at the same time, to improve those who try to improve the practices; to combine research processes, habits of thinking, the ability to work harmoniously with others, and professional spirit.
If most classroom teachers are to be involved in research activity, it will probably be in the area of action research. Many observers have regarded action research as nothing more than the application of common sense or good management. Whether or not it is worthy of the term research, it does apply scientific thinking and methods to real-life problems and represents a great improvement over teachers' subjective judgments and decisions based upon stereotyped thinking and limited personal experience.
The concept of action research under the leadership of Corey
has been instrumental in bringing educational research nearer to
educational practitioners. Action research is research undertaken by
practitioners in order that they may attempt to solve their local,
practical problems by using the method of science.
Check your progress – 3
Answer the following questions :
Q.1 What do you mean by action research?
Q.2 What benefits can teachers get from action research?

2
RESEARCH DESIGN
Unit Structure:
2.0 Objectives
2.1 (A) Meaning, definition, purpose and components of research design
2.2 (B) Difference between the terms research method and research methodology
2.3 (C) Research proposal: Its meaning and need
A. Identification of research topic: sources and need
B. Review of related Literature
C. Rationale and need for the study
D. Definition of Terms
E. Variables
F. Research questions, objectives and hypotheses
G. Assumptions if any
H. Scope, limitations and delimitations
I. Method, sample and tools
J. Significance of study
K. Technique for data analysis
L. Bibliography
M. Time frame
N. Budget
O. Chapterisation
2.0 OBJECTIVES :
On completion of this unit, you will be able to :
1) State the meaning of research design
2) Describe the purpose of research design
3) Distinguish between research method and research methodology
4) Discuss the purposes of a research proposal
5) List down the various components of a research proposal
6) Prepare a write up for a research proposal for a given topic.
2.1 (A) MEANING, DEFINITION, PURPOSE AND
COMPONENTS OF RESEARCH DESIGN
Meaning of Research Design : Before starting a research study, the investigator will look for a problem; he will read books, journals, research reports and other related literature. Based on this, he will finalise the topic for research. During this process, he will be in close contact with his guide. As soon as the topic is decided, the first task is to decide about the design.
Research design is a blueprint or structure within which research is conducted. It constitutes the blueprint for the collection, measurement and analysis of data. According to Gay and Airasian (2000), "A design is a general strategy for conducting a research study. The nature of the hypothesis, the variables involved, and the constraints of the 'real world' all contribute to the selection of design." Kothari (1988) says, "Decisions regarding WHAT, WHERE, WHEN, HOW MUCH, and by WHAT MEANS concerning an inquiry or a research study constitute a research design."
Thus, it can be said that research design is an outline of what the researcher will do, from the writing of objectives, hypotheses and their operational implications to the final analysis of data. Research design should be able to convey the following :
What is the study about?
Where will the study be carried out?
What type of data is necessary?
Where is the necessary data available?
How much time is needed to complete the study?
What will be the sampling design?
Which tools will be identified to collect data?
How will the data be analysed?
Depending upon the type of research, the structure of the design may vary. Suppose one is conducting experimental research; then identification of variables, control of variables, type of experimental design etc. should be discussed properly. If someone is conducting qualitative research, then one should stress understanding of the setting, nature of data, a holistic approach, selection of participants, and inductive data analysis. Thus, according to the nature and type of study, the components of the design will be decided.
In short, any efficient research design will help the researcher
to carry out the study in a systematic way.
PURPOSE OF RESEARCH DESIGN :
A research design helps the investigator to obtain answers to the research problem and the issues involved in the research, since it is the outline of the entire research process.
The design also tells us how to collect data, what observations are to be carried out, how to make them, and how to analyse the data.
The design also guides the investigator about the statistical techniques to be used for analysis.
The design also guides the control of certain variables in experimental research.
Thus, the design guides the investigator to carry out the research step by step in an efficient way. The design section is said to be complete / adequate if the investigator can carry out his research by following the steps described in the design.
2.2 (B) DIFFERENCE BETWEEN THE TERMS
RESEARCH METHOD AND RESEARCH
METHODOLOGY :
While preparing the design of the study, it is necessary to
think of research method. It is simply the method for conducting
research. Generally, such methods are divided into quantitative and
qualitative methods. Such quantitative methods include descriptive
research, evaluation research and assessment research. Assessment
type of studies include surveys, public opinion polls, assessment of
educational achievement. Evaluation studies include school surveys,
follow up studies. Descriptive research studies are concerned with
analysis of the relationships between non-manipulated variables. Apart from these quantitative methods, educational research also includes experimental and quasi-experimental research, survey research and causal-comparative research.
Qualitative research methods include ethnography,
phenomenology, ethnomethodology, narrative research, grounded
theory, symbolic interaction and case study.
Thus, the researcher should mention the method of research used in his research with proper justification for its use.
The term 'methodology' seems to be broader, in the sense that it includes the nature of the population, selection of the sample, selection / preparation of tools, collection of data and how the data will be analysed. Here the method of research is also included.
2.3 (C) RESEARCH PROPOSAL : ITS MEANING AND
NEED :
Preparing the research proposal is an important step because at this stage the entire research project gets a concrete shape. The researcher's insight and inspiration are translated into a step by step plan for discovering new knowledge.
A proposal is more than a research design; the research design is a subset of the proposal. Ordinarily a research design will not talk much about the theoretical framework of the study. It will also be silent about the review of related studies. A strong rationale for conducting the research is also not part of the research design. At the stage of writing the proposal, the entire research work shapes into concrete form. In the proposal, the researcher demonstrates that he is familiar with what he is doing.
Following are a few purposes of a research proposal :
The proposal is like the blue print which the architect designs
before construction of a house. It conveys the plan of entire
research work along with justification of conducting the same.
The proposal is to be presented to a funding agency or a departmental research committee. Presentation of the research proposal before the committee is now compulsory as per the U.G.C. guidelines of July 2009. In such a committee, a number of experts participate and suggest important points to help and guide the researcher. In fact, this is a very constructive activity. In C.A.S.E., a research proposal is presented on three occasions: first, in the researcher's forum on Saturday; second, in the Tuesday seminar; and finally before the committee consisting of the Dean, Head, Guide and other experts. Such fruitful discussion helps in resolving many issues. When such a presentation is there, it always brings seriousness on the part of the researcher and the guide also. During such presentations, the strengths and limitations of the proposal come out. The funding agency also provides funds based on the strength and quality of the proposal.
The research proposal serves as a plan of action. It conveys to the researcher and others how the study will be conducted. There is an indication of the time schedule and budget estimates in the proposal, which guides the researcher to complete the task in time within the sanctioned budget.
The proposal approved by the committee serves as a bond of agreement between the researcher and the guide. The entire proposal becomes a mirror for both to execute the study further.
Thus, a research proposal serves mainly the following purposes :
(i) It communicates the researcher's plan to all others interested.
(ii) It serves as a plan of action.
(iii) It is an agreement between the researcher and the guide.
(iv) Its presentation before experts provides further rethinking on the entire work.
The following components are generally included in a research proposal. It is not necessary to follow this list rigidly; it should provide a useful outline for writing any research proposal.
Normally, a research proposal begins with an Introduction, which clearly gives the background or history of the problem selected. Some also call this a theoretical / conceptual framework. This will include various theories / concepts related to the problem selected. The theoretical framework should have a logical sequence. Suppose the researcher wants to study the achievement of class IX students in mathematics in a particular area; then the conceptual frame may include :
Objectives of teaching mathematics and its purpose at the secondary school level
Importance of achievement in mathematics
Level of achievement as studied by other researchers
Factors affecting achievement in mathematics
Views of various commissions and committees on achievement in mathematics.
All these points can be put into a logical sequence. Whenever needed, theoretical support should be given. This is an important step in the research proposal. Generally any proposal begins with this type of introduction.
A. Identification of Research Topic : Sources and Need :
As discussed earlier, the researcher will spell out how the problem emerged, its social and educational context and its importance to the field. Some researchers name this caption the background of the study or the theoretical / conceptual framework of the study. In short, here the entire topic of the research is briefly introduced along with related concepts and theories in the field.
B. Review of Related Literature :
In this section, one presents what is so far known about the problem under investigation. Generally the theoretical / conceptual framework is already reported in the earlier section. In this section the researcher concentrates on studies conducted in the area of interest. Here, a researcher will locate various studies conducted in his area of interest and try to justify that all such located studies are 'related' to his work. For locating such studies, one will refer to the following documents / sources :
Surveys of research in education (edited earlier by Prof. M. B. Buch and later on by NCERT, New Delhi)
Ph.D. theses available in various libraries
Current Index to Journals in Education (CIJE)
Dissertation Abstracts International (DAI)
Educational Resources Information Centre (ERIC) by the U.S. Office of Education
Various national / international journals, Internet resources
(For detail see Ary, D., Jacobs, L. C., and Razavieh, A. (1972). Introduction to Research in Education. N.Y. : Holt, Rinehart and Winston, Inc., pp. 55 – 70)
In the research proposal, the review of studies conducted earlier is reported briefly. There are two ways of reporting the same. One way could be that all such related studies are reported chronologically in brief, indicating purpose, sample, tools and major findings. Of course, this will increase the volume of the research proposal. Second, studies with similar trends could be put together and the important trend/s highlighted. This is a bit difficult, but innovative. Normally, in the review the surname of the author and the year in brackets are mentioned. There is also a trend to report studies conducted in other countries separately. It is left to the guide and researcher whether such a separate caption is necessary or not.
At the end of the review, in the research proposal, there should be a conclusion. (Of course, a separate caption like 'conclusion' may be avoided.)
Here, the researcher shares the insights he has gained from the review. Also, on the basis of the review, he will justify the need for conducting the present study. The researcher should conclude with the following points:
What has been done so far in this area?
Where? (area wise)
When? (year wise)
How? (methodology wise)
What needs to be done?
Thus, the researcher will identify the 'Research Gap'.
C. Rationale and Need of the Study :
The rationale should answer the question 'why' this study is conducted. If 'why' is answered properly, then the rationale is a strong one. For a strong rationale, the earlier section of the review will be of much help. The identified research gaps will convey why this study is conducted. Suppose the investigator wants to study the following problem: 'Development and Try-out of CAI in Teaching of Science for Class VIII in Mumbai'.
Here, the researcher should try to answer: Why CAI only? Why in Science teaching only? Why for class VIII only? Why in Mumbai only?
If these questions are answered adequately, then the rationale becomes strong. Here one has to identify gaps in the area of Science teaching, especially with reference to CAI. Apart from this, the need for conducting the present study should be justified.
D. Definition of Terms :
Every research study involves certain key or technical terms which have some special connotation in the context of the study; hence it is always desirable to define such key words. There are two types of definitions: (i) theoretical / constitutive and (ii) operational.
A constitutive definition elucidates a term and perhaps gives some more insight into the phenomena described by the term; thus, this definition is based on some theory. An operational definition, on the other hand, ascribes meaning to a concept by specifying the operations that must be performed in order to measure the concept. For example, the word 'achievement' has many meanings, but operationally it can be defined as 'the scores obtained by the students in the English test constructed by the researcher in 2009'. Here it is clear that achievement in English will be measured by administering the test constructed by Mr. So and So in 2009. Apart from operational definitions, one can define some terms which have a definite meaning with reference to the particular investigation. Terms like Lok Jumbish, Minimum Levels of Learning, Programmed Learning, etc. can be defined in the particular context of the research.
E. Variables :
The variables involved in the research need to be identified here, and their operational definitions should be given in the research proposal. Especially in a study where experimental research is conducted, variables should be specified with enough care. Their classification should be done in terms of dependent variables, independent variables, intervening variables, extraneous variables, etc. The controlling of some variables needs to be discussed at an appropriate stage in the proposal.
F. Research Questions, Objectives and Hypotheses :
While reading the statement of the problem, there may be some confusion. To avoid such confusion, the research problem needs to be specified. This specification can be done by writing research questions, objectives, hypotheses and operational definitions. Objectives thus give more clarity to the researcher and the readers. Objectives are the foundations of the research, as they guide the entire process of research. The list of objectives should be neither too lengthy nor ambiguous. The objectives should be stated clearly to indicate what the researcher is trying to investigate.
While conducting any research, the researcher would definitely aim at answering certain questions. The researcher should frame such questions in a precise way. Some researchers simply put the objectives in question form, which is just a duplication of objectives and should be avoided.
Depending on the nature of the study, the researcher would formulate hypotheses. The proposition of a hypothesis is derived from theoretical constructs and from previous research. The researcher can decide whether a research hypothesis or a null hypothesis is more suitable; the evidence from previous research helps one decide the nature of the hypothesis.
Formulation of a hypothesis is an indication that the researcher has sufficient knowledge in the area, and it also gives direction to data collection and analysis. A hypothesis has to be: (i) testable, (ii) have explanatory power, (iii) state the expected relationship between variables, and (iv) be consistent with the existing body of knowledge.
G. Assumptions:
According to Best and Kahn (2004), assumptions are statements of what the researcher believes to be facts but cannot verify. If the researcher is proceeding with certain assumptions, the same need to be reported in the research proposal.
H. Scope, Limitations and Delimitations:
In any research, it is not possible to cover all aspects of the area of interest, variables, population and so on. Thus, a study always has certain limitations. Limitations are those conditions beyond the control of the researcher that may place restrictions on the conclusions. Sometimes, for instance, the tool used is not revalidated; this itself becomes a limitation of the study. Limitation is thus a broad term, whereas delimitation is a narrow term: it indicates the boundaries of the study. A study on achievement in English can be delimited to only grant-in-aid schools that follow the Maharashtra State Board, so the conclusions cannot be extended beyond this. This can be made more specific by specifying the population and sample.
I. Method, Sample and Tools:
Method : A researcher should report the method of research. As discussed earlier, the researcher should mention how the study will be conducted. Depending on the nature of the study – qualitative or quantitative – the method of research needs to be reported along with justification, i.e. how the particular method suits one's study should be discussed in brief. If it is a survey, do not simply write 'survey', but indicate the type of survey too. If it is an experimental design, mention specifically which type of experimental design.
Sample : You might have already studied sampling in detail. This section of the research proposal will mention the selection of the sample. First, the researcher should describe the population to which he would like to generalize, along with its total size. This is especially needed in the case of randomization and stratification. The researcher should mention whether a probability or non-probability sampling design is used. Accordingly, the selection of the sample needs to be detailed along with its justification. Many researchers write about randomization without mentioning the size of the population, or write about stratified sampling without giving details of the various strata along with their sizes. As the population parameters are to be estimated from the sample statistics, the selection of the sample should be done with enough care.
In the case of qualitative research, the investigator may go for theoretical sampling. This needs to be detailed and, if need be, a description of the field should be given.
Tools : You have already studied various tools of data collection. In this section of the proposal, the selection and description of the tool is to be reported with proper justification. The steps of construction of the particular tool need to be reported in brief. If readymade tools are used, then the related details need to be reported: the author of the tool, its reliability, validity and norms, along with the scoring procedure. It has been found that many researchers fail to report the year when the tool was constructed; as far as possible, very old tools should be avoided. In the case of readymade tools, always check for which population the tool was standardized. It is desirable to use valid and reliable tools.
J. Significance of the Study :
If a strong rationale has already been reported, there is hardly any need for a separate section on significance. In the rationale itself, one must describe how the study will contribute to the field of education and how the findings / results of the research will influence the educational process in general.
(Note : There are various models for writing a research proposal. The format differs from university to university, and many funding agencies have their own format for proposals.)
K. Technique/s of Data Analysis :
This is a crucial step in the proposal. How the collected data will be tabulated and organized for the purpose of further analysis is to be reported in this section. For quantitative research, whether parametric or non-parametric statistical techniques will be used needs to be reported. Before applying any technique for data analysis, verify the assumptions required for that particular technique. For example, if one wants to go for ANOVA, verify the assumptions of normality, the nature of the data – especially whether it is on an interval or ratio scale – homogeneity of variances, and randomization. For qualitative analysis, detail the nature of the data, its tabulation, organisation and description. If the data are to be analysed with the help of content analysis, how exactly this will be done needs to be detailed. Whichever technique one is using, it needs to be in tune with the objectives and hypotheses of the study.
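As an illustration of such assumption checking, the following sketch uses Python with the SciPy library on hypothetical achievement scores for three teaching methods; the scores and group labels are assumed purely for demonstration and are not taken from any actual study.

```python
from scipy import stats

# Hypothetical achievement scores for three teaching methods (assumed data)
method_a = [72, 85, 78, 90, 66, 81, 74]
method_b = [65, 70, 59, 73, 68, 77, 62]
method_c = [80, 88, 92, 79, 85, 91, 87]

# Normality of each group (Shapiro-Wilk test)
for name, group in [("A", method_a), ("B", method_b), ("C", method_c)]:
    _, p = stats.shapiro(group)
    print(f"Method {name}: Shapiro-Wilk p = {p:.3f}")

# Homogeneity of variances (Levene's test)
_, p_levene = stats.levene(method_a, method_b, method_c)
print(f"Levene's test p = {p_levene:.3f}")

# One-way ANOVA, meaningful only if the above assumptions are reasonably met
f_stat, p_value = stats.f_oneway(method_a, method_b, method_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
```

If the normality or homogeneity checks are not satisfied, a non-parametric alternative such as the Kruskal-Wallis test may be more appropriate.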
L. Bibliography :
During the preparation of the proposal, the researcher consults various sources such as books, journals, reports, Ph.D. theses, etc. All such primary / secondary sources need to be reported in the bibliography. Generally, the American Psychological Association Publication Manual should be followed for writing references. All authors quoted in the proposal need to be listed in the bibliography; authors who are not quoted but are useful for further reading may also be listed. Consistency and uniformity should be observed in reporting references.
M. Time Frame :
Proposals submitted for M.Phil. or Ph.D. degrees generally do not require a time frame in all universities, though there is a fixed time limit for these courses. It is always advisable to give a detailed schedule of the research work, as it helps to keep the researcher alert. Proposals to be submitted to a funding agency definitely ask for a time frame. The time frame should be prepared keeping the following points in view, and the time / duration mentioned by the funding agency should be properly divided among them:
Time required for preliminary work like review of literature.
Time required for preparing tool/s.
Time required for data collection, field visits, etc.
Time required for data analysis and report writing.
N. Budget :
A proposal submitted to a funding agency needs details regarding financial estimates. It may include the expected expenditure under various budget heads. The following budget heads should be kept in view, along with the amounts:
Remuneration for the project team, i.e. the principal investigator and other team members.
Remuneration for secretarial staff like clerk, data entry operator,
accountants, helpers etc.
Remuneration for appointing project fellow, field investigators
etc.
Expenditure towards purchase of books, journals, tools etc.
Expenditure towards printing, xeroxing, stationery etc.
Expenditure for data entry, tabulation and analysis of data.
Expenditure for field work, travel for monitoring purpose etc.
Expenditure for preparing final report.
While preparing the budget, examine the guidelines given by the particular funding agency.
O. Chapterisation :
Generally, the scheme of chapterisation is given in the synopsis. If it is to be reported in the research proposal, write down the various captions and sub-captions of each chapter. A few universities prescribe a format for the thesis; where this is the case, the same should be followed.
Check Your Progress – I
(a) Select one topic for research in education and write the various steps of a research proposal at length.
Suggested Readings :
1) Best, J.W. and Kahn, J.V. (2004), Research in Education, New
Delhi, Prentice Hall of India.
2) Gay, L.R. and Airasian, P. (2000), Educational Research : Competencies for Analysis and Application, New Jersey : Merrill.
3) Kothari, C.R. (1985), Research Methodology, New Delhi : Wiley Eastern Ltd.
3
VARIABLES AND HYPOTHESES
Unit Structure
3.0 Objectives
3.1 Introduction
3.2 Meaning of variables
3.3 Types of variables (Independent, Dependent, Extraneous, Intervening and Moderator)
3.4 Concept of hypothesis
3.5 Sources of hypothesis
3.6 Types of hypothesis (Research, Directional, Non-directional, Null, Statistical and Question form)
3.7 Formulating hypothesis
3.8 Characteristics of a good hypothesis
3.9 Hypothesis testing and theory
3.10 Errors in testing of hypothesis
3.11 Summary
3.0 OBJECTIVES:
After reading this unit you will be able to:
Define variables
Identify the different types of variables
Show the relationship between the variables
Explain the concept of hypotheses
State the sources of hypotheses
Explain different types of hypothesis
Identify types of hypothesis
Frame hypotheses skillfully
Describe the characteristics of a good hypothesis
Explain the significance level in hypothesis testing
Identify the errors in testing of hypothesis
3.1 INTRODUCTION:
Each person/thing we collect data on is called an
observation (in our research work these are usually
people/subjects). Observations (participants) possess a variety of characteristics.
the same for every member of the group i.e. it does not vary, it is
called a constant. If a characteristic of an observation (participant)
differs for group members it is called a variable. In research we do
not get excited about constants (since everyone is the same on that
characteristic); we are more interested in variables.
3.2 MEANING OF VARIABLES
A variable is any entity that can take on different values. So
what does that mean? Anything that can vary can be considered a
variable. For instance, age can be considered a variable because age
can take different values for different people or for the same person
at different times. Similarly, country can be considered a variable
because a person's country can be assigned a value.
A variable is a concept or abstract idea that can be described
in measurable terms. In research, this term refers to the
measurable characteristics, qualities, traits, or attributes of a
particular individual, object, or situation being studied.
Variables are properties or characteristics of some event,
object, or person that can take on different values or amounts.
Variables are things that we measure, control, or
manipulate in research. They differ in many respects, most notably
in the role they are given in our research and in the type of
measures that can be applied to them.
By itself, the statement of the problem usually provides only
general direction for the research study; it does not include all the
specific information. There is some basic terminology that is
extremely important in how we communicate specific information
about research problems and about research in general.
Let us analyse an example; if a researcher is interested in the
effects of two different teaching methods on the science
achievement of fifth-grade students, the grade level is constant,
because all individuals involved are fifth-graders. This characteristic
is the same for everyone; it is a ‘constant’ condition of the study.
After the different teaching methods have been implemented, the
fifth-graders involved would be measured with a science
achievement test. It is very unlikely that all of the fifth-graders
would receive the same score on this test, hence the score on the
science achievement test becomes a variable, because different
individuals will have different scores; at least, not all individuals will
have the same scores. We would say that science achievement is a
variable, but we would mean, specifically, that the score on the
science achievement test is a variable.
There is another variable in the preceding example – the
teaching method. In contrast to the science achievement test
score, which undoubtedly would be measured on a scale with many
possible values, teaching method is a categorical variable consisting
of only two categories, the two methods. Thus, we have different
kinds of variables and different names or classifications for them.
A concept which can take on different quantitative values is
called a variable. As such the concepts like weight, height, income
are all examples of variables. Qualitative phenomena (or the
attributes) are also quantified on the basis of the presence or absence of the concerned attribute(s). Age is an example of a continuous variable, but the number of male and female respondents is an example of a discrete variable.
3.3 TYPES OF VARIABLES:
There are many classification systems given in the literature; the names we use are descriptive, in that they describe the roles that variables play in a research study. The variables described below by
no means exhaust the different systems and names that exist, but
they are the most useful for communicating about educational
research.
3.3.1 Independent variables:
Independent variables are variables which are manipulated or
controlled or changed. In the example "a study of the effect of teacher praise on the reading achievement of second-graders", the researcher is trying to determine whether there is a cause-and-effect relationship, so the kind of praise is varied to see whether it produces different scores on the reading achievement test. We call this a manipulated independent variable (treatment variable): the amount and kind of praise is manipulated by the researcher. The researcher could also analyze the scores for boys and girls separately to see whether the results are the same for both genders. In this case gender is a classifying or attribute independent variable; the researcher cannot manipulate gender, but can classify the children according to gender.
3.3.2 Dependent variables:
Dependent variables are the outcome variables and are the
variables for which we calculate statistics. The variable which
changes on account of independent variable is known as
dependent variable.
Let us take the example, a study of the effect of teacher
praise on the reading achievement of second-graders; the
dependent variable is reading achievement. We might compare the
average reading achievement scores of second-graders in different
praise conditions such as no praise, oral praise, written praise, and
combined oral and written praise.
The following example further illustrates the use of variables
and constants. In a study conducted to determine the effect of
three different teaching methods on achievement in elementary
algebra, each of three ninth-grade algebra sections in the same
school, taught by the same teacher, is taught using one of the
methods. Both boys and girls are included in the study. The
constants in the study are grade level, school, and teacher. (This
assumes that, except for method, the teacher can hold teaching
effectiveness constant.) The independent variables in the study are
teaching method and gender of the student. Teaching method has
three levels that arbitrarily can be designated methods A, B, and C;
gender of the student, of course, has two levels. Achievement in
algebra, as measured at the end of the instructional period, is the
dependent variable.
The terms dependent and independent variable apply mostly
to experimental research where some variables are manipulated,
and in this sense they are "independent" from the initial reaction
patterns, features, intentions, etc. of the subjects. Some other
variables are expected to be "dependent" on the manipulation or
experimental conditions. That is to say, they depend on "what the
subject will do" in response. Somewhat contrary to the nature of
this distinction, these terms are also used in studies where we do
not literally manipulate independent variables, but only assign
subjects to "experimental groups" based on some pre-existing
properties of the subjects. Independent variables are those that
are manipulated whereas dependent variables are only measured
or registered.
Consider other examples of independent and dependent
variables:
Example 1: A study of teacher-student classroom interaction at
different levels of schooling.
Independent variable: Level of schooling, four categories –
primary, upper primary, secondary and junior college.
Dependent variable: Score on a classroom observation inventory, which measures teacher–student interaction.
Example 2: A comparative study of the professional attitudes of
secondary school teachers by gender.
Independent variable: Gender of the teacher – male, female.
Dependent variable: Score on a professional attitude inventory.
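To make the terminology concrete, the following sketch lays out Example 2 as a small data set in Python; the teacher records and attitude scores are hypothetical, and the pandas library is used only as one convenient way of tabulating the independent and dependent variables.

```python
import pandas as pd

# Hypothetical records for Example 2: gender is the independent (attribute)
# variable; the professional attitude score is the dependent variable
teachers = pd.DataFrame({
    "gender": ["male", "female", "female", "male", "female", "male"],
    "attitude_score": [78, 85, 91, 74, 88, 80],
})

# Compare the dependent variable across the levels of the independent variable
print(teachers.groupby("gender")["attitude_score"].mean())
```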
3.3.3 Extraneous variable:
Independent variables that are not related to the purpose of
the study, but may affect the dependent variable are termed as
extraneous variables. Suppose the researcher wants to test the
hypothesis that there is a relationship between children’s gains in
social studies achievement and their self-concepts. In this case
self-concept is an independent variable and social studies
achievement is a dependent variable. Intelligence may as well
affect the social studies achievement, but since it is not related to
the purpose of the study undertaken by the researcher, it will be
termed as an extraneous variable. Whatever effect is noticed on
dependent variable as a result of extraneous variable(s) is
technically described as an ‘experimental error’. A study must
always be so designed that the effect upon the dependent variable
is attributed entirely to the independent variable(s), and not to
some extraneous variable or variables.
E.g. Effectiveness of different methods of teaching Social Science. Here variables such as the teacher's competence, the teacher's enthusiasm, age and socio-economic status also contribute substantially to the teaching-learning process, yet they cannot all be controlled by the researcher. The conclusions lack credibility because of such extraneous variables.
3.3.4 Intervening variables:
They intervene between cause and effect. They are difficult to observe, as they are related to individuals' feelings such as boredom, fatigue and excitement. At times some of these variables cannot be controlled or measured, but they have an important effect upon the result of the study because they intervene between cause and effect. Though difficult, they have to be controlled through an appropriate design.
Eg. "Effect of immediate reinforcement on learning the parts of speech".
Factors other than reinforcement, such as anxiety, fatigue and motivation, may be intervening variables. They are difficult to define in operational, observable terms; however, they cannot be ignored and must be controlled using an appropriate research design.
3.3.5 Moderator:
A moderator is a third variable that, when introduced into an analysis, alters or has a contingent effect on the relationship between an independent and a dependent variable. A moderator variable is an independent variable that is not of primary interest but that has levels which, when combined with the levels of the independent variable of interest, produce different effects.
For example, suppose that the researcher designs a study to
determine the impact of the lengths of reading passages on the
comprehension of the reading passage. The design has three levels
of passage length: 100 words, 200 words, and 300 words. The
participants in the study are fourth-, fifth- and sixth-graders. Suppose that the three grade levels all did very well on the 100-word passage, but only the sixth-graders did very well on the 300-word passage. This would mean that successfully comprehending reading passages of different lengths was moderated by grade level.
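A minimal sketch of this idea, assuming hypothetical comprehension scores, is given below in Python; the cell means in the pivot table would show whether the effect of passage length differs across grade levels, which is precisely what a moderator variable implies.

```python
import pandas as pd

# Hypothetical comprehension scores: every grade reads passages of every length
rows = []
scores = iter([38, 40, 30, 28, 15, 17,   # grade 4
               39, 41, 33, 35, 20, 22,   # grade 5
               40, 42, 38, 39, 36, 37])  # grade 6
for grade in (4, 5, 6):
    for length in (100, 200, 300):
        for _ in range(2):                # two pupils per cell
            rows.append({"grade": grade, "length": length, "score": next(scores)})
df = pd.DataFrame(rows)

# Cell means: if the drop from 100 to 300 words is much smaller for grade 6
# than for grades 4 and 5, grade level moderates the effect of passage length
print(df.pivot_table(values="score", index="grade", columns="length", aggfunc="mean"))
```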
Check your progress:
1. What is a Variable?
2. Identify the variables in this example “Teaching effectiveness of
secondary school teachers in relation to their presage
characteristics”.
3.4 CONCEPT OF HYPOTHESIS
Hypothesis is usually considered as the principal instrument
in research. The derivation of a suitable hypothesis goes hand in
hand with the selection of a research problem. A hypothesis, as a
tentative hunch, explains the situation under observation so as to
design the study to prove or disprove it. What a researcher is
looking for is a working or positive hypothesis. It is very difficult,
laborious and time consuming to make adequate discriminations in
the complex interplay of facts without hypothesis. It gives definite
point and direction to the study, prevents blind search and
indiscriminate gathering of data and helps to delimit the field of
inquiry.
3.4.1 Meaning:
The word hypothesis (plural: hypotheses) is derived from the Greek word 'hypotithenai', meaning 'to put under' or 'to suppose'. For a hypothesis to be put forward as a scientific hypothesis, the scientific method requires that one can test it.
Etymologically hypothesis is made up of two words, “hypo” (less
than) and “thesis”, which mean less than or less certain than a
thesis. It is the presumptive statement of a proposition or a
reasonable guess, based upon the available evidence, which the
researcher seeks to prove through his study.
According to Lundberg, "A hypothesis is a tentative generalisation, the validity of which remains to be tested. In its most elementary stage, the hypothesis may be any hunch, guess, imaginative idea, which becomes the basis for action or investigation."
Goode and Hatt have defined it as "a proposition which can be put to test to determine its validity". A hypothesis is a statement
temporarily accepted as true in the light of what is, at the time,
known about a phenomenon, and it is employed as a basis for action
in the search of new truth.
A hypothesis is a tentative assumption drawn from
knowledge and theory which is used as a guide in the investigation
of other facts and theories that are yet unknown.
It is a guess, supposition or tentative inference as to the
existence of some fact, condition or relationship relative to some
phenomenon which serves to explain such facts as already are
known to exist in a given area of research and to guide the search
for new truth.
Hypotheses reflect the research worker’s guess as to the
probable outcome of the experiments.
A hypothesis is therefore a shrewd and intelligent guess, a
supposition, inference, hunch, provisional statement or tentative
generalization as to the existence of some fact, condition or
relationship relative to some phenomenon which serves to explain
already known facts in a given area of research and to guide the
search for new truth on the basis of empirical evidence. The
hypothesis is put to test for its tenability and for determining its
validity.
In this connection Lundberg observes: Quite often a research
hypothesis is a predictive statement, capable of being tested by
scientific methods, that relates an independent variable to some
dependent variable. For example, consider statements like the
following ones: “Students who receive counselling will show a
greater increase in creativity than students not receiving
counseling” or “There is a positive relationship between academic
aptitude scores and scores on a social adjustment inventory for high
school students”
These are hypotheses capable of being objectively verified
and tested. Thus, we may conclude that a hypothesis states what
we are looking for and it is a proposition which can be put to a test
to determine its validity.
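For instance, the second hypothesis above can be examined as a correlation. The sketch below, written in Python with SciPy, uses hypothetical aptitude and adjustment scores merely to illustrate how such a relational hypothesis can be put to an objective test.

```python
from scipy import stats

# Hypothetical paired scores for ten high school students (assumed data):
# academic aptitude and social adjustment inventory scores
aptitude   = [62, 75, 58, 80, 66, 71, 90, 55, 68, 77]
adjustment = [60, 72, 55, 78, 70, 69, 88, 52, 66, 80]

# Pearson correlation: r indicates the direction and strength of the relationship,
# and the p-value indicates whether it could plausibly have arisen by chance
r, p_value = stats.pearsonr(aptitude, adjustment)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```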
3.4.2 Importance of the Hypotheses:
The importance of hypotheses is generally recognized more
in the studies which aim to make predictions about some outcome.
In experimental research, the researcher is interested in making
predictions about the outcome of the experiment or what the
results are expected to show and therefore the role of hypotheses
is considered to be of utmost importance. In the historical or
descriptive research, on the other hand, the researcher is
investigating the history of a city or a nation, the life of a man, the
happening of an event, or is seeking facts to determine the status
quo of some situation and thus may not have a basis for making a
prediction of results. A hypothesis, therefore, may not be required
in such fact-finding studies. Hillway (1964) too is of the view that
“when fact-finding alone is the aim of the study, a hypothesis may
not be required.”
Most historical or descriptive studies, however, involve not
only fact-finding but interpretation of facts to draw generalizations.
If a researcher is tracing the history of an educational institution or
making a study about the results of a coming assembly poll, the
facts or data he gathers will prove useful only if he is able to draw
generalizations from them. Whenever possible, a hypothesis is
recommended for all major studies to explain observed facts,
conditions or behaviour and to serve as a guide in the research
process. The importance of hypotheses may be summarized as
under.
1. Hypotheses facilitate the extension of knowledge in an area.
They provide tentative explanations of facts and phenomena,
and can be tested and validated. It sensitizes the investigator
to certain aspects of situations which are relevant from the
standpoint of the problem in hand.
2. Hypotheses provide the researcher with rational statements,
consisting of elements expressed in a logical order of
relationships which seek to describe or to explain conditions or
events, that have not yet been confirmed by facts. The
hypotheses enable the researcher to relate logically known
facts to intelligent guesses about unknown conditions. It is a
guide to the thinking process and the process of discovery. It
is the investigator’s eye – a sort of guiding light in the work of
darkness.
3. Hypotheses provide direction to the research. They define what
is relevant and what is irrelevant. The hypotheses tell the
researcher specifically what he needs to do and find out in his
study. Thus it prevents the review of irrelevant literature and
the collection of useless or excess data. Hypotheses provide a
basis for selecting the sample and the research procedures to
be used in the study. The statistical techniques needed in the
analysis of data, and the relationships between the variables
to be tested, are also implied by the hypotheses.
Furthermore, the hypotheses help the researcher to delimit
his study in scope so that it does not become broad or
unwieldy.
4. Hypotheses provide the basis for reporting the conclusions of
the study. It serves as a framework for drawing conclusions.
The researcher will find it very convenient to test each
hypothesis separately and state the conclusions that are
relevant to each. On the basis of these conclusions, he can
make the research report interesting and meaningful to the
reader. It provides the outline for setting conclusions in a
meaningful way.
A hypothesis has a very important place in research although it occupies very little space in the body of a thesis. It is almost
impossible for a research worker not to have one or more
hypotheses before proceeding with his work.
3.5 SOURCES OF HYPOTHESIS:
The derivation of a good hypothesis demands both experience and creativity. Though the hypothesis should precede the gathering of data, a good hypothesis can come only from experience: some degree of data gathering, the review of related literature, or a pilot study must precede the development and gradual refinement of the hypothesis. A good investigator must have not only an alert mind capable of deriving relevant hypotheses, but also a critical mind capable of rejecting faulty hypotheses.
What is the source of hypotheses? They may be derived
directly from the statement of the problem; they may be based on
the research literature, or in some cases, such as in ethnographic
research, they may (at least in part) be generated from data
collection and analysis. The various sources of hypotheses may be:
Review of similar studies in the area or of the studies on similar
problems;
Examination of data and records, if available, concerning the
problem for possible trends, peculiarities and other clues;
Discussions with colleagues and experts about the problem, its
origin and the objectives in seeking a solution.
Exploratory personal investigation which involves original field
interviews on a limited scale with interested parties and
individuals with a view to secure greater insight into the
practical aspects of the problem.
Intuition is often considered a reasonable source of research hypotheses, especially when it is the intuition of a well-known researcher or theoretician who "knows what is known".
Rational Induction is often used to form “new hypotheses” by
logically combining the empirical findings from separate areas of
research
Prior empirical research findings are perhaps the most common
source of new research hypotheses, especially when carefully
combined using rational induction
Thus, hypotheses are formulated as a result of prior thinking about the subject, examination of the available data and material (including related studies), and the counsel of experts.
Check your progress:
1. Define hypothesis.
2. Hypothesis is stated in researches concerned with
3. What are the sources of hypotheses?
3.6 TYPES OF HYPOTHESIS :
3.6.1 Research hypothesis: When a prediction or a hypothesized
relationship is to be tested by scientific methods, it is termed
as research hypothesis. The research hypothesis is a
predictive statement that relates an independent variable to
a dependent variable. Usually a research hypothesis must
contain, at least, one independent and one dependent
variable. A research hypothesis must be stated in a testable
form for its proper evaluation. As already stressed, this form
should indicate a relationship between the variables in clear,
concise, and understandable language. Research hypotheses
are classified as being directional or non-directional.
3.6.2 Directional hypothesis: The hypotheses which stipulate the
direction of the expected differences or relationships are
terms as directional hypotheses. For example, the research
hypothesis: “There will be a positive relationship between
individual's attitude towards high caste Hindus and his socio-economic status," is a directional research hypothesis. This hypothesis stipulates that individuals with a favourable attitude towards high caste Hindus will generally come from higher socio-economic Hindu families, and therefore it does stipulate the direction of the relationship. Similarly, the hypothesis: "Adolescent boys with high IQ will exhibit lower anxiety than adolescent boys with low IQ" is a directional research hypothesis because it stipulates the direction of the difference between groups.
3.6.3 Non-directional hypothesis: A research hypothesis which
does not specify the direction of expected differences or
relationships is a non-directional research hypothesis. For
example, the hypotheses: “There will be difference in the
adaptability of fathers and mothers towards rearing of their
children” or “There is a difference in the anxiety level of
adolescent girls of high IQ and low IQ” are non-directional
research hypotheses. Although these hypotheses stipulate
there will be a difference, the direction of the difference is
not specified. A research hypothesis can take either
statistical form, declarative form, the null form, or the
question form.
3.6.4 Statistical hypothesis: When it is time to test whether the
data support or refute the research hypothesis, it needs to
be translated into a statistical hypothesis. A statistical
hypothesis is given in statistical terms. Technically, in the
context of inferential statistics, it is a statement about one or
more parameters that are measures of the populations
under study. Statistical hypotheses often are given in
quantitative terms, for example: “The mean reading
achievement of the population of third-grade students
taught by Method A equals the mean reading achievement
of the population taught by Method B.” Therefore we can
say that statistical hypotheses are concerned with populations under study. We use inferential statistics to
draw conclusions about population values even though we
have access to only a sample of participants. In order to use
inferential statistics, we need to translate the research
hypothesis into a testable form, which is called the null
hypothesis. An alternative or declarative hypothesis
indicates the situation corresponding to when the null
hypothesis is not true. The stated hypothesis will differ
depending on whether or not it is a directional research
hypothesis.
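As an illustration of this translation, the sketch below states a non-directional research hypothesis about two teaching methods as a null hypothesis and tests it in Python with SciPy; the reading scores are hypothetical values assumed for demonstration.

```python
from scipy import stats

# Hypothetical reading achievement scores for two teaching methods (assumed data)
method_a = [54, 61, 58, 66, 49, 63, 57, 60]
method_b = [52, 50, 55, 48, 53, 57, 46, 51]

# H0: the two population means are equal; Ha: they are not equal (non-directional)
t_stat, p_value = stats.ttest_ind(method_a, method_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Decision at the 5 per cent level of significance
if p_value < 0.05:
    print("Reject H0: the data favour the alternative hypothesis")
else:
    print("Fail to reject H0: no statistical evidence of a difference")
```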
3.6.5 Declarative hypothesis: When the researcher makes a
positive statement about the outcome of the study, the
hypothesis takes the declarative form. For example, the
hypothesis: “The academic achievement of extroverts is
significantly higher than that of the introverts,” is stated in
the declarative form. In such a statement of hypothesis, the
researcher makes a prediction based on his theoretical
formulations of what should happen if the explanations of
the behaviour he has given in his theory are correct.
3.6.6 Null hypothesis: In the null form, the researcher makes a
statement that no relationship exists. The hypothesis, “There
is no significant difference between the academic
achievement of high school athletes and that of non
athletes,” is an example of null hypothesis. Since null
hypotheses can be tested statistically, they are often termed
as statistical hypotheses. They are also called the testing
hypotheses when declarative hypotheses are tested
statistically by converting them into null form. It states that
even where it seems to hold good it is due to mere chance.
It is for the researcher to reject the null hypothesis by
showing that the outcome mentioned in the declarative
hypothesis does occur and the quantum of it is such that it
cannot be easily dismissed as having occurred by chance.
3.6.7 Question form hypothesis: In the question form hypothesis,
a question is asked as to what the outcome will be instead of
stating what outcome is expected. Suppose a researcher is
interested in knowing whether programmed instruction has
any relationship to test anxiety of children.
The declarative form of the hypothesis might be:
“Teaching children through the programmed instruction
material will decrease their test anxiety”.
The null form would be: "Teaching children through programmed instruction material will have no effect on their test anxiety." This statement shows that no
relationship exists between programmed instruction and
test anxiety.
The question form puts the statement in the form: “Will
teaching children through programmed instruction
decrease their test anxiety?”
3.7 FORMULATING HYPOTHESIS:
Hypotheses are guesses or tentative generalizations, but
these guesses are not merely accidents. Collection of factual
information alone does not lead to successful formulation of
hypotheses. Hypotheses are the products of considerable
speculation and imaginative guess work. They are based partly on
known facts and explanations, and are partly conceptual. There are no
precise rules for formulating hypotheses and deducing
consequences from them that can be empirically verified.
However, there are certain necessary conditions that are conducive
to their formulation. Some of them are:
Richness of background knowledge. A researcher may deduce
hypotheses inductively after making observations of behaviour,
noticing trends or probable relationships. For example, a
classroom teacher daily observes student behaviour. On the
basis of his experience and his knowledge of behaviour in a
school situation, the teacher may attempt to relate the
behaviour of students to his own, to his teaching methods, to
changes in the school environment, and so on. From these
observed relationships, the teacher may inductively formulate a
hypothesis that attempts to explain such relationships.
Background knowledge, however, is essential for
perceiving relationships among the variables and to determine
what findings other researchers have reported on the problem
under study. New knowledge, new discoveries, and new
inventions should always form continuity with the already
existing corpus of knowledge and, therefore, it becomes all the
more essential to be well versed with the already existing
knowledge.
Hypotheses may be formulated correctly by persons who
have rich experiences and academic background, but they can
never be formulated by those who have poor background
knowledge.
Versatility of intellect: Hypotheses are also derived through
deductive reasoning from a theory. Such hypotheses are called
deductive hypotheses. A researcher may begin a study by
selecting one of the theories in his own area of interest. After
selecting the particular theory, the researcher proceeds to
deduce a hypothesis from this theory through symbolic logic or
mathematics. This is possible only when the researcher has a
versatile intellect and can make use of it for restructuring his
experiences. Creative imagination is the product of an
adventure, sound attitude and agile intellect. In the hypotheses
formulation, the researcher works on numerous paths. He has
to take a consistent effort and develop certain habits and
attitudes. Moreover, the researcher has to saturate himself
with all possible information about the problem and then think liberally about it and proceed further in the conduct of the study.
Analogy and other practices. Analogies also lead the researcher
to clues that he might find useful in the formulation of
hypotheses and for finding solutions to problems. For example,
suppose a new situation resembles an old situation in regard to
a factor X. If the researcher knows from previous experience
that the old situation is related to other factors Y and Z as well
as to X, he reasons that perhaps a new situation is also related
to Y and Z. The researcher, however, should use analogies with
caution as they are not fool proof tools for finding solutions to
problems. At times, conversations and consultations with
colleagues and expert from different fields are also helpful in
formulating important and useful hypotheses.
3.8 CHARACTERISTICS OF A GOOD HYPOTHESIS
Hypothesis must possess the following characteristics:
i) Hypothesis should be clear and precise. If the hypothesis is
not clear and precise, the inferences drawn on its basis
cannot be taken as reliable.
ii) Hypothesis should be capable of being tested. Some prior
study may be done by researcher in order to make
hypothesis a testable one. A hypothesis “is testable if other
deductions can be made from it which, in turn, can be
confirmed or disproved by observation.”
iii) Hypothesis should state relationship between variables, if it
happens to be a relational hypothesis.
iv) Hypothesis should be limited in scope and must be specific.
A researcher must remember that narrower hypotheses are
generally more testable and he should develop such
hypotheses.
v) Hypothesis should be stated as far as possible in most simple
terms so that the same is easily understandable by all
concerned. But one must remember that simplicity of
hypothesis has nothing to do with its significance.
vi) Hypothesis should be consistent with most known facts i.e. it
must be consistent with a substantial body of established
facts. In other words, it should be one which judges accept
as being the most likely.
vii) The hypotheses selected should be amenable to testing
within a reasonable time. The researcher should not select a
problem which involves hypotheses that are not agreeable
to testing within a reasonable and specified time. He must
know that there are problems that cannot be solved for a
long time to come. These are problems of immense
difficulty that cannot be profitably studied because of the
lack of essential techniques or measures.
viii) Hypothesis must explain the facts that gave rise to the need
for explanation. This means that by using the hypothesis
plus other known and accepted generalizations, one should
be able to deduce the original problem condition. Thus
hypothesis must actually explain what it claims to explain, it
should have empirical reference.
Check your progress:
1. What are the different types of hypothesis?
2. List the characteristics of hypothesis.
3.9 HYPOTHESIS TESTING AND THEORY
When the purpose of research is to test a research
hypothesis, it is termed as hypothesis-testing research. It can be of
the experimental design or of the non-experimental design.
Research in which the independent variable is manipulated is
termed 'experimental hypothesis-testing research' and research in which the independent variable is not manipulated is called 'non-experimental hypothesis-testing research'.
Let us get acquainted with relevant terminologies used in
hypothesis testing.
Null hypothesis and alternative hypothesis:
In the context of statistical analysis, we often talk about null
hypothesis and alternative hypothesis. If we are to compare
method A with method B about its superiority and if we proceed on
the assumption that both methods are equally good, then this
assumption is termed the null hypothesis. As against this, if we think that method A is superior or method B is inferior, we are then stating what is termed the alternative hypothesis. The
null hypothesis is generally symbolized as H0 and the alternative
hypothesis as Ha. The null hypothesis and the alternative hypothesis
are chosen before the sample is drawn. Generally, in hypothesis
testing we proceed on the basis of null hypothesis, keeping the
alternative hypothesis in view. Why so? The answer is that on the
assumption that null hypothesis is true, one can assign the
probabilities to different possible sample results, but this cannot be
done if we proceed with the alternative hypothesis. Hence the use
of null hypothesis (at times also known as statistical hypothesis) is
quite frequent.
a) The level of significance: This is very important concept in the
context of hypothesis testing. It is always some percentage
(usually 5%) which should be chosen with great care, thought
and reason. In case we take the significance level at 5 per cent,
then this implies that H0 will be rejected when the sampling
result (i.e. observed evidence) has a less than 0.05 probability of
occurring if H0 is true. In other words, the 5 percent level of
significance means that the researcher is willing to take as much as a 5 per cent risk of rejecting the null hypothesis when it (H0)
happens to be true. Thus the significance level is the maximum
value of the probability of rejecting H0 when it is true and is
usually determined in advance before testing the hypothesis.
b) The criteria for rejecting the null hypothesis may differ.
Sometimes the null hypothesis is rejected only when the
quantity of the outcome is so large that the probability of its
having occurred by mere chance is 1 time out of 100. We
consider the probability of its having occurred by chance to be
too little and we reject the chance theory of the null hypothesis
and take the occurrence to be due to a genuine tendency. On
other occasions, we may be bolder and reject the null
hypothesis even when the quantity of the reported outcome is
likely to occur by chance 5 times out of 100. Statistically, the former is known as the rejection of the null hypothesis at the 0.01 level of significance and the latter as the rejection at the 0.05 level. It
may be pointed out that if the researcher is able to reject the
null hypothesis, he cannot directly uphold the declarative
hypothesis. If an outcome is not held to be due to chance, it
does not mean that it is due to the very cause and effect
relationship asserted in the particular declarative statement. It
may be due to something else which the researcher may have
failed to control.
c) Decision rule or test of hypothesis: Given a hypothesis H0 and
an alternative hypothesis Ha we make a rule which is known as
decision rule according to which we accept H0 (i.e. reject Ha) or
reject H0 (ie. accept Ha). For instance, if H0 is that a certain lot is
good (there are very few defective items in it) against Ha that
the lot is not good (there are too many defective items in it),
then we must decide the number of items to be tested and the
criterion for accepting or rejecting the hypothesis. We might
test 10 items from the lot and plan our decision saying that if there
are none or only 1 defective item among the 10, we will accept
H0 otherwise we will reject H0 (or accept Ha). This sort of basis is
known as decision rule.
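The consequences of such a decision rule can be worked out from the binomial distribution. The sketch below, in Python with SciPy, does this for two assumed defect rates (5 per cent and 30 per cent); these rates are illustrative values and not part of the original example.

```python
from scipy import stats

# Decision rule from the text: inspect 10 items and accept H0 (the lot is good)
# if at most 1 item is defective. The defect rates below are assumed for illustration.
n = 10
for defect_rate in (0.05, 0.30):
    p_accept = stats.binom.cdf(1, n, defect_rate)   # P(0 or 1 defective among 10)
    print(f"true defect rate {defect_rate:.0%}: P(accept H0) = {p_accept:.3f}")

# A genuinely good lot (5% defective) is accepted about 91% of the time,
# while a poor lot (30% defective) is accepted only about 15% of the time.
```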
d) Two-tailed and One-tailed tests: In the context of hypothesis
testing, these two terms are quite important and must be clearly
understood. A two-tailed test rejects the null hypothesis if, say,
the sample mean is significantly higher or lower than the
hypothesized value of the mean of the population. Such a test is
appropriate when the null hypothesis is some specified value
and the alternative hypothesis is a value not equal to the
specified value of the null hypothesis. In a two-tailed test, there
are two rejection regions, one on each tail of the curve.
If the significance level is 5 per cent and the two-tailed test is
to be applied, the probability of the rejection area will be 0.05
(equally divided on both tails of the curve as 0.025) and that of the
acceptance region will be 0.95.
But there are situations when only one-tailed test is
considered appropriate. A one-tailed test would be used when we
are to test, say, whether the population mean is either lower than
or higher than some hypothesized value. We should always remember that accepting H0 on the basis of sample information does not constitute proof that H0 is true; we only mean that there is no statistical evidence to reject it.
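The division of the 5 per cent rejection region between one tail and two tails corresponds to different critical values of the normal curve; the short Python sketch below computes them with SciPy, purely as an illustration.

```python
from scipy import stats

alpha = 0.05

# Two-tailed test: the 5 per cent rejection region is split as 0.025 in each tail
two_tail_crit = stats.norm.ppf(1 - alpha / 2)   # about 1.96

# One-tailed test: the whole 5 per cent lies in a single tail
one_tail_crit = stats.norm.ppf(1 - alpha)       # about 1.64

print(f"two-tailed critical z = +/-{two_tail_crit:.2f}")
print(f"one-tailed critical z = {one_tail_crit:.2f}")
```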
3.10 ERRORS IN TESTING OF HYPOTHESIS
Type I and Type II errors: in the context of testing of
hypotheses, there are basically two types of errors we can make.
We may reject H0 when H0 is true and we may accept H0 when in
fact H0 is not true. The former is known as Type I error and the
latter as Type II error. In other words, Type I error means rejection
of hypothesis which should have been accepted and Type II error
means accepting the hypothesis which should have been rejected.
Type I error is denoted by α (alpha), known as the α error and also called the level of significance of the test; Type II error is denoted by β (beta), known as the β error. In tabular form, the said two errors can be presented as follows:
Table 3.1

                            Decision
                 Accept H0                  Reject H0
H0 (true)        Correct decision           Type I error (α error)
H0 (false)       Type II error (β error)    Correct decision
The probability of Type I error is usually determined in
advance and is understood as the level of significance of testing the
hypothesis. If type I error is fixed at 5 per cent, it means that there
are about 5 chances in 100 that we will reject H0 when H0 is true.
We can control Type I error just by fixing it at a lower level. For
instance, if we fix it at 1 per cent, we will say that the maximum
probability of committing Type I error would only be 0.01.
But with a fixed sample size, n, when we try to reduce Type I
error, the probability of committing Type II error increases. Both
types of errors cannot be reduced simultaneously. There is a trade-off between the two types of errors, which means that the probability of
making one type of error can only be reduced if we are willing to
increase the probability of making the other type of error. To deal
with this trade-off in business situations, decision-makers decide
the appropriate level of Type I error by examining the costs or
penalties attached to both types of errors. If Type I error involves
the time and trouble of reworking a batch of chemicals that should
have been accepted, whereas Type II error means taking a chance
that an entire group of users of this chemical compound will be
poisoned, then in such a situation one should prefer a Type I error
to a Type II error. As a result one must set very high level for Type I
error in one’s testing technique of a given hypothesis. Hence, in the
testing of hypothesis, one must make all possible effort to strike an
adequate balance between Type I and Type II errors.
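The behaviour of the two errors can also be seen empirically. The Python simulation below, with assumed sample size, effect size and number of trials, estimates how often each error occurs when two group means are compared with a t-test; it is a sketch for illustration, not a prescription.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 2000   # assumed values for the simulation

def rejection_rate(true_diff):
    """Proportion of simulated studies in which H0 (equal means) is rejected."""
    rejections = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)           # group from a population with mean 0
        b = rng.normal(true_diff, 1.0, n)     # group from a population with mean true_diff
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            rejections += 1
    return rejections / trials

# H0 true: any rejection is a Type I error; the rate should be close to alpha (0.05)
print("Estimated Type I error rate :", rejection_rate(0.0))
# H0 false (true difference of 0.5 SD): accepting H0 is a Type II error (beta)
print("Estimated Type II error rate:", 1 - rejection_rate(0.5))
```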
Check your progress:
1. Explain the term level of significance?
2. What are the two types of error in the testing of the hypothesis?
3.11 SUMMARY
It is important for the researcher to formulate hypotheses
before data are gathered. This is necessary for an objective and
unbiased study. It should be evident from what you have read so far
that in order to carry out research; you need to start by identifying
a question which demands an answer, or a need which requires a
solution. The problem can be generated either by an initiating idea,
or by a perceived problem area. We also studied that there are
important qualities of hypotheses which distinguish them from
other forms of statement. A good hypothesis is a very useful aid to
organizing the research effort. It specifically limits the enquiry to
the interaction of certain variables; it suggests the methods
appropriate for collecting, analyzing and interpreting the data; and
the resultant confirmation or rejection of the hypothesis through
empirical or experimental testing gives a clear indication of the
extent of knowledge gained. The hypothesis must be conceptually
clear. The concepts utilized in the hypothesis should be clearly
defined – not only formally but also if possible, operationally.
Hypothesis testing is an often-used strategy for deciding whether a
sample data offer such support for a hypothesis that generalization
can be made. Thus hypothesis testing enables us to make
probability statements about population parameter(s).
BIBLIOGRAPHY
1. Best, J. W. and Kahn, J. V. (2009). Research in Education, tenth edition. New Delhi: Dorling Kindersley (India) Pvt. Ltd.
2. Wiersma, W. and Jurs, S. G. (2009). Research Methods in Education: An Introduction, ninth edition. New Delhi: Pearson Education, Dorling Kindersley (India) Pvt. Ltd.
3. Kothari, C. R. (2009). Research Methodology: Methods and Techniques, second edition. New Delhi: New Age International (P) Ltd. Publishers.
4. Gupta, S. (2007). Research Methodology and Statistical Techniques. New Delhi: Deep and Deep Publications Pvt. Ltd.
5. McBurney, D. H. and White, T. L. (2007). Research Methods, seventh edition. Delhi: Akash Press.
6. Kaul, L. (2004). Methodology of Educational Research, third edition. New Delhi: UBS Publishers' Distributors Pvt. Ltd.
4
SAMPLING
Unit Structure
4.0 Objectives
4.1 Introduction
4.2 Concept of universe, sample and sampling
4.3 Need for sampling
4.4 Advantages and disadvantages of sampling
4.5 Characteristics of a good sample
4.6 Techniques of sampling
4.7 Types of probability sampling
4.8 Types of non-probability sampling
4.9 Let us sum up
4.0 OBJECTIVES :
After reading this unit the student will be able to :
Define universe, sample and sampling
State the need of sampling
List the advantages and disadvantages of sampling
State the characteristics of a good sample
Differentiate between techniques of sampling
Explain types of probability and non-probability sampling
4.1 INTRODUCTION :
The researcher is concerned with the generalizability of the
data beyond the sample. For studying any problem it is impossible
to study the entire population. It is therefore convenient to pick
out a sample out of the universe proposed to be covered by the
study. The process of sampling makes it possible to draw valid
inferences or generalizations on the basis of careful observation of
variables within a small proportion of the population.
4.2 CONCEPT OF UNIVERSE, SAMPLE AND SAMPLING :
Universe or Population : It refers to the totality of objects or
individuals regarding which inferences are to be made in a sampling
study.
Or
It refers to the group of people, items or units under
investigation and includes every individual.
First, the population is selected for observation and analysis.
Sample : It is a collection consisting of a part or subset of the objects or individuals of the population, selected for the purpose of representing the population. A sample is obtained by collecting information about only some members of the population.
Sampling : It is the process of selecting a sample from the
population. For this, the population is divided into a number of parts called Sampling Units.
4.3 NEED FOR SAMPLING :
Large population can be conveniently covered.
Time, money and energy is saved.
Helpful when units of area are homogenous.
Used when 100 per cent accuracy is not required.
Used when the data are unlimited.
4.4 ADVANTAGES AND DISADVANTAGES OF SAMPLING :
Advantages of Sampling :
Economical : A manageable sample reduces the cost compared to studying the entire population.
Increased speed : The processes of research, such as collection, analysis and interpretation of data, take less time than they would for the entire population.
Greater Scope : Handling data becomes easier and manageable
in case of a sample. Moreover comprehensive scope and
flexibility exists in the case of a sample.
Accuracy : Due to limited area of coverage, completeness and
accuracy is possible. The processing of data is done accurately
producing authentic results.
Rapport : Better rapport is established with the respondents,
which helps in validity and reliability of the results.
Disadvantages of Sampling :
Biasedness : Chances of biased selection leading to erroneous
conclusions may prevail. Bias in the sample may be due to
faulty method of selection of individuals or the nature of
phenomenon itself.
Selection of a true representative sample : If the problem under study is of a complex nature, it becomes difficult to select a truly representative sample; if such a sample is not obtained, the results will be neither accurate nor usable.
Need for specialized knowledge : The researcher needs
knowledge, training and experience in sampling technique,
statistical analysis and calculation of probable error. Lack of
those may lead to serious mistakes.
Changeability of units : If the units of population are not
homogeneous, the sampling technique will be unscientific. At
times, all the individuals may not be accessible or may be
uncooperative. In such a case, they have to be replaced. This
introduces a change in the subjects to be studied.
Impossibility of sampling : Sometimes the population is too small or too heterogeneous to select a representative sample. In such cases a 'census study' (collecting information about each member of the population) is the alternative. Sampling error also arises when a very high standard of accuracy is expected.
4.5 CHARACTERISTICS OF A GOOD SAMPLE :
A good sample should possess the following characteristics :
A true representative of the population
Free from error due to bias
Adequate in size for being reliable
Units of sample should be independent and relevant
Units of sample should be complete, precise and up to date
Free from random sampling error
Avoiding substituting the original sample for convenience.
Check Your Progress - I
Q.1 Define the following :
(a) Sample
(b) Sampling
Q.2 Write any three points for each of the following :
(a) Need for sampling
(b) Advantages of sampling
(c) Disadvantages of sampling
(d) Characteristics of a good sample
4.6 TECHNIQUES OF SAMPLING :
There are different types of sampling techniques based on two factors viz. (1) the representation basis and (2) the element selection technique. On the representation basis, the sample may be probability sampling or it may be non-probability sampling. On the element selection basis, the sample may be either unrestricted or restricted. Here we will discuss two types of sampling viz.
(a) Probability Sampling and
(b) Non-Probability Sampling.
Difference between Probability and Non- Probability Sampling :
(1) A probability sample is one in which each member of the population has an equal chance of being selected, whereas in a non-probability sample, the chance of a particular member of the population being chosen is unknown.
(2) In probability sampling, randomness is the element of control. In non-probability sampling, selection relies on personal judgement.
4.7 TYPES OF PROBABILITY SAMPLING :
Following are the types of probability sampling
1) Simple random sampling
2) Systematic sampling
3) Stratified sampling
4) Cluster sampling
5) Multistage sampling
Simple Random Sampling : In this, all members have the same chance (probability) of being selected. The random method provides an unbiased cross-section of the population. For example, suppose we wish to draw a sample of 50 students from a population of 400 students. Place all 400 names in a container and draw out 50 names one by one.
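As a purely illustrative aside (not part of the prescribed procedure), the lottery-style draw described above can be simulated in a short Python sketch; the 400 hypothetical student names and the sample size of 50 simply mirror the example.

    import random

    # Hypothetical population of 400 students, labelled Student_1 ... Student_400
    students = [f"Student_{i}" for i in range(1, 401)]

    # Draw 50 names "from the container" without replacement;
    # every student has the same chance of being selected.
    sample = random.sample(students, k=50)
    print(sample[:5])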
Systematic Sampling : Each member of the sample comes after an equal interval from its previous member. For example, for a sample of 50 students from a population of 400, the sampling fraction is 50/400 = 1/8, i.e. select one student out of every eight students in the population. The starting point for the selection is chosen at random.
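The following Python sketch is, again, only an assumed illustration of this procedure: with a population of 400 and a sample of 50, the sampling interval is 8, the starting point is chosen at random, and every eighth member is then taken.

    import random

    students = [f"Student_{i}" for i in range(1, 401)]   # population of 400
    sample_size = 50
    interval = len(students) // sample_size              # sampling interval = 8

    start = random.randint(0, interval - 1)              # random starting point
    systematic_sample = students[start::interval]        # every 8th member thereafter
    print(len(systematic_sample))                        # 50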
Stratified Sampling : The population is divided into smaller homogeneous groups or strata by some characteristic, and members are selected at random from each of these strata. For example, the population is the Christian community of the greater Mumbai region. It is divided into strata such as professionals, skilled workers, labourers and managers. The number to be selected from each stratum is obtained using the sampling fraction, i.e.

Number from a stratum = (Sample size / Total population) x Total number in the stratum

Finally, within each stratum, simple random or systematic sampling is used to select the final sample.
Suppose there are 500 Christians in greater Mumbai: 100 professionals, 200 skilled workers, 80 labourers and 120 managers. If the sample size is 100, the number to be drawn from each stratum is:

Professionals = (100/500) x 100 = 20
Skilled workers = (100/500) x 200 = 40
Labourers = (100/500) x 80 = 16
Managers = (100/500) x 120 = 24
From each stratum select randomly or systematically.
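Purely as a sketch of the arithmetic above (the stratum names and sizes are the assumed figures of the worked example), proportional allocation followed by simple random selection within each stratum could be carried out as follows.

    import random

    # Hypothetical sampling frame: stratum name -> list of members
    strata = {
        "Professionals":   [f"P_{i}" for i in range(100)],
        "Skilled workers": [f"S_{i}" for i in range(200)],
        "Labourers":       [f"L_{i}" for i in range(80)],
        "Managers":        [f"M_{i}" for i in range(120)],
    }
    total = sum(len(members) for members in strata.values())   # 500
    sample_size = 100

    stratified_sample = []
    for name, members in strata.items():
        # Proportional allocation: sample size x (stratum size / total population)
        n = round(sample_size * len(members) / total)
        stratified_sample.extend(random.sample(members, n))    # simple random within the stratum

    print(len(stratified_sample))   # 100  (= 20 + 40 + 16 + 24)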
Cluster Sampling (Area Sampling) : A researcher selects sampling
units at random and then does complete observation of all units in
the group. For example, your research involves kindergarten
schools. Select randomly 15 schools. Then study all the children of
15 schools. In cluster sampling the unit of sampling consists of
multiple cases. It is also known as area sampling, as the selection of individual members is made on the basis of place of residence or employment.
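A minimal, hypothetical sketch of cluster sampling is given below; the 60 kindergarten schools of 30 children each are assumed figures, used only to show that once clusters are selected at random, every unit within them is studied.

    import random

    # Hypothetical frame: 60 kindergarten schools, each with its list of children
    schools = {f"School_{i}": [f"Child_{i}_{j}" for j in range(30)]
               for i in range(1, 61)}

    # Select 15 schools (clusters) at random ...
    selected_schools = random.sample(list(schools), k=15)

    # ... and then study ALL the children of the selected schools
    cluster_sample = [child for school in selected_schools
                      for child in schools[school]]
    print(len(cluster_sample))   # 15 schools x 30 children = 450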
Multistage Sampling : The sample to be studied is selected at random at different stages. For example, we need to select a sample of middle-class working couples in Maharashtra state. The first stage will be randomly selecting a specific number of districts in the state. The second stage involves randomly selecting a specific number of rural and urban areas for the study. At the third stage, from each area, a specific number of middle-class families will be selected, and at the last stage, working couples will be selected from these families.
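The stage-by-stage idea can also be sketched in Python; the frame below and the numbers selected at each stage (3 districts, 2 areas per district, 10 families per area) are assumptions made only for the illustration.

    import random

    # Hypothetical three-stage frame: district -> area -> list of families
    state = {
        f"District_{d}": {
            f"Area_{d}_{a}": [f"Family_{d}_{a}_{f}" for f in range(50)]
            for a in range(1, 7)
        }
        for d in range(1, 11)
    }

    districts = random.sample(list(state), k=3)                  # stage 1: districts
    sample_families = []
    for d in districts:
        areas = random.sample(list(state[d]), k=2)               # stage 2: areas within the district
        for a in areas:
            sample_families += random.sample(state[d][a], k=10)  # stage 3: families within the area

    print(len(sample_families))   # 3 x 2 x 10 = 60 families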
4.8 TYPES OF NON-PROBABILITY SAMPLING :
The following are techniques of non-probability sampling :
a) Purposive Sampling
b) Convenience Sampling
c) Quota Sampling
d) Snowball Sampling
A) Purposive Sampling : In this sampling method, the researcher
selects a "typical group" of individuals who might represent the
larger population and then collects data from this group. For
example, if a researcher wants to survey the attitude towards the
teaching profession of teachers teaching students from the lower socio-economic stratum, he or she might survey the teachers teaching in
schools catering to students from slums (more specifically, teachers
teaching in Municipal schools) with the assumption that since all
teachers teaching in Municipal schools cater to students from the
lower socio-economic stratum, they are representative of all the
teachers teaching students from lower socio-economic stratum.
B) Convenience Sampling : It refers to the procedures of obtaining
units or members who are most conveniently available. It consists
of units which are obtained because cases are readily available. In
selecting the incidental sample, the researcher determines the
required sample size and then simply collects data on that number
of individuals who are available easily.
C) Quota Sampling : The selection of the sample is made by the
researcher, who decides the quotas for selecting sample from
specified sub groups of the population. Here, the researcher first
identifies those categories which he or she feels are important to
ensure the representativeness of the population, then establishes a
sample size for each category, and finally selects individuals on an
availability basis. For example, an interviewer might need data from 40 adults and 20 adolescents in order to study students' television viewing habits. He or she will therefore go out and select 20 adult men and 20 adult women, and 10 adolescent girls and 10 adolescent boys, and interview them about their television viewing habits.
D) Snowball Sampling : In snowball sampling, the researcher identifies and selects available respondents who meet the criteria for inclusion in his/her study. After the data have been collected from a subject, the researcher asks for referrals to other individuals who would also meet the criteria and represent the population of concern.
Check Your Progress - II
Q.1) Differentiate between probability and non-probability sampling.
Q.2) Discuss the types of probability sampling.
Q.3) Discuss the types of non-probability sampling.
4.9 LET US SUM UP :
In this unit we discussed the concepts of population, sample and sampling. The need for sampling and the advantages and disadvantages of sampling were discussed, and the characteristics of a good sample were elaborated. In the second part, the types of probability and non-probability sampling were detailed.
References :
1) Kulbir Singh Siddhu (1992), Methodology of Research in Education, Sterling Publishers Private Limited, p. 252.
2) C.R. Kothari (1991), Research Methodology – Methods and Techniques, Wiley Eastern Limited, p. 68.



5
DESCRIPTIVE RESEARCH
Unit Structure
5.0 Objectives
5.1 Meaning of Descriptive Research
5.2 Correlational Research
5.3 Causal-Comparative Research
5.4 Document Analysis
5.5 Ethnography
5.6 Case Study
5.7 Analytical Method
5.0 OBJECTIVES :
After reading this unit, the student will be able to :
(a) State the nature of descriptive research
(b) Explain how to conduct correlational research
(c) Explain how to conduct causal-comparative research
(d) Explain how to conduct case study research
(e) Explain the concept of documentary research
(f) Explain how to conduct ethnographic research
(g) Explain the concept of analytical research
5.1 NATURE OF DESCRIPTIVE RESEARCH:
Descriptive research attempts to describe, explain and interpret conditions of the present, i.e. 'what is'. The purpose of descriptive research is to examine a phenomenon that is occurring at a specific place(s) and time. Descriptive research is concerned with conditions, practices, structures, differences or relationships that exist, opinions held, processes that are going on or trends that are evident.
Types of Descriptive Research Methods
In the present unit, the following descriptive research
methods are described in detail:
1. Correlational Research
2. Causal-Comparative Research
3. Case Study
4. Ethnography
5. Document Analysis
6. Analytical Method
5.2 CORRELATIONAL RESEARCH :
Correlational research describes what exists at the moment
(conditions, practices, processes, structures etc.) and is therefore,
classified as a type of descriptive method. Nevertheless, these
conditions, practices, processes or structures described are markedly
different from the way they are usually described in a survey or an
observational study.
Correlational research involves collecting data to determine whether, and to what extent, a relationship exists between two or more quantifiable variables. Correlational research uses numerical data to explore relationships between two or more variables. The degree of relationship is expressed in terms of a coefficient of correlation. If a relationship exists between variables, it implies that scores on one variable are associated with or vary with the scores on another variable. The exploration of the relationship between variables provides insight
into the nature of the variables themselves as well as an
understanding of their relationships. If the relationships are
substantial and consistent, they enable a researcher to make
predictions about the variables.
Correlational research is aimed at determining the nature,
degree and direction of relationships between variables or using
these relationships to make predictions. Correlational studies
typically investigate a number of variables expected to be related to
a major, complex variable. Those variables which are not found to
be related to this major, complex variable are omitted from further
analysis. On the other hand, those variables which are found to be
related to this major, complex variable are further analysed in a
causal-comparative or experimental study so as to determine the
exact nature of the relationship between them.
In a correlational study, hypotheses or research questions are
stated at the beginning of the study. The null hypotheses are often
used in a correlational study.
Correlational study does not specify cause-and-effect
relationships between variables under consideration. It merely
specifies concomitant variations in the scores on the variables. For
example, there is a strong relationship between students' scores on academic achievement in Mathematics and their scores on academic achievement in Science. This does not suggest that one of these variables is the cause and the other is the effect. In fact, a third variable, viz., students' intelligence, could be the cause of students' academic achievement in both Mathematics and Science.
Steps of a Correlational Research
1. Selection of a Problem: Correlational study is designed (a) to
determine whether and how a set of variables are related, or
(b) to test the hypothesis of an expected relationship between or among a set of two or more variables. The variables to be
included in the study need to be selected on the basis of a
sound theory or prior research or observation and experience.
There has to be some logical connection between the
variables so as to make interpretations of the findings of the
study more meaningful, valid and scientific. A correlational
study is not done just to find out what exists: it is done for the
ultimate purpose of explanation and prediction of phenomena.
If a correlational study is done just to find out what exists, it is usually known as a 'shot gun' approach and the findings of such a study are very difficult to interpret.
2. Selection of the Sample and the Tools: The minimum
acceptable sample size should be 30, as statistically, it is
regarded as a large sample. The sample is generally selected
using one of the acceptable sampling methods. If the validity
and the reliability of the variables to be studied are low, the
measurement error is likely to be high and hence the sample
size should be large. Thus it is necessary to ensure that valid
and reliable tools are used for the purpose of collecting the
data. Moreover, suppose you are studying the relationship
between classroom environment and academic achievement
of students. If your tool measuring classroom environment
focuses only on the physical aspects of the classroom and not
its psycho-social aspects, then your findings would indicate a
relationship only between academic achievement of students
and the physical aspects of the classroom environment and
not the entire classroom environment, since the physical aspects alone are not a comprehensive and reliable measure of classroom environment. Thus the measurement instruments should be
valid and reliable.
3. Design and Procedure: The basic design of a correlational
study is simple. It requires scores obtained on two or more
variables from each unit of the sample and the correlation
coefficient between the paired scores is computed which
indicates the degree and direction of the relationship between
variables.
4. Interpretation of the Findings: In a study designed to explore or test hypothesized relationships, a correlation coefficient is interpreted in terms of its statistical significance (a brief computational sketch of steps 3 and 4 follows).
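As an optional illustration of steps 3 and 4 (assuming the SciPy library is available; the paired scores are hypothetical and not taken from any real study), the correlation coefficient and its statistical significance can be computed as follows.

    from scipy import stats

    # Hypothetical paired scores of ten students (in practice, n >= 30 is recommended)
    maths_scores   = [56, 61, 70, 45, 88, 73, 64, 59, 80, 67]
    science_scores = [52, 65, 74, 50, 84, 70, 60, 55, 78, 72]

    # Pearson correlation coefficient and the p-value used to judge its significance
    r, p_value = stats.pearsonr(maths_scores, science_scores)
    print(f"r = {r:.2f}, p = {p_value:.4f}")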
Correlational research is of the following two types:
(a) Relationship Studies: These attempt to gain insight into
variables that are related to complex variables such as
academic performance, self-concept, stress, achievement
motivation or creativity.
(b) Prediction Studies: These are conducted to facilitate
decisions about individuals or to aid in various types of
selection. They are also conducted to determine predictive
validity of measuring tools as well as to test variables
hypothesized to be predictors of a criterion variable.
Some questions that could be examined through correlational
research are as follows:
1. How is job satisfaction of a teacher related to the extent of
autonomy available in job?
2. Is there a relationship between Socio-Economic Status of
parents and their involvement with the school?
3. How well do Common Entrance Test Scores for admission to
B.Ed. reflect / predict teacher effectiveness?
Check Your Progress - I
(a) State the meaning of correlational research.
(b) Explain the steps of correlational research.
5.3 CAUSAL-COMPARATIVE RESEARCH :
It is a type of descriptive research since it describes conditions that already exist. It is a form of investigation in which the researcher has no direct control over the independent variables, as their expression has already occurred or because they are essentially non-manipulable. It also attempts to identify reasons or causes of pre-existing differences in groups of individuals, i.e. if a researcher observes that two or more groups are different on a variable, he tries to identify the main factor that has led to this difference. Another name for this type of research is ex post facto research (which in Latin means 'after the fact'), since both the hypothesised cause and the effect have already occurred and must be studied in retrospect.
Causal-comparative studies attempt to identify cause-effect relationships, whereas correlational studies do not. Causal-comparative studies involve comparison, whereas correlational studies involve relationship. However, neither method provides researchers with true experimental data. On the other hand, causal-comparative and experimental research both attempt to establish cause-and-effect relationships and both involve comparisons. In an experimental study, the researcher selects a random sample and then randomly divides the sample into two or more groups. Groups are assigned to the treatments and the study is carried out. However, in causal-comparative research, individuals are not randomly assigned to treatment groups because they already were in groups before the research began. In experimental research, the independent variable is manipulated by the researcher, whereas in causal-comparative research, the groups are already formed and already different on the independent variable.
Inferences about cause-and-effect relationships are made
without direct intervention, on the basis of concomitant variation of
independent and dependent variables. The basic causal-comparative
method starts with an effect and seeks possible causes. For example, suppose a researcher observes that the academic achievement of students differs across schools. He may hypothesise the possible cause for this as the type of management of the schools, viz. private-aided, private-unaided, or government schools (local or state or any other). He therefore decides to conduct a causal-comparative research in which academic achievement of students is the effect that has already occurred and school type by management is the possible hypothesised cause. This approach is known as retrospective causal-comparative research since it starts with the effects and investigates the causes.
In another variation of this type of research, the investigator starts with a cause and investigates its effect on some other variable, i.e. such research is concerned with the question 'what is the effect of X on Y when X has already occurred?' For example, what long-term effect has occurred on the self-concept of students who are grouped according to ability in schools? Here, the investigator hypothesises that students who are grouped according to ability in schools are labelled 'brilliant', 'average' or 'dull', and this over a period of time could lead to unduly high or unduly poor self-concept in them. This approach is known as prospective causal-comparative research since it starts with the causes and investigates the effects. However, retrospective causal-comparative studies are far more common in educational research.
Causal-comparative research involves two or more groups
and one independent variable. The goal of causal-comparative
research is to establish cause-and-effect relationships just like an
experimental research. However, in causal-comparative research, the
researcher is able to identify past experiences of the subjects that are
consistent with a ‗treatment‘ and compares them with those subjects
who have had a different treatment or no treatment. The causal-comparative research may also involve a pre-test and a post-test. For instance, a researcher wants to compare the effect of 'Environmental Education' in the B.Ed. syllabus on student-teachers' awareness of environmental issues and problems and their attitude towards environmental protection. Here, a researcher can develop and administer a pre-test before the group is taught the paper on 'Environmental Education' and a post-test after it has been taught. At the same time, the pre-test as well as the post-test are also administered to a group which was not taught the paper on 'Environmental Education'. This is essentially a non-experimental research as there is no manipulation of the treatment although it involves a pre-test and a post-test. In this type of research, the groups are not randomly assigned to exposure to the paper on 'Environmental Education'. Thus it is possible that
other variables could also affect the outcome variables. Therefore, in
a causal-comparative research, it is important to think whether
differences other than the independent variable could affect the
results.
In order to establish cause-and-effect in a causal-comparative
research, it is essential to build a convincing rational argument that
the independent variable is influencing the dependent variable. It is
also essential to ensure that other uncontrolled variables do not have
an effect on the dependent variable. For this purpose, the researcher
should try to draw a sample that minimises the effects of other
extraneous variables. According to Picciano, "In stating a hypothesis in a causal-comparative study, the word 'effect' is frequently used."
Conducting a Causal-Comparative Study
Although the independent variable is not manipulated, there
are control procedures that can be exercised to improve
interpretation of results.
Design and Procedure
The researcher selects two groups of participants, accurately
referred to as comparison groups. These groups may differ in two ways as follows:
(i) One group possesses a characteristic that the other does not.
(ii) Each group has the characteristic, but to differing degrees or amounts.
The following points should also be noted:
(iii) Definition and selection of the comparison groups are very important parts of the causal-comparative procedure.
(iv) The independent variable differentiating the groups must be clearly and operationally defined, since each group represents a different population.
(v) In causal-comparative research the random sample is selected from two already existing populations, not from a single population as in experimental research.
(vi) As in experimental studies, the goal is to have groups that are as similar as possible on all relevant variables except the independent variable.
(vii) The more similar the two groups are on such variables, the more homogeneous they are on everything but the independent variable.
Control Procedures
Lack of randomization, manipulation, and control are
all sources of weakness in a causal-comparative study.
Random assignment is probably the single best way to
try to ensure equality of the groups.
A problem is the possibility that the groups are
different on some other important variable (e.g.
gender, experience, or age) besides the identified
independent variable.
Matching
Matching is another control technique.
If a researcher has identified a variable likely to influence
performance on the dependent variable, the researcher may
control for that variable by pair-wise matching of
participants.
For each participant in one group, the researcher finds a
participant in the other group with the same or very similar
score on the control variable.
If a participant in either group does not have a suitable match,
the participant is eliminated from the study.
The resulting matched groups are identical or very similar
with respect to the identified extraneous variable.
The problem becomes serious when the researcher attempts to
simultaneously match participants on two or more variables.
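Purely as an illustration of the pair-wise matching idea described above (the IQ scores and the tolerance of 2 points are assumed values, not part of the original text), matching could be sketched as follows.

    # Pair-wise matching of two pre-existing groups on a control variable (IQ)
    group_a = {"A1": 102, "A2": 115, "A3": 98, "A4": 124}   # participant -> IQ
    group_b = {"B1": 101, "B2": 99, "B3": 116, "B4": 90}

    matched_pairs = []
    available_b = dict(group_b)
    for a, iq_a in group_a.items():
        # find the closest still-unmatched participant in the other group
        candidate = min(available_b, key=lambda b: abs(available_b[b] - iq_a), default=None)
        if candidate is not None and abs(available_b[candidate] - iq_a) <= 2:
            matched_pairs.append((a, candidate))
            del available_b[candidate]      # each participant is used at most once
        # participants without a suitable match are eliminated from the study

    print(matched_pairs)   # [('A1', 'B1'), ('A2', 'B3'), ('A3', 'B2')]; A4 is dropped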
Comparing Homogeneous Groups or Subgroups
To control extraneous variables, groups that are homogeneous
with respect to the extraneous variable are compared.
This procedure may lower the number of participants and
limit the generalisability of the findings.
A similar but more satisfactory approach is to form subgroups
within each group that represent all levels of the control
variable.
Each group might be divided into two or more subgroups on
the basis of high, average, and low levels of the 'independent variable'.
Suppose the independent variable in the study is students' IQ. The subgroups then will comprise high, average and low levels of IQ. The existence of comparable subgroups in each
group controls for IQ.
In addition to controlling for the variable, this approach also
permits the researcher to determine whether the independent
variable affects the dependent variable differently at different
levels of the control variable.
The best approach is to build the control variable right into
the research design and analyze the results in a statistical
technique called factorial analysis of variance.
A factorial analysis allows the researcher to determine the
effect of the independent variable and the control variable on
the dependent variable both separately and in combination.
It permits determination of whether there is interaction
between the independent variable and the control variable
such that the independent variable operates differently at
different levels of the control variable.
Independent variables in a causal-comparative research can be of the following types:

Type of Variable : Examples
Organismic Variables : Age; Gender; Religion; Caste
Ability Variables : Intelligence; Scholastic Ability; Specific Aptitudes
Personality Variables : Anxiety Level; Stress-proneness; Aggression Level; Emotional Intelligence; Introversion / Extroversion; Self-Esteem; Self-Concept; Academic or Vocational Aspirations; Brain Dominance; Learning, Cognitive or Thinking Styles; Psycho-Social Maturity
Home Background Related Variables : Home Environment; Socio-Economic Status; Educational Background of Parents; Economic Background of Parents; Employment Status of Parents; Single Parent v/s Both Parents; Employment Status of Mother (Working or Non-Working); Birth Order; No. of Siblings
School Related Variables : School Environment; Classroom Environment; Teacher Personality; Teaching Style; Leadership Style; School Type by Management (Private-aided v/s Private-unaided v/s Government); School Type by Gender (Single-sex v/s Co-educational); School Type by Denomination (Run by a non-religious organisation v/s Run by a religious organisation one of whose objectives is to propagate a specific religion); School Type by Board Affiliation (SSC, CBSE, ICSE, IB, IGCSE); School Size; Per Student Expenditure; Socio-Economic Context of the School
The Value of Causal-Comparative Research: In a large majority
of educational research especially in the fields of sociology of
education and educational psychology, it is not possible to
manipulate independent variables due to ethical considerations
especially when one is dealing with variables such as anxiety,
intelligence, home environment, teacher personality, negative
reinforcement, equality of opportunity and so on. It is also not
possible to control such variables as in an experimental research. For studying such topics and their influence on students, the causal-comparative method is the most appropriate.
The Weaknesses of Causal-Comparative Research: There are
three major limitations of causal-comparative research. These
include, (a) lack of control or the inability to manipulate independent
variables methodologically, (b) the lack of power to assign subjects
randomly to groups and (c) the danger of inappropriate
interpretations. The lack of randomization, manipulation, and
control factors make it difficult to establish cause-and-effect
relationships with any degree of confidence.
The statistical techniques used to compare groups in a causal-comparative research include the t-test when two groups are to be
compared and ANOVA when more than two groups are to be
compared. The technique of ANCOVA may also be used in case
some other variables likely to influence the dependent variable need
to be controlled statistically. Sometimes, chi square is also used to
compare group frequencies, or to see if an event occurs more
frequently in one group than another.
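These comparisons can be illustrated with a brief Python sketch (assuming the SciPy library is available; the post-test scores of the three hypothetical methods below are invented purely for the example and carry no substantive meaning).

    from scipy import stats

    # Hypothetical post-test scores of students taught by three methods
    method_x = [62, 70, 68, 74, 59, 66, 71, 65]
    method_y = [72, 78, 69, 81, 75, 70, 77, 74]
    method_z = [58, 64, 60, 67, 62, 59, 65, 61]

    # t-test: comparing two groups
    t, p = stats.ttest_ind(method_x, method_y)
    print(f"t = {t:.2f}, p = {p:.4f}")

    # One-way ANOVA: comparing more than two groups
    f, p = stats.f_oneway(method_x, method_y, method_z)
    print(f"F = {f:.2f}, p = {p:.4f}")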
Use of Analysis of Covariance (ANCOVA) : It is used to adjust initial group differences on variables used in causal-comparative and experimental research studies. Analysis of covariance adjusts scores on a dependent variable for initial differences on some other variable related to performance on the dependent variable. Suppose we were doing a study to compare two methods, X and Y, of teaching sixth standard students to solve mathematical problems, and the group taught by method Y had an initial advantage, such as higher scores on a pre-test of mathematical ability. Covariance analysis statistically adjusts the scores of method Y to remove this initial advantage so that the results at the end of the study can be fairly compared, as if the two groups had started equally.
Check Your Progress - II
(a) State the meaning of causal-comparative research.
(b) Explain the steps and procedure of conducting causal-comparative research.
(c) Explain the strengths and weaknesses of causal-comparative
research.
(d) Give examples of causal-comparative research in education.
5.4 DOCUMENTARY ANALYSIS:
Documentary analysis is closely related to historical research, since in both we study existing documents. But it differs from historical research in that historical research emphasises the study of the past, whereas in descriptive research the emphasis is on the study of the present. Descriptive research in the field of education may focus on describing existing school practices, attendance rates of students, health records, and so on.
The method of documentary analysis enables the researcher
to include large amounts of textual information and systematically
identify its properties. Documentary analysis today is a widely used
research tool aimed at determining the presence of certain words or
concepts within texts or sets of texts. Researchers quantify and
analyze the presence, meanings and relationships of such words and
concepts, then make inferences about the messages within the texts,
the writer(s), the audience and even the culture and time of which
these are a part. Documentary analysis could be defined as a
research technique for the objective, systematic, and quantitative
description of manifest content of communications. It is a technique
for making inferences by objectively and systematically identifying
specified characteristics of messages. The technique of documentary
analysis is not restricted to the domain of textual analysis, but may
be applied to other areas such as coding student drawings or coding
of actions observed in videotaped studies, analyzing past documents
such as memos, minutes of the meetings, legal and policy statements
and so on. In order to allow for replication, however, the technique
can only be applied to data that are durable in nature. Texts in
documentary analysis can be defined broadly as books, book
chapters, essays, interviews, discussions, newspaper headlines and
articles, historical documents, speeches, conversations, advertising,
theater, informal conversation, or really any occurrence of
communicative language. Texts in a single study may also represent
a variety of different types of occurrences.
Documentary analysis enables researchers to sift through
large amount of data with comparative ease in a systematic fashion.
It can be a useful technique for allowing one to discover and
describe the focus of individual, group, institutional or social
attention. It also allows inferences to be made which can then be
corroborated using other methods of data collection. Much
documentary analysis research is motivated by the search for
techniques to infer from symbolic data what would be too costly, no
longer possible, or too obtrusive by the use of other techniques.
These definitions illustrate that documentary analysis emphasises an
integrated view of speech/texts and their specific contexts.
Document analysis is the systematic exploration of written
documents or other artefacts such as films, videos and photographs.
In pedagogic research, it is usually the contents of the artefacts,
rather than say, the style or design, that are of interest.
Why analyse documents?
Documents are an essential element of day-to-day work in
education. They include:
Student essays
Exam papers
Minutes of meetings
Module outlines
Policy documents
In some pedagogic research, analysis of relevant documents
will inform the investigation. If used to triangulate, or give another
perspective on a research question, results of document analysis may
complement or refute other data. For example, policy documents in
an institution may be analysed and interviews with staff or students
and observation of classes may suggest whether or not new policies
are being implemented. A set of data from documents, interviews
and observations could contribute to a case study of a particular
aspect of pedagogy.
How can documents be analysed?
The content of documents can be explored in systematic ways
which look at patterns and themes related to the research question(s).
For example, in making a case study of deep and surface learning in
a particular course, the question might be
'How has deep learning been encouraged in this course in the
last three years?'
Minutes of course meetings could be examined to see whether
or how much this issue has been discussed; Student handouts could
be analysed to see whether they are expressed in ways which might
encourage deep learning. Together with other data-gathering
activities such as student questionnaires or observation of classes, an
action research study might then be based on an extended research
question so that strategies are implemented to develop deep learning.
In the example of deep learning, perhaps the most obvious
way to analyse the set of minutes would be to use a highlighting pen
every time the term 'deep learning' was used. You might also choose
to highlight 'surface learning', a term with an implied relationship to
deep learning. You might also decide, either before starting the
analysis, or after reading the documents, that there are other terms or
inferences which imply an emphasis on deep learning. You might
therefore go through the documents again, selecting additional
references.
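As a simple, assumed sketch of this kind of term counting (not a prescribed procedure; the extracts below are invented for the illustration), the occurrences of the two terms could be tallied as follows.

    import re

    # Hypothetical extracts from three years of course-meeting minutes
    minutes = {
        2021: "Discussion on assessment. Deep learning tasks were proposed ...",
        2022: "Surface learning concerns raised; deep learning strategies agreed ...",
        2023: "Routine administrative matters only ...",
    }

    terms = ["deep learning", "surface learning"]
    for year, text in minutes.items():
        counts = {t: len(re.findall(t, text, flags=re.IGNORECASE)) for t in terms}
        print(year, counts)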
The levels of analysis will vary but a practitioner-researcher
will need to be clear and explicit about the rationale for, and the
approach to, selection of content.
Advantages and disadvantages of document analysis
Robson (2002) points out the advantages and disadvantages
of content analysis. An advantage is that documents are unobtrusive
and can be used without imposing on participants; they can be
checked and re-checked for reliability.
A major problem is that documents may not have been
written for the same purposes as the research and therefore
conclusions will not usually be possible from document analysis
alone.
Check Your Progress - III
(a) State the meaning of documentary research.
(b) Explain the applications of documentary research.
5.5 ETHNOGRAPHY:
Meaning
Ethnographic studies are usually holistic, founded on the idea
that human beings are best understood in the fullest possible context,
including the place where they live, the improvements they have
made to that place, how they make a living and gather food, housing,
energy and water for themselves, what their marriage customs are,
what language(s) they speak and so on. Ethnography is a form of
research focusing on the sociology of meaning through close field
observation of socio-cultural phenomena. Typically, the
ethnographer focuses on a community (not necessarily geographic,
considering also work, leisure, classroom or school groups and other
communities). Ethnography may be approached from the point of
view of art and cultural preservation and as a descriptive rather than
analytic endeavour. It essentially is a branch of social and cultural
anthropology. The emphasis in ethnography is on studying an entire
culture. The method starts with selection of a culture, review of the
literature pertaining to the culture, and identification of variables of
interest - typically variables perceived as significant by members of
the culture. Ethnography is an enormously wide area with an
immense diversity of practitioners and methods. However, the most
common ethnographic approach is participant observation and
unstructured interviewing as a part of field research. The
ethnographer becomes immersed in the culture as an active
participant and records extensive field notes. In an ethnographic
study, there is no preset limit on what will be observed and interviewed, and no real end point, as is the case with grounded theory.
Hammersley and Atkinson define ethnography as, "We see
the term as referring primarily to a particular method or sets of
methods. In its most characteristic form it involves the ethnographer
participating, overtly or covertly, in people's lives for an extended
period of time, watching what happens, listening to what is said,
asking questions—in fact, collecting whatever data are available to
throw light on the issues that are the focus of the research." Johnson
defines ethnography as "a descriptive account of social life and
culture in a particular social system based on detailed observations
of what people actually do."
Assumptions in an Ethnographic Research
According to Garson, these are as follows:
a. Ethnography assumes that the principal research interest is
primarily affected by community cultural understandings. The
methodology virtually assures that common cultural
understandings will be identified for the research interest at
hand. Interpretation is apt to place great emphasis on the causal
importance of such cultural understandings. There is a
possibility that an ethnographic focus will overestimate the role
of cultural perceptions and underestimate the causal role of
objective forces.
b. Ethnography assumes an ability to identify the relevant
community of interest. In some settings, this can be difficult.
Community, formal organization, informal group and
individual-level perceptions may all play a causal role in the
subject under study and the importance of these may vary by
time, place and issue. There is a possibility that an
ethnographic focus may overestimate the role of community
culture and underestimate the causal role of individual
psychological or of sub-community (or, for that matter, extra-community) forces.
c. Ethnography assumes that the researcher is capable of
understanding the cultural mores of the population under study,
has mastered the language or technical jargon of the culture and
has based findings on comprehensive knowledge of the culture.
There is a danger that the researcher may introduce bias toward
perspectives of his or her own culture.
d. While not inherent to the method, cross-cultural ethnographic
research runs the risk of falsely assuming that given measures
have the same meaning across cultures.
Characteristics of Ethnographic Research:
According to Hammersley and Sanders, ethnography is
characterized by the following features :
(a) People's behaviour is studied in everyday contexts.
(b) It is conducted in a natural setting.
(c) Its goal is more likely to be exploratory rather than evaluative.
(d) It is aimed at discovering the local person's or 'native's' point of
view, wherein, the native may be a consumer or an end-user.
(e) Data are gathered from a wide range of sources, but observation
and/or relatively informal conversations are usually the
principal ones.
(f) The approach to data collection is unstructured in that it does not
involve following through a predetermined detailed plan set up
at the beginning of the study nor does it determine the
categories that will be used for analysing and interpreting the
soft data obtained. This does not mean that the research is
unsystematic. It simply means that initially the data are
collected in raw form and in as wide an amount as is feasible.
(g) The focus is usually a single setting or group of a relatively
small size. In life history research, the focus may even be a
single individual.
(h) The analysis of the data involves interpretation of the meanings
and functions of human actions and mainly takes the form of
verbal descriptions and explanations, with quantification and
statistical analysis playing a subordinate role at most.
(i) It is cyclic in nature concerning data collection and analysis. It is
open to change and refinement throughout the process as new
learning shapes future observations. As one type of data
provides new information, this information may stimulate the
researcher to look at another type of data or to elicit
confirmation of an interpretation from another person who is
part of the culture being studied.
Guidelines for Conducting Ethnography
Following are some broad guidelines for conducting fieldwork:
1. Be descriptive in taking field notes. Avoid evaluations.
2. Collect a diversity of information from different perspectives.
3. Cross-validate and triangulate by collecting different kinds of
data obtained using observations, interviews, programme
documentation, recordings and photographs.
4. Capture participants' views of their own experiences in their
own words. Use quotations to represent programme
participants in their own terms.
5. Select key informants carefully. Draw on the wisdom of their
informed perspectives, but keep in mind that their
perspectives are limited.
6. Be conscious of and perceptive to the different stages of
fieldwork. (a) Build trust and rapport at the entry stage.
Remember that the researcher-observer is also being observed
and evaluated. (b) Stay attentive and disciplined during the
more routine middle-phase of fieldwork. (c) Focus on pulling
together a useful synthesis as fieldwork draws to a close. (d)
Be well-organized and meticulous in taking detailed field
notes at all stages of fieldwork. (e) Maintain an analytical
perspective grounded in the purpose of the fieldwork: to
conduct research while at the same time remaining involved
in experiencing the observed setting as fully as possible. (f)
Distinguish clearly between description, interpretation and
judgment. (g) Provide formative feedback carefully as part of
the verification process of fieldwork. Observe its effect. (h)
Include in your field notes and observations reports of your
own experiences, thoughts and feelings. These are also field
data. Fieldwork is a highly personal experience. The meshing
of fieldwork procedures with individual capabilities and
situational variation is what makes fieldwork a highly
personal experience. The validity and meaningfulness of the
results obtained depend directly on the observer's skill,
discipline, and perspective. This is both the strength and
weakness of observational methods.
Techniques Used in Conducting Ethnography
These include the following :
A. Listening to conversations and interviewing. The researcher
needs to make notes or audio-record these.
B. Observing behaviour and its traces, making notes and
mapping patterns of behaviour, sketching of relationship
between people, taking photographs, video-recordings of
daily life and activities and using digital technology and web
cameras.
Stages in Conducting Ethnography
According to Spradley, following are the stages in conducting
an ethnographic study :
1. Selecting an ethnographic project.
2. Asking ethnographic questions and collecting ethnographic
data.
3. Making an ethnographic record.
4. Analysing ethnographic data and conducting more research as
required.
5. Outlining and writing an ethnography.
Steps of Conducting Ethnography
According to Spradley, ethnography is a non-linear research
process but is rather, a cyclical process. As the researcher develops
questions and uncovers answers, more questions emerge and the
researcher must move through the steps again.
According to Spradley, following are the steps of conducting an
ethnographic study (However, all research topics may not follow all
the steps listed here) :
1. Locating a social situation. The scope of the topic may vary from the 'micro-ethnography' of a single social situation to the 'macro-ethnography' of a complex society. According to Hymes, there are three levels of ethnography, including (i) 'comprehensive ethnography' which documents an entire culture, (ii) 'topic-oriented ethnography' which looks at aspects of a culture and (iii) 'hypothesis-oriented ethnography' which begins with an idea about why something happens in a culture. Suppose you want to conduct
research on classroom environment. This step requires that
you select a category of classroom environment and identify
social and academic situations in which it is used.
2. Collecting data. There are four types of data collection used
in ethnographic research, namely, (a) watching or being part
of a social context using participant and non-participant
observation and noted in the form of on observer notes, logs,
diaries, and so on, (b) asking open and closed questions that
cover identified topics using semi-structured interviews, (c)
asking open questions that enable a free development of
conversation using unstructured interviews and (d) using
collected material such as published and unpublished
documents, photographs,
papers, videos and assorted
artefacts, letters, books or reports. The problem with such
data is that the more you have, greater is the effort required to
analyse. Moreover, as the study progresses, the amount of data increases, making it more difficult to analyse the data sharply. Yet more data lead to better codes, categories,
theories and conclusions. What is 'enough' data is subject to
debate and may well be constrained by the time and resource
the researcher has available. Deciding when and where to
collect data can be a crucial decision. A profound analysis at
one point may miss others, whilst a broad encounter may miss
critical finer points. Several deep dives can be a useful
method. Social data can be difficult to access due to ethics,
confidentiality and determination necessary in such research.
There is often less division of activity phases in qualitative
research and the researcher may be memoing and coding as
he proceeds with the study. The researcher usually uses
theoretical and selective sampling for data collection.
3. Doing participant observation. Formulate open questions
about the social situations under study. Malinowski opines
that ethnographic research should begin with 'foreshadowed problems'. These problems are questions that researchers
bring to a study and to which they keep an open eye but to
which they are not enslaved.
Collect examples of the
classroom environment. Select research tools/techniques.
Spradley provides a matrix of questions about cultural space,
objects, acts, activities, events, time, actors, goals and feelings
that researchers can use when just starting the study.
4. Making an ethnographic record. Write descriptions of
classroom environment and the situations in which it is used.
5. Making descriptive observations. Select method for doing
analysis.
6. Making domain analysis. Discover themes within the data and
apply existing theories, if any, as applicable. Domain
analysis requires the researcher to first choose one semantic
relationship such as 'causes' or 'classes'. Second, you select
a portion of your data and begin reading it and while doing
so, fill out a domain analysis worksheet where you list all the
terms that fit the semantic relationship you chose. Now
formulate structural questions for each domain. Structural
questions occur less frequently as compared to descriptive
questions in normal conversation. Hence they require more
framing. Types of structural questions include the following :
(i)
Verification and elicitation questions, such as (a) verification of hypotheses (Is the teacher-student relationship conducive?), (b) domain verification (Are there different types of teacher-student relationships? What are the different types?), (c) verification of included terms (Is a teachers' strike an illegal activity?) and (d) verification of semantic relationship (Is teaching beautiful?).
(ii) Frame substitution. This requires starting with real
sentence like "you get a lot of brickbats in
administration". Then ask, can you think of any other
terms that go in that sentence instead of brickbats? You
get a lot of _____ in administration. (This can be done
systematically by giving them list of terms to choose
from).
(iii) Card sorts. Write phrases or words on cards. Then lay
them out and ask the questions mentioned above. The
researcher can ask which words are similar. Testing
hypotheses about relations between domains and
between domains and items. Like: "Are there different
kinds of classroom climates?" If yes, it is a domain.
Then ask "what kinds of classroom climates are there?‖
The final step in domain analysis is to make a list of all
the hypothetical domains you have identified, the
relationships in these domains and the structural
questions that follow your analysis.
7. Making focussed observations.
8. Making a taxonomic analysis. Taxonomy is a scientific
process of classifying things and arranging them in groups or
a set of categories (domains) organised on a single semantic
relationships. The researcher needs to test his taxonomies
against data given by informants. Make comparisons of two
or three symbols such as word, event, constructs.
9. Making selected observations.
10. Making a componential analysis which is a systematic search
for the attributes or features of cultural symbols that
distinguish them from others and give them meaning. The
basic idea in componential analysis is that all items in a
domain can be decomposed into combinations of semantic
features which combine to give the item meaning.
11. Discovering cultural themes. A theme is a postulate or
position, explicit or implicit, which is directly or indirectly
approved and promoted in a society. Strategies of discovering
cultural themes include (i) in-depth study of culture, (ii)
making a cultural inventory, (iii) identifying and analysing
components of all domains, (iv) searching for common
elements across all domains such as gender, age, SES groups
etc., (v) identifying domains that clearly show a strong pattern
of behaviour, (vi) making schema of cultural scene and (vii)
identifying generic (etic) codes usually functional such as
social conflict, inequality, cultural contradictions in the
institutional social system, strategies of social control,
managing interpersonal relations, acquiring status in the
institution and outside, solving educational and administrative
problems and so on.
12. Taking a cultural inventory.
13. Writing an ethnography
Guidelines for Interviewing
According to Patton, following are some useful guidelines
that can be used for effective interviewing :
1. Throughout all phases of interviewing, from planning through
data collection to analysis, keep centred on the purpose of the
research endeavour. Let that purpose guide the interviewing
process.
2. The fundamental principle of qualitative interviewing is to
provide a framework within which respondents can express their
own understandings in their own terms.
3. Understand the strengths and weaknesses of different types of
interviews: the informal conversational interview; the interview
guide approach; and the standardized open-ended interview.
4. Select the type of interview (or combination of types) that is most
appropriate to the purposes of the research effort.
5. Understand the different kinds of information one can collect
through interviews: behavioural data; opinions; feelings;
knowledge; sensory data; and background information.
6. Think about and plan how these different kinds of questions can
be most appropriately sequenced for each interview topic,
including past, present, and future questions.
7. Ask truly open-ended questions.
8. Ask clear questions, using understandable and appropriate
language.
9. Ask one question at a time.
10. Use probes and follow-up questions to solicit depth and detail.
11. Communicate clearly what information is desired, why that
information is important, and let the interviewee know how the
interview is progressing.
12. Listen attentively and respond appropriately to let the person
know he or she is being heard.
13. Avoid leading questions.
14. Understand the difference between a depth interview and an
interrogation. Qualitative evaluators conduct depth interviews;
police investigators and tax auditors conduct interrogations.
15. Establish personal rapport and a sense of mutual interest.
16. Maintain neutrality toward the specific content of responses. You
are there to collect information not to make judgments about that
person.
17. Observe while interviewing. Be aware of and sensitive to how
the person is affected by and responds to different questions.
18. Maintain control of the interview.
19. Tape record whenever possible to capture full and exact
quotations for analysis and reporting.
20. Take notes to capture and highlight major points as the interview
progresses.
21. As soon as possible after the interview check the recording for
malfunctions; review notes for clarity; elaborate where
necessary; and record observations.
22. Take whatever steps are appropriate and necessary to gather valid
and reliable information.
23. Treat the person being interviewed with respect. Keep in mind
that it is a privilege and responsibility to peer into another
person's experience.
24. Practice interviewing. Develop your skills.
25. Enjoy interviewing. Take the time along the way to stop and
"hear" the roses.
Writing Ethnographic Research Report
The components of an ethnographic research report should
include the following :
1. Purpose / Goals / Questions.
2. Research Philosophy.
3. Conceptual/Theoretical Framework
4. Research Design / Model.
5. Setting/Circumstances.
6. Sampling Procedures.
7. Background and Experience of Researcher.
8. Role/s of Researcher.
9. Data Collection Method.
10. Data Analysis/Interpretation.
11. Applications/Recommendations.
12. Presentation Format and Sequence.
Advantages of Ethnography
These are as follows:
1. It provides the researcher with a much more comprehensive
perspective than other forms of research
2. It is also appropriate to behaviours that are best understood by
observing them within their natural environment (dynamics)
Disadvantages of Ethnography
These are as follows:
1. It is highly dependent on the researcher‘s observations and
interpretations
2. There is no way to check the validity of the researcher‘s
conclusion, since numerical data is rarely provided
3. Observer bias is almost impossible to eliminate
4. Generalizations are almost non-existent since only a single
situation is observed, leaving ambiguity in the study.
5. It is very time consuming.
Check Your Progress - IV
(a) State the characteristics of ethnographic research.
(b) Explain the steps of conducting ethnographic research.
5.6 CASE STUDY:
Case study research is descriptive research that involves
describing and interpreting events, conditions, circumstances or
situations that are occurring in the present. Case study seeks to
engage with and report the complexities of social activity in order to
represent the meanings that individual social actors bring to their
social settings. It excels at bringing us to an understanding of a
complex issue or object and can extend experience or add strength to
what is already known through previous research. Case studies
emphasize detailed contextual analysis of a limited number of events
or conditions and their relationships. Darwin's theory of evolution
was based, in essence, on case study research, not experimentation,
for instance. In education, this is one of the most widely used
qualitative approaches of research.
According to Odum, "The case study method is a technique
by which individual factor, whether it be an institution or just an
episode in the life of an individual or a group, is analyzed in its
relationship to any other in the group." Its distinguishing
characteristic is that each respondent (individual, family,
classroom, institution, cultural group) is taken as a unit and the
unitary nature of individual case is the focus of analysis. It seeks to
engage with and report the complexity of social and/or educational
activity in order to represent the meanings that individual actors in
the situation bring to that setting. It assumes that social and/or
educational reality is created through social interactions, situated in
specific contexts and histories, and seeks first to identify and
describe and then to analyse and theorise. It assumes that things may
not be as they seem and involves in-depth analysis so as to understand a
‗case‘ rather than generalising to a larger population. It derives much
of its philosophical underpinnings and methodology from
ethnography, symbolic interactionism, ethnomethodology and
phenomenology. It follows the ‗social constructivism‘ perspective of
social sciences.
Most case studies are qualitative in nature. Social scientists have
made a wide use of this
qualitative research method to examine contemporary real-life
situations and provide the basis for the application of ideas and
extension of methods. Yin defines the case study research method as
an empirical inquiry that investigates a contemporary phenomenon
within its real-life context; when the boundaries between
phenomenon and context are not clearly evident; and in which
multiple sources of evidence are used.
However, some case studies can also be quantitative in nature
especially if they deal with cost-effectiveness, cost-benefit analysis
or institutional effectiveness. Many case studies have been done by
combining the qualitative as well as the quantitative approaches in
which initially the qualitative approach has been used and data have
been collected using interviews and observations followed by the
quantitative approach. The approach of case studies ranges from
general field studies to interview of a single individual or group. A
case study can be precisely focused on a topic or can include a broad
view of life and society. For example, a case study can focus on the
life of a single gifted student, his actions, behaviour, abilities and so
on in his school or it can focus on the social life of an individual
including his entire background, experiences, motivations and
aspirations that influence his behaviour in society. Examples of case
studies include a ‗case‘ of curriculum development, of innovative
training, of disruptive behaviour, of an ineffective institution and so
on.
Case studies can be conducted to develop a ‗research-based‘
theory with which to analyse situations: a theory of, for and about
practice. It is essential to note that since most case studies focus on a
single unit or small number of units, the findings cannot be
generalised to larger populations. However, its utility should not be
underestimated. A case study is conducted with a fundamental
assumption that though human behaviour is situation-specific and
individualised, there is a predictable uniformity in basic human
nature.
A case study can be conducted to explore, to describe or to
explain a phenomenon. It could be a synchronic study in which data
are collected at one point of time or it could be longitudinal in
nature. It could be conducted at a single site or it could be multi-site.
In other words, it is inherently a very flexible methodology.
A case typically refers to a person, either a learner, a teacher,
an administrator or an entity, such as a school, a university, a
classroom or a programme. In some policy-related research, the case
could be a country. Case studies may be included in larger
quantitative or qualitative studies to provide a concrete illustration of
findings, or they may be conducted independently, either
longitudinally or in a more restricted temporal period. Unlike
ethnographic research, case studies do not necessarily focus on
cultural aspects of a group or its members. Case study research may
focus on a single case or multiple cases.
Characteristics of a Case Study
Following are the characteristics of a case study:
1. It is concerned with an exhaustive study of particular instances.
A case is a particular instance of a phenomenon. In education,
examples of phenomena include educational programmes,
curricula, roles, events, interactions, policies, process, concept
and so on. Its distinguishing feature is that each respondent
(individual, class, institution or cultural group) is treated as a
unit.
2. It emphasises the study of interrelationship between different
attributes of a unit.
3. According to Cooley, case study deepens our perception and
gives us a clear insight into life… It gets at behaviour directly
and not by an indirect or abstract approach.
4. Each case study needs to have a clear focus which may include
those aspects of the case on which the data collection and
analysis will concentrate. The focus of a study could be a specific
topic, theme, proposition or a working hypothesis.
5. It focuses on the natural history of the unit under study and its
interaction with the social world around it.
6. The progressive records of personal experience in a case study
reveal the internal strivings, tensions and motivations that lead
to specific behaviours or actions of individuals or the unit of
analysis.
7. In order to ensure that the case study is intensive and in-depth,
data are collected over a long period of time from a variety of
sources including human and material and by using a variety of
techniques such as interviews and observations and tools such as
questionnaires, documents, artefacts, diaries and so on.
8. According to Smith, as cited by Merriam (1998), these studies
are different from other forms of qualitative research in that
they focus on a 'single unit' or a 'bounded system'. A system is
said to be a bounded system if it includes a finite or limited
number of cases to be interviewed or observed within a definite
amount of time.
9. It may be defined as an in-depth study of one or more instances
of a phenomenon- an individual, a group, an institution, a
classroom or an event- with the objective of discovering
meaning, investigating processes, gaining an insight and an
understanding of an individual, group or phenomena within the
context in such a way that it reflects the real life context of the
participants involved in the phenomena. These individuals,
groups, institutions, classrooms or events may represent the unit
of analysis in a case study. For example, in a case study, the unit
of analysis may be a classroom and the researcher may decide to
investigate the events in three such classrooms.
10. According to Yin, case studies typically involve investigation of
a phenomenon for which the boundaries between the
phenomenon and its context are not clearly evident. These
boundaries should be clearly delineated as part of the case study.
He further emphasises the importance of conducting a case study
in its real life context. In education, the classroom or the school
is the real life context of a case study as the participants of such a
case study are naturally found in these settings.
11. There are two major perspectives in a case study, namely, the etic
perspective and the emic perspective. The etic perspective is that
of the researcher (i.e. the outsider‘s perspective) whereas the
emic perspective is that of the research participants including
teachers, principals and students (i.e. the insider‘s perspective).
This enables the researcher to study the local, immediate
meanings of social actions of the participants and to study how
they view the social situation of the setting and the phenomenon
under study. A comprehensive case study includes both the
perspectives.
12. A case study can be a single-site study or a multi-site study.
13. Cases are selected on the basis of dimensions of a theory
(pattern-matching) or on diversity on a dependent phenomenon
(explanation-building).
14. No generalization is made to a population beyond cases similar
to those studied.
15. Conclusions are phrased in terms of model elimination, not
model validation. Numerous alternative theories may be
consistent with data gathered from a case study.
16. Case study approaches have difficulty in terms of evaluation of
low-probability causal paths in a model as any given case
selected for study may fail to display such a path, even when it
exists in the larger population of potential cases.
17. Acknowledging multiple realities in qualitative case studies, as
is now commonly done, involves discerning the various
perspectives of the researcher, the case/participant, and others,
which may or may not converge.
Components of a Case Study Design
According to Yin, following are the five component elements
of a case study design:
1. Study questions
2. Study propositions (if any are being used) or theoretical
framework
3. Identification of the units of analysis
4. The logical linking of the data to the propositions (or theory)
5. The criteria for interpreting the findings.
The purpose of a case study is a detailed examination of a
specific activity, event, institution, or person/s. The hypotheses or
the research questions are stated broadly at the beginning of the
study. A study‘s questions are directed towards ‗how‘ and ‗why‘
considerations and enunciating and defining these are the first task
of the researcher. The study‘s propositions could be derived from
these ‗how‘ and ‗why‘ questions. These propositions could help in
developing a theoretical focus. However, all case studies may not
have propositions. For instance, an exploratory case study may give
only a purpose statement or criteria that could guide the research
process. The unit of analysis defines what the case study is focussing
on, whether an individual, a group, an institution, a city, a society, a
nation and so on. Linkages between the data and the propositions (or
theory) and the criteria for interpreting the findings are usually the
least developed aspects of case studies (Yin, 1994).
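These five components are often written down as a case study protocol before fieldwork begins. Purely as an illustration, the following Python sketch shows one hypothetical way of recording such a protocol in a structured form; the field names and the example study are invented and are not part of Yin's formulation.

# Illustrative sketch only: Yin's five design components recorded as a
# simple structured protocol. Field names and example values are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class CaseStudyProtocol:
    study_questions: List[str]            # the 'how' / 'why' questions
    propositions: List[str]               # theoretical propositions, if any
    units_of_analysis: List[str]          # e.g. a classroom, a school, a programme
    data_proposition_links: List[str]     # how evidence will bear on each proposition
    interpretation_criteria: List[str]    # e.g. pattern matching, rival explanations

protocol = CaseStudyProtocol(
    study_questions=["How do teachers in School X adapt a new curriculum?"],
    propositions=["Adaptation depends on the support available to teachers."],
    units_of_analysis=["three classrooms in School X"],
    data_proposition_links=["match interview themes against the support proposition"],
    interpretation_criteria=["pattern matching across the three classrooms"],
)
print(protocol.study_questions[0])

Writing the protocol down in this way simply forces the researcher to state each of the five components explicitly before data collection starts.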
Types of Case Study Designs
Yin (1994) and Winston (1997) have identified several types
of case study designs. These are as follows:
(A) Exploratory Case Study Design: In this type of case study
design, field work and data collection are carried out before
determining the research questions. It examines a topic on
which there is very little prior research. Such a study is a
prelude to a large social scientific study. However, before
conducting such an exploratory case study, its organisational
framework is designed in advance so as to ensure its
usefulness as a pilot study of a larger, more comprehensive
research. The purpose of the exploratory study is to elaborate
a concept, build up a model or advocate propositions.
(B) Explanatory Case Study Design: These are useful for providing
explanations of the phenomena under consideration.
These explanations are patterns implying that one type of
variation observed in a case study is systematically related to
another variation. Such a pattern can be a relational pattern
or a causal pattern depending on the conceptual framework
of the study. In complex studies of organisations and
communities, multivariate cases are included so as to
examine a plurality of influences. Yin and Moore (1988)
suggest the use of a pattern-matching technique in such a
research wherein several pieces of information from the
same case may be related to some theoretical proposition.
(C) Descriptive Case Study Design: A descriptive case study
necessitates that the researcher present a descriptive theory
which establishes the overall framework for the investigator
to follow throughout the study. This type of case study
requires formulation and identification of a practicable
theoretical framework before articulating research questions.
It is also essential to determine the unit of analysis before
beginning the research study. In this type of case study, the
researcher attempts to portray a phenomenon and
conceptualize it, including statements that recreate a
situation and context as much as possible.
(D) Evaluative Case Study Design: Often, in responsive
evaluation, quasi-legal evaluation and expertise-based
evaluation, a case study is conducted to make judgments.
This may include a deep account of the phenomenon being
evaluated and identification of most important and relevant
constructs, themes and patterns. Evaluative case studies can
be conducted on educational programmes funded by the
Government such as ―Sarva Shiksha Abhiyan‖ or
Orientation Programmes and Refresher Courses conducted
by Academic Staff Colleges for college teachers or other
such programmes organised by the State and Local
Governments for secondary and primary school teachers.
Steps of Conducting a Case Study
Following are the steps of a case study:
1. Identifying a current topic which is of interest to the researcher.
2. Identifying research questions and developing hypotheses
(if any).
3. Determining the unit of sampling and the number of units. Select
the cases.
4. Identifying sources, tools and techniques of data collection.
These could include interviews, observations, documentation,
student records and school databases. Collect data in the field.
5. Evaluating and Analysing Data.
6. Report writing.
Each of these is described in detail in the following
paragraphs.
Step 1 : Identifying a current topic which is of interest to the
researcher
In order to identify a topic for case study research, the
following questions need to be asked:
(i) What kind of topics can be addressed using the case study
method?
(ii) How can a case study research be designed, shaped and
scoped in order to answer the research question adequately?
(iii) How can the participation of individuals/institutions be
obtained for the case study research?
(iv) How can case study data be obtained from case participants
in an effective and efficient manner?
(v) How can rigour be established in the case study research
report so that it is publishable in academic journals?
According to Maxwell, there are eight different factors that
could influence the goals of a case study as follows:
1. To grasp the meanings that events, situations, experiences and
actions have for participants in the study which is part of the
reality that the researcher wants to understand.
2. To understand the particular context within which the
participants are operating and its influence on their actions, in
addition to the context in which one‘s research participants
are embedded. Qualitative researchers also take into account
the contextual factors that influence the research itself.
3. To identify unanticipated phenomena and influences that
emerge in the setting and to generate new grounded theories
about such aspects.
4. To grasp the process by which events and actions take place
that lead to particular outcomes.
5. To develop causal explanations based on process theory
(which involves tracing the process by which specific aspects
affect other aspects), rather than variance theory (which
involves showing a relationship between two variables as in
quantitative research).
6. To generate results and theories that are understandable and
experientially credible, both to the participants in the study
and to others.
7. To conduct formative evaluations designed to improve
practice rather than merely to assess the value of a final
programme or product.
8. To engage in collaborative and action research with
practitioners and research participants.
Step 2 : Identifying research questions and developing
hypotheses (if any)
The second step in case study research is to establish a
research focal point by forming questions about the situation or
problem to be studied and determining a purpose for the study. The
research objective in a case study is often a programme, an entity, a
person or a group of people. Each objective is likely to be connected
to political, social, historical and personal issues providing extensive
potential for questions and adding intricacy to the case study. The
researcher attains the objective of the case study through an in-depth
investigation using a variety of data gathering methods to generate
substantiation that leads to understanding of the case and answers
the research questions. Case study research is usually aimed at
answering one or more questions which begin with "how" or "why."
The questions are concerned with a limited number of events or
conditions and their inter-relationships. In order to formulate
research questions, literature review needs to be undertaken so as to
establish what research has been previously conducted. This helps in
refining the research questions and making them more insightful.
The literature review, definition of the purpose of the case study and
early determination of the significance of the study for potential
audience for the final report direct how the study will be designed,
conducted and publicly reported.
Step 3 : Determining the unit of sampling and the number of
units. Select the cases.
Sampling Strategies in a Case Study : In a case study design,
purposeful sampling is done which has been defined by Patton as
"selecting information-rich cases for study in-depth." In case study
research, purposeful sampling is preferred over probability sampling
as it enhances the usefulness of the information acquired from
small samples. Purposive samples are expected to be conversant and
informative about the phenomenon under investigation.
A case study requires a plan for choosing sites and
participants in order to start data collection. The plan is known as an
‗emergent design‘ in which research decisions depend on preceding
information. This necessitates purposive sampling, data collection
and partial, simultaneous analysis of data as well as interactive rather
than distinct sequential steps.
During the phase of designing a case study research, the
researcher determines whether to use single or multiple real-life
cases to examine in-depth and which instruments and data collection
techniques to use. When multiple cases are used, each case is treated
as a single case. Each case's conclusions can then be used as
contributing information to the entire study, but each case remains a
single case for collecting data and analysis. Exemplary case studies
carefully select cases and examine the choices from among the many
research tools available so as to enhance the
validity of the study. Careful selection helps in determining
boundaries around the case. The researcher must determine whether
to study ‗unique cases‘, or ‗typical cases‘. He also needs to decide
whether to select cases from different geographical areas. It is
necessary at this stage to keep in mind the goals of the study so as to
identify and select relevant cases and evidence that will fulfil the
goals of the study and answer the research questions raised.
Selecting multiple or single cases is a key element, but a case study
can include more than one unit of embedded analysis. For example,
a case study may involve study of a single type of school (for
example, a Municipal School) and a school belonging to this type.
This type of case study involves two levels of analysis and increases
the complexity and amount of data to be gathered and analyzed.
Multiple cases are often preferable to single cases, particularly when
the cases may not be representative of the population from which
they are drawn and when a range of behaviours/profiles,
experiences, outcomes, or situations is desirable. However, including
multiple cases limits the depth with which each case may be
analyzed and also has implications for the structure and length of the
final report.
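The idea of purposeful selection can be made concrete with a small, hypothetical sketch in Python: a roster of candidate schools is filtered against explicit 'information-rich' criteria and the number of cases is then capped, since each additional case reduces the depth with which any one case can be analysed. The roster, criteria and threshold below are invented for illustration only.

# Illustrative sketch only: criterion-based purposeful selection of cases.
# The candidate roster and the selection criteria are hypothetical.
candidates = [
    {"school": "School A", "type": "municipal", "years_in_programme": 6, "willing": True},
    {"school": "School B", "type": "municipal", "years_in_programme": 1, "willing": True},
    {"school": "School C", "type": "private",   "years_in_programme": 8, "willing": False},
    {"school": "School D", "type": "municipal", "years_in_programme": 5, "willing": True},
]

def is_information_rich(c):
    # These criteria stand in for the researcher's judgement of 'information-rich':
    # a municipal school with at least three years in the programme that is
    # willing to participate.
    return c["type"] == "municipal" and c["years_in_programme"] >= 3 and c["willing"]

MAX_CASES = 2   # more cases add breadth but limit the depth of each analysis
selected = [c["school"] for c in candidates if is_information_rich(c)][:MAX_CASES]
print(selected)   # ['School A', 'School D']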
Step 4 : Identifying sources, tools and techniques of data
collection
Sources of Data in a Case Study : A case study method involves
using multiple sources and techniques in the data collection process.
The researcher determines in advance what evidence to collect and
which techniques of data analysis to use so as to answer the research
questions. The data collected are normally qualitative and soft
data, but they may also be quantitative. Data are collected from
primary documents such as school records and databases, students‘
records, transcripts and results, field notes, self-reports or think-aloud protocols and memoranda. Techniques used to collect data can
include surveys, interviews, questionnaires, documentation review,
observation and physical artefacts. These multiple tools and
techniques of data collection add texture, depth, and multiple
insights to an analysis and can enhance the validity or credibility of
the results.
Case studies may make use of field notes and databases to
categorize and reference data so that it is readily available for
subsequent re-interpretation. Field notes record feelings and intuitive
hunches, pose questions, and document the work in progress. They
record testimonies, stories and illustrations which can be used in
reporting the study. They may inform of impending preconceptions
because of the detailed exposure of the client to special attention or
give an early signal that a pattern is emerging. They assist in
determining whether or not the investigation needs to be
reformulated or redefined based on what is being observed. Field
notes should be kept separate from the data being collected and
stored for analysis.
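One simple way of keeping field notes 'categorised and referenced' for later re-interpretation is to store every note with its date, source and a set of tags, so that all notes bearing on an emerging pattern can be pulled out quickly. The short Python sketch below is a hypothetical illustration of such record keeping, not a prescribed tool; the notes and tags are invented.

# Illustrative sketch only: tagging and retrieving field notes.
field_notes = [
    {"date": "2023-07-01", "source": "classroom observation",
     "tags": ["peer tutoring", "seating"],
     "text": "Students rearranged desks to work in pairs."},
    {"date": "2023-07-03", "source": "interview with teacher",
     "tags": ["peer tutoring"],
     "text": "Teacher reports pairing stronger and weaker readers."},
]

def notes_tagged(notes, tag):
    # Return every note carrying the given tag, e.g. to check an emerging pattern.
    return [n for n in notes if tag in n["tags"]]

for note in notes_tagged(field_notes, "peer tutoring"):
    print(note["date"], "-", note["text"])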
According to Cohen and Manion, the researcher must use the
chosen data collection tools and techniques systematically and
properly in collecting the evidence. Observations and data collection
settings may range from natural to artificial, with relatively
unstructured to highly structured elicitation tasks and category
systems depending on the purpose of the study and the disciplinary
traditions associated with it.
Case studies necessitate that effective training programmes be
developed for investigators, clear protocols and procedures be
established in advance before starting the field work, and that a
pilot study be conducted before moving into the field so as to
eliminate apparent obstacles and problems. The researcher training
programme needs to cover the vital concepts of the study,
terminology, processes and methods and needs to teach researcher/s
how to apply the techniques
being used in the study accurately. The programme should also be
aimed at training researcher/s to understand how the collection of
data using multiple techniques strengthens the study by providing
opportunities for triangulation during the analysis phase of the study.
The programme should also include protocols for case study
research including time deadlines, formats for narrative reporting
and field notes, guidelines for collection of documents, and
guidelines for field procedures to be used. Investigators need to be
good listeners who can hear exactly the words being used by those
interviewed. Qualities of effective investigators also include being
able to ask good questions and interpret answers. Effective
investigators not only review documents looking for facts but also
read between the lines and pursue corroborative evidence elsewhere
when that seems appropriate. Investigators need to be flexible in
real-life situations and not feel threatened by unexpected change,
missed appointments or lack of space. Investigators need to
understand the goals of the study and comprehend the issues and
must be open to contrary findings. Investigators must also be aware
that they are going into the world of real human beings who may be
threatened or unsure of what the case study will bring.
After investigators are trained, the final advance preparation
step is to select a site for pilot study and conduct a pilot test using all
the data collection tools and techniques so that difficult and tricky
areas can be uncovered and corrected. Researchers need to anticipate
key problems and events, identify key people, prepare letters of
introduction, establish rules for confidentiality, and actively seek
opportunities to revisit and revise the research design in order to
address and add to the original set of research questions.
Throughout the design phase, researchers must ensure that
the construct validity, internal validity, external validity, and
reliability of the tools and the research method are adequate.
Construct validity requires the researcher to use the suitable
measures for the concepts being studied. Internal validity (especially
important in explanatory or causal studies) demonstrates that certain
conditions/events (causes) lead to other conditions/events (effect/s)
and necessitates the use of multiple sets of evidence from multiple
sources to reveal convergent lines of inquiry. The researcher makes
efforts to establish a chain of evidence forward and backward.
External validity reflects whether findings are generalisable beyond
the immediate case/s. The more variations in places, people and
procedures a case study can withstand and still yield the same
findings, the more will be its external validity. Techniques such as
cross-case examination and within-case examination along with
literature review help in ensuring external validity. Reliability refers
to the stability, accuracy and precision of measurement. Exemplary
case study design ensures that the procedures used are well
documented and can be repeated with the same results over and over
again. Establishing a trusting relationship with research participants,
using multiple data collection procedures, obtaining sufficient
pertinent background information about case participants and sites
and having access to or contact with the case over a period of time
are, in general, all decidedly advantageous.
Step 5 : Evaluating and analysing data
The case study research generates a huge quantity of data
from multiple sources. Hence systematic organisation of the data is
essential to prevent the researcher from losing sight of the original
research purpose and questions. Advance preparation assists in
handling huge quantity of largely soft data in a documented and
systematic manner. Researchers prepare databases for categorizing,
sorting, storing and retrieving data for analysis. The researcher
examines raw data so as to find linkages between the research object
and the outcomes with reference to the original research questions.
Throughout the evaluation and analysis process, the researcher
remains open to new opportunities and insights. The case study
method, with its use of multiple data collection methods and analysis
techniques, provides researchers with opportunities to triangulate
data in order to strengthen the research findings and conclusions.
According to Creswell, analysis of data in case study research
usually involves an iterative, spiralling or cyclical process that
proceeds from more general to more specific observations.
According to Miles and Huberman, data analysis may commence
during interviews or observations and continue during transcription,
when recurring themes, patterns and categories become apparent.
Once written records are available, analysis involves the coding of
data and the identification of prominent points or structures. Having
additional coders is highly desirable, especially in structural analyses
of discourse, texts, syntactic structures or interaction patterns
involving high-inference categories leading ultimately to the
quantification of types of items within categories. Data reduction
may include quantification or other means of data aggregation and
reduction, including the use of data matrices, tables, and figures.
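As a hedged illustration of what 'coding', the use of 'additional coders' and a simple 'data matrix' can look like in practice, the Python sketch below assigns invented category codes to a few transcript segments, tallies the frequency of each category for each coder, and computes plain percent agreement between two hypothetical coders. Real studies would use validated coding schemes and more refined agreement statistics.

# Illustrative sketch only: coding transcript segments, tallying categories
# and checking agreement between two hypothetical coders. All data invented.
from collections import Counter

segments = ["S1", "S2", "S3", "S4", "S5"]
coder_1 = {"S1": "motivation", "S2": "peer support", "S3": "motivation",
           "S4": "assessment", "S5": "peer support"}
coder_2 = {"S1": "motivation", "S2": "peer support", "S3": "assessment",
           "S4": "assessment", "S5": "peer support"}

# Frequency of categories for each coder (a very small 'data matrix').
print("Coder 1:", Counter(coder_1.values()))
print("Coder 2:", Counter(coder_2.values()))

# Simple percent agreement between the two coders.
agreements = sum(1 for s in segments if coder_1[s] == coder_2[s])
print("Percent agreement:", 100.0 * agreements / len(segments))   # 80.0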
The strategies used in analysis require researchers to move
beyond initial impressions to improve the likelihood of precise and
consistent findings. Data need to be consciously sorted in many
different ways to expose or create new insights, and researchers
will deliberately look for contradictory data to disconfirm the
analysis. Researchers
categorize, tabulate and recombine data to answer the initial research
questions and conduct cross-checking of facts and incongruities in
accounts. Focused, short, repeated interviews may be essential to
collect supplementary data to authenticate key observations or check
a fact.
Precise techniques that could be used for data analysis include
placing information into arrays, creating matrices of categories,
creating flow charts or other displays and tabulating frequency of
events. Researchers can use quantitative data to substantiate and
support the qualitative data so as to comprehend the raison d'être or
theory underlying relationships. Besides, multiple investigators
could be used to gain the advantage provided when diverse
perspectives and insights scrutinize the data and the patterns. When
the multiple observations converge, the reliability of the findings
is enhanced. Inconsistent discernments, on the other hand, require
the researchers to inquire more intensely. Moreover, the cross-case
search for patterns keeps investigators from reaching untimely
conclusions by requiring that investigators look at the data in diverse
ways. Cross-case analysis divides the data by type across all cases
investigated. One researcher then examines the data of that type
carefully. When a pattern from one data type is substantiated by the
evidence from another, the result is stronger. When substantiation
conflicts, deeper probing of the variation is necessary to identify the
cause/s or source/s of conflict. In all cases, the researcher treats the
evidence reasonably to construct analytic conclusions answering the
original "how" and "why" research questions.
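To make 'tabulating frequency of events' and the cross-case search for patterns concrete, the hypothetical Python sketch below counts coded events within each case and then flags the categories that recur in every case; converging categories of this kind are the ones a researcher would probe further, while conflicting ones call for deeper probing of the variation. The cases, events and categories are invented.

# Illustrative sketch only: a case-by-category frequency table and a simple
# cross-case check for categories observed in every case. All data invented.
from collections import Counter

coded_events = {
    "Case A": ["disruption", "peer support", "disruption", "teacher feedback"],
    "Case B": ["peer support", "teacher feedback", "teacher feedback"],
    "Case C": ["disruption", "teacher feedback", "peer support"],
}

# Frequency of each category within each case.
for case, events in coded_events.items():
    print(case, dict(Counter(events)))

# Categories present in every case are candidates for converging evidence.
common = set.intersection(*(set(events) for events in coded_events.values()))
print("Categories present in all cases:", common)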
Step 6 : Report writing
Case studies report the data in a way that transforms a
multifarious issue into one that can be understood, permitting the
reader to question and examine the study and reach an understanding
independent of the researcher. The objective of the written report is
to depict a multifaceted problem in a way that conveys an explicit
experience to the reader. Case studies should present data in a way
that leads the reader to apply the experience in his or her own real-life situation. Researchers need to pay exacting consideration to
displaying adequate evidence to achieve the reader‘s confidence that
all avenues have been explored, clearly communicating the confines
of the case and giving special attention to conflicting propositions.
In general, a research report in a case study should include the
following aspects:
A statement of the study's purpose and the theoretical context.
The problem or issue being addressed.
Central research questions.
A detailed description of the case(s) and explanation of
decisions related to sampling and selection.
Context of the study and case history, where relevant. The
research report should provide sufficient contextual
information about the case, including relevant biographical
and social information (depending on the focus), such as
teaching - learning history, students‘ and teachers‘
background, years of studying/working in the institution, data
collection site(s) or other relevant descriptive information
pertaining to the case and situation.
Issues of access to the site/participants and the relationship
between you and the research participant (case).
The duration of the study.
Evidence that you obtained informed consent, that the
participants' identities and privacy are protected, and, ideally,
that participants benefited in some way from taking part in
the study.
Methods of data collection and analysis, either manual or
computer-based data management and analysis (see Weitzman
& Miles, 1995), or other equipment and procedures used.
Findings, which may take the form of major emergent
themes, developmental stages, or an in-depth discussion of
each case in relation to the research questions; and illustrative
quotations or excerpts and sufficient amounts of other data to
establish the validity and credibility of the analysis and
interpretations.
A discussion of factors that might have influenced the
interpretation of data in undesired, unanticipated, or
conflicting ways.
A consideration of the connection between the case study and
larger theoretical and practical issues in the field is essential to
report. The report could include a separate chapter handling each
case separately or treating the case as a chronological recounting.
Some researchers report the case study as a story. During the report
preparation process, the researcher critically scrutinizes the report
trying to identify ways of making it comprehensive and complete.
The researcher could use representative audience groups to review
and comment on the draft report. Based on the comments, the
researcher could rewrite and revise the report. Some case study
researchers suggest that the report review audience should include
the participants of the study.
Strengths of Case Study Method
1. It involves detailed, holistic investigation of all aspects of the
unit under study.
2. Case study data are strong in reality.
3. It can utilise a wide range of measurement tools and
techniques.
4. Data can be collected over a period of time and is contextual.
5. It enables the researcher to assess and document not just the
empirical data but also how the subject or institution under
study interacts with the larger social system.
6. Case study reports are often written in non-technical language
and are therefore easily understood by laypersons.
7. They help in interpreting similar other cases.
Weaknesses of Case Study Method
1. The small sample size prevents the researcher from
generalising to larger populations.
2. The case study method has been criticised because the use of a
small number of cases offers no grounds for establishing the
reliability or generality of findings.
3. The intense exposure to the study of the case may bias the findings.
4. It has also been criticised as being useful only as an
exploratory tool.
5. They are often not easy to cross-check.
Yet researchers continue to use the case study research method
with success in carefully planned and crafted studies of real-life
situations, issues, and problems.
Check Your Progress - V
(a) State the meaning of case study research.
(b) Explain the characteristics of case study research.
(c) Explain the steps of case study research.
5.7 ANALYTICAL METHOD:
It involves the identification and interpretation of data already
existing in documents, pictures and artefacts. It is a form of research
in which events, ideas, concepts or artefacts are examined through
analysis of documents, records, recordings or other media. Here,
contextual information is essential for an accurate
interpretation of data. Historical research comprises the systematic
collection and analysis of documents, records and artefacts with the
objective of providing a description and interpretation of past events
or persons. Its application lies in a range of research methods such as
historical research which could use both quantitative and qualitative
data, legal analysis which focuses on selected laws and court
decisions with the objective of understanding how legal principles
and precedents apply to educational practices, concept analysis
which is carried out to understand the meaning and usage of
educational concepts (eg. school-based reforms, ability grouping,
affective teacher education) and content analysis which is carried
out to understand the meaning and identify properties of large
amounts of textual information in a systematic manner.
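Content analysis of the kind mentioned above can be illustrated with a deliberately crude sketch in Python that counts how often indicator terms for a construct occur in a set of documents. The documents and keyword lists below are invented and far simpler than the validated coding schemes a real study would require.

# Illustrative sketch only: a crude keyword-based content analysis.
import re
from collections import Counter

documents = {
    "textbook_1950": "The pupil must obey the teacher and memorise the lesson.",
    "textbook_2000": "Learners discuss, question and explore ideas with the teacher.",
}

indicators = {
    "teacher-centred": ["obey", "memorise", "drill"],
    "learner-centred": ["discuss", "question", "explore"],
}

for name, text in documents.items():
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    profile = {label: sum(words[w] for w in kws) for label, kws in indicators.items()}
    print(name, profile)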
Characteristics of Analytical Research
Following are the characteristics of analytical research:
1. It does not ‗create‘ or ‗generate‘ data through research tools
and techniques.
2. The topic of analytical research deals with the past.
3. It reinterprets existing data.
4. It predominantly uses primary sources for collecting data.
5. Internal and external criticism is used as a technique while
searching for facts and providing interpretative explanations.
6. It uses documents, relics and oral testimonies for collecting
data.
Objectives of Analytical Research
Following are the objectives of analytical research:
1. It offers understanding of the past/existing/available data.
2. It enables the researcher to shed light on existing policies by
interpreting the past.
3. It generates a sense of universal justification and underlying
principles and aims of education in a society.
4. It reinterprets the past for each age group.
5. It uses data and logic to analyse the past and demythologises
idealised conceptions of the past.
Check Your Progress - VI
(a) State the meaning of analytical research.
Suggested Readings
Auerbach, C. F. and Silverstein, L. B. Qualitative Data: An
Introduction to Coding and Analysis. New York: New
York University Press. 2003.
Baron, R. A. Psychology. India : Allyn and Bacon. 2001.
Brewer, M. (2000). Research Design and Issues of Validity. In Reis,
H. & Judd, C. (eds) Handbook of Research Methods in
Social and Personality Psychology.
Cambridge:Cambridge University Press.
Cohen, L and Manion, L. Research Methods in Education. London :
Routledge, 1989.
Creswell, J. W. Educational Research, Planning, Conducting, and
Evaluating Quantitative and Qualitative Research,
University of Nebraska : Merrill Prentice Hall, 2002.
Creswell, J. W. Qualitative Inquiry and Research Design: Choosing
among Five Traditions. Sage Publications, Inc. 1998.
Picciano, A. G. Educational Research Primer. London : Continuum.
2004.
6
HISTORICAL RESEARCH
Unit Structure:
6.0 Objectives
6.1 Introduction
6.2 Meaning
6.3 The purpose of Historical Research
6.4 Characteristics of Historical Research
6.5 Scope of Historical Research in Education
6.6 Approaches to the study of History
6.7 Steps in Historical Research
6.8 Problems and Weaknesses to be avoided in Historical Research.
6.9 Criteria of Evaluating Historical Research.
6.0 OBJECTIVES:
After reading this unit the student will be able to:
Define the meaning of historical research, its purposes and
characteristics, scope and approaches to the study of history.
Explain the steps of historical research.
State the Weaknesses to be avoided in Historical Research.
6.1 INTRODUCTION :
History usually refers simply to an account of the past of
human societies. It is the study of what “can be known…(to the
historian)… through the surviving record.” Gottschalk referred to
this as ‘history as record’, He further stated that “The process of
critically examining and analyzing the records and survivals of the
past is … called historical method. The imaginative reconstruction of
the past from the data derived by that process is called
historiography (the writing of history)”.
6.2 MEANING :
Historical research has been defined as the systematic and
objective location, evaluation and synthesis of evidence in order to
establish facts and draw conclusions about past events. It involves a
critical inquiry of a previous age with the aim of reconstructing a
faithful representation of the past. In historical research, the
investigator studies documents and other sources that contain facts
concerning the research theme with the objective of achieving
better understanding of present policies, practices, problems and
institutions. An attempt is made to examine past events or
combinations of events and establish facts in order to arrive at
conclusions concerning past events or predict future events.
Historical research is a type of analytical research. Its common
methodological characteristics include (i) identifying a research
topic that addresses past events, (ii) review of primary and
secondary data, (iii) systematic collection and objective evaluation
of data related to past occurrences with the help of techniques of
criticism for historical searches and evaluation of the information
and (iv) synthesis and explanation of findings in order to test
hypotheses concerning causes, effects or trends of these events
that may help to explain present events and anticipate future
events. Historical studies attempt to provide information and
understanding of past historical, legal and policy events. The
historical method consists of the techniques and guidelines by
which historians use historical sources and other evidences to
research and then to write history.
6.3 THE PURPOSE OF HISTORICAL RESEARCH :
Conducting historical research in education can serve several
purposes as follows:
1. It enables educationists to find out solutions to contemporary
problems which have their roots in the past. i.e. it serves the
purpose of bringing about reforms in education. The work of a
historical researcher sometimes sensitizes educators to unjust
or misguided practices in the past which may have unknowingly
continued into the present and require reform. A historical
researcher studies the past with a detached perspective and
without any ego-involvement with the past practices. Hence it
could be easier for educationists to identify misguided practices
thus enabling them to bring about reforms.
2. It throws light on present trends and can help in predicting
future trends. If we understand how an educationist or a group
of educationists acted in the past, we can predict how they will
act in future. Similarly, studying the past enables a researcher to
understand the factors / causes affecting present trends. In
order to make such future predictions reliable and trustworthy,
the historical researcher needs to identify and clearly describe in
which ways the past differs from the present context and how
the present social, economic and political situations and policies
could have an impact on the present and the future.
3. It enables a researcher to re-evaluate data in relation to
selected hypotheses, theories and generalizations that are
presently held about the past.
4. It emphasizes and analyzes the relative importance and the
effect of the various interactions in the prevailing cultures.
5. It enables us to understand how and why educational theories
and practices developed.
6.4 CHARACTERISTICS OF HISTORICAL RESEARCH
These are as follows:
1. It is not a mere accumulation of facts and data or even a
portrayal of past events.
2. It is a flowing, vibrant report of past events which involves an
analysis and explanation of these occurrences with the
objective of recapturing the nuances, personalities and ideas
that influenced these events.
3. Conducting historical research involves the process of
collecting and reading the research material collected and
writing the manuscript from the data collected. The
researcher often goes back-and-forth between collecting,
reading, and writing. i.e. the process of data collection and
analysis are done simultaneously and are not two distinct phases
of research.
4. It deals with discovery of data that already exists and does
not involve creation of data using structured tools.
5. It is analytical in that it uses logical induction.
6. It has a variety of foci such as issues, events, movements and
concepts.
7. It records and evaluates the accomplishments of individuals,
agencies or institutions
6.5 SCOPE OF HISTORICAL RESEARCH IN EDUCATION
1. General educational history of specific periods such as (a)
ancient India, (b) India during British rule, (c) Independent India etc.
2. History of specific levels of education (a) primary education,
(b) secondary education, (c) tertiary education etc. in India.
3. History of specific types of education such as (a) adult
education, (b) distance education, (c) disadvantaged education,
(d) women’s education in India.
4. Historical study of specific educational institutions such as (i)
University of Mumbai, (ii) Aligarh Muslim University and so on.
5. History of the role of the teacher in ancient India.
6. History of specific components of education such as (a)
curriculum, (b) text-books, (c) teaching-learning methods, (d)
aims and objectives of education, (e) teacher-student
relationships, (f) evaluation process and so on.
7. History of national education policies in India.
8. History of admission processes in professional / technical
courses (medicine, engineering, management) in India.
9. History of teacher education.
10. Historical biographies of major contributors to education such as
Mahatma Gandhi, Maharshi Karve, Maharshi Phule, Shri
Aurobindo, Gurudev Tagore and so on.
11. History of educational administration.
12. History of public financing of education.
13. History of educational legislation in India.
14. History of educational planning.
15. History of contemporary problems in India.
16. Historical study of the relationship between politics and
education in India.
17. Historical study of the impact of the British rule in India.
18. Comparative history of education in India and some other
country / countries.
19. Historical study of the system of state-sponsored inspection in
India.
20. Historical study of education in specific Indian states such as
Maharashtra, Tamil Nadu, Madhya Pradesh, Rajasthan etc.
In other words, historical research in education may be
concerned with an individual, a group, an idea, a movement or an
institution.
If a historical study focuses on an entire country / society /
system, i.e. if it is broad in scope, it is said to be a macro-level
historical research. On the other hand, if its focus is narrow and
includes a selective set of people or events of interest, it is said to
be a micro-level historical research.
6.6 APPROACHES TO THE STUDY OF HISTORY:
According to Monaghan and Hartman, there are four major
approaches to the study of the past:
a. Qualitative Approach : This is what most laypersons think of as
history: the search for a story inferred from a range of written or
printed evidence. The resultant history is organised
chronologically and presented as a factual tale: a tale of a person
who created reading textbooks, such as a biography of William
Holmes McGuffey (Sullivan, 1994) or the Lindley Murray family
(Monaghan, 1998) in the Western context. The sources of
qualitative history range from manuscripts such as account
books, school records, marginalia, letters, diaries and memoirs to
imprints such as textbooks, children‘s books, journals, and other
books of the period under consideration.
b. Quantitative Approach : Here, rather than relying on "history by
quotation," as the former approach has been negatively called,
researchers intentionally look for evidence that lends itself to
being counted and that is therefore presumed to have superior
validity and generalisability. Researchers have sought to estimate
the popularity of a particular textbook by tabulating the numbers
printed, based on copyright records. The assumption is that
broader questions such as the relationship between education and
political system in India or between textbooks and their influence
on children can thus be addressed more authoritatively (a small illustrative sketch of this kind of tabulation appears after this list).
c. Content Analysis : Here the text itself is the focus of
examination. This approach uses published works as its data (in
the case of history of textbooks, these might be readers, or
examples of the changing contents of school textbooks in
successive editions) and subjects them to a careful analysis that
usually includes both quantitative and qualitative aspects.
Content analysis has been particularly useful in investigating
constructs such as race, caste, etc.
d. Oral History : Qualitative, quantitative, and content approaches
use written or printed text as their database. In contrast, the
fourth approach, oral history, turns to living memory. For
instance, oral historians interested in women‘s education could
ask their respondents about their early experiences and efforts in
women‘s education.
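As a purely illustrative sketch of the quantitative approach described in (b) above, the Python example below tabulates hypothetical print-run figures for two readers from copyright-style records in order to compare their relative popularity across decades. The titles and figures are invented; real estimates would rest on careful source criticism of the records themselves.

# Illustrative sketch only: tabulating hypothetical print-run records to
# compare the relative popularity of textbooks across periods. Figures invented.
from collections import defaultdict

print_records = [
    {"title": "Reader A", "decade": "1890s", "copies_printed": 120000},
    {"title": "Reader A", "decade": "1900s", "copies_printed": 80000},
    {"title": "Reader B", "decade": "1890s", "copies_printed": 45000},
    {"title": "Reader B", "decade": "1900s", "copies_printed": 150000},
]

totals = defaultdict(int)
for record in print_records:
    totals[(record["title"], record["decade"])] += record["copies_printed"]

for (title, decade), copies in sorted(totals.items()):
    print(f"{title} ({decade}): {copies} copies")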
These four approaches are not, of course, mutually exclusive.
Indeed, historians avail themselves of as many of these as their
question, topic, and time period permit. This integration is possible
because the nature of historical research cuts across a variety of
approaches, all of which commence with the recognition of a topic
and the framing of a question. In other words, a historical study may
be quantitative in nature, qualitative in nature or a combination of
the approaches.
Its purpose can be mainly descriptive, aiming to understand
some specific development in a particular period of time in a
particular culture; or it could be explanatory, trying to test and
accept / reject widely held assumptions.
A historical investigation is conducted with objectivity and
the desire to minimize bias, distortion and prejudice. Thus, it is
similar to descriptive method of research in this aspect. Besides, it
aims at describing all aspects of the particular situation under study
(or all that is accessible) in its search for the truth. Thus, it is
holistic, comprehensive in nature and is similar to the interpretive
approach. Though it is not empirical in nature (it does not collect data
through direct observation or experimentation), it does make use of
reports (all the available written and/or oral material) and it definitely
qualifies to be a scientific activity. This is because it requires
scholarship to conduct a systematic and objective study and
evaluation and synthesis of evidence so as to arrive at conclusions.
In other words, historical research is scientific in nature.
Moreover, any competent researcher in other types of
empirical studies reviews the related literature so as to find out
prior researches and theoretical work done on a particular topic.
This requires studying journals, books, encyclopedias, unpublished
theses and so on. This is followed by interpretation of their
significance. These steps are common to empirical research and
historical research. i.e. to some extent, every researcher makes use
of the historical method in his/her research.
However, it should be mentioned here that the historical
researcher in education “discovers” already existing data from a
wide range of historical sources such as documents, relics,
autobiographies, diaries or photographs. On the other hand, in
other types of educational studies, the researcher “creates” data
through observations, measurement through tests and
experimentation. To this extent, historical research differs from
descriptive and experimental researches.
Check Your Progress - I
Q.1 Define the following:
(a) Characteristics of historical research in education.
(b) Purposes of historical research.
Q.2 (a) Give examples of research topics in historical research
(b) Explain the approaches to historical research.
6.7 STEPS IN HISTORICAL RESEARCH :
The essential steps involved in conducting a historical
research are as follows:
A. Identify a topic/subject and define the problems/questions
to be investigated.
B. Search for sources of data.
C. Evaluate the historical sources.
D. Analyze, synthesize and summarize interpreting the data /
information.
E. Write the research report.
Since most historical studies are largely qualitative in nature,
the search for sources of data, evaluating, analyzing, synthesizing
and summarizing information and interpreting the findings may not
always be discrete, separate, sequential steps, i.e. the sequence of
steps in historical research is flexible.
Let us now look at each of these steps in details.
A. Identify a Topic and Define the Problem
According to Borg, “In historical research, it is especially
important that the student carefully defines his problem and
appraises its appropriateness before committing himself too fully.
Many problems are not adaptable to historical research methods
and cannot be adequately treated using this approach. Other
problems have little or no chance of producing significant results
either because of the lack of pertinent data or because the problem
is a trivial one.”
Beach has classified the problems that prompt historical
inquiry into five types:
1. Current social issues are the most popular source of historical
problems in education. e.g. Rural education, adult and
continuing education, positive discrimination in education etc.
2. Histories of specific individuals, histories of specific educational
institutions and histories of educational movement. These
studies are often conducted with “the simple desire to acquire
knowledge about previously unexamined phenomena”.
3. A historical study of interpreting ideas or events that previously
had seemed unrelated. For example, history of educational
financing and history of aims of education in India may be
unrelated. But a person reviewing these two researches
separately may detect some relationship between the two
histories and design a study to understand this relationship.
4. A historical study aimed at synthesizing old data or merging them
with new historical facts discovered by the researcher.
5. A historical inquiry involving reinterpretation of past events that
have been studied by other historical researchers. This is known
as revisionist history.
On the other hand, in order to identify a significant research
problem, Gottschalk recommends that four questions should be
asked:
(i) Where do the events take place?
(ii) Who are the persons involved?
(iii) When do the events occur?
(iv) What kinds of human activity are involved?
The scope of the study can be determined on the basis of the
extent of emphasis placed on the four questions identified by
Gottschalk i.e. the geographical area included, the number of
persons involved, the time span included and the number and kinds
of human activities involved. Often, the exact scope and delimitation
of a study is decided by a researcher only after the relevant
material has been obtained. The selection of a topic in historical
research depends on several personal factors of the researcher
such as his/her motivation, interest, historical knowledge and
curiosity, ability to interpret historical facts and so on. If the
problem selected involves understanding an event, an institution, a
person or a past period more clearly, it should be taken up for
research.
The topic selected should be defined in terms of the types of
written materials and other resources available to you.
This should be followed by formulating a specific and
testable hypothesis or a series of research questions, if required.
This will provide a clear focus and direction to data collection,
analysis and interpretation. i.e. it provides a structure to the study.
According to Borg, without hypotheses, historical research
often becomes little more than an aimless gathering of facts.
B. Search for Sources of Data
Historical research is not empirical in that it does not include
direct observation of events or persons. Here, the researcher
interprets past events on the basis of traces they have left. He uses
the evidence of past acts and thoughts. Thus, though he/she does
not use his/her own observations, he/she relies on other people’s
observations. The researcher’s job here is to test the truthfulness of
the reports of other people’s observations. These observations are
obtained from several sources of historical data. Let us now try to
discuss various sources of historical data.
Sources of Historical Data
These sources are broadly classified into two types:
(a) Primary Sources: Gottschalk defines a primary data source as
“the testimony of any eyewitness, or of a witness by any other of
the senses, or of a mechanical device like the Dictaphone – that is,
of one who … was present at the events of which he tells. A primary
source must thus have been produced by a contemporary of the
events it narrates.” In other words, primary sources are tangible
materials that provide a description of an historical event and were
produced shortly after the event happened. They have a direct
physical relationship to the event being studied. Examples of
primary sources include newspaper reports, letters, public
documents, court decisions, personal diaries, autobiographies,
artifacts and eyewitness’s verbal accounts. These primary sources
of data can be divided into two broad categories as follows:
(i) The remains or relics of a given historical period. These could
include photographs, coins, skeletons, fossils, tools, weapons,
utensils, furniture, buildings and pieces of art and culture (objets
d’art). Though these were not originally meant for transmitting
information to future generations they could prove very useful
sources in providing reliable and sound evidence about the past.
Most of these relics provide non-verbal information.
(ii) Those objects that have a direct physical relationship with the events being reconstructed. This includes documents such as laws, files, letters, manuscripts, government resolutions, charters, memoranda, wills, newspapers, magazines, journals, films, government or other official publications, maps, charts, log-books, catalogues, research reports, records of minutes of meetings, recordings, inscriptions, transcriptions and so on.
(b) Secondary Sources: A secondary source is one in which the person describing the event was not actually an eyewitness or participant, but obtained his/her description or narration from another person or source. This other person may or may not be a primary source. Secondary sources, thus, do not have a direct physical relationship with the event being studied. They include data which are not original. Examples of secondary sources include textbooks, biographies, encyclopedias, reference books, replicas of art objects and paintings and so on. It is possible that secondary sources contain errors due to the passing of information from one source to another. These errors could get multiplied when the information passes through many sources, thereby resulting in an error of great magnitude in the final data. Thus, wherever possible, the researcher should try to use primary sources of data. However, that does not reduce the value of secondary sources.
In conclusion, the various sources of historical information, both primary and secondary, can be summarized as follows:
Sources of Historical Information

Documents (written / printed):
- diaries
- memoirs
- notebooks
- yearbooks
- memos
- logbooks
- laws
- textbooks

Quantitative Records:
- school budgets
- student attendance records
- staff attendance records
- students' marks
- school results

Oral Records (spoken words):
- ballads
- tales
- sagas
- oral interviews of eyewitnesses and participants

Relics (physical or visual objects):
- school buildings
- school furniture
- pictures
- drawings
- architectural plans
It must be mentioned here that the branch of historical
research using all or some types of oral records is known as oral
history.
It should also be mentioned here that some objects can be classified as documents or relics depending on how they are
used in a historical study. For example, in a research study on how a
historical figure (a politician, a freedom fighter or a social reformer)
is presented in textbooks of different periods, the textbook will be
classified as a document as the emphasis here is on analyzing its
content-matter given in a verbal form. On the other hand, in a
research study on printing methods in the past, the textbook can be
used as a relic as the focus here is not on analyzing its contents but
on its physical, outward characteristics or features.
Searching for Historical Data
The procedure of searching for historical data should be
systematic and pre-planned. The researcher should know what
information he needs so as to identify important sources of data
and provide a direction to his search for relevant data. Using his
knowledge, imagination and resourcefulness, he needs to explore
the kinds of data required, persons involved, institutions involved.
This will help him to identify the kinds of records he requires and whom he should interview. Since historical research is mainly qualitative in nature, all the primary and secondary sources cannot
be identified in advance. It is possible that as one collects some
data, analyzes and interprets it, the need for further pertinent data
may arise depending on the interpretive framework. This will
enable him to identify other primary or secondary sources of data.
The search for sources of data begins with wide reading of preliminary sources including published bibliographies, biographies, atlases, specialized chronologies and dictionaries of quotations and terms. Good university and college libraries tend to have a great deal of such preliminary material. This will enable a researcher to identify valuable secondary sources on the topic being studied, such as books on history relating to one's topic. For extensive materials on a subject, the researcher may need to go to a large research library or a library with extensive holdings on a specific subject. Such secondary materials could include other historians' conclusions and interpretations, historical information, and references to other secondary and primary sources. The historical researcher needs to evaluate the secondary sources for their validity and authenticity.
Now the researcher should turn his attention to the primary
sources. These are usually available in the institution or the archives, especially if the source concerns data pertaining to the distant past or
data pertaining to events in which the chief witnesses are either
dead or inaccessible. In case of data concerning the recent past, the
researcher can contact witnesses or participants themselves in
order to interview them and/or study the documents possessed by
them.
However, it is not possible for a historical researcher to
examine all the material available. Selecting the best sources of
data is important in a historical study. In a historical study the
complete “population” of available data can never be obtained or
known. Hence the sample of materials examined must always be a
purposive one. What it represents and what it fails to represent
should be considered. The researcher needs to identify and use a
sample that should be representative enough for wider
generalization.
C. Evaluation of the Historical Sources
The data of historical sources is subject to two types of
evaluation. These two types are: (i) external evaluation or criticism
and (ii) internal evaluation or criticism. Let us now look at these in
detail.
(i) External Criticism of Data:
This is sometimes also known as lower criticism of data.
External criticism regards the issue of the authenticity of the data; it is primarily concerned with the question: is the source of data genuine? External criticism seeks to determine whether the document or the artifact that the researcher is studying is genuinely valid primary data, since it is possible to come across counterfeit documents or artifacts. External criticism of the sources of data is of paramount importance in establishing the credibility of the research. Although, theoretically, the main purpose of external criticism is the establishment of historical truth, in reality its actual operation is chiefly restricted to the negative role, i.e. to identify and expose forgeries, frauds, hoaxes, deceptions and counterfeits. In order to identify such forgeries, the researcher needs to look at problems pertaining to plagiarism, alterations of the document, insertions, deletions or unintentional omissions. This will reveal whether the historical source of data is authentic or not. Establishing the authenticity of documents may involve carbon-dating, handwriting analysis, identification of ink and paper, vocabulary usage, signatures, script, spelling, names of places, writing style and other considerations. In other words, external criticism examines the document and its external features rather than the statements it contains. It tries to determine whether (a) the information it contains was available at the time the document was written, and (b) this information is consistent with what is known about the author or the period from other sources.
In other words, external criticism is aimed at answering questions about the nature of the historical source, such as: Who wrote it? Where? When? Under what circumstances? Is it original? Is it genuine? and so on.
(ii) Internal Criticism of Data:
Having established the authenticity of the source of historical data, the researcher now focuses his/her attention on the accuracy and worth of the data contained in the document. Internal criticism is concerned with the meaning of the written material. It is also known as higher criticism of data. It deals with answering questions such as: What does it mean? What was the author attempting to say? What thought was the author trying to convey? Is it possible that people would act in the way described in the document? Is it possible that the events described occurred so quickly? What inferences or interpretations could be extracted from these words? Do the financial data / figures mentioned in the document seem reasonable for that period in the past? What does the decision of a court mean? What do the words of the decision convey regarding the intent and the will of the court? Is there any (unintended) misinformation given in the document? Is there any evidence of deception? and so on. Here, the researcher needs to be very cautious so that he does not reject a statement only because the event described in the document appears to be improbable.
In addition to answering these questions, internal criticism
should also include establishing the credibility of the author of the
document. According to Travers, the following questions could be
answered so as to establish the author’s credibility : Was he a
trained or untrained observer of the event? i.e. How competent
was he? What was his relationship to the event? To what extent
was he under pressure, from fear or vanity resulting in distortion or
omission of facts? What was the intent of the writer of the
document? To what extent was he an expert at recording the
particular event? Were the habits of the author such that they
might interfere with the accuracy of recording? Was he too
antagonistic or too sympathetic to give a true picture? How long
after the event did he record his testimony? Was he able to
remember accurately? Is he in agreement with other independent
witnesses?
These questions need to be answered for two reasons :
i) Perceptions are individualized and selective. Even if eyewitnesses are competent and truthful, they could still record different descriptions of the events they witnessed or experienced.
ii) Research studies in psychology indicate that eyewitnesses can be very unreliable, especially if they are emotionally aroused or under stress at the time of the event (e.g. at the time of the demolition of the Babri Masjid or at the time of the Gujarat riots in 2002).
This brings us to the question of bias especially when life
histories or communal situations are being studied. According to
Plummer, there are three possible sources of bias as follows :
Source One : The Life History Informant
Is misinformation (unintended) given?
Has there been evasion?
Is there evidence of direct lying and deception?
Is a ‘front’ being presented?
What may the informant ‘take for granted’ and hence not
reveal?
How far is the informant ‘pleasing you’?
How much has been forgotten?
How much may be self-deception?
Source Two : The Social Scientist Researcher
Could any of the following be shaping the outcome?
(a) Attitudes of researcher : age, gender, class, race etc.
(b) Demeanour of researcher : dress, speech, body language
etc.
(c) Personality of researcher : anxiety, need for approval,
hostility, warmth etc.
(d) Attitudes of researcher: religion, politics, tolerance, general assumptions
(e) Scientific role of researcher: theory held etc. (researcher
expectancy)
Source Three : The Interaction
The joint act needs to be examined. Is bias coming from
(a) The physical setting – ‘social’ space?
(b) The prior interaction?
(c) Non-verbal communication?
(d) Vocal behavior?
Often, internal and external criticism are interdependent and
complementary processes. The internal and external criticism of
data require a high level of scholarship.
D. Analysis, Synthesis, Summarizing and Interpretation of Data:
We have seen how data can be located and evaluated. Let
us now look at how to collect and control the data so that the
greatest return from the innumerable hours spent in archives,
document rooms and libraries can be reaped. The researcher should
not only learn how to take notes but also learn how to organize the
various notes, note cards, bibliography cards and memoranda so as
to derive useful and meaningful facts for interpretation. Hence
before beginning historical research, the researcher should have a
specific and systematic plan for the acquisition, organization,
storage and retrieval of the data. Following are some suggestions
that may help you in systematizing your research efforts.
(i) Note Cards and Bibliography Cards:
It would be convenient for you to prepare bibliography cards of size 3 x 5 inches for taking down bibliographical notes. A bibliography card is valuable not only for gathering and recording information but also for locating it again at a future date, if necessary, without going back to the library again and again. Such a card contains the essential information concerning a bibliographical source. Keep plenty of such cards with you when you go to the library so that you can record valuable references encountered unexpectedly. You can also note down the document's relation to your research. A sample of a bibliographic reference card could be as follows:

Serial No. __________
You can ideally have two copies of such a bibliographic card. One copy can be arranged alphabetically according to the authors' names, whereas the other copy can be arranged as per the serial number of the card.
On the other hand, a note card can be of size 4 x 6 or 5 x 7 inches for substantive notes. It is advisable to place only one item of information on each card. Each card can be given a code so as to indicate the place / question / theme / period / person to which the note relates. These cards can then be arranged as per the question, theme, period, place or person under study so as to make analysis easier. In other words, note cards can be kept in multiple copies (e.g. in triplicate or quadruplicate) depending on the ultimate analysis of the data. Given here is a sample note card.
Main Heading : __________________________        Card No. __________
Sub-Heading : ____________________________________________________
Source : Author : _________   Year : _________   pp. _________
Title : __________            Bibliography Card No. __________
In this card, one can mention the bibliography card number, which can be referred to for further information if required. The reverse
of the card can be used if the space is found to be insufficient for
necessary information.
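Where a researcher prefers to keep such cards electronically, the same scheme can be mirrored with simple records. The sketch below is only an illustration of the idea; the field names, theme codes and card entries are hypothetical, and it simply shows bibliography cards sorted by author or serial number and note cards grouped by theme code, as suggested above.

```python
from dataclasses import dataclass

@dataclass
class BibliographyCard:
    serial_no: int             # corresponds to "Serial No." on the paper card
    author: str
    year: int
    title: str
    pages: str
    relation_to_study: str     # the document's relation to your research

@dataclass
class NoteCard:
    card_no: int
    main_heading: str
    sub_heading: str
    theme_code: str            # code for place / question / theme / period / person
    note: str
    bibliography_card_no: int  # cross-reference to the bibliography card

# A couple of purely illustrative (hypothetical) cards
bib_cards = [
    BibliographyCard(1, "Author, A.", 1950, "Village Schools in the Province", "10-25", "background on school records"),
    BibliographyCard(2, "Writer, B.", 1962, "Reports of the Education Department", "3-9", "primary source: official reports"),
]
notes = [
    NoteCard(1, "School finance", "Annual budgets", "QUANT", "Budget figures for 1950-55 ...", 2),
    NoteCard(2, "Curriculum change", "Textbook revision", "THEME1", "Committee appointed in 1953 ...", 1),
]

# One copy arranged alphabetically by author, the other by serial number
by_author = sorted(bib_cards, key=lambda c: c.author)
by_serial = sorted(bib_cards, key=lambda c: c.serial_no)

# Note cards grouped by theme code, ready for analysis
quantitative_notes = [n for n in notes if n.theme_code == "QUANT"]
print([c.author for c in by_author])
print([n.main_heading for n in quantitative_notes])
```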
(ii) Summary of Quantitative Data:
Usually historical studies are chiefly qualitative in nature
since the data obtained includes verbal and / or symbolic material
from an institution, society or culture’s past. However, when the
study involves quantitative data pertaining to the past events, you
need to think carefully about the relevance of the data to your
research. This is because recording and analysis of quantitative
data is time-consuming and sometimes expensive. Examples of
quantitative data in historical research include records of students’
and teachers’ attendance rates, examination results, financial
information such as budgets, income and expenditure statements,
salaries, fees and so on. Content analysis is one of the methods
involving quantitative data. The basic goal of content analysis is to
take a verbal, non-quantitative document and transform it into
quantitative data.
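As a minimal illustration of this idea, the sketch below counts how often terms belonging to a few categories appear in a passage of text. The category lists and the sample passage are invented for the example; real content analysis would rest on a carefully constructed and validated coding scheme.

```python
from collections import Counter
import re

# Hypothetical coding scheme: each category is defined by a small list of terms.
categories = {
    "discipline": ["punishment", "rules", "obedience"],
    "child_centredness": ["play", "activity", "interest", "freedom"],
}

document = """The new curriculum emphasised activity and play,
and the interest of the child, rather than obedience to rigid rules."""

# Count every word in the (lower-cased) document
words = Counter(re.findall(r"[a-z]+", document.lower()))

# Transform the verbal document into category frequencies
frequencies = {
    name: sum(words[term] for term in terms)
    for name, terms in categories.items()
}
print(frequencies)   # {'discipline': 2, 'child_centredness': 3}
```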
(iii) Interpretation of Historical Data:
Once the researcher establishes the validity and authenticity
of data, interpretation of the facts in the light of the topic of
research is necessary. This step requires caution, imagination,
ingenuity, insight and scholarliness. The scientific status of his
study depends on these characteristics. The researcher needs to be
aware of his/her biases, values, prejudices and interests as these
could influence the analysis and interpretation of the data as well
as the perceptions of the researcher. He needs to make sense out
of the multitude of data gathered which generally involves a
synthesis of data in relation to a hypothesis or question or theory
rather than mere accumulation or summarization. In doing so, he /
she should avoid biases and unduly projecting his / her own
personality onto the data. The data should be fitted into a logically
parsimonious structure. The researcher should be clear about the interpretative framework so as to become sensitive towards bias in the interpretations of other historical researchers who have conducted research on the same or similar topics.
In historical research, ‘causes’ are in the form of antecedents
or precipitating factors. They are not ‘causes’ in the strictly
scientific sense. Such antecedents are always complex and hence
the researcher should avoid over simplification while interpreting
them. Past events are mainly in the form of human behaviour.
Therefore ‘causes’ in historical research could be interpreted in
terms of motives of the participants involved.
The researcher needs to identify the motives of the people
involved in the event under study while interpreting the data.
These motives may be multiple in nature and interact with each
other. This makes interpretation of the data a difficult task. For
example, suppose a new government decides to change the prevalent textbooks. The motives here could be many, such as: its political ideology does not match the prevalent textbooks, it has a personal grudge against the authors of the prevalent textbooks, or the ministers concerned want to derive personal glory out of their actions. These reasons may influence each other, making the task of interpretation of data difficult.
Historical researchers can make use of concepts from other social and behavioural science disciplines in analyzing and interpreting data. Some examples of such concepts are bureaucracy, role, institution (from sociology), leadership, institutional effectiveness (from management), culture (from anthropology) and motive, personality, attitude, etc. (from psychology).
The researcher can also make use of the concepts of historical time and historical space while interpreting the data.
The concept of historical time makes use of a chronology of events, i.e. the researcher needs to identify the chain of events (chronology) of substantive history and then try to understand the meaning of these events, the relationships among the events and the relationship of the events to the research topic. A researcher studying more than one set of chronological data within the same time frame may gain increased insight into multiple events and their causes.
The concept of historical space deals with ‘where’ the event
originated, spread or culminated. This could provide a different
insight into the meaning of the data.
The historical researcher can also use analogy as a source of hypotheses or as a frame of reference for interpretation, i.e. he/she can draw parallels between one historical event and other events. Here, one has to be aware of similarities, differences as well as exceptions while comparing two historical events; otherwise, such an extrapolation will be unreliable. Also, it is risky to interpret an event by comparing it with another event in another culture at another time.
iv) Making Inferences and Generalizations in Historical Research:
In order to identify and explain the ‘cause/s’ of a historical event, the researcher must be aware of his/her assumptions, which are then used in ascribing causation to subsequent events. Some
examples of such assumption could include (i) history repeats itself,
or (ii) historical events are unique. The researcher must make clear
whether his / her analysis is based on the former assumption or the
latter.
Some examples of ‘causes’ of historical events identified in prior research include (i) strong ideology (e.g. Maharshi Karve's ideology of women's education), (ii) actions of certain key persons (e.g. Mohamed Ali Jinnah's actions leading to India's partition), (iii) advances in science and technology (e.g. use of computers in education), and (iv) economic / geographical / psychological / sociological factors or a combination of all these (e.g. privatization of education).
The historian’s objective is not only to establish facts but also
to determine trends in the data and causes of events leading to
generalizations i.e. he / she needs to synthesize and interpret and
not merely summarize the data. These data, as in other types of research, are obtained not from the entire population of persons, settings, events or objects pertaining to the topic, but from a small sample. Moreover, this sample is selected from the remains of the past. It cannot be selected from the entire population of documents or relics that existed during the period under study.
Such remains may not be representative. This necessitates a very
careful and cautious approach in locating consistency in different
documents and relics while making generalizations. Also, the
researcher should not rely on only one document pertaining to an
individual from the past while making a generalization as it will not
be known whether the individual held a particular opinion about an
educational issue consistently or had changed it over a period of
time. If he had changed his opinion, the researcher must find out
when and how it was changed, under what conditions and what
were the consequences. This makes it imperative that the
researcher uses as many primary and secondary sources as possible
on a topic. If the evidence is limited, he needs to restrict the generalizability of his interpretations to that extent.
E. Writing the Research Report:
This task involves the highest level of scholarship. In a
historical research, data collection is flexible. Besides, due to the
relative lack of conclusive evidence on which valid generalizations
can be established, the writing of historical research has to be a
little freer so as to allow subjective interpretation of the data. (This
by no means implies distortion of truth). Thus reports of historical
research have no standard formats. The presentation of data
analysis, interpretations and the findings depend on the nature of
the problem.
There are several broad ways of reporting a historical investigation, as follows:
i) The researcher can report the historical facts as answers to
different research questions. Answer to each question could be
reported in a separate chapter.
ii) He / she can present the facts in a chronological order with each
chapter pertaining to a specific historical period chronologically.
iii) The report can also be written in a thematic manner where each chapter deals with a specific theme / topic.
iv) Chapters could also deal with each state of India or each district of an Indian state separately.
v) Chapters could also pertain to specific historical persons separately.
vi) The researcher can also combine two or more of these
approaches while writing the research report.
In addition, the report should contain a chapter each on
introduction, methodology, review of related literature, findings,
the researcher’s interpretations and reflections on the
interpretative process.
Check Your Progress – II
Q1. Explain how you will identify a research topic for studying the history of education.
2. What are the different sources of historical data? How will you evaluate these sources?
3. What care will you take in making inferences and generalizations in historical research?
4. How will you organize the writing of the research report?
The researcher needs to demonstrate his / her scholarship
and grasp of the topic, his / her insights into the topic, and the
plausibility and clarity of interpretations. This requires creativity,
ingenuity and imagination so as to make the research report
adequate and comprehensive.
6.8 PROBLEMS AND WEAKNESSES TO BE AVOIDED IN
HISTORICAL RESEARCH
Some of the weaknesses, problems and mistakes that need
to be avoided in a historical research are as follows :
1. The problem of research should not be too broad.
2. It should be selected after ensuring that sources of data are
existent, accessible and in a language known to the researcher.
3. Excessive use of easy-to-find secondary sources of data should be avoided. Though locating primary sources of data is time-consuming and requires effort, they are usually more trustworthy.
4. Adequate internal and external criticism of sources of historical
data is very essential for establishing the authenticity and
validity of the data. It is also necessary to ascertain whether
statements concerning evidence by one participant have influenced the opinions of other participants or witnesses.
5. The researcher needs to be aware of his/her own personal values, interests and biases. For this purpose, it is necessary for the researcher to quote statements along with the context in which they were made; lifting them out of context suggests an intention of persuading the readers. The researcher also needs to avoid both extreme generosity or admiration and extreme criticism. The researcher needs to avoid reliance on beliefs such as “old is gold”, “new is always better” or “change implies progress”. All such beliefs indicate the researcher's bias and personal values.
6. The researcher needs to ensure that the concepts borrowed
from other disciplines are relevant to his/her topic.
7. He/she should avoid unwarranted causal inferences arising on account of (i) oversimplification (the causes of a historical event may be multiple, complex and interactive), (ii) faulty interpretation of the meanings of words, (iii) inability to distinguish between facts, opinions and situations, (iv) inability to identify and discard irrelevant or unimportant facts, and (v) faulty generalization based on inadequate evidence, faulty logic and reasoning in the analysis of data, use of wrong analogies and faulty comparison of events in dissimilar cultures.
8. The researcher needs to synthesize facts into meaningful
chronological and thematic patterns.
9. The report should be written in a logical and scientific manner. It
should avoid flowery or flippant language, emotional words, dull
and colourless language or persuasive style.
10. The researcher should avoid projecting current problems onto
historical events as this is likely to create distortions.
6.9 CRITERIA OF EVALUATING HISTORICAL RESEARCH :
Mouly has provided the following criteria of evaluating
historical research:
1. Problem: Has the problem been clearly defined? It is difficult
enough to conduct historical research adequately without
adding to the confusion by starting out with a nebulous
problem. Is the problem capable of solution? Is it within the
competence of the investigator?
2. Data: Are data of a primary nature available in sufficient
completeness to provide a solution, or has there been an
overdependence on secondary or unverifiable sources?
3. Analysis: Has the dependability of the data been adequately
established? Has the relevance of the data been adequately
explored?
4. Interpretation: Does the author display adequate mastery of his
data and insight into their relative significance? Does he display
adequate historical perspective? Does he maintain his objectivity or does he allow personal bias to distort the evidence? Are his
hypotheses plausible? Have they been adequately tested? Does
he take a sufficiently broad view of the total situation? Does he
see the relationship between his data and other ‘historical
facts’?
5. Presentation: Does the style of writing attract as well as inform?
Does the report make a contribution on the basis of newly
discovered data or new interpretation, or is it simply ‘uninspired hack work’? Does it reflect scholarliness?
Check your Progress III
1. What care will you take to avoid weaknesses in conducting
historical research?
2. State the criteria of evaluating historical research.
Suggested Readings:
Garraghan, G. J. A Guide to Historical Method. New York: Fordham University Press, 1946.
Gottschalk, L. Understanding History. New York: Alfred A. Knopf, 1951.
McMillan, J. H. and Schumacher, S. Research in Education: A Conceptual Introduction. Boston, MA: Little, Brown and Company, 1984.
Shafer, R. J. A Guide to Historical Method. Illinois: The Dorsey Press, 1974.
7
EXPERIMENTAL RESEARCH
Unit Structure
7.0 Objectives
7.1 Introduction
7.2 Experimental Designs
a. Pre-experimental design
b. Quasi-experimental design
c. True experimental design
7.3 Factorial Design
7.4 Nested Design
7.5 Single Subject Design
7.6 Internal and External Experimental Validity
7.7 Controlling Extraneous and Intervening Variables
7.0 OBJECTIVES :
After going through this module, you will be able to:
(i) Conceptualize the experimental method of educational research;
(ii) Describe the salient features of experimental research;
(iii) Conceptualize various experimental designs;
(iv) Conceptualize internal and external experimental validity;
(v) Conceptualize the process of controlling intervening and extraneous variables; and
(vi) Apply the experimental method to an appropriate research problem.
7.1 INTRODUCTION :
The experimental method in educational research is the
application and adaptation of the classical method of
experimentation. It is a scientifically sophisticated method. It
provides a method of investigation to derive basic relationships
among phenomena under controlled condition or, more simply, to
identify the conditions underlying the occurrence of a given
phenomenon. Experimental research is the description and analysis
of what will be, or what will occur, under carefully controlled
conditions.
Experimenters manipulate certain stimuli, treatments, or
environmental conditions and observe how the condition or
behaviour of the subject is affected or changed. Such manipulations
are deliberate and systematic. The researchers must be aware of other factors that could influence the outcome and remove or control them in such a way that a logical association can be established between the manipulated factors and the observed outcomes.
Experimental research provides a method of hypothesis
testing. Hypothesis is the heart of experimental research. After the
experimenter defines a problem he has to propose a tentative
answer to the problem or hypothesis. Further, he has to test the
hypothesis and confirm or disconfirm it.
Although the experimental method has greatest utility in the laboratory, it has been effectively applied in non-laboratory settings such as the classroom. The immediate purpose of
experimentation is to predict events in the experimental setting.
The ultimate purpose is to generalize the variable relationships so
that they may be applied outside the laboratory to a wider
population of interest.
Characteristics of Experimental Method
There are four essential characteristics of experimental research: (i) Control, (ii) Manipulation, (iii) Observation, and (iv) Replication.
Control : Variables that are not of direct interest to the researcher,
called extraneous variables, need to be controlled. Control refers to
removing or minimizing the influence of such variables by several
methods such as: randomization or random assignment of subjects
to groups; matching subjects on extraneous variable(s) and then
assigning subjects randomly to groups; making groups that are as
homogenous as possible on extraneous variable(s); application of
statistical technique of analysis of covariance (ANCOVA); balancing
means and standard deviations of the groups.
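As a minimal sketch of randomization, the first control technique listed above, the code below randomly assigns a hypothetical list of subjects to an experimental and a control group; with reasonably large groups, extraneous characteristics such as prior ability tend to even out across the two groups.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

subjects = [f"S{i:02d}" for i in range(1, 41)]  # 40 hypothetical subject IDs
random.shuffle(subjects)                        # random ordering of subjects

experimental_group = subjects[:20]              # first half to E
control_group = subjects[20:]                   # second half to C

print(len(experimental_group), len(control_group))  # 20 20
```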
Manipulation : Manipulation refers to a deliberate operation of the conditions by the researcher. In this process, a pre-determined set of conditions, called the independent variable, experimental variable or treatment variable, is imposed on the subjects of the experiment. In specific terms, manipulation refers to the deliberate operation of the independent variable on the subjects of the experimental group by the researcher in order to observe its effect. Sex,
socio-economic status, intelligence, method of teaching, training or
qualification of teacher, and classroom environment are the major
independent variables in educational research. If the researcher, for
example, wants to study the effect of ‘X’ method of teaching on the
achievement of students in mathematics, the independent variable
here is the method of teaching. The researcher in this experiment
needs to manipulate ‘X’ i.e. the method of teaching. In other words,
the researcher has to teach the experimental groups using ‘X’
method and see its effect on achievement.
Observation : In experimental research, the experimenter
observes the effect of the manipulation of the independent variable
on dependent variable. The dependent variable, for example, may
be performance or achievement in a task.
Replication : Replication is a matter of conducting a number of
sub-experiments, instead of one experiment only, within the
framework of the same experimental design. The researcher may
make a multiple comparison of a number of cases of the control
group and a number of cases of the experimental group. In some
experimental situations, a number of control and experimental
groups, each consisting of equivalent subjects, are combined
within a single experiment.
Check your progress – 1
Q.1) What are the characteristics of experimental research?
7.2 EXPERIMENTAL DESIGNS :
Experimental design is the blueprint of the procedures that
enable the researcher to test hypotheses by reaching valid
conclusions about relationships between independent and dependent
variables (Best, 1982, p.68). Thus, it provides the researcher with an opportunity for the comparisons required by the hypotheses of the experiment and enables him to make a meaningful interpretation of the results of the study. The designs deal with practical problems associated with the experimentation such as: (i) how subjects are to be selected for the experimental and control groups, (ii) the ways in which variables are to be manipulated and controlled, (iii) the ways in which extraneous variables are to be controlled, (iv) how observations are to be made, and (v) the type of statistical analysis to be employed.
Variables are the conditions or characteristics that the
experimenter manipulates, controls, or observes. The independent
variables are the conditions or characteristics that the experimenter
manipulates or controls in his or her attempt to study their
relationships to the observed phenomena. The dependent variables
are the conditions or characteristics that appear or disappear or
change as the experimenter introduces, removes or changes the
independent variable. In educational research teaching method is an
example of independent variable and the achievement of the students
is an example of dependent variable. There are some confounding
variables that might influence the dependent variable. Confounding
variables are of two types; intervening and extraneous variables.
Intervening variables are those variables that cannot be controlled or
measured but may influence the dependent variable. Extraneous
variables are not manipulated by the researcher but influence the
dependent variable. It is impossible to eliminate all extraneous
variables, but sound experimental design enables the researcher to
more or less neutralize their influence on dependent variables.
There are various types of experimental designs. The
selection of a particular design depends upon factors like nature
and purpose of experiment, the type of variables to be
manipulated, the nature of the data, the facilities available for
carrying out the experiment and the competence of the
experimenter. The following categories of experimental research
designs are popular in educational research:
(i) Pre-experimental designs – They are the least effective and provide little or no control of extraneous variables.
(ii) True experimental designs – They employ randomization to control the effects of variables such as history, maturation, testing, statistical regression, and mortality.
(iii) Quasi-experimental designs – They provide a less satisfactory degree of control and are used only when randomization is not feasible.
(iv) Factorial designs – More than one independent variable can be manipulated simultaneously. Both the independent and interaction effects of two or more factors can be studied with the help of a factorial design.
Symbols used :
In discussing experimental designs a few symbols are used.
E – Experimental group
C – Control group
X – Independent variable
Y – Dependent variable
R – Random assignment of subjects to groups
Yb – Dependent variable measures taken before experiment /
treatment (pre-test)
Ya – Dependent variable measures taken after experiment/
treatment (Post-test)
Mr – Matching subjects and then random assignment to groups.
a. Pre-Experimental design :
There are two types of pre-experimental designs:
1. The one group pre-test post-test design:
This is a simple experimental research design without the involvement of a control group. In this design the experimenter takes dependent variable measures (Yb) before the independent variable (X) is manipulated and again takes its measures (Ya) afterwards. The difference, if any, between the two measurements (Yb and Ya) is computed and is ascribed to the manipulation of X.
Pre-test        Independent Variable        Post-test
Yb              X                           Ya
The experimenter, in order to evaluate the effectiveness of computer-based instruction (CBI) in teaching science to grade V students, administers an achievement test to the whole class (Yb) before teaching through CBI. The same test is administered to the same class again after the teaching to measure Ya. The means of Yb and Ya are compared and the difference, if any, is ascribed to the effect of X, i.e. teaching through CBI.
The design has the inherent limitation of using one group only. The design also lacks scope for controlling extraneous variables like history, maturation, pre-test sensitization, statistical regression, etc.
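Under the usual assumptions, the comparison of Yb and Ya in this design amounts to a paired (correlated) 't' test. The sketch below uses invented scores and SciPy, which is assumed to be available; even a significant result here cannot be attributed to X alone, for the reasons just noted.

```python
from scipy import stats

# Hypothetical achievement scores (out of 50) for one class of 10 students
pre_test  = [22, 28, 25, 30, 18, 27, 24, 29, 21, 26]   # Yb, before CBI
post_test = [27, 31, 28, 33, 22, 30, 27, 34, 25, 29]   # Ya, after CBI

# Paired t-test comparing each student's post-test with his/her pre-test
t_statistic, p_value = stats.ttest_rel(post_test, pre_test)
print(f"paired t = {t_statistic:.2f}, p = {p_value:.4f}")
```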
2. The two groups static design:
This design provides some improvement over the previous one by adding a control group which is not exposed to the experimental treatment. The experimenter may take two sections of grade V of one school, or grade V students of two different schools (intact classes), as the experimental and control groups respectively and assume the two groups to be equivalent. No pre-test is taken to ascertain this.
Group        Independent Variable        Post-test
E            X                           Ya
C            -                           Ya
This design compares the post-test scores of experimental
group (Ya E) that has received experimental treatment (X) with that
of control group (Ya C) that has not received X.
The major limitation of the design is that there is no provision for establishing the equivalence of the experimental (E) and control (C) groups. However, since no pre-test is used, this design controls for the effects of extraneous variables such as history, maturation, and pre-testing.
Check your progress – 2
Q.2) What is pre-experimental design?
b. Quasi-Experimental Design :
Researchers commonly try to establish equivalence between the experimental and control groups; to the extent they are successful in doing so, the design is valid. Sometimes it is extremely difficult or impossible to equate groups by random selection or random assignment, or by matching. In such situations, the researcher uses a quasi-experimental design.
The Non-Equivalent Groups Design is probably the most
frequently used design in social research. It is structured like a
pretest-posttest randomized experiment, but it lacks the key feature
of the randomized designs -- random assignment. In the Non-Equivalent Groups Design, we most often use intact groups that we
think are similar as the treatment and control groups. In education,
we might pick two comparable classrooms or schools. In
community-based research, we might use two similar communities.
We try to select groups that are as similar as possible so we can
fairly compare the treated one with the comparison one. But we can
never be sure the groups are comparable. Or, put another way, it's
unlikely that the two groups would be as similar as they would if we
assigned them through a random lottery. Because it's often likely
that the groups are not equivalent, this designed was named the
nonequivalent group design to remind us.
Pre-test        Independent Variable        Post-test
Yb              X                           Ya (Experimental)
Yb              -                           Ya (Control)
So, what does the term "nonequivalent" mean? In one sense,
it just means that assignment to group was not random. In other
words, the researcher did not control the assignment to groups
through the mechanism of random assignment. As a result, the
groups may be different prior to the study. This design is especially
susceptible to the internal validity threat of selection. Any prior
differences between the groups may affect the outcome of the study.
Under the worst circumstances, this can lead us to conclude that our
program didn't make a difference when in fact it did, or that it did
make a difference when in fact it didn't.
The counterbalanced design may be used when the random
assignment of subject to experimental group and control group is
not possible. This design is also known as the rotation group design. In a counterbalanced design, each group of subjects is assigned to each experimental treatment at a different time during the experiment. This design overcomes the weakness of the non-equivalent groups design.
When intact groups are used, rotation of groups provides an
opportunity to eliminate any differences that might exist between
the groups. Since all the groups are exposed to all the treatments,
the results obtained cannot be attributed to the preexisting
differences in the subjects. The limitation of this design is that there
is carry-over effect of the groups from one treatment to the next.
Therefore, this design should be used only when the experimental
treatments are such that the administration of one treatment on a
group will have no effect on the next treatment. There is also a possibility of boring the students with repeated testing.
Check your progress – 3
How does pre-experimental design differ from quasi experimental
design?
c. True experimental design :
True experimental designs are used in educational research
because they ascertain equivalence of experimental and control
groups by random assignment of subjects to these groups, and thus,
control the effects of extraneous variables like history, maturation,
testing, measuring instruments, statistical regression and mortality.
This design, in contrast to the pre-experimental design, is better and is used in educational research wherever possible.
1. Two groups, randomized subjects, post-test only design:
This is one of the most effective designs in minimizing the
threats to experimental validity. In this design subjects are assigned
to experimental and control groups by random assignment which
controls all possible extraneous variables, e.g. testing, statistical
regression, mortality, etc. At the end of the experiment the difference between the mean post-test scores of the experimental and control groups is put to a statistical test – a ‘t’ test or analysis of variance (ANOVA). If the difference between the means is found
significant, it can be attributed to the effect of (X), the independent
variable.
        Group        Independent Variable        Post-test
R       E            X                           Ya
        C            -                           Ya
The main advantage of this design is random assignment of
subjects to groups, which assures the equivalence of the groups prior
to experiment. Further, this design, in the absence of pretest, controls
the effects of history, maturation and pre-testing etc.
This design is useful in experimental studies at the pre-primary or primary stage and in situations in which a pre-test is not appropriate or not available.
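For this design, the comparison of the two groups' post-test means is an independent-samples ‘t’ test, or equivalently a one-way ANOVA with two groups. A sketch with invented post-test scores follows; SciPy is assumed to be available.

```python
from scipy import stats

# Hypothetical post-test scores (Ya) after random assignment of subjects
experimental = [34, 29, 31, 36, 28, 33, 30, 35]   # received X
control      = [27, 25, 30, 26, 24, 29, 28, 26]   # did not receive X

# Independent-samples t-test on the post-test means
t_statistic, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")

# With only two groups, a one-way ANOVA gives the same decision (F = t squared)
f_statistic, p_anova = stats.f_oneway(experimental, control)
print(f"F = {f_statistic:.2f}, p = {p_anova:.4f}")
```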
2. Two groups, randomized matched subjects, post-test only design:
This design, instead of using random assignment of subjects
to the experimental and control groups, uses the technique of matching.
In this technique, the subjects are paired so that their scores on
matching variable(s), i.e. the extraneous variable(s) the experimenter
wants to control, are as close as possible. One subject of each pair is
randomly assigned to one group and the other to the second group.
The groups are designated as experimental and control by random
assignment (tossing a coin).
        Group        Independent Variable        Post-test
MR      E            X                           Ya
        C            -                           Ya
This design is mainly used where the “two groups, randomized subjects, post-test only” design is not applicable and where small groups are to be used. The random assignment of subjects to groups after matching adds to the strength of this design. The major limitation of the design is that it is very difficult to use matching as a method of controlling extraneous variables, because in some situations it is not possible to locate a match and some subjects are then excluded from the experiment.
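A sketch of the matching procedure described above might look like the following: subjects are paired on a matching variable (here a hypothetical IQ score), and one member of each pair is randomly assigned to the experimental group and the other to the control group. The subject IDs and scores are invented.

```python
import random

random.seed(1)

# Hypothetical subjects with scores on the matching variable (e.g. IQ)
subjects = [("S01", 98), ("S02", 99), ("S03", 105), ("S04", 104),
            ("S05", 112), ("S06", 111), ("S07", 120), ("S08", 119)]

# Sort on the matching variable and form pairs of adjacent subjects
subjects.sort(key=lambda s: s[1])
pairs = [subjects[i:i + 2] for i in range(0, len(subjects), 2)]

experimental, control = [], []
for a, b in pairs:
    # Randomly decide which member of the pair goes to which group
    if random.random() < 0.5:
        experimental.append(a); control.append(b)
    else:
        experimental.append(b); control.append(a)

print("E:", [name for name, _ in experimental])
print("C:", [name for name, _ in control])
```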
3. Two groups, randomized subjects, pre-test post-test design:
In this design subjects are assigned to the experimental group
and the control group at random and are given a pre-test (Yb). The
treatment is introduced only to the experimental group, after which
the two groups are measured on the dependent variable. The difference
in scores or gain scores (D) in respect of pre-test and post-test (Ya –
Yb = D) is found for each group and the difference in scores of both
the groups (De and Dc) is compared in order to ascertain whether the
experimental treatment produced a significant change. Unless the
effect of the experimental manipulation is strong, the analysis of the
differential scores is not advisable (Kerlinger, 1973, p. 336). If they are analyzed, however, a ‘t’ or ‘F’ test is used.
Group        Pre-test        Independent Variable        Post-test
E            Yb              X                           Ya
C            Yb              -                           Ya
The main advantages of this design include:
- Through initial randomization and pre-testing, equivalence between the two groups can be ensured.
- Randomization seems to control most of the extraneous variables.
However, the design does not guarantee the external validity of the experiment, as the pre-test may increase the subjects’ sensitivity to the manipulation of X.
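As noted above, one common analysis in this design compares the gain scores (D = Ya - Yb) of the two groups with a ‘t’ test. The sketch below uses invented scores and assumes SciPy is available; in practice, analysis of covariance on the post-test scores, with the pre-test as covariate, is often preferred.

```python
from scipy import stats

# Hypothetical pre- and post-test scores for randomly formed groups
exp_pre,  exp_post  = [20, 24, 18, 26, 22, 25], [29, 31, 24, 33, 30, 32]
ctrl_pre, ctrl_post = [21, 23, 19, 25, 22, 24], [24, 26, 21, 28, 25, 27]

# Gain scores D = Ya - Yb for each subject in each group
d_exp  = [post - pre for pre, post in zip(exp_pre,  exp_post)]
d_ctrl = [post - pre for pre, post in zip(ctrl_pre, ctrl_post)]

# Independent-samples t-test on the gain scores (De vs Dc)
t_statistic, p_value = stats.ttest_ind(d_exp, d_ctrl)
print(f"gain-score t = {t_statistic:.2f}, p = {p_value:.4f}")
```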
4. The Solomon three groups design:
This design, developed by Solomon, seeks to overcome the difficulty of the two groups, randomized subjects, pre-test post-test design, i.e. the interactive effect of pre-testing and the experimental
manipulation. This is achieved by employing a second control group
(C2) which is not pre-tested but is exposed to the experimental
treatment (X).
Group        Pre-test        Independent Variable        Post-test
E            Yb              X                           Ya
C1           Yb              -                           Ya
C2           -               X                           Ya
This design provides scope for comparing the post-test (Ya) scores of the three groups. Even if the experimental group has a significantly higher mean score than the first control group (YaE > YaC1), one cannot be confident that this difference is due to the experimental treatment (X); it might have occurred because of the subjects’ pre-test sensitization. But if the mean Ya score of the second control group is also higher than that of the first control group (YaC2 > YaC1), then one can assume that the experimental treatment, rather than pre-test sensitization, has produced the difference, since C2 is not pre-tested.
5. The Solomon four group design:
This design is an extension of Solomon three group design
and is really a combination of two two-groups designs: (i) Two
groups randomized subjects pre-test post-test design; and (ii) Two
group randomized subjects post-test only design. This design
provides rigorous control over extraneous variables and also
provides opportunity for multiple comparisons to determine the
effects of the experimental treatment (X).
In this design the subjects are randomly assigned to the four groups: one experimental (E) and three control (C1, C2 and C3). The experimental and the first control group (E and C1) are pre-tested groups, and the second and third control groups (C2 and C3) are not pre-tested. If the post-test mean score of the experimental group (YaE) is significantly greater than the post-test mean score of the first control group (YaC1), and the post-test mean score of the second control group (YaC2) is also significantly greater than the post-test mean score of the third control group (YaC3), the experimenter arrives at the conclusion that the experimental treatment (X) has an effect.
        Group        Pre-test        Independent Variable        Post-test
R       E            Yb              X                           Ya
        C1           Yb              -                           Ya
        C2           -               X                           Ya
        C3           -               -                           Ya
This design is considered to be a strong one as it actually involves conducting the experiment twice, once with a pre-test and once without a pre-test. Therefore, if the results of these two experiments are in agreement, the experimenter can have much greater confidence in his findings. The design seems to have two sources of weakness. One is practicability, as it is difficult to conduct two simultaneous experiments and the researcher encounters the difficulty of locating more subjects of the same kind. The other difficulty is statistical: since the design involves four sets of measures for the four groups, and the experimenter has to make comparisons between the experimental and first control group and between the second and third control groups, there is no single statistical procedure that would make use of all six available measures simultaneously.
Check your progress – 4
How does true experimental design differ from quasi experimental
design?
7.3 FACTORIAL DESIGN :
Experiments may be designed to study simultaneously the
effects of two or more variables. Such an experiment is called
factorial experiment. Experiments in which the treatments are
combinations of levels of two or more factors are said to be
factorial. If all possible treatment combinations are studied, the
experiment is said to be a complete factorial experiment. When
two independent factors have two levels each, we call it a 2x2 (spoken "two-by-two") factorial design. When three independent factors have two levels each, we call it a 2x2x2 factorial design.
Similarly, we may have 2x3, 3x3, 3x4, 3x3x3, 2x2x2x2, etc.
Simple Factorial Design :
A simple factorial design is the 2x2 factorial design. In this design
there are two independent variables and each of the variables has
two levels. One advantage is that information is obtained about the
interaction of factors. Both independent and interaction effects of
two or more than two factors can be studied with the help of this
factorial design.
In factorial designs, a factor is a major independent variable.
In this example we have two factors: methods of teaching and
intelligence level of the students. A level is a subdivision of a factor.
In this example, method of teaching has two levels and intelligence
has two levels. Sometimes we depict a factorial design with a
numbering notation. In this example, we can say that we have a 2 x 2 (spoken "two-by-two") factorial design. In this notation, the number of numbers tells us how many factors there are and the number values tell us how many levels each factor has. The number of different treatment groups in any factorial design can easily be determined by multiplying through the number notation. For instance, in our example we have 2 x 2 = 4 groups; in a 3 x 4 design we would need 3 x 4 = 12 groups. A full factorial experiment is an experiment whose design consists of two or more factors, each with discrete possible values or "levels", and whose experimental units take on all possible combinations of these levels across all such factors. A full factorial design may also be called a
fully-crossed design. Such an experiment allows studying the effect
of each factor on the response variable, as well as the effects of
interactions between factors on the response variable.
For the vast majority of factorial experiments, each factor has
only two levels. For example, with two factors each taking two
levels, a factorial experiment would have four treatment
combinations in total, and is usually called a 2×2 factorial design.
The first independent variable, which is manipulated, is called the experimental variable. The second independent variable, which is divided into levels, may be called the control variable. For example, there are two experimental treatments, that is, teaching through the co-operative learning method and teaching through the lecture method. It is observed that there may be differential effects of these methods on students at different levels of intelligence. On the basis of IQ scores the experimenter divides the students into two groups: a high intelligence group and a low intelligence group. Thus there are four groups of students in all, one for each combination of teaching method and intelligence level.
                              Teaching Through Co-operative      Teaching Through
                              Learning Method                    Lecture Method
High Intelligence Group       Gain score on the                  Gain score on the
                              dependent variable                 dependent variable
Low Intelligence Group        Gain score on the                  Gain score on the
                              dependent variable                 dependent variable
Since one of the objectives is to compare various combinations of these groups, the experimenter has to obtain the mean scores for each row and each column. The experimenter can first study the main effects of the two independent variables and then the interaction effect between intelligence level and teaching method.
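A minimal sketch of analyzing such a 2 x 2 design with invented gain scores is given below. It uses the statsmodels formula interface for a two-way ANOVA, so pandas and statsmodels are assumed to be available; the column names and scores are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical gain scores for the four cells of the 2 x 2 design
data = pd.DataFrame({
    "method": ["coop"] * 6 + ["lecture"] * 6,
    "intelligence": (["high"] * 3 + ["low"] * 3) * 2,
    "gain": [12, 14, 13, 9, 8, 10,      # co-operative learning method
             10, 11, 9, 8, 7, 9],       # lecture method
})

# Two-way ANOVA: main effects of method and intelligence, plus their interaction
model = ols("gain ~ C(method) * C(intelligence)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)

# Cell means for inspecting the interaction pattern
print(data.groupby(["method", "intelligence"])["gain"].mean())
```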
Check your progress – 5
1. How does factorial design differ from true experimental design?
7.4 NESTED DESIGN :
In a nested design, each subject receives one, and only one,
treatment condition. In a nested design, the levels of one factor
appear only within one level of another factor. The levels of the first
factor are said to be nested within the level(s) of the second factor.
When variables such as race, income and education, etc. may be
found only at a particular level of the independent variable, these
variables are called nested variables. In these studies the various
nested variables are grouped for the study. For example, a
researcher is studying school effectiveness with academic
achievement of students as the indicator or criterion variable. In
this type of research, classrooms can be nested within individual schools, which in turn can be nested within school types. The major
distinguishing feature of nested designs is that each subject has a
single score. The effect, if any, occurs between groups of subjects
and thus the name “Between Subjects” is given to these designs.
The relative advantages and disadvantages of nested designs are the opposite of those of crossed designs. First, carry-over effects are not a
problem, as individuals are measured only once. Second, the
number of subjects needed to discover effects is greater than with
crossed designs. Some treatments by their nature are nested. The
effect of gender, for example, is necessarily nested. One is either a
male or a female, but not both. Religion is another example.
Treatment conditions which rely on a pre-existing condition are
sometimes called demographic or blocking factors.
Crossed Design :
In a crossed design each subject sees each level of the
treatment conditions. In a very simple experiment, such as one that
studies the effects of caffeine on alertness, each subject would be
exposed to both a caffeine condition and a no caffeine condition. For
example, using the members of a statistics class as subjects, the
experiment might be conducted as follows. On the first day of the
experiment, the class is divided in half with one half of the class
getting coffee with caffeine and the other half getting coffee without
caffeine. A measure of alertness is taken for each individual, such as
the number of yawns during the class period. On the second day the
conditions are reversed; that is, the individuals who received coffee
with caffeine are now given coffee without and vice-versa. The size
of the effect will be the difference of alertness on the days with and
without caffeine.
The distinguishing feature of crossed designs is that each
individual will have more than one score. The effect occurs within
each subject; thus these designs are sometimes referred to as ‘within subjects’ designs.
Crossed designs have two advantages. One, they generally
require fewer subjects, because each subject is used a number of
times in the experiment. Two, they are more likely to result in a
significant effect, given the effects are real.
Crossed designs also have disadvantages. One, the
experimenter must be concerned about carry-over effects. For
example, individuals not used to caffeine may still feel the effects of
caffeine on the second day, even though they did not receive the
drug. Two, the first measurements taken may influence the second.
For example, if the measurement of interest was score on a statistics
test, taking the test once may influence performance the second time
the test is taken. Three, the assumptions necessary when more than
two treatment levels are employed in a crossed design may be
restrictive.
Check your progress - 6
1. What is the difference between nested design and crossed design?
7.5 SINGLE FACTOR EXPERIMENT :
Many experiments involve a single treatment or variable with
two or more levels. First, a group of experimental subjects may be
divided into independent groups, using a random method. A different
treatment may then be applied to each group. One group may be a
control group, i.e. a group to which no treatment is applied. For a
meaningful interpretation of the experiment, results obtained under a
treatment may be compared with results obtained in the absence
of treatment. Comparisons may be made between treatments and
between a treatment and the control.
Some single factor experiments involve a single group of
subjects. Each subject receives treatments. Repeated observations
or measurements are made on the same subjects.
Some single factor experiments may consist of groups that
are matched on one or more variables which are known to be
correlated with the dependent variable. For example, IQ may be
correlated with achievement.
An example of single factor Experiment :
It is believed that the amount of time a player warms up at
the beginning will have a significant impact on his game of lawn
tennis. The hypothesis is that if he does not warm up at all or warms
up only briefly (less than 15 minutes), he will be stiff and his score
will be poor. However, if he warms up too much (over 40 minutes),
he will be tired and his game score will also suffer. To test this
hypothesis, he needs to choose warm-up levels that are sufficiently
different from one another. The levels he will test are warming
up for 0, 15, 30 and 45 minutes.
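A minimal sketch of how such a single-factor experiment might be analysed is given below, assuming the SciPy library is available; the game scores for each warm-up level are hypothetical.

```python
# One-way ANOVA across the four levels of the single factor (warm-up time).
from scipy import stats

scores_0_min = [55, 60, 58, 52]    # hypothetical game scores, no warm-up
scores_15_min = [70, 75, 72, 68]
scores_30_min = [78, 80, 76, 82]
scores_45_min = [65, 62, 68, 60]

# Tests whether the mean game score differs across the treatment levels.
f_stat, p_value = stats.f_oneway(scores_0_min, scores_15_min,
                                 scores_30_min, scores_45_min)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```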
7.6 INTERNAL AND EXTERNAL EXPERIMENTAL VALIDITY :
Validity of experimentation :
An experiment must have two types of validity: internal
validity and external validity (Campbell and Stanley, 1963):
Internal validity :
Internal validity refers to the extent to which the manipulated
or independent variables actually have a genuine effect on the
observed results or dependent variable and the observed results were
not affected by the extraneous variables. This validity is affected by
the lack of control of extraneous variables.
External validity :
External validity is the extent to which the relationships
among the variables can be generalized beyond the experimental
setting, to other populations, settings and variables. This validity is
concerned with the generalizability or representativeness of the
findings of the experiment, i.e. to what populations, settings and
variables the results of the experiment can be generalized.
Factors affecting validity of experimentation :
In educational experiments, a number of extraneous variables
influence the results of the experiment in ways that are difficult to
evaluate. Although these extraneous variables cannot be completely
eliminated, many of them can be identified. Campbell and Stanley
(1963) have pointed out the following major variables which affect
significantly the validity of an experiment:
History : Events, other than the independent variable, that
may occur between the first and the second measurement of the
subjects (pre-test and post-test).
Maturation : The changes that occur in the subjects over a period
of time and may be confused with the effects of the independent variable.
Testing : Pre-testing at the beginning of an experiment may
sensitise the subjects, which may produce a change among them and
may affect their post-test performance.
Measuring Instruments : Different measuring instruments, scorers,
interviewers or observers used at the pre- and post-testing
stages, as well as unreliable measuring instruments or techniques, are
threats to the validity of an experiment.
Statistical regression : It refers to the tendency for extreme scores
to regress or move towards the common mean on subsequent
measures. The subjects who scored high on a pre-test are likely to
score relatively low on the retest whereas the subjects who scored
low on the pre-test are likely to score high on the retest.
Experimental mortality : It refers to the differential loss of subjects
from the comparison groups. Such loss of subjects may affect the
findings of the study. For example, if some subjects in the
experimental group who received low scores on the pre-test
drop out after taking the test, this group may show a higher mean on
the post-test than the control group.
Differential selection of subjects : It refers to difference
between/among groups on some important variables related to the
dependent variable before application of the experimental
treatment.
Check your progress – 7
1. What is experimental validity?
7.7 CONTROLLING EXTRANEOUS AND INTERVENING VARIABLES :
All experimental designs have one central characteristic: they
are based on manipulating the independent variable and measuring
the effect on the dependent variable. Experimental designs result in
inferences drawn from the data that explain the relationships
between the variables.
The classic experimental design consists of the experimental
group and the control group. In the experimental group the
independent variable is manipulated. In the control group the dependent
variable is measured when no alteration has been made to the
independent variable. The dependent variable is measured in the
experimental group the same way, and at the same time, as in the
control group.
The prediction is that the dependent variable in the
experimental group will change in a specific way and that the
dependent variable in the control group will not change.
Controlling Unwanted Influences :
To obtain a reliable answer to the research question, the
design should eliminate unwanted influences. The amount of
control that the researcher has over the variables being studied
varies, from very little in exploratory studies to a great deal in
experimental design, but the limitations on control must be
addressed in any research proposal.
These unwanted influences stem from one or more of the
following: extraneous variables, bias, the Hawthorne effect, and the
passage of time.
Extraneous Variables :
Extraneous variables are variables that can interfere with the action
of the independent variable. Since they are not part of the study, their
influence must be controlled.
In the research literature, extraneous variables, also
referred to as intervening variables, directly affect the action of the
independent variable on the dependent variable. Intervening
variables are those variables that occur in the study setting. They
include economic, physical and psychological variables. It is therefore
important to control extraneous variables in order to study the effect of
the independent variable on the dependent variable. We must be very
careful to control all possible extraneous variables that might
intervene between the independent and the dependent variable.
Methods of controlling extraneous variables include :
randomization
homogeneous sampling techniques
matching
building the variables into the design
statistical control
Randomization : Theoretically, randomization is the only
method of controlling all possible extraneous variables. The
random assignment of subjects to the various treatment and
control groups means that the groups can be considered
statistically equal in all ways at the beginning of the experiment.
It does not mean that they actually are equal for all variables.
However, the probability of their being equal is greater than
the probability of their not being equal, if the random assignment
was carried out properly. The exception lies with small groups
where random assignment could result in unequal distribution of
crucial variables. If this possibility exists, the other method would
be more appropriate. In most instances, however, randomization is
the best method of controlling extraneous variables.
A random sampling technique results in a normal distribution
of extraneous variables in the sample; this approximates the
distribution of those variables in the population. The purpose of
randomization is to ensure a representative sample.
Randomization comes into play when we randomly assign
subjects to experimental and control groups, thus ensuring that the
groups are as equivalent as possible prior to the manipulation of the
independent variable. Random assignment assures that the
assignment is not influenced by the researcher's bias; instead, each
subject's group is determined purely by chance.
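The sketch below illustrates random assignment in Python; the subject labels and group sizes are hypothetical.

```python
import random

# 40 hypothetical subjects identified by simple labels.
subjects = [f"S{i}" for i in range(1, 41)]
random.shuffle(subjects)            # randomise the order of the subjects

# The first half goes to the experimental group, the rest to the control group,
# so no subject's assignment is decided by the researcher.
experimental_group = subjects[:20]
control_group = subjects[20:]
print(len(experimental_group), len(control_group))
```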
Homogeneous Sample : One simple and effective way of
controlling an extraneous variable is not to allow it to vary. We
may choose a sample that is homogeneous for that variable. For
example, if a researcher believes that the gender of the subjects might
affect the dependent variable, he/she could select subjects of
the desired gender only. If the researcher believes that socio-economic
status might influence the dependent variable, he/she
would select subjects from a particular range of socio-economic
status. After selecting subjects from a homogeneous population,
the researcher may assign them to the experimental and
control groups randomly.
Matching : When randomization is not possible, or when the
experimental groups are too small and contain some crucial
variables, subjects can be matched for those variables. The
experimenter chooses subjects who match each other for the
specified variables. One of these matched subjects is assigned to
the control group and the other to the experimental group, thus
ensuring the equality of the groups at the outset.
The process of matching is time consuming and introduces
considerable subjectivity into sample selection. Therefore, it should
be avoided whenever possible. If we use matching, limit the number
of groups to be matched and keep the number of variables for which
the subjects are matched low. Matching with more than five
variables becomes extremely cumbersome, and it is almost
impossible to find enough matched partners for the sample.
Matching may be used in all research designs when we are looking
at certain outcomes and want to have as much control as possible.
Building Extraneous Variables into the Design : When
extraneous variables cannot be adequately controlled by
randomization, they can be built into the design as independent
variables. They would have to be added to the purpose of study
and tested for significance along with other variables. In this
way, their effect can be measured and separated from the effect
of the independent variable.
Statistical Control : In experimental designs, the effect of the
extraneous variables can be subtracted statistically from the total
action of the variables. The technique of analysis of covariance
(ANCOVA) may be used for this purpose. Here, one or more
extraneous variables are measured along with the dependent
variable. This method adds to the cost of the study because of the
additional data collection and analysis required. Therefore, it should
be used only as a last resort.
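As an illustration of statistical control, the following hedged sketch uses the pandas and statsmodels libraries (an assumption, not a requirement of the text) to run an ANCOVA in which a pre-test score is entered as a covariate; all column names and values are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "posttest": [62, 70, 68, 75, 55, 60, 58, 64],
    "group":    ["exp", "exp", "exp", "exp", "ctrl", "ctrl", "ctrl", "ctrl"],
    "pretest":  [50, 58, 55, 60, 48, 52, 50, 55],   # extraneous variable
})

# ANCOVA: the pre-test is measured along with the dependent variable and its
# effect is removed before the treatment (group) effect is estimated.
model = smf.ols("posttest ~ C(group) + pretest", data=data).fit()
print(model.summary())
```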
Check your progress - 8
1. As an experimenter, how will you control the effect of extraneous
and intervening variables?
Unit End Exercise:
Differentiate between the true experimental design and
factorial design.
Differentiate between internal and external validity.
What is the significance of randomization in experimental
research?
References :
Anderson, G. (1990): Fundamentals of Educational Research. The
Falmer Press, London.
Best, J. W. & Kahn, J. V. (1993): Research in Education (7th Ed.).
Prentice Hall of India Pvt. Ltd., New Delhi.
Campbell, D. T. & Stanley, J. C. (1963): Experimental and Quasi-
Experimental Designs for Research. Rand McNally, Chicago.
Fisher, R. A. (1959): Statistical Methods and Scientific Inference.
Hafner Publishing, New York.
Gay, L. R. (1987): Educational Research. Macmillan Publishing
Company, Englewood Cliffs, NJ.
Kerlinger, F. N. (1964): Foundations of Behavioural Research (2nd
Ed.). Surjeet Publications, New Delhi.
Koul, L. (1984): Methodology of Educational Research (2nd Ed.).
Vikas Publishing House Pvt. Ltd., New Delhi.
Stockburger, David W. (1998): Introductory Statistics: Concepts,
Models, and Applications (2nd Ed.). www.atomicdogpublishing.com
8
TOOLS AND TECHNIQUES OF RESEARCH
Unit Structure
8.0 Steps of Preparing a Research Tool
8.1 Types of Validity
8.2 Factors affecting validity
8.3 Reliability
8.4 The methods of Estimating Reliability
8.5 Item Analysis
8.6 Steps involved in Item – Analysis
8.0 STEPS OF PREPARING A RESEARCH TOOL:
The first step in preparing a research tool is to develop a pool
of items. This is followed by item analysis, which involves computing the
difficulty index and discrimination index of each item. This is
followed by ascertaining the validity of the tool.
(i) Validity : Validity is the most important consideration in the
selection and use of any testing procedures. The validity of a test,
or of any measuring instrument, depends upon the degree of
exactness with which something is reproduced/copied or with
which it measures what it purports to measure.
The validity of a test may be defined as “the accuracy with
which a test measures what it attempts to measure.” It is also
defined as “The efficiency with which a test measures what it
attempts to measure”. Lindquist has defined validity as “the
accuracy with which a test measures that which it is intended to
measure, or the degree to which it approaches infallibility in
measuring what it purports to measure”.
On the basis of the preceding definitions, it is seen that
validity is a matter of degree: it may be high, moderate or low.
Validity is specific rather than general. A test may be valid for
one specific purpose but not for another, and valid for one specific
group of students but not for another.
8.1 TYPES OF VALIDITY :
(i) Content Validity : According to Anastasi (1968), “content
validity involves essentially the systematic examination of the
test content to determine whether it covers a representative
sample of the behaviour domain to be measured”. It refers to
how well the sample of items in our tool represents the universe of
criterion behaviour. Content validity is employed in the selection of
items in research tools. The validation of content through competent
judgments is satisfactory when the sampling of items is wide
and judicious.
(ii) Criterion-related Validity : This is also known as empirical
validity.
There are two forms of criterion-related validity.
a) Predictive Validity : It refers to how well the scores obtained on
the tool predict future criterion behaviour.
b) Concurrent Validity : It refers to how well the scores obtained
on the tool are correlated with present criterion behaviour.
(iii)Construct Validity : It is the extent to which the tool measures a
theoretical construct or trait or psychological variable. It refers
to how well our tool seems to measure a hypothesized trait.
8.2 FACTORS AFFECTING VALIDITY :
The following points influence the validity of a test :
(I) Unclear Direction : If directions do not clearly indicate to the
respondent how to respond to tool items, the validity of a tool
is reduced.
(II) Vocabulary : If the vocabulary of the respondent is poor, then
he/she fails to respond to the tool item even if he/she knows
the answer. It becomes a reading comprehension test for
him/her, and the validity decreases.
(III) Difficult Sentence Construction : If a sentence is so
constructed as to be difficult to understand, respondents
would be confused, which will affect the validity of the tool.
(IV) Poorly Constructed Test Items: These reduce the validity of a
test.
(V) Use of Inappropriate Items : The use of inappropriate items
lowers validity.
(VI) Difficulty Level of Items : In an achievement test, too easy or
too difficult test items would not discriminate among
students. Thereby the validity of a test is lowered.
(VII) Influence of Extraneous Factors : Extraneous factors like the
style of expression, legibility, mechanics of grammar (spelling,
punctuation), handwriting and the length of the tool influence the
validity of a tool.
(VIII) Inappropriate Time Limit : In a speed test, if no time limit is
given, the result will be invalidated. In a power test, an
inappropriate time limit will lower its validity. Our tests are
both power and speed tests. Hence care should be taken in
fixing the time limit.
(IX) Inappropriate Coverage : If the tool does not cover all aspects of
the construct being measured adequately, its content validity
will be adversely affected due to inadequate sampling of
items.
(X) Inadequate Weightage : Inadequate weightage to some
dimensions, sub-topics or objectives would call into question
the validity of the tool.
(XI) Halo Effect : If a respondent has formed a poor impression
about one aspect of the concept, item, person or issue being
measured, he/she is likely to rate that concept, item, person or
issue poorly on all other aspects too. Similarly, if he/she has a good
impression about one aspect of the concept, item, person or issue
being measured, he/she is likely to rate it high on all other aspects.
This is known as the halo effect, and it lowers the validity of the tool.
8.3 RELIABILITY :
A test score is called reliable when we have reasons for
believing the score to be stable and trustworthy. If we measure a
student's level of achievement, we hope that his score would be
similar under different administrators, with different scorers, with
similar but not identical items, or during a different time of the day.
The reliability of a test may be defined as-
“The degree of consistency with which the test measures
what it does measure”.
According to Anastasi (1968), “Reliability means consistency of scores
obtained by same individual when re-examined with the test on
different sets of equivalent items or under other variable examining
conditions”.
A psychological or educational measurement is indirect and
is made with less precise instruments, on traits that are not
always stable. There are many reasons why a pupil's test score may
vary –
a) Trait Instability : The characteristics we measure may change
over a period of time.
b) Administrative Error : Any change in directions, timing or
the amount of rapport with the test administrator may cause score
variability.
c) Scoring Error : Inaccuracies in scoring a test paper will affect the
scores.
d) Sampling Error : Any particular questions we ask in order to
infer a person’s knowledge may affect his score.
e) Other Factors : Such as health, motivation, degree of fatigue of
the pupil, good or bad luck in guessing may cause score
variability.
8.4 THE METHODS OF ESTIMATING RELIABILITY :
The four procedures in common use for computing the
reliability coefficient of a test are :
a) Test – Retest Method
b) The Alternate or Parallel Forms Method.
c) The Internal Consistency Reliability
d) The Inter-rater Reliability
a) Test-Retest (Repetition) Method (Co-efficient of Stability) : In
the test-retest method a single form of a test is administered
twice on the same sample with a reasonable gap. Two sets
of scores are thus obtained, and the
correlation co-efficient computed between the two sets of
scores serves as the reliability index. If the test is repeated
immediately, many subjects will recall their first answers and
spend their time on new material, thus tending to increase their
scores. Immediate memory effects, practice and the confidence
induced by familiarity with the material will affect scores when
the test is taken for a second time. And, if the interval between
tests is rather long, growth changes will affect the retest scores
and tend to lower the reliability coefficient.
A high test – retest reliability or co-efficient of stability shows
that there is low variable error in the sets of obtained scores and
vice-versa. The error variance contributes inversely to the
coefficient of stability.
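A minimal sketch of the computation, assuming NumPy, is shown below; the two sets of scores are hypothetical.

```python
import numpy as np

first_administration = [42, 38, 50, 45, 30, 48]
second_administration = [44, 36, 52, 47, 33, 46]

# Pearson correlation between the two administrations serves as the
# coefficient of stability (test-retest reliability).
r = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Test-retest reliability: {r:.2f}")
```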
b) Alternate or Parallel forms Method (Co-efficient of Equivalence
Reliability) : When alternative or parallel forms of a test can be
developed, the correlation between Form-‘A’ and Form ’B’ may
be taken as a reliability index.
The reliability index depends upon the likeness of the two
forms of the test. When the two forms are virtually alike, the
reliability estimate will be too high; when they are not sufficiently
alike, it will be too low. The two forms of the test are
administered to the same sample of subjects, either on the same day
or after a considerable gap. Pearson's method of correlation is used for
calculating the correlation between the sets of scores obtained by
administering the two forms of the test. The co-efficient of
correlation is termed the co-efficient of equivalence.
c) The Split-Half Method (The Co-efficient of Stability and
Equivalence) : The test is administered once to a sample of
subjects. Each individual's score is obtained in two parts, the
odd-numbered items and the even-numbered items being scored
separately. The co-efficient of correlation is then calculated between
the two halves of the scores. This co-efficient indicates the
reliability of the half test. The self-correlation co-efficient of the whole
test is then estimated by using the Spearman-Brown prophecy formula.
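The sketch below, assuming NumPy and hypothetical half-test scores, shows the split-half estimate stepped up with the Spearman-Brown prophecy formula, r_full = 2 x r_half / (1 + r_half).

```python
import numpy as np

odd_item_scores = [18, 15, 20, 12, 17, 19]    # half-test scores (odd items)
even_item_scores = [17, 14, 19, 13, 18, 18]   # half-test scores (even items)

# Correlation between the two halves estimates the reliability of a half-length test.
r_half = np.corrcoef(odd_item_scores, even_item_scores)[0, 1]

# Spearman-Brown prophecy formula projects the reliability of the full-length test.
r_full = 2 * r_half / (1 + r_half)
print(f"Half-test r = {r_half:.2f}, full-test reliability = {r_full:.2f}")
```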
d) The method of ‘Rational Equivalence (Co-efficient of Internal
Consistency) : The method of rational equivalence stresses the
inter-correlations of items in the test and the correlations of the
items with the test as a whole. The assumption is that all items
have the same or equal difficulty value, but not necessarily that the
same persons solve each item correctly.
Factors Influencing Reliability :
(i) Interval : With any method involving two separate testing
occasions, the longer the interval of time between the two test
administrations, the lower the co-efficient will tend to be.
(ii) Test Length : Adding equivalent items makes a test more
reliable, while deleting them makes it less reliable.
A longer test will provide a more adequate sample of the
behaviour being measured, and the scores are apt to be less
influenced by chance factors.
Lengthening a test is, however, limited by a number of practical
considerations like time, fatigue, boredom and the limited stock of
good items.
(iii)Inappropriate Time Limit : A test is considered to be a pure
speed test if everyone who reaches an item gets it right, but no
one has the time to finish all the items. A power test is one in
which everyone has time to try all the items but, because of the
difficulty level, no one obtains a perfect score.
(iv)Group Homogeneity : Other things being equal, the more
heterogeneous the group, the higher the reliability. The test is
more reliable when applied to a group of pupils with a wide
range of ability than one with a narrow range of ability.
(v) Difficulty of the Items : Tests in which there is little variability
among the scores give lower reliability estimates than tests in
which the variability is high. Too difficult or too easy tests for a
group will tend to be less reliable because the differences
among the pupils in such tests are narrow.
(vi) Objectivity of Scoring : The more subjectively a measure is
scored, the lower its reliability. Objective-type tests are more
reliable than subjective/Essay type tests.
(vii) Ambiguous Wording of Items : When the questions are
interpreted in different ways at different times by the same
pupils, the test becomes less reliable.
(viii) Inconsistency in Test Administration : Deviations in
timing, procedure or instructions, fluctuations in the interest
and attention of the pupils, and shifts in emotional attitude make a
test less reliable.
(ix) Optional Questions : If optional questions are given, the same
pupils may not attempt the same items on a second
administration, thereby the reliability of the test is reduced.
8.5 ITEM ANALYSIS :
Item analysis begins after the test is over. The responses of
the examinees are to be analysed to check the effectiveness of the
test items. The teacher must come to some judgments regarding
the difficulty level, discriminating power and content validity of
items. Only those items which are effective are to be retained,
while those which are not should either be discarded or improved.
This is known as the process of item-analysis.
A test should be neither too easy nor too difficult, and each
item should discriminate validly between the high and low achieving
students. Item analysis provides information about:
(i) The difficulty value of each item.
(ii) The discriminating power of each item.
(iii) The effectiveness of distracters in the given item.
8.6 STEPS INVOLVED IN ITEM – ANALYSIS :
(i) Arrange the response sheets from the highest score to the
lowest score.
(ii) From the ordered set of response sheets, make two groups.
Put those with the highest scores in one group (top 27%) and
those with the lowest scores (lowest 27%) in the other group.
The responses of the respondents in the middle 46% of the
group are not included in the analysis.
(iii) For each item (T/F, completion types) count the number of
respondents in each group who answered the item correctly.
For alternate response type of items, count the number of
students in each group who chose each alternative.
A] Estimating Item Difficulty :
The difficulty of a test item is indicated by the percentage of
pupils who get the item right:

Difficulty = (R / N) x 100

where R is the number of pupils who answered the item correctly
and N is the total number of pupils who attempted the item.
Expressed as proportions, illustrative difficulty indices are:

Item        Difficulty index
Item 1      0.375
Item 2      0.75
Item 3      0.50
Item 4      0.15
Item 5      0.60

The higher the difficulty index, the easier the item. If an item
is too difficult, it cannot discriminate at all and adds nothing to test
reliability or validity. An item having a 0% or 100% difficulty index
has no discriminating value. Half of the items may have difficulty
indices between 0.25 and 0.75; one-fourth of the items should
have a difficulty index greater than 0.75 and one-fourth of the
items should have a difficulty index less than 0.25.
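A minimal sketch of the calculation is given below; the counts of correct responses are hypothetical figures chosen to reproduce the illustrative indices above.

```python
def difficulty_index(correct, attempted):
    """Proportion of pupils who answered the item correctly
    (multiply by 100 for a percentage)."""
    return correct / attempted

# Hypothetical (R, N) counts for five items.
items = {"Item 1": (15, 40), "Item 2": (30, 40), "Item 3": (20, 40),
         "Item 4": (6, 40), "Item 5": (24, 40)}

for name, (r, n) in items.items():
    print(f"{name}: difficulty = {difficulty_index(r, n):.3f}")
```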
B] Estimating the Discrimination Index :
The discriminating power of a tool item refers to the degree
to which it discriminates between the bright and the dull pupils in a
group. An estimate of an item's discriminating power can be
obtained by the formula:

Discriminating power = (Ru - RL) / (N / 2)

where Ru is the number of correct responses from the upper group,
RL is the number of correct responses from the lower group, and
N is the total number of pupils who attempted the item.
If all the respondents from the upper and lower group
answer an item correctly or if all fail to answer it correctly, the item
has no validity, since in neither case does the item separate the
good from the poor respondents in the sample.
The item has high discriminating power if the number of
correct responses in the upper group is considerably greater than the
number of correct responses in the lower group.
Items having zero or negative discriminating power need to be
discarded. The higher the discrimination index, the better the item.
Thus the results of item-analysis help one judge the worth or
quality of a tool and also help in revising the tool items.
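A minimal sketch of the discrimination-index formula, with hypothetical counts, is shown below.

```python
def discrimination_index(r_upper, r_lower, n_total):
    """D = (Ru - RL) / (N / 2), using the formula given above."""
    return (r_upper - r_lower) / (n_total / 2)

# Hypothetical example: 18 pupils in the upper group and 6 in the lower group
# answered the item correctly, out of 40 pupils in the two groups combined.
d = discrimination_index(18, 6, 40)
print(f"Discrimination index: {d:.2f}")   # 0.60 indicates a strongly discriminating item
```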
(v) Standardisation of a Tool : A tool is said to be standardized if
it is 1) constructed according to some well defined procedure; 2)
administered according to definite instructions; 3) scored according
to a definite plan; and 4) provided with a statement of norms.
A tool is standardized in respect of content; method of
administration; method of scoring; and setting up of norms.
Thus standardization is a process for refining a measuring
instrument through scientific procedures.
Its steps are as follows :
1. Preparing a draft form of the tool and writing items as per
the operational definition of the tool. Items should be
selected in such a way that the expected respondent
behaviour in different situations is reflected in the items.
2. Computing the discrimination index and difficulty index (if it is
a test) of the items; in other words, conducting item
analysis. Through this process, item validity is established.
3. Ascertaining content validity, face validity, construct
validity and criterion validity, as the case may be.
4. Ascertaining the reliability of the tool.
5. Fixing the time limit. This includes recording the time
taken by different individuals at the time of the preliminary
try-out so as to fix the time limit for the final administration
of the tool. It also depends upon the purpose of the tool.
Time allowances must always take into consideration the age
and ability of the respondents, the type of items used and
the complexity of the learning outcomes to be measured.
6. Writing the directions for administering the tool. Careful
instructions for responding to the different types of items and
for recording responses should be provided. The
directions should be clear, complete and concise so that
each and every respondent knows what he/she is expected
to do. The respondent should be instructed how and where
to mark the items, the time allowed and any corrections for
errors to be made in scoring. Instructions for scoring are to
be given in the test manual.
7. Preparing a scoring key. To ensure objectivity in scoring,
the scoring should be done in a pre-determined manner. In
quantitative research, the scoring key is prepared in advance.
8. Establishing norms. This means computing the norms (age-wise,
gender-wise, grade-wise, urban-rural location-wise and so
on). Norms provide the user of a standardized tool with the
basis for a practical interpretation and application of the
results. A respondent's score can be interpreted only by
comparing it with the scores obtained by similar
respondents. In the process of standardization, the tool
must be administered to a large, representative sample of
those for whom it is designed.
9. Preparing the manual of the tool. Every standardized tool
should be accompanied by a tool manual. The purpose of
the manual is to explain what the tool is supposed to
measure, how it was constructed, how it should be
administered and scored, and how the results should be
interpreted and used. It should also explain the nature of
the sample selected, the number of cases in the sample and
the procedure for obtaining the norms. The manual should
display the weaknesses as well as the strengths of the tool
and should provide examples of ways in which the tool can
be used as well as warnings concerning limitations and
possible misuse of the results.
Suggested Readings :
1. Best, J. and Kahn, J. Research in Education (9th Ed.). New Delhi:
Prentice Hall of India Pvt. Ltd., 2006.
2. Campbell, D. T. and Fiske, D. W. “Convergent and Discriminant
Validation by the Multitrait-Multimethod Matrix”. Psychological
Bulletin, 56, pp. 81-105.
3. Garrett, H. E. Statistics in Psychology and Education (5th Ed.).
New York: Longmans Green and Co., 1958.
9
TOOLS OF RESEARCH
Unit Structure :
9.0 Objectives
9.1 Introduction
9.2 Rating scale
9.3 Attitude scale
9.4 Opinionnaire
9.5 Questionnaire
9.6 Checklist
9.7 Semantic Differential scale
9.8 Psychological Test
9.9 Inventory
9.10 Observation
9.11 Interview
9.12 Let us sum up
9.0 OBJECTIVES :
After reading this unit you will be able to :
State different types of tools and techniques used for data
collection
Distinguish the basic difference between tools and techniques.
Describe concept, purpose and uses of various tools and
techniques in research.
State the tools coming under enquiry forms, psychological tests,
observation and interview.
9.1 INTRODUCTION :
In the last chapter, you have studied about how to prepare a
research tool. In this chapter we will study what are those research
tools, their concepts and uses in collection of data.
In every research work, it is essential to collect factual
material or data unknown or untapped so far. They can be obtained
from many sources, direct or indirect. It is necessary to adopt a
systematic procedure to collect essential data. Relevant data,
adequate in quantity and quality should be collected. They should
be sufficient, reliable and valid.
For collecting new, unknown data required for the study of
any problem, you may use various devices, instruments, apparatus
and appliances. For each and every type of research we need certain
instruments to gather new facts or to explore new fields. The
instruments thus employed as means for collecting data are called
tools.
The selection of suitable instruments or tools is of vital
importance for successful research. Different tools are suitable for
collecting various kinds of information for various purposes. The
research worker may use one or more of the tools in combination for
his purpose.
Research students should therefore familiarise
themselves with the varieties of tools, with their nature, merits and
limitations. They should also know how to construct and use them
effectively. The systematic way and procedure by which a complex
or scientific task is accomplished is known as the technique.
Technique is the practical method, skill or art applied to a
particular task. So, as researchers, we should be aware of both the
tools and techniques of research.
The major tools of research in education can be classified
broadly into the following categories.
A. Inquiry forms
Questionnaire
Checklist
Score-card
Schedule
Rating Scale
Opinionnaire
Attitude Scale
B. Observation
C. Interview
D. Sociometry
E. Psychological Tests
Achievement Test
Aptitude Test
Intelligence Test
Interest inventory
Personality measures etc.
In this unit we will discuss some of the tools in each of these
categories.
9.2 RATING SCALE :
The rating scale is one of the enquiry forms. Rating is a term
applied to an expression of opinion or judgment regarding some
situation, object or character. Opinions are usually expressed on a
scale of values. Rating techniques are devices by which such judgments
may be quantified. The rating scale is a very useful device in assessing
quality, especially when quality is difficult to measure objectively. For
example, “How good was the performance?” is a question which
can hardly be answered objectively.
Rating scales record judgments or opinions and indicate the
degree or amount of a quality; the different degrees of the quality are
arranged along a line, which is the scale. For example: How good was
the performance?
Excellent Very good Good Average Below average Poor Very poor
___|________|_______|_____|_________|_________|_____|____
This is the most commonly used instrument for making
appraisals. It has a large variety of forms and uses. Typically, they
direct attention to a number of aspects or traits of the thing to be
rated and provide a scale for assigning values to each of the aspects
selected. They try to measure the nature or degree of certain aspects
or characteristics of a person or phenomenon through the use of a
series of numbers, qualitative terms or verbal descriptions.
Ratings can be obtained through one of three major
approaches:
Paired comparison
Ranking and
Rating scales
The first attempt at rating personality characteristics was the
man-to-man technique devised during World War I. This technique
calls for a panel of raters to rate every individual in comparison to a
standard person. This is known as the paired comparison approach.
In the ranking approach every single individual in a group is
compared with every other individual, and the judgments are arranged
in the form of a scale.
In the rating scale approach, which is the more common and
practical method, rating is based on rating scales: a procedure
which consists of assigning to each trait being rated a scale value
giving a valid estimate of its status and then combining the separate
ratings into an overall score.
Purpose of Rating Scale:
Rating scales have been successfully utilized for measuring
the following:
Teacher Performance/Effectiveness
Personality, anxiety, stress, emotional intelligence etc.
School appraisal including appraisal of courses, practices and
programmes.
Useful hints on Construction of Rating Scale:
A rating scale includes three factors like:
i) The subjects or the phenomena to be rated.
ii) The continuum along which they will be rated and
iii) The judges who will do the rating.
All three factors should be taken care of carefully
when you construct the rating scale.
1) The subjects or phenomena to be rated are usually a limited
number of aspects of a thing or of a traits of a person. Only the
most significant aspects for the purpose of the study should be
chosen. The usual may to get judgement is on five to seven point
scales as we have already discussed.
2) The rating scale is always composed of two parts:
i) An instruction which names the subject and defines the
continuum and
ii) A scale which defines the points to be used in rating.
3) Any one can serve as a rater where non-technical opinions, likes
and dislikes and matters of easy observation are to be rated. But
only well informed and experienced persons should be selected
for rating where technical competence is required. Therefore,
you should select experts in the field as rater or a person who
form a sample of the population in which the scale will
subsequently be applied. Pooled judgements increase the
reliability of any rating scale. So employ several judges,
depending on the rating situation to obtain desirable reliability.
Use of Rating Scale :
Rating scales are used for testing the validity of many
objective instruments like paper-pencil inventories of personality.
They are also advantageous in the following fields:
Helpful in writing reports to parents
Helpful in filling out admission blanks for colleges
Helpful in finding out student needs
Making recommendations to employers.
Supplementing other sources of understanding about the child
Stimulating effect upon the individuals who are rated.
Limitations of Rating Scale :
The rating scales suffer from many errors and limitations like
the following:
The Generosity Error :
As you know, raters would not like to run down their
own people by giving them low ratings, so they give
high ratings to almost all cases. Sometimes the raters are also
inclined to be unduly generous in rating aspects which they had no
opportunity to observe. If the raters rate on the higher side due to
these factors, it is called the generosity error of rating.
The Errors of Central Tendency :
Some observers want to keep themselves in a safe position.
Therefore, they rate near the midpoint of the scale, rating almost
everyone as average.
Stringency Error :
Stringency error is just the opposite of the generosity error.
Such raters are very strict, cautious and hesitant to rate on the
average or higher side. They have a tendency to rate all individuals
low.
The Halo Error :
When a rater's rating of one aspect is influenced by another
aspect, it is called the halo effect. For example, if a person is rated
on the higher side on achievement because of his punctuality or
sincerity, irrespective of the correctness of his answers, it is a halo
effect. The bias of the rater carries over from one quality to another.
The Logical Error :
It is difficult to convey to the rater just what quality one
wishes him to evaluate. An adjective or adverb may have no
universal meaning. If the terms are not properly understood by the
rater and he rates anyway, it is called the logical error. Therefore,
brief behavioural statements having clear objectives should be used.
Check Your Progress - I
Q.1 What is rating scale?
Q.2 What are the approaches of rating?
Q.3 Explain the various purposes of rating scale?
Q.4 What are the factors to be considered during construction of
rating scale?
Q.5 What is a rating scale? Explain its uses.
Q.6 Write short notes on:
i) The Generosity Error
ii) The Logical Error
iii) The Halo Error
iv) Stringency error
v) The errors of central Tendency
9.3 ATTITUDE SCALE :
The attitude scale is a form of appraisal procedure and is also
one of the enquiry forms. Attitude scales have been designed to
measure the attitude of a subject or group of subjects towards issues,
institutions and groups of people.
The term attitude is defined in various ways: “The behaviour
which we define as attitudinal or attitude is a certain observable set
of the organism, a relative tendency preparatory to and indicative of
more complete adjustment.”
- L. L. Bernard
“An attitude may be defined as a learned emotional response
set for or against something.”
- Barr David Johnson
An attitude is spoken of as a tendency of an individual to react
in a certain way towards a phenomenon. It is what a person feels or
believes in. It is the inner feeling of an individual. It may be
positive, negative or neutral.
Opinion and attitude are sometimes used in a synonymous
manner, but there is a difference between the two. You will be able to
see it when we discuss the opinionnaire. An opinion may not
lead to any kind of activity in a particular direction, but an attitude
compels one to act either favourably or unfavourably according to
what one perceives to be correct. We can evaluate attitudes through a
questionnaire, but it is ill adapted for scaling accurately the
intensity of an attitude. Therefore, the attitude scale is essential, as it
attempts to minimise the difficulty of the opinionnaire and questionnaire
by defining the attitude in terms of a single attitude object. All
items, therefore, may be constructed with graduations of favour or
disfavour.
Purpose of Attitude Scale :
In educational research, these scales are used especially for
finding the attitudes of persons on different issues like:
Co-education
Religious education
Corporal punishment
Democracy in schools
Linguistic prejudices
International co-operation etc.
Characteristics of Attitude Scale :
Attitude scale should have the following characteristics.
It provides for a quantitative measure on a unidimensional scale or
continuum.
It uses statements from the extreme positive to extreme negative
position.
It generally uses a five point scale as we have discussed in rating
scale.
It could be standardised and norms are worked out.
It disguises the attitude object rather than directly asking about
the attitude on the subject.
Examples of Some Attitude Scale :
Two popular and useful methods of measuring attitudes indirectly,
commonly used for research purposes, are:
Thurstone's technique of scaled values.
Likert's method of summated ratings.
Thurstone Technique :
Thurstone Technique is used when attitude is accepted as a
uni-dimensional linear Continuum. The procedure is simple. A
large number of statements of various shades of favourable and
unfavourable opinion on slips of paper, which a large number of
judges exercising complete detachment sort out into eleven plies
ranging from the most hostile statements to the most favourable
ones. The opinions are carefully worded so as to be clear and
unequivocal. The judges are asked not express tier opinion but to
216
sort them at their face value. The items which bring out a marked
disagreement between the judges un assigning a position are
discarded. Tabulations are made which indicate the number of
judges who placed each item in each category. The next step
consists of calculating cumulated proportions for each item and
ogives are constructed. Scale values of each item are read from the
ogives, the values of each item being that point along the baseline in
terms of scale value units above and below which 50% of the judges
placed the item. It we‘ll be the median of the frequency distribution
in which the score ranges from 0 to 11.
The respondent is to give his reaction to each statement by
endorsing or rejecting it. The median values of the statements that
he checks establishes his score, or quantifies his opinion. He wins a
score as an average of the sum of the values of the statements he
endores.
Thurstone technique is also known as the technique equal
appearing intervals.
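A minimal sketch of scoring a respondent on a Thurstone scale is given below; the scale values of the endorsed statements are hypothetical.

```python
from statistics import mean, median

# Scale values of the statements this respondent endorsed (agreed with).
endorsed_scale_values = [9.5, 8.2, 7.4]

# The respondent's attitude score is derived from the scale values of the
# endorsed statements; the text describes it both as their average and as a median.
print("Mean score:", round(mean(endorsed_scale_values), 2))
print("Median score:", median(endorsed_scale_values))
```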
Sample Items from Thurstone-Type Scales :

Statement                                                        Scale value
I think this company treats its employees better than
any other company does.                                          10.4
If I had to do it over again I'd still work for this company.    9.5
The workers put as much over on the company as the
company puts over on them.                                       5.1
You have got to have pull with certain people around
here to get ahead.                                               2.1
An honest man fails in this company.                             0.8
The Likert Scale :
The Likert scale uses items worded for or against the
proposition, with five point rating response indicating the strength of
the respondent‘s approval or disapproval of the statement. This
method removes the necessity of submitting items to the judges for
working out scaled values for each item. It yields scores very
similar to those obtained from the Thurstone scale. It is an
improvement over the Thurstone method.
The first step is the collection of a number of statements
about the subject in question. Statements may or may not be correct,
but they must be representative of opinions held by a substantial
number of people. They must express definite favourableness or
unfavourableness to a particular point of view. The number of
favourable and unfavourable statements should be approximately
equal. A trial test may be administered to a number of subjects.
Only those items that correlate with the total test should be retained.
The Likert scaling technique assigns a scale value to each of
the five responses. All favourable statements are scored from
maximum to minimum, i.e. from a score of 5 for strongly agree
down to a score of 1 for strongly disagree. Negative statements, or
statements opposing the proposition, are scored in the opposite
order, i.e. from a score of 1 for strongly agree up to a score of 5 for
strongly disagree.
The total of these scores on all the items measures a
respondent's favourableness towards the subject in question. If a
scale consists of, say, 30 items, the following score values will be of
interest.
30 x 5 = 150    Most favourable response possible
30 x 3 = 90     A neutral attitude
30 x 1 = 30     Most unfavourable attitude

It is thus known as a method of summated ratings. The
summed-up score of any individual would fall between 30 and 150.
Scores above 90 indicate a favourable attitude and scores below 90
an unfavourable attitude.
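The summated-rating computation can be sketched as follows; the response key follows the 5-to-1 scoring described above, and the item responses are hypothetical.

```python
# Scoring keys for favourable (positive) and unfavourable (negative) statements.
positive_key = {"SA": 5, "A": 4, "U": 3, "D": 2, "SD": 1}
negative_key = {"SA": 1, "A": 2, "U": 3, "D": 4, "SD": 5}

# (response, is_positive_statement) for five illustrative items.
responses = [("A", True), ("SA", True), ("D", False), ("U", True), ("SD", False)]

score = sum((positive_key if positive else negative_key)[answer]
            for answer, positive in responses)
print("Summated score:", score)   # higher totals indicate a more favourable attitude
```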
Sample Items from a Likert-Type Minnesota Scale of Morale :

Items                                                         Responses
Times are getting better.                                     SA A U D SD
Any man with ability and willingness to work hard has a
good chance of being successful.                              SA A U D SD
Life is just a series of disappointments.                     SA A U D SD
It is great to be living in these exciting times.             SA A U D SD
Success is more dependent on luck than on real ability.       SA A U D SD
Limitations Of Attitude Scale :
In the attitude scale the following limitations may occur:
An individual may express a socially acceptable opinion to conceal
his real attitude.
An individual may not be a good judge of himself and may not
be clearly aware of his real attitude.
He may not have been confronted with a real situation to discover
what his real attitude towards a specific phenomenon was.
There is no basis for believing that the five positions indicated in
the Likert scale are equally spaced.
It is unlikely that the statements are of equal value in 'for-ness' or
'against-ness'.
It is doubtful whether equal scores obtained by several
individuals would indicate equal favourableness towards a given
position.
It is unlikely that a respondent can validly react to a short
statement on a printed form in the absence of real-life qualifying
situations.
In spite of the anonymity of responses, individuals tend to respond
according to what they should feel rather than what they really
feel.
However, until more precise measures are developed, the attitude
scale remains the best device for the purpose of measuring attitudes
and beliefs in social research.
Check Your Progress – II
Q.1 What is an attitude scale? Explain its purpose and
characteristics.
Q.2 What is an attitude scale? Explain the methods of measuring
attitudes in research.
Q.3 Define attitude. Explain the Likert scale to measure attitude.
9.4 OPINIONNAIRE :
“Opinion polling or opinion gauging represents a single
question approach. The answers are usually in the form of 'yes' or
'no'. An undecided category is often included. Sometimes a large
number of response alternatives is provided.”
- Anne Anastasi
The terms opinion and attitude are not synonymous, though
sometimes we use them synonymously. We have so far discussed the
attitude scale and noted that attitudes are expressed through
opinions. You will understand the difference between the opinionnaire
and the attitude scale as we discuss the opinionnaire, its
characteristics and purposes.
Opinion is what a person says on certain aspects of the issue
under consideration. It is an outward expression of an attitude held
by an individual. Attitudes of an individual can be inferred or
estimated from his statements of opinion.
An opinionnaire is defined as a special form of inquiry. It is
used by the researcher to collect the opinions of a sample of the
population on certain facts or factors of the problem under
investigation. These opinions on different facets of the problem under
study are further quantified, analysed and interpreted.
Purpose :
Opinionnaires are usually used in research of the descriptive
type, which demands a survey of the opinions of the concerned
individuals. Public opinion research is an example of an opinion
survey. Opinion polling enables the researcher to forecast
coming happenings in a successful manner.
Characteristics :
The opinionnaire makes use of statements or questions on
different aspects of the problem under investigation.
Responses are expected either on three point or five point scales.
It uses favourable or unfavourable statements.
It may be sub-divided into sections.
The Gallup poll ballots generally make use of questions instead of
statements.
The public opinion polls generally rely on personal contacts
rather than mail ballots.
Sample Items of Opinionnaire :
The following statements are from an opinionnaire on the
reforms in educational administration introduced in A. P. during
1956-66.

1) Democratic decentralization has helped to develop
   democratic values and practices in rural people.          A U D
2) There has been consequent improvement of
   educational standards.                                    A U D
3) Specified subject inspectorate is better than panel
   type of inspectorate.                                     A U D
4) Inspection stripped of administrative powers does
   not help much.                                            A U D
5) Primary education should be brought under a
   separate directorate as was done in some states.          A U D
9.5 QUESTIONNAIRE :
A questionnaire is a form prepared and distributed to secure
responses to certain questions. It is a device for securing answers to
questions by using a form which the respondent fills by himself. It
is a systematic compilation of questions that are submitted to a
sampling of population from which information is desired.
Questionnaires rely on written information supplied directly by
people in response to questions. The information from
questionnaires tends to fall into two broad categories – 'facts' and
'opinions'. It is worth stressing that, in practice, questionnaires are
very likely to include questions about both facts and opinions.
Purpose :
The purpose of the questionnaire is to gather information
from widely scattered sources. It is mostly used in cases
where one cannot readily see personally all of the people from
whom responses are desired, or where there is no
particular reason to see them personally.
Types :
Questionnaires can be of various types on the basis of their
preparation, such as:
Structured v/s Non Structured
Closed v/s Open
Fact v/s Opinion
Structured v/s Non-Structured Questionnaire :
The structured questionnaire contains definite, concrete and
directed questions, whereas the non-structured questionnaire is often
used as an interview guide. It may consist of partially completed
questions.
Closed v/s Open Questionnaire :
Questions that call for short check responses are known as
the restricted or closed form type. For example, they provide for
marking a yes or no, a short response, or checking an item from a list
of responses. Here the respondent is not free to write a response of
his own; he has to select from the supplied responses. On the other
hand, in the case of an open-ended questionnaire, the respondent is
free to respond in his own words. Many questionnaires also
include both closed and open type questions. The researcher selects
the type of questionnaire according to the needs of the study.
Fact and Opinion :
In the case of a fact questionnaire, the respondent is expected to
give information about facts without any reference to his opinion or
attitude about them.
In the case of an opinion questionnaire, the respondent gives
information about the facts along with his own opinion and attitude.
Planning the Use of Questionnaire :
The successful use of questionnaire depends on devoting the
right balance of effort to the planning stage, rather than rushing too
early into administering the questionnaire. Therefore, the researcher
should have a clear plan of action in mind, and costs, production,
organization, the time schedule and permissions should be taken care
of at the beginning. When designing a questionnaire, the characteristics
of a good questionnaire should be kept in mind.
Characteristics of A Good Questionnaire :
Questionnaire should deal with important or significant topic to
create interest among respondents.
It should seek only that data which can not be obtained from
other sources.
It should be as short as possible but should be comprehensive.
It should be attractive.
Directions should be clear and complete.
It should be presented in good psychological order, proceeding
from general to more specific responses.
Double negatives in questions should be avoided.
Putting two questions in one question also should be avoided.
It should avoid annoying or embarrassing questions.
It should be designed to collect information which can be used
subsequently as data for analysis.
It should consist of a written list of questions.
The questionnaire should also be used appropriately.
When is it appropriate to use a questionnaire for research?
Different methods are better suited to different circumstances,
and questionnaires are no exception. Questionnaires are at
their most productive:
When used with large numbers of respondents.
When what is required tends to be fairly straightforward
information.
When there is a need for standardized data from identical
questions.
When the time-scale allows for delays.
When resources allow for the cost of printing and postage.
When respondents can be expected to be able to read and
understand the questions.
Designs of Questionnaire :
After constructing the questions on the basis of these
characteristics, the questionnaire should be designed with some
essential routines like:
Background information about the questionnaire.
Instructions to the respondent.
The allocation of serial numbers and
Coding Boxes.
Background Information about The Questionnaire
From both the ethical and the practical point of view, the researcher
needs to provide sufficient background information about the
research and the questionnaire. Each questionnaire should have a
cover page, on which some information appears about:
The sponsor
The purpose
Return address and date
Confidentiality
Voluntary responses and
Thanks
Instructions to the Respondent :
It is very important that instructions are presented at the start of
the questionnaire which indicate what is expected from the
respondents. Specific instructions should be given for each question
where the style of questions varies throughout the questionnaire.
For example: put a tick mark in the appropriate box, circle the
relevant number, etc.
The Allocation of Serial Numbers :
Whether dealing with small or large numbers, a good
researcher needs to keep good records. Each questionnaire therefore
should be numbered.
Advantages of Questionnaire :
Questionnaires are economical; in terms of materials, money
and time they can supply a considerable amount of research data.
It is easier to arrange.
It supplies standardized answers.
It encourages pre-coded answers.
It permits wide coverage.
It helps in conducting an in-depth study.
Disadvantages :
It is reliable and valid, but slow.
Pre-coded questions can deter respondents from answering.
Pre-coded questions can bias the findings towards the researcher.
Postal questionnaires offer little opportunity to check the
truthfulness of the answers.
It cannot be used with illiterates and small children.
Irrespective of these limitations, the general consensus goes in
favour of the use of the questionnaire. Its quality should be improved,
and its use should be restricted to the situations for which it is suited.
Check Your Progress – III
Q.1 Distinguish between opinionnaire and questionnaire.
Q.2 Write short notes on:
(a) Closed and open questionnaire.
(b) Structured and Non-Structure questionnaire.
(c) Fact and Opinion.
The serial number helps to distinguish and locate a questionnaire
if necessary. It also helps to identify the date of distribution, the place
and possibly the person.
Coding Boxes :
229
When designing the questionnaire , it is necessary to prevent
later complications which might arise at the coding stage.
Therefore, you should note the following points:
Locate coding boxes neatly on the right hand side of the page.
Allow one coding box for each answer.
Identify each column in the complete data file underneath the
appropriate coding box in the questionnaire.
Besides these, the researcher should also be very careful
about the length and appearance of the questionnaire, wording of the
questions, order and types of questions while constructing a
questionnaire.
Criteria of Evaluating a Questionnaire :
You can evaluate your questionnaire whether it is a standard
questionnaire or not on the basis of the following criteria:
It should provide full information pertaining to the area of
research.
It should provide accurate information.
It should have a decent response rate.
It should adopt an ethical stance and
It should be feasible.
Like all the tools, it also has some advantages and
disadvantages based on it‘s uses.
9.6 CHECKLIST :
A checklist is a type of informational job aid used to reduce
failure by compensating for the potential limits of human memory and
attention. It helps to ensure consistency and completeness in carrying
out a task. A basic example is a 'to-do list'. A more advanced
checklist lays out tasks to be done according to the time of day
or other factors.
The checklist consists of a list of items with a place to check,
or to mark yes or no.
Purpose :
The main purpose of checklist is to call attention to various
aspects of an object or situation, to see that nothing of importance is
overlooked. For example, if you have to go for an outing for a week,
you have to list the things you have to take with you. Before
leaving home, if you check your baggage against the list, there
will be less chance of forgetting to take any important thing, like
a toothbrush. It ensures the completeness of details of the data.
Responses to the checklist items are largely a matter of fact, not of
judgment. It is an important tool in gathering facts for educational
surveys.
Uses :
Checklists are used for various purposes. As we have
discussed, we can check our requirements for a journey, a birthday
list, a proforma for a passport, or an examination or admission form
to be submitted. In every case, if we check before doing the work,
there is less chance of overlooking any important thing. As it is
useful in our daily life, the checklist is also useful in the educational
field in the following ways:
To collect facts for educational surveys.
To record behaviour in observational studies.
To use in educational appraisal studies – of school buildings,
property, plant, textbooks, instructional procedures, outcomes
etc.
To rate the personality.
To know the interest of the subjects also. Kuder‘s interest
inventory and Strong‘s Interest Blank are also checklists.
Hints on Constructing Checklist :
Items in the checklist may be continuous or divided into groups
of related items.
Items should be arranged in categories and the categories in a
logical or psychological order.
Terms used in the items should be clearly defined.
Checklist should be continuous and comprehensive in nature.
A pilot study should be taken to make it standardized.
Checklist can be constructed in four different ways by arranging
items differently.
(1) In one of the arrangements, all items found in a situation are to be
checked. For example, a subject may be asked to put a check mark ( )
against each activity undertaken in a school.
(2) In the second form, the respondent is asked to respond with a 'yes'
or 'no', or asked to encircle or underline the response to the given
item. For example: (1) Does your school have a house system?
Yes/No
(3) In the third form, all the items are positive statements with checks ( )
to be marked in a column on the right. For example: (1) The
school functions as a community centre ( ).
(4) In the fourth form, the respondent chooses among given alternatives.
For example: The periodical tests are held – fortnightly / monthly /
quarterly / regularly.
The investigator has to select any one of the formats
appropriate to his problem and queries, or a combination of several,
as required.
Analysis and Interpretation of Checklist Data :
The tabulation and quantification of checklist data is done
from the responses. Frequencies are counted; percentages, averages,
measures of central tendency, measures of variability and
coefficients of correlation are computed as and when necessary. In long
checklists, where related items are grouped together category-wise,
the checks are added up to give total scores for each category. The
category-wise total scores can be compared among themselves or with
similar scores secured through other studies.
The conclusions from checklist data should be arrived at
carefully and judiciously, keeping in view the limitations of the tool
and of the respondents.
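As a purely illustrative sketch (the items and responses below are invented), the following Python code shows how checklist data could be tallied into frequencies, percentages and category-wise totals:

```python
# A minimal sketch of tabulating checklist data (hypothetical responses).
# Each respondent's record maps an item to True (checked) or False (not checked).
responses = [
    {"library": True,  "house_system": False, "sports_day": True},
    {"library": True,  "house_system": True,  "sports_day": True},
    {"library": False, "house_system": False, "sports_day": True},
]

n = len(responses)
for item in responses[0]:
    freq = sum(r[item] for r in responses)      # frequency of 'yes' checks
    pct = 100 * freq / n                        # percentage of respondents
    print(f"{item}: {freq}/{n} checked ({pct:.1f}%)")

# Category-wise total score per respondent (number of items checked)
totals = [sum(r.values()) for r in responses]
print("Total checks per respondent:", totals)
print("Average total:", sum(totals) / n)
```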
Merits :
Students can measure their own behaviour with the help of
checklist.
Easy and simple to use and frame the tools.
Wanted and unwanted behaviours can be included.
Personal - Social development can be checked.
Limitations :
Only the presence or absence of the ability can be tested.
Yes or no type judgement can only be given.
How much can not be tested through checklist.
For example, suppose you want to test the storytelling skill of a
student. You can check only whether the student has developed
the skill or not, but you cannot study how much he has developed it.
When we want to check the 'yes' or 'no' of any ability, a checklist
is used.
Check Your Progress - IV
Q.1 Prepare a checklist for any skill.
9.7 SEMANTIC DIFFERENTIAL SCALE:
Semantic differential is a type of rating scale designed to
measure the connotative meaning of objects, events and concepts.
The connotations are used to derive the attitude towards the given
object, event or concept.
Semantic Differential:
[Figure: a semantic differential form with bipolar adjective pairs such as
sweet-bitter, fair-unfair, warm-cold, beautiful-ugly, meaningful-meaningless,
brave-cowardly and bright-dark.]
Fig. 1 Modern Japanese version of the Semantic Differential. The Kanji
characters in the background stand for "God" and "Wind" respectively, with
the compound reading "Kamikaze". (Adapted from Dimensions of Meaning,
Visual Statistics Illustrated at VisualStatistics.net.)
Osgood‘s semantic differential was designed to measure the
connotative meaning of concepts. The respondent is asked to choose
where his or her position lies, on a scale between two bipolar
adjectives (for example: ―Adequate-Inadequate‖, ―Good-Evil‖ or
―Valuable-Worthless‖). Semantic differentials can be used to
describe not only persons, but also the connotative meaning of
abstract concepts - a capacity used extensively in affect control
theory.
Theoretical Background
Nominalists and Realists :
Theoretical underpinnings of Charles E. Osgood‘s semantic
differential have roots in the medieval controversy between the
nominalists and realists. Nominalists asserted that only real things
are entities and that abstractions from these entities, called
universals, are mere words. The realists held that universals have an
independent objective existence either in a realm of their own or in
the mind of God. Osgood‘s theoretical work also bears affinity to
linguistics and general semantics and relates to Korzybski‘s
structural differential.
Use of Adjectives :
The development of this instrument provides an interesting
insight into the border area between linguistics and psychology.
People have been describing each other since they developed the
ability to speak. Most adjectives can also be used as personality
descriptors. The occurrence of thousands of adjectives in English is
an attestation a of the subtleties in descriptions of persons and their
behaviour available to speakers of English. Roget‘s Thesaurus is an
early attempt to classify most adjectives into categories and was
used within this context to reduce the number of adjectives to
manageable subsets, suitable for factor analysis.
Evaluation, Potency and Activity :
Osgood and his colleagues performed a factor analysis of
large collections of semantic differential scales and found three
recurring attitudes that people use to evaluate words and phrases:
evaluation, potency, and activity. Evaluation loads highest on
adjective pairs such as good-bad, potency on pairs such as strong-weak,
while the pair active-passive defines the activity factor. These
three dimensions of affective meaning were found to be cross-cultural
universals in a study of dozens of cultures.
This factorial structure makes intuitive sense. When our
ancestors encountered a person, the initial perception had to be
whether that person represents a danger. Is the person good or bad?
Next, is the person strong or weak? Our reactions to a person
markedly differ if the person is perceived as good and strong, good and
weak, bad and weak, or bad and strong. Subsequently, we might extend our
initial classification to include cases of persons who actively threaten
us or represent only a potential danger, and so on. The evaluation,
potency and activity factors thus encompass a detailed descriptive
system of personality. Osgood's semantic differential measures
these three factors. It contains sets of adjective pairs such as warm-cold,
bright-dark, beautiful-ugly, sweet-bitter, fair-unfair, brave-cowardly and
meaningful-meaningless.
The studies of Osgood and his colleagues revealed that the
evaluation factor accounted for most of the variance in scalings, and
related this to the idea of attitudes.
Usage :
The semantic differential is today one of the most widely
used scales in the measurement of attitudes. One of the reasons
is the versatility of the items. The bipolar adjective pairs can be
used for a wide variety of subjects, and as such the scale is
nicknamed ―the ever ready battery‖ of the attitude researcher.
A. Semantic Differential Scale :
This is a seven-point scale and the end points of the scale are
associated with bipolar labels, for example:
Unpleasant   1   2   3   4   5   6   7   Pleasant
Submissive   1   2   3   4   5   6   7   Dominant
Suppose we want to know the personality of a particular person.
We have bipolar options such as:
a. Unpleasant / Pleasant
b. Submissive / Dominant
'Bipolar' means two opposite poles. An individual can score
between 1 and 7 (or, equivalently, between -3 and +3) on each pair.
On the basis of these responses, profiles are made. We can analyse
two or three objects and, by joining the ratings item by item, we get
a profile analysis. It could take any shape depending on the number
of variables.
Profile Analysis :
[Figure: profile lines joining each object's ratings across the scale items.]
Mean and median are used for comparison. This scale helps
to determine overall similarities and differences among objects.
When the Semantic Differential Scale is used to develop an
image profile, it provides a good basis for comparing the images of two
or more items. The big advantage of this scale is its simplicity,
while producing results comparable with those of more complex
scaling methods. The method is easy and fast to administer, and it is
also sensitive to small differences in attitude, highly versatile,
reliable and generally valid.
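A minimal sketch of such a profile computation, assuming invented ratings for two hypothetical objects on three bipolar items, is given below in Python:

```python
# Hypothetical 7-point semantic differential ratings for two objects
# on three bipolar items (1 = negative pole, 7 = positive pole).
ratings = {
    "Object A": {"unpleasant-pleasant": [6, 5, 7],
                 "submissive-dominant": [4, 5, 4],
                 "cold-warm": [6, 6, 5]},
    "Object B": {"unpleasant-pleasant": [3, 4, 2],
                 "submissive-dominant": [5, 6, 6],
                 "cold-warm": [2, 3, 3]},
}

# The profile of each object is its mean rating on every item;
# joining these means across items gives the profile to be compared.
for obj, items in ratings.items():
    profile = {item: sum(vals) / len(vals) for item, vals in items.items()}
    print(obj, profile)
```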
Statistical Properties :
Five items, or 5 bipolar pairs of adjectives, have been proven
to yield reliable findings, which highly correlate with alternative
measures of the same attitude.
The biggest problem with this scale is that the properties of its
level of measurement are unknown. The statistically safest
approach is to treat it as an ordinal scale, but it can be argued that the
neutral response (i.e. the middle alternative on the scale) serves as
an arbitrary zero point and that the intervals between the scale
values can be treated as equal, making it an interval scale.
A detailed presentation on the development of the semantic
differential is provided in the monumental book, Cross-Cultural
Universals of Affective Meaning. David R. Heise‘s Surveying
Cultures provides a contemporary update with special attention to
measurement issues when using computerized graphic rating scales.
9.8 PSYCHOLOGICAL TESTS:
Among the most useful and most frequently employed tools
of educational research, psychological tests occupy a very significant
position. Psychological tests are designed to describe and measure
a sample of certain aspects of human behaviour or inner qualities.
They yield objective descriptions of some psychological aspects of
an individual‘s personality and translate them in quantitative terms.
As we have mentioned earlier there are various kinds of
psychological tests. In this unit we will discuss ‗Aptitude tests‘ and
‗Inventories‘.
Aptitude Tests :
―Aptitude tests attempt to predict the capacities or the degree
of achievement that may be expected from individuals in a particular
activity‖.
Aptitude is a means by which one can find the relative
knowledge of a person in terms of his intelligence and also his
knowledge in general.
Purpose :
The purpose of an aptitude test is to assess a candidate's profile.
An aptitude test helps to check one's knowledge and to filter the good
candidates. Creativity and intelligence are revealed by the aptitude
test. It also checks the intelligence and the speed of the person's
performance.
Importance of Aptitude Test :
Research data show that individually administered aptitude
tests have the following qualities:
They are excellent predictors of future scholastic achievement.
They provide ways of comparing a child's performance with
others in the same situation.
They provide a profile of strengths and weaknesses.
They assess differences among individuals.
Uses Of Aptitude Test :
Aptitude tests are valuable in making programme and
curricula decisions. In general they have three major uses:
Instructional : Teachers can use aptitude test results to adapt their
curricula to match the level of students or to design assignments for
students who differ widely.
Administrative : Results of aptitude tests help in determining
programmes for college on the basis of the aptitude level of high-school
students. They can also identify students to be accelerated or given extra
attention, and help in predicting job-training performance.
Guidance : Results of aptitude tests help counsellors to guide parents
and students. Parents develop realistic expectations of their child's
performance, and students understand their own strengths and
weaknesses.
Intelligence tests are also a kind of aptitude test as they
describe and measure the general ability which enters into the
performance of every activity and thus predict the degree of
achievement that may be expected from individuals in various
activities.
Aptitude tests, however, have proved of great value for
research in educational and vocational guidance, for research on the
selection of candidates for a particular course of study or professional
training, and for research of the complex causal-relationship type.
9.9 INVENTORY :
Inventory is a list, record or catalog containing list of traits,
preferences, attitudes, interests or abilities used to evaluate personal
characteristics or skills.
The purpose of an inventory is to make a list relating to a specific
trait, activity or programme and to check to what extent that trait or
ability is present. There are different types of inventories, such as:
Interest Inventory and
Personality Inventory
Interest Inventory :
Persons differ in their interests, likes and dislikes. Interests
are a significant element in the personality pattern of individuals and
play an important role in their educational and professional careers.
The tools used for describing and measuring the interests of individuals
are the interest inventories or interest blanks. They are self-report
instruments in which the individuals note their own likes and
dislikes. They are of the nature of standardised interviews in which
the subject gives an introspective report of his feelings about certain
situations and phenomena, which is then interpreted in terms of
interests.
The use of interest inventories is most frequent in the areas of
educational and vocational guidance and case studies. Distinctive
patterns of interest that go with success have been discovered
through research in a number of educational and vocational fields.
Mechanical, computational, scientific, artistic, literary, musical,
social service, clerical and many other areas of interest have been
analysed in terms of activities. In terms of specific activities, a
person‘s likes and dislikes are sorted into various interest areas and
percentile scores calculated for each area. The area where a person‘s
percentile scores are relatively higher is considered to be the area of
his greatest interests, the area in which he would be the happiest and
the most successful.
As a part of educational surveys of many kinds, children‘s
interest in reading, in games, in dramatics, in other extracurricular
activities and in curricular work etc. are studied.
One kind of instrument, most commonly used in interest
measurement is known as Strong‘s Vocational Interest Inventory. It
compares the subject‘s pattern of interest to the interest patterns of
successful individuals in a number of vocational fields. This
inventory consists of 400 different items. The subject has to tick
one of the alternatives, i.e. L (for Like), I (for Indifferent) or
D (for Dislike), provided against each item. When the inventory is
standardised, the scoring keys and percentile norms are prepared on
the basis of the responses of a fairly large number of successful
individuals in a particular vocation. A separate scoring key is
therefore prepared for each separate vocation or subject area. The
subject's responses are scored with the scoring key of a particular
vocation in order to know his interest or lack of interest in the
vocation concerned. Similarly, his responses can be scored with
scoring keys standardised for other vocational areas. In this way
you can determine one's areas of vocational interest.
Apart from these well-known interest inventories, there are also
personality inventories to measure personality. You can prepare an
inventory of any ability in order to measure it.
Check Your Progress – V
Q.1 What are psychological tests? Explain the use of aptitude tests
as a tool of research.
Q.2 What are inventories? Explain their role in research with an
example.
9.10 OBSERVATION :
Observation offers the researcher a distinct way of collecting
data. It does not rely on what people say they do, or what they say
they think. It is more direct than that. Instead, it draws on the direct
evidence of the eye to witness events first hand. It is a more natural
way of gathering data. Whenever direct observation is possible it is
the preferable method to use.
Observation method is a technique in which the behaviour of
research subjects is watched and recorded without any direct contact.
It involves the systematic recording of observable phenomena or
behaviour in a natural setting.
Purpose :
The purposes of the observation technique are:
To collect data directly.
To collect a substantial amount of data in a short time span.
To get eye-witness, first-hand data in real-life situations.
To collect data in a natural setting.
Characteristics :
It is necessary to make a distinction between observation as a
scientific tool and the casual observation of the man in the street.
An observation with the following characteristics will be scientific
observation.
Observation is systematic.
It is specific.
It is objective.
It is quantitative.
The record of observation should be made immediately.
Expert observer should observe the situation.
Its results can be checked and verified.
Types of Observation :
On the basis of purpose, observation may be of various types,
such as:
Structured and Unstructured
Participant and Non-participant
Structured and Unstructured Observation :
In the early stage of an investigation, it is necessary to
allow maximum flexibility in observation to obtain a true picture of
the phenomenon as a whole. If, in the early stage, we attempt to
restrict the observation to certain areas, there will be the risk of
overlooking some of the more crucial aspects. As the investigation
proceeds, the investigator identifies the significant aspects and
observes some restricted aspects of the situation in order to derive
more rigorous generalizations. So in the first stage of observation,
the observation is wide and unstructured, and as the investigation
proceeds the observation becomes restricted and structured.
Participant and Non-Participant Observation:
In participant observation, the observer becomes more or less
one of the group under observation and shares the situation as a
visiting stranger, an attentive listener, an eager learner or as a
complete participant observer, registering, recording and interpreting
the behaviour of the group.
In non-participant observation, the observer observes through
one-way screens and hidden microphones. The observer remains
aloof from the group and keeps his observation as inconspicuous as
possible. The purpose of non-participant observation is to observe
the behaviour in a natural setting, so that the subject will not shift his
behaviour and will not be conscious that someone is observing it.
The advantages and disadvantages of participant and non-participant
observation depend largely on the situation. Participant observation is
helpful for studying groups such as criminals, by participating with them
at least for some time. It gives a better insight into their life and
therefore has a built-in validity test. Its disadvantages are that it is
time consuming and that, as the observer develops relationships with the
members, there is a chance of losing his neutrality, objectivity and
accuracy in rating things as they are.
Non-participant observation is used with groups like infants,
children or abnormal persons. It permits the use of recording
instruments and the gathering of large quantities of data.
Therefore, some researchers feel that it is best for the
observer to remain only a partial participant and to maintain his
status of scientific observer apart from the group.
Steps of Effective Observation:
As a research tool, effective observation needs effective:
Planning
Execution
Recording and
Interpretation
Planning:
While planning to employ observation as a research technique
the following factors should be taken into consideration.
The sample to be observed should be adequate.
Units of behaviour to be observed should be clearly defined.
Methods of recording should be simplified.
Detailed instructions should be given to observers, if more than one
observer is employed, to maintain consistency.
Too many variables should not be observed simultaneously.
Excessively long periods of observation without rest periods
should be avoided.
Observers should be fully trained and well equipped.
Records of observation must be comprehensive.
Execution :
A good observation plan leads to success only when followed
with skill and expert execution. Expert execution needs:
Proper arrangement of special conditions for the subject.
Assuming the proper physical position for observing.
Focusing attention on the specific activities or units of behaviour
under observation.
Observing discreetly for the length and number of periods and
intervals decided upon.
Handling well the recording instruments to be used.
Utilising the training received in terms of expertness.
Recording:
The two common procedures for recording observations are:
Simultaneous
Soon after the observation
Which method should be used depends on the nature of the
group and the type of behaviour to be observed. Both methods have
their merits and limitations. The simultaneous form of recording
may distract the subjects, while with recording after the observation
the observer may fail to record complete and exact information.
Therefore, for a systematic collection of data, the various devices of
recording should be used, such as checklists, rating scales and score
cards.
Interpretation:
Interpretation can be done directly by the observer at the time
of his observation. Where several observers are involved, the
problem of uniformity arises. Therefore, in such instances, the
observer merely records his observations and leaves the matter of
interpretation to an expert who is more likely to provide a unified
frame of reference. It must, of course, be recognized that the
interpreter's frame of reference is fundamental to any interpretation,
and it might be advisable to insist on agreement between interpreters
of different backgrounds.
Limitations of Observation :
The limitations of observation are:
Establishing validity is difficult.
Subjectivity is also there.
It is a slow and labourious process.
It is costly in terms of both time and money.
The data may be unmanageable.
There is a possibility of bias.
These limitations can be minimized by systematic observation,
as it provides a framework for observation which all observers will
use. It has the following advantages.
Advantages of Observation :
Data collected directly
Systematic and rigorous
Substantial amount of data can be collected in a relatively short
time span.
Provides pre-coded data ready for analysis.
Inter observer reliability is high.
However, observation is a scientific technique only to the extent
that it serves a formulated research purpose, is planned systematically
rather than occurring haphazardly, is systematically recorded and
related to more general propositions, and is subjected to checks and
controls with respect to validity, reliability and precision.
Check Your Progress – VI
Q.1 Explain the steps of the observational technique with its merits
and limitations.
Q.2 Write short notes on:
(a) Participant and non-participant observation.
(b) Structured and unstructured observation.
9.11 INTERVIEW :
Interviews are an attractive proposition for the project
researcher. Interviews are something more than conversation. They
involve a set of assumptions and understandings about the situation
which are not normally associated with a casual conversation.
Interviews are also referred to as an oral questionnaire by some people,
but they are indeed much more than that. A questionnaire involves
indirect data collection, whereas interview data are collected directly
from others in face-to-face contact. As you know, people are more
hesitant to write something down than to talk. With a friendly
relationship and rapport, the interviewer can obtain certain types of
confidential information which respondents might be reluctant to put
in writing.
Therefore a research interview should be systematically
arranged; it does not happen by chance. The interview is not done by
secretly recording discussions as research data: the consent of the
subject is taken for the purpose of the interview. The words of the
interviewee can be treated as 'on the record' and 'for the record'. They
should not be used for purposes other than the research purpose.
The discussion therefore is not arbitrary or at the whim of one of the
parties. The agenda for the discussion is set by the researcher. It is
dedicated to investigating a given topic.
Importance of Interview :
Whether it is large scale research or small scale research, the
nature of the data collection depends on the amount of resources
available. Interview is particularly appropriate when the researcher
wishes to collect data based on:
Emotions, experiences and feelings.
Sensitive issues.
Privileged information.
It is appropriate when dealing with young children, illiterates, and
people with language difficulties or limited intelligence.
It supplies the detail and depth needed to ensure that a
questionnaire asks valid questions, when used while preparing the
questionnaire. It can also be a follow-up to a questionnaire and
complement it.
It can be combined with other tools in order to corroborate facts
using a different approach.
It is one of the normative survey methods, but it is also applied in
historical, experimental and case studies.
Types of Interview :
Interviews vary in purpose, nature and scope. They may be
conducted for guidance, therapeutic or research purposes. They may
be confined to one individual or extended to several people. The
following discussions describe several types of interview.
Structured Interview :
A structured interview involves tight control over the format of
questions and answers. It is like a questionnaire which is
administered face to face with a respondent. The researcher has a
predetermined list of questions. Each respondent is faced with
identical questions. The choice of alternative answers is restricted to
a predetermined list. This type of interview is rigidly standardised
and formal.
Structured interviews are often associated with social surveys
where researchers are trying to collect large volumes of data from a
wide range of respondents.
Semi-Structured Interview :
In a semi-structured interview, the interviewer also has a clear
list of issues to be addressed and questions to be answered. There is
some flexibility in the order of the topics. In this type, the interviewee
is given a chance to develop his ideas and speak more widely on the
issues raised by the researcher. The answers are open-ended, and
more emphasis is on the interviewee elaborating points of interest.
Unstructured Interview :
In the case of an unstructured interview, emphasis is placed on the
interviewee's thoughts. The role of the researcher is to be as
unintrusive as possible. The researcher introduces a theme or topic
and then lets the interviewee develop his or her ideas and pursue
his or her train of thought. Allowing interviewees to speak their
minds is a better way of discovering things about complex issues. It
gives the opportunity for in-depth investigations.
Single Interview :
This is a common form of semi-structured or unstructured
interview. It involves a meeting between one researcher and one
informant. It is easy to arrange this type of interview. It helps the
researcher to locate specific ideas with specific people. It is also easy
for the interviewer to control the situation.
Group Interview :
In the case of a group interview, more than one informant is
involved; the number involved is normally about four to six people.
Here you may think that it is difficult to get people together to
discuss matters on one occasion, and wonder how many voices can
contribute to the discussion during any one interview. But the crucial
thing to bear in mind here is that a group interview is not an opportunity
for the researcher to put questions to a sequence of individuals taking
turns around a table. 'Group' is crucial here, because it tells us that those
present in the interview will interact with one another and that the
discussion will operate at the level of the group. They can present a
wide range of information and varied viewpoints.
According to Lewis –
"Group interviews have several advantages over individual
interviews. In particular, they help to reveal consensus views, may
generate richer responses by allowing participants to challenge one
another's views, may be used to verify research ideas or data gained
through other methods and may enhance the reliability of responses."
The disadvantage of this type of interview is that the views
of 'quieter' people may not come out, as certain members may
dominate the talk. The biggest disadvantage is that whatever opinions
are expressed tend to be accepted as the view of the group, irrespective
of individual opinions contrary to them. Private opinion is not given
importance.
Focus Group Interview :
This is an extremely popular form of interview technique. It
consists of a small group of people, usually between six and nine in
number. It is useful for non-sensitive and non-controversial topics.
The session usually revolves around a prompt, a trigger, some stimulus
introduced by the interviewer in order to 'focus' the discussion. The
respondents are permitted to express themselves completely, but the
interviewer directs the line of thought. In this case, importance is given
to collective views rather than the aggregate of individual views. It
concentrates on a particular event or experience rather than on a
general line of enquiry.
Requirements of a Good Interview :
As a tool of research, a good interview requires:
Proper preparation.
Skillful execution and
Adequate recording and interpretation.
Preparation for Interview :
The following factors need to be determined in advance of the
actual interview:
Purpose and information needed should be clear.
Which type of interview best suited for the purpose should be
decided.
A clear outline and framework should be systematically
prepared.
Planning should be done for recording responses.
Execution of the Interview :
Rapport should be established.
Desired information should be collected through a stimulating and
encouraging discussion.
The recording device should be used without distracting the
interviewee.
Recording and Interpreting Responses :
It is best to record through a tape recorder.
If the responses are to be noted down, they should be noted either
simultaneously or immediately afterwards.
Instead of recording the responses, sometimes the researcher notes
down the evaluation directly, interpreting the responses.
Advantages of Interview :
The interview technique has the following advantages:
Depth Information :
Interviews are particularly good at producing data which deal
with topics in depth and in detail. Subjects can be probed, issues
pursued and lines of investigation followed over a relatively lengthy
period.
Insights :
The researcher is likely to gain valuable insights based on the
depth of the information gathered and the wisdom of ―key
informants‖.
Equipment :
Interviews require only simple equipment and build on
conversation skills which researchers already have.
Information Priorities :
Interviews are a good method for producing data based on
informants' priorities, opinions and ideas. Informants have the
opportunity to expand their ideas, explain their views and identify
what they regard as the crucial factors.
Flexibility :
Interviews are a flexible method of data collection; adjustments
to the line of enquiry can be made during the interview itself.
Validity :
Direct contact at the point of the interview means that data
can be checked for accuracy and relevance as they are collected.
High response rate :
Interviews are generally pre-arranged and scheduled for a
convenient time and location. This ensures a relatively high
response rate.
Therapeutic:
Interviews can be a rewarding experience for the informant.
Compared with questionnaires, observation and experiments, there is
a more personal element to the method, and people tend to enjoy the
rather rare chance to talk about their ideas at length to a person
whose purpose is to listen and note the ideas without being critical.
Disadvantages of Interviews :
Irrespective of the above advantages, it has the following
disadvantages.
Time Consuming :
Analysis of data can be difficult and time consuming. Data
preparation and analysis is "end-loaded" compared with, for
instance, questionnaires, which are pre-coded and where data are
ready for analysis once they have been collected. The transcribing and
coding of interview data is a major task for the researcher which
occurs after the data have been collected.
Difficulty in data analysis :
This method produces non-standard responses. Semi-structured
and unstructured interviews produce data that are not pre-coded and
have a relatively open format.
Less Reliability :
Consistency and objectivity are hard to achieve. The data
collected are, to an extent, unique owing to the specific content and
the specific individuals involved. This has an adverse effect on
reliability.
Interviewer Effect :
The identity of the researcher may affect the statements of
the interviewees. They may say what they actually do or what they
would prefer to do, and the two may not tally.
Inhibitions :
The tape recorder or video recorder may inhibit the informant.
The interview is an artificial situation where people are speaking for
the record and on the record and this can be daunting for certain
kinds of people.
Invasion of Privacy :
Interviewing can be an invasion of Privacy and may be
upsetting for the informant.
Resources :
The cost of the interviewer's time, of travel and of transcription
can be relatively high if the informants are geographically
widespread.
On the basis of the merits and limitations of the interview
technique, it is used in many ways for research and non-research
purposes. This technique was used in the Commonwealth Teacher
Training Study to identify the traits most essential for success in
teaching. Apart from being an independent data collection tool, it
may play an important role in the preparation of questionnaires and
checklists which are to be put to extensive use.
Check Your Progress - VII
Q.1 Explain different types of interview for the purpose of research.
Q.2 Write short notes on:
(a) Importance of Interview.
(b) Requisites of a good interview.
9.12 LET US SUM UP :
You would recall that we have touched upon the following
learning items in this unit.
For the purpose of collecting new, relevant data for a research
study, the investigator needs to select proper instruments, termed
tools and techniques.
The major tools of research can be classified into broad
categories of inquiry form, observation, interview, social measures
and Psychological tests.
The inquiry forms we have discussed in this unit are the
rating scale, attitude scale, opinionnaire, questionnaire, checklist and
semantic differential scale. Observation and interview are explained
as techniques of data collection. Under psychological tests, aptitude
tests and inventories are discussed.
A rating scale is a technique which is designed or constructed
to assess the personality of an individual. It is very popular in testing,
applied psychology, vocational guidance and counselling as well as
in basic research. Rating scales measure the degree or amount of the
indicated judgments.
Attitude scale is the device by which the feelings or beliefs of
persons are described and measured indirectly through securing their
responses to a set of favourable statements. Thurstone and Likert
scale are commonly adopted for attitude scaling.
Opinionnaire is a special form of inquiry. It is used by the
researcher to collect the opinions of a sample of population. It is
usually used in descriptive type research.
The questionnaire is a frequently used tool. Its purpose is
to gather information from widely scattered sources. Data are
collected in written form through this tool.
Checklist is a selected list of words, Phrases, Sentences and
paragraphs following which an observer records a check mark to
denote the presence or absence of whatever is being observed. It calls
for simple yes / no judgments. The main purpose is to call
attention to various aspects of an object or situation, to see that
nothing of importance is overlooked.
The Semantic Differential Scale is a seven-point scale whose end
points are associated with bipolar labels. This scale
helps to determine overall similarities and differences among objects.
Aptitude tests are psychological tests that attempt to predict the
capacities or the degree of achievement expected from individuals in
a particular activity. Their purpose is to assess a candidate's profile.
Inventory is a list, or record containing traits, preferences,
attitudes interests or abilities used to evaluate personal
characteristics or skills. Strong‘s vocational interest inventory is an
example of interest inventory.
The observation method is a technique in which the behaviour of
research subjects is watched and recorded without any direct contact.
It deals with the overt behaviour of persons in controlled or
uncontrolled situations.
Interview is an oral type of questionnaire where the subject
supplies the needed information in a face to face situation. It is
specially appropriate for dealing with young children, illiterates, dull
and the abnormal.
Unit End Exercises:
1. State the characteristics of a questionnaire.
2. What are the disadvantages of an interview?
3. Prepare items using a Rating Scale, Interview and Questionnaire
for a research proposal.
Reference Books :
Siddhu, Kulbir Singh (1992). Methodology of Research in
Education. Sterling Publishers, New Delhi.
Sukhia, S. P. and Mehrotra, P. V. (1983). Elements of Educational
Research. Allied Publishers Private Limited, New Delhi.
Denscombe, Martyn (1999). The Good Research Guide. Viva
Books Private Limited, New Delhi.
10
DATA ANALYSIS AND REPORT WRITING
Unit Structure :
10.0 Objectives
10.1 Introduction
10.2 Types of Measurement Scale
10.3 Quantitative Data Analysis
10.3.1 Parametric Techniques
10.3.2 Non-Parametric Techniques
10.3.3 Conditions to be satisfied for using parametric
techniques
10.3.4 Descriptive data analysis (Measures of central
tendency, variability, fiduciary limits and graphical
presentation of data)
10.3.5 Inferential data analysis
10.3.6 Use of Excel in Data Analysis
10.3.7 Concepts, use and interpretation of the following statistical
techniques: Correlation, t-test, z-test, ANOVA, Critical
ratio for comparison of percentages and chi-square
(Equal Probability and Normal Probability Hypothesis)
10.4 Testing of Hypothesis
10.0 OBJECTIVES :
After reading this unit the student will be able to :
Explain different types of measurement scales with appropriate
examples.
State the Conditions to be satisfied for using parametric
techniques
List the examples of parametric and non-parametric techniques.
State the statistical measures of descriptive and inferential data
analysis.
Explain concepts of different statistical techniques of data
analysis and their interpretation.
10.1 INTRODUCTION :
Statistical data analysis depends on several factors such as the
type of measurement scale used, the sample size, sampling technique
used and the shape of the distribution of the data. These will be
described in this unit.
10.2 SCALES OF MEASUREMENT :
The level of measurement refers to the relationship among the
values that are assigned to the attributes for a variable. It is important
to understand the level of measurement as it helps you to decide how
to interpret the data from the variable concerned. Second, knowing
the level of measurement helps you to decide which statistical
techniques of data analysis are appropriate for the numerical values
that were assigned to the variables.
Types of Scales of Measurement
There are typically four scales or levels of measurement that are
defined :
a. Nominal
b. Ordinal
c. Interval
d. Ratio
a. Nominal Scale : It is the lowest level of measurement. A
nominal scale is simply some placing of data into categories,
without any order or structure. A simple example of a nominal
scale is the terms we use for categories. For example, religion is a
nominal variable, as the names of religions are categories where no
ordering is implied. Other examples are gender, medium of
instruction, school types and so on. In research activities, a YES/NO
scale is also an example of a nominal scale. In nominal measurement
the numerical values just "name" the attribute uniquely. The
statistical techniques of data analysis which can be used with
nominal scales are usually non-parametric.
b. Ordinal Scale : An ordinal scale is next up the list in terms of
power of measurement. In ordinal measurement, the attributes
can be rank-ordered. Here, distances between attributes do not
have any meaning. For example, on a survey you might code
Educational Attainment as 0=less than High School; 1=some
High School; 2=High School; 3=Junior College; 4=College
degree; 5=post-graduate degree. In this measure, higher numbers
mean more education. But is distance from 0 to 1 same as 3 to 4?
Of course not. The simplest ordinal scale is a ranking. There is
no objective distance between any two points on your subjective
scale. An ordinal scale only lets you interpret gross order and not
the relative positional distances. The statistical techniques of data
analysis which can be used with ordinal scales are usually
non-parametric statistics. These would include Spearman's rank-order
coefficient of correlation and non-parametric analysis of
variance.
c. Interval Scale : In interval measurement, the distance between
attributes does have meaning. For example, when we measure
temperature (in Fahrenheit), the distance between 30 and 40 is
same as distance between 70 and 80. The interval between values
is interpretable. Because of this, it makes sense to compute an
average of an interval variable, whereas it does not make sense to
do so for ordinal scales. The rating scale is an interval scale. i.e.
when you are asked to rate your job satisfaction on a 5 point
scale, from strongly dissatisfied to strongly satisfied, you are
using an interval scale. This means that we can interpret
differences in the distance along the scale. There is no absolute
zero in the interval scale. For example, if a student gets a score of
zero on an achievement test, it does not imply that his
knowledge/ability in the subject concerned is zero as on another,
similar test in the same subject consisting of another set of
questions, the student could have got a higher score. Thus a score
of zero does not imply a complete lack of the trait being
measured in the subject. When variables are measured in the
interval scale, parametric statistical techniques of data analysis
can be used. However, non-parametric techniques can also be
used with interval and ratio data.
d. Ratio Scale : Finally, in ratio scale, there is always an absolute
zero that is meaningful. This means that you can construct a
meaningful fraction (or ratio) with a ratio variable. Weight is a
ratio variable. In educational research, most "count" variables are
ratio, for example, the number of students in a classroom. This is
because you can have zero students and because it is meaningful
to say that "...we had twice as many students in a classroom as
compared to another classroom." A ratio scale is the top level of
measurement. When variables are measured in the ratio scale,
parametric statistical techniques of data analysis can be used.
It is important to recognise that there is a hierarchy implied in
the level of measurement idea. In general, it is desirable to have a
higher level of measurement (e.g., interval or ratio) rather than a
lower one (nominal or ordinal).
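The hierarchy can be illustrated with a small Python sketch (the variables and values below are invented): only the mode is meaningful for a nominal variable, the median can be added for an ordinal variable, and the mean is appropriate for interval or ratio data.

```python
import statistics

# Invented data for three variables measured on different scales.
religion = ["Hindu", "Muslim", "Christian", "Hindu", "Hindu"]   # nominal
attainment = [0, 2, 2, 3, 5]   # ordinal codes (0 = less than High School ... 5 = post-graduate)
temperature_f = [68.0, 70.5, 71.0, 69.5, 72.0]                  # interval

print("Nominal  -> mode:", statistics.mode(religion))
print("Ordinal  -> median:", statistics.median(attainment))     # the mean of ordinal codes is not meaningful
print("Interval -> mean:", statistics.mean(temperature_f))
```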
Check Your Progress - I
Q.1 State the different types of measurement scales.
Q.2 Give three examples where you can use each of the following
scales of measurement :
(a) Nominal Scale
(b) Ordinal Scale
(c) Interval Scale
(d) Ratio Scale
10.3 QUANTITATIVE DATA ANALYSIS
10.3.1 Parametric and
10.3.2 Non-Parametric Techniques :
10.3.3 Conditions to be satisfied for Using Parametric
Techniques :
These are as follows :
1. The sample size is greater than 30.
2. Data are normally distributed.
3. Data are measured in interval or ratio scales.
4. Variances of different sub-groups are equal or nearly equal.
5. The sample is selected randomly.
6. Observations are independent.
10.3.4 Descriptive Data Analysis
Descriptive statistics are used to present quantitative
descriptions in a manageable form. In a research study, we may have
lots of measures. Or we may measure a large number of people on
one measure. Descriptive statistics help us to simplify large amounts
of data in a sensible way. Each descriptive statistic reduces lots of
data into a simpler summary. For instance, consider a simple number
used to summarize how well a batter is performing in baseball, the
batting average. This single number is simply the number of hits
divided by the number of times at bat (reported to three significant
digits). A batter who is hitting .333 is getting a hit one time in every
three at bats. One batting .250 is hitting one time in four. The single
number describes a large number of discrete events. For example,
we may describe the performance of students of a class in terms of
their average performance. Descriptive statistics provide a powerful
summary that may enable comparisons across people or other units.
Measures of Central Tendency : These include Mean, Median and
Mode which indicate the average value of the variable being studied
(Mean), the value above and below which lie 50% of the sample
values (Median) and the value which occurs the maximum number
of times in the sample (Mode). These help in determining the extent
of normality of the distribution of scores on the variable being
studied.
Measures of Variability : These include the standard deviation,
skewness and kurtosis. The standard deviation indicates the
deviation of each score from the Mean. Skewness indicates whether
majority of the scores lie to the left of the Mean (positively skewed),
to the right of the Mean (negatively skewed) or bell-shaped
(normally distributed). Kurtosis indicates whether the distribution of
the scores is flat (platykurtic), peaked (leptokurtic) or bell-shaped
(mesokurtic).
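As a minimal sketch, the following Python code computes these descriptive measures for a set of invented scores (the skewness and kurtosis functions are taken from the scipy library):

```python
import statistics
from scipy import stats  # for skewness and kurtosis

scores = [45, 52, 58, 60, 61, 61, 65, 70, 72, 80]   # hypothetical test scores

print("Mean    :", statistics.mean(scores))
print("Median  :", statistics.median(scores))
print("Mode    :", statistics.mode(scores))
print("SD      :", statistics.stdev(scores))         # sample standard deviation
print("Skewness:", stats.skew(scores))               # near 0 for a symmetric distribution
print("Kurtosis:", stats.kurtosis(scores))           # 0 for a mesokurtic (normal-shaped) distribution
```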
Fiduciary Limits : These indicate the interval (the fiduciary
limits) within which the Mean of the population will lie, at the 0.95 or
0.99 level of confidence. The fiduciary limits, or the confidence
interval of the population Mean, are estimated based on the sample
Mean. The sample Mean is known as the 'statistic' and the
population Mean is known as the 'parameter'. The estimation of the
population Mean requires the use of the Standard Error of the Mean.
Similarly, the fiduciary limits or the confidence interval of the
population standard deviation can also be computed.
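A minimal worked sketch, assuming a hypothetical sample Mean, SD and size: the fiduciary limits are M ± 1.96 × SEM at the 0.95 level and M ± 2.58 × SEM at the 0.99 level, where SEM = SD ÷ √N.

```python
import math

M, SD, N = 62.4, 9.8, 100          # hypothetical sample mean, SD and sample size
SEM = SD / math.sqrt(N)            # standard error of the Mean

for z, level in [(1.96, "0.95"), (2.58, "0.99")]:
    lower, upper = M - z * SEM, M + z * SEM
    print(f"{level} fiduciary limits: {lower:.2f} to {upper:.2f}")
```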
Graphical Presentation of Data : This includes bar diagrams, pie
charts and line graphs. The line graph is usually used to represent the
distribution of the scores obtained on a variable, with the objective of
indicating the shape of the distribution. Bar diagrams are used for
making comparisons of Mean scores on the variable being studied
across various sub-groups such as boys v/s girls, urban v/s rural,
private-aided v/s private-unaided v/s municipal schools, SSC v/s CBSE
v/s ICSE v/s IGCSE schools and so on. Pie charts are used to indicate
the proportion of different sub-groups in the sample, or the variance of a
specific variable associated with another variable.
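A minimal sketch of these three kinds of graph, using the matplotlib library and invented summary figures, is given below:

```python
import matplotlib.pyplot as plt

# Invented summary data for illustration.
groups = ["Boys", "Girls"]
mean_scores = [64.2, 67.8]
score_distribution = [2, 5, 12, 20, 26, 20, 10, 5]   # frequencies per class interval
proportions = [48, 52]                               # % of boys and girls in the sample

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].bar(groups, mean_scores)                     # bar diagram: compare group means
axes[0].set_title("Mean scores by group")
axes[1].plot(range(len(score_distribution)), score_distribution)   # line graph: shape of distribution
axes[1].set_title("Score distribution")
axes[2].pie(proportions, labels=groups, autopct="%1.0f%%")         # pie chart: sample composition
axes[2].set_title("Sample composition")
plt.tight_layout()
plt.show()
```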
Check Your Progress - II
Q.1 Explain the meaning of parametric and non-parametric
techniques of data analysis.
Q.2 State the conditions necessary for using parametric techniques
of data analysis.
Q.3 Which are the measures of central tendency and variability?
Why is it necessary to compute these?
Q.4 You want to (i) compare the Mean Academic Achievement
of boys and girls, (ii) show whether the Academic
Achievement scores of students are normally distributed or
not and (iii) show the proportion of boys and girls in the
total sample. State the graphical techniques to be used in
each of these cases.
10.3.5 Inferential Data Analysis
Descriptive data analysis only describes the data and the
characteristics of the sample in terms of statistics; its findings cannot
be generalised to the larger population. On the other hand, the
findings of inferential data analysis can be generalised to the larger
population.
10.3.6 Use of Excel in Data Analysis : MS-Excel is an
excellent tool for analysing data using statistical techniques, including
descriptive statistics such as the Mean, Median, Mode, SD, Skewness
and Kurtosis, and inferential techniques such as the t-test, ANOVA and
correlation. It also helps in presenting data graphically through bar
diagrams, line graphs and pie charts.
10.3.7 Concepts, Use and Interpretation of Statistical Techniques
A. Correlation : When the variables are in the interval or ratio
scale, correlation and regression coefficients are computed. A
Pearson product-moment correlation coefficient is a measure of
linear association between two variables in interval or ratio
scale. The measure, usually symbolised by the letter r, varies
from –1 to +1, with 0 indicating no linear association. The word
―correlation‖ is sometimes used in a non-specific way as a
synonym for ―association.‖ Here, however, the Pearson‘s
product-moment correlation coefficient is a measure of linear
association computed for a specific set of data on two variables.
For example, if a researcher wants to ascertain whether teachers‘
job satisfaction is related to their school climate, Pearson‘s
product-moment correlation coefficient could be computed for
teachers‘ scores on these two variables.
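As an illustration only (the paired scores below are invented), Pearson's r and the coefficient of determination can be computed with the scipy library as follows:

```python
from scipy import stats

# Hypothetical paired scores for ten teachers.
job_satisfaction = [32, 45, 38, 50, 41, 36, 48, 44, 39, 47]
school_climate   = [28, 42, 35, 49, 40, 30, 46, 45, 33, 44]

r, p_value = stats.pearsonr(job_satisfaction, school_climate)
print(f"r = {r:.2f}, p = {p_value:.4f}")
print(f"Coefficient of determination (100 r^2) = {100 * r**2:.1f}%")
```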
Interpretation of “r” : This takes into account four major
aspects as follows:
a. Level of significance (usually at 0.01 or 0.05 levels in
educational research).
b. Magnitude of ―r‖: In general, the following forms the
basis of interpreting the magnitude of the ―r‖:
No.   Value of "r"    Magnitude
1     0.00 - 0.20     Negligible
2     0.21 - 0.40     Low
3     0.41 - 0.60     Moderate
4     0.61 - 0.80     Substantial
5     0.81 - 1.00     Very High
c. Direction of "r" : The "r" could be positive, negative or
zero. (i) A positive "r" signifies that the relationship between the
two variables is direct, i.e. if the value of one variable is high, the
other is also expected to be high. For example, if there is a
substantial relationship between IQ and Academic Achievement of
students, it implies that the higher the IQ of students, the higher is
likely to be their Academic Achievement. (ii) A negative "r"
signifies that the relationship between the two variables is inverse,
i.e. if the value of one variable is high, the other is expected to be
low. For example, if there is a substantial negative relationship
between Anxiety and Academic Achievement of students, it implies
that the higher the Anxiety of students, the lower is likely to be their
Academic Achievement. (iii) The relationship between the two
variables could also be zero.
d. The Coefficient of Determination : It refers to the
percentage of variability in one variable that is associated
with variability in the other variable. It is computed using the
formula 100r². For example, if r = 0.60, then 100r² = 36, i.e. 36%
of the variability in one variable is associated with variability in the
other. The square of the correlation coefficient, expressed as 100r²,
is also a PRE (proportionate reduction in error) measure of
association.
B. t-test : A t-test is used to compare the Mean Scores obtained by
two groups on a single variable. It is also used when F-ratio in
ANOVA is found to be significant and the researcher wants to
compare the Mean scores of different groups included in the
ANOVA. It can also be used to compare the Mean Academic
Achievement of two groups such as (i) boys and girls or (ii)
Experimental and Control groups etc. The t-test was introduced
in 1908 by William Sealy Gosset, a chemist working in Ireland.
His pen name was "Student".
The assumptions on which the t-test is used are as follows:
(a) Data are normally distributed. This can be ascertained by using
normality tests such as the Shapiro-Wilk and Kolmogorov-Smirnov
tests.
(b) Equality of variances, which can be tested by using the F-test or
the more robust Levene's test, Bartlett's test or the Brown-Forsythe
test.
(c) Samples may be independent or dependent, depending on the
hypothesis and the type of samples. For the inexperienced
researcher, the most difficult issue is often whether the samples
are independent or dependent. Independent samples are usually
two randomly selected groups unrelated to each other, such as
boys and girls or private-aided and private-unaided schools in a
causal-comparative research. On the other hand, dependent
samples are either two groups matched (or a "paired" sample)
on some variable (for example, IQ or SES) or are the same
people being tested twice (called repeated measures as is done
in an experimental design).
Dependent t-tests can also be used for matched-pair samples,
where two groups are matched on a particular variable. For example,
if we examined the IQ of twins, the two groups are matched on
genetics. This would call for a dependent t-test because it is a paired
sample (one child/twin paired with one another child/twin).
However, when we compare 100 boys and 100 girls, with no
relationship between any particular boy and any particular girl, we
would use an independent samples test. Another example of a
matched sample would be to take two groups of students, match
each student in one group with a student in the other group based on
IQ and then compare their performance on an achievement test.
Alternatively, we assign students with low scores and students with
high scores in two groups and assess their performance on an
achievement test independently. An example of a repeated measures
t-test would be if one group were pre-tested and post-tested as is
done in educational research quite often especially in experiments. If
a teacher wanted to examine the effect of a new set of textbooks on
student achievement, he could test the class at the beginning of the
treatment (pre-test) and at the end of the treatment (post-test). A
dependent t-test would be used, treating the pre-test and post-test as
matched variables (matched by student).
Types of t-test :
(a) Independent one-sample t-test : Suppose in a research, on the
basis of past research, the researcher finds that the Mean
Academic Achievement of students in Mathematics is 50 with a
SD of 12. If the researcher wants to know whether this year‘s
students‘ Academic Achievement in Mathematics is typical, he
takes μo = 50 which is the population Mean and S = 12 where S
is the population SD. Suppose N = 36. In testing the null
hypothesis that the population mean is equal to a specified value
μ0, one uses the statistic t, the formula for which is as follows:
t = (M − μ0) ÷ (S / √N)
where S is the population standard deviation and N is the sample
size. The degrees of freedom used in this test is N − 1.
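A minimal Python sketch of this one-sample case, using the figures above and randomly generated hypothetical scores, is:

import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
scores = rng.normal(loc=53, scale=12, size=36)   # hypothetical Mathematics scores, N = 36

t, p = ttest_1samp(scores, popmean=50)           # H0: the population mean is 50
print(f"t({scores.size - 1}) = {t:.2f}, p = {p:.3f}")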
(b) Independent two-sample t-test : This is of the following three
types:
(i) Equal sample sizes, equal variance : This test is only used when
both the samples are of the same size i.e. N1 = N2 = N and when
it can be assumed that the two distributions have the same
variance. Its formula is as follows:
Standard Error of the Difference between Means (SED)
= √[(σ1² + σ2²) ÷ N]
t = (M1 – M2) ÷ SED
Where,
M1= Mean of Group 1
M2= Mean of Group 2
σ1= SD of Group 1
σ2= SD of Group 2
(ii) Unequal sample sizes, equal variance : This test is used only
when it can be assumed that the two distributions have the same
variance. The t statistic to test whether the means are different
can be calculated as follows:
Standard Error of the Difference between Means (SED)
= √(σ1² ÷ N1 + σ2² ÷ N2)
t = (M1 – M2) ÷ SED
Where,
M1= Mean of Group 1
M2= Mean of Group 2
σ1= SD of Group 1
σ2= SD of Group 2
(iii) Unequal sample sizes, unequal variance : This test is used only
when the two sample sizes are unequal and the variance is
assumed to be different. The t statistic to test whether the
means are different can be calculated as follows:
Standard Error of the Difference between Means (SED)
= √(σ1² ÷ N1 + σ2² ÷ N2)
t = (M1 – M2) ÷ SED
However, the tabulated t (tt) against which the obtained t (to)
is compared in order to test its significance is calculated differently
using the following formula :
Suppose for Sample 1, for df = N1 − 1, tabulated t (tt) at 0.05 level of
significance = x
and
for Sample 2, for df = N2 − 1, tabulated t (tt) at 0.05 level of significance
= y.
SE1 = Standard Error of M1
SE2 = Standard Error of M2
Corrected tabulated t (tt) at 0.05 level of significance
= [(SE1² × x) + (SE2² × y)] ÷ (SE1² + SE2²)
Where,
M1= Mean of Group 1
M2= Mean of Group 2
σ1= SD of Group 1
σ2= SD of Group 2
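A minimal Python sketch of the independent two-sample t-test on two groups of hypothetical scores is shown below; setting equal_var=False in scipy gives the unequal-variance (Welch) version that corresponds to case (iii).

from scipy.stats import ttest_ind

group1 = [55, 60, 48, 62, 57, 53, 59, 64]   # hypothetical scores, Group 1
group2 = [50, 47, 52, 45, 49, 54, 46, 51]   # hypothetical scores, Group 2

t_eq, p_eq = ttest_ind(group1, group2)                  # assumes equal variances
t_w, p_w = ttest_ind(group1, group2, equal_var=False)   # does not assume equal variances

print(f"Equal-variance t = {t_eq:.2f}, p = {p_eq:.3f}")
print(f"Welch t = {t_w:.2f}, p = {p_w:.3f}")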
(c) Dependent t-test for paired samples : This test is used when the
samples are dependent; that is, when there is only one sample
that has been tested twice (repeated measures) or when there are
two samples that have been matched or "paired" as is usually
done in experimental research. Its formula is as follows:
Standard Error of the Mean of Group 1 = σ1 ÷ √N1 = SEM1
Standard Error of the Mean of Group 2 = σ2 ÷ √N2 = SEM2
Standard Error of the Difference between Means (SED)
= √(SEM1² + SEM2² − 2r × SEM1 × SEM2)
t = (M1 – M2) ÷ SED
Where,
M1= Mean of Pre-test Scores
M2= Mean of Post-test Scores
σ1= SD of Group 1
σ2= SD of Group 2
r = Coefficient of Correlation between Pre-test and Post-test
Scores
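A minimal Python sketch of the dependent (paired) t-test on hypothetical pre-test and post-test scores of the same students is:

from scipy.stats import ttest_rel

pre_test = [42, 38, 45, 50, 36, 44, 40, 47]    # hypothetical pre-test scores
post_test = [48, 41, 47, 56, 40, 49, 43, 52]   # hypothetical post-test scores (same students)

t, p = ttest_rel(pre_test, post_test)          # H0: the mean of the paired differences is 0
print(f"t({len(pre_test) - 1}) = {t:.2f}, p = {p:.3f}")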
Alternatives to the t test
The t test can be used to test the equality of the means of two
normal populations with unknown, but equal, variance. To relax the
normality assumption, a non-parametric alternative to the t test can
be used and the usual choices are (a) for independent samples, the
Mann-Whitney U test and (b) for related samples, either the
binomial test or the Wilcoxon signed-rank test. To test the equality of
the means of more than two normal populations, an analysis of
variance can be performed.
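A minimal Python sketch of these non-parametric alternatives (hypothetical scores) is:

from scipy.stats import mannwhitneyu, wilcoxon

group1 = [55, 60, 48, 62, 57, 53]   # hypothetical independent groups
group2 = [50, 47, 52, 45, 49, 54]
pre = [42, 38, 45, 50, 36, 44]      # hypothetical related (paired) scores
post = [48, 41, 47, 56, 40, 49]

u, p_u = mannwhitneyu(group1, group2)   # independent samples
w, p_w = wilcoxon(pre, post)            # related samples
print(f"Mann-Whitney U = {u:.1f}, p = {p_u:.3f}")
print(f"Wilcoxon W = {w:.1f}, p = {p_w:.3f}")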
z-test : It is used to compare two coefficients of correlation. For
example, a researcher studies the relationship between job
satisfaction and school climate among two groups, viz., male and
female teachers. Further, he may want to ascertain whether this
relationship differs among male and female teachers. In this case, he
will have two coefficients of correlation, r1 for male teachers and r2
for female teachers for the variables of job satisfaction and school
climate. In such a case, z-test is used the formula for which is as
follows:
z = (z1 − z2) ÷ √[1 ÷ (N1 − 3) + 1 ÷ (N2 − 3)]
where z1 and z2 are the Fisher's z transformations of r1 and r2
respectively, and N1 and N2 are the sizes of the two groups.
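A minimal Python sketch of this comparison, with hypothetical r values and group sizes, is:

import math

def compare_correlations(r1, n1, r2, n2):
    """z-test for the difference between two independent correlations."""
    z1 = math.atanh(r1)   # Fisher's z transformation of r1
    z2 = math.atanh(r2)   # Fisher's z transformation of r2
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

z = compare_correlations(r1=0.62, n1=80, r2=0.45, n2=95)   # hypothetical figures
print(f"z = {z:.2f}")     # compare with 1.96 at the 0.05 level of significance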
ANOVA : Analysis of variance (ANOVA) is used for comparing
more than two groups on a single variable. It is a collection of
statistical models and their associated procedures, in which the
observed variance is partitioned into components due to different
explanatory variables. In its simplest form ANOVA gives a
statistical test of whether the means of several groups are all equal,
and therefore generalizes Student‘s two-sample t-test to more than
two groups.
There are three conceptual classes of such models:
1. Fixed-effects model assumes that the data came from
normal populations which may differ only in their
means. The fixed-effects model of analysis of variance
applies to situations in which the experimenter applies
several treatments to the subjects of the experiment to
see if the response variable‘s values change. This
allows the experimenter to estimate the ranges of
response variable values that the treatment would
generate in the population as a whole.
2. Random-effects model assumes that the data describe a
hierarchy of different populations whose differences are
constrained by the hierarchy. Random effects models are used
when the treatments are not fixed. This occurs when the
various treatments (also known as factor levels) are sampled
from a larger population. Because the treatments themselves
are random variables, some assumptions and the method of
contrasting the treatments differ from those of the fixed-effects ANOVA.
3. Mixed-effect model describes situations where both fixed and
random effects are present.
Types of ANOVA : These are as follows :
(a) One-way ANOVA : It is used to test for differences among
two or more independent groups. Typically, however, the
one-way ANOVA is used to test for differences among three
or more groups, since the two-group case can be covered by
Student's t-test. When there are only two means to compare,
the t-test and the F-test are equivalent with F = t². For
example, a researcher wants to compare students‘ attitude
towards the school on the basis of school types (SSC, CBSE
and ICSE). In this case, there is one dependent variable,
namely, attitude towards the school and three groups, namely,
SSC, CBSE and ICSE schools. Here, the one-way ANOVA is
used to test for differences in students‘ attitude towards the
school among the three groups.
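A minimal Python sketch of this one-way ANOVA on hypothetical attitude scores for the three boards is:

from scipy.stats import f_oneway

ssc = [62, 58, 71, 66, 59, 64]     # hypothetical attitude scores, SSC schools
cbse = [68, 73, 65, 70, 74, 69]    # CBSE schools
icse = [60, 57, 63, 59, 61, 58]    # ICSE schools

f, p = f_oneway(ssc, cbse, icse)   # H0: all three group means are equal
print(f"F = {f:.2f}, p = {p:.3f}")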
(b) One-way ANOVA for repeated measures : It is used when
the subjects are subjected to repeated measures. This means
that the same subjects are used for each treatment. Note that
this method can be subject to carryover effects. This
technique is often used in experimental research in which we
want to compare three or more groups on one dependent
variable which is measured twice i.e. as pre-test and post-test.
(c) Two-way ANOVA : It is used when the researcher wants to
study the effects of two or more independent or treatment
variables. It is also known as factorial ANOVA. The most
commonly used type of factorial ANOVA is the 2×2 (read as
"two by two", as you would a matrix) design, where there are
two independent variables and each variable has two levels or
distinct values. Two-way ANOVA can also be multi-level
such as 3×3, etc. or higher order such as 2×2×2, etc. but
analyses with higher numbers of factors are rarely done by
hand because the calculations are lengthy. However, since the
introduction of data analytic software, the utilization of
higher order designs and analyses has become quite common.
For example, a researcher wants to compare students‘ attitude
towards the school on the basis of (i) school types (SSC,
CBSE and ICSE) and (ii) gender. In this case, there is one
dependent variable, namely, attitude towards the school and
two independent variables, viz., (i) school types including
three levels, namely, SSC, CBSE and ICSE schools and (ii)
gender including two levels, namely, boys and girls. Here, the
two-way ANOVA is used to test for differences in students‘
attitude towards the school on the basis of (i) school types
and (ii) gender. This is an example of 3×2 two-way ANOVA
as there are three levels of school types, namely, SSC, CBSE
and ICSE schools and two levels of gender, namely, boys and
girls. It is known as two-way ANOVA as it involves
comparing one dependent variable (attitude towards the
school) on the basis of two independent variables, viz., (i)
school types and (ii) gender.
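A minimal Python sketch of such a 3×2 design, assuming the pandas and statsmodels libraries are available, is given below; the data frame and all its values are hypothetical.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "attitude": [62, 58, 71, 66, 68, 73, 65, 70, 60, 57, 63, 59],   # hypothetical scores
    "board":    ["SSC"] * 4 + ["CBSE"] * 4 + ["ICSE"] * 4,
    "gender":   ["boy", "girl"] * 6,
})

model = ols("attitude ~ C(board) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects of board and gender plus their interaction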
(d) MANOVA: When one wants to compare two or more
independent groups in which the sample is subjected to
repeated measures such as pre-test and post-test in an
experimental study, one may perform a factorial mixed-design ANOVA i.e. Multivariate Analysis of Variance or
MANOVA in which one factor is a between-subjects variable
and the other is within-subjects variable. This is a type of
mixed-effect model. It is used when there is more than one
dependent variable.
(e) ANCOVA : While comparing two groups on a dependent
variable, if it is found that they differ on some other variable
such as their IQ, SES or pre-test, it is necessary to remove
these initial differences. This can be done through using the
technique of ANCOVA.
Assumptions of Using ANOVA
1. Independence of cases.
2. Normality of the distributions in each of the groups.
3. Equality or homogeneity of variances, known as
homoscedasticity i.e. the variance of data in groups should be the
same. Levene‘s test for homogeneity of variances is typically
used to confirm homoscedasticity. The Kolmogorov-Smirnov or
the Shapiro-Wilk test may be used to confirm normality.
According to Lindman, the F-test is unreliable if there are deviations
from normality whereas Ferguson and Takane claim that the F-test is robust. The Kruskal-Wallis test is a non-parametric
alternative which does not rely on an assumption of normality.
These together form the common assumption that the errors are
independently, identically, and normally distributed for fixed
effects models.
Critical Ratio for Comparison of Percentages : This technique is
used when the researcher wants to compare two percentages. Its
formula is as follows :
(i) For uncorrelated percents :
CR = (P1 − P2) ÷ (SE of the difference between percentages)
Where,
P1 = Percentage occurrence of observed behaviour in Group 1
P2 = Percentage occurrence of observed behaviour in Group 2
P = (N1P1 + N2P2) ÷ (N1+ N2)
P = Percent occurrence of observed behaviour
Q = 1 − P
SE of difference between percentages = √ [PQ(1/ N1 + 1/ N2)]
(ii) For correlated percents :
CR = (P1 − P2) ÷ (SE of the difference between percentages)
Where,
P1 = Percentage occurrence of observed behaviour in Group 1
P2 = Percentage occurrence of observed behaviour in Group 2
P = (N1P1 + N2P2) ÷ (N1+ N2)
P = Percent occurrence of observed behaviour
Q = 1 − P
SE of difference between percentages
= √[(SE of P1)² + (SE of P2)² − 2r × (SE of P1) × (SE of P2)]
The obtained CR is compared with tabulated CR given in
statistical table to test its significance.
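A minimal Python sketch of the uncorrelated case, formula (i), with hypothetical percentages expressed as proportions (so that Q = 1 − P), is:

import math

def critical_ratio(p1, n1, p2, n2):
    """CR for the difference between two independent proportions."""
    p = (n1 * p1 + n2 * p2) / (n1 + n2)   # pooled proportion P
    q = 1 - p                             # Q = 1 - P
    se = math.sqrt(p * q * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

cr = critical_ratio(p1=0.64, n1=120, p2=0.52, n2=140)   # hypothetical figures
print(f"CR = {cr:.2f}")   # compare with 1.96 at the 0.05 level of significance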
Chi-square (Equal Probability and Normal Probability
Hypothesis): A chi-square test (also chi-squared or χ2 test) is any
statistical test in which the test statistic has a chi-squared distribution
when the null hypothesis is true, or any distribution in which the
probability distribution of the test statistic (assuming the null
hypothesis is true) can be made to approximate a chi-square
distribution as closely as desired by making the sample size large
enough. Chi-square is a statistical test commonly used to compare
observed data with data we would expect to obtain according to a
specific hypothesis. For example, if a researcher expects parents‘
attitude towards sex education to be provided in schools to be
normally distributed, then he might want to know about the
"goodness to fit" between the observed and expected results i.e.
whether the deviations (differences between observed and expected)
278
the result of chance, or whether they are due to other factors. The
chi-square test tests the null hypothesis, which states that there is no
significant difference between the expected and observed result. If it
is assumed that the expected frequencies are equally distributed in
all the cells, the chi-square test is known as equal distribution
hypothesis. On the other hand, if it is assumed that the frequencies
are expected to be distributed normally, the chi-square test is known
as normal distribution hypothesis. The chi-square (χ²) test measures
the alignment between two sets of frequency measures. These must
be categorical counts and not percentages or ratios measures.
Thus, Chi-square: χ² = Σ[(fo − fe)² ÷ fe]
where,
fo = observed frequency and
fe = expected frequency.
It may be noted that the expected values may need to be
scaled to be comparable to the observed values. A simple test is that
the total frequency/count should be the same for observed and
expected values. In a table, the expected frequency, if not known,
may be estimated as: fe = (row total) × (column total) ÷ N, where N
is the grand total of all the frequencies in the table. The obtained Chi
Square is compared with that given in the Chi Square table to
determine whether the comparison shows significance.
In a table, the degrees of freedom are computed as follows :
df = (R - 1) × (C - 1),
Where R = number of rows
C = number of columns.
Chi-square indicates whether there is a significant association
between variables, but it does not indicate just how significant and
important this is.
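A minimal Python sketch of a chi-square test on a 2×2 contingency table of hypothetical observed counts is given below; scipy computes the expected frequencies as (row total × column total) ÷ N and the degrees of freedom as (R − 1) × (C − 1).

from scipy.stats import chi2_contingency

observed = [[30, 20],   # hypothetical counts: boys, favourable / unfavourable
            [18, 32]]   # girls, favourable / unfavourable

# correction=False matches the raw formula above (no Yates continuity correction)
chi2, p, df, expected = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.2f}, df = {df}, p = {p:.3f}")
print("expected frequencies:", expected)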
Check Your Progress – II
Suggest appropriate statistical technique of data analysis in the
following cases :
(a) You want to find out the relationship between academic
achievement and motivation of students.
(b) You want to compare the academic achievement of boys and
girls :
(c) You are conducting a survey of teachers‘ opinion about
admission criteria for junior college admissions and want to
know whether their opinion is favourable or not :
(d) You want to compare the academic achievement of students
from private-aided, private-unaided and municipal schools.
(e) You want to compare the percentages of girls and boys
enrolling for secondary school :
10.4 TESTING OF HYPOTHESIS :
There are a number of ways in which the testing of the
hypothesis may bear on a theory. According to Wallace, it can
(i) Lend confirmation to the theory by not disconfirming it.
(ii) Modify the theory by disconfirming it, but not at a crucial
point; or
(iii) Overthrow the theory by disconfirming it at a crucial point
in its logical structure, or in its competitive value as
compared with rival theories.
Methods of Testing of Hypothesis
There are three major methods of testing of hypothesis as
follows:
1. Verification : The best test of a hypothesis is to verify whether
the inferences reached from the propositions are consistent with
the observed facts. Verification is of two types as follows: (a)
Direct Verification by Observation or Experimentation and (b)
Indirect Verification by deducing consequences from the
supposed cause and comparing them with the facts of experience.
This necessitates the application of the principle of deduction.
2. Experimentum Crucis : This is known as crucial instance or
confirmatory test. When a researcher is confronted with two
equally competent but contradictory hypotheses, he needs one
instance which explains the phenomenon and helps in accepting
any one of the hypotheses. When this is done through an
experiment, the experiment is known as ‗Experimentum Crucis‘.
3. Consilience of Inductions : This refers to the power which a
hypothesis has of ‗explaining and determining cases of a kind
different from those which were contemplated in the formation of
hypothesis‘. In other words, the hypothesis is accepted and its
value is greatly enhanced when it is found to explain other facts
also in addition to those facts which it was initially designed to
explain.
Errors in Testing of Hypothesis:
A researcher tests the null hypothesis using some statistical
technique. Based on the test of statistical significance he / she
accepts or rejects the null hypothesis and thereby either rejects or
accepts the research hypothesis respectively.
If the null hypothesis is true and is accepted or when it is false
and is rejected, the decisions taken are correct.
However, error in testing of hypothesis occurs under the
following two situations:
(i) If the null hypothesis (H0) is true but is rejected and
(ii) If the null hypothesis (H0) is false but is accepted.
The former is the example of Type I error while the latter is the
example of Type II error in testing of hypothesis.
Type I error occurs when a true null hypothesis is rejected. It is
also known as Alpha (α) error.
Type II error occurs when a false null hypothesis is accepted. It is
also known as Beta (β) error.
This is shown in the following table :

                          Possible Outcomes
Possible Situations       Accept H0                    Reject H0
H0 True                   Correct Decision             Type I error (α error)
H0 False                  Type II error (β error)      Correct Decision
When the sample size N is fixed, if we try to reduce Type I error,
the chances of making Type II error increase. Both types of errors
cannot be reduced simultaneously. More about this will be discussed
in the section on statistical analysis of data.
Check Your Progress – III
(a) What are the major methods of hypothesis testing ?
(b) What are the different types of errors in the hypothesis testing?
Suggested Readings
1. Guilford, J. P. and Fruchter, B. Fundamental Statistics in
Psychology and Education. Singapore: McGraw-Hill Book
Company. 1981.
2. Best, J. and Kahn, J. Research in Education (9th ed.). New Delhi:
Prentice Hall of India Pvt. Ltd. 2006.
3. Garrett, H. E. Statistics in Psychology and Education (5th ed.). New
York: Longmans, Green and Co. 1958.
11
QUALITATIVE DATA ANALYSIS
Unit Structure:
11.0 Objectives
11.1 Introduction
11.2 Qualitative Data Analysis
Data Reduction and Classification
Analytical Induction
Constant Comparison
11.0 OBJECTIVES :
After reading this unit the student will be able to :
Explain the meaning of qualitative data analysis.
State the broad focus of qualitative research.
State the specific research questions usually formulated in
qualitative research.
State the principles and characteristics of qualitative data
analysis.
Explain the strategies of qualitative data analysis.
11.1 INTRODUCTION :
Meaning: Qualitative data analysis is the array of processes and
procedures whereby a researcher provides explanations,
understanding and interpretations of the phenomenon under study on
the basis of meaningful and symbolic content of qualitative data. It
provides ways of discerning, examining, comparing and contrasting
and interpreting meaningful patterns and themes. Meaningfulness is
determined by the specific goals and objectives of the topic at hand
wherein the same set of data can be analysed and synthesised from
multiple angles depending on the research topic. It is based on the
interpretative philosophy. Qualitative data are subjective, soft, rich
and in-depth descriptions usually presented in the form of words.
The most common forms of obtaining qualitative data include semistructured and unstructured interviews, observations, life histories
and documents. The process of analysis is difficult and rigorous.
Broad Focus of Qualitative Research : These include finding
answers to the following questions:
• What is the interpretation of the world from the participants' perspective?
• Why do they have a particular perspective?
• How did they develop such a perspective?
• What are their activities?
• How do they identify and classify themselves and others?
• How do they convey their perspective of their situation?
• What patterns and common themes surface in participants' responses dealing with specific items? How do these patterns shed light on the broader study questions?
• Are there any deviations from these patterns? If so, are there any factors that might explain these atypical responses?
• What stories emerge from these responses? How do these stories help in illuminating the broader study questions?
• Do any of these patterns or findings suggest additional data that may be required? Do any of the study questions need to be revised?
• Do the patterns that emerge substantiate the findings of any corresponding qualitative analyses that have been conducted? If not, what might explain these discrepancies?
Specific Research Questions Usually Formulated in Qualitative
Research
• What time did school start in the morning?
• What would students probably have to do before going to school?
• What was the weather like in the month of data collection?
• How did the students get from home to school?
• What were the morning activities?
• What did the students do in the recess?
• What were the activities after recess?
• What was the classroom environment like?
• What books and instructional materials were used?
• At what time did the school get over?
• What would students probably have to do after school got over?
• What kind of homework was given and how much time did it require?
Before describing the process of qualitative data analysis, it is
necessary to describe the terms associated with the process.
Principles of Qualitative Data Analysis
These are as follows :
1 Proceeding systematically and rigorously (minimise human
error).
2 Recording process, memos, journals, etc.
3 Focusing on responding to research questions.
4 Identifying appropriate level of interpretation suitable to a
situation.
5 Simultaneous process of inquiry and analysis.
6 Seeking to explain or enlighten.
7 Evolutionary/emerging.
Characteristics of Qualitative Data Analysis:
According to Seidel, the process has the following characteristics :
a. Iterative and Progressive: The process is iterative and
progressive because it is a cycle that keeps repeating. For
example, if you are thinking about things, you also start noticing
new things in the data. You then collect and think about these
new things. In principle the process is an infinite spiral.
b. Recursive : The process is recursive because one part can call
you back to a previous part. For example, while you are busy
collecting things, you might simultaneously start noticing new
things to collect.
c. Holographic : The process is holographic in that each step in the
process contains the entire process. For example, when you first
notice things, you are already mentally collecting and thinking
about those things.
11.2 COMPONENTS OF QUALITATIVE DATA ANALYSIS :
According to Miles and Huberman, following are the
major components of qualitative data analysis :
(A) Data Reduction : "Data reduction refers to the process of
selecting, focusing, simplifying, abstracting, and transforming the
data that appear in written up field notes or transcriptions." First, the
mass of data has to be organized and somehow meaningfully
reduced or reconfigured. These data are condensed so as to make
them more manageable. They are also transformed so that they can
be made intelligible in terms of the issues being addressed. Data
reduction often forces choices about which aspects of the
accumulated data should be emphasised, reduced or set aside
completely for the purposes of the topic at hand. Data in themselves
do not reveal anything and hence it is not necessary to present a
large amount of unassimilated and uncategorized data for the
reader's consumption in order to show that you are "perfectly
objective". In qualitative analysis, the researcher uses the principle
of selectivity to determine which data are to be singled out for
description. This usually involves some combination of deductive
and inductive analysis. While initial categorizations are shaped by
pre-established research questions, the qualitative researcher should
remain open to inducing new meanings from the data available. Data
reduction should be guided primarily by the need to address the
salient question(s) in a research. This necessitates selective
winnowing/sifting which refers to removing data from a group so
that only the best ones which are relevant for answering particular
research questions are left. This is difficult as not only qualitative
data are very rich but also because the person who analyses the data
also often plays a direct, personal role in collecting them. The
process of data reduction starts with a focus on distilling what the
different respondents report about the activity, practice or
phenomenon under study to share knowledge. The information given
by various categories of sample is now compared – such as the
information given by experienced and new teachers or the
information given by teachers, principal, students and/or parents
about central themes of the research. In setting out these similarities
and dissimilarities, it is important not to so "flatten" or reduce the
data that they sound like close-ended survey responses. The
researcher should ensure that the richness of the data is not unfairly
and unnecessarily diluted. Apart from exploring the specific content
of the respondents' views, it is also a good idea to take note of the
relative frequency with which different issues are raised, as well as
the intensity with which they are expressed.
(B) Data Display :
Data display provides "an organized,
compressed assembly of information that permits conclusion
drawing..." A display can be an extended piece of text or a diagram,
chart or matrix that provides a new way of arranging and thinking
about the more textually embedded data. Data displays permit the
researcher to extrapolate from the data enough to begin to identify
systematic patterns and interrelationships. At the display stage,
additional, higher order categories or themes may emerge from the
data that go beyond those first discovered during the initial process
of data reduction. Data display can be extremely helpful in
identifying whether a system is working effectively and how to
change it. The qualitative researcher needs to discern patterns
among various concepts so as to gain a clear understanding of the
topic at hand. Data could be displayed using a series of flow charts
that map out any critical paths, decision points, and supporting
evidence that emerge from establishing the data for each site. The
researcher may (1) use the data from subsequent sites to modify the
original flow chart of the first site, (2) prepare an independent flow
chart for each site; and/or (3) prepare a single flow chart for some
events (if most sites adopted a generic approach) and multiple flow
charts for others.
(C) Conclusion Drawing and Verification : Conclusion drawing
requires a researcher to begin to decide what things mean. He does
this by noting regularities, patterns (differences/similarities),
explanations, possible configurations, causal flows, and
propositions. This process involves stepping back to consider what
the analysed data mean and to assess their implications for the
questions at hand. Verification, integrally linked to conclusion
drawing, entails revisiting the data as many times as necessary to
cross-check or verify these emergent conclusions. Miles and
Huberman assert that "The meanings emerging from the data have to
be tested for their plausibility, their sturdiness, their ‗confirmability‘
- that is, their validity". Validity in this context refers to whether the
conclusions being drawn from the data are credible, defensible,
warranted, and able to withstand alternative explanations. When
qualitative data are used with the intention of identifying
dimensions/aspects of a concept for designing/developing a
quantitative tool, this step may be postponed. Reducing the data and
looking for relationships will provide adequate information for
developing other instruments.
Miles and Huberman describe several tactics of
systematically examining and re-examining the data including noting
patterns and themes, clustering cases, making contrasts and
comparisons, partitioning variables and subsuming particulars in the
general which can be employed simultaneously and iteratively for
drawing conclusions in a qualitative research. This process is
facilitated if the theoretical or logical assumptions underlying the
research are stated clearly. They further identify 13 tactics for testing
or confirming findings, all of which address the need to build
systematic "safeguards against self-delusion" into the process of
analysis.
THE PROCEDURES OF QUALITATIVE DATA ANALYSIS
These are as follows:
1 Coding/indexing
2 Categorisation
3 Abstraction
4 Comparison
5 Dimensionalisation
6 Integration
7 Iteration
8 Refutation (subjecting inferences to scrutiny)
9 Interpretation (grasp of meaning - difficult to describe
procedurally)
Steps of Qualitative Data Analysis
The Logico-Inductive process of data analysis is as follows;
• Analysis is logico-inductive.
• Data are mostly verbal.
• Observations are made of behaviours, situations, interactions, objects and environment.
• Becoming familiar with the data.
• Data are examined in depth to provide detailed descriptions of the setting, participants and activity (describing).
• Coding pieces of data.
• Grouping them into potential themes (classifying) which are identified from observations through (reading / memoing).
• Themes are clustered into categories.
• Categories are scrutinised to discover patterns.
• Explanations are made from patterns.
• Interpreting and synthesizing the organised data into general written conclusions or understandings based on what is observed and are stated verbally (interpreting).
• These conclusions are used to answer research questions.
Terms associated with Qualitative Data Analysis:
• Data : It is the information obtained in the form of words.
• Category : It is a classification of ideas and concepts. When concepts in the data are examined and compared with one another and connections are made, categories are formed. Categories are used to organise similar concepts into distinct groups.
• Pattern : It is a link or the relationship between two or more categories that further organises the data and that usually becomes the primary basis of organising and reporting the outcomes of the study. Pattern seeking means examining the data in as many ways as possible through understanding the complex links between situations, processes, beliefs and actions.
Qualitative data analysis is predominantly an inductive
process of organizing data into categories and identifying patterns
(relationships) among the categories.
Types of Codes Usually Used in Educational Research :
Seidel identifies three major types of codes in qualitative
analysis of data:
1. Descriptive Coding : This is when coding is used to describe
what is in the data.
2. Objectivist Coding : According to Seidel and Kelle, an
objectivist approach treats code words as ―condensed
representation of the facts described in the data‖. Given this
assumption, code words can be treated as substitutes for the
text and the analysis can focus on the codes instead of the text
itself. You can then imitate traditional distributional analysis
and hypothesis testing for qualitative data. But first you must
be able to trust your code words. To trust a code word you
need: 1) to guarantee that every time you use a code word to
identify a segment of text that segment is an unambiguous
instance of what that code word represents, 2) to guarantee
that you applied that code word to the text consistently in the
traditional sense of the concept of reliability, and 3) to
guarantee that you have identified every instance of what the
code represents. If the above conditions are met, then: 1) the
codes are adequate surrogates for the text they identify, 2) the
text is reducible to the codes, and 3) it is appropriate to
analyze relationships among codes. If you fall short of
meeting these conditions then an analysis of relationships
among code words is risky business.
3. Heuristic Coding : In a heuristic approach, code words are
primarily flags or signposts that point to things in the data.
The role of code words is to help you collect the things you
have noticed so you can subject them to further analysis.
Heuristic codes help you reorganize the data and give you
different views of the data. They facilitate the discovery of
things, and they help you open up the data to further intensive
analysis and inspection. The burdens placed on heuristic
codes are much less than those placed on objective codes. In a
heuristic approach code words more or less represent the
things you have noticed. You have no assurance that the
things you have coded are always the same type of thing, nor
that you have captured every possible instance of that thing in
your coding of the data. This does not absolve you of the
responsibility to refine and develop your coding scheme and
your analysis of the data. Nor does it excuse you from
looking for ―counter examples‖ and ―confirming examples‖
in the data. The heuristic approach does say that coding the
data is never enough. It is the beginning of a process that
requires you to work deeper and deeper into your data.
Further, heuristic code words change and evolve as the
analysis develops. The way you use the same code word
changes over time. Text coded at time one is not necessarily
equivalent with text coded at time two. Finally, heuristic code
words change and transform the researcher who, in turn,
changes and transforms the code words as the analysis
proceeds.
Bogdan and Biklen (1998) provide common types of coding
categories, but emphasize that your hypotheses shape your coding
scheme.
Setting/Context codes provide background information on the setting, topic, or subjects.
1. Defining the Situation codes categorize the world view of respondents and how they see themselves in relation to a setting or your topic.
2. Respondent Perspective codes capture how respondents define a particular aspect of a setting. These perspectives may be summed up in phrases they use, such as, "Say what you mean, but don't say it mean."
3. Respondents' Ways of Thinking about People and Objects codes capture how they categorize and view each other, outsiders, and objects. For example, a dean at a private school may categorize students: "There are crackerjack kids and there are junk kids."
4. Process codes categorize sequences of events and changes over time.
5. Activity codes identify recurring informal and formal types of behaviour.
6. Event codes, in contrast, are directed at infrequent or unique happenings in the setting or lives of respondents.
7. Strategy codes relate to ways people accomplish things, such as how instructors maintain students' attention during lectures.
8. Relationship and social structure codes tell you about alliances, friendships, and adversaries as well as about more formally defined relations such as social roles.
9. Method codes identify your research approaches, procedures, dilemmas, and breakthroughs.
Check Your Progress - I
(a) State the components of qualitative data analysis.
(b) Which are the different types of codes used in qualitative
research in education?
(c) What are the terms associated with qualitative data analysis?
(d) Explain the steps of qualitative data analysis.
STRATEGIES OF QUALITATIVE DATA ANALYSIS:
Some of these are as follows:
A. Analytical Induction : Analytic induction is a way of building
explanations in qualitative analysis by constructing and testing a set
of causal links between events, actions etc. in one case and the
iterative extension of this to further cases. It is a research logic used to
collect data, develop analysis and organise the presentation of research
findings. It refers to a systematic and exhaustive examination of a
limited number of cases in order to provide generalisations and
identify similarities between various social phenomena in order to
develop concepts or ideas. Its formal objective is causal explanation.
It has its origin in the theory of symbolic interaction which stipulates
that a person‘s actions are built up and evolve over time through
processes of learning, trial-and-error and adjustment to responses by
others. This helps in searching for broad categories followed by
development of subcategories. If no relevant similarities can be
identified, then either the data needs to be re-evaluated and the
definition of similarities changed, or the category is too wide and
heterogeneous and should be narrowed down. In analytical
induction, definitions of terms are not identified/determined at the
beginning of research. They are rather, considered hypotheses to be
tested using inductive reasoning. It allows for modification of
concepts and relationships between concepts aimed at representing
reality of the situation most accurately.
According to Katz, "Analytic induction (AI) is a research
logic used to collect data, develop analysis, and organize the
presentation of research findings. Its formal objective is causal
explanation, a specification of the individually necessary and jointly
sufficient conditions for the emergence of some part of social life.
AI calls for the progressive redefinition of the phenomenon to be
explained (the explanandum) and of explanatory factors (the
explanans), such that a perfect (sometimes called "universal")
relationship is maintained. Initial cases are inspected to locate
common factors and provisional explanations. As new cases are
examined and initial hypotheses are contradicted, the explanation is
reworked in one or both of two ways. The definition of the
explanandum may be redefined so that troublesome cases either
become consistent with the explanans or are placed outside the scope
of the inquiry; or the explanations may be revised so that all cases of
the target phenomenon display the explanatory conditions. There is
no methodological value in piling up confirming cases; the strategy
is exclusively qualitative, seeking encounters with new varieties of
data in order to force revisions that will make the analysis valid
when applied to an increasingly diverse range of cases. The
investigation continues until the researcher can no longer practically
pursue negative cases."
Usually, three explanatory mechanisms are available for
presenting the findings in analytical induction as follows:
(a) Practicalities of action.
(b) Self-awareness and self-regard.
(c) Sensual base of motivation in desires, emotions or a sense of
compulsion to act.
The steps of analytical induction process are as follows :
a) Develop a hypothetical statement drawn from an individual
instance.
b) Compare that hypothesis with alternative possibilities taken from
other instances. Thus the social system provides categories and
classifications, rather than being imposed upon the social system.
Progress in the social sciences is escalated further by comparing
aspects of a social system with similar aspects in alternative
social systems. The emphasis in the process is upon the whole,
even though elements are analysed as are relationships between
those elements. It is not necessary that the specific cases being
studied are ―average‖ or representative of the phenomena.
According to Cressey, the steps of analytical induction process
are as follows :
a) A phenomenon is defined in a tentative manner.
b) A hypothesis is developed about it.
c) A single instance is considered to determine if the hypothesis is
confirmed.
d) If the hypothesis fails to be confirmed, either the phenomenon is
redefined or the hypothesis is revised so as to include the
instance examined.
e) Additional cases are examined, and if the new hypothesis is
repeatedly confirmed, some degree of certainty about the
hypothesis is ensured.
f) Each negative case requires that the hypothesis be reformulated
until there are no exceptions.
B. Constant Comparison : Many writers suggest ways of approaching
your data so that you can code the data with an open mind and
recognize noteworthy patterns in the data. Perhaps
the most famous are those made by the grounded theorists. This
could be done through constant comparison method. This requires
that every time you select a passage of text (or its equivalent in video
etc.) and code it, you should compare it with all those passages you
have already coded that way, perhaps in other cases. This ensures
that your coding is consistent and allows you to consider the
possibility either that some of the passages coded that way do not fit
as well and could therefore be better coded as something else or that
there are dimensions or phenomena in the passages that might well
be coded another way as well. But the potential for comparisons
does not stop there. You can compare the passage with those coded
in similar or related ways or even compare them with cases and
examples from outside your data set altogether. Previously coded
text also needs to be checked to see if the new codes created are
relevant. Constant comparison is a central part of grounded theory.
Newly gathered data are continually compared with previously
collected data and their coding in order to refine the development of
theoretical categories. The purpose is to test emerging ideas that
might take the research in new and fruitful directions. In the case of
far out comparisons, the comparison is made with cases and
situations that are similar in some respects but quite different in
others and may be completely outside the study. For example, still
thinking about parental help, we might make a comparison with the
way teachers help students. Reflecting on the similarities and
differences between teaching and parental relationships might
suggest other dimensions to parental help, like the way that teachers
get paid for their work but parents do not.
Ryan and Bernard suggest a number of ways in which those
coding transcripts can discover new themes in their data. Drawing
heavily on Strauss and Corbin (1990) they suggest these include:
a. Word repetitions : Look for commonly used words and
words whose close repetition may indicate emotions (see the
sketch after this list).
b. Indigenous categories (what the grounded theorists refer to
as in vivo codes) : It refers to terms used by respondents with
a particular meaning and significance in their setting.
c. Key-words-in-context : Look for the range of uses of key
terms in the phrases and sentences in which they occur.
d. Compare and contrast : It is essentially the grounded theory
idea of constant comparison. Ask, ‗what is this about?‘ and
‗how does it differ from the preceding or following
statements?‘
e. Social science queries : Introduce social science explanations
and theories, for example, to explain the conditions, actions,
interaction and consequences of phenomena.
f. Searching for missing information : It is essential to try to
get an idea of what is not being done or talked about, but which
you would have expected to find.
g. Metaphors and analogies : People often use metaphor to
indicate something about their key, central beliefs about
things and these may indicate the way they feel about things
too.
h. Transitions : One of the discursive elements in speech which
includes turn-taking in conversation as well as the more
poetic and narrative use of story structures.
i. Connectors : It refers to connections between terms such as
causal ('since', 'because', 'as', etc.) or logical ('implies',
'means', 'is one of', etc.)
j. Unmarked text : Examine the text that has not been coded under
a theme or even not coded at all.
k. Pawing (i.e. handling) : It refers to marking the text and
eyeballing or scanning the text. Circle words, underline, use
coloured highlighters, run coloured lines down the margins to
indicate different meanings and coding. Then look for
patterns and significances.
l. Cutting and sorting : It refers to the traditional technique of
cutting up transcripts and collecting all those coded the same
way into piles, envelopes or folders or pasting them onto
cards. Laying out all these scraps and re-reading them,
together, is an essential part of the process of analysis.
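As a minimal illustration of the word repetitions tactic (item a above), the following Python sketch counts frequent words in a transcript; the transcript excerpt and the stop-word list are hypothetical.

import re
from collections import Counter

transcript = """I help my daughter with homework every evening.
Helping her takes time but the homework matters to me."""   # hypothetical excerpt

stop_words = {"i", "my", "the", "to", "but", "with", "her", "me", "every"}

words = re.findall(r"[a-z']+", transcript.lower())
counts = Counter(w for w in words if w not in stop_words)

print(counts.most_common(5))   # frequently repeated words hint at candidate themes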
C. Triangulation : According to Berg and Berg, triangulation is a
term originally associated with surveying activities, map making,
navigation and military practices. In each case, there are three
known objects or points used to draw sighting lines towards an
unknown point or object. Usually, these three sighting lines will
intersect forming a triangle known as the triangle of error. Assuming
that the three lines are equal in error, the best estimated place of the
new point or object is at the centre of the triangle. The word
triangulation was first used in the social sciences as metaphor
describing a form of multiple operationalisation or convergent
validation. Campbell and Fiske were the first to apply the
navigational term triangulation to research. The simile is quite
appropriate because a phenomenon under study in a qualitative
research is much like a ship at sea as the exact description of the
phenomenon in a qualitative research is unclear. They used the term
triangulation to describe multiple data collection strategies for
measuring a single concept. This is known as data triangulation.
According to them, triangulation is a powerful way of demonstrating
concurrent validity, particularly in qualitative research. Later on,
Denzin introduced another metaphor, viz., ‗line of action‘ which
characterises the use of multiple data collection strategies (usually
three), multiple theories, multiple researchers, multiple
methodologies or a combination of these four categories of
researcher activities. This is aimed at mutual confirmation of
measures and validation of findings. The purpose of triangulation is
not restricted to combining different kinds of data but to relate them
so as to enhance the validity of the findings.
Triangulation is an approach to research that uses a
combination of more than one research strategy in a single
investigation. Triangulation can be a useful tool for qualitative as
well as quantitative researchers. The goal in choosing different
strategies in the same study is to balance them so each
counterbalances the margin of error in the other.
Used with care, it contributes to the completeness and
confirmation of findings necessary in qualitative research
investigations.
Choosing Triangulation as a Research Strategy
Qualitative investigators may choose triangulation as a
research strategy to assure completeness of findings or to
confirm findings. As in the story of three blind men describing an
elephant, the most accurate description of the elephant comes from a
combination of all three individuals' descriptions.
Researchers might also choose triangulation to confirm findings
and conclusions. Any single qualitative research strategy has its
limitations. By combining different strategies, researchers
confirm findings by overcoming the limitations of a single
strategy. Uncovering the same information from more than one
vantage point helps researchers describe how the findings
occurred under different circumstances and assists them to
confirm the validity of the findings.
Types of Triangulation
1. Data Triangulation : Time, Space, Person
2. Method Triangulation : Design, Data Collection
3. Investigator Triangulation
4. Theory Triangulation
5. Multiple Triangulation, which uses a combination of two or more triangulation techniques in one study.
Each of these is described in detail in the following
paragraphs.
1. Data Triangulation
According to Denzin (1989) there are three types of data
triangulation: (a) time, (b) space, and (c) person.
(a) Time Triangulation : Here, the researcher/s collect data
about a phenomenon at different points in time. However,
studies based on longitudinal designs are not considered
examples of data triangulation for time because they are
intended to document changes over time. Triangulation of data
analysis in cross-sectional and longitudinal research is an
example of time triangulation.
(b) Space Triangulation : It consists of collecting data at more
than one site. At the outset, the researcher must identify how
time or space relate to the study and make an argument
supporting the use of different time or space collection points in
the study. By collecting data at different points in time and in
different spaces, the researcher gains a clearer and more
complete description of decision making and is able to
differentiate characteristics that span time periods and spaces
from characteristics specific to certain times and spaces.
(c) Person Triangulation : According to Denzin, person
triangulation has three levels, viz., aggregate, interactive and
collective. It is also known as combined levels of triangulation.
Here researchers collect data from more than one level of
person, that is, a set of individuals, groups, or collectives.
Researchers might also discover data that are dissimilar among
levels. In such a case, researchers would collect additional data
to resolve the incongruence. According to Smith, there are seven
levels of ‘person triangulation’ as follows :
i. The Individual Level.
ii. Group Analysis : The interaction patterns of individuals and groups.
iii. Organisational Units of Analysis : Units which have qualities not possessed by the individuals making them up.
iv. Institutional Analysis : Relationships within and across the legal (for example, Court, School), political (for example, Government), economic (for example, Business) and familial (for example, Marriage) institutions of the society.
v. Ecological Analysis : Concerned with spatial explanation.
vi. Cultural Analysis : Concerned with the norms, values, practices, traditions and ideologies of a culture.
vii. Societal Analysis : Concerned with gross factors such as urbanisation, industrialisation, education, wealth, etc.
2. Methods Triangulation
Methods triangulation can occur at the level of (a) design
or (b) data collection.
(a) Design Level Triangulation : Methods triangulation at the
design level has also been called between-method triangulation.
Design methods triangulation most often uses quantitative
methods combined with qualitative methods in the study design.
There is simultaneous and sequential implementation of both
quantitative and qualitative methods. Theory should emerge
from the qualitative findings and should not be forced by
researchers into the theory they are using for the quantitative
portion of the study. The blending of qualitative and quantitative
approaches does not occur during either data generation or
analysis. Rather, researchers blend these approaches at the level
of interpretation, merging findings from each technique to
derive a consistent outcome. The process of merging findings "is
an informed thought process, involving judgment, wisdom,
creativity, and insight and includes the privilege of creating or
modifying theory". If contradictory findings emerge or
researchers find negative cases, the investigators most likely will
need to study the phenomenon further. Sometimes design-level
triangulation might use two different qualitative research
methods. When researchers combine methods at the design
level, they should consider the purpose of the research and make
a logical argument for using each method.
(b) Data Collection Triangulation : Methods triangulation at the
data collection level has been called within-method triangulation.
Using methods triangulation at the level of data collection,
researchers use two different techniques of data collection, but each
technique is within the same research tradition. The purpose of
combining the data collection methods is to provide a more holistic
and better understanding of the phenomenon under study. It is not an
easy task to use method triangulation; it is often more time
consuming and expensive to complete a study using methods
triangulation.
3. Investigator Triangulation
Investigator triangulation occurs when two or more
researchers with divergent backgrounds and expertise work
together on the same study. To achieve investigator
triangulation, multiple investigators each must have prominent
roles in the study and their areas of expertise must be
complementary. All the investigators discuss their individual
findings and reach a conclusion, which includes all findings.
Having a second research expert examine a data set is not
considered investigator triangulation. Use of methods
triangulation usually requires investigator triangulation because
few investigators are expert in more than one research method.
4. Theory Triangulation
Theory triangulation incorporates the use of more than
one lens or theory in the analysis of the same data set. In
qualitative research, more than one theoretical explanation
emerges from the data. Researchers investigate the utility and
power of these emerging theories by cycling between data
generation and data analysis until they reach a conclusion.
5. Multiple Triangulation
It uses a combination of two or more preceding triangulation
techniques in one study.
Reducing Bias in Qualitative Data Analysis:
Bias can influence the results. The credibility of the
findings can be increased by:
a. Using multiple sources of data. Using data from different
sources helps in cross-checking the findings. For example,
combine and compare data from individual interviews with data
from focus groups and an analysis of written material on the
topic. If the data from these different sources point to the same
conclusions, the findings are more reliable.
b. Tracking choices. The findings of the study will be more
credible if others understand how the conclusions were drawn.
Keep notes of all analytical decisions to help others follow the
reasoning. Document reasons for the focus, category labels
created, revisions to categories made and any observations noted
concerning the data while reading and re-reading the text.
c. Document the process used for data analysis. People often see
and read only what supports their interest or point of view.
Everyone sees data from his or her perspective. It is important to
minimise this selectivity. State clearly how the data were analysed so
that others can see how decisions were made, how the analysis
was completed and how the interpretations were drawn.
d. Involving others. Getting feedback and input from others can
help with both analysis and interpretation. Involve others in the
entire analysis process, or in any one of the steps. Have several
people or another person review the data independently to
identify themes and categories. Then compare categories and
resolve any discrepancies in meaning.
Drawbacks to be Avoided:
a. Do not generalise results. The goal of qualitative work is not to
generalise across a population. Rather, a qualitative data
collection approach seeks to provide understanding from the
respondent's perspective. It tries to answer the question ―why‖.
Qualitative data provide for clarification, understanding and
explanation, not for generalizing.
b. Choose quotes carefully. Use of quotes can not only provide
valuable support to data interpretation but is also useful in
directly supporting the argument or illustrating success. However,
avoid using people's words out of context or editing quotes to
exemplify a point. Use quotes keeping in mind the purpose for
including quotes. Include enough of the text to allow the reader
to decide what the respondent is trying to convey.
c. Respect confidentiality and anonymity when using quotes. Even
if the person's identity is not noted, others might be able to
identify the person making the remark. Therefore, get people's
permission to use their words.
d. Be aware of, state and deal with limitations. Every study has
limitations. Presenting the problems or limitations encountered
when collecting and analysing the data helps others understand
the conclusions more effectively.
Check Your Progress – II
Explain the meaning of the following terms :
(a) Analytical Induction :
(b) Constant Comparison :
(c) Triangulation :
SUGGESTED READINGS
Bogdan R. B. and Biklen, S. K. Qualitative Research for
Education: An Introduction to Theory and Methods.
Third Edition. Needham Heights, MA: Allyn and Bacon.
1998.
Miles, M. B. and Huberman, A. M. Qualitative Data Analysis : An
Expanded Sourcebook. Thousand Oaks, CA: Sage. 1994.
Katz, J. "Analytic Induction," in Smelser and Baltes, (eds)
International Encyclopedia of the Social and Behavioral
Sciences. 2001.
Ragin, C. C. Constructing Social Research: The Unity and Diversity
of Method. Pine Forge Press, 1994.
Strauss, A. and Corbin, J. Basics of Qualitative Research. Grounded
Theory Procedures and Techniques. Newbury Park, CA :
Sage. 1990.
12
RESEARCH REPORTING
Unit Structure:
12.0 Objectives
12.1 Introduction
12.2 Types of Research Report
12.2.1 Format
12.2.2 Style
12.2.3 Mechanism of report writing with reference to dissertation, thesis and papers
12.3 Bibliography
12.4 Evaluation of Research Report
12.0 OBJECTIVES :
After reading this unit, the student will be able to
(a) Decide the style, format and mechanisms of writing a
research report.
(b) Write a bibliography correctly and comprehensively.
(c) Explain how to write a research report.
12.1 INTRODUCTION :
Educational research is shared and communicated to others for the dissemination of knowledge. After completion of the research activities, the researcher has to report systematically, in writing, all the activities involved in the research process. For clear and easy understanding by readers, writing a good research report requires knowledge of the types of research reporting, the rules for writing and typing, the format and style of research reporting and the body of the report. At the same time, the scholarship, precision of thought and originality of the researcher remain indispensable for producing a good research report.
12.2 TYPES OF RESEARCH REPORT :
Research reports mainly take the form of a thesis, a dissertation, a journal article or a paper to be presented at a professional meeting. Research reports vary in format and style. For example, there are differences between a research report prepared as a thesis or dissertation and a research report prepared as a manuscript for publication.
The dissertation and thesis are more elaborate and comprehensive, while research papers prepared for journal articles and professional meetings are more precise and concise.
12.2.1 Format :
Format refers to the general pattern of organisation and
arrangement of the report. It is an outline that includes sections
and subsections or chapters and subchapters or headings and
subheadings followed to write research report. All research reports
follow a format that is parallel to the steps involved in conducting a
study. The format of a research report is generally well spelled out
in its contents. Different universities, institutions and organisations publishing professional journals follow style manuals prepared on their own. Some institutions follow style manuals prepared by other professional bodies such as the American Psychological Association, the University of Chicago and the Harvard Law Review Association. The Publication Manual of the American Psychological Association (APA), the Chicago Manual of Style (CMS), and A Uniform System of Citation (USC) published by the Harvard Law Review Association are some of the noteworthy style manuals that researchers follow for format and style while writing research reports.
The APA format is widely followed because it eliminates formal footnotes. It provides detailed information about the research format for all types of research reports in various behavioural and social science disciplines. The CMS presents guidelines for the use of quotations, abbreviations, names and terms, distinctive treatment of words, numbers, tables, mathematics in type and the writing of footnotes. Some historians and ethnographers prefer to use the CMS and USC.
The common format used to write a research report of quantitative studies for a degree requirement is as follows.
Preliminary pages
1. Title Page
a) Title
b) Degree requirement
c) University or institution’s name
d) Author’s name
e) Supervisor’s name
f) University Department
g) Year
2. Acknowledgements
3. Supervisor’s Certificate
4. Table of contents
5. List of Tables
6. List of Figures
Main Body of the Report
1. Chapter – I : Introduction
a. Theoretical Framework
b. Rationale of the study
c. Statement of the problem
d. Definitions of terms
e. Objectives
f. Hypothesis
g. Scope and Delimitations of the study
h. Significance of the study
2. Chapter II : Review of Related Literature
3. Chapter III : Methodology and Procedures
a. Design and Research method
b. Population and sample
c. Tools and techniques of data collection
d. Techniques of data analysis
4. Data Analyses
5. Results and Discussions
6. Conclusions and Recommendations
7. Bibliography
8. Appendices
The common format followed for qualitative research
including historical and analytical research is different from the
format followed in quantitative research. The common format usually used to write a research report of qualitative studies for degree requirements is as follows.
1. Preliminary pages (same as in quantitative research)
2. Introduction
a) General problem statement
b) Preliminary Research Review
c) Foreshadowed Problems
d) Significance of the study
e) Delimitations of the study
3. Design and Methodology
a) Site selection
b) Researcher’s Role
c) Purposeful / Theoretical Sampling
d) Data collection strategies
4. Qualitative Data analysis and Presentation
5. Presentation of Findings : An Analytical interpretation
6. Bibliography
7. Appendices
The common format followed for writing a research report
as an article or a paper for a journal and seminar is as follows :
1. Title and author’s name and address
2. Abstract
3. Introduction
4. Method
a) Sample
b) Tools
c) Procedure
5. Results
6. Discussions
7. References
Check Your Progress-I
1. What is the format of writing a dissertation?
12.2.2 Style :
Style refers to the rules of spelling, capitalization,
punctuations and typing followed in preparing the report. A
researcher has to follow some general rules for writing and typing a
research report. The rules that are applicable both for quantitative
and qualitative research report are as follows :
1. The research report should be presented in a creative, clear,
concise and comprehensive style. Literary style of writing is to
be replaced by scientific and scholarly style reflecting precise
thinking. Descriptions should be free from bias, ambiguity and
vagueness. Ideas need to be presented logically and sequentially
so that the reader finds no difficulty in reading.
2. The research report should be written in a clear, simple,
dignified and straight forward style, sentences should be
grammatically correct. Colloquial expressions, such as ‘write up’
for report and ‘put in’ for insert should be avoided. Even great
ideas are sometimes best explained in simple, short and
coherent sentences. Slang, flippant phrases and folksy style
should be avoided.
3. Research report is a scientific document but not a novel or
treatise. It should not contain any subjective and emotional
statements. Instead, it should contain factual and objective
statements.
4. Personal pronouns such as I and me, and active voice should be
avoided as far as possible. For example, instead of writing I
randomly selected 30 subjects, it is advisable to write thirty
subjects were selected randomly by the investigator.
5. Sexist language should be replaced by non-sexist language while
writing research report. Male or female nouns and pronouns
(he and she) should be avoided by using plurals. For example,
write children and their parents have been interviewed rather
than child and his parents were interviewed.
6. Instead of using titles and first names of the cited authors, last
name is needed. For example, instead of writing Professor John
Dewey, write Dewey.
7. Contracted forms of modal auxiliaries and abbreviations should be avoided. For example, shouldn't, can't and couldn't should not be used. However, abbreviations can be used to avoid repetition if the term has been spelled out with the abbreviation in parentheses earlier. For example, a researcher can write NCERT if he/she has already written National Council of Educational Research and Training (NCERT) in an earlier sentence. There are a few exceptions to this rule for well-known abbreviations such as IQ.
8. The use of tense plays an important role in writing a research report. The past tense or present perfect tense is used for the review of related literature and the description of methodology, procedures, results and findings of the study. The present tense is appropriate for discussing results and presenting research conclusions and interpretations. The future tense, except in research proposals, is rarely used.
9. Economy of expression is important for writing a research
report. Long sentences and long paragraphs should be avoided.
Short, simple words are better than long words. It is important
that thought units and concepts are ordered coherently to
provide a reasonable progression from paragraph to paragraph
smoothly.
10. Fractions and numbers which are less than ten should be expressed in words. For example, six schools were selected, or fifty percent of the students were selected.
11. Neither standard statistical formulae nor computations are given in the research report.
12. Research report should not be written hurriedly. It should be
revised many times before publication. Even typed manuscripts
require to be thoroughly proofread before final typing.
13. Typing is very important while preparing a research report. The use of a computer and a word-processing programme has made this work easy. However, the following rules of typography need to be followed (an illustrative sketch is given after this list).
i) A good quality bond paper, 8½ by 11 inches in size and 13 to 16 pound in weight, should be used.
ii) Only one side of the sheet is used in typing.
iii) The left margin should be 1½ inches. All other margins, i.e. the top, the bottom and the right, should be 1 inch.
iv) All material should be double spaced.
v) Times New Roman or Bookman Old Style with a 12-point font can be used for typing words in English, and book titles can be italicised.
vi) Direct quotations not over three typewritten lines in length
are included in the text and enclosed in quotation marks.
Quotations of more than three lines are set off from the text
in a double – spaced paragraph and indented five spaces
from the left margin without quotation marks. However,
original paragraph indentations are retained. Page numbers
are given in parentheses at the end of a direct quotation.
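As an illustration only, the sketch below shows how some of these typing conventions (a 1½-inch left margin, 1-inch margins elsewhere, double spacing and a 12-point serif font) might be set up with the python-docx library; the library choice and the file name are assumptions made for the sketch, not part of any prescribed procedure.

```python
# Illustrative sketch: applying common thesis typing conventions with python-docx.
from docx import Document
from docx.shared import Inches, Pt

doc = Document()

# Margins: 1.5 inches on the left (for binding), 1 inch on the other sides.
for section in doc.sections:
    section.left_margin = Inches(1.5)
    section.right_margin = Inches(1)
    section.top_margin = Inches(1)
    section.bottom_margin = Inches(1)

# Body text: Times New Roman, 12 point, double spaced.
normal = doc.styles["Normal"]
normal.font.name = "Times New Roman"
normal.font.size = Pt(12)
normal.paragraph_format.line_spacing = 2

doc.add_paragraph("Chapter I: Introduction")  # sample text only
doc.save("thesis_draft.docx")                 # hypothetical output file name
```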
Check Your Progress-II
1. State the style in which a research report needs to be written.
12.2.3 Mechanisms of Writing Dissertation and Thesis :
In reality, the terms dissertation and thesis carry the same
meaning. Thesis is an English (UK) term whereas dissertation is an
American term. However, in India, the term thesis is used to denote work carried out for a Ph.D. degree whereas the term dissertation is used to denote work carried out for M.Ed. and M.Phil. degrees, especially in the academic discipline of Education. Both follow the format or outline stated earlier under the format section. A thesis and a dissertation should be complete and comprehensive. The main sections of a dissertation and thesis are (i) Preliminary pages, (ii) Main body of the report and (iii) Appendices.
i) Preliminary pages : The Preliminary pages include title page,
supervisor’s certificate, acknowledgement page, table of
contents, list of tables and figures. The title page usually
includes the title of the report, the author’s name, the
degree requirement, the name and location of the
college or university awarding the degree and the date or
year of submission of the report. Name, designation and
institutional affiliation of the guide are also written. The
title of a dissertation and thesis should clearly state the
purpose of the study. The title should be typed in capital
letters, should be centred, in an inverted pyramid form
and when two or more lines are needed, should be
double spaced. An example of the title page is given in
the following box – 1.
BOX-1
DEVELOPMENT OF SCIENCE CONCEPTS IN HEARING IMPAIRED CHILDREN
STUDYING IN SPECIAL SCHOOLS AND INTEGRATED SETTINGS
Thesis submitted to the University of Mumbai
for the Degree of
DOCTOR OF PHILOSOPHY
in
ARTS (EDUCATION)
By
Vaishali Sawant
Guided by
Dr. D. Harichandan
Department of Education
University of Mumbai
2010
As per the requirement of some universities, these also include a certificate from the supervisor under whose guidance or supervision the research work was completed.
Most theses and dissertations include an acknowledgement
page. This page permits the researcher to express appreciation of
persons who have contributed significantly to the completion of the
report. It is acceptable to thank one’s own guide or supervisor who
helped at each stage of the research work, teachers, students or
principals who provided data for the research and so on. Only those persons who helped significantly in the completion of the research work should be acknowledged.
The table of contents is an outline of the dissertation or thesis which indicates the page on which each major section (chapter) and subsection begins.
The lists of tables and figures are given on separate pages that give the number and title of each table and figure and the page on which it can be found. Entries listed in the table of contents should be identical to the headings and subheadings in the report, and table and figure titles should be the same as those given to the actual tables and figures in the main body of the report.
The main body of the report : The main body of the report includes the introduction, review of related literature, methodology and procedures, results and discussion, conclusions and recommendations, and appendices. The introduction section includes a theoretical framework that introduces the problem, the significance of the study from both theoretical and practical points of view, a description of the problem, operational definitions of the terms, objectives, a statement of hypotheses with the rationale upon which each hypothesis is based and, sometimes, delimitations of the study. The problem requires to be stated as an interrogative statement or a series of questions to which answers are to be sought by the researcher through empirical investigation. Abstract terms and variables used in the problem require to be operationally defined.
The problem should be stated in such a way that it aims at finding the relationship between two variables.
The related hypotheses should be supported by a rationale deduced from previous research studies or from experiences with evidence. A hypothesis, which is a tentative answer to the research question, should be stated concisely and clearly so that it can be tested statistically or logically with evidence. An example of a hypothesis based on a rationale is given in Box – 2.
Box – 2
Rationale – 1
Cognitive development passes through four successive stages from the Piagetian perspective, namely, the sensory-motor, pre-operational, concrete operational and formal operational stages. Each stage starts at a specific age and is characterised by the development of some specific concepts. Hence, the development of science concepts is related to age. It is assumed that age plays a vital role in the development of concepts. The hypothesis derived from this rationale is:
H1 : There exists a significant difference among hearing impaired children at successive age levels in the development of concepts in science.
The delimitation of the study should include such aspects as the variables, sample, area or site, and the tools and techniques to which the study has been delimited.
The second chapter of the main body of the report includes
review of literature. In this chapter, the past research works
relating to the present study under report should be described and
analysed. The description, which includes the last name of the previous researcher with the year of study in parenthesis, covers mainly the method and findings briefly and precisely. Mere description of studies and findings has no meaning unless those descriptions and findings are analysed critically to find out the research gaps to be bridged by the present study. Therefore, after the description of the previous research work, the research report should contain a critical appraisal to bring out the significance of the study.
The methodology and procedure section includes the description of the subjects, tools and instruments, design and procedures. The description of subjects includes a definition and description of the population from which the sample was selected. The description of the population should indicate its size and major characteristics such as age, grade level, ability level and socio-economic status. The method and sampling techniques used for selection of the sample, along with the size of the sample, require detailed description.
The instruments and tools used for investigation require detailed description. The description includes the functions of the instrument, its validity, reliability and scoring procedures. If an instrument has been developed by the investigator, the description needs to be more detailed, specifying the procedures followed for developing the tool, the steps used for determining validity and reliability, response categories, the scoring pattern, norms, if any, and guidelines for interpretation. A copy of the instrument, with the scoring key and other pertinent information related to the instrument, is generally given in the appendix of the dissertation or thesis, not in the main body of the report.
The description of the design is given in detail. It also
includes a rationale for selection of the design.
The results section describes the statistical techniques applied to the data with justification, the preselected alpha levels and the result of each analysis. Analysis of data is made under subheadings pertaining to each hypothesis. Tables and figures need to present the findings of the statistical analysis (in summary form) in vertical columns and horizontal rows. Graphs are also given in the main body of the report. Tables and figures enable the reader to comprehend and interpret data quickly. It is advisable to use several tables rather than one table that is crowded. Good tables and figures are self-explanatory. Each table should, as far as possible, be presented on a single page. Large tables or graphs should be reduced to manuscript page size either by photostat or some other process of reproduction. The word table or figure is centred between the page margins and typed in capital letters, followed by the table or figure number in Arabic numerals. Tables and figures are numbered continuously but separately for each chapter. The title of the table or figure is placed in double space below the word table or figure. The title should be brief and clear, indicating the nature of the table presented. Column headings and row headings of a table should be clearly labelled. If no data are available for a particular cell, indicate the lack by a dash (-) rather than a zero (0).
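As a hedged illustration of these recommendations, the following Python sketch uses pandas (an assumption, not a requirement of the text) to lay out a small summary table with labelled rows and columns and to show an empty cell as a dash rather than a zero; all values in it are invented.

```python
# Illustrative sketch: a small summary table with labelled headings and a dash for an empty cell.
import pandas as pd

# Hypothetical summary statistics for two groups (values are invented).
data = {
    "Group": ["Special schools", "Integrated settings"],
    "N": [60, 55],
    "Mean": [24.3, 27.1],
    "SD": [4.2, None],   # one cell deliberately left empty
}
table = pd.DataFrame(data).set_index("Group")

# Replace the missing cell with a dash (-) rather than a zero, as the text recommends.
print("TABLE 1")
print("SUMMARY OF SCORES BY TYPE OF SETTING")
print(table.fillna("-").to_string())
```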
Each table and figure is followed by a systematic description in simple language, with statistical or mathematical notation in parenthesis. An example is given in Box – 3.
Box – 3
TABLE 1
Z-RATIO OF CONSERVATION RESPONSES OF HEARING IMPAIRED CHILDREN IN INTEGRATED EDUCATIONAL SETTING (IED) AND SPECIAL SCHOOLS
[Table body with z-ratios for each conservation concept not reproduced.]
The results that emerge should be followed by discussion and interpretation. Each result is discussed in terms of the original hypothesis to which it relates, and in terms of its agreement or disagreement with previous results obtained by other researchers in other studies. Sometimes, the researcher uses a separate section titled 'Discussion' where all the results are explained either individually or jointly, at both the micro and macro levels.
The conclusion and recommendation chapter includes a description of the major findings, a discussion of the theoretical and practical implications of the findings and recommendations for future research or future action. In this section, a researcher is free to discuss any possible revisions and additions to existing theory and to encourage studies designed to test hypotheses suggested by the results. The researcher may also discuss the implications of the findings for educational practice and suggest studies that can be replicated in other settings. The researcher may also suggest further studies designed for investigating different dimensions of the problem investigated.
The references or bibliography section of the dissertation and thesis consists of a list, arranged alphabetically by the authors' last names, of all the sources that were cited in the report. Most of the authors' names are cited in the introduction and review of related literature sections. Only primary sources that are cited in the body of the dissertation and thesis are included in the references.
Appendices are necessary in thesis and dissertation reports. Appendices include information and data pertinent to the study which are not important enough to be included in the main body of the report or are too lengthy. Tests, questionnaires, cover letters, raw data and data analysis sheets are included in the appendices. Sometimes, a subject index that includes important concepts used in the main body of the report is added; the list of those concepts is given alphabetically with the page on which each can be found. Appendices are named alphabetically, followed by a short title relating to the theme. (For example, APPENDIX A: Non-verbal Test on concept attainment in Biology.)
MECHANISM OF WRITING PAPERS :
A paper is a research report prepared for publication in a journal or for presentation at a seminar or professional meeting. Papers prepared for journals and seminars follow the same mechanism of writing, style and format. The main purpose of a paper is to share the ideas that emerged with other researchers,
which is not always possible through a dissertation or thesis. The content and format of a paper and a thesis are very similar except that the paper is much shorter. A lengthy thesis or dissertation may be reworked into one or two papers or articles. The research paper follows the format given below.
1. Title
2. Author’s name
3. Abstract (About 100 to 120 words)
4. Introduction
5. Method
6. Results
7. Discussion
8. Reference
Writing the title of a research paper follows the same mechanism as that followed in a dissertation and thesis. Below the title, the author's name and address are given.
An abstract of the paper, consisting of 100 to 120 words and containing mainly the objectives, methods and findings, is given before the main body of the paper. Other preliminary pages of the dissertation and thesis are not required in a research paper, whether it is to be published in a journal or presented at a seminar.
The introduction section of a research paper consists of a brief description of the theoretical background, the agreements and disagreements of previous researchers on findings related to the topic under report, the objectives and the hypotheses.
The method section deals with sample size and sampling,
instruments and tools, design and procedure of collecting data. In
a dissertation and thesis, detailed description is necessary, whereas in a research paper the same is to be written very precisely and comprehensively. The author has to exercise judgment in determining which are the critical aspects of the study and which aspects require more in-depth description.
The results section of a research paper includes tables and figures, including graphs. However, the tables and figures require to be described in the light of the hypothesis for its acceptance or rejection. The findings described should be supported by statistical values and alpha levels, with mathematical signs of less than (<) or greater than (>), etc., in parenthesis. For example, as can be seen
in table 1, high creative and low creative teachers differed
significantly on attitude towards class room teaching (t = 4.24; df =
121; p<.01), child centred practices (t = 2.14; df = 131; p<.05),
educational process (t = 3.38; df = 131; p<.01) and pupils (t = 2.87;
df = 131;
p <.01) in favour of high creative teachers as mean
attitude towards class room teaching of high creative teachers is
greater than their counterparts.
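To show where figures such as t, df and p in a sentence like the one above come from, here is a minimal Python sketch using scipy; the attitude scores are invented, and the two-group, equal-variance t-test is assumed purely for illustration.

```python
# Minimal sketch: computing the statistics reported in a results sentence (invented data).
from scipy import stats

# Hypothetical attitude scores of high-creative and low-creative teachers.
high_creative = [42, 45, 39, 47, 44, 41, 46, 43, 40, 48]
low_creative = [36, 34, 38, 33, 37, 35, 32, 36, 34, 33]

t_stat, p_value = stats.ttest_ind(high_creative, low_creative, equal_var=True)
df = len(high_creative) + len(low_creative) - 2  # degrees of freedom for the pooled t-test

# These are the values one would report as, e.g., (t = ...; df = 18; p < .01).
print(f"t = {t_stat:.2f}; df = {df}; p = {p_value:.3f}")
```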
The discussion section is very important in a research paper. Each finding is discussed in the light of its agreement or disagreement with previous findings, followed by justification based on previous theory and the existing body of knowledge. In this section, the researchers are free to give their critical judgment on new dimensions that emerged out of the study, in order to add something new to the existing body of knowledge or to revise and modify theory. Critical and analytical description is highly essential in the discussion.
Lastly, the sources cited in the paper are listed in the reference section in alphabetical order by the authors' last names.
Check Your Progress-III
1. Explain the mechanisms of writing a dissertation.
12.3 BIBLIOGRAPHY :
A bibliography is a list of all the sources which the researcher actually used for writing a research report. Bibliography and references are sometimes used interchangeably, but the two are different from each other. References consist of all documents, including books, journal articles, technical reports, computer programmes and unpublished works, that are cited in the main body of a research report, i.e. a dissertation, thesis, journal article, seminar paper, etc. References include mainly primary sources. A bibliography, in contrast, contains everything that was used by the researcher, whether or not it is cited in the body of the report. It includes both primary and secondary sources. The common trend has been to use a bibliography in dissertations and theses, and references in journal articles and papers.
The Publication Manual of the American Psychological Association (APA), the Chicago Manual of Style and A Uniform System of Citation of the Harvard Law Review Association are available to guide a researcher in writing a bibliography. However, the APA style is widely used for writing bibliographies in educational research. It provides guidelines for writing different types of sources such as books, journal articles, unpublished dissertations and theses, unpublished papers presented at meetings and seminars, unpublished manuscripts, technical reports, etc.
The bibliography is arranged in alphabetical order by the last names of the first authors. When no author is given, the first word of the title or the sponsoring organisation is used to begin the entry. Each entry in the bibliography starts at the left margin of the page, with subsequent lines double spaced and indented. Extra space is not given between the entries.
The following illustrations provide information for writing different types of entries in the bibliography; a small formatting sketch follows the illustrations.
1. Book :
Vaizey, J. (1967). Education in the modern world. New York :
McGraw Hill.
2. Book with multiple authors :
Barzun, J. & Graff, H. F. (1977). The modern researcher. New York: Harcourt, Brace, Jovanovich.
3. Book in subsequent edition :
Hallahan, D.P. & Kauffman, J.M. (1982). Exceptional children (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
4. Editor as author :
Mitchell, J. V., Jr (Ed.), (1985). Mental measurement yearbook
(9th ed.), Highland Park, NJ : Gryphon Press.
5. No author given :
Prentice – Hall author’s guide. (1978), Englewood Cliffs, NJ :
Prentice Hall.
6. Corporate or association author
American Psychological Association, (1983), Publication manual
(3rd ed.), Washington, DC : Author
7. Part of a series of books :
Terman, L.M. & Oden, M.H. (1947), Genetic studies of genius
series : Vol. 4. The gifted child grows up. Stanford, CA : Stanford
University Press.
8. Chapter in an edited book :
Kahn, J.V. (1984). Cognitive training and its relationship to the language of profoundly retarded children. In J.M. Berg (Ed.), Perspectives and progress in mental retardation. Baltimore: University Park, 211–219.
9. Journal article :
Seltzer, M.M. (1984) Correlates of community opposition to
community residences for mentally retarded persons. American
Journal of Mental Deficiency, 89, 1 – 8.
10. Magazine article :
Meer, J. (1984, August), Pet theories, Psychology Today,
pp. 60 – 67.
11. Unpublished paper presented at a meeting :
Schmidt, M., Khan, J.V. & Nucci, L. (1984, May), Moral and social
conventional reasoning of trainable mentally retarded
adolescents. Paper presented at the annual meeting of the
American Association on Mental Deficiency, Minneapolis, MN.
12. Thesis or dissertation (unpublished) :
Best, J.W. (1948), An analysis of certain selected factors
underlying the choice of teaching as a profession. Unpublished
doctoral dissertation, University of Wisconsin, Madison.
13. Unpublished manuscripts :
Kahn, J.V., Jones, C., & Schmidt, M. (1984). Effect of object
Preference on sign learnability by severely and profoundly
retarded children : A pilot study. Unpublished manuscript,
University of Illinois at Chicago.
Kahn, J.V., (1991). Using the Uzgiris and Hunt scales to
understand sign usage of children with severe and profound
mental retardation, Manuscript submitted for publication.
14. Chapter accepted for publication :
Kahn, J.V. (in press), Predicting adaptive behavior of severely
and profoundly mentally retarded children with early cognitive
measures. Journal of Mental Deficiency Research
15. Technical report :
Kahn, J.V., (1981) : Training sensorimotor period and language
skills with severely retarded children. Chicago, IL : University of
Illinois at Chicago. (ERIC Document Reproduction Service, No. ED
204 941).
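A small sketch of how such entries can be assembled and alphabetised is given below; the helper function and the sample records are purely illustrative, and the output only approximates the APA-like pattern shown in the illustrations above.

```python
# Illustrative sketch: assembling simple APA-like book entries and sorting them by last name.
def format_book(author, year, title, place, publisher):
    """Return a single bibliography line in an APA-like pattern (illustrative only)."""
    return f"{author} ({year}). {title}. {place}: {publisher}."

# Sample records taken from the illustrations above.
books = [
    ("Vaizey, J.", 1967, "Education in the modern world", "New York", "McGraw Hill"),
    ("Barzun, J. & Graff, H. F.", 1977, "The modern researcher", "New York",
     "Harcourt, Brace, Jovanovich"),
]

# Bibliographies are arranged alphabetically by the first author's last name,
# which leads each formatted entry here.
entries = sorted(format_book(*book) for book in books)
for entry in entries:
    print(entry)
```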
12.4 EVALUATION OF RESEARCH REPORT :
Evaluation of a research report is essential to find out major problems and shortcomings. Through a critical analysis, the student may gain some insight into the nature of the research problem, the methodology for conducting research, the process by which data are analysed and conclusions are drawn, the format of writing a research report and the style of writing. The following questions are suggested to evaluate each component of a research report.
1. The Title and Abstract
Are the title and abstract clear and concise?
Do they promise no more than the study can provide?
2. The problem
Is the problem stated clearly?
Is the problem researchable?
Is background information on a problem presented?
Is the significance of the problem given?
Are the variables defined operationally?
3. The Hypothesis
Are hypotheses testable and stated clearly?
Are hypotheses based on sound rationale?
Are assumptions, limitations and delimitations stated?
4. Review of Related Literature
Is it adequately covered?
Are most of the sources primary?
Are important findings noted?
Is it well organised?
Is the literature given directly relevant to the problem?
Have the references been critically analysed and the results of studies compared and contrasted?
Is the review well organised?
Does it conclude with a brief summary and its implications
for the problem investigated?
5. Sample
Are the size and characteristics of the population studied
described?
Is the size of the sample appropriate?
Is the method of selecting the sample clearly described?
6. Instruments and Tools
Are data gathering instruments described clearly?
Are the instruments appropriate for measuring the intended
variable?
Are the validity and reliability of the instruments discussed?
Were systematic procedures followed if the instrument was developed by the researcher?
Are administration, scoring and interpretation procedures described?
7. Design and Procedure
Is the design appropriate for testing the hypotheses?
Are the procedures described in detail?
Are control procedures described?
8. Results
Is the statistical method appropriate?
Is the level of significance given?
Are tables and figures given?
Is every hypothesis tested?
Are the data in each table and figure described clearly?
Are the results stated clearly?
9. Discussions
Is each finding discussed?
Is each finding discussed in term of its agreement and
disagreement with previous studies?
Are generalizations consistent with the results?
10. Conclusions and Recommendations
Are theoretical and practical implications of the findings
discussed?
Are recommendations for further action made?
Are recommendations for further research made?
11. Summary
Is the problem restated?
Are the number and type of subjects and instruments
described?
Are procedures described?
Are the major findings and conclusions described?
Check Your Progress-IV
1. What are the criteria of evaluating a research report?
Suggested Readings
Turabian, K. Manual for Writers of Term Papers, Theses, and
Dissertations. (7th ed.) Chicago: University of Chicago
Press. 2007.
Cordasco, F. and Gatner, E. (1958). Research Report Writing. New York: Barnes and Noble.


Syllabus
M.A. Education, Part-I
Paper III : Research Methodology in Education
Course Objectives:
To develop an understanding about the meaning of research
and its application in the field of education.
To enable students to prepare a research proposal.
To enable students to understand different types of variables,
meaning and types of hypothesis, sampling techniques and tools
and techniques of educational research.
To develop an understanding about the different types of
research methodology of educational research.
To enable students to understand quantitative and qualitative
data analysis techniques.
Module I: Educational Research and Its Design
1. Educational Research :
(a) Sources of Acquiring Knowledge: Learned authority, tradition, experience, scientific method.
(b) Meaning, steps and scope of educational research.
(c) Meaning, steps and assumptions of scientific method. Aims
and characteristics of research as a scientific activity.
(d) Ethical Considerations in Educational Research.
(e) Paradigms of educational research: Quantitative and
Qualitative.
(f) Types of research: Fundamental, Applied and Action.
2. Research Design
(a) Meaning, definition, purposes and components of research
design.
(b) Difference between the terms research method and research
methodology.
(c) Research Proposal : Its Meaning and Need.
i) Identification of a research topic : Sources and Need
ii) Review of related literature
iii) Rationale and need of the study
iv) Definition of the terms : Real, Nominal and Operational.
v) Variables.
vi) Research questions, aims, objectives and hypotheses.
vii) Assumptions, if any.
viii) Methodology, sample and tools.
ix) Scope, limitations and delimitations.
x) Significance of the study.
xi) Techniques of data analysis and unit of data analysis.
xii) Bibliography.
xiii) Time Frame.
xiv) Budget, if any.
xv) Chapterisation.
Module II: Research Hypotheses and Sampling
3. Variables and Hypotheses
(a) Variables :
i) Meaning of Variables
ii) Types of Variables (Independent, Dependent, Extraneous, Intervening and Moderator)
(b) Hypotheses :
i) Concept of Hypothesis
ii) Sources of Hypothesis
iii) Types of Hypothesis (Research, Directional, Non-directional, Null, Statistical and Question-form)
iv) Formulating Hypothesis
v) Characteristics of a good hypothesis
vi) Hypothesis Testing and Theory
vii) Errors in Testing of Hypothesis
4. Sampling :
(a) Concepts of Universe and Sample
(b) Need for Sampling
(c) Characteristics of a good Sample
(d) Techniques of Sampling
i) Probability Sampling
ii) Non-Probability Sampling
Module III: Research Methodology, Tools and Techniques
5. Research Methodology
(a) Descriptive Research
i) Causal-Comparative
ii) Correlational
iii) Case Study
iv) Ethnography
v) Document Analysis
vi) Analytical Method
(b) Historical Research : Meaning, Scope of historical research,
Uses of history, Steps of doing historical research (Defining
the research problem and types of historical inquiry,
searching for historical sources, Summarizing and evaluating
historical sources and Presenting pertinent facts within an
interpretive framework.) Types of historical sources, External
and internal criticism of historical sources.
(c) Experimental Research
i) Pre-Experimental, Quasi-Experimental and True-Experimental Designs
ii) Factorial Design / Independent Groups and Repeated Measures
iii) Nesting Design
iv) Single-Subject Design
v) Internal and External Experimental Validity
vi) Controlling extraneous and intervening variables.
6. Tools and Techniques of Research
(a) Classical Test Theory and Item Response Theory of Test
Construction.
(b) Steps of preparing a research tool.
i) Validity (Meaning, types, indices and factors affecting validity)
ii) Reliability (Meaning, types, indices and factors affecting reliability)
iii) Item Analysis (Discrimination Index, Difficulty Index)
iv) Index of Measurement Efficiency
v) Standardisation of a tool.
(c) Tools of Research
i) Rating Scale
ii) Attitude Scale
iii) Opinionnaire
iv) Questionnaire
v) Aptitude Test
vi) Check List
vii) Inventory
viii) Semantic Differential Scale
(d) Techniques of Research
i) Observation
ii) Interview
(Tools to be used for collecting data using these
techniques to be discussed in detail.)
Module IV: Data Analysis and Report Writing
7. Data Analysis
(a) Types of Measurement Scale (Nominal, Ordinal, Interval and
Ratio)
(b) Quantitative Data Analysis
i) Parametric Techniques
ii) Non-Parametric Techniques
iii) Conditions to be satisfied for using parametric
techniques
iv) Descriptive data analysis (Measures of central tendency,
variability, fiduciary limits and graphical presentation of
data)
v) Inferential data analysis
vi) Use of Excel in Data Analysis
vii) Concepts, use and interpretation of following statistical
techniques : Correlation, t-test, z-test, ANOVA, Critical
ratio for comparison of percentages and chi-square
(Equal probability and Normal Probability Hypothesis).
viii) Testing of Hypothesis
(c) Qualitative Data Analysis
i) Data Reduction and Classification
ii) Analytical Induction
iii) Constant Comparison
8. Research Reporting
(a) Format, Style and Mechanics of Report Writing with Reference to
i) Dissertation and Thesis and
ii) Paper
(b) Bibliography
(c) Evaluation of Research Report
References:
1. Best, J.W. and Kahn, J. (1997) Research in Education (7th Ed). New Delhi: Prentice-Hall of India Ltd.
2. Borg, B.L. (2004) Qualitative Research Methods. Boston: Pearson.
3. Bogdan, R.C. and Biklen, S.K. (1998) Qualitative Research for Education: An Introduction to Theory and Methods. Boston, MA: Allyn and Bacon.
4. Bryman, A. (1988) Quantity and Quality in Social Science Research. London: Routledge.
5. Charles, C.M. and Merton, C.A. (2002) Introduction to Educational Research. Boston: Allyn and Bacon.
6. Cohen, L. and Manion, L. (1994) Research Methods in Education. London: Routledge.
7. Creswell, J.W. (2002) Educational Research. New Jersey: Upper Saddle River.
8. Creswell, J.W. (1994) Research Design. London: Sage Publications.
9. Denzin, N.K. and Lincoln, Y.S. (Eds) (1994) Handbook of Qualitative Research. London: Sage Publications.
10. Diener, E. and Crandall, R. (1978) Ethics in Social and Behavioural Research. Chicago: University of Chicago Press.
11. Dillon, W.R. and Goldstein, M. (1984) Multivariate Analysis: Methods and Applications. New York: John Wiley and Sons.
12. Gay, L.R. and Airasian, P. (2003) Educational Research. New Jersey: Upper Saddle River.
13. Husen, T. and Postlethwaite, T.N. (eds.) (1994) The International Encyclopaedia of Education. New York: Elsevier Science Ltd.
14. Keeves, J.P. (ed.) (1988) Educational Research, Methodology and Measurement: An International Handbook. Oxford: Pergamon.
15. McMillan, J.H. and Schumacher, S. (2001) Research in Education. New York: Longman.
