
SURVEY RESEARCH

What is Survey Research?
According to Isaac & Michael (1997), survey research is used to answer questions that have been raised, to solve problems that have been posed or observed, to assess needs and set goals, to determine whether or not specific objectives have been met, to establish baselines against which future comparisons can be made, to analyze trends across time and, generally, to describe what exists, in what amount and in what context.
In other words, a survey is a structured set of questions or statements given to people in order to measure their attitudes, beliefs, values or tendencies toward a topic. Kraemer (1991) identified three distinguishing characteristics of survey research.
1. Survey research is used to quantitatively describe specific aspects of a given population. These aspects often involve examining the relationships among variables.
2. The data required for survey research are collected from people and are thus subjective.
3. Survey research uses a selected portion of the population from which the findings can later
be generalized back to the population.
In survey research, independent and dependent variables are used to define the scope of the study, but they cannot be explicitly controlled by the researcher. Avwokeni (2006) states that, in a survey design, data are collected from the subjects without imposing any condition or treatment on them.
Important factors affecting survey research (Akinade & Owolabi, 2009):
1. As survey research deals with the characteristics, attitudes and behaviors of individuals or a group of individuals called a sample, direct contact with those persons must be established by the survey researcher.
2. The success of survey research depends upon the willingness and co-operativeness of the sample selected for the study. The people selected must be willing to give the desired information; if they are not willing and do not co-operate, the researcher should drop the plan in favor of some other technique.
3. Survey research requires that the researcher be a trained professional. He must have manipulative skill and research insight, and he must possess social intelligence so that he may deal with people effectively and be able to extract the desired information from others.
Determinants for adopting a survey design (Christopher & Udoh, 2020)
i. Establish the Goals: What the researcher wants to learn, or how he will use the information. The usual goal of a survey is to describe a population, so the goals of the project will determine whom the researcher will survey and what he will ask them. If the researcher's goals are unclear, the results will probably be unclear. Typical goals include learning about employee attitudes, ratings of current products and services, association members' opinions, etc.
ii. Select Your Sample: There are two main components in determining whom the researcher will interview. The first is deciding what kind of people to interview. Researchers often call this group the target population. If the researcher conducts an employee attitude survey or surveys an association's membership, the population is obvious; but if the researcher is trying to determine the likely success of a product, the target population may be less obvious. Correctly determining the target population is critical, because if the right kinds of people are not interviewed, the researcher will not successfully meet his or her goals. The second is deciding how many people the researcher needs to interview. Statisticians know that a small representative sample will reflect the group from which it is drawn, and the larger the sample, the more precisely it reflects the target group (a worked sample-size sketch follows this list). With a manageable sample size, the researcher will be able to follow up with contacts to encourage responses; as a result of these efforts, participation and data quality may be improved.
iii. Timing: Developing a timeline is a crucial step in planning the survey. When are the survey data needed? When would be the best time to contact potential participants? Does the researcher want to survey during a break, or at a time when respondents are engaged in another activity with the researcher's unit? What other workload issues may affect those involved in sending the survey and collecting and analyzing the results? How long will it take to design and obtain approval for the survey? How much time is needed for collecting responses? The answers to all these questions will help to develop a realistic timeline for the study.
iv. Mode: The combination of choices (face-to-face interviews, telephone interviews, and self-administered questionnaires) the researcher makes about contacting sample members, administering questions and recording responses is called the "mode". The mode is influenced by the survey population, the study topic, and how the researcher plans to use the information he or she gathers.
v. Develop Questions: Writing good survey questions requires keeping the goal of the survey firmly in mind and then formulating each question from the perspective of the respondent. This entails careful development of well-worded questions with appropriate response formats.
vi. Reliability: Reliability is the extent to which repeatedly measuring the same property or population produces the same result. Ideally, each survey question will mean the same thing to everyone, including those administering the survey, so careful design and refinement are necessary (a reliability sketch also follows this list).
vii. Validity: Validity is the extent to which a survey question measures the property it is supposed to measure (the project's purpose).
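The sample-size point in item ii and the reliability point in item vi can be made concrete with short calculations. The Python sketch below is illustrative only, assuming a hypothetical association of 5,000 members, a 95% confidence level and a ±5% margin of error; it uses the common proportion-based formula with a finite population correction, which is just one of several ways to size a sample.

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                z: float = 1.96, proportion: float = 0.5) -> int:
    """Rough number of respondents needed to estimate a proportion.

    n0 = z^2 * p * (1 - p) / e^2, followed by a finite population correction.
    """
    n0 = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population)  # finite population correction
    return math.ceil(n)

# Hypothetical example: a 5,000-member association, 95% confidence, +/-5% error.
print(sample_size(population=5000))  # about 357 respondents
```

Reliability (item vi) is often checked with an internal-consistency measure such as Cronbach's alpha, sketched below on made-up answers to three Likert items; in practice a dedicated statistics package would normally be used.

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[int]]) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)                                    # number of questionnaire items
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

# Three 5-point Likert items answered by five hypothetical respondents.
print(round(cronbach_alpha([[4, 5, 3, 4, 2],
                            [4, 4, 3, 5, 2],
                            [5, 4, 2, 4, 3]]), 2))  # about 0.86
```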
Purpose of Survey (Lau, 2017)
There are three broad types of survey studies reported in the eHealth literature: exploratory,
descriptive, and explanatory surveys. They are described below.
1. Exploratory Surveys – These studies are used to investigate and understand a particular
issue or topic area without predetermined notions of the expected responses. The design is
mostly qualitative in nature, seeking input from respondents with open-ended questions
focused on why and/or how they perceive certain aspects of an eHealth system. An example is the survey by Wells, Rozenblum, Park, Dunn, and Bates (2014) to identify organizational strategies that promote provider and patient uptake of personal health records (PHRs).
2. Descriptive Surveys – These studies are used to describe the perception of respondents and
the association of their characteristics with an eHealth system. Perception can be the
attitudes, behaviour and reported interactions of respondents with the eHealth system.
Association refers to an observed correlation between certain respondent characteristics and
the system, such as prior eHealth experience. The design is mostly quantitative and involves the use of descriptive statistics such as frequency distributions of Likert scale responses from participants (see the sketch after this list).
3. Explanatory Surveys – These studies are used to explain or predict one or more
hypothesized relationships between some respondent characteristics and the eHealth
system. The design is quantitative, involving the use of inferential statistics such as regression and factor analysis to quantify the extent to which certain respondent characteristics lead to or are associated with specific outcomes (a minimal regression sketch follows this list).
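To illustrate the contrast between descriptive and explanatory designs, the short Python sketch below first tabulates a frequency distribution for a 4-point satisfaction item (descriptive) and then estimates a least-squares slope relating satisfaction to a respondent characteristic (explanatory). All values and variable names are hypothetical.

```python
from collections import Counter
from statistics import mean

# Hypothetical 4-point satisfaction ratings (1 = very unsatisfied ... 4 = very satisfied)
# and years of prior eHealth experience for the same ten respondents.
satisfaction = [3, 4, 2, 4, 3, 1, 4, 3, 2, 4]
experience = [2, 6, 1, 5, 3, 0, 7, 4, 1, 6]

# Descriptive: frequency distribution of the Likert responses.
counts = Counter(satisfaction)
for level in sorted(counts):
    share = 100 * counts[level] / len(satisfaction)
    print(f"rating {level}: {counts[level]} respondents ({share:.0f}%)")

# Explanatory: least-squares slope of satisfaction on experience,
# i.e. how much the rating changes per extra year of experience.
x_bar, y_bar = mean(experience), mean(satisfaction)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(experience, satisfaction))
         / sum((x - x_bar) ** 2 for x in experience))
print(f"estimated slope: {slope:.2f} rating points per year of experience")
```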
Determining the type of survey to be used

Time Period – Surveys can take on a cross-sectional or longitudinal design based on the time period involved. In a cross-sectional design the survey takes place at one point in time, giving a snapshot of the participant responses. In a longitudinal design the survey is repeated two or more times within a specified period in order to detect changes in participant responses over time (Bethlehem, 2009).

Respondent Group – Surveys can involve a single cohort or multiple cohorts of respondents. With multiple cohorts they are typically grouped by some characteristic for comparison, such as age, sex, or eHealth use status (Johnson, 2014).

Variable Choice – In quantitative surveys one needs to define the dependent and independent variables being studied. A dependent variable refers to the perceived outcome that is measured, whereas an independent variable refers to a respondent characteristic that may influence the outcome (such as age). Typically, the variables are defined using a scale that can be nominal, ordinal, interval, or ratio in nature (Layman & Watzlaf, 2009). In a nominal scale, a value is assigned to each response, such as 1 or F for female and 2 or M for male. In an ordinal scale, the response can be rank ordered, such as user satisfaction running from 1 for very unsatisfied to 4 for very satisfied. Interval and ratio scales have numerical meaning, where the distance between two responses relates to the numerical values assigned. Ratio differs from interval in that it has a natural zero point; two examples are weight as a ratio scale and temperature as an interval scale.

Data Collection – Surveys can be conducted by questionnaire or by interview, with structured, semi-structured or unstructured questions. Questionnaires can be administered by postal mail, telephone, e-mail, or through a website. Interviews can be conducted in person or by phone, individually or in groups. Pretesting or pilot testing of the instrument should be done with a small number of individuals to ensure its content, flow and instructions are clear, consistent, appropriate and easy to follow. Usually one or more follow-up reminders are sent to increase the response rate (Engel et al., 2014).

Analytical Method – Survey responses are analyzed in different ways depending on the type of data collected. For textual data, qualitative analyses such as content or thematic analysis can be used. Content analysis focuses on classifying words and phrases within the texts into categories based on some initial coding scheme and frequency counts (Engel et al., 2014). Thematic analysis focuses on identifying concepts, relationships and patterns from texts as themes. For numeric data, quantitative analyses such as descriptive and inferential statistics can be used. Descriptive statistics involve the use of such measures as mean, range, standard deviation and frequency to summarize the distribution of numeric data. Inferential statistics involve the use of a random sample of data from the study population to make inferences about that population; the inferences are made with parametric and nonparametric tests and multivariate methods (Nardi, 2018). A short numeric example follows this list.
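The scale types and analytical methods above can be illustrated with a small sketch. The example below, which assumes the pandas and SciPy libraries are available and uses entirely invented data, encodes a nominal, an ordinal and a ratio variable, summarizes them with descriptive statistics, and runs one parametric inferential test.

```python
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    # Nominal scale: sex coded as unordered labels.
    "sex": ["F", "M", "F", "F", "M", "M", "F", "M"],
    # Ordinal scale: 4-point satisfaction stored as an ordered categorical.
    "satisfaction": pd.Categorical([3, 4, 2, 4, 1, 3, 4, 2],
                                   categories=[1, 2, 3, 4], ordered=True),
    # Ratio scale: hours of weekly system use (has a natural zero point).
    "hours_use": [5.0, 9.5, 2.0, 8.0, 1.5, 6.0, 10.0, 3.0],
})

# Descriptive statistics: frequencies for the ordinal item, summary for the ratio item.
print(df["satisfaction"].value_counts().sort_index())
print(df["hours_use"].agg(["mean", "std", "min", "max"]))

# Inferential statistics: do female and male respondents differ in hours of use?
female = df.loc[df["sex"] == "F", "hours_use"]
male = df.loc[df["sex"] == "M", "hours_use"]
result = stats.ttest_ind(female, male)  # parametric two-sample t-test
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```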
Types of Survey Methods
1) Personal Interviews: In a personal interview, also known as a survey interview, a direct
conversation between the interviewer and the respondent is held to elicit some information
from the latter: the interviewer here neither tries to educate the respondent nor help him
(Singh, 2015). In the face-to-face interview it is possible to record more than the verbal
responses of the interviewee, which are often superficial. Types of personal interview include
(Marginson, 2004):
 Unstructured informal interview
The unstructured informal interview is normally conducted as a preliminary step in the
research process to generate ideas/hypotheses about the subject being investigated so
that these might be tested later in the survey proper. Such interviews are entirely
informal and are not controlled by a specific set of detailed questions. Rather the
interviewer is guided by a pre-defined list of issues. These interviews amount to an
informal conversation about the subject. Informal interviewing is not concerned with
discovering 'how many' respondents think in a particular way on an issue (this is what
the final survey itself will discover). The aim is to find out how people think and how
they react to issues, so that the ultimate survey questionnaire can be framed along the
lines of thought that will be most natural to respondents.
The respondent is encouraged to talk freely about the subject, but is kept to the point on
issues of interest to the researcher. The respondent is encouraged to reveal everything
that he/she feels and thinks about these points. The interviewer must note (or tape-record) all remarks that may be relevant and pursue them until he/she is satisfied that
there is no more to be gained by further probing. Properly conducted, informal
interviews can give the researcher an accurate feel for the subject to be surveyed. Focus groups also make use of relatively unstructured interviews.

Structured standardised interview
With structured standardised interviews, the format is entirely different. A structured
interview follows a specific questionnaire and this research instrument is usually used as
the basis for most quantitative surveys. A standardised structured questionnaire is
administered where specific questions are asked in a set order and in a set manner to
ensure no variation between interviews.
Respondents' answers are recorded on a questionnaire form (usually with pre-specified
response formats) during the interview process, and the completed questionnaires are
most often analysed quantitatively. The structured interview usually denies the
interviewer the opportunity to either add or remove questions, change their sequence
or alter the wording of questions.

Some respondent biases that can influence personal interviews include:
Faulty memory: Some respondents may answer a question incorrectly simply because
they have a poor memory. The key to avoiding this problem is to steer clear of questions
requiring feats of memory. For example, questions such as, "Can you tell me what your
crop yield was four years ago?" should be avoided. Other aspects of faulty memory are telescoping and creation (Groves, 1979).

Exaggeration and dishonesty: There can be a tendency on the part of some respondents
to exaggerate claims about their conditions and problems if they think it will further their
cause and lead to improvement in their well-being. The interviewer must be alert to, and
note any, inconsistencies arising. This is best achieved by checking key pieces of
information with a variety of sources (Yeung, 1995).

Failure to answer questions correctly: If rapport is not developed sufficiently, the
respondent may be unwilling to respond or fail to give sufficient attention or
consideration to the questions asked, and if the respondent does not understand a
question properly he may give inappropriate answers. The interviewer needs to ensure
that the respondent fully understands the questions being asked and is responding in the
appropriate context (Quintero et al, 2009).

Misunderstanding purpose of interview: Some respondents may perceive the purpose of
the survey to be a long-winded form of 'selling', particularly if the interviewer is asking
them what they think about a new product. Their comments, therefore, about such
issues as 'propensity to purchase' need to be looked at within a context where they may
be expecting to have to buy the product at some stage and are trying to strike a hard
bargain. To avoid such problems arising it is important to carefully explain the objectives
of the survey, the identity of the interviewer and sponsor, and what is required of the
respondent, prior to the interview proper (Yeung, 1995).

Courtesy bias: In interview situations it is quite possible that one will come across the
problem of courtesy bias, i.e. the tendency for respondents to give answers that they
think the interviewer wants to hear, rather than what they really feel. The respondents
may not wish to be impolite or to offend the interviewer, and may therefore endeavour
to give 'polite' answers. Courtesy bias can be an obstacle to obtaining useful and reliable
data and therefore needs to be minimised. Generally, however, the creation of a good
interview environment and an appropriate relationship between the interviewer and the
respondent can help avoid too much courtesy bias arising (Liedtka, 1992).

Some interviewer-specific biases that can influence personal interviews include:
Desire to help the respondent: The interviewer may become too sympathetic to the
problems and conditions of the respondent, and this can affect the conduct of, and
results obtained from, the interview. Objectivity must be retained at all times (Quintero
et al, 2009).

Failure to follow instructions in administering the questions: It is often tempting for the
interviewer to change the wording of a question or introduce inflections in questions.
This can affect the respondent's understanding and can bias his/her replies. Particular
problems may arise if the respondent does not understand the question as stated and
the interviewer tries to simplify the question. The altered wording may constitute a
different question. When questions are open-ended, this can involve the interviewer in
formulating probing questions that go beyond the printed words. Unless the probes
follow instructions faithfully the potential for bias is great (Groves, 1979).

Reactions to responses: When respondents give answers, the interviewer must be careful
not to 'react'. A note of 'surprise' or 'disbelief' may easily bias the respondent's
subsequent answers. Interviewers must respond with a uniform polite interest only
(Liedtka, 1992).
2) Telephone Interviews: Hopper (1992) describes the telephone as "the primary electronic medium for interpersonal communication," one that creates "a new consciousness about spoken language." Whilst telephone interviews among consumers are very common in the developed world, they are conducted with far less frequency in the developing world. The reason is somewhat obvious: only a relatively small proportion of the total population has a telephone in the house. Moreover, telephone owners tend to be urban dwellers with above-average incomes and are therefore unrepresentative of the population as a whole (Colombotos, 1969).
Telephone interviewing has greater potential in surveys of businesses, government agencies and other organisations or institutions. Even then, it is still the case that telephone surveys are rarely without bias (Katz, 2002). Whilst it is true that many businesses have a telephone, small businesses and even medium-sized enterprises are far less likely to have access to telephones.
Telephone interviews afford a certain amount of flexibility. It is possible, for example, for
interviewers to put complex questions over the telephone. The interviewers can probe, skip
questions that prove irrelevant to the case of a particular respondent and change the sequence
of questions in response to the flow of the discussion, and earlier replies can be revisited. The
interaction between interviewer and interviewee that is possible over the telephone simply is
not achievable through a mailed questionnaire. In comparison to personal interviews, telephone
interviews do not appear to enjoy any margin of advantage. Perhaps the only advantages are
those of speed and cost. Even then, manpower costs in developing countries tend to be very low
and so only speed remains as a potential advantage over personal interviews (Carr & Worth,
2001).
3) Mail Questionnaire: A mailed questionnaire is a research tool in which the respondent sends his or her answers to the researcher through the mail. It is administered by mail to designated respondents under an accompanying cover letter and is returned, by mail, by the respondent to the research organization (Singh, 2015). The questionnaire is typically sent in a packet that contains a cover sheet introducing the research being conducted, and a pre-paid return envelope for the responses. While the response rate is typically lower than for other forms of questionnaires, it can be improved with reminders and incentives (Dillman, 1972). There are examples of government-run, compulsory mail-out questionnaires, such as the Australian Census, which collect valuable population data.
Some advantages of mail questionnaires include (Patten, 2016): 1) the sample is not limited by access to technology such as phones or the internet, 2) participants are able to think about their responses, 3) the absence of an interviewer helps control for interviewer effects on participants' responses, and 4) cost is lower than for telephone questionnaires.
Some disadvantages of mail questionnaires include (Patten, 2016): 1) they can have a low response rate if people view the questionnaire as junk mail, 2) questions cannot be probed or explained, 3) participants may return incomplete surveys, and 4) there is a possibility of self-selection bias.
4) Panel Survey: In empirical social research a survey is designated as a panel when the same
survey units are contacted at different times, measuring the same characteristics. This makes it
possible to capture changes as a function of time or underlying factors. It is distinguishable from
repeated cross section surveys, which follow the same design, but use independent samples
(Zumkeller et al, 2006).
The Multi-day Survey ('Short Panel'): The continuous form of observation is often defined as a short panel. Examples are the measurement of behaviour over one week, a month, or even longer. These approaches are used for longitudinal surveys, i.e., multi-day data over one week (Sikkel & Hoogendoorn, 2012).
Why are Panel Surveys used?
 Longitudinal analysis
Panel surveys allow researchers to track changes and trends over time within a specific group of
individuals or households. By collecting data at multiple points in time, researchers can analyze
individual-level dynamics, study patterns of stability and change, and observe the impact of
various factors on the target population.
 Causal inference
This enables researchers to examine causal relationships by observing changes in both the
independent variables (such as policies, interventions, or events) and the dependent variables
(such as behaviors, attitudes, or outcomes) over time. This longitudinal approach helps identify
the effects' directionality and control for confounding factors more effectively than cross-sectional studies.
 Individual heterogeneity
It provides insights into individual differences and variations in a population’s behavior, attitudes,
or outcomes. By following the same individuals over time, researchers can investigate why some
individuals experience different outcomes or respond differently to interventions or policies,
allowing for a more nuanced understanding of complex social phenomena.
 Event analysis
This allows researchers to examine the effects of specific events or policy changes on individuals
or households. By collecting data before and after an event or policy implementation,
researchers can assess its impact, explore how individuals adapt or respond to changes, and
study the short-term and long-term consequences.
 Policy evaluation
These are useful for evaluating the effectiveness of policies, programs, or interventions over
time. By measuring outcomes and tracking changes within a panel of participants, researchers
can assess whether a policy achieves its intended goals, identify areas for improvement, and
provide evidence for decision-making and policy refinement.
 Data richness
Panel surveys repeatedly collect detailed information from the same individuals or
households. This allows researchers to study complex phenomena, explore the interplay of
various factors, and delve into the mechanisms underlying behavior or outcomes. The
longitudinal nature of panel surveys also enables the collection of retrospective data, which can
provide insights into past experiences or events that may influence present behaviors or
outcomes.
Advantages of using Panel Surveys (Loosveldt & Sonck, 2008):
 Panel surveys are well-suited for studying changes that occur due to seasonal or cyclical
factors. Researchers can observe how participants’ behaviors or opinions evolve over
different periods.
 It enables researchers to study specific subgroups within the sample, as they can identify
and track individuals with specific characteristics or experiences, yielding more granular
insights.
While setting up a panel can be resource-intensive initially, it becomes cost-effective in the long run because data collection focuses on maintaining the existing panel rather than recruiting new participants for each wave.
 This can be adapted to various research project objectives, including studying economic
behavior, health outcomes, social dynamics, and consumer preferences. The ability to
revisit the same participants enables researchers to explore different aspects of their
lives.
Disadvantages of Panel Surveys

The main problem of panel surveys for data users is attrition, that is, units dropping out of the survey not by design. Attrition (or a low so-called retention rate) reduces sample size and has negative effects on statistical power. If attrition is selective, it additionally causes biased estimates of changes in outcomes (Voorpostel & Lipps, 2011). A small retention-rate sketch follows this list.

Panel conditioning (Warren & Halpern-Manners, 2012) describes the phenomenon that answers to survey questions may be influenced by being a member of the panel. Such effects may result in phenomena such as declining frequencies of "don't know" responses or instances of learning how to answer difficult questions. Panel conditioning may also result in strategic response behavior that does not reveal true information, such as respondents' attempts to avoid answering filtered follow-up questions or deliberate misreporting.

Panel data help in assessing issues like selection into treatment or questions of causal ordering. When it comes to analyzing reverse causality, the correct specification of temporal lags remains an important challenge in panel studies. This can be explained by the fact that the time intervals between panel waves and the manifestations of true effects are in general not aligned (Kitamura, 1989).
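To make attrition and retention concrete, the minimal Python sketch below computes wave-to-wave retention and cumulative attrition from hypothetical participation counts in a four-wave panel; the figures are invented.

```python
# Hypothetical number of responding panel members in each of four waves.
wave_counts = [1200, 1020, 910, 845]

for i in range(1, len(wave_counts)):
    retention = wave_counts[i] / wave_counts[i - 1]   # wave-to-wave retention rate
    cumulative = wave_counts[i] / wave_counts[0]      # share of wave 1 still in the panel
    print(f"wave {i + 1}: retention {retention:.1%}, "
          f"cumulative retention {cumulative:.1%}, attrition so far {1 - cumulative:.1%}")
```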
Non-response bias
Non-response bias refers to the mistake one expects to make in estimating a population
characteristic based on a sample of survey data in which, due to non-response, certain types of
survey respondents are under-represented (Berg, 2005).
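The toy calculation below shows how such under-representation biases an estimate: two population strata differ both in the characteristic being measured and in their willingness to respond, so the mean computed from respondents alone drifts away from the true population mean. All shares, means and response rates are invented for illustration.

```python
# Hypothetical strata: employed (60% of the population) and unemployed (40%),
# with different mean satisfaction scores and different response rates.
strata = [
    {"share": 0.60, "mean": 3.4, "response_rate": 0.70},  # employed
    {"share": 0.40, "mean": 2.1, "response_rate": 0.30},  # unemployed
]

true_mean = sum(s["share"] * s["mean"] for s in strata)

# Respondents over-represent the stratum that is more willing to answer.
responding_share = sum(s["share"] * s["response_rate"] for s in strata)
observed_mean = (sum(s["share"] * s["response_rate"] * s["mean"] for s in strata)
                 / responding_share)

print(f"true population mean:   {true_mean:.2f}")      # 2.88
print(f"mean among respondents: {observed_mean:.2f}")  # 3.11, biased upward
print(f"non-response bias:      {observed_mean - true_mean:+.2f}")
```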
(A) Common causes of nonresponse bias (Hansen & Hurwitz, 1946)
In addition to requests for sensitive information and invitation issues, there are several other
causes of nonresponse bias, including poor survey design, wrong target audience, refusals, failed
delivery, and accidental omission.
1. Poor survey design: How long will your survey take to complete? Research shows that abandonment rates increase for surveys that take more than 7-8 minutes to complete, and completion rates can drop by 5-20%. Make sure your survey is short and easy to understand to reduce the risk of nonresponse bias.
2. Wrong target audience: Before you send out your survey, ensure you’re using the right
target audience. For example, a survey about working hours and wages sent to students
and unemployed individuals will have fewer responses than if it is sent to employed
people.
3. Refusals: Some respondents will just say “no” to completing a survey. It could be a bad
day or time for them, or they may just not want to do it. Remember, just because they
said “no” today doesn’t mean they won’t take one of your surveys another time.
4. Failed delivery: It’s unfortunate that some surveys end up going directly into a spam
folder. You might not even know that your survey wasn’t received, and it will just be
recorded as a nonresponse. Before you send your survey out, we suggest you track
respondents to know if your email was opened, how many clicked through to your
survey, and who responded to your survey.
5. Accidental omission: On occasion, someone will simply forget to complete your survey.
It’s challenging to prevent this from happening, and hopefully, this is only a small
number of your nonresponses.
(B) The following ways have been recommended for reducing nonresponse in telephone and
face-to-face surveys:

Advance letter. A short letter is sent in advance to inform the sampled respondents about
the upcoming survey. The style of the letter should be personalized but not overdone. First, it
announces that a phone call will be made, or an interviewer wants to make an appointment
to do the survey face-to-face. Second, the research topic will be described. Last, it allows
both an expression of the surveyor's appreciation of cooperation and an opening to ask
questions on the survey (Lynn, 2008).

Training. The interviewers are thoroughly trained in how to ask respondents questions, how to work with computers, and how to schedule callbacks to respondents who were not reached (Chen & Haziza, 2019).

Short introduction. The interviewer should always start with a short introduction about himself or herself: his/her name, the institute he/she is working for, the length of the interview and the goal of the interview. It can also be useful to make clear that nothing is being sold, as this has been shown to lead to a slightly higher response rate (Berinsky, 2008).

Respondent-friendly survey questionnaire. The questions asked must be clear, non-offensive and easy for the subjects under study to respond to (Kreuter et al., 2010).
REFERENCES:
1. Akinade, E. A., & Owolabi, T. (2009). Research methods: A pragmatic approach for social sciences, behavioural sciences and education. Lagos: Connel Publications.
2. Avwokeni, J. A. (2006). Research Methods: Process, Evaluation and Critique. Port Harcourt: Unicampus Tutorial Services.
3. Berinsky, A. J. (2008). Survey non-response. The SAGE handbook of public opinion research,
309-321.
4. Bethlehem, J. (2009). Applied survey methods: A statistical perspective. John Wiley & Sons.
5. Carr, E. C., & Worth, A. (2001). The use of the telephone interview for research. NT
research, 6(1), 511-524.
6. Chen, S., & Haziza, D. (2019). Recent developments in dealing with item non-response in
surveys: A critical review. International Statistical Review, 87, S192-S218.
7. Christopher, I. C., & Udoh, E. (2020). The Determinants of Survey Design in a Research
Undertaking. International Journal of Recent Research in Commerce Economics and
Management (IJRRCEM). 7(1), 29-37.
8. Colombotos, J. (1969). Personal versus telephone interviews: effect on responses. Public
Health Reports, 84(9), 773.
9. Dillman, D.A., 1972. Increasing mail questionnaire response in large samples of the general
public. The Public Opinion Quarterly, 36(2), pp.254-257.
10. Engel, U., Jann, B., Lynn, P., Scherpenzeel, A., & Sturgis, P. (2014). Improving survey methods:
Lessons from recent research.
11. Groves, R. M. (1979). Actors and questions in telephone and personal interview
surveys. Public Opinion Quarterly, 43(2), 190-205.
12. Hansen, M. H., & Hurwitz, W. N. (1946). The problem of non-response in sample
surveys. Journal of the American Statistical Association, 41(236), 517-529.
13. Hopper, R. (1992). Telephone conversation. Indiana University Press.
14. Johnson, T. P. (2014). Handbook of health survey methods (Vol. 565). John Wiley & Sons.
15. Katz, J. E. (2002). Machines that become us: The social context of personal communication
technology. Transaction Publishers.
16. Kitamura, R. (1989). Advantages and Disadvantages of a Panel Survey.
17. Kraemer, H. C. (1991). Without increasing sample size. Psychopharmacology bulletin, 27(3-4),
217.
18. Kraemer, K. L. (Ed.). (1991). The information systems research challenge (vol. III) survey
research methods. Harvard University Graduate School of Business Administration.
19. Kreuter, F., Olson, K., Wagner, J., Yan, T., Ezzati-Rice, T. M., Casas-Cordero, C., & Raghunathan,
T. E. (2010). Using proxy measures and other correlates of survey outcomes to adjust for
non-response: examples from multiple surveys. Journal of the Royal Statistical Society Series
A: Statistics in Society, 173(2), 389-407.
20. Lau, F. (2017). Methods for survey studies. Handbook of eHealth Evaluation: An Evidence-Based Approach [Internet]. University of Victoria.
21. Layman E. J., Watzlaf V.J. (2009). Health informatics research methods: Principles and
practice. American Health Information Management Association.
22. Liedtka, J. M. (1992). Exploring ethical issues using personal interviews. Business Ethics
Quarterly, 161-181.
23. Loosveldt, G., & Sonck, N. (2008). An evaluation of the weighting procedures for an online
access panel survey. In Survey Research Methods, 2(2), 93-105.
24. Lynn, P. (2008). The problem of non-response, chapter 3, 35-55. International Handbook of
Survey Research Methods.
25. Marginson, D. E. (2004). The case study, the interview and the issues: a personal
reflection. The real-life guide to accounting research (pp. 325-337). Elsevier.
26. Nardi, P. M. (2018). Doing survey research: A guide to quantitative methods. Routledge.
27. Patten, M. (2016). Questionnaire research: A practical guide. Routledge.
28. Quintero, A. J., Segal, L. S., King, T. S., & Black, K. P. (2009). The personal interview: assessing
the potential for personality similarity to bias the selection of orthopaedic
residents. Academic Medicine, 84(10), 1364-1372.
29. Sikkel, D., & Hoogendoorn, A. (2012). Panel surveys. In International handbook of survey
methodology, 479-499. Routledge.
30. Singh, A. K. (2015). Tests, measurements and research methods in behavioural sciences.
Bharati Bhawan.
31. Voorpostel, M., & Lipps, O. (2011). Attrition in the Swiss Household Panel: Is change
associated with later drop-out?. Journal of Official Statistics, 27(2), 301-318.
32. Warren, J. R., & Halpern-Manners, A. (2012). Panel conditioning in longitudinal social science
surveys. Sociological Methods & Research, 41(4), 491-534.
33. Yeung, H. W. C. (1995). Qualitative personal interviews in international business research:
some lessons from a study of Hong Kong transnational corporations. International Business
Review, 4(3), 313-339.
34. Zumkeller, D., Madre, J. L., Chlond, B., & Armoogum, J. (2006). Panel surveys. Travel Survey
Methods, 363-398. Emerald Group Publishing Limited.