Health Informatics Research Methods: Principles and Practice, 2e. Valerie Watzlaf, Elizabeth Forrestal

Health Informatics Research Methods
Principles and Practice
Second Edition
Volume Editors
Valerie J. Watzlaf, PhD, MPH, RHIA, FAHIMA
Elizabeth J. Forrestal, PhD, RHIA, CCS, FAHIMA
1
Research Frame and Designs
Elizabeth J. Forrestal, PhD, RHIA, CCS, FAHIMA
Learning Objectives
Use and explain the terms research, research frame, theory, model, and research methodology.
Designate the appropriate placement of a research project on the continuum of research from
basic to applied.
Differentiate among research designs.
Provide appropriate rationales that support the selection of a research design.
Use key terms associated with research frames and designs appropriately.
Key Terms
Applied research
Artifact
Basic research
Case study
Causal-comparative research
Causal relationship
Comparative effectiveness research (CER)
Confounding (extraneous, secondary) variable
Context
Control group
Correlational research
Cross-sectional
Deductive reasoning
Dependent variable
Descriptive research
Empiricism
Ethnography
Evaluation research
Experimental (study) group
Experimental research
Generalizability
Health informatics research
Health information management (HIM) research
Health services research
Health technology assessment (HTA)
Historical research
Independent variable
Inductive reasoning
Longitudinal
Mixed-methods research
Model
Naturalistic observation
Negative (inverse) linear relationship (association)
Nonparticipant observation
Observational research
Parsimony
Participant observation
Positive (direct) linear relationship (association)
Positivism
Primary source
Prospective
Qualitative approach
Quantitative approach
Quasi-experimental research
Random sampling
Randomization
Randomized controlled trial (RCT)
Research
Research design
Research frame
Research method
Research methodology
Retrospective
Rich data
Rigor
Scientific inquiry
Secondary source
Simulation observation
Theory
Translational research
Triangulation
Usability testing
Variable
Research is a systematic process of inquiry aimed at discovering or creating new knowledge about
a topic, confirming or evaluating existing knowledge, or revising outdated knowledge. This chapter
explains the purpose of research and defines terms associated with it, such as research frame, theory,
model, and scientific inquiry, and describes several research designs that are used in health informatics
and health information management (HIM). Examples of these research designs being used by health
informatics and HIM researchers are provided throughout the chapter.
Research answers questions and provides solutions to everyday problems. It also provides clear,
step-by-step processes that result in a comprehensive approach to questions and problems. These
processes allow people to collect reliable and accurate facts they can analyze and interpret. Research
information is relevant to health professionals and others because research provides evidence they can
use not only in fulfilling their responsibilities but also in conducting operations and improving practice.
The analysis and interpretation of these facts become valuable information that can be used to draft policies,
respond to administrative and legislative queries, and make decisions. The following real-world case
illustrates how healthcare leaders can use information from research to create contingency plans and
estimate risk.
Real-World Case
According to analysts at the Health Research Institute of PricewaterhouseCoopers (PWC),
nearly 40 percent of consumers “would abandon or hesitate using a health organization if it is
hacked” (PWC 2015, 1). The analysts obtained this information through research. In an online
survey, 1,000 US adults provided their perspectives of the healthcare environment and their
preferences related to the use of healthcare services. These adults represent a cross-section of the
US population in terms of their insurance status, age, gender, income, and geography. Moreover,
more than 50 percent of the respondents would avoid or be wary of using Internet-connected
healthcare devices, such as pacemakers and drug infusion pumps, if a security breach were
reported. Healthcare leaders can factor this information into the cost projections for breaches and
cyber attacks of information systems as they create contingency plans and estimate risk.
What Are Health Informatics Research and HIM Research?
Health informatics research is the investigation of the process, application, and impact of
computer science, information systems, and communication technologies in health services. Health
information management (HIM) research involves investigations into the practice of acquiring,
analyzing, storing, disclosing, retaining, and protecting information vital to the delivery, provision, and
management of health services. HIM research has a narrower scope than health informatics research.
Both health informatics research and HIM research are at the intersection of research from several
disciplines, including medicine, computer science, information systems, biostatistics, and business, to
name just a few. Consequently, researchers and practitioners have conducted research in multiple
ways, which reflect the investigators' range of experiences. Because health informatics and HIM
researchers ask research questions covering a wide range of topics, their research projects are
stimulating, dynamic, and varied.
Health informatics research and HIM research are often influenced by current events, new
technologies, and scientific advancements. Recent research studies include how activity trackers and
mobile phone apps can improve users' health. For example, researchers at Harvard University are
using a smartphone app to collect data to assess the health and well-being of former professional
football players (Harvard University 2016). While adults of all ages, genders, and cultures may
participate in the research study, the research focuses on the everyday experiences of former
professional football players—their memory, balance, heart health, pain, and mobility.
The sections that follow address the purposes of health informatics and HIM research, research
frames, and scientific inquiry.
Purposes of Health Informatics Research and HIM Research
The purposes of health informatics research and HIM research are directly related to the definition of
research—creating knowledge, confirming and evaluating existing knowledge, and revising outdated
knowledge. Thus, the purposes of health informatics research and HIM research are as follows:
To formulate theories and principles of health informatics and HIM
To test existing theories, models, and assumptions about the principles of health informatics and
HIM
To build a set of theories about what works, when, how, and for whom
To advance practice by contributing evidence that decision makers can use
To train future practitioners and researchers
To develop tools and methods for the process of health informatics research and HIM research
(Wyatt 2010, 436)
Generally, the overarching purpose of health informatics research is to determine whether the
application of health information technologies and the assistance of health informaticians have helped
users improve health (Friedman 2013, 225). Similarly, the overarching purpose of HIM research is to
determine whether the health information has the integrity and quality necessary to support its clinical,
financial, and legal uses (AHIMA 2016).
Research Frame
A field's body of knowledge is built on research, and research is conducted within research frames.
A research frame, or research paradigm, is the overarching structure of a research project. A research
frame comprises the theory or theories underpinning the study, the models illustrating the factors and
relationships of the study, the assumptions of the field and the researcher, the methods, and the
analytical tools. The research frame is a view of reality for the researcher and his or her discipline. Each
field has its own theories, models, assumptions, methods, and analytic tools. Fields also have preferred
means of disseminating knowledge; some fields prefer books, whereas others prefer journal articles.
Theories and Models
Many theories and models are potentially applicable to health informatics research and HIM
research. These theories and models come not only from healthcare but also from computer science,
business, and many other fields. Researchers select the theory or model that best suits their purpose
and addresses their question or problem. Table 1.1 lists many of these theories and models along with
representative examples of related publications by leading theorists or developers.
Table 1.1 Selected theories and models used in health informatics and HIM research

Adult learning theories (e.g., experiential learning theories)
    Rogers, C.R. 1969. Freedom to Learn. Columbus, OH: Merrill Publishing.

AHIMA data quality management model
    Davoudi, S., J.A. Dooling, B. Glondys, T.D. Jones, L. Kadlec, S.M. Overgaard, K. Ruben, and A. Wendicke. 2015. Data quality management model (2015 update). Journal of AHIMA 86(10):62–65.

Change theories
    Lewin, K. 1951. Field Theory in Social Science. New York: Harper and Brothers Publishers.

Cybernetics theory
    Wiener, N. 1948. Cybernetics: Or Control and Communication in the Animal and the Machine. New York: John Wiley.

Diffusion of innovations theory
    Rogers, E.M. 2003. Diffusion of Innovations, 5th ed. New York: Free Press. (1st ed. 1962)

Dominant design, a dynamic model of process and product development (A-U model)
    Abernathy, W.J. and J.M. Utterback. 1978. Patterns of industrial innovation. Technology Review 80(7):40–47.
    Utterback, J.M. 1996. Mastering the Dynamics of Innovation, 2nd ed. Boston: Harvard Business School Press.

Fuzzy set theory
    Zadeh, L.A. 1965. Fuzzy sets. Information and Control 8(3):338–353.

General systems theory (GST; evolved into open systems theory and closed systems theory)
    Von Bertalanffy, L. 1950. An outline of general system theory. British Journal for the Philosophy of Science 1(2):134–165.

Information behavior theories
    Wilson, T.D. 1999. Models in information behavior research. Journal of Documentation 55(3):249–270.

Information processing and cognitive learning theories (e.g., chunking)
    Miller, G.A. 1956. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review 63(2):81–97.
    Sweller, J. 1988. Cognitive load during problem solving: Effects on learning. Cognitive Science 12(2):257–285.

Information systems success (D&M IS success) model
    DeLone, W.H. and E.R. McLean. 1992. Information systems success: The quest for the dependent variable. Information Systems Research 3(1):60–95.

Knowledge engineering theories
    Gruber, T.R. 1993. A translation approach to portable ontology specifications. Knowledge Acquisition 5(2):199–221.
    Newell, A. 1982. The knowledge level. Artificial Intelligence 18(1):87–127.

Learning styles theories
    Kolb, D.A. 1984. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall.

Open systems theory
    See General systems theory.

Rough set theory
    Pawlak, Z. 1982. Rough sets. International Journal of Computer and Information Sciences 11(2):341–356.

Seven-stage model of action
    Norman, D.A. and S.W. Draper. 1986. User Centered System Design: New Perspectives on Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

Social learning theories
    Bandura, A. 1982. Self-efficacy mechanism in human agency. American Psychologist 37(2):122–147.

Sociotechnical theories (e.g., sociotechnical systems [STS] and the sociotechnical model)
    Cherns, A. 1987 (March). Principles of sociotechnical design revisited. Human Relations 40(3):153–161.
    Sittig, D.F. and H. Singh. 2010. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Quality and Safety in Health Care 19(Suppl 3):i68–i74.

Swiss cheese model
    Reason, J. 2000. Human error: Models and management. BMJ 320(7237):768–770.

System of systems (SoS) theory (e.g., chaos theory and complex systems theory)
    Jackson, M.C. and P. Keys. 1984. Towards a system of systems methodologies. Journal of the Operational Research Society 35(6):473–486.

Systems development life cycle (SDLC) model
    Benington, H.D. 1983 (reprint of 1956). Production of large computer programs. Annals of the History of Computing (IEEE) 5(4):350–361.

Technology acceptance model (TAM)
    Davis, F.D., R.P. Bagozzi, and P.R. Warshaw. 1992. Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology 22(14):1111–1132.

User acceptance theories (e.g., unified theory of acceptance and use of technology [UTAUT])
    Thompson, R.L., C.A. Higgins, and J.M. Howell. 1994. Influence of experience on personal computer utilization: Testing a conceptual model. Journal of Management Information Systems 11(1):167–187.
    Venkatesh, V., M.G. Morris, G.B. Davis, and F.D. Davis. 2003. User acceptance of information technology: Toward a unified view. MIS Quarterly 27(3):425–478.

Source: Adapted from Nelson and Staggers 2014, Venkatesh et al. 2003, Dillon and Morris 1996, and Gorod et al. 2008.
A theory is the systematic organization of knowledge that explains or predicts phenomena, such as
behavior or events, “by interrelating concepts in a logical, testable way” (Karnick 2013, 29). Theories
provide definitions, relationships, and boundaries. For example, the theory of diffusion of innovations is
commonly used in studies related to health information technology (HIT). The theory explains how new
ideas and products—innovations—spread, and it includes definitions of innovation and communication
and elements (concepts) of the process of diffusion (Rogers 2003, xvii–xviii, 11). Using the theory of
diffusion of innovations, health informatics researchers investigated what key strategic leaders knew
about various information technology (IT) innovations and how those innovations were implemented.
The researchers found that the strategic leaders—that is, chief information officers (CIOs) and directors
of nursing—significantly disagreed on the number of IT functions available in their hospital and on the
implementation status of several functions (Liebe et al. 2016, 8). The researchers concluded that
leaders' agreement can initiate adoption, but disagreements among leaders could be a barrier to
successful IT adoption (Liebe et al. 2016, 3).
Using theories to examine phenomena and complex relationships optimally and systematically
advances scientific knowledge (Fox et al. 2015, 71; Shapira 2011, 1312). Researchers begin with
informed predictions or raw theories of what they believe will happen. As they collect observations and
data, they refine their theories. Researchers strive for parsimony or elegance in their theories.
Parsimony means that explanations of phenomena should include the fewest
assumptions, conditions, and extraneous complications. The best theories simplify the situation,
explain the most facts in the broadest range of circumstances, and most accurately predict behavior
(Singleton and Straits 2010, 25).
A model is an idealized representation that abstracts and simplifies a real-world situation so the
situation can be studied, analyzed, or both (Gass and Fu 2013, 982). Models visually depict theories by
using objects, graphic representations, or smaller-scaled versions of the situation being studied. A
model includes all known properties of a theory. Health informatics and HIM researchers often select
models associated with sociotechnical theories and user acceptance theories, such as Sittig and
Singh's sociotechnical model (2010) and the technology acceptance model (TAM) (Davis et al. 1992).
Readers may also encounter other models applicable to health informatics and HIM research, such as
the seven-stage model of action (Norman and Draper 1986), the Swiss cheese model (Reason 2000),
and DeLone and McLean's information systems (IS) success model (2003).
Sittig and Singh's sociotechnical model, shown in figure 1.1, presents the dimensions (factors)
critical to the success of HIT implementations in adaptive, complex environments (2010, 3–8). This
model includes dimensions from social systems (the “socio” part of “sociotechnical”), such as workflow
and communication, and technical systems, such as hardware and software infrastructure. In the
sociotechnical perspective, both systems are important and complementary (Whetton and Georgiou
2010, 222). The comprehensive model illustrates eight dimensions:
Hardware and software computing infrastructure
Clinical content
Human-computer interface
People
Workflow and communication
Internal organizational policies, procedures, and culture
External rules, regulations, and pressures
System measurement and monitoring (Sittig and Singh 2010)
The theorists specifically emphasize that the dimensions are not independent, sequential,
hierarchical steps; instead, the dimensions are interactive and interrelated.
Figure 1.1 Illustration of the complex interrelationships between the dimensions of Sittig and Singh's
sociotechnical model
Source: Sittig and Singh 2010, p. i69. Reprinted with permission.
Sittig and Singh's model has been used to analyze a large health system's investigative reports of
safety incidents related to electronic health records (EHRs) (Meeks et al. 2014, 1053). The health
informatics researchers' analysis identified emerging and commonly recurring safety issues related to
EHRs. Another set of health informatics researchers used the model to describe the environment in the
emergency department so that a pediatric clinical decision support system would be designed with
appropriate decision rules for children with minor blunt head traumas (Sheehan et al. 2013, 905).
Research Methodology
Research methodology is the study and analysis of research methods and theories. A research
method is a set of specific procedures used to gather and analyze data. Research methodologists
tackle questions such as “What is research?” or “Which method of data collection results in the greatest
unbiased response rate?” For example, researchers evaluated blogging as a way to collect data from
young adults aged 11 to 19 who had juvenile rheumatoid arthritis (a chronic autoimmune disorder)
(Prescott et al. 2015, 1). Although the data collection method was promising, the researchers concluded
that blogging probably should be combined with other collection methods.
Continuum of Basic and Applied Research
Research is often categorized as basic or applied, but these two types of research are actually the ends of a continuum, not separate entities. In practice, the distinction between basic and applied research is sometimes unclear; however, research methodologists generally differentiate them as follows:
Basic research answers the question “Why?” and focuses on the development of theories and their
refinement. Basic research is sometimes called bench science because it often occurs in
laboratories. In health informatics and HIM, basic research comprises the development and
evaluation of new methods and theories for the acquisition, storage, maintenance, retrieval, and use
of information.
Applied research answers the questions “What?”, “How?”, “When?”, or “For whom?” Most health
informatics and HIM researchers who conduct applied research focus on the implementation of
theories and models into practice. Applied research, particularly clinical applied research, is often
done in healthcare settings, such as at the bedside or in the clinic. The following are examples of
clinical applied research questions:
What systems work best to support health professionals in making decisions?
What types of HIT and which methods of HIT implementation will improve the exchange of health
data across the continuum of care?
What is the impact of accurately coded data on the financial status of healthcare organizations?
What features of HIT increase the safety of administering medications?
When does HIT reduce the costs of the delivery of health services?
How can health informatics practice improve workflows in various healthcare settings and
between settings?
How do leaders' ways of implementing health information systems affect users' satisfaction and
utilization?
What features of HIT help people improve their health and for whom do these features work best?
When should training be provided to best support health professionals' use of new features of
EHRs?
Most of the examples of research studies provided in this chapter and the rest of the book are
applied research.
The translation of the discoveries of basic science into practice in the community has been slow,
even though these discoveries have the potential to benefit individuals and populations (NCATS 2015).
To spur that translation, the federal government has supported translational research, a form of applied
research that health analysts and policymakers describe as “bench-to-bedside.” Translational
research has two aspects: applying discoveries generated during basic research to the development of
research studies with human subjects, and enhancing the sector's adoption of best practices and cost-effective strategies to prevent, diagnose, and treat health conditions (NIH 2016). For example,
translational research may take knowledge from basic science, such as a newly discovered property of
a chemical, and convert that knowledge into a practical application, such as a new drug. Generally,
translational research makes the benefits of scientific discoveries available to the practitioners in the
community and to the public.
Quantitative, Qualitative, and Mixed-Methods Approaches to Research
Research methodologists describe three overarching approaches to research: the quantitative approach, the qualitative approach, and the mixed-methods approach. The quantitative approach explains
phenomena by making predictions, collecting and analyzing evidence, testing alternative theories, and
choosing the best theory. The qualitative approach involves investigations to describe, interpret,
and understand processes, events, and relationships as perceived by individuals or groups
(Holloway and Wheeler 2010, 3). Mixed-methods research, also known as mixed research, combines
(mixes) quantitative and qualitative theoretical perspectives, methods, sampling strategies, data
collection techniques, data sets, analytic procedures, representational modes, or any combination of
these aspects of research (Sandelowski 2014, 3). The purpose of the research question determines the
approach.
In the quantitative approach, the desired end result of research is objective knowledge that has
generalizability. Generalizability means that findings can be applied to other, similar situations and people.
As the word quantitative implies, researchers using the quantitative approach collect data that can be
numerically measured and lead to statistical results. The quantitative approach is informed by the
philosophy of positivism (Ingham-Broomfield 2014, 33). Positivism, which dates back to the mid–19th
century, proposes that knowledge should be based on universal laws, objectivity, and observed facts
(Hasan 2016, 318–319; Comte 1853, 2). In health informatics and HIM research, an example of
quantitative research would be a study that calculates the percentage of patients who use a healthcare
organization's patient portal.
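The portal-use example reduces to a simple proportion. As an illustrative sketch only (the records, the `portal_user` flag, and the function name are hypothetical, not from the text), the calculation could look like this:

```python
# Hypothetical patient records: portal_user flags whether the patient
# has logged into the organization's patient portal at least once.
patients = [
    {"id": 1, "portal_user": True},
    {"id": 2, "portal_user": False},
    {"id": 3, "portal_user": True},
    {"id": 4, "portal_user": True},
]

def portal_use_percentage(records):
    """Return the percentage of patients flagged as portal users."""
    users = sum(1 for p in records if p["portal_user"])
    return 100.0 * users / len(records)

print(portal_use_percentage(patients))  # 75.0
```

The numeric, statistically summarizable result (here, 75 percent of patients) is what marks this as a quantitative measure rather than a qualitative one.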
In the qualitative approach, the desired end result is specific knowledge that is particular to the
situation. Qualitative researchers study nonnumerical observations, such as words, gestures, activities,
time, space, images, and perceptions. For example, researchers using the qualitative approach explore
reasons for people's decisions or attempt to interpret their actions. Qualitative researchers are careful to
place these observations in context, which means the specific conditions of the situation, including
time, space, emotional attitude, social situation, and culture. The researchers attempt to understand
phenomena through their subjects' perspective and in their subjects' terms. Additionally, in the
qualitative approach, research often takes place in the natural setting of the issue rather than in a
researcher-created scenario or laboratory (Abma and Stake 2014, 1150). As a result, the qualitative
approach is sometimes called the naturalistic approach (Ekeland et al. 2012, 3). In health informatics
and HIM research, an example of research using the qualitative approach would be an exploration of
the reasons why patients are uncomfortable using a healthcare organization's patient portal.
Mixed-methods research seeks to combine the strengths of the quantitative and qualitative
approaches to answering research questions. The combination of methods may occur concurrently
within a single study, or mixed methods might be applied sequentially across
chronological phases of an investigation or across a series of related studies. Reasons to conduct
mixed-methods research include corroborating the results of other studies, clarifying and expanding the
results of other studies, and resolving or explaining discrepancies in other studies.
Mixed-methods research is suited to investigations of large topics or complex phenomena, such as
in health informatics, HIM, and health-related issues. Consequently, many research methodologists
have noted the importance of using mixed-methods research in studying HIT and health information
systems (Lee and Smith 2012, 251). In health informatics and HIM research, an example of mixed-methods research would be an initial survey asking physicians to estimate the number of
minutes that they or their extenders (nurses and physician's assistants) spend responding to patients'
queries from the patient portal. In a follow-up face-to-face interview, researchers could ask the
physicians to explain how they feel about the portal's effect on the patient-physician relationship.
Scientific Inquiry
Scientific inquiry is “a way of generating knowledge” (Salazar et al. 2015, 25). In scientific inquiry,
people use diverse ways to systematically gather data about phenomena, critically analyze the data,
propose explanations based on their evidence, and develop understanding and knowledge. One
component of scientific inquiry is empiricism, the theory that true knowledge is based on observations
and direct experiences that can be perceived through the physical senses, such as eyesight or hearing
(Salazar et al. 2015, 32). Research is based on empirical data rather than other sources of knowledge,
such as authority or tradition. Scientific inquiry includes the considerations of types of reasoning and
rigor, concepts that will be discussed in the next sections.
Reasoning
In scientific inquiry, researchers use two types of reasoning, inductive and deductive, to justify their
decisions and conclusions. Inductive reasoning, or induction, involves drawing conclusions based on
a limited number of observations. Inductive reasoning is “bottom up,” meaning that it goes from the
specific to the general. Researchers who use inductive reasoning begin with observations, detect
patterns or clusters of relationships, form and explore tentative hypotheses, and generate provisional
conclusions or theories. For example, during a student's field work experience, he or she might observe
that all coding professionals in the coding department at XYZ hospital had the credential of certified
coding
specialist (CCS). From this observation, the student might conclude that all coding professionals
have the CCS credential. A potential flaw in inductive reasoning is that the observations could be
abnormal or could be limited in number, omitting some possible observations. In our example, the
student's conclusion would be incorrect if he or she did not observe that one coding professional had
the registered health information technician (RHIT) credential instead of the CCS credential. Inductive
reasoning is associated with the qualitative approach because qualitative researchers begin at the
bottom with their observations (Kisely and Kendall 2011, 364).
Deductive reasoning, or deduction, involves drawing conclusions based on generalizations, rules,
or principles. Deductive reasoning is “top down,” meaning that it goes from the general
to the specific. Researchers using deductive reasoning begin with a theory, develop hypotheses to
test the theory, observe phenomena related to the hypotheses, and validate or invalidate the theory. For
example, the same student might use the generalization that all coding professionals have the CCS
credential. Based on this assumption, the student may conclude that because Jane Doe is a coding
professional in the department, she must have the CCS credential. Similar to inductive reasoning,
deductive reasoning can also be flawed. A flaw in deductive reasoning can occur when the
generalization or rule is wrong. As we just noted, a coding professional may have the RHIT credential.
Therefore, in this example, the student's initial assumption was incorrect. Deductive reasoning is
associated with the quantitative approach because quantitative researchers test hypotheses (Wilkinson
2013, 919).
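The coding-credential example can be mimicked in a few lines of code. This is purely an illustrative sketch with made-up data (the sample, the department roster, and the function name are hypothetical); it shows how an inductive generalization drawn from a limited sample fails when checked against the full population:

```python
# Hypothetical credentials. The observed sample happens to include only
# CCS holders, but the full department also includes one RHIT holder.
observed_sample = ["CCS", "CCS", "CCS"]
full_department = ["CCS", "CCS", "CCS", "RHIT"]

def all_have_ccs(credentials):
    """The inductive generalization: every coding professional holds the CCS."""
    return all(c == "CCS" for c in credentials)

# The generalization holds for the limited sample but fails for the
# full population, exposing the flaw in the induction.
print(all_have_ccs(observed_sample))   # True
print(all_have_ccs(full_department))   # False
```

The same check run on the full roster plays the deductive role: the general rule (“all coders have the CCS”) is tested against a specific case and invalidated by the single RHIT counterexample.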
Scientific inquiries can use inductive and deductive reasoning in a cyclical process. Early,
exploratory research often takes an inductive approach. Researchers record observations to induce
(generate) empirical generalizations. These empirical generalizations, describing and explaining the
observations, are developed into theories. Researchers then use the theories to deduce (infer)
hypotheses (tentative predictions). Once researchers have generated a theory, they use the deductive
approach to test or validate the theory by comparing their predictions to empirical observations. The
cycle can start at any point in the process and continues to loop as the researchers refine the theories
(Singleton and Straits 2010, 28).
Rigor
The integrity and quality of a research study are measured by its rigor. The definition of rigor varies
for qualitative and quantitative researchers. For quantitative researchers, rigor is the “strict application of
the scientific method to ensure unbiased and well-controlled experimental design, methodology,
analysis, interpretation and report of results … and includes transparency in reporting full experimental
details so that others may reproduce and extend the findings” (NIH/AHRQ 2015). As a result, rigor
improves objectivity and minimizes bias (Eden et al. 2011, 30). For qualitative researchers, rigor is the
trustworthiness of the interpretation of the study's findings (Morse 2015, 1212; Guba and Lincoln 1989,
233). For both sets of researchers, rigor establishes the validity and reliability of the study's results and
conclusions.
Research Designs
A research design is a plan to achieve the researchers' purpose: answering a question, solving a
problem, or generating new information. The research design is the infrastructure of the study. There
are seven common research designs (see table 1.2). Each of these designs has a role in health
informatics and HIM research.
Table 1.2 Types of research designs and the application in health informatics and HIM studies

Historical. Purpose: understand past events. Selected methods: case study; biography. Example: study of the factors that led to the creation and development of clinical decision support systems in the 1960s and 1970s.

Descriptive. Purpose: describe current status. Selected methods: survey; observation. Example: survey of clinicians to determine how and to what degree they use clinical decision support systems.

Correlational. Purpose: determine existence and degree of a relationship. Selected methods: survey; data mining. Example: study to determine the relationship among individual clinicians' attributes, the health team's characteristics, the setting, and use of clinical decision support systems.

Observational. Purpose: describe and detect patterns and regularities in existing situations or natural surroundings. Selected methods: case study; ethnography; nonparticipant observation; participant observation. Example: study to observe clinicians' use of clinical decision support systems in the examination rooms in an academic health center's specialty clinic.

Evaluation. Purpose: assess efficiency, effectiveness, acceptability, or other attribute. Selected methods: survey; case study; observation; usability study; double-blind controlled trial. Example: study to evaluate the efficacy of the implementation of a clinical decision support system in an academic health center's specialty clinic.

Experimental. Purpose: establish cause and effect. Selected methods: double-blind randomized controlled trial; pretest-posttest control group method; Solomon four-group method; posttest-only control group. Example: study to evaluate the influence of a clinical decision support system on clinicians' prescribing of antibiotics for acute respiratory infections, with clinicians randomly assigned to an intervention group and a control group.

Quasi-experimental (causal-comparative). Purpose: detect causal relationship. Selected methods: one-shot case study; one-group pretest-posttest; static group comparison. Example: study to investigate the antibiotic prescribing practices for acute respiratory infections of primary care clinician teams using a clinical decision support system before and after an educational intervention on the system.

Source: Adapted from Forrestal 2016, 576.
Researchers choose a research design for a particular study. Many research topics are suited to any
one of the designs described in table 1.2, whereas other research topics are better suited to one
research design than another. Which design is appropriate depends on the study's objectives and the
researcher's statement of the problem in the problem statement, which is explained in detail in chapter
10.
Researchers can choose among a variety of research designs to investigate the same broad
question or problem. For example, researchers can use different designs for different aspects of the
question or problem. They can extend the breadth and scope of their question or problem by exploring
related issues, but they may need to adopt different research designs for those issues. Preliminary,
exploratory investigations are typically descriptive, correlational, or observational. As researchers refine
these investigations, they conduct causal-comparative and experimental studies. Other possible
refinements include using the results of a preliminary study to identify or test techniques for sampling or
for collecting and analyzing data in subsequent studies. Researchers also combine designs to address
their particular research questions or problems. For instance, studies may include both descriptive and
correlational findings.
The examples of studies in the fourth column of table 1.2 represent a progression of research
studies on one topic, clinical decision support systems. The progression shows how a research team
might sequentially use the results from one study to guide the design of its next study. The descriptive
study merely looks at the extent of clinicians' use of a clinical decision support system; in contrast, the
correlational study expands the study to examine attributes, characteristics, and other factors
associated with the system's use. Eventually, the research team might conduct a double-blind,
randomized controlled trial on a specific aspect of the clinical decision support system that compares an
intervention (experimental) group and a control (nonexperimental) group.
The following subsections describe each of the research designs listed in table 1.2: historical
research, descriptive research, correlational research, observational research, evaluation research,
experimental research, and quasi-experimental (causal-comparative) research. For each research
design, a relevant health informatics or HIM example is discussed. Table 1.2 also lists examples of
research methods typically associated with each design. Several of these research methods are
explained in chapters 2 through 8. Appropriate choices of research designs and research methods
increase the likelihood that the data (evidence) collected are relevant, high quality, and directly related
to the research question or problem.
Historical Research
Historical research examines historical materials to explain, interpret, and provide a factual
account of events (Atkinson 2012, 20). The purposes of historical research include discovering new
knowledge, identifying trends that could provide insights into current questions or problems, relating the
past to contemporary events or conditions, and creating official records. In historical research, the
investigator systematically collects, critically evaluates, and analyzes and interprets evidence from
historical materials (Polit and Beck 2012, 500). These historical materials are known as primary and
secondary sources.
Primary sources, also sometimes called primary data, are firsthand sources. In historical research,
these firsthand sources include original documents, artifacts (objects, such as computers or paper
records), and oral histories (first-person, spoken accounts). Generally, these firsthand sources are
created or collected for a specific purpose. For example, original data obtained by researchers in a
research study to answer a specific question and the article in which they published their data might be
used as a primary source. Secondary sources, also called secondary data, are secondhand sources.
In historical research, these secondhand sources are created by people uninvolved with the event.
Generally, secondary sources aggregate, summarize, critique, analyze, or manipulate the primary
sources, and, as such, they are derived from primary sources. Clinical data warehouses are secondary
sources because they aggregate many individual patient health records and because the data are used
for strategic planning or trend analysis rather than treating the individual patient. This chapter is another
example of a secondary source because it describes and summarizes the original reports of others.
Some methodologists categorize encyclopedias, textbooks, and other references as tertiary sources
because they summarize information from secondary sources. (See figure 1.2 for examples of primary
and secondary sources.)
Figure 1.2 Primary and secondary sources
As noted earlier, historical researchers begin by systematically collecting sources, which means they
consider all possible relevant primary and secondary sources, their location, and how to access them,
and then choose the best sources for their study. Historical researchers often use records or other
documents. For example, historical researchers might investigate the increasing importance of IT
departments in healthcare organizations by examining organizational charts over time. Possible
questions that these historical researchers could ask include the following:
When did IT departments first begin to appear on the charts?
What were these departments called?
Which types of healthcare organizations first established these departments?
Where were the departments' leaders in the organization's chain of command? Were they midlevel
managers or upper administration?
In addition to using documents as a source of data, historians also use oral histories, particularly to
obtain eyewitness accounts. “Oral history is a method for documenting history in a vivid way by
recording the voices of those who have experienced it” (Ash and Sittig 2015, 2). Historians might record
interviews with early leaders in a field, such as Dr. Lawrence Weed, who was the first in medicine to
propose problem-oriented medical records as a way of recording and organizing the content of health
records and advocated for EHRs (Jacobs 2009, 85). The researchers also could record interviews with
staff members of a healthcare organization about how the organization's information systems functioned
during a time when the facility was flooded or a natural disaster led to unusually high numbers of
injuries and patient admissions. Historical research must be open to the scrutiny and critical
assessment of other researchers. Therefore, historical researchers must ensure the preservation of
their source documents, such as tapes and transcripts of interviews, and make them available to other
researchers (American Historical Association 2011, 7).
The following are examples of hypothetical historical research investigations in health informatics
and HIM:
While detailing the history of health informatics from 1950 to 1975, health informatics researchers
find previously undetected communications between hardware developers.
While identifying trends in patients' use of social media, health informatics or HIM researchers
suggest ways to increase patients' engagement with patient portals.
While identifying trends in the development of standards organizations, health informatics or HIM
researchers account for the organizations' impact on the establishment of standards.
While relating the views of early adopters of big data to current practice, health informatics
researchers predict views of clinicians in small or solo practices.
While tracing the history of clinical vocabularies, medical terminologies, nomenclatures, and coding
and classification systems, especially those that no longer exist (such as the Standard
Nomenclature of Diseases and Operations), HIM researchers hypothesize about their influence on
current clinical vocabularies, medical terminologies, nomenclatures, and coding and classification
systems.
While tracing the history of the American Health Information Management Association (AHIMA) or of
the American Medical Informatics Association (AMIA), HIM researchers or health informatics
researchers establish the official records of these organizations' development.
An example of historical research related to health informatics is the US National Library of
Medicine's oral history project called Conversations with Medical Informatics Pioneers, which is
published online (NLM 2015).
A field's history is one aspect of its body of knowledge. Therefore, historical research, while focused
on the past, can inform current and future practice of health informatics and HIM.
Descriptive Research
Descriptive research determines and reports on the current status of topics and subjects.
Descriptive research studies seek to accurately capture or portray dimensions or characteristics of
people, organizations, situations, technology, or other phenomena. Descriptive research is useful when
researchers want to answer the questions “what is,” “what was,” or “how much” (Bickman and Rog
2009, 16). Some descriptive research studies are also correlational, which means they detect
relationships. “Descriptive studies are usually the best methods for collecting information that will
demonstrate relationships and describe the world as it exists” (ORI 2015).
The data gathered in descriptive research are used to establish baselines, make policy, conduct
ongoing monitoring, assess operations, or demonstrate impacts. In the practice of health informatics
and HIM, descriptive research can function as a way to establish benchmarks against which outcomes
of future changes can be evaluated. For example, administrators might need to establish a baseline of
users' productivity on an existing health information system so that in the future the organization can
compare users' productivity on a new information system to their former productivity. In another
example, information specialists might need to know the clinicians' levels of familiarity with a new
technology before the technology is rolled out throughout the organization.
Descriptive research is exploratory and typically depicts the frequency of specific elements, the
number of individuals, the level or quantity of factors, or the ranges of occurrences. Tools commonly
used to collect descriptive data include surveys, interviews, observations, and existing data sets, such
as those listed as secondary sources in figure 1.2. Obtaining a representative sample (a small group
that adequately reflects relevant characteristics of the total population that might be studied) adds to the
credibility of a descriptive study. Representative sampling is detailed in chapters 2 and 11. Descriptive
research can collect data about a phenomenon (such as people or organizations) at one point in time,
or it can follow the phenomenon over time. (Cross-sectional and longitudinal time frames are explained
later in this chapter.) Familiar examples of descriptive research are the US decennial census and
opinion polls. Descriptive studies allow researchers to learn details about potential factors that may
affect outcomes; however, descriptive studies do not allow researchers to explain whether or how
factors cause outcomes. As such, descriptive studies are often precursors to explanatory studies, such
as experimental studies (explained later in the chapter) that can establish cause-and-effect
relationships.
Contemporary descriptive studies have covered several health informatics and HIM topics, such as
information governance, safety of HIT, education in health informatics, impact of the Health Information
Technology for Economic and Clinical Health (HITECH) Act of 2009, the information literacy of health
professionals, and other issues. The real-world case presented at the beginning of this chapter is an
example of descriptive research in health informatics. The researchers (analysts) obtained information
on the current status of US adults' perspectives on the healthcare environment. The study reported the
percentage of consumers who would abandon or hesitate to use a health organization if it were hacked.
The researchers' tool was the online survey.
In another example, Fenton and colleagues, a team of health informatics and HIM researchers,
investigated what the term medical record meant in research consent forms (Fenton et al. 2015, 466).
Prior to enrolling subjects in research studies, researchers must obtain written documentation of the
subjects' voluntary and informed (knowledgeable) consent on a research consent form.
Fenton and colleagues wondered whether everyone involved in a research study—subjects,
researchers themselves, and research support staff—understood what data were included in the term
medical record because the content of medical records has changed over the past 20 years. In the past,
the medical record was the collection of documents in the paper record. However, as medical records
have been computerized, their content has expanded to include other electronic data, such as data in
pharmacy management systems and blood bank systems. To investigate its question, the team
reviewed the language in the research consent forms of 17 academic health centers that had received
Clinical and Translational Science Awards (grants) from the National Institutes of Health. In an article
about the study, Fenton and colleagues provided descriptive statistics, such as tallies on the number
and types of consent forms (how many and which type—long forms, short forms, adult forms, pediatric
forms, and so forth) and frequencies (number of occurrences) of medical record and related terms (such
as health record, electronic medical record, and so forth). The team concluded, based on its findings,
that the term medical record, as it is used in informed consent documents for research studies, is
ambiguous and does not support information management and governance. Fenton and colleagues
noted that a limitation of its study was that the 17 academic health centers were a convenience sample
(easily accessible because their research consent forms were posted on the Internet) and, as such,
may not be representative of all organizations' research consent forms. Research consent (informed
consent) forms are explained in detail in chapter 14.
Correlational Research
Correlational research detects the existence, direction, and strength (or degree) of associations
among characteristics. These characteristics can be phenomena, factors, attitudes, organizational
features, properties, traits, indicators, performance measures, or any other attribute of interest.
Correlational research is quantitative and exploratory, and it indicates existing associations that can be
examined and possibly explained using experimental research studies. As mentioned in the previous
subsection, correlational research is descriptive when the researchers are detecting associations.
However, this type of research can also be predictive when researchers suggest that a change in one
characteristic (or characteristics) will or will not follow a change in another characteristic (or
characteristics). The association between the items' change is nonrandom—that is, it is not due to pure
chance.
In correlational studies, the characteristics are often called variables. Variables are characteristics
that are measured and may take on different values. However, based on the specific type of variable or
correlational study, other terms may be used instead of the term variable. Variables that covary—
change together—may be called covariables or covariates. In some correlational studies, such as
canonical and prediction studies, the variables are known as predictor variables (predictors) and
criterion variables (or outcome variables). For example, in a prediction study, a mother's smoking
(predictor variable) predicts a low birth weight (criterion variable) in her baby.
In correlational research, data are collected on at least two measured variables. These data are
collected using tools similar to those used in descriptive research: surveys, standardized tests,
interviews, observations, and existing data sets, such as those listed as secondary sources in figure
1.2. Readers should note that researchers often combine data from their own questionnaires and EHRs
Get Complete eBook Download by Email at discountsmtb@hotmail.com
with data from large databases, such as the Centers for Medicare and Medicaid Services' (CMS's)
Hospital Compare.
For example, researchers might conduct a correlational study investigating the associations among
three variables: stress, anxiety, and feelings of personal accomplishment. On a scatter plot, the
researchers could graph the scores (values) of the variables and find that the scores clustered around a
straight line. A straight-line association is known as a linear association, which can be either positive or
negative, as follows:
A positive (direct) linear relationship (association) exists when the scores for variables
proportionately move in the same direction (Singleton and Straits 2010, 91). For example, the
researchers in the hypothetical study about stress, anxiety, and personal accomplishment might
summarize their findings by stating that as participants' scores on stress increased, their scores on
anxiety also proportionately increased. The researchers could also state their findings conversely,
indicating that as the participants' scores on stress decreased, their scores on anxiety also
proportionately decreased. In both statements of the results, the variables are proportionately
moving in the same direction—as one increased, so did the other, or as one decreased, so did the
other. This association is a positive relationship because the variables' scores are moving in the
same direction, as demonstrated in figure 1.3a.
A negative (inverse) linear relationship (association) exists when the scores of the variables
proportionately move in opposite (inverse) directions (Singleton and Straits 2010, 91). For example,
reporting on the same hypothetical study, the researchers might state that as participants' scores on
stress increased, their scores on feelings of personal accomplishment proportionately decreased.
Conversely, the researchers could state that as participants' scores on stress decreased, their
scores on feelings of personal accomplishment proportionately increased. In both statements of the
results, the variables are proportionately moving in the opposite directions—as one increased, the
other decreased, or as one decreased, the other increased. This association is a negative (inverse)
relationship because the variables' scores are proportionately moving in opposite directions, as
demonstrated in figure 1.3b.
Figure 1.3 Examples of positive linear relationship, negative linear relationship, and curvilinear
relationship
This subsection focuses primarily on linear associations because they are frequently studied in
correlational research. Curvilinear associations, which are named for their shapes, such as s-curves, j-curves, and u-curves, are not as commonly studied. Figure 1.3c presents an s-curve that shows the rate
of learning. In this learning curve, learning begins slowly, then rapidly increases, and then plateaus.
Curvilinear and other nonlinear associations require different statistical techniques than those used to
analyze linear associations.
The strength of a linear association can be understood as the accuracy of the prediction (Singleton
and Straits 2010, 93). The strength of the association among variables can range from 0.00 to +1 or
from 0.00 to −1, detailed as follows:
Strength of 0.00 means absolutely no association.
Strength between 0.00 and +1 or between 0.00 and −1 means that the variables sometimes, but not
always, move together.
Strength of +1 or −1 means a perfect association, with the variables moving exactly in tandem.
Terms and rough guidelines on cutoffs for the strength of associations are as follows:
No or little association: 0.00 to 0.09 and 0.00 to −0.09
Small or weak association: 0.10 to 0.29 and −0.10 to −0.29
Medium or moderate association: 0.30 to 0.49 and −0.30 to −0.49
Large or strong association: 0.50 to 1.0 and −0.50 to −1.0 (Ebrahim 1999, 14)
Generally, the closer the values are to +1 or −1, the stronger the linear association.
Correlational research cannot establish a causal relationship. A causal relationship demonstrates
cause and effect, such that one variable causes the change in another variable. There are two reasons
why researchers who conduct correlational studies cannot make causal statements:
Correlational research is not experimental, and experimental research is one way to establish cause
and effect (experimental research is covered later in this chapter).
An unknown variable could be creating the apparent association identified in the correlational study.
This unknown variable is called a confounding (extraneous, secondary) variable because it
confounds (confuses) interpretation of the data.
In the hypothetical stress-anxiety-personal accomplishment study we have been considering, the
correlational design does not allow the researchers to state that (1) stress caused anxiety to increase
(or vice versa); (2) stress caused the feelings of personal accomplishment to decrease (or vice versa);
or (3) any of the variables caused the other ones to change. In this case, a confounding variable, such
as financial problems, poor coping skills, or low self-confidence, could explain the research's results.
What the researchers who conducted the stress-anxiety-personal accomplishment study can state is
that the variables, stress, anxiety, and personal accomplishment, are related to one another.
Finally, another shortcoming of some correlational studies is that they rely on self-reported data from
the subjects. Unfortunately, retrospective self-reported data can be influenced by the subjects' biases,
their selective memories, or the perceived social desirability of certain data options (study participants
may tend to overreport “good” behavior and underreport “bad” behavior). Some descriptive studies
share this shortcoming.
Researchers have conducted correlational studies on topics related to health informatics and HIM,
including factors related to the adoption of HIT, patients' rates of social media usage and their ratings of
providers, healthcare professionals' security practices and the personal characteristics of those
professionals, use of mobile apps and maintenance of health regimens, and many more topics.
For example, researchers investigated correlations among three variables: the ability of a mobile
device to monitor teens' asthma control, asthma symptoms (such as coughing), and the teens' quality of
life (such as limited activities) in the short term (Rhee et al. 2015, 1). The researchers' small
convenience sample was 84 teens (42 teens with a current asthma diagnosis; 42 without asthma)
between the ages of 13 and 17 years. Data came from the mobile device, the automated diaries of the
teens with asthma, laboratory test results, and three questionnaires and were analyzed using correlational statistics
(Pearson product-moment correlation coefficient and multiple regression, which are explained in
chapter 9). Data from the device showed the current status of teens' asthma control and predicted the
teens' asthma symptoms and quality of life. The researchers stated that the study was limited by the
small convenience sample and the self-reported data on the questionnaires.
Observational Research
Observational research is exploratory research that identifies factors, contexts, and experiences
through observations in natural settings. Its focus is the participants' perspective of their own feelings,
behaviors, and perceptions. Observational research is highly descriptive and provides insights into what
subjects do, how they do it, and why they do it. Observational researchers may use either the
quantitative approach or the qualitative approach. However, most observational research is classified as
qualitative research because of its emphasis on uncovering underlying beliefs and meanings.
Observational researchers observe, record, analyze, and interpret behaviors and events. They
attempt to capture and record the natural, typical behavior of their participants (subjects) in their context
and their natural surroundings. Observational research is intensive, and researchers amass volumes of
detailed data. Common tools in observational research are case notes, check sheets, audiotapes, and
videotapes. Typically, observational researchers spend prolonged periods in the setting or events being
researched; however, some research topics, such as behaviors in natural disasters, prevent this type of
prolonged engagement. Data collection and analysis often are concurrent with additional participant
interviews, which are used to address perceived gaps or capture missing viewpoints. Observational
research focuses on collecting rich data: thick descriptions and layers of extensive detail from multiple
sources. Sources of data include observations, interviews, and artifacts. Artifacts are objects that
humans make that serve a purpose and have meaning (Malt and Paquet 2013, 354–355). Health-related
examples of artifacts include administrative records, financial records,
policy and procedure manuals, legal documents, government documents, flowcharts, paper-based
charts, EHRs, sign-in sheets, photographs, diaries, and many more items. Through analysis,
observational descriptions and details are categorized into overarching themes. The themes are then
interpreted to answer the research question. Details, such as participants' narratives, are often used in
the reporting of observational studies to illustrate the researchers' interpretations. Care is taken to
include and to account for discrepancies, nuances, idiosyncrasies, and inconsistencies.
Observational researchers often use triangulation to support their findings. Triangulation is the use
of multiple sources or perspectives to investigate the same phenomenon. The multiple sources or
perspectives can include data (multiple times, sites, or respondents), investigators (researchers),
theories, and methods (Carter et al. 2014, 545). The results or conclusions are validated if the multiple
sources or perspectives arrive at the same results or conclusions. This technique lends credence to the
research.
There are many types of observational research. In 1990, a research methodologist identified more
than 40 types of observational studies, although many of these types overlap or are identical (Tesch
1990, 58). Three common types of observational research are nonparticipant observation, participant
observation, and ethnography.
Nonparticipant Observation
In nonparticipant observation, researchers act as neutral observers who neither intentionally
interact with nor affect the actions of the participants being observed. The researchers record and
analyze observed behaviors as well as the content of modes of communication, such as documentation,
speech, body language, music, television shows, commercials, and movies. Three common types of
nonparticipant observation are naturalistic observation, simulation observation, and case study.
Naturalistic Observation In naturalistic observation, researchers record observations that are
unprompted and unaffected by the investigators' actions. Researchers can conduct naturalistic studies
in organizations; for example, they might study how the implementation of a type of HIT affected work
flow at a hospital. Researchers conducting a naturalistic observation within an organization face the
problem of remaining unobtrusive. The researchers' mere presence can affect people's behavior
(known as the Hawthorne effect). However, the strength of naturalistic studies, if
researchers resolve this problem in their study's design, is that participants tend to act more naturally in
their real setting than in simulated settings. Naturalistic studies are sometimes conducted after a
disaster, such as a computer system breach, earthquake, hurricane, or other phenomenon.
Researchers conducting natural experiments on disasters wait for the phenomenon to occur naturally.
For example, researchers interested in the effectiveness of IT disaster recovery plans could establish
baseline data and tools for data collection before a disaster occurred and then wait for the opportunity to
record their observations of the actions and results when a disaster actually happened.
Simulation Observation In simulation observation, researchers stage events rather than allowing
them to occur naturally. Researchers can invent their own simulations or use standardized vignettes to
stage the events. For example, researchers may conduct a simulation study to evaluate an application's
use, content, and format. Simulations could also develop and test the data collection tools for a
naturalistic observation study of a disaster or other event. Researchers can build or use simulation
laboratories to substitute for the actual setting. To allow observations of activities and behaviors, these
laboratories may have one-way windows or projection screens.
Case Study A case study is an in-depth investigation of one or more examples of a phenomenon,
such as a trend, occurrence, or incident. Case studies are intensive, and researchers gather rich data.
The “case” can be a person, an event, a group, an organization, or a set of similar institutions.
Depending on their research question, researchers select various types of sample cases. Types of
sample cases include similar cases but with differing outcomes, differing cases but with the same
outcome, diverse or contrasting cases, typical or representative cases, influential or pioneering cases,
and other types. Case study researchers can also combine types of sample cases. For example, to
identify general good practices researchers could select typical or representative cases that had the
same successful outcome. Researchers report their case studies as detailed accounts or stories.
Participant Observation
In participant observation, researchers participate in the observed actions, activities, processes, or
other situations. Participant observation research is used to investigate groups, processes, cultures,
and other phenomena. The researchers record their observations of other people's daily lives, the
contexts of people's actions, and their own experiences and thoughts. Researchers using participant
observation may reflect the insiders' perspectives, which can be both an
advantage and a disadvantage of this approach. As insiders, the researchers may have unique
insights into the environment and context of the situation. At the same time, however, the researchers
may share the biases and blind spots of insiders. Therefore, in participant observation, researchers
attempt to maintain neutrality while being involved in the situation.
Researchers can participate overtly (openly) or covertly (secretly) in participant observation. Covert
observation involves the deception of other participants in the study and other organizational members,
and many ethicists consider it to be unethical (Spicker 2011, 118–123). Covert observation may involve
breaches of the right to privacy and the principle of informed consent. Additionally, covert observation
can undermine human relationships by eroding trust and disregarding honesty. On the other hand, the
ethical principle of utility—the greatest good for the greatest number—and the advancement of science
may override the ethical breaches of covert observation (Parker and Ashencaen Crabtree 2014, 35–36).
In general, researchers who are considering covert observation as their research method should assess
whether the data could be collected using another method and should seek counsel from appropriate
research oversight entities.
Ethnography
Ethnography, which has its origins in anthropology, is the exhaustive examination of a culture by
collecting data and making observations while being in the field (a naturalistic setting). Ethnographers
amass great volumes of detailed data while living or working with the population that they are studying.
This observational method includes both qualitative and quantitative approaches and both participant
and nonparticipant observation. Characteristics of ethnography include immersion in the field, the
accrual of volumes of data, and great attention to the environment. Ethnographers gather data with field
notes (jotted observations in notebooks), interviews, diagrams, artifacts, and audio, video, and digital
recordings. Ethnographers gather data about the following:
Environment, such as physical spaces, participants, participants' general roles and responsibilities,
and other aspects of the setting
Organizational characteristics, such as strategic and tactical plans, procedures, processes,
organizational charts, and division of labor
Flow of activities, such as stages in a process, participants' particular roles and perspectives, and
sequences of interactions and practices
Collaborative or cooperative actions and informal relationships, such as conversational groups and
hand-over of tasks
Physical, communication, and digital resources, such as computers, audit trails, and keystroke logs
(Crabtree et al. 2012, 78–84).
Ethnography can obtain insights not discoverable in other research designs. However, the design
may result in quantities of data that are difficult to analyze without making large investments of money
and time.
Observational research may be combined with other designs, such as descriptive research. Focused
on specific contexts, observational researchers do not attempt to generalize to other situations.
However, the findings of observational research may result in testable hypotheses for subsequent
quantitative research studies that do seek generalizability. Characteristics of high-quality observational
research are credibility, transferability, dependability, and confirmability (Guba and Lincoln 1989, 236–
243). These characteristics are defined in table 1.3. However, observational research can be time-consuming, human resource–intensive, and expensive.
Table 1.3 Criteria for quality observational research

Credibility: Procedures of the study that support the accuracy and representativeness of the thoughts, feelings, perceptions, and descriptions of the subjects under study. Examples include involvement of appropriate duration and intensity; revision of hypotheses to account for facts; and ongoing review of data, analysis, and interpretation with participants.

Transferability: Degree to which key characteristics of contexts are similar and, thus, applicable to other contexts. Transferability affects the ability to apply the results of research in one context to other similar contexts. Detailed and extensive (rich) descriptions of the context are the primary way to establish transferability.

Dependability: Reliability of data achieved by being able to explicitly track and account for changes in the design and methods of the study occasioned by changing conditions, identification of gaps, or other important factors. The dependability audit is the means to open decision making to public inspection.

Confirmability: Ability to trace data to original sources and ensure that another person analyzing, categorizing, and interpreting the data based on the research's documentation would confirm the logic. Confirmability is established by external reviewers conducting a confirmability audit.

Source: Guba and Lincoln 1989, 236–243.
Health informatics and HIM researchers are building a body of knowledge for evidence-based
practice. Observational research supports this effort because observational research records what
actually occurred in real time, rather than what subjects retrospectively recall occurred. For example,
health informatics researchers can observe how users actually navigate an application, follow what
really happened in an implementation, or understand other types of situations related to HIT.
The following are some reasons why health informatics and HIM researchers might choose to
conduct observational research:
To study work flow—Health informatics and HIM researchers can observe artifacts that support
workers' use of various health information technologies and applications. Are manuals sitting by the
device or are diagrams taped to the wall? Are cheat sheets and sticky tabs posted on workstations?
Can these artifacts be categorized? Could the hardware or software be designed to include the
necessary information, thus eliminating the need for artifacts? Under what conditions, such as
during the performance of particular tasks or at certain times of the day, do users pay attention to or
ignore alerts?
To investigate the functioning of health information systems—Health informatics and HIM
researchers can observe the exchange and transmission of health information throughout a staged
event, such as a disaster drill. In the drill, does the information system support users, both familiar
and unfamiliar with the system, and provide information when and where it is needed? After an
actual disaster, health informatics and HIM researchers can observe and record the actual
performance of post-disaster information recovery plans.
To uncover why clinicians, administrators, and other personnel use (or do not use) various
decision support tools—Health informatics and HIM researchers can create scenarios of complex
diagnostic or administrative problems requiring decisions for which the health information systems
had embedded resources. The researchers might then ask the users to explain how they obtain the
information needed to solve the problem and record the users' explanations.
To examine specific modules (applications) within health information systems—How can data
from health departments' syndromic surveillance systems be combined with data from health
information exchange systems to support the delivery of care and services during and after
disasters?
To investigate why healthcare organizations' online programs to engage patients work (or do
not work)—How do patients respond to targeting and how can tailoring algorithms augment
engagement?
A case study of how well portals convey information to patients provides an example of an
observational research study (Alpert et al. 2016, 1). The patient-participants were selected because
they had used the portal at least once in the past year, were between the ages of 18 and 79 years, and
had an upcoming clinic appointment. The volunteer clinician-participants (full-time physicians, residents,
nurses, and an emergency medical technician) were recruited through e-mails and announcements.
The researchers collected data by conducting 31 interviews with the patient-participants and by holding
two focus groups of clinicians (focus groups are discussed in detail in chapter 3). In the interviews and
focus groups, the researchers asked the participants to describe their best and worst experiences with
the portal. The average length of the interviews was 14 minutes; the average length of the focus groups
was 51 minutes. All responses were audio-recorded and transcribed. Data collection continued until the
investigators identified that similar phrases and words were recurring and no new concepts were being
revealed. Content analysis was used to analyze the data (content analysis is described in chapter 3).
The researchers' results suggested that some simple modifications to the portals, such as increased
interactivity and personalized messages, could enhance the patients' understanding of the information.
The researchers noted that a limitation of the study was that the participants represented a small subset
of all users in terms of ages, ethnicities, and socioeconomic classes.
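In studies like this one, researchers often judge that data collection can stop when successive interviews yield no new codes (thematic saturation). As a minimal sketch of that idea—not taken from the Alpert et al. study—the following Python snippet counts how many previously unseen content-analysis codes each new interview contributes; the interview codes shown are invented for illustration.

```python
# Hypothetical sketch of tracking thematic saturation across interviews.
# The interviews and code labels below are invented for illustration;
# they are not data from the Alpert et al. (2016) study.

def new_codes_per_interview(coded_interviews):
    """For each interview (a set of content-analysis codes), count how many
    codes have not appeared in any earlier interview."""
    seen = set()
    counts = []
    for codes in coded_interviews:
        fresh = set(codes) - seen   # codes first appearing in this interview
        counts.append(len(fresh))
        seen |= set(codes)
    return counts

interviews = [
    {"login trouble", "lab results", "jargon"},
    {"lab results", "no reply", "jargon"},
    {"lab results", "login trouble"},
    {"jargon", "no reply"},
]
print(new_codes_per_interview(interviews))  # [3, 1, 0, 0]
```

When the trailing counts fall to zero, as in the last two interviews here, no new concepts are being revealed—the signal the researchers described for ending data collection.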
Evaluation Research
Evaluation research is the systematic application of criteria to assess the value of objects
(Øvretveit 2014, 6–13). Systematic is a key term in the definition of evaluation research. Value can be
assessed in terms of merit, worth, quality, or a combination of these attributes (CDC 2012, 6). Examples
of evaluated objects include policies, programs, technologies (including procedures or
implementations), products, processes, events, conditions, and organizations. Criteria to assess these
activities or objects can be related to many of their aspects, such as conceptualization, design,
components, implementation, usability, effectiveness, efficiency, impact, scalability, and generalizability.
Depending on the focus of the research, the researchers' educational background, and the research's
funding source, other terms may be used for studies conducted under the large umbrella of evaluation
research (see table 1.4).
Table 1.4 Terms used to describe evaluation research studies

Outcomes research
Description: Research that seeks to improve the delivery of patient care by studying the end results of health services, such as quality of life, functional status, patient satisfaction, costs, cost-effectiveness, and other specified outcomes (In and Rosen 2014, 489).
Health informatics and HIM example: Investigation of whether a clinical decision support system that links the EHR to treatment protocols, drug information, alerts, and community resources for the care of patients with HIV infection improves patients' quality of life.

Health services research
Description: Multidisciplinary research that studies how social factors, financing systems, organizational structures and processes, health technologies, and personal behaviors affect access to healthcare, its quality and cost, and overall health and well-being. The research is usually concerned with relationships among need, demand, supply, use, and outcome of health services (Stephens et al. 2014).
Health informatics and HIM example: Investigation of whether the degree of an organization's adoption of health information technologies affects patient safety.

Health technology assessment (HTA)
Description: Evaluation of the usefulness (utility) of a health technology in relation to cost, efficacy, utilization, and other factors in terms of its impact on social, ethical, and legal systems. The purpose of HTA is to provide individual patients, clinicians, funding bodies, and policymakers with high-quality information on both the direct and intended effects and the indirect and unintended consequences (INAHTA 2016). Technology, in this context, is broadly defined as the application of scientific knowledge to practical purposes and includes methods, techniques, and instrumentation. Health technologies promote or maintain health; prevent, diagnose, or treat acute or chronic conditions; or support rehabilitation. They include pharmaceuticals, medical devices, medical equipment, medical diagnostic and therapeutic procedures, organizational systems, and health information technologies.
Health informatics and HIM example: The Technology Assessment Program of the Agency for Healthcare Research and Quality (AHRQ) conducts technology assessments based on primary research, systematic reviews of the literature, meta-analyses, and appropriate qualitative methods of synthesizing data from multiple studies (AHRQ 2016a). CMS uses AHRQ's HTAs to make national coverage decisions for the Medicare program.

Comparative effectiveness research (CER)
Description: Research that generates and synthesizes comparative evidence about the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition, or to improve the delivery of care. This evidence can assist consumers, clinicians, purchasers, and policymakers to make informed decisions that will improve healthcare at both the individual and population levels (AHRQ 2016b).
Health informatics and HIM example: Investigation to determine whether a self-managed online diabetes support program or a clinician-moderated online diabetes support program is more effective or beneficial for a given patient.

Usability testing
Description: Evaluation that assesses whether a product or service achieves its intended goals effectively, efficiently, and satisfactorily for representative users in a typical setting. In this evaluation, the users are the focus, not the product or service. Effectiveness is defined by whether a wide range of intended users can achieve the intended goal. Efficiency is considered in terms of the users' time, effort, and other resources. Satisfaction is defined by whether users are satisfied with the experience and also takes into account their relative opinion about this particular product or service when other alternatives are available (ISO 2013).
Health informatics and HIM example: Investigation of whether users can schedule an appointment, obtain laboratory test results, and e-mail their provider using the healthcare organization's portal.
Common types of evaluation studies are needs assessments, process evaluations, outcome
evaluations, and policy analyses (see table 1.5). Evaluation researchers conduct studies for several
reasons, including the following:
To ascertain progress in implementing key provisions of a plan, process, or program
To assess the extent of achieving desired outcomes
To identify effective practices for achieving desired results
To determine opportunities to improve performance
To evaluate the success of corrective actions (GAO 2012, 13)
Table 1.5 Common types of evaluation research studies

Needs assessment
Description: Collecting and analyzing data about proposed programs, projects, and other activities or objects to determine what is required, lacking, or desired by an employee, a group, an organization, or another user. Data are also collected on the extent, the severity, and the priorities of the needs (Shi 2008, 213).
Health informatics or HIM example: Survey of patients to determine their preferences and priorities for various features in the healthcare organization's patient portal.

Process evaluation (also known as formative evaluation)
Description: Monitoring programs, projects, and other activities or objects to check whether their development, implementation, or operation is proceeding as planned; may include investigation of alternative processes, procedures, or other activities or objects (GAO 2012, 15).
Health informatics or HIM example: Assessment of the roll-out of new features in the organization's patient portal to determine whether the roll-out is achieving the project's milestones and staying within budget. Adjustments to the process can then be made as needed.

Outcome evaluation (also known as summative evaluation)
Description: Collecting and analyzing data at the end of an implementation or operating cycle to determine whether the program, project, or other activity or object has achieved its expected or intended impact, product, or other outcome; includes investigation of whether any unintended consequences have occurred (Shi 2008, 218–219). The impact evaluation is a form of outcome evaluation that looks at long-term effects, such as what would have happened had the activity or object not been implemented or whether the impact has extended beyond the initial target population (GAO 2012, 16).
Health informatics or HIM example: Comparison between the level of patients' interaction with the organization's patient portal and the level of interaction reported by industry peers. Organizational leaders can use the findings to help decide whether the portal vendor's contract should be renewed or revised.

Policy analysis
Description: Identifying options to meet goals, estimating the costs and consequences of each option prior to the implementation of any option, and considering constraints of time, information, and resources (Shi 2008, 219–220).
Health informatics or HIM example: A federal agency's identification of various ways to increase patient self-management and engagement by using health information technologies, including, for each way, an analysis of its benefits and costs and a prediction of its consequences.
Evaluation research can involve a quantitative, qualitative, or mixed-methods approach. Moreover,
evaluation research can use any of the other research designs; it is the purpose—evaluation—that
classifies the design as evaluation. For example, in a descriptive evaluation study, researchers could
compile data on the characteristics and numbers of people using a technology. In another example,
researchers could conduct an observational evaluation study to investigate how users navigate through
new features of the patient portal.
Evaluation researchers collect data by using tools such as case studies, field observations,
interviews, experiments, and existing data sets, such as those identified in figure 1.2.
Evaluation research is assessed against four overarching standards, as stated below:
Utility: Relevant information will be provided to people who need it at the time they need it.
Feasibility: There is a realistic likelihood that the evaluation can succeed if given the necessary
time, resources, and expertise.
Propriety: Appropriate protections for individuals’ rights and welfare are in place and the
appropriate stakeholders, such as users and the surrounding community, are involved.
Accuracy: The evaluation's results will be accurate, valid, and reliable for their users. (CDC 2012,
10)
Like other investigators, individuals conducting evaluation research must follow established
protocols (detailed sets of rules and procedures), use defensible analytic procedures, and make their
processes and results available to other researchers. Evaluation research should be capable of being
replicated and reproduced by other researchers. Examples of evaluation studies include investigations
of the usability, utilization, effectiveness, and impact of telehealth services, network-based registries,
clinical decision support systems for various medical specialties (such as radiology, emergency
medicine, and others) and conditions (such as diabetes, asthma, and others), mapping systems to
capture data for decision support systems, and mobile devices. Researchers use a variety of
techniques to conduct the evaluation studies.
Using a mixed-methods research approach, a contracted research agency conducted an outcome
evaluation and a policy analysis of the Regional Extension Center (REC) program for the Office of the
National Coordinator for Health Information Technology (ONC) (Farrar et al. 2016). As part of the
HITECH Act, the REC program funded support to healthcare providers to assist them in adopting EHRs
and other HIT. These providers included solo and small physician practices, federally qualified health
centers and rural clinics, critical access hospitals, and other providers for underserved populations
(often rural and poor).
The agency's researchers collected descriptive data through interviews and focus groups with REC
representatives, an electronic survey of RECs, and surveys of Health Information Technology Research
Center (HITRC) online portal users. A few examples of questions that researchers asked to evaluate
the REC program included the following:
How did RECs structure and organize their programs?
What contextual conditions influenced the implementation and operation of the REC programs?
Was REC participation associated with adoption of EHRs?
Was REC participation associated with receiving incentives through the Medicare and Medicaid
EHR Incentive Programs?
Was REC participation associated with positive opinions about EHRs?
The researchers described the impact of the REC program by comparing outcomes for REC
participants to outcomes for nonparticipants. The evaluation study found that 68 percent of the eligible
professionals who received incentive
payments under the federal meaningful use incentive program (stage 1) were assisted by an REC,
compared to just 12 percent of those who did not work with an REC (Farrar et al. 2016, 5). Generally,
the REC program had a “major impact” (Mason 2016). In terms of policy analysis, the evaluation study
identified several points that policymakers should consider. For example, the REC model was an
effective way to achieve program goals; also, tools and resources should be in place prior to the startup
of a program (Farrar et al. 2016, 53–54).
Experimental Research
Experimental research is a research design in which researchers follow a strict procedure to
randomly assign subjects to groups, manipulate the subjects' experience, and finally measure any
resulting physical, behavioral, or other changes in the subjects. Experimental researchers create strictly
controlled situations and environments in which to observe, measure, analyze, and interpret the effects
of their manipulations on subjects or phenomena. Researchers conduct experimental research to
establish cause-and-effect (causal) relationships. As previously noted in the subsection on correlational
research, experimental research is one research design that can establish causal relationships.
Experimental research has four features, which are as follows (Campbell and Stanley 1963):
Randomization: The process begins with random sampling, which is the unbiased selection of
subjects from the population of interest. (Random sampling is discussed in detail in chapter 11.)
Then, randomization, or the random allocation of subjects to the comparison groups, occurs. Of the
comparison groups, the experimental (study) group comprises the research subjects who receive
the study's intervention, whereas the control group comprises those who do not receive the study's
intervention.
Observation: The dependent variable, which is the hypothesized change, is measured before and
after the intervention. Observation is used broadly and could be a pretest and a posttest. (It is
acceptable to omit the before observation.)
Presence of a control group: The experiment must compare outcomes for participants who do and
do not receive the intervention.
Treatment (intervention): The researcher manipulates the independent variables, which are the
factors or actions that the researchers are proposing will cause the hypothesized change. In this
context, treatment is defined broadly, beyond its usual meaning of therapy, to refer to any type of
intervention. Treatment could mean a computer training program, an algorithm to extract medical
abbreviations from bibliographic databases, a specific technology or application, or a procedure to
implement health information and communication technologies.
Studies that have all four of these features are classified as experimental, but studies that lack any
of the features are classified as quasi-experimental. Experimental research methods include the pretest-posttest control group method, Solomon four-group method, and posttest-only group method (see
chapter 4).
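The first two features—random sampling from the population and random allocation to comparison arms—can be sketched in a few lines of Python. This is a hedged illustration only; the population, sample size, and arm sizes are invented, and real trials typically use dedicated randomization software with allocation concealment.

```python
# Hypothetical sketch of randomization: random sampling from a population,
# then random allocation of the sample to intervention and control arms.
# Names and sizes are illustrative, not drawn from the text.
import random

random.seed(42)  # fixed seed so the illustration is reproducible

population = [f"subject_{i:03d}" for i in range(500)]

# Random sampling: unbiased selection of subjects from the population.
sample = random.sample(population, 60)

# Randomization: random allocation of sampled subjects to comparison groups.
shuffled = random.sample(sample, len(sample))
experimental_group = shuffled[:30]   # receives the study's intervention
control_group = shuffled[30:]        # does not receive the intervention

assert not set(experimental_group) & set(control_group)  # arms are disjoint
print(len(experimental_group), len(control_group))  # 30 30
```

Because every sampled subject has an equal chance of landing in either arm, known and unknown confounding factors tend to balance across the groups—the property on which the causal claims of experimental research rest.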
In experimental studies, researchers actively intervene to test a hypothesis (a measurable statement
of the researchers' supposition). The researchers follow strict protocols, which are detailed sets of rules
and procedures. Protocols must be established in advance (a priori) of the study's inception. This
explicit documentation of protocols assists researchers in planning their study and in consistently
conducting it. Furthermore, protocols promote accountability, research integrity, and transparency
(Moher et al. 2015, 8).
Experimental researchers randomly select participants (subjects) and then randomly allocate the
participants into either an experimental or control group. In experimental studies, these groups are often
called arms, with the experimental group being called the intervention arm and the control group known
as the control arm. As a part of the random allocation, blinding or masking often occurs. Blinding
prevents the parties involved in an experimental study—the subjects, the researchers, and the study
managers or analysts—from knowing whether a participant belongs to the experimental or control
group. Table 1.6 describes the various types of blinding. The purpose of blinding is to minimize the risk
of subjective bias stemming from the researcher's or the subject's expectations and perceptions. These
subjective biases are known as observer-expectancy effect and subject-expectancy effect. In observer-expectancy effect, the researcher (observer) expects a particular outcome and then unconsciously
manipulates the experiment to achieve it. In subject-expectancy effect, a research participant (subject)
expects a particular outcome and either unconsciously manipulates the experiment or reports the
expected outcome.
Table 1.6 Types of blinding in research

Single-blind: Only the subjects are blinded to knowing whether or not they are receiving the intervention.

Double-blind: Both researchers and subjects are blinded to knowing whether or not particular subjects are receiving the intervention.

Triple-blind: Staff members managing the study's operations and analyzing the data, researchers, and subjects are all blinded to knowing whether or not particular subjects are receiving the intervention.
According to the protocol, the researchers systematically manipulate independent variables (factors)
in interventions for the experimental group. In doing so, the researchers test the variables' influences on
the dependent variables (effects or outcomes). To assess the variables' influences, the researchers
conduct an initial observation (measurement), such as a pretest, to establish a baseline. Then, after the
manipulation—the intervention—the researchers conduct a second observation, such as a posttest
(sometimes, multiple observations are made).
As they manipulate factors, experimental researchers are careful to fully control their environments
and subjects. Control is the processes used to maintain uniform conditions during a study in order to
eliminate sources of bias and variations and to remove any potential extraneous factors that might
affect the research's outcome. Control is an important aspect of experimental research because the
researchers' end goal is to pinpoint the cause of the intervention's effect without any possible alternative
explanations related to bias, variation, or unknown factors. In a well-controlled experiment, any
differences between the experimental group's measured outcomes and the control group's measured
outcomes would be due to the intervention.
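A minimal numeric sketch of this pretest–posttest logic follows. All scores are invented for illustration; in a real study the dependent variable would be whatever outcome the protocol specifies (for example, task accuracy after a computer training program), and the analysis would use formal statistical tests rather than a simple difference of means.

```python
# Hypothetical pretest/posttest sketch. Scores are invented; in a real
# experiment the dependent variable is the measured outcome defined
# in the study's protocol.
from statistics import mean

pretest = {
    "experimental": [62, 58, 65, 60],  # baseline, before the intervention
    "control":      [61, 59, 64, 62],
}
posttest = {
    "experimental": [78, 74, 80, 76],  # after the intervention
    "control":      [63, 60, 66, 64],  # no intervention received
}

def mean_change(arm):
    """Average change in the dependent variable for one comparison group."""
    return mean(posttest[arm]) - mean(pretest[arm])

# In a well-controlled experiment, the difference in change between arms
# is attributed to the intervention (the independent variable).
effect = mean_change("experimental") - mean_change("control")
print(round(effect, 2))  # 14.0
```

Here the control arm changes little between observations, so the larger change in the experimental arm—about 14 points in this invented example—is the kind of difference researchers would attribute to the treatment, provided all four experimental features were in place.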
Given their importance in research studies, independent variables and dependent variables need
more discussion. Independent variables are antecedent or prior factors that researchers manipulate
directly; they are also called treatments or interventions. Dependent variables are the measured
variables; they depend on the independent variables. The selection of dependent variables to measure
reflects the results that the researcher has theorized. They occur subsequently or after the independent
variables. See table 1.7 for a side-by-side comparison of the characteristics of independent and
dependent variables. The features of experimental research—randomization, observation, control
group, and treatment—and the procedures of the protocol aim to ensure that, apart from the treatment itself, the only difference between the two groups is the dependent variable—the outcome variable—and, thereby, to establish a causal relationship between the treatment and the outcome. In other words, experimental research tests
whether the independent variable causes an effect in the dependent variable.
Table 1.7 Comparison of characteristics of independent variables and dependent variables

Independent variable: Cause. Originates stimulus or is treatment. Causes or influences change in dependent variable. Manipulated. Antecedent, prior. Action. Other terms: covariable or covariate, predictor, predictor variable, treatment, treatment variable.

Dependent variable: Effect. Receives stimulus or treatment. Measured for effect or influence of stimulus or treatment. Measured for effect of manipulation of independent variable. Successor, subsequent. Consequence. Other terms: covariable or covariate, criterion variable, outcome, outcome variable.
Randomized controlled trials (RCTs), studies in which subjects are randomly selected and
randomly assigned to an experimental group or a control group, are an important type of experimental
research, particularly in medicine. Controlled refers to the use of a control group. Alternative terms for
RCTs are clinical trials, randomized clinical trials, and randomized control trials. Clinical refers to “at the
bedside,” meaning the experiment involves investigations of diagnostic and therapeutic procedures,
drugs, devices and technologies, and other biomedical and health interventions.
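The logic of random assignment and group comparison can be sketched in a few lines of Python. This is a hypothetical illustration, not from the text: the subject IDs and outcome scores are invented, and the simulated intervention effect is fixed at 10 points so the group comparison has something to detect.

```python
import random
import statistics

def randomize(subjects, seed=0):
    """Randomly assign subjects to an experimental group and a control group."""
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # (experimental, control)

subjects = list(range(20))
experimental, control = randomize(subjects)

# Invented post-study outcome scores: the intervention adds 10 points.
# Because assignment was random, the difference in group means estimates
# the effect of the independent variable on the dependent variable.
outcome = {s: 50 + (10 if s in experimental else 0) for s in subjects}
effect = (statistics.mean(outcome[s] for s in experimental)
          - statistics.mean(outcome[s] for s in control))
```

In a real RCT the outcomes would contain natural variation, and the difference in means would be tested statistically rather than read off directly.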
Both health informatics and HIM researchers may conduct RCTs. RCTs related to these fields have
been conducted to evaluate the effectiveness of HIT in medication safety; the impact of technologies—
such as cellular phones, telehealth,
the Internet, tablets, activity trackers, social media, and other technologies—on exercise and
physical activity, diet, medication compliance, treatment adherence, engagement, and other outcomes;
the effect of decision support systems used by various types of clinicians (nurses, surgeons, trauma
physicians, and others) for various conditions (pediatric blunt head trauma, kidney disease, and other
conditions) and for various functions (medication prescribing, medication reviews, diagnosing diseases,
and other functions); and other studies evaluating health technologies and their use.
For example, Ford and colleagues conducted an RCT to investigate whether a single-screen display
of all reports could increase the timely and accurate acknowledgement of critical and noncritical results
as compared to other systems (Ford et al. 2016, 214). The researchers explained that in many current
EHRs, reports—such as those providing laboratory test results—are displayed on multiple screens. The
researchers obtained 100 reports each from two EHR systems and displayed them in the two systems'
respective formats. Then, the researchers displayed the same 200 reports in their test single-screen
display. On a single test computer, the study's participants, 12 physicians and 30 nonphysician
providers, reviewed and processed the 400 reports. The study's results showed that the single-screen
display was superior to the other two systems, both in reducing review times and in improving
accuracy.
Although experimental studies can lead to improvements in healthcare, health informatics and HIM
researchers face barriers when conducting experimental studies. These barriers include practical
constraints, such as how to blind clinicians to the fact that they are using a new technology; pressures
because the timelines to implement new technologies are short; complex workflows that involve many
different clinicians, professionals, and patients; and the highly regulated environments of human
subjects research and healthcare delivery.
Quasi-experimental Research
Quasi-experimental studies involve nonrandomized groups, observations, and treatments. Quasi-experimental research searches for plausible causal factors or indicates that a causal relationship
could exist. To conduct quasi-experimental investigations, researchers approximate the environment of
true experiments. In quasi-experimental studies, researchers investigate possible cause-and-effect
relationships by exposing one or more experimental groups to one or more treatment conditions and
comparing the results to one or more control groups not receiving the treatment. However, the
phenomenon or variables under study do not allow the researchers to control and manipulate all their
relevant aspects. Often, a quasi-experimental study does not use randomization, a key element of a
true experimental design. As the term quasi implies, quasi-experimental research design is “similar to”
or “almost” experimental research, but quasi-experimental research lacks the ability to establish causal
relationships as experimental research can. Quasi-experimental research is also called causal-comparative research or ex post facto (retrospective) research, when it involves a variable from the
past or a phenomenon that has already occurred. Types of quasi-experimental research methods include
one-shot case method, one-group pretest-posttest method, and static group comparison method (see
chapter 4).
In quasi-experimental studies, researchers compare the outcomes of various factors to detect
differences or associations. They investigate how a particular independent variable (factor, event,
situation, or other independent variable) affects a dependent variable (outcome, effect, or other
dependent variable). A quasi-experimental study is the appropriate choice when any of the following
situations exists:
The independent variables cannot be manipulated (for example, gender, age, race, birth place).
The independent variables should not be manipulated (for example, accidental death or injury, child
abuse).
The independent variables represent differing conditions that have already occurred (for example,
medication error, heart catheterization performed, smoking).
These situations prevent people from being randomly assigned into experimental and control
groups. To do so as a true experiment would be infeasible or unethical.
In some quasi-experimental studies, the researchers manipulate the independent variable. However,
despite the manipulation, these studies are still quasi-experimental because the researchers do not
randomly assign participants to groups. Data may be collected before and after an event, exposure,
intervention, or other independent variable, or all the data may be collected after all the variables of
interest have occurred. The researchers then identify one or more effects (dependent variables) and
examine the data by going back through time, detecting differences or seeking out associations.
Quasi-experimental research studies cannot establish causal relationships because the studies may
have biases or confounders. First, studies that lack random assignment may be biased.
Second, in other typical quasi-experimental studies, the researchers lack control over all variables. For
example, the researchers observe the effect of a
factor that has already occurred, such as an intervention, a diagnostic or therapeutic procedure, a
risk factor, an exposure, an event, or some other factor. The investigators are not themselves
manipulating the factor, and this lack of control allows the possible introduction of a confounding
variable.
Quasi-experimental studies on health informatics and HIM topics have been conducted to compare
physical activity and mental and cardiometabolic health between people living near a green space and
people living at a distance from a green space; evaluate the effectiveness of HIT in medication safety;
assess the effectiveness of decision support systems for various diseases and functions; and evaluate
the impact of technologies, such as cellular phones, text messaging, delivery of telehealth services,
social media, and online educational module programs on physical and health activities, knowledge and
performance, treatment and medication adherence, engagement, health outcomes, and other
outcomes. Study data in these quasi-experiments were often collected through questionnaires and
surveys.
A quasi-experimental study conducted by Bottorff and colleagues investigated the potential of an
online, men-centered smoking cessation intervention to engage men in reducing and quitting smoking
(Bottorff et al. 2016, 1–2). The pretest-posttest study included one group of 117 male smokers. Data
were collected through online questionnaires. The study's results revealed that the intervention's
website had the potential to serve as a self-guided smoking cessation resource. Predictors of the
number of times a participant attempted to quit were the number of resources he used on the website
and the subject's confidence in his ability to quit. Most of the men reported they had quit smoking for 24
hours or longer since using the intervention's website. The researchers reported that limitations of their
study included the sample's potential failure to represent all male smokers, the possibility that self-reported measures introduced recall and reporting bias, and the lack of verification of smoking cessation.
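The one-group pretest-posttest logic used in studies like this can be sketched as follows. The scores are invented for illustration; a real study would also test whether the change is statistically significant.

```python
import statistics

# Invented pretest and posttest scores for a single group of participants
# (a one-group pretest-posttest design: no control group, no randomization).
pretest = [4, 6, 5, 7, 3, 5]
posttest = [6, 7, 7, 8, 5, 6]

# The mean change suggests, but cannot establish, an intervention effect:
# without a control group, confounding variables remain possible.
changes = [post - pre for pre, post in zip(pretest, posttest)]
mean_change = statistics.mean(changes)
```

This is why such designs indicate only that a causal relationship could exist: any event occurring between pretest and posttest is a rival explanation for the change.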
Time Frame as an Element of Research Design
Time frame is an element of all seven types of research design. There are two pairs of time frames:
retrospective and prospective, and cross-sectional and longitudinal.
Retrospective Time Frame Versus Prospective Time Frame
Research that uses a retrospective time frame looks back in time on that which has already
occurred. For example, using a retrospective time frame, researchers could conduct a study about early
adopters of HIT. The researchers could ask the early adopters to list factors or reconstruct events that
led to their early adoption. For some types of questions, such as those related to historic events, a
retrospective design is the only possible design.
Research that uses a prospective time frame follows subjects into the future to examine
relationships between variables and later occurrences. For example, researchers could identify
individuals who have successfully increased their physical activity using activity trackers. The
prospective time frame would then follow these individuals or subjects into the future to see what
occurs.
Cross-Sectional Time Frame Versus Longitudinal Time Frame
A cross-sectional time frame collects or reviews the data at one point in time. For example,
researchers could conduct a study for a professional association on the characteristics of its members,
such as age, gender, job titles, educational attainment, and opinions of the association's web page.
Because cross-sectional studies are snapshots, they may potentially collect data for an entirely
unrepresentative time period.
A longitudinal time frame collects data from participants in three or more waves (phases) to
compare changes in health, satisfaction, effectiveness, perceptions, and other variables of interest
(Ployhart and Ward 2011, 414). Conducting a study using a longitudinal time frame is complex and
requires more explanation than the other time frames (Ployhart and Vandenberg 2010, 95). The
duration of longitudinal studies can be days, a week, months, years, or lifetimes. The study duration and
the frequency and timing of the data collections (observations) depend on the topic. For example,
researchers who conducted a usability study found that users' issues and problems changed as they
gained experience with the technology (Rasmussen and Kushniruk 2013, 1068).
The Nurses' Health Study is an example of a longitudinal study. Since 1976, the Nurses' Health
Study has followed the health of more than 275,000 nurses over their lifetimes (Nurses' Health Study
2016). Similar examples are cancer registries and other disease registries that collect data on patients
from the diagnosis of their condition through their deaths.
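The structure of longitudinal data—repeated measurements of the same participants across waves—can be sketched as below. The participant IDs and values are invented; three waves reflects the minimum that Ployhart and Ward (2011) describe for studying change.

```python
# Invented longitudinal data: the same three participants measured in
# three waves (phases) of data collection.
waves = {
    "wave1": {"p1": 120, "p2": 135, "p3": 150},
    "wave2": {"p1": 118, "p2": 130, "p3": 151},
    "wave3": {"p1": 115, "p2": 128, "p3": 149},
}

def change_by_participant(waves):
    """Change in each participant's measurement from the first to the last wave."""
    first, last = waves["wave1"], waves["wave3"]
    return {p: last[p] - first[p] for p in first}

changes = change_by_participant(waves)
```

Contrast this with a cross-sectional design, which would collect only one of these waves and therefore could not measure within-person change.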
Review Questions
1. What are the purposes of health informatics research and HIM research?
2. What is a theory, and what is the relationship between theories and research frames? What is parsimony and how does it relate to theories?
3. A health informatics researcher is designing a new storage paradigm for heterogeneous data (such as images, audio, structured data fields, and unstructured free text) and conducting research on its functioning. Is the researcher conducting basic research or applied research?
4. Provide two potential research questions for which historical research would be the appropriate research design. Suggest two primary sources for each potential research question. How do primary sources differ from secondary sources?
5. Consider the following statement: “Descriptive research studies serve no purpose in contemporary health informatics research.” Why do you agree or disagree with this statement?
6. What is the difference between a positive (direct) relationship and a negative (inverse) relationship?
7. What kinds of research questions could be best answered by observational research?
8. “Control” is the managerial function in which performance is monitored in accordance with organizational policies and procedures. How are evaluation research and the managerial function of control similar? How are they different?
9. What are the four features of experimental research? Explain how experimental research is differentiated from quasi-experimental research.
10. The researchers hypothesized that active engagement with a personal health record positively affects life-long physical fitness. Why would the researchers choose to conduct a longitudinal study?
Application Exercises
1. Go to the website of the Flint Water Study organization and review its report “Chronology of MDHHS E-mails, along with Select MDHHS/MDEQ/MI State Public Statements Pertaining to Blood Lead Levels of Kids in Michigan, Primarily in Flint” (Edwards et al. 2015).
In which type of research design(s) could the documents in the “Chronology” be used? Why are the documents categorized as primary sources? Identify additional examples of primary sources in the Final Report of the Flint Water Advisory Task Force (Flint Water Task Force 2016).
2. In health informatics, early reports of data mining of electronic records date to the 1960s (Collen 1967, 8; Collen et al. 1971, 142). As an example, read the article “Computer analyses in preventive health research” by Collen in which he reports on an automated multiphasic screening program (Collen 1967). More recently, Dr. Mona Hanna-Attisha and her colleagues mined the electronic health records of all children younger than five years who previously had a blood lead level processed through the Hurley Medical Center's laboratory (Hanna-Attisha et al. 2016, 294). The Hurley Medical Center's laboratory processes the blood lead level tests for most of the children living in Genesee County, where Flint, MI, is located. Blood lead levels were obtained for both before (2013) and after (2015) the change in water source for Flint.
What is the time frame of Collen's research? How many patients were in the program as shown in table 1, part A of the article? What data did the screenings obtain for the data acquisition center of the preventive health services research program?
What is the time frame of the study by Hanna-Attisha and her colleagues? How many records (n) were included in the study? What data did the researchers obtain from the electronic health records?
3. Dr. Mona Hanna-Attisha and her colleagues, after obtaining the blood lead levels, assessed the percentage of elevated blood lead levels in both time periods (2013 versus 2015) and identified geographical locations through spatial analysis (analyzing locations to explain diseases or other phenomena; geospatial analysis is discussed in chapter 7). Figure 2 in the article is the spatial analysis.
Mapping (creating a map of disease sites or other phenomena) and spatial analysis of toxic substances and diseases are not new. In 1854, John Snow drew maps of a London district indicating the locations of cholera deaths and water pumps (Brody et al. 2000, 65). Snow was able to show that cholera mortality was related to the water source. He used the maps as visualization tools when he presented the results of his investigations (Brody et al. 2000, 68). (See figure 1.4 for Snow's map.)
Figure 1.4 Snow's map of cholera mortality and water pumps
Source: Snow 1855.
The Agency for Toxic Substances and Disease Registry (ATSDR) is mandated by Congress
to conduct public health assessments of waste sites, provide health consultations concerning
specific hazardous substances, conduct health surveillance, maintain registries, respond to
emergency releases of hazardous substances, apply research in support of public health
assessments, develop and disseminate information, and provide education and training
concerning hazardous substances (ATSDR 2016). As a part of these mandated functions, the
ATSDR provides spatial analyses by state showing where the ATSDR has found toxic
substances or diseases during its health consultations (HCs) or public health assessments
(PHAs).
Go to the website of the ATSDR (2016). Click “Lead” under “Most Viewed Toxic Substances.”
Click on the Substances Map on the left side of the web page. Then, in “Select a Substance,”
select “lead” in the drop-down menu. Then, in “Select a State,” select your state in the drop-down
menu and click “View Map.” Has the ATSDR found lead in your state? Where?
References
Abma, T.A. and R.E. Stake. 2014. Science of the particular: An advocacy of naturalistic case
study in health research. Qualitative Health Research 24(8):1150–1161.
Agency for Healthcare Research and Quality (AHRQ). 2016a. Technology Assessment Program.
http://www.ahrq.gov/research/findings/ta/index.html.
Agency for Healthcare Research and Quality. 2016b. What Is Comparative Effectiveness
Research?
http://effectivehealthcare.ahrq.gov/index.cfm/what-is-comparative-effectivenessresearch1.
Alpert, J.M., A.H. Krist, R.A. Aycock, and G.L. Kreps. 2016. Applying multiple methods to
comprehensively evaluate a patient portal's effectiveness to convey information to patients. Journal
of Medical Internet Research 18(5):e112.
American Health Information Management Association (AHIMA). 2016. What Is Health
Information? http://www.ahima.org/careers/healthinfo.
American Historical Association. 2011. Statement on Standards of Professional Conduct.
https://www.historians.org/jobs-and-professional-development/statements-and-standards-of-theprofession/statement-on-standards-of-professional-conduct.
Ash, J.S. and D.F. Sittig. 2015 (January). Origins of These Conversations with Medical Informatics
Pioneers. In: Conversations with Medical Informatics Pioneers: An Oral History Project. Edited by
Ash, J.S., D.F. Sittig, and R.M. Goodwin. U.S. National Library of Medicine, Lister Hill National
Center for Biomedical Communications. https://lhncbc.nlm.nih.gov/publication/pub9119.
Atkinson, M. 2012. Key Concepts in Sport and Exercise Research Methods. Thousand Oaks,
CA: Sage Publications.
Bandura, A. 1982. Self-efficacy mechanism in human agency. American Psychologist 37(2):122–
147.
Bickman, L. and D.J. Rog. 2009. The Sage Handbook of Applied Social Research Methods, 2nd
ed. Thousand Oaks, CA: Sage Publications.
Bottorff, J.L., J.L. Oliffe, G. Sarbit, P. Sharp, C.M. Caperchione, L.M. Currie, J. Schmid, M.H.
Mackay, and S. Stolp. 2016. Evaluation of QuitNow Men: An online, men-centered smoking
cessation intervention. Journal of Medical Internet Research 18(4):e83.
Brody, H., M.R. Rip, P. Vinten-Johansen, N. Paneth, and S. Rachman. 2000. Map-making and
myth-making in Broad Street: The cholera epidemic, 1854. Lancet 356(9223):64–68.
Campbell, D.T. and J.C. Stanley. 1963. Experimental and Quasi-Experimental Designs for
Research. Chicago: Rand McNally.
Carter, N., D. Bryant-Lukosius, A. DiCenso, J. Blythe, and A.J. Neville. 2014. The use of
triangulation in qualitative research. Oncology Nursing Forum 41(5):545–547.
Centers for Disease Control and Prevention (CDC). 2012. Introduction to Program Evaluation for
Public Health Programs: A Self-Study Guide. https://www.cdc.gov/eval/guide.
Comte, A. 1853. The Positive Philosophy of Auguste Comte. Translated by H. Martineau. New
York: D. Appleton and Co.
Crabtree, A., M. Rouncefield, and P. Tolmie. 2012. Doing Design Ethnography. London: Springer-Verlag.
Davoudi, S., J.A. Dooling, B. Glondys, T.D. Jones, L. Kadlec, S.M. Overgaard, K. Ruben, and A.
Wendicke. 2015. Data Quality Management Model (2015 update). Journal of AHIMA 86(10):62–65.
DeLone, W.H. and E.R. McLean. 2003. The DeLone and McLean model of information systems
success: A ten-year update. Journal of Management Information Systems 19(4):9–30.
Dillon, A. and M. Morris. 1996. User Acceptance of New Information Technology: Theories and
Models. In Annual Review of Information Science and Technology, vol. 31. Edited by M. Williams.
Medford, NJ: Information Today: 3–32.
Ebrahim, G.J. 1999. Simple Linear Regression. Chapter 2 in Research Method II: Multivariate
Analysis. Oxford Journals, Journal of Tropical Pediatrics, online only area.
http://www.oxfordjournals.org/our_journals/tropej/online/ma_chap2.pdf.
Eden, J., L. Levit, A. Berg, and S. Morton. 2011. Finding What Works in Health Care: Standards
for Systematic Reviews. Washington, DC: National Academies Press.
Ekeland, A.G., A. Bowes, and S. Flottorp. 2012. Methodologies for assessing telemedicine: A
systematic review of reviews. International Journal of Medical Informatics 81(1):1–11.
Farrar, B., G. Wang, H. Bos, D. Schneider, H. Noel, J. Guo, L. Koester, A. Desai, K. Manson, S.
Garfinkel, A. Ptaszek, and M. Dalldorf. 2016. Evaluation of the Regional Extension Center Program,
Final Report. Washington, DC: Office of the National Coordinator for Health Information Technology.
https://www.healthit.gov/sites/default/files/Evaluation_of_the_Regional_Extension_Center_Program_
Final_Report_4_4_16.pdf.
Fenton, S. H., F. Manion, K. Hsieh, and M. Harris. 2015. Informed consent: Does anyone really
understand what is contained in the medical record? Applied Clinical Informatics 6(3):466–477.
Ford, J.P., L. Huang, D.S. Richards, E.P. Ambinder, and J.L. Rosenberger. 2016. R.A.P.I.D. (root
aggregated prioritized information display): A single screen display for efficient digital triaging of
medical reports. Journal of Biomedical Informatics 61:214–223.
Forrestal, E. 2016. Research Methods. Chapter 19 in Health Information Management:
Concepts, Principles, and Practice, 5th ed. Edited by P. K. Oachs and A.L. Watters. Chicago: AHIMA
Press.
Fox, A., G. Gardner, and S. Osborne. 2015. A theoretical framework to support research of
health service innovation. Australian Health Review. 39(1):70–75.
Friedman, C.P. 2013. What informatics is and isn't. Journal of the American Medical Informatics
Association 20(2):224–226.
Gass, S.I. and M.C. Fu, eds. 2013. Encyclopedia of Operations Research and Management
Science, 3rd ed. New York: Springer.
Gorod, A., B. Sauser, and J. Boardman. 2008. System-of-systems engineering management: A
review of modern history and a path forward. IEEE Systems Journal 2(4):484–499.
Guba, E. and Y. Lincoln. 1989. Fourth Generation Evaluation. Newbury Park, CA: Sage.
Hanna-Attisha, M., J. LaChance, R.C. Sadler, and A. Champney Schnepp. 2016. Elevated blood
lead levels in children associated with the Flint drinking water crisis: A spatial analysis of risk and
public health response. American Journal of Public Health 106(2):283–290.
http://www.ncbi.nlm.nih.gov/pubmed/26691115.
Harvard University. 2016. The Football Players Health Study at Harvard University. Football
Players Health Study in Motion: The New App. https://footballplayershealth.harvard.edu/joinus/teamstudy-app.
Hasan, M.N. 2016. Positivism: To what extent does it aid our understanding of the contemporary
social world? Quality and Quantity 50(1):317–325.
Holloway, I. and S. Wheeler. 2010. Qualitative Research in Nursing and Healthcare, 3rd ed.
Ames, IA: Wiley-Blackwell.
In, H. and J.E. Rosen. 2014. Primer on outcomes research. Journal of Surgical Oncology
110(5):489–493.
Ingham-Broomfield, R. 2014. A nurse's guide to quantitative research. Australian Journal of
Advanced Nursing 32(2):32–38.
International Network of Agencies for Health Technology Assessment (INAHTA). 2016. HTA
Glossary. http://htaglossary.net/HomePage.
International Organization for Standardization (ISO). 2013. Usability of Consumer Products and
Products for Public Use—Part 2: Summative Test Method.
https://www.iso.org/obp/ui/#iso:std:iso:ts:20282:-2:ed-2:v1:en.
Jacobs, L. 2009. Interview with Lawrence Weed, MD—the father of the problem-oriented medical
record looks ahead. Permanente Journal 13(3): 84–89.
Karnick, P.M. 2013. The importance of defining theory in nursing: Is there a common
denominator? Nursing Science Quarterly 26(1):29–30.
Kisely, S. and E. Kendall. 2011. Critically appraising qualitative research: A guide for clinicians
more familiar with quantitative techniques. Australasian Psychiatry 19(4):364–367.
Lee, S., and C.A.M. Smith. 2012. Criteria for quantitative and qualitative data integration: Mixed-methods research methodology. CIN: Computers, Informatics, Nursing 30(5):251–256.
Liebe, J.D., J. Hüsers, and U. Hübner. 2016. Investigating the roots of successful IT adoption
processes—an empirical study exploring the shared awareness-knowledge of directors of nursing
and chief information officers. BMC Medical Informatics and Decision Making 16(10):1–13.
Malt, B.C. and M.R. Paquet. 2013. The real deal: What judgments of really reveal about how
people think about artifacts. Memory and Cognition 41(3):354–364.
Mason, T.A. 2016 (April 12). Regional extension centers—essential on-the-ground support for
EHR adoption. Health IT Buzz. https://www.healthit.gov/buzz-blog/regional-extension-centers/regional-extension-centers-essential-ground-support-ehr-adoption.
Meeks, D.W., M.W. Smith, L. Taylor, D.F. Sittig, J.M. Scott, and H. Singh. 2014. An analysis of
electronic health record-related patient safety concerns. Journal of the American Medical Informatics
Association 21(6):1053–1059.
Moher, D., L. Shamseer, M. Clarke, D. Ghersi, A. Liberati, M. Petticrew, P. Shekelle, L.A.
Stewart, and the PRISMA-P Group. 2015. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews 4(1):1–9.
Morse, J.M. 2015. Critical analysis of strategies for determining rigor in qualitative inquiry.
Qualitative Health Research 25(9):1212–1222.
National Center for Advancing Translational Sciences (NCATS). 2015. Clinical and Translational
Science Awards Program. https://ncats.nih.gov/files/ctsa-factsheet.pdf.
National Institutes of Health and Agency for Healthcare Research and Quality (NIH/AHRQ). 2015
(December 17). Advanced Notice of Coming Requirements for Formal Instruction in Rigorous
Experimental Design and Transparency to Enhance Reproducibility: NIH and AHRQ Institutional
Training Grants, Institutional Career Development Awards, and Individual Fellowships. Notice NOT-OD-16-034. http://grants.nih.gov/grants/guide/notice-files/NOT-OD-16-034.html.
National Institutes of Health (NIH) Office of Extramural Research. 2016. Grants and Funding
Glossary. http://grants.nih.gov/grants/glossary.htm.
Nelson, R. and N. Staggers. 2014. Health Informatics: An Interprofessional Approach. St. Louis,
MO: Mosby.
Norman, D.A. and S.W. Draper. 1986. User Centered System Design: New Perspectives on
Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.
Nurses' Health Study. 2016. http://www.nurseshealthstudy.org.
Office of Research Integrity (ORI), US Department of Health and Human Services. 2015. Basic
Research Concepts for New Research Staff, Research Design: Descriptive Studies.
http://ori.hhs.gov/education/products/sdsu/res_des1.htm.
Øvretveit, J. 2014. Evaluating Improvement and Implementation for Health. Maidenhead, UK:
McGraw-Hill Education.
Parker, J. and S. Ashencaen Crabtree. 2014. Covert research and adult protection and
safeguarding: An ethical dilemma? Journal of Adult Protection 16(1):29–40.
Ployhart, R.E. and R.J. Vandenberg. 2010. Longitudinal research: The theory, design, and
analysis of change. Journal of Management 36(1):94–120.
Ployhart, R.E. and A. Ward. 2011 (December). The “quick start guide” for conducting and
publishing longitudinal research. Journal of Business and Psychology 26(4):413–422.
Prescott, J., N.J. Gray, F.J. Smith, and J.E. McDonagh. 2015. Blogging as a viable research
methodology for young people with arthritis: A qualitative study. Journal of Medical Internet
Research 17(3):e61.
PricewaterhouseCoopers (PWC) Health Research Institute. 2015. Top Health Industry Issues of
2016: Thriving in the New Health Economy. https://www.pwc.com/us/en/health-industries/top-healthindustry-issues/assets/2016-us-hri-top-issues.pdf.
Rasmussen, R. and A. Kushniruk. 2013. Digital video analysis of health professionals'
interactions with an electronic whiteboard: A longitudinal, naturalistic study of changes to user
interactions. Journal of Biomedical Informatics 46(6):1068–1079.
Reason, J. 2000. Human error: Models and management. BMJ 320(7237):768–770.
Rhee, H., M.J. Belyea, M. Sterling, and M.F. Bocko. 2015. Evaluating the validity of an
automated device for asthma monitoring for adolescents: Correlational design. Journal of Medical
Internet Research 17(10):e234.
Rogers, E.M. 2003. Diffusion of Innovations, 5th ed. New York: Free Press.
Salazar, L.F., R.A. Crosby, and R.J. DiClemente. 2015. Research Methods in Health Promotion,
2nd ed. San Francisco, CA: Jossey-Bass.
Sandelowski, M. 2014. Unmixing mixed-methods research. Research in Nursing and Health
37(1):3–8.
Shapira, Z. 2011. I've got a theory paper—do you?: Conceptual, empirical, and theoretical
contributions to knowledge in the organizational sciences. Organization Science. 22(5):1312–1321.
Sheehan, B., L.E. Nigrovic, P.S. Dayan, N. Kuppermann, D.W. Ballard, E. Alessandrini, L. Bajaj,
H. Goldberg, J. Hoffman, S.R. Offerman, D.G. Mark, M. Swietlik, E. Tham, L. Tzimenatos, D.R.
Vinson, G.S. Jones, and S. Bakken. 2013. Informing the design of clinical decision support services
for evaluation of children with minor blunt head trauma in the emergency department: A
sociotechnical analysis. Journal of Biomedical Informatics 46(5):905–913.
Shi, L. 2008. Health Services Research Methods, 2nd ed. Albany, NY: Delmar.
Singleton, R.A., Jr. and B.C. Straits. 2010. Approaches to Social Research, 5th ed. New York:
Oxford University Press.
Sittig, D.F. and H. Singh. 2010. A new sociotechnical model for studying health information
technology in complex adaptive healthcare systems. Quality and Safety in Health Care
19(Suppl3):i68–i74.
Snow, J. 1855. On the Mode of Communication of Cholera, 2nd ed. London: Churchill.
Spicker, P. 2011. Ethical covert research. Sociology 45(1):118–133.
Stephens, J., R. Levine, A.S. Burling, and D. Russ-Eft. 2014 (October). An Organizational Guide
to Building Health Services Research Capacity. Final Report. AHRQ Publication No. 11(12)-0095-EF.
Rockville, MD: Agency for Healthcare Research and Quality.
http://www.ahrq.gov/funding/training-grants/hsrguide/hsrguide.html.
Tesch, R. 1990. Qualitative Research: Analysis Types and Software Tools. New York: Falmer
Press.
Thompson, R.L., C.A. Higgins, and J.M. Howell. 1994. Influence of experience on personal
computer utilization: Testing a conceptual model. Journal of Management Information Systems
11(1):167–187.
US Government Accountability Office (GAO). 2012. Designing Evaluations: 2012 Revision.
http://www.gao.gov/products/GAO-12-208G.
US National Library of Medicine (NLM). 2015. Medical Informatics Pioneers.
https://lhncbc.nlm.nih.gov/project/medical-informatics-pioneers.
Venkatesh, V., M.G. Morris, G.B. Davis, and F.D. Davis. 2003. User acceptance of information
technology: Toward a unified view. MIS Quarterly 27(3):425–478.
Whetton, S. and A. Georgiou. 2010. Conceptual challenges for advancing the sociotechnical
underpinnings of health informatics. Open Medical Informatics Journal 4:221–224.
Wilkinson, M. 2013. Testing the null hypothesis: The forgotten legacy of Karl Popper? Journal of
Sports Sciences 31(9):919–920.
Wilson, T.D. 1999. Models in information behavior research. Journal of Documentation
55(3):249–270.
Wyatt, J. 2010. Assessing and improving evidence based health informatics research. Studies in
Health Technology and Informatics 151:435–445.
Resources
Altheide, D.L. and J.M. Johnson. 1994. Criteria for Assessing Interpretive Validity in Qualitative
Research. Chapter 30 in Handbook of Qualitative Research. Edited by N.K. Denzin and Y.S. Lincoln.
Thousand Oaks, CA: Sage Publications: 485–499.
Agency for Toxic Substances and Disease Registry (ATSDR). 2016. https://www.atsdr.cdc.gov/.
Barnum, C.M. 2011. Usability Testing Essentials: Ready, Set … Test. Burlington, MA: Morgan
Kaufman.
Cameron, R. 2011. Mixed methods research: The five Ps framework. Electronic Journal of
Business Research Methods 9(2):96–108.
Cohen, D. and B. Crabtree. 2006. Qualitative Research Guidelines Project. Robert Wood
Johnson Foundation. http://www.qualres.org.
Collen, M.F. 1967. Computer analyses in preventive health research. Methods of Information in Medicine 6(1):8–14. http://methods.schattauer.de/en/contents/archivepremium/issue/1311/manuscript/15365/show.html.
Collen, M.F., L.S. Davis, and E.E. Van Brunt. 1971. The computer medical record in health screening. Methods of Information in Medicine 10(3):138–142. http://methods.schattauer.de/en/contents/archivepremium/issue/1292/manuscript/15206.html.
Edwards, M., S. Roy, W. Rhoads, E. Garner, and R. Martin. 2015. Chronological Compilation of
E-Mails from MDHHS Freedom of Information Act (FOIA) Request #2015-557.
http://flintwaterstudy.org/wp-content/uploads/2015/12/MDHHS-FOIA.pdf.
Flint Water Task Force. 2016 (March). Final Report. http://www.michigan.gov/documents/snyder/FWATF_FINAL_REPORT_21March2016_517805_7.pdf.
Khong, P.C., E. Holroyd, and W. Wang. 2015. A critical review of theoretical frameworks and the
conceptual factors in the adoption of clinical decision support systems. CIN: Computers, Informatics,
Nursing 33(12):555–570.
Shortliffe, E.H. and M.S. Blois. 2014. Biomedical informatics: The science and the pragmatics.
Chapter 1 in Biomedical Informatics: Computer Applications in Health Care and Biomedicine, 4th ed.
Edited by Shortliffe, E.H. and J.J. Cimino. New York: Springer Verlag: 3–37.
US Department of Health and Human Services (HHS). 2016. What and Why of Usability.
https://www.usability.gov/what-and-why/index.html.
2
Survey Research
Valerie J. Watzlaf, PhD, MPH, RHIA, FAHIMA
Learning Objectives
Describe survey research and how it is used in health informatics.
Display and discuss examples of structured (closed-ended) and unstructured (open-ended)
questions used in health informatics research.
Demonstrate the appropriate organization of survey questions in relation to content, flow,
design, scales, audience, and appropriate medium.
Apply appropriate statistics to measure the validity and reliability of the survey questions.
Plan and carry out the pilot testing of the questionnaire, whether it is used as a self-survey or
interview instrument.
Calculate the appropriate sample size for the survey instrument.
Select appropriate follow-up procedures to achieve a good response rate.
Describe the statistics that can be generated from data collected via a survey instrument.
Key Terms
Advisory committee
Census survey
Closed-ended (structured) questions
Cluster sampling
Convenience sample
Construct validity
Criterion-related validity
Cronbach’s alpha
Face validity
Factor analysis
Health Information National Trends Survey (HINTS)
Institutional Review Board (IRB)
Interval scale
National Center for Health Statistics (NCHS)
National Health Interview Survey (NHIS)
Nominal scale
Open-ended (unstructured) questions
Ordinal scale
Pilot test
Prevarication bias
Ratio scale
Recall bias
Reliability
Response rate
Sample size
Selection bias
Simple random sampling
Stratified random sampling
Survey research
Systematic random sampling
Test-retest reliability
Validity
Web-based survey
Survey research is a method for collecting research data by asking questions, with the responses gathered by mail, website, mobile app, telephone, fax, e-mail, or text message. It is one of the most common types of research used in health informatics. Because much of the research performed in health informatics is still new and emerging, surveys are often used when very little is known about a particular topic. The survey method allows the researcher to explore and describe what is occurring at a particular point in time or during a specific time period.
In survey research, the researcher chooses a topic of study and begins to formulate criteria that will
help develop questions he or she may have about that topic. The survey can explore, for example, a
certain disease, community, organization, culture, health information system, or type of software. Often,
a random sample of participants from an appropriate population is chosen to answer standardized
questions. The survey questionnaire can be completed directly by the participant, with directions
included within the questionnaire or cover letter, or it can be administered by mail, online, through a
mobile app, by fax, or in person. Surveys also can be administered via an interview either by phone,
over the Internet, or in person. The researcher needs to weigh all the variables at hand to determine the
best method of administering the survey. The researcher’s main goal is to collect the most appropriate
and accurate data that will answer the questions most pertinent to the research topic.
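The random selection of participants mentioned above can be sketched in a few lines of code. This is a minimal illustration of simple random sampling without replacement; the sampling frame of facility identifiers and the sample size are invented for the example.

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw a simple random sample of n participants without replacement."""
    rng = random.Random(seed)  # seed is fixed here only to make the example repeatable
    return rng.sample(population, n)

# Hypothetical sampling frame: 500 facility IDs, from which 50 are surveyed
population = [f"facility-{i:03d}" for i in range(500)]
sample = simple_random_sample(population, n=50, seed=42)
print(len(sample))  # 50
```

Stratified, systematic, and cluster sampling (defined in the key terms) follow the same principle but partition or order the sampling frame before selection.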
Whether developing a new survey instrument or adapting an existing one, the researcher will need
to address many issues, such as the content of the survey, the audience or respondents, how the
survey will be administered, whether to send it to a sample of the population or the entire population,
and what type of statistics will be generated.
Then, the researcher should consider whether incentives will be used to increase the response rate,
how to maintain confidentiality of the responses, and how to minimize bias (error within the study
design). With survey research, the study design may be limited by multiple kinds of bias, including
nonresponse bias (the survey data are only based on responses to the survey and do not reflect what
the nonrespondents might have answered), recall bias (respondents may not remember correctly so
their answers will be inaccurate), and prevarication bias (respondents may exaggerate or lie in their
answers to the questions, especially when answering questions related to salary or other sensitive
matters). Nonresponse bias and prevarication bias are discussed later in this chapter. Recall bias is
discussed in detail in chapter 5.
Once the data are collected, researchers must perform appropriate statistical analysis of the data.
The results of this analysis can be displayed in tables and graphs and reported to interested audiences.
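As a simple illustration of the kind of descriptive analysis described above, the sketch below computes a response rate and a frequency table for one closed-ended question. The question wording, response options, and counts are all hypothetical.

```python
from collections import Counter

def summarize_responses(responses, surveys_sent):
    """Return the response rate and frequency counts for one question.

    Nonrespondents are represented as None and excluded from the counts.
    """
    answered = [r for r in responses if r is not None]
    response_rate = len(answered) / surveys_sent
    frequencies = Counter(answered)
    # Percentages for each response option, suitable for a table or bar graph
    percents = {option: count / len(answered) * 100
                for option, count in frequencies.items()}
    return response_rate, frequencies, percents

# 10 surveys sent; 8 returned answers to "Does your facility use an EHR?"
responses = ["Yes", "Yes", "No", "Yes", None, "Yes", "No", "Yes", None, "Yes"]
rate, freq, pct = summarize_responses(responses, surveys_sent=10)
print(rate)         # 0.8
print(freq["Yes"])  # 6
```

Note that the response rate is computed against all surveys sent, not just those returned, which is what makes nonresponse bias visible in the first place.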
Investigators have conducted survey research studies in many health informatics areas, such as electronic health records (EHRs), coding and classification systems, and population health.
Survey research in health informatics is useful and will continue to be conducted with vigor as health
informatics applications advance.
The following real-world case demonstrates how the Centers for Disease Control and Prevention
(CDC) and the National Center for Health Statistics (NCHS), which is a part of the CDC and provides
data and statistics to identify and address health issues within the United States, use survey research to
compile useful statistics.
Real-World Case