Response to Task Order Request for Proposal (TORP) – RMADA-2015-0002
Evaluation of the Comprehensive End-Stage
Renal Disease (ESRD) Care (CEC) Initiative
TECHNICAL PROPOSAL
January 05, 2015
Submitted to:
Eddie Woodard & Erin Murphy Colligan
Centers for Medicare & Medicaid Services
7500 Security Blvd.
Baltimore, MD
E-mail: Eddie.Woodard@cms.hhs.gov | Erin.Colligan@cms.hhs.gov
Submitted by:
American Institutes for Research, Health and Social Development Program
Dun and Bradstreet Number:
04-173-3197
Tax Identification Number (TIN):
25-0965219
This proposal includes proprietary and business confidential data and shall not be disclosed outside the Government and
shall not be duplicated, used, or disclosed—in whole or in part—for any purpose other than to evaluate this proposal.
However, if an agreement is awarded to this offeror as a result of—or in connection with—the submission of these data, the
Government shall have the right to duplicate, use, or disclose the data to the extent provided in the resulting agreement.
This restriction does not limit the Government’s right to use the information contained in these data if they are obtained from
another source without restriction. Notice of Trademark: “American Institutes for Research” and “AIR” are registered
trademarks. All other brand, product, or company names are trademarks or registered trademarks of their respective
owners.
American Institutes for Research
1000 Thomas Jefferson Street NW, Washington, DC 20007-3835 | 202.403.5000 | TTY 877.334.3499 | www.air.org
Evaluation of the Comprehensive End-Stage Renal Disease (ESRD) Care (CEC)
Initiative
January 05, 2015
Author(s): Julie Jacobson Vann, PhD
Douglas D. Bradham, DrPH
Tamika Cowans, MPP
Marisa E. Domino, PhD
Brandy Farrar, PhD
Elizabeth Frentzel, MPH
Jennifer Flythe, MD, MPH, FASN
Steven Garfinkel, PhD
Daniel Harwell, MPH
Tandrea Hilliard, MPH
Vaibhav Jain, MPH
Erin Kavanaugh
Sean McClellan, PhD
HarmoniJoie Noel, PhD
1000 Thomas Jefferson Street NW
Washington, DC 20007-3835
202.403.5000 | TTY 877.334.3499
www.air.org
Copyright © 2015 American Institutes for Research. All rights reserved.
January 2015
Contents
Page
Letter of Transmittal
Chapter 1 - Statement of the Contract Objectives and Technical Approach ...................................1
1.1 Contract Objectives ................................................................................................................1
1.1.1. Introduction and Background .............................................................................................1
1.1.2. Key Challenges in the Evaluation ......................................................................................2
1.1.3 AIR Team ............................................................................................................................4
1.2 Technical Approach ................................................................................................................6
1.2.1 Task 1: Project Management and Administration ...............................................................6
1.2.2 Task 2: Prepare the Evaluation Design Report ....................................................................8
1.2.3 Task 3: Beneficiary Surveys ..............................................................................................15
1.2.4 Task 4: Data Analysis (All Project Years) .........................................................................17
1.2.5 Task 5: Develop Quarterly Reports of ESCO Performance (All Project Years) ...............22
1.2.6 Task 6: Annual Reports .....................................................................................................24
1.2.7 Task 7: Qualitative Data Collection (All Project Years) ...................................................24
1.2.8 Task 8: Observe and Participate in the Learning Network Process for ESCOs and Prepare Reports (All Project Years) ...........................................................................................27
1.2.9 Task 9: Prepare and Deliver Analytic Files .......................................................................28
Chapter 2 – Personnel Qualifications ............................................................................................29
Chapter 3 – Management Plan and Facilities ................................................................................35
3.1 Project Management and Organization .................................................................................35
3.2 Quality Assurance .................................................................................................................37
3.3 Plan for Effective Value Management ..................................................................................37
3.4 Corporate Capacity ................................................................................................................38
3.5 Subcontractor Management ..................................................................................................38
Chapter 4 - Past Performance of the Organization ........................................................................39
References ......................................................................................................................................46
Appendix A. Résumés .....................................................................................................................1
Appendix B. Xxxxx ..........................................................................................................................1
January 2, 2015
Eddie Woodard & Erin Murphy Colligan
Centers for Medicare & Medicaid Services
7500 Security Blvd.
Baltimore, MD
RE: RMADA-2015-0002
Dear Mr. Woodard & Dr. Murphy Colligan,
American Institutes for Research (AIR) is pleased to submit its proposal in response to the
Centers for Medicare & Medicaid Services’ (CMS) solicitation for the evaluation of the
Comprehensive End-Stage Renal Disease (ESRD) Care (CEC) Initiative. This work is central to
AIR’s mission to conduct and apply behavioral and social science research to improve people’s
lives and well-being, with a special emphasis on the disadvantaged. AIR is committed to the
promotion of better, more efficient, cost-effective, and more patient-centered health care through
rigorous health services and policy research.
We welcome the opportunity to submit this proposal and look forward to serving CMS on this
project. AIR and our teaming partners, The University of North Carolina at Chapel Hill (UNC),
DataStat, Precision Health Economics, and consultants Margarita Hurtado and Charles Ragin
(collectively, the AIR team), have extensive knowledge of the clinical complexities of ESRD,
quantitative and qualitative research expertise, rapid-cycle reporting capabilities, and
considerable survey development and administration experience. AIR offers the depth and
breadth of experience and capabilities necessary to rigorously evaluate new and emerging health
care programs such as the CEC Initiative in a dynamic marketplace.
We have attached electronic copies of the proposal as requested. This offer is good for 120 days
from the date of receipt thereof by the Government and is predicated upon the terms and
conditions of this solicitation. Please address technical questions to Dr. Julie Jacobson Vann,
Senior Researcher, who may be reached at 919-918-4503 (jjacobsonvann@air.org). Business
questions should be directed to Vickie Brooks, Contract Officer in AIR’s Contracts & Grants
Office, at 202-403-5886 (vbrooks@air.org). Our cost proposal will remain firm for 120 calendar
days from the date of receipt by the Government.
Sincerely,
Kristin Carman, Ph.D.
Vice President
Health Policy & Research
Health and Social Development Program
202–403–5090
kcarman@air.org
1000 Thomas Jefferson Street NW, Washington, DC 20007-3835 | 202.403.5000 | TTY 877.334.3499 | www.air.org
Chapter 1 - Statement of the Contract Objectives and
Technical Approach
1.1 Contract Objectives
1.1.1. Introduction and Background
The United States is home to more than 600,000 persons with end-stage renal disease (ESRD),
who commonly experience exceptionally high rates of morbidity and mortality and poor quality
of life1. Although mortality rates among persons with ESRD have decreased over the past 20
years, all-cause mortality rates for patients with ESRD who are 65 years and older are 7 times
higher than those for patients without ESRD. Persons requiring chronic dialysis spend nearly 12
days per year hospitalized, and once discharged, have a 36% risk of re-hospitalization within 30
days1. Persons with ESRD consumed 6.3% of the total 2011 Medicare budget, while
representing just 1.4% of Medicare enrollees1.
ESRD patients receive care from numerous health care providers and require several care
transitions across a variety of health care settings, including dialysis facilities, outpatient clinics,
hospitals, emergency departments (EDs), physicians’ offices, and skilled nursing facilities.
Coordinated and well-communicated care is essential for seamless transitions. Its absence
contributes to this population’s high utilization and mortality rates. Realigning incentives may
both improve outcomes and reduce Medicare expenditures for ESRD2.
The Centers for Medicare & Medicaid Services (CMS) developed the Comprehensive ESRD
Care Initiative (CECI) to improve care and health for persons with ESRD while reducing ESRD
care expenditures. This initiative aims to align financial incentives for providers to improve care
coordination by creating ESRD seamless care organizations (ESCOs). It builds on shared savings
models for Accountable Care Organizations (ACOs) developed previously by CMS3, in which
providers share savings and/or losses with CMS or take full risk for beneficiary expenditures.
Medicare has sponsored three ACO initiatives: the Medicare Shared Savings Program ACOs, the
Advance Payment Model, and the Pioneer ACOs. All the models include a novel financial
arrangement holding the ACO accountable for Medicare Part A and B total expenditures, a
method for attributing beneficiaries to ACOs, and quality benchmarks, but the specific
parameters have varied across the three models3. The Medicare Shared Savings Program
(MSSP), the largest of the initiatives, allowed ACOs to build on fee-for-service payments and
choose either shared savings only, or both shared savings and losses, in return for potentially
greater shared savings. In contrast, the Pioneer ACOs were required to share savings and losses,
and Advance Payment ACOs were fully capitated. Although some Medicare ACOs have faced
losses or dropped out of the program4, 64 out of 243 ACOs saved Medicare enough money to
earn bonuses in 2013, the second year of the program6. This MSSP model is receiving strong
interest from new applicants4,7. Preliminary analyses from the Pioneer model have indicated that
ACOs with varied organizational structures and market characteristics have achieved savings8, so
the potential for ESCOs is promising.
The CECI builds on the fee-for-service and one- or two-sided risk arrangements used for the
MSSP and Pioneer ACO models. However, CECI risk structures reflect the unique relationship
between ESRD patients and their nephrologists and dialysis facilities. In the CECI, the level of
risk assumed by ESCOs will depend on both their size and status as participant-owners. ESCOs
with large dialysis organizations (LDOs), defined as those with over 200 dialysis facilities, will
share up to 75% of savings and losses with CMS (i.e., the two-track model). ESCOs that include
only non-LDO (medium and small) facilities will share up to 50% of savings with CMS, but will
not have to share losses (i.e., the one-track model). The ESCO participants who are not owners,
including clinical partners other than dialysis facilities and nephrologists, are not required to
assume downside risk, but are not prohibited from doing so. This alignment of care quality and
financial incentives is intended to benefit patients, ESCO partners, and CMS through reduced
hospitalizations, re-hospitalizations, and duplicative testing, and through improved clinical care and
outcomes. Additionally, because of the necessity of constant interaction and communication
between patients with ESRD and their dialysis facilities and nephrologists, ESCOs may be in a
better position than other ACOs to effectively engage their patients3,9,10,11.
This project will evaluate CECI by identifying the most effective ESCO strategies for
simultaneously improving processes and care and reducing cost, while controlling for alternative
explanations and evaluating for unintended consequences. Section 3021 of the ACA gives the
Secretary of Health and Human Services the authority to expand the scope and duration of
effective models through rule making, rather than statutory change. This authority creates an
important opportunity to scale the CEC model rapidly if it proves to be effective. Thus, the rigor
of the evaluation and the credibility and defensibility of results are more critical than ever.
1.1.2. Key Challenges in the Evaluation
The clinical complexity of ESRD patients, the extensive variation we expect in ESCO
philosophy and organization, and the consequent methodological complexity of the evaluation
present several notable challenges.
Clinical Complexity of the Population. Medicare beneficiaries with ESRD typically have
multiple comorbidities, take over ten prescription medicines, and receive care from numerous
health care providers on a regular basis12,13. Such care complexities leave persons with ESRD
vulnerable to poorly coordinated care and its consequences, such as unnecessary hospitalizations
and ED visits, medication errors, and duplicative testing. Thus, integrated care delivery with a
focus on care quality and cost containment may improve clinical outcomes and reduce cost.
Understanding the clinical complexity of this population and their care needs is essential if we
are to ask the right questions; measure the most important program features, care processes, and
patient outcomes; interpret the data appropriately; draw meaningful conclusions; and provide
useful feedback to program participants to drive rapid improvement.
Additionally, unintended consequences may arise from changes to health system financial
incentives and payment systems, which may disproportionately affect vulnerable and
disadvantaged populations. For example, black patients on dialysis typically require higher
dosing of erythropoietin stimulating agents and vitamin D to achieve target metrics of anemia
and bone-mineral-disease management compared to non-blacks, leading to a 21% higher mean
monthly expenditure for bundled services among blacks14. While recent post-bundled payment
implementation analyses show no significant changes in management approach or laboratory
measures across races15, the potential for disparities resulting from payment reform exists.
Furthermore, another recent study found that facilities located in neighborhoods with higher
proportions of blacks had worse survival rates and were less likely to achieve hemoglobin and
dialysis adequacy targets compared to facilities with lower proportions of blacks16. In efforts to
develop financially advantageous ESCOs, LDOs may target ESCO development at facilities
with lower minority populations. For these reasons, it will be imperative that the evaluator
carefully assess differences between patients included in and excluded from ESCOs and consider
disparities in care that may result from ESCO practices.
ESCO Philosophy and Organization. Many ESCOs will likely draw on the primary care
concepts of the patient-centered medical home and the medical neighborhood17,18 to improve
care. In this model, the dialysis facility, as the medical home, would provide comprehensive
patient-centered care through a multi-disciplinary provider team and coordinate patient care
across a constellation of other health care system providers, the medical neighborhood. A well-functioning medical home and neighborhood have several important features: (1) clear
agreement on the respective roles of neighbors; (2) sharing of clinical information needed for
effective decision making and reducing duplication; (3) individualized care plans and tracking
procedures for complex patients; (4) continuity of care during patient transitions between
settings; and (5) strong community linkages that include both clinical and nonclinical services9.
Three additional organizational factors will be especially critical to understand:
• ESCO Participants. By design, ESCOs must include dialysis facilities and nephrologists.
Because of the complexity of patients with ESRD and the CECI’s focus on total costs of
care, ESCOs will likely bring in a broad set of Medicare providers and organizations19,
including the hospital, key sub-specialists, and others.
• Leadership. The most successful ESCO leaders will play many roles. They must take
responsibility for the partnership, empower partners, create an environment where
opinions are discussed openly, work to resolve conflicts, combine resources and skills of
partners, and help the collective group develop creative strategies to be successful20.
• Health IT. Successful ESCOs, including participant-owners and non-owners, may use
health IT, including care management information systems, to: (a) access up-to-date
records, (b) improve care coordination and transitions, (c) engage patients through online
patient portals, and (d) target care management tools through risk stratification.
Additionally, ESCOs with strong analytic capacity will be able to identify patients
quickly in times of need21.
Measuring and analyzing the variation in these patient and organizational attributes, in both the
rapid cycle and impact evaluation activities, will contribute greatly to our understanding of why
some ESCOs perform better than others and to helping ESCOs improve performance during and
following the demonstration.
Methodological Complexity. The complexities posed by ESRD and ESCO organization require
sophisticated evaluation design and execution if the results are to be credible. This evaluation
will have four key components: (1) impact analyses, (2) case study analyses, (3) rapid cycle
evaluation, and (4) support for the Learning Networks.
The impact evaluation will determine whether ESCOs achieve better care, better health, and
lower costs for their patients22,23 through a quarterly interrupted time series (ITS) analysis, with
and without comparison groups, depending on the research question and the data. We will use
propensity score methods to identify multiple comparison groups for each ESCO. Multivariate
regression analyses will be used to estimate the association between ESCO implementation and
risk-adjusted outcomes. Although patient-level outcomes are our ultimate interest, interventions
occur at the ESCO level, which requires hierarchical modeling to account for clustering. This
cross-sectional ITS model is our starting point, but we will also investigate alternatives in search
of the most robust statistical models for each question. Candidates include longitudinal ITS with
a panel of early enrollees, adjusting for attrition bias, and construction of episodes of care. An
annual cross-sectional ITS model will be used with patient survey data, using the first annual
survey as a baseline. We will also investigate the value of merging survey and claims data.
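To make this starting point concrete, a comparative ITS specification of the general form below could be estimated at the patient-quarter level. This is a sketch only; the final functional form, covariate set, and estimation approach will be specified in the Evaluation Design Report (Task 2).
\[
\begin{aligned}
Y_{ijt} = {}& \beta_0 + \beta_1 t + \beta_2\,\mathrm{ESCO}_j + \beta_3\,(\mathrm{ESCO}_j \times t) + \beta_4\,\mathrm{Post}_t + \beta_5\,\mathrm{Post}_t\,(t - t_0) \\
& + \beta_6\,(\mathrm{ESCO}_j \times \mathrm{Post}_t) + \beta_7\,(\mathrm{ESCO}_j \times \mathrm{Post}_t)(t - t_0) + \gamma' X_{ijt} + u_j + \varepsilon_{ijt}
\end{aligned}
\]
Here Y_ijt is the risk-adjusted outcome for patient i attributed to ESCO or comparison facility j in quarter t; ESCO_j indicates the intervention group; Post_t indicates quarters at or after the implementation quarter t_0; X_ijt is a vector of patient-level risk adjusters; u_j is a facility- or ESCO-level random effect that accounts for clustering; and the coefficients on the ESCO-by-Post terms capture the post-implementation level and slope changes for ESCO patients relative to the comparison trend.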
The case study data will be collected through focus groups and interviews with administrators,
medical directors, nephrologists, nurses, care managers, social workers, personal care
technicians, dietitians, patients, and caregivers. Consistent with our mixed methods approach,
qualitative findings about ESCOs and their activities will be used to draw conclusions about the
implementation process, code additional organizational and environmental covariates for the
statistical models, and understand the “why” of the statistical findings.
The rapid cycle evaluation and quarterly feedback will draw on the quarterly monitoring data and
the impact and case study findings as they become available24. As ESCOs evolve, monthly
telephone calls in year one, and quarterly calls thereafter, will enable us to assess changes in care
management processes and other innovations.
The quarterly and annual reports produced through the rapid cycle evaluation process will be
designed specifically to help ESCOs identify successful approaches. The ESCOs can then share
these findings with the Learning Network, thereby expediting the diffusion of successful
strategies. In addition to providing data and helping identify successful strategies, we will also
assess the effectiveness of the learning and diffusion process and provide ongoing feedback on
how the learning networks themselves can be more effective.
In sum, the CEC evaluation demands an evaluation contractor who understands the clinical
complexity of ESRD patients and the clinical care landscape, and who has the expertise to
manage varied quantitative and qualitative methodological challenges inherent to conducting this
evaluation. AIR has brought together an exceptional team with this needed capacity, as
discussed below.
1.1.3 AIR Team
The American Institutes for Research (AIR) has assembled the team and project structure to meet
these challenges. Our subcontractors include the University of North Carolina (UNC), Precision
Health Economics (PHE), and DataStat. Founded in 1946, AIR is one of the world’s largest
behavioral and social science research and evaluation organizations with about 1,600 employees.
We have led many large, complex CMS contracts and the evaluation of many health, education,
and workforce innovations. Recently, we have had a strong record in CMS evaluations as a
subcontractor, including Strong Start, Graduate Nurse Education (GNE), and the Dual Eligibles
Measurement and Monitoring Evaluation (DEMME). For the CECI evaluation, AIR evaluation
researchers are leading the evaluation design; claims, survey, and case study data collection; data
management; integrated mixed-methods analysis; and the reporting tasks. UNC provides clinical
and epidemiological expertise in ESRD, additional econometrics, and organizational behavior
expertise. DataStat will conduct the beneficiary surveys. Dr. Gupta from PHE is an
econometrician who is actively engaged in studies of the economics of ESRD. Consultant
Charles Ragin is a pioneer developer of the Qualitative Comparative Analysis (QCA) method,
which is a centerpiece of our process and impact analysis. The features of our proposal that make
us a strong choice for the CECI evaluation include:
• A core management team with experience in CMMI evaluations and ESRD. Julie
Jacobson Vann, RN, PhD, our project director, is an experienced clinician and evaluator.
Before joining AIR in 2011, she was an evaluator for one of the managed care provider
networks in the North Carolina Medicaid program. She recently led AIR’s subcontract
and the case study work for CMMI’s GNE Evaluation. Dr. Jacobson Vann has worked
closely with the nephrology faculty at UNC for several years and they recently published
a joint article on care of ESRD patients in the NC Medicaid program. Jennifer Flythe,
MD, is an experienced UNC nephrologist who will serve as co-project director. Before
joining AIR in 2014, our project manager, Tandrea Hilliard (PhD expected 2015), worked at
UNC, including four years as a researcher at the UNC Kidney Center.
• The core management team will be supported by clinical and evaluation leadership
teams, comprising persons with decades of experience. The clinical leadership team
includes Ron Falk, MD, chair of the Division of Nephrology at UNC and Dr. Jacobson
Vann’s recent coauthor. The evaluation leadership team includes Thomas Reilly, Deputy
Director of CMMI until joining AIR in 2013, and Steven Garfinkel, PhD, who has
participated in 15 CMS evaluations since 1980 and was Principal Investigator for
developing the CAHPS In-Center Hemodialysis survey25, which is critical for the CECI.
• A long history of cooperation between AIR and UNC. AIR and UNC’s Sheps Center
currently collaborate on at least 5 contracts for CMS and the Agency for Healthcare
Research and Quality (AHRQ). Drs. Jacobson Vann, Garfinkel, and Douglas Bradham,
leader for Task 4, received their doctorates from the Department of Health Policy and
Management (HPM) at UNC, as will Ms. Hilliard. Brandy Farrar, Task 7 leader and lead
qualitative data analyst for the GNE evaluation, came to AIR from UNC’s Sheps Center.
Jacobson Vann, Garfinkel, and Hilliard are all located at AIR’s Chapel Hill, NC office.
UNC’s subcontract leader, Marisa Domino, PhD, is an HPM health economist with
extensive experience in Medicare and Medicaid evaluation. Chris Shea, PhD, is an expert
in organizational behavior. Alan Brookhart, PhD, from the Departments of Epidemiology
and Biostatistics at UNC, is one of the nation’s leading experts on the epidemiology of
ESRD and has done pioneering work on the use of propensity score methods to construct
comparison groups in studies of ESRD interventions.
• Demonstrated success in the difficult task of designing effective data visualization
for rapid cycle evaluation and improvement. This task will build on our work in
Strong Start and DEMME, and be led by Dennis Nalty, PhD, who leads AIR’s Center for
Data Visualization. Dr. Nalty won the CMS Administrator’s Award for his leadership in
the development of a rapid cycle reporting, data visualization, and feedback system for
State Health Insurance Assistance Program grantees and for his technical assistance in
helping SHIPs understand and act on those data.
• Demonstrated expertise in CMMI Learning and Diffusion activities. AIR is currently
the Learning and Diffusion contractor for the Bundled Payment and Federally Qualified
Health Center models.
• Survey expertise with frail and elderly populations. DataStat, a certified Medicare
CAHPS® and Health Outcomes Survey vendor and our data collection subcontractor, has
a long history of collecting data from vulnerable populations on sensitive topics. Since
2005, DataStat has been the only survey organization certified by CMS to conduct data
collection among the most frail and elderly Medicare populations in the Health Outcomes
Survey – Modified project, which surveys PACE beneficiaries across the country
annually about health status. The project is complex, involving not only the beneficiaries
who are able to respond to the survey but also their designated caregivers.
• A project structure organized around the integration of data from multiple sources
to support conclusions about each research question. We will accumulate and manage
claims, medical record, monitoring, qualitative, and survey data in a virtual data core,
from which the impact, rapid cycle evaluation (RCE), and case study analysis teams will
extract the information they need to address their research questions. The data core will
be directed by Sean McClellan, PhD, who conducted similar work for the Palo Alto
Medical Foundation Research Institute, prior to joining AIR in 2014.
Our team has extensive experience working with Medicare claims, medical records, and cost
data; using quantitative analytic methods, including propensity score methods and hierarchical
regression modeling; and conducting large-scale, multi-site qualitative data collection and analysis. Because
we are not part of other ESCO activities, we will be independent evaluators, weighing all aspects
of the evaluation equally. In the sections that follow we will describe our approach to the
evaluation in more detail.
1.2 Technical Approach
1.2.1 Task 1: Project Management and Administration
Objective. Work collaboratively with the CMS Contracting Officer’s Representative (COR) and
CMS staff to achieve evaluation goals by (1) developing and implementing project management
structures, systems, plans and processes, materials, and communication mechanisms that
optimally support the project team and CMS, (2) monitoring and completing evaluation tasks
efficiently, effectively, thoroughly, and in accordance with the Schedule of Deliverables and
budget, and (3) providing informative, clear, and useful reports on schedule.
Approach. Our guiding principle for managing this project will be to create systems that make it
relatively easy for team members to complete project and evaluation goals at the highest level of
performance. In Chapter 3, Management Plans and Facilities, we describe these plans and
systems in detail. Here we provide a brief summary of the deliverables.
Conference Calls. Our PD, PM, and leaders of active tasks will participate in semimonthly conference calls with the COR to discuss project plans, progress, issues and challenges,
next steps, and proposed solutions. Our team will report preliminary findings from quantitative,
qualitative, cost, and survey analyses via brief written summaries and data dashboards. Members
of the clinical and evaluation leadership teams will attend as needed. Meeting agendas and
supplemental materials will be sent to the COR at least two business days before each call and
revised as recommended by the COR. The PM or designee will take meeting notes and prepare
brief organized meeting summaries of each conference call, which will be sent electronically to
the COR within 5 business days of each call. The PM will monitor progress and follow up with
team members on action items.
Monthly Progress Reports. We will submit monthly progress reports, including task-specific
accomplishments, problems, and solutions for each task. They will be submitted to the COR in
electronic format within 5 days of the end of each month and in paper format with our monthly
voucher. Monthly reports will be organized by task and will include:
• current activities and progress on each task;
• challenges, issues, expected delays in deliverables, proposed or implemented solutions, and an assessment of the effectiveness of actions taken;
• planned objectives and activities for the upcoming month;
• expected changes to personnel, management, and/or the evaluation design;
• actions that we expect to need from the COR and other CMS staff during the coming month, such as the need to review and provide feedback on a deliverable; and
• resource consumption and budget updates, including forecasts of project and financial performance and a summary of planned versus actual resource consumption by task.
In-person meetings. The PD will work with the COR to plan the in-person kickoff meeting
(Task 1.3.1), annual meetings (Task 1.3.2), and other evaluation update meetings as needed, to
be held at CMS in Baltimore, Maryland (MD) with the COR and CMS staff. We will submit
Draft Briefing Materials (Task 1 Deliverables), including agenda, presentation slides, and other
materials, within 1 week of the award date for the kickoff meeting, and within 11.5 months after
the award date and every 12 months after this date for annual meetings. The PD and PM will
coordinate with our team to revise materials based on COR input. Final Briefing Materials will
be submitted to the COR electronically 2 days before the kickoff and each annual meeting. Our
team will bring hard copies of materials for distribution to the CMS staff as needed for the
kickoff and annual meetings. The PM will prepare draft meeting summaries for each in-person
meeting and revise them based on COR feedback. The PD, PM, task leaders, and other key staff from
AIR and UNC will attend the kickoff meeting to discuss the proposed study design, project work
plan, and expectations with the COR and CMS staff in Baltimore, MD within 2 weeks after the
contract award date. The PD, PM, task leaders, and other key staff will attend in-person
meetings with the COR at CMS in Baltimore, MD at least annually, beginning 12 months after the
award date. Additional staff may attend annual meetings virtually. Our team will present interim
evaluation findings and progress on achieving project goals as described in the Draft Annual
Reports (Task 6) from the previous years. We will seek input from the COR and CMS staff on
report drafts, and discuss analysis strategies, planned activities for the coming year, technical
issues and proposed solutions, and other topics as suggested by the COR.
Data Acquisition Plan. The PD will lead the preparation of a written data acquisition plan to be
described in an Operations Plan (Task 1) and Evaluation Design Report (Task 2). Within 2
months of the award date, we will prepare and submit to the COR written requests to obtain
CMS data and Data Use Agreements (DUAs; Deliverable 1.4). Requests submitted in year 1 will
cover all project years and be amended as needed. Additional DUAs will be submitted to CMS
as needed during the project.
1.2.2 Task 2: Prepare the Evaluation Design Report
Objective. The Evaluation Design Report (EDR) will serve as the roadmap for project activities,
deliverables, and expenditures, and as the plan against which CMS can evaluate our
performance. We describe our planned design in Task 2 and the details of its execution (e.g.,
specified statistical models and qualitative data analysis) under Task 4, Data Analysis.
Approach and Conceptual Model Guiding the Evaluation. Our evaluation will be guided by a
conceptual model that builds on the ACO Evaluation Logic Model developed by Fisher et al26.
Our adaptation (Exhibit 1) has been shaped to fit the CECI features and relevant mediating
factors. In our model, ESCO composition and structure, provider characteristics, delivery system
characteristics and services offered, performance management systems, and communication
components of the CEC intervention are expected to produce better health, better care, and lower
costs. These relationships are mediated by patient characteristics, market characteristics, and
other contextual factors. The new ESCO financial arrangements, including shared savings, and
risk and guaranteed discounts of the Medicare program, are expected to have an effect on
outcomes indirectly by influencing the development and implementation of the ESCO model
features and relationships. Variation in these characteristics among CECI awards argues for
treating the intervention dichotomously (intervention or control beneficiary) and alternatively as
a separate variable for each characteristic in analyses of the intervention group alone.
Our evaluation model is supported by several theories and models to address the organizational,
economic, policy, clinical care, and epidemiologic domains that influence the three-part aim
outcomes. The ACO Evaluation Logic Model (Fisher, 2012)24 focuses on the complexity of ACO
implementation, and emphasizes the influence that ACO network structure, local context, and
ACO contract features may have on ESCO performance. Wagner’s Chronic Care Model
highlights the six elements of health systems that are expected to improve care for persons with
chronic illnesses: organization of the system, linkages to community resources, self-management support interventions, delivery system design, decision support, and
clinical information systems27. Innovative care management interventions that involve
assessment, collaborative care planning and goal setting, education, and support for patients and
families may lead to improved patient self-care and outcomes. The Model of Physician Labor
Supply is a provider utility maximization model that suggests that clinician behavior is sensitive
to reimbursement systems and that physicians will strive to maximize reimbursement.
Rogers’ Diffusion of Innovation model will support our evaluation of learning systems through
an assessment of key attributes, including the features of the teaching and learning strategies,
communication methods, and context in which innovations are introduced28. The web of
causation, originally conceived by MacMahon and his colleagues29, proposed that diseases or
effects develop as the result of multiple factors or causes, each of which also results from
complex antecedents that create the web30. Our evaluation will address several key individual-,
family-, and community-level factors that are important determinants of health outcomes and costs
for persons with ESRD.
Exhibit 1. Conceptual Model to Guide the Evaluation of the CECI
[Diagram: CMS Innovation Center policy, operating through the CEC Model Initiative and ESCO-specific new financial arrangements, drives the implementation of ESCOs (ESCO composition, structure, governance, and leadership; provider characteristics; delivery system innovations and services; performance management and measurement; and communication, information management, and learning systems). These effects are mediated by beneficiary characteristics, market characteristics, and policy and other contextual factors, and lead to the outcomes of better care, better health, and lower costs.]
General Analytic Models to Answer Research Questions. The Innovation Center’s research
questions (RQs) seek to identify the impact of the CECI, including unintended consequences and
subpopulation variation, and reasons why favorable and unfavorable outcomes are observed.
Thus, we have selected a convergent parallel mixed methods research design31,32 to execute the
evaluation. This design will allow us to assess the initiative’s impact on outcomes that have
standardized metrics as well as those that are best understood by observing and using
individuals’ narrative accounts of their perspectives and experiences. In addition, this study
design will allow us to measure and assess a range of additional factors that may be associated
with favorable and unfavorable outcomes, such as environmental, organizational,
implementation, and beneficiary characteristics. Identification, development, and analysis of
measures will be structured such that the quantitative and qualitative methods confirm,
complement, and expand upon each other to produce the most robust understanding possible of
the implementation and impact of the CECI.
The outcomes specified in the RQs vary for each domain of the three-part aim:
• Better Care: clinical processes, access to care, care coordination, patient-provider
communication, Meaningful Use of EHRs and other HIT, such as care management
information systems.
• Better Health: clinical outcomes, patient experiences of care, quality of life, health
status and functioning
• Lower Cost: utilization of hospital services (including ED visits, hospitalizations,
readmissions), physician and pharmacy services, total costs to the Medicare program,
cost shifting to the Medicaid program, costs to beneficiaries (copayments & deductibles).
However, the basic structure of the RQs is parallel throughout, which enables us to establish
general research designs for both statistical and case study methods that we will adapt as
necessary to answer each RQ.
General Statistical Design for Claims Measures. The availability of claims data for patients in
ESCOs and comparison facilities enables us to use the interrupted time series (ITS) evaluation
design with a propensity score weighted comparison group as the general model for all outcomes
measured with claims. We will use ITS with comparison groups, except where comparison group
data are not available (e.g., understanding which ESCO characteristics contribute to best
performance). The unit of analysis will be each patient’s data summarized for the quarter,
starting 8 quarters before the initial implementation of the ESCOs (i.e., the patient-quarter). Each
quarter will be a cross-sectional census of patients who meet the study’s eligibility criteria. The
availability of claims data for the providers who form ESCOs during the pre-ESCO period
enables us to construct measures for the pre-intervention quarters for both the intervention and
comparison groups.
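As an illustration only, and assuming hypothetical column names in a patient-quarter analytic file, the claims-based model could be estimated along the lines sketched below; the production specification, covariate set, and treatment of clustering will be documented in the Evaluation Design Report.

# Illustrative sketch of the ITS-with-comparison-group model on patient-quarter data.
# All file and column names (outcome, quarter, quarters_post, post, esco, psw,
# facility_id, and the risk adjusters) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient-quarter, beginning 8 quarters before ESCO implementation.
df = pd.read_csv("patient_quarter_analytic_file.csv")

formula = (
    "outcome ~ quarter + esco + esco:quarter"       # pre-period levels and trends
    " + post + post:quarters_post"                  # shared post-period level and slope change
    " + esco:post + esco:post:quarters_post"        # ESCO-attributable level and slope change
    " + age + female + dual_eligible + comorbidity_score"
)

fit = smf.wls(formula, data=df, weights=df["psw"]).fit(   # propensity score weights
    cov_type="cluster",                                   # cluster-robust standard errors
    cov_kwds={"groups": df["facility_id"]},               # clustered at the dialysis facility
)
print(fit.summary())

The sketch uses weighted least squares with facility-clustered standard errors for brevity; as noted above, we will also examine hierarchical (mixed-effects) alternatives to account for clustering of patients within ESCOs.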
General Statistical Design for Survey Data. Measures of patient experience, quality of life, and
functional status will come from survey data. The availability of survey data for both
intervention and comparison groups in each year of the demonstration is an unusually powerful
feature of the CECI, which enables us to model trends using the ITS with comparison group
approach rather than simply change in pre-post means. However, these survey data remain less
flexible than the claims, because we have only one measurement per year. The design for
beneficiary outcomes measured with survey data assumes that the first annual survey is a pre-intervention observation, because ESCOs will not have had time to have an impact.
General Statistical Design without a Comparison Group. When outcomes and explanatory
measures are not available for the comparison group (e.g., measures from monitoring and EHR
data and ESCO organizational characteristics) we will use an ITS study design without a
comparison group. This approach is less powerful than the ITS with comparison groups design,
because it fails to control for concurrent changes, such as the Medicare ESRD Quality Incentive
Program (QIP) initiative, but these findings will contribute to the conclusions drawn from all
statistical and case study results combined.
General Case Study Model. The case studies will use qualitative and quantitative data to create
an evolving picture of each ESCO. The quarterly monitoring statistics and periodically updated
stakeholder interviews, focus groups, and document reviews will be the main sources of data for
the case studies.
Rapid Cycle Evaluation (RCE). In traditional evaluation approaches, the evaluator is an
independent observer reporting occasionally or at the end of the study. In contrast, RCE engages
the evaluators in frequent data collection, interpretation, and feedback from the beginning to
support rapid improvement. The surveys will be updated annually, but the claims, EHR,
monitoring, stakeholder interviews, and focus groups will be acquired at least quarterly and used
not only for traditional sociometric-econometric and actuarial-accounting modeling but also for quarterly
feedback reports to the CECI sites for improvements throughout the model test. The challenge
posed by RCE for traditional evaluation is the ever-evolving design of the intervention being
evaluated. Attribution of effects to the intervention can be obscured by changes in design and
implementation in response to the continuous feedback encouraged by RCE. Our approach to
integrating the two perspectives is based on four assumptions:
1. It is naïve to assume that interventions did not evolve before RCE. Social interventions
have always evolved during evaluation. In the RCE perspective, however, we document
the changes as we move along and take them into account in our case studies and
statistical modeling so they inform our conclusions systematically. The ITS approach is
particularly valuable, because it enables us to alter the coding of characteristics over time.
2. If we know that the intervention can be improved during the evaluation, failing to
improve it as soon as possible is counterproductive, because it reduces the chances of
finding an effect, even if the attribution of the effect to a specific characteristic might be
more difficult.
3. Careful case study work minimizes any confounding of attribution in statistical models
from RCE, by making sure we understand what, why, and when changes were made.
4. The ACA demands more rigor in evaluations, because it permits the Secretary to make
Program-wide innovations without Congressional approval. RCE can muddy attribution
of effects if it is not monitored well, but this risk is more than offset by the additional
data from monitoring systems that RCE generates.
Construction of the Comparison Groups. Constructing comparison groups will entail not just
balancing characteristics of patients, but also the dialysis facilities to which they are assigned.
Just as patients are assigned to ESCOs according to their “first touch” with a dialysis facility in
each quarter, we will assign all comparison patients to the first dialysis facility they visit each
quarter as observed in the claims data. Once patients have been assigned to facilities, we will use
propensity score weighted (PSW) models to refine control group observations in order to better
estimate the effect of ESCO participation33, 34. Our approach will include patient demographics
and comorbidities, baseline spending and access, urban/rural indicators, regional medical
utilization, and facility characteristics (i.e., size, ownership, independent or hospital based, and
types of dialysis offered). Because of the need to include characteristics of comparison and
intervention participants, we will not incorporate organizational characteristics of ESCO
awardees, as these measures will not be available for controls. (However, we will model the effect
of variation in ESCO characteristics using ITS without a comparison group for outcomes that are
first found to change in the ITS with comparison groups model.) We will work with the COR and
ESCOs to refine the propensity model during all study years, incorporating new data, such as
state-specific measures or reports on dialysis centers, as they become available. We will conduct
pilot analyses of a convenience sample of control centers to determine what effect additional
center-level factors will have on the outcomes. Because of the hierarchical nature of the
evaluation, we will use advances in clustered propensity score methods for all analyses35. We
will use PSW rather than propensity score matching for this task because PSW provides
estimates of average treatment effects, or estimates of the effect of bringing the demonstration
model to scale, which are of greatest policy significance and are more generalizable than the
matched sample29,30. Consistent with the assignment of patients to ESCOs, PSW models will be
rerun each quarter in order to incorporate observations from newly diagnosed or newly affiliated
individuals during the study period. The PSW approach will be used for all outcomes, including
those coming from claims, medical records, and survey data. Additionally, to identify
comparison patients for inclusion in the survey sampling frame in each survey collection year,
we will use a cross-sectional many-to-one propensity score matching (PSM) model using the
same set of variables as in the PSW approach.
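To illustrate the weighting step only, the sketch below estimates a simple logistic propensity model and computes average-treatment-effect weights separately for each quarter; the covariates, column names, and estimator are placeholders, and the production models will use the clustered propensity score methods cited above.

# Illustrative sketch of quarterly propensity score weighting (PSW) for the
# comparison group. Covariates and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ate_weights(quarter_df: pd.DataFrame, covariates: list[str]) -> pd.Series:
    """Estimate propensity scores and return average-treatment-effect weights."""
    X = quarter_df[covariates]
    treated = quarter_df["esco"]  # 1 = attributed to an ESCO, 0 = comparison patient
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    # ATE weights: 1/ps for ESCO patient-quarters, 1/(1 - ps) for comparison ones.
    return treated / ps + (1 - treated) / (1 - ps)

covariates = ["age", "female", "dual_eligible", "comorbidity_score",
              "baseline_spending", "rural", "facility_size", "for_profit"]

df = pd.read_csv("patient_quarter_analytic_file.csv")   # hypothetical analytic file
df["psw"] = (
    df.groupby("quarter", group_keys=False)             # weights are rerun each quarter
      .apply(lambda q: ate_weights(q, covariates))
)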
Data Sources and Management. Several data sources will support the evaluation.
Primary Data Sources:
1. Survey data. We will use the Kidney Disease Quality of Life (KDQOL) Survey and the
ICH-CAHPS surveys to measure patient-reported experience of care and outcomes. See
Task 3.
2. Interviews with ESCO personnel and partners. We will conduct in-person and
telephone interviews with key personnel in each ESCO. These interviews will solicit
descriptive information about the interventions, their implementation, and perceived
impact along with the use and perceived helpfulness of the Learning Network. See Task
7.
3. Implementation assessment tool. The evaluation team will develop an implementation
assessment tool to track each ESCO’s progress in transitioning to the intervention model
and pursuing stated intervention goals. See Task 7.
4. Interviews and focus groups with intervention patients. We will conduct in-depth, in-person interviews and focus groups with patients who are receiving care from the ESCOs.
Intervention patients will be interviewed to assess their experiences receiving care before
and after the interventions, as well as their perceptions of the impact of the intervention
on their health in greater depth than can be had from the surveys. See Task 7.
5. Learning Network Survey of ESCO personnel. We will conduct a survey of ESCO
personnel to assess their use and perceived helpfulness of the Learning Network. See
Task 8.
6. Observations of Learning Network meetings. We will observe each Learning Network
meeting, documenting the activities via field notes. See Task 8.
Secondary Data Sources:
1. Medicare Claims. We will obtain claims data, for all beneficiaries with ESRD, for at
least 2 years before and 4 years after the intervention begins. Patient demographic and
comorbidity data will be drawn from the CMS Master Beneficiary Summary file.
Records on costs and utilization will be drawn from Medicare Part A, B and D claims and
the Medicare Provider Analysis and Review (MedPAR) file. Dually-eligible beneficiaries
will be identified through the Medicare-Medicaid Linked file. We will also ask that the
Renal Management Information System (REMIS) file, which tracks the ESRD patient
population for both Medicare and non-Medicare patients, be made available.
We plan to use CMS’s Chronic Condition Warehouse (CCW) Virtual Research Data
Center (VRDC) to house the data. Although limited and time-lagged data are available
directly through the VRDC, we expect that CMS will provide the claims discussed above
on a quarterly basis, which will then be uploaded to the VRDC.
2. Medicaid Claims. Dually-eligible beneficiaries are an important population in general
for subgroup analysis, but especially important for ESRD patients and to determine
whether intervention effects on Medicare spending are compounded or offset by effects
on Medicaid spending. In addition, Medicaid spending for persons with ESRD enrolled in
Medicaid only during the wait for Medicare eligibility may also be affected by ESCO
transformation.
For patients attributed to ESCOs, we will work with ESCOs to directly obtain Medicaid
claims. For comparison patients, we will collect Medicaid claims on a rapid cycle basis
from the Medicaid agencies in states where ESCOs are located. We will open discussion
with state agencies immediately following kick-off to establish procedures for obtaining
timely Medicaid claims or encounter data on a quarterly basis throughout the evaluation.
3. Medical record data. For patients assigned into ESCOs, data from medical records will
provide information on important clinical outcomes. Because ESCOs must provide data
for measures selected for the ACO Quality Measure Assessment Tool (QMAT) to the
monitoring and quality contractors, we will also plan to integrate those measures into our
analyses. In conjunction with our expert nephrologists, we will also work with the COR
and the ESCOs through the Learning Network and Quarterly Reporting process to
identify new clinical outcomes from medical records to be included on an ongoing basis.
All outcomes from medical records will be fully linked with other administrative data.
4. Monitoring data. We expect to receive monitoring data from the monitoring contractor
quarterly for use in the quarterly reports to CMS and the RCE reports to the awardees.
5. Other data sources: Some descriptors of dialysis facilities will be derived from Dialysis
Compare, including for-profit or non-profit ownership, after-hours access, and number of
dialysis stations. Market descriptors will be derived from the Area Resource File, Census
data, and CMS reports36, 37. We will also rely on the Monitoring contractor for quarterly
data from the ESCOs.
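To illustrate how these sources feed the patient-quarter structure used throughout the analyses, the sketch below applies a simplified version of the quarterly "first touch" attribution rule to dialysis claims and merges beneficiary summary fields; the file names and fields are hypothetical placeholders, and actual attribution will follow the CEC program rules and CMS file layouts.

# Simplified sketch of building a patient-quarter analytic file: attribute each
# beneficiary to the first dialysis facility touched in the quarter, then merge
# beneficiary summary fields. File and field names are hypothetical placeholders.
import pandas as pd

claims = pd.read_csv("dialysis_claims.csv", parse_dates=["service_date"])
mbsf = pd.read_csv("master_beneficiary_summary.csv")

claims["quarter"] = claims["service_date"].dt.to_period("Q")

# "First touch": the earliest dialysis claim per beneficiary in each quarter
# determines the facility to which that patient-quarter is attributed.
first_touch = (
    claims.sort_values("service_date")
          .groupby(["bene_id", "quarter"], as_index=False)
          .first()[["bene_id", "quarter", "facility_id"]]
)

# Add demographic and comorbidity fields for risk adjustment and subgroup analyses.
patient_quarter = first_touch.merge(
    mbsf[["bene_id", "age", "female", "dual_eligible", "comorbidity_score"]],
    on="bene_id",
    how="left",
)
patient_quarter.to_csv("patient_quarter_analytic_file.csv", index=False)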
The data we plan to use for each RQ are listed in Table 1.
Table 1. Summary of Data Sources by Research Questions
Data sources (table columns): Claims; Medical Records; QIP Reports; Survey* (E, F, I, K, L); MU Records; Public† (A, C, D, U); ESCO staff interviews; Focus groups and interviews with patients; Implementation assessment tool; Learning Network observations.
Research question outcomes (table rows, by three-part aim domain):
1. Guidelines adherence (Care)
2. Access to care (Care)
3. Care coordination (Care)
4. Meaningful Use of HIT (Care)
5. Patient-provider communication (Care)
6. Unintended referrals to transplants or other care processes (Care)
7. Factors associated with better care (Care)
8. Clinical outcomes (Health)
9. Patient experiences of care, quality of life, and functional status (Health)
10. Unintended health outcomes (Health)
11. Factors associated with improved health (Health)
12. Decreased use of ED visits, hospitalizations, and readmissions (Cost)
13. Increased use of physician or pharmacy services (Cost)
14. Decreased total cost of care (Cost)
15. Unintended cost shifting to Medicaid, private payers, or the beneficiary (Cost)
16. Factors associated with lower cost (Cost)
* E = ESRD Survey; F = Focus group survey; I = ICH CAHPS; K = KDQOL Survey; L = Learning Network Survey.
† A = Area Resource File; C = Census; D = Dialysis Compare; U = US Renal Data System
Data Management and Security. This study will assemble data from many sources, some of
them sensitive. Analysts will be located at several AIR and UNC offices. They will use the data
from multiple sources for each of the analytic purposes and reports. In this complex data
environment, we will centralize qualitative and quantitative data management and analytic
file construction in a core data management team. Exhibit 2 illustrates the function of the
data management core.
Exhibit 2. Data Management Core
To facilitate security and access by authorized
staff from several locations, we will use
CMS's Virtual Research Data Center to store
and analyze all data received from CMS and,
potentially, data obtained from other sources if
this is permitted. AIR and UNC operate in
secure data environments and AIR is in the
process of obtaining FedRAMP certification.
Nevertheless, we plan to use the VRDC,
which will provide a secure data
infrastructure within which to conduct
analyses for quarterly and annual reports.
Deliverables. Deliverables include the
following documents.
1. Draft Evaluation Design Report (Yr 1).
We will deliver a Draft Evaluation Design
Report to the COR within 6 weeks after the
award date. It will include an introduction; CEC
and ESCO background; purpose and goals of the evaluation; brief descriptions of the ESCO
awardees; research questions and data sources for each question; evaluation framework; data
collection and acquisition plan; data security plan; data analysis plan that emphasizes a
synthesis of qualitative and quantitative findings; expected limitations of the data and analysis
approaches; plans for quarterly and annual reports; and other content requested by the COR.
2. Final Evaluation Design Report (Yr 1). We will incorporate comments from the COR
and deliver the Final Design Report within 4 weeks of receiving written comments.
3. Updates to Evaluation Design Report (Years 2-5, Option year 1). Task Leaders will
track and document changes to the evaluation design. Annually, we will fully review the
Design Report 2 months before a Final update is due, discuss proposed changes with the
COR, and submit a draft updated Design Report 11 months after submission of the
previous Design Report. We will address the COR’s comments and deliver a final update
within 4 weeks of receiving written comments from the COR. We will also track all
changes that need to be made between annual updates and communicate them to the
COR on an as-needed basis by telephone and/or email and in the monthly Progress
Report.
1.2.3 Task 3: Beneficiary Surveys
1.2.3.1 Subtask 3.1: Baseline survey of controls (Year 1). Our team will conduct a baseline
survey of matched control groups within 6 months of the contract award. The survey will be repeated annually to measure change in the control groups over time. Data from patients
receiving care from dialysis centers within the ESCOs will already be reported to CMS through
another contractor. AIR will only survey ESCO beneficiaries in the first year if data from the
other contractor are unavailable.
Survey Content. To increase efficiency, we plan to develop the survey for control patients
referencing the ESRD Beneficiary Survey, which contains relevant domains from the KDQOL
and ICH CAHPS® surveys, such as quality of life, experiences with care, functional status, and
nephrologists’ communication and caring. These are the domains that can be measured for both
intervention and control patients and help answer several of the research questions. AIR would
not measure access to care or care coordination in the survey for controls unless these domains
were added to the surveys for intervention patients so that comparisons could be made. AIR will
work with CMS staff to prioritize and clarify domains of interest.
Cognitive Testing. AIR proposes an optional task to conduct one round of cognitive testing with
15 respondents in both English and Spanish. Given that the ESRD Beneficiary Survey combines
questions from several surveys including ICH-CAHPS and KDQOL, it is important to test how
the items are understood in this new context and order across these primary languages.
Repeated Cross-sectional Data Collection. We will employ a repeated cross-sectional design to
collect survey data. Rather than following the same patients over time, we will select a new
sample annually. This approach reduces the threat of attrition bias associated with tracking the
same respondents over time.
Survey Administration. We will administer the survey using a mixed-mode methodology with
two mailed surveys followed by telephone calls for nonresponders over 12 weeks (see Table 2).
This approach is consistent with the ICH-CAHPS methodology and has yielded higher response
rates than either mail or telephone modes alone. Using a mailed survey as the primary data
collection mode is especially important because CMS databases generally do not maintain
telephone numbers. Finding telephone numbers using databases such as Relevate is costly and is
not always successful. We recognize that there needs to be consistency between the intervention and comparison group surveys to avoid potential mode effects, and we will work with CMS and the contractor for the aligned beneficiary surveys to determine the survey mode. Our team
recommends allowing proxy respondents to complete a survey on behalf of sample persons,
when necessary. The survey will be administered in English and Spanish because they are the
two most common languages in the U.S.
Table 2. Survey Data Collection Timeline
Survey Operations Step                                                                          Date
Send prenotification letter to the respondent explaining the survey                            Week 1
Send a package containing a questionnaire, cover letter, and postage-paid return envelope      Week 2
Send a second package to nonrespondents                                                        Week 5
Initiate telephone follow-up of nonrespondents                                                 Week 8
End data collection                                                                            Week 12
Sampling. We will retrieve encrypted ID numbers and personal characteristics needed for
stratification from the Medicare Master Beneficiary data to construct the sampling frame. Once
the samples are drawn, the selected IDs will be matched to their contact information. For patients
without valid phone numbers, we would attempt to get this information through commercial
directory assistance services. We would also contact the dialysis centers directly to get updated
contact information.
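To illustrate the mechanics of this step, the following minimal sketch (in Python, for illustration only) draws a stratified random sample of encrypted beneficiary IDs from a sampling frame; the column names and strata are hypothetical placeholders rather than the final frame specification.

```python
# Illustrative sketch only: stratified random sampling of encrypted beneficiary
# IDs from a hypothetical sampling frame (column names are placeholders).
import pandas as pd

def draw_stratified_sample(frame: pd.DataFrame,
                           strata_cols=("age_group", "dual_status"),
                           n_per_stratum=50,
                           seed=20150105) -> pd.DataFrame:
    """Sample up to n_per_stratum records per stratum, without replacement."""
    return (
        frame.groupby(list(strata_cols), group_keys=False)
             .apply(lambda g: g.sample(n=min(n_per_stratum, len(g)),
                                       random_state=seed))
    )

# Example usage with a toy frame:
# frame = pd.DataFrame({"bene_id_enc": [...], "age_group": [...], "dual_status": [...]})
# sample = draw_stratified_sample(frame)
```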
Comparison patients will be selected using propensity score methodology, such as propensity
score matching (PSM) or weighting (PSW), using patient and dialysis center level characteristics
from claims data (See Section 1.2.2, Design Report). The final choice of method will be made
following the kick-off meeting, after we know the identity and characteristics of the ESCOs.
Pseudo ESCOs or comparison groups will be created by grouping together patients who are
similar to the patients in each of the ESCOs. Then we will take a random sample of patients
within pseudo ESCOs.
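As a concrete illustration of the weighting option (PSW), the sketch below estimates a propensity score with a logistic regression and constructs ATT-style weights. The covariates and variable names are toy placeholders; the final specification, and the choice between matching and weighting, will follow the Design Report.

```python
# Minimal PSW sketch with a toy stand-in for the patient-level claims extract.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({                      # toy data; real covariates come from claims
    "esco": rng.integers(0, 2, n),       # 1 = attributed to an ESCO
    "age": rng.normal(65, 10, n),
    "female": rng.integers(0, 2, n),
    "charlson_score": rng.poisson(3, n),
})

covars = ["age", "female", "charlson_score"]
X = sm.add_constant(df[covars])
ps = sm.Logit(df["esco"], X).fit(disp=0).predict(X)   # estimated propensity scores

# ATT-style weights: ESCO patients keep weight 1; comparison patients are
# weighted by the odds of treatment, p / (1 - p), so the weighted comparison
# group resembles the ESCO population.
df["psw"] = np.where(df["esco"] == 1, 1.0, ps / (1 - ps))
```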
Table 3 shows the minimum sample size required for 80% power to detect a difference between
each ESCO and its matched comparison group in terms of the survey outcomes with expected
effect sizes between 0.3 and 0.5 across different domains of patient experience38,39.
Table 3. Minimum Sample Size Required for 80% Statistical Power
Expected Effect Size    Minimum Required Number of Completed Surveys     Starting Sample Size for Each
                        for Each Comparison Group                        Comparison Group
Small (0.3)             320                                              800
Medium (0.5)            140                                              350
We will design our sample to detect the smallest effect size needed for the analysis. Because one of the ICH-CAHPS composites has a small expected effect size, we will target 320 completed surveys per comparison group. We expect the response rate to be approximately 40% based on DataStat's experience with similar populations. Dividing the target number of completed surveys by the expected response rate yields a starting sample of 320/0.4 = 800 patients per comparison group. For a high-end estimate, we assume there will be 15 ESCOs, and we will create 15 matched comparison groups in the base year. Our total starting sample size would therefore be 12,000 ESRD beneficiaries, based on 800 patients for each of the 15 comparison groups.
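The arithmetic behind these targets can be expressed in a few lines; the sketch below simply inflates the target number of completed surveys by the expected response rate and multiplies by the assumed number of comparison groups.

```python
# Starting-sample arithmetic described above: completed-survey target divided
# by the expected response rate, then multiplied by the number of comparison groups.
def starting_sample(target_completes: int, response_rate: float) -> int:
    return round(target_completes / response_rate)

per_group = starting_sample(320, 0.40)   # 800 beneficiaries per comparison group
total = per_group * 15                   # 12,000 across 15 comparison groups
print(per_group, total)
```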
1.2.3.3 Subtask 3.3: Follow-up surveys of controls (Years 2-5; Option year 1). The same
survey and survey administration protocol will be used in the annual follow-up surveys of
controls to maintain consistency over time for analysis purposes. We will update the sampling
frame each year to account for changes in the patient population contact information.
1.2.3.4 Subtask 3.4: Optional Baseline Survey of Participants (Year 1). If the KDQOL or
ICH CAHPS data are not available in Year 1, then AIR will conduct a concurrent baseline
survey of intervention beneficiaries using the same survey instrument and protocol as for
controls.
1.2.4 Task 4: Data Analysis (All Project Years)
Objective. To address the RQs with as much rigor as possible using multiple research methods
and data sources in order to (1) establish a comprehensive picture of the value added by CECI;
(2) enable the Innovation Center to decide if it should make a case to the Chief Actuary for
bringing CECI to scale; and (3) provide credible results that will enable the Innovation Center to
defend its conclusions and actions to the Chief Actuary, the Administrator, the Secretary, and
Congress.
Approach. With 10-15 ESCO awards, we have few degrees of freedom for analyses at the
demonstration site (i.e., organization) level. Thus, we plan to estimate our beneficiary-level
models separately for each demonstration awardee and for all awardees pooled. We will
summarize the results for each ESCO based on the frequencies with which the ESCO has a
favorable, unfavorable, or no effect on each outcome measure. We will summarize the results for
CECI based on the preponderance of favorable or unfavorable outcomes across the sites. The
pattern will tell a story about the effectiveness of the model. This approach has been used successfully in other CMS demonstration evaluations of delivery system redesign interventions that use Medicare claims.40 Pooled data will enable us to understand how
variation in structure and process among the CECI awardees affects outcome measures. The case
study and monitoring data will be used to code environmental and organizational characteristics
for the statistical models and enable us to understand why observed statistical effects occurred.
Here we describe our impact, monitoring, and case study analysis plans. Table 4 illustrates how
we will operationalize measures for these models using an example RQ from each domain of the
3-part aim.
Core Model of CECI Impact. The core model to assess the impact of the ESCOs on care,
health, and cost is the ITS with comparison groups. Selection of control variables and data will
be based on the conceptual model and data sources described in Exhibit 1 and Table 1 in Task 2. Analyses will be longitudinal and conducted at the patient level. Analyses will be adjusted for
comorbidities, dialysis modality, medications, and contextual characteristics.
This framework is summarized in the following core PSW ITS model. Outcomes will address the
3-part aim. Control variables include characteristics of both patients and the environment.
(EQ 1) Outcome_it = ESCO_i + Quart1_t + Quart2_t + … + QuartT_t + (ESCO × Quart1)_it + (ESCO × Quart2)_it + … + (ESCO × QuartT)_it + PtDemographics_i + PtComorbidities_it + EscoContext_it + DialysisFacChar_it + ε_it
where: ESCO indicates persons with ESRD attributed to the treatment group; QuartT indicates the quarter in the post period; ESCO × QuartT indicates the quarter-specific effect attributable to ESCOs; PtDemographics includes patient age, chronic condition indicators, sex, and race/ethnicity; DialysisFacChar includes the characteristics of dialysis facilities, including the number of patients attributed to them, non-profit status, hospital-based status, and others; and ε is the model error term. This approach will allow us to determine which effects of ESCO are estimated to occur in which quarter, and which outcomes demonstrate a trend away from the control observations.
Models will be stratified by payer (Medicare only, duals, Medicaid only) in order to estimate separate ESCO effects in each population. All models will be propensity-score weighted, yielding doubly robust estimates. Interactions will be examined before analyses are finalized. Models will be estimated separately for each ESCO. We will evaluate merging survey data with the claims to provide additional covariates, but our experience37 suggests that the loss in sample size resulting from limiting the claims data analysis to patients who also provide survey data is not worth the contribution from those additional covariates.
The appropriate model specification will be used for each outcome measure. Generalized
estimating equations (GEE) will be used for all outcome models to accommodate the repeated
measures nature of the data, with appropriate distributional and link functions. A modified Park
test will be used to determine the appropriate distribution, such as negative binomial and gamma
distributions for highly skewed measures such as utilization and costs41,42,43.
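As a sketch of how the core propensity-score-weighted ITS specification could be estimated with GEE, the example below fits a weighted Poisson GEE with an exchangeable working correlation on a toy patient-by-quarter panel. The data layout, variable names, and the assumption that propensity-score weights are passed as observation weights are illustrative only; the final distributional and link choices will follow the modified Park test.

```python
# Sketch only: PSW ITS estimated with GEE on a toy long-format (patient x quarter) panel.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_bene, n_quarters = 300, 8
panel = pd.DataFrame({
    "bene_id": np.repeat(np.arange(n_bene), n_quarters),
    "quarter": np.tile(np.arange(1, n_quarters + 1), n_bene),
    "esco": np.repeat(rng.integers(0, 2, n_bene), n_quarters),
    "age": np.repeat(rng.normal(65, 10, n_bene), n_quarters),
    "psw": np.repeat(rng.uniform(0.5, 2.0, n_bene), n_quarters),  # propensity-score weights
})
panel["ed_visits"] = rng.poisson(1.0, len(panel))                 # toy count outcome

model = smf.gee(
    "ed_visits ~ esco * C(quarter) + age",       # ESCO-by-quarter interactions (ITS terms)
    groups="bene_id",                            # repeated measures within beneficiary
    data=panel,
    family=sm.families.Poisson(),                # count outcome; Park test would guide this
    cov_struct=sm.cov_struct.Exchangeable(),
    weights=panel["psw"],                        # assumed supported as observation weights
)
print(model.fit().summary())
```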
Longitudinal Changes in Patients Assigned to ESCOs. Our models for each design will
initially use cross-sectional observations of all eligible intervention and comparison group
members at each point in time. The findings will represent the full, intent-to-treat population and,
together with PSW, provide the most externally valid estimates to inform the decisions about
bringing the CECI to scale. For this cross-sectional approach, we will draw a new comparison
group for each period. However, we will also estimate models for the panel of persons initially
assigned to the ESCOs and weighted comparisons, to track the impact of the intervention on
participants over time. These models will be subject to censoring bias due to attrition from
transplant and death. We will account for this attrition using methods, such as joint modeling, that simultaneously model the longitudinal outcome Y (e.g., chronic condition measures) and the risk of death D as f(Y_i, D_i) = [Y] × [D|Y]. This approach generates unbiased estimates by appropriately accounting for the healthy survivor effect.44,45 The major limitation of the joint model is the
computational complexity. To facilitate interpretation, we will compare results of our joint model
to results with standard strategies46. We will re-estimate the cross-sectional and longitudinal
models each quarter as additional quarters of claims data become available.
Model to Understand ESCO Implementation Activities. Because ESCO-specific
implementation activities will be unobserved for the comparison patients, we will use a second
model, which will not include a comparison group, to evaluate these effects on beneficiary
outcomes using pooled data from all the ESCOs.
(EQ 2) Outcome_it = Quart1_t + Quart2_t + … + QuartT_t + (ESCO × Quart1)_it + (ESCO × Quart2)_it + … + (ESCO × QuartT)_it + PtDemographics_i + PtComorbidities_it + EscoStructure_it + EscoContext_it + EscoCapabilitiesActivities_it + ε_it
where: EscoStructure includes non-profit facility status, multiple-SDO status, ownership (chain, independent), leadership, and provider characteristics, including the number and breadth of provider and organizational ESCO participants and the quality of inter-organizational relationships47; EscoContext includes payer and provider concentration and market power, current per capita spending and utilization, and the state policy environment (e.g., Medicaid payment levels; state-level ESRD initiatives); EscoCapabilitiesActivities includes HIT (Meaningful Use-compliant EHR, health information exchange, analytics), care management processes across the care continuum, quality improvement methods used and their scope and extent of deployment, and provider engagement in strategies and processes; and ε is the model error term. Interactions will be examined before analyses are finalized.
Model for Outcomes Measured with Survey Data. Patient experience of care, quality of life,
functional status, patient-provider communication, some care coordination, and some access
measures will come from the survey data. We will use an ITS with comparison groups design in regression models with the appropriate model specification for each outcome measure.
(EQ 3) Outcome_i = ESCO_i + PtDemographics_i + ε_i
where: ESCO_i indicates whether the patient is attributed to an ESCO or to a control group. The magnitude of the regression coefficient will indicate how much of an impact the intervention had on the outcomes of interest. Patient
demographic variables such as race/ethnicity, education, income, and marital status, and general
health status, will be included in the model to determine if the outcomes vary for different types
of beneficiaries. Standard errors will be adjusted for the clustered sampling design in SAS or
Stata. The model will be estimated using annual cross-sectional survey data and re-estimated
each year as the additional data become available.
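A minimal sketch of EQ 3 as a linear model with standard errors clustered on the sampling unit is shown below; the outcome, covariates, and cluster identifier are toy placeholders for the survey analytic file, not the final variable list.

```python
# Sketch: survey outcome regression with cluster-robust standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 600
svy = pd.DataFrame({                              # toy stand-in for the survey extract
    "esco": rng.integers(0, 2, n),
    "education": rng.integers(1, 5, n),
    "general_health": rng.integers(1, 6, n),
    "cluster_id": rng.integers(0, 30, n),         # sampling cluster (e.g., pseudo-ESCO)
})
svy["experience_score"] = (
    70 + 3 * svy["esco"] + 2 * svy["general_health"] + rng.normal(0, 10, n)
)

fit = smf.ols(
    "experience_score ~ esco + education + general_health", data=svy
).fit(cov_type="cluster", cov_kwds={"groups": svy["cluster_id"]})   # clustered SEs
print(fit.summary())
```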
Handling Missing Data. Loss to follow-up (e.g., transition to another modality) is common in longitudinal analyses of ESRD. To address missing data from such transitions in sub-group analyses, we will include as many covariates as possible so that the remaining missingness is plausibly missing at random. We will use likelihood-based methods to address the missingness.48,49 Sensitivity analyses, including sub-group analyses, will document the effect of missing values.
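As one example of a likelihood-based approach for incomplete longitudinal outcomes under a missing-at-random assumption, the sketch below fits a linear mixed model on the observed records; the outcome, covariates, and missingness pattern are invented for illustration.

```python
# Sketch: likelihood-based mixed model on observed visits (no listwise deletion of patients).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_bene, n_quarters = 200, 6
dat = pd.DataFrame({
    "bene_id": np.repeat(np.arange(n_bene), n_quarters),
    "quarter": np.tile(np.arange(n_quarters), n_bene),
    "esco": np.repeat(rng.integers(0, 2, n_bene), n_quarters),
})
dat["outcome"] = 5 + 0.1 * dat["quarter"] - 0.2 * dat["esco"] + rng.normal(0, 1, len(dat))
dat.loc[rng.random(len(dat)) < 0.15, "outcome"] = np.nan   # induce missingness at random

mlm = smf.mixedlm(
    "outcome ~ esco * quarter",
    data=dat.dropna(subset=["outcome"]),    # likelihood uses every observed visit
    groups="bene_id",                       # random intercept per beneficiary
).fit()
print(mlm.summary())
```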
Table 4. Example Measures: Better Care, Better Health, Lower Cost (did the CEC Initiative improve or have a negative effect on the following?)
RQ 2 (Care: Access to care). Examples of specific measures:
• Ease of getting appointments (beneficiary survey)
• Wait times (beneficiary survey)
• Vascular surgeon and transplant specialist provider visits (claims)
• Time to transplant (claims/medical record)
RQ 8 (Health: Clinical outcome measures). Examples of specific measures:
• ESCO standardized mortality ratio (claims)
• Incidence and prevalence of chronic conditions and disease complications (claims)
• Immunization rates (influenza and pneumococcal) (claims)
RQs 12 & 13 (Cost: Medicare utilization). Examples of specific measures:
• Physician visits (claims)
• 120-day mortality rate
• ED visits
• Hospitalizations following an ED visit or following transfer from another hospital
• Hospital days; ambulatory care sensitive inpatient admissions
• Readmissions for ESRD (same first diagnosis) and for non-ESRD diagnoses
• Non-dialysis primary care visits, by specific CKD comorbidities
• Dialysis primary care visits, by specific CKD comorbidities
• ESRD-related specialty visits, by specific CKD comorbidities
• Number of medications, by specific CKD comorbidities
1.2.4.2 Case Study Analyses
Thematic analyses. Primary qualitative data in the form of transcripts and notes collected during
the interviews (i.e., individual and small group), focus groups, and direct observations of ESCO
sites during the Learning Network activities will be systematically coded for key themes and
patterns using NVivo50. The conceptual model described above will inform the development of
the “start list”51 of codes that will be used to analyze the data. An initial review of data will be
used to extend and revise this initial coding scheme to develop an analytic codebook that is
thorough, reflective of emergent patterns and themes, and precise. Coding will occur in teams,
with step-wise independent and collaborative coding, as well as consistent checks for inter-rater
reliability. Inconsistencies in coding will be reviewed and discussed until team consensus is
reached. Themes will be identified within and across cases.
Qualitative Comparative Analysis (QCA). Once coding is complete, we will use QCA to
determine which qualitatively assessed conditions or characteristics are associated with patient
outcomes. We will use the QCA procedures outlined by Ragin (2008)52, which involve four main
steps.
1. We will use an iterative process to identify which conditions to assess whereby both
theoretical/empirical knowledge and themes emerging from the evaluation data inform
the identification of plausible conditions. We anticipate that the kinds of conditions that
will likely be considered include: organizational characteristics, innovation
characteristics, implementation strategies, and local delivery system conditions.
2. Once the conditions are identified, each ESCO will be assessed to determine the extent to
which it displays the condition. In QCA, this process is called calibration. We will
develop a calibration metric for each condition and use relevant data to score each case.
Two scorers will rate each case, based on the systematic coding and analysis of the data
done for Task 7. Discrepancies in scores will be resolved through discussion and
consensus.
3. The relationship between combinations of conditions and outcomes will be analyzed
using fuzzy set QCA (fsQCA). Tests of consistency and coverage constitute the two
analytic tools of QCA. If a particular combination of conditions is present when the
outcome of interest is also present in the vast majority of cases displaying this set of
conditions (80% or more of the time), consistency is high and a meaningful empirical
relationship between the combination of causal conditions and the outcome is indicated.
If a particular set of conditions is one of a few, versus one of many sets of conditions that
are present when the outcome is also present, coverage is high and empirical relevance is
indicated. Consistency and coverage are somewhat analogous to the concepts of
significance and explained variation (e.g., R2) in multivariate regression analyses.
4. The final step is to assess the minimum combinations of causal conditions that are
necessary and/or sufficient to produce the outcome. This test identifies the most
parsimonious causal combinations that are associated with favorable outcomes.
At the completion of the QCA, the necessary and sufficient conditions will be entered into the
regression models as indicators of local and organizational context and implementation. These
models will tell us to what extent the observed effects of the intervention are moderated by these
conditions.
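To make the consistency and coverage tests concrete, the sketch below applies the standard fuzzy-set formulas to hypothetical calibrated scores; the condition and outcome values are invented for illustration and do not represent any ESCO.

```python
# Fuzzy-set consistency and coverage for one combination of conditions X and outcome Y,
# each calibrated on [0, 1] per case (ESCO).
import numpy as np

def consistency(x: np.ndarray, y: np.ndarray) -> float:
    """Degree to which membership in condition set X is a subset of outcome set Y."""
    return float(np.minimum(x, y).sum() / x.sum())

def coverage(x: np.ndarray, y: np.ndarray) -> float:
    """Degree to which the outcome Y is accounted for by condition set X."""
    return float(np.minimum(x, y).sum() / y.sum())

# Hypothetical calibrated membership scores for 10 ESCOs.
x = np.array([0.9, 0.8, 0.2, 0.7, 0.6, 0.1, 0.9, 0.8, 0.3, 0.7])
y = np.array([1.0, 0.7, 0.4, 0.8, 0.6, 0.2, 0.9, 0.9, 0.2, 0.6])
print(round(consistency(x, y), 2), round(coverage(x, y), 2))
```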
1.2.4.3 Rapid Cycle Monitoring Analyses for Oversight, Rapid Cycle Improvement (RCI),
and Evaluation
Monitoring data will be derived from the monitoring contractor (to be determined following
award), the case study interviews and focus groups, and the implementation assessment tool and
Learning Network surveys. We assume that the monitoring contractor will include Dialysis
Compare measures computed at feasible intervals. The analyses will be descriptive, focusing
primarily on means and frequency distributions of quantitative monitoring variables at the
ESCO level and on periodic case study data updates agreed upon with the COR, based on the
Innovation Center’s oversight needs, the RCI needs of ESCOs, and the quality of data obtained
by the monitoring contractor. These data will be analyzed and reported quarterly for oversight
and RCI using a section 508-compliant web template based on the monitoring systems AIR
developed for CMS’ State Health Insurance Assistance Program (SHIP) and Dual Eligibles
Measurement and Monitoring Evaluation (DEMME). The template will display quarterly and
cumulative values. The data will also be used to develop hypotheses to test using rigorous ITS
models with claims and survey data, which control for covariation and confounding, and will
contribute to the final conclusions and recommendations along with the impact and case study
analyses.
The analysis of case study qualitative data for RCI will use several strategies. Once the codebook
for the qualitative data is finalized, meeting and interview notes will be structured such that they
can be auto-coded in NVivo. We will add a structured question to the qualitative data collection
protocols asking respondents to give one word that best describes their experiences thus far with
the demonstration. This single question can then be quickly abstracted from the interview or
focus group transcript, input into NVivo, analyzed for frequent concepts and displayed visually
via a word cloud. Such findings act as a “temperature” check of the status of implementation.
The implementation assessment tool (Task 7) will be structured in Microsoft Word, with the
input of items linked directly to an Excel database. This database will automatically transform
and link to charting features in the web-based template so that it will include results from both
the quantitative and qualitative monitoring activities. This process will allow for virtually "real-time" reports of the status of key implementation processes, displayed in an easy-to-interpret visual format. Together, these techniques will allow for quick and efficient turnaround of data analysis while still producing helpful findings for the quarterly reports.
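The "temperature check" can be produced with a simple frequency tally once the one-word responses are abstracted, as in the sketch below (the responses shown are hypothetical).

```python
# Count one-word responses abstracted from interview and focus group notes;
# the frequencies can feed a word cloud or a simple table in the quarterly report.
from collections import Counter

one_word_responses = ["promising", "overwhelmed", "promising", "collaborative",
                      "slow", "promising", "collaborative"]   # hypothetical

freq = Counter(word.lower() for word in one_word_responses)
for word, count in freq.most_common():
    print(f"{word}: {count}")
```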
1.2.5 Task 5: Develop Quarterly Reports of ESCO Performance (All Project Years)
Objective. To prepare and submit timely quarterly formative reports that reflect the plans
specified in the Design Report (Task 2), are in a format approved by the COR, and meet CMS’
information needs and expectations.
Approach. Beginning 6 months after the go-live date for LDOs and SDOs and every 3 months
thereafter, our team will submit a Draft Quarterly Report of ESCO performance to the COR in
electronic format. Using COR feedback, Draft Quarterly Reports will be finalized. Final
Quarterly Reports will be submitted to the COR beginning 7 months after the go-live date and
every 3 months thereafter.
Quarterly reports will be used to monitor performance of the ESCOs and CECI and provide
rapid-cycle feedback to support ESCOs in implementing evidence-informed changes over time.
Rapid-cycle findings will be reported in tables, other data visualizations, and evaluation briefs.
Quarterly reports will represent a subset of data elements, qualitative findings, and observations
included in the more extensive Annual Reports (Task 6). Data elements to be included will focus on key structure, process, outcome, cost, utilization, and health care environment measures. Sample quantitative data elements may include: patient volume by dialysis type; referrals to transplants; per member per month total costs and utilization for ED, inpatient, and physician services; and 30-day inpatient readmission rates per admission and per population.
Data will be summarized at the ESCO level; for all LDOs combined, all SDOs combined, and all ESCOs combined; and for comparison entities. The ESCO-specific chapters of the report will compare ESCO performance to overall weighted averages for all ESCOs combined, SDOs combined, LDOs combined, and comparison entities. Findings that are compared across all ESCOs with identifiers will only be shared with CMS and will not be distributed to ESCOs or comparison entities. Data sources will include claims, medical records, surveys, qualitative data, and observation.
Rapid Cycle Evaluation Briefs. Consistent with the rapid-cycle framework, the evaluation team
will prepare evaluation briefs highlighting key emergent themes to disseminate early findings to
CMS and ESCOs. For example, one evaluation brief will describe implementation processes
among ESCOs, including challenges and strategies for success. These reports may prove
beneficial for ESCOs that are experiencing difficulties in the early phases of the demonstration.
Another brief will highlight beneficiary and family caregiver perspectives to assess the initial
impact of the Initiative on access and quality of care. Internally, these reports will directly inform
the evaluation process through the identification of additional factors for consideration that may
not be included in proposed regression models. The rapid-cycle framework also supports the application of the constant comparison method,53 in which findings from data collected in each cycle will be systematically compared to elucidate patterns and change over the course of the demonstration, improve data accuracy, and uncover previously unspecified areas for subsequent exploration. Findings from the qualitative data analysis will determine the exact topics for each brief, and all findings will be included in the final narrative report.
Data Visualization. Our team will develop, for the COR review, a draft library of data
visualizations to support the monitoring and rapid-cycle feedback needs. These data
visualizations can be used for traditional static reports (paper- or PDF-based) and/or interactive
data visualizations, such as dashboards. Upon COR review and selection, AIR will incorporate visualizations into static and/or interactive reports as directed by CMS. Potential graphics
may include trend-based analyses that display quarterly patterns of selected key indicators. Trend
visualizations would be developed to support roll-up aggregations to year-to-date, calendar year,
project year, and other date range summaries. Trend visualizations would include statistical
process control chart features, such as central tendency measures and standard deviation bands,
and could include measures of trend directionality and significance. Other trend-based features
might include the display of comparable aggregate trends for similar entity types, for high-risk
groups, for project versus control groups, and actual versus target performance charts. Additional
trend features that may prove useful and appropriate for this project might include the use of
sparklines and small multiples data visualizations – in which large collections of trend
performance across entities can be displayed in a single screen or page for quick review. Other
static or interactive graphic features that may be effective for these data include: project versus
control group graphical comparisons for current periods and for trends, ranked identified and de-identified comparisons of entity performance on key cost, quality, and utilization measures,
bullet charts, target population and at-risk group segmented analysis results, regression graphics,
and difference in difference (DiD) graphics. AIR’s Dennis Nalty currently performs this work for
CMMI’s dual eligibles demonstration (DEMME). Dr. Nalty, who will lead this work for CECI,
received a CMS National Recognition Award in 2011 for his work on the monitoring and RCE
system he developed for the State Health Insurance Assistance Program (SHIP).
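As an illustration of the statistical process control features described above, the sketch below plots a quarterly indicator with a center line and ±2 standard deviation bands; the values are invented for demonstration and do not represent any ESCO.

```python
# Illustrative control-chart style trend for one quarterly indicator.
import matplotlib.pyplot as plt
import numpy as np

quarters = [f"Q{i}" for i in range(1, 9)]
rate = np.array([182, 175, 178, 169, 171, 164, 160, 158], dtype=float)  # made-up values

center = rate.mean()
sd = rate.std(ddof=1)

fig, ax = plt.subplots(figsize=(7, 3))
ax.plot(quarters, rate, marker="o", label="Observed rate")
ax.axhline(center, linestyle="--", label="Mean")
ax.axhline(center + 2 * sd, linestyle=":", label="+2 SD")
ax.axhline(center - 2 * sd, linestyle=":", label="-2 SD")
ax.set_ylabel("Rate per 1,000 admissions")
ax.set_title("Quarterly trend with control limits (illustrative)")
ax.legend(loc="best")
fig.tight_layout()
fig.savefig("trend_control_chart.png")
```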
Learning Network. Findings will be summarized and shared in quarterly and annual reports (Task
6), presented as an interactive PowerPoint during annual meetings, and made available on the
Learning Network’s secure knowledge management system or collaboration site in the form of a
discussion thread to facilitate information exchange. We will also leverage our mixed methods
approach to actively engage Learning Network contractors in the determination of useful strategies
for information dissemination by directly soliciting participant feedback using each data collection
method.54 As changes are expected to occur over time, assessing and adapting to
meet evolving stakeholder needs will be critical to the translation of information obtained from this
evaluation into sustainable process improvements.
1.2.6 Task 6: Annual Reports
Objective. To prepare and submit cumulative summative reports to the COR that describe up-to-date evaluation findings.
Approach. Our team will submit a Draft Annual Report to the COR in electronic format 11
months after the award date and one year later for each of the remaining evaluation years. Using
feedback from the COR, Draft Annual Reports will be finalized. Final Annual Reports will be
submitted to the COR beginning 13 months after the award date and one year later for all years
except the final project year, which will be completed before the project ends.
The Annual Reports will be prepared in a format specified or approved by the COR. The content
will include the background, purpose, goals, brief ESCO descriptions, research framework, and
evaluation methods as described in the Design Report. Additionally, these reports will
summarize the findings of all analysis approaches, organized by the 3-part aim of Better Care,
Better Health, and Lower Cost, and by research question within these 3 domains. Our team will
present a synthesis of findings for each research question that uses qualitative findings to help
explain or provide context to difference-in-difference and other quantitative or cost-related
analyses. For example, qualitative findings will be used to describe the strategies that were
employed by ESCOs to decrease ED utilization and increase utilization of home-based dialysis approaches. The Annual Reports will be discussed at the Annual Meetings at CMS (see
Subtasks 1.3 & 1.3.2).
1.2.7 Task 7: Qualitative Data Collection (All Project Years)
Objective. Qualitative data collection will serve five main purposes for the evaluation: (1)
provide context and explanation for outcome and impact findings; (2) document the processes
ESCOs engage in to improve health, care, and costs; (3) identify facilitators and challenges to
meeting the CECI’s goals; (4) identify considerations for replicating the new model in other
markets; and (5) assess the long-term sustainability of the new models.
Approach. For Task 7, AIR brings together an extensive team of experts in rigorous qualitative
research methods including interviews, focus groups, case studies, and qualitative comparative
analysis (QCA). Dr. Brandy Farrar, who has 12 years of experience and expertise in evaluating
the effectiveness, viability, and impact of innovative programs designed to improve the quality
of, access to, and capacity of health care services, will lead this task. Dr. Farrar is experienced in
QCA and will be advised by Charles Ragin, a QCA pioneer.
AIR has devised an approach to the qualitative data collection that will meet the need for
evidence-based qualitative methodology, rapid cycle feedback, logistical feasibility, good
financial resource stewardship, and minimized burden for awardees. Data will be collected using
case study methodology. In case-study research, data collection and analysis are tailored to be
appropriate and relevant to each case of interest. There will likely be considerable variation
across ESCOs in their structure and implementation processes and the case-study approach will
accommodate these differences. A core set of data will be collected across cases (i.e., ESCOs).
However, the specific details of the data-collection process such as the quantity and composition
of the interviewee sample and the specific probing questions will be tailored to fit the particular
ESCO configuration, management structure, and project characteristics. To facilitate the case
study approach, a kick-off meeting will be held with each awardee to identify liaisons and key informants for the qualitative data collection within each awardee, to develop recruitment strategies for patients, and to determine the timing of data collection. The AIR evaluation team will continue
to work with each site’s liaison to refine and solidify their data collection plan. Our analytic
methods are described in Task 4.
Data Collection. Data will be collected via in-person site visits and telephone interviews.
• In-person site visits with ESCOs. In-person site visits will occur in Years 1 and 5 for LDOs and Years 2 and 5 for SDOs. During these site visits, we will conduct: (1) semi-structured interviews with key personnel associated with each ESCO; (2) focus groups
and interviews with intervention patients; and (3) brief surveys with intervention patients.
Each site visit team will consist of two trained and experienced moderators, one with a
clinical background and one experienced in health care organization management. All
interviews and focus groups will be audiotaped.
• Semi-structured interviews with key CEC Initiative personnel. We will conduct approximately 6-10 in-depth semi-structured interviews with key personnel associated
with each ESCO. Each interview will be designed to last approximately one hour. We
anticipate the following types of interviewees: (1) strategic planning and decision making
personnel (executive director, medical director, CFO, COO, office manager, etc.); (2)
operations staff (receptionists, billing specialists, schedulers, etc.); (3) quality
improvement personnel (e.g., quality champions, health information technology
personnel, etc.); (4) clinical staff (e.g., physicians, nurses, medical assistants, social workers, etc.); and (5) community partners (e.g., community-based organizations, county or state agencies, etc.).
We will tailor semi-structured interview guides for each respondent type, only posing questions
that the interviewee has direct knowledge of and is suited to answer. At the first site visit, we will
gather background, contextual, and baseline qualitative information about the demonstration
projects, such as the structure of the ESCOs, the core activities the awardees are engaging in to
meet the project goals, and early implementation challenges. At the final site visit, we will assess
perceived outcomes, return on investment, sustainability mechanisms, and overall lessons
learned by awardees.
Focus groups and interviews with intervention patients. We will conduct one patient focus
group and approximately 10 individual patient interviews per ESCO at each site visit. The
purpose of the patient data collection is to understand patients’ care experiences, and to assess
their awareness of the CECI and perspectives on the impact that the new care model has had on
their health.
Our goal is to recruit a diverse sample of participants of different ages, sex, racial and ethnic
groups, and health status. Focus groups and individual interviews will be designed to last for two
hours and 30 – 60 minutes, respectively. AIR will work with each site to determine the best
strategy for recruiting beneficiaries to participate in focus groups. There are four likely options:
1. Sites will provide the names and contact information of their participating beneficiaries.
AIR staff will then select a sample of these patients to ensure diversity across important patient characteristics, then contact and screen them and request their participation in a focus group or interview.
2. Using a data release form (developed by AIR), office staff in the clinic will ask patients
as they come in for their ICH treatments whether they agree to have their contact
information released to AIR staff for the purposes of requesting their participation in a
focus group or interview. Office staff will submit to AIR the names and contact
information for consenting patients. We will then select a diverse sample of these patients
to contact, screen, and request their participation.
3. AIR will develop a recruitment flyer and request that office staff hand the flyer to
patients as they come in for treatment. Patients who are interested can contact AIR based
on the information on the flyer, at which point AIR will screen and recruit participants.
4. AIR will allow the sites to recruit patients to participate in the focus groups or interviews.
AIR will provide relevant clinic staff with recruitment scripts and written and verbal
instructions on how to recruit using methods that are consistent with human subjects
protections and that minimize selection bias.
Option 1 is the preferred recruitment strategy. However, past experience suggests that sites may
be reluctant to provide AIR with the names and contact information. Thus, AIR will work with
sites to develop a mutually satisfactory strategy. We have budgeted $100 per patient for
incentives.
Brief survey with intervention patients. At the start of each focus group and interview, site
visitors will administer a brief survey to capture patients’ perspectives on the care coordination
services they are receiving. This survey will contain items about patient-provider
communication, communication among providers, shared decision-making, continuous care
planning and monitoring, coordination with other entities regarding the care plan, and patient
self-management. Items will be drawn from ICH CAHPS, the AHRQ care coordination survey
AIR is developing, and the self-management composite AIR developed for the new cancer
CAHPS survey.
Ongoing telephone interviews. During year 1 for LDOs and year 2 for SDOs, AIR will have a
monthly standing telephone check-in meeting with key personnel identified during the first site
visit to stay abreast of the awardees’ activities. Every third month, these monthly meetings will
be more detailed quarterly progress updates. Quarterly progress updates will continue through
years 2 and 3 for LDOs and years 3 and 4 for SDOs, and will then taper to twice-yearly updates in year 4 for LDOs.
During the monthly meetings and quarterly interviews, AIR will administer an implementation
assessment tool that will be based on empirically validated evidence, as well as data collected at
the first site visit. This tool will contain items that assess organizational culture, intervention
buy-in, communication processes, financial and human capital resource appropriation, systems
changes that support the intervention, and intervention practices. Response options will be arranged along a continuum from planning to implementing, and will be supplemented with open-ended questions to gather respondents' perspectives on the various domains of the implementation process.
1.2.8 Task 8: Observe and Participate in the Learning Network Process for ESCOs and
Prepare Reports (All Project Years)
Objective. To (1) assess whether ESCOs perceive the Learning Network sessions and activities
as useful; (2) identify if and how ESCOs used the Learning Network to facilitate innovations in
ESRD care, patient experiences, and quality of life; and (3) share feedback on the Learning Network process with the Learning and Diffusion contractor so that they can make adjustments to
meet awardees’ needs.
Approach. We will use a mixed-methods approach. This approach was purposefully designed to
reflect the importance of direct and consistent involvement of the evaluation team with the
Learning Network to ascertain the shared information system’s usefulness and applicability.
Methods. The primary data collection methods will be: (1) briefings to the Learning Network
contractor about key challenges and learning needs of awardees; (2) direct observation of and
participation in quarterly meetings; (3) post-meeting teleconferences with meeting participants; and
(4) an online survey of Learning Network participants. Our analytic methods are described in Task
4. The sections that follow detail the data collection processes.
• Case Study Data from Task 7. Using data gathered through the qualitative data collection task (Task 7), the evaluation team will develop PowerPoint presentations to brief the
Learning Network contractor about where awardees are in their implementation, identify
common challenges and learning needs, and identify awardee-specific challenges. This
information will inform the Learning Network contractor’s development of targeted topics
and tools for the learning meetings and identify any individualized technical assistance that
might be warranted. See Table 5 for sample topics and research questions for each data
collection method.
• Direct observation of and participation in quarterly meetings. Evaluation team members will observe each Learning Network meeting using an observation guide and protocol developed by
AIR. The protocol will list the core topics of interest and provide guidance on how to document
behavioral information. For example, team members will be instructed to note not only what
participants say, but also their non-verbal cues such as if they seem frustrated, energetic,
passionate, surprised, ambivalent, dismissive, angry, etc. about a particular topic. In addition to
informal participation in the learning session activities, the evaluation team will use a portion of
the meeting to share and facilitate group discussions of preliminary research findings as a rapid
cycle feedback mechanism.
• Post-meeting data collection with Learning Network participants. There will be two post-meeting data collection activities for Learning Network participants. The first will
occur during the monthly, quarterly, and site visit data collection with awardees described
in Task 7. The evaluation team will assess administrators’ and providers’ perspectives
about the application of the Learning Network to developing program and process
innovations. The second data collection activity will occur immediately following each of
the Learning meetings. Each participant will be asked to complete a brief 5-10 minute
online survey. This survey will be developed once the Learning Network has been
established and a clear infrastructure and associated activities have emerged.
Table 5. Sample Topics and Research Questions for Each Data Collection Method
Primary research questions:
1. Do ESCOs perceive the Learning Network sessions and activities as useful?
2. How have ESCOs utilized the Learning Network to facilitate innovations in the following areas: quality of care, patient experience with care, patient quality of life, utilization outcomes, and Medicare program savings or costs?
Data sources: researchers and Learning Network participants (i.e., ESCO stakeholders).
Research topics and modes of contact:
• Amount of Time Spent on Topics: How much time was spent addressing participant questions regarding implementation of the CEC Initiative during Learning Network meetings? (Mode of contact: direct observation of and participation in quarterly meetings)
• Learning Network Perceptions: Which Learning Network activities do participants perceive as most useful to their respective ESCO or facility? (Mode of contact: post-meeting online participant survey)
• Innovations and Implementation Strategies: What new services or programs are available as a result of information obtained from the Learning Network? (Mode of contact: follow-up data collection)
• Site and Model Features: What are the organizational and operational characteristics of new programs implemented at the facility as a result of information obtained from the Learning Network? (Mode of contact: follow-up data collection)
1.2.9 Task 9: Prepare and Deliver Analytic Files
Objective. Deliver analytic files and related documentation used to prepare quarterly and annual
reports to CMS in an approved format and on time.
Approach. The project team will develop a brief data transfer plan that specifies the strategies,
activities, and procedures for preparing and transferring data generated in this project and related
documentation to CMS. This plan will also specify the processes for ongoing tracking and
documenting of data sources throughout the project. A living summary of all data sources will be
created by the Task Leaders and centralized with the data center manager to maintain version
control. Data cleaning for quantitative datasets will include: consistency edits, skip pattern
checks, logic checks, and missing data evaluation. Data cleaning for qualitative data will include:
formatting, organization, and copy-editing of notes, reports, and transcripts. Data documentation
will include: file names; variable matrices with variable names, format, definitions, and coding
for each variable; description of procedures used to create any composite variables or scales;
response rates for surveys; data collection procedures; data considerations or anomalies; and any
other key information requested by CMS. Within 12 months of the beginning of the 5th project
year, our team will prepare and deliver analytic files that were generated to prepare all quarterly
and annual reports in this evaluation. The data files will be provided in ASCII or SAS format. As part of
the data transfer process, our team will brief CMS staff about the data and documentation.
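The sketch below illustrates the kinds of automated checks described above (a consistency edit, a skip-pattern check, and a missing-data summary); the column names are hypothetical placeholders, not the final analytic file layout.

```python
# Illustrative data-cleaning checks for a quantitative analytic file.
import pandas as pd

def run_basic_checks(analytic: pd.DataFrame) -> dict:
    """Return counts of rule violations and the share of missing values per variable."""
    issues = {}
    # Consistency edit: dialysis start date should not precede date of birth.
    issues["dialysis_start_before_birth"] = int(
        (analytic["dialysis_start_dt"] < analytic["birth_dt"]).sum()
    )
    # Skip-pattern check: transplant date should be populated only when the
    # transplant indicator equals 1.
    issues["skip_pattern_violations"] = int(
        ((analytic["transplant_flag"] == 0) & analytic["transplant_dt"].notna()).sum()
    )
    # Missing-data evaluation: share of missing values for each variable.
    issues["pct_missing"] = analytic.isna().mean().round(3).to_dict()
    return issues

# Example with a toy frame:
toy = pd.DataFrame({
    "birth_dt": pd.to_datetime(["1950-01-01", "1960-05-05"]),
    "dialysis_start_dt": pd.to_datetime(["2012-03-01", "1959-01-01"]),
    "transplant_flag": [0, 1],
    "transplant_dt": [pd.NaT, pd.Timestamp("2014-07-01")],
})
print(run_basic_checks(toy))
```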
Chapter 2 – Personnel Qualifications (4-6 Pages, now 7)
Our approach to this important work is centered on a project team with expertise in ESRD, rapid-cycle evaluation, rigorous qualitative and quantitative data collection and analysis, and the dissemination of meaningful results. Our project is organized around a project leadership core, which includes our project director, our evaluation leadership team, and our clinical leadership team, as displayed in Exhibit 2, the Organizational Chart. Our project director will lead all aspects of the project and will be advised by each of the leadership teams. Each key task will be led by a task leader. Staff with key roles are also shown in the chart. In addition, the roles, skills, and availability of each staff member are summarized in Table 6.
Exhibit 2. Organizational Chart
Table 6. Expertise of Professional Staff
Name (% Available for Project)
Role and Expertise
Organization
Julie Jacobson Vann, Ph.D., M.S., R.N. (75%)
Project Director & Lead for Tasks 1, 2, 5, & 6
AIR
Role: Dr. Jacobson Vann will direct the project and will be responsible for overall administration, coordination across the
tasks, and quality of all deliverables. Dr. Jacobson Vann will supervise the work of AIR staff, and our subcontractors and
consultants: University of North Carolina, DataStat, Axene Health Partners, and Precision Health Economics. Dr. Jacobson
Vann will also serve as the key point of contact for CMS, govern direct communications between CMS and task leads at
important junctures, and will provide appropriate staff for project activities. In addition, she will provide guidance and input
across all tasks, and lead tasks 1, 2, 5, & 6.
Expertise: Dr. Jacobson Vann has 36 years of experience in health services that spans the delivery of patient care and public
health services, leadership of health services and managed care organizations, research and evaluation, and academic
teaching. She has planned and directed health services research focused on: innovative Medicaid delivery systems enhanced
with care management, care management information technology, and disease management; health promotion and disease
prevention, community-based performance improvement initiatives, and implementation science. Dr. Jacobson Vann recently
directed the development and evaluation of a pilot nurse practitioner-delivered educational intervention aimed at improving
care compliance for persons with chronic kidney disease. She directed the qualitative evaluation of the Graduate Nurse
Education Demonstration, a CMS-funded initiative to increase the volume of advanced practice registered nurses and primary
care clinicians. She conducted site visits for the Strong Start evaluation and contributed to the design of the project monitoring
system. Prior to joining AIR she conducted evaluations and cost analyses of statewide performance improvement initiatives for
a North Carolina Medicaid enhanced primary care case management program for 9 years that focused on care coordination,
disease management, healthy weight promotion, and utilization of high-cost medications. Dr. Jacobson Vann received a
Bachelor of Science in Nursing from the University of Wisconsin – Eau Claire, and a Master of Science (MS) in Health Care
Management from the University of Wisconsin – Milwaukee. Her Ph.D. in Health Policy and Administration is from the
University of North Carolina at Chapel Hill (UNC) School of Public Health.
Ronald Falk, M.D. (20%)
Clinical Lead
UNC
Role: Dr. Falk will provide clinical expertise as part of the Clinical Leadership Team for the proposed effort and be the key
point of contact for the project.
Expertise: Dr. Ronald Falk has been the Chief of the Division of Nephrology and Hypertension since 1993 and is Director of
both the UNC Kidney Center and the UNC Solid Organ Transplant Program. He earned his medical degree from the UNC
School of Medicine. He is Chair and co-founder of Carolina Dialysis, LLC, consisting of four separate dialysis centers, and
Carolina Dialysis of Mebane, LLC. He is co-founder of the Carolina Vascular Access Center developed in collaboration with
Capital Nephrology, Durham Nephrology, and MedWork Partnership, LLC. An internationally recognized leader in nephrology
since the mid-1980s, Dr. Falk has over 3 decades of experience in biomedical research and clinical leadership and over 2
decades of direct experience leading and managing the clinical and administrative aspects of a large dialysis practice network.
As President of the American Society of Nephrology in 2012, he was instrumental in establishing the Kidney Health Initiative
(KHI), a partnership formed with the US Food and Drug Administration whose mission is to advance scientific understanding of
kidney health and patient safety implications of medical products and to foster partnerships to optimize evaluation of drugs,
devices, biologics and food products. KHI is already conducting several important pilot projects in outcome measures, data
standards and patient-centered projects.
Jennifer E. Flythe, MD, MPH (15%)
Clinical Lead
UNC
Role: Dr. Flythe will provide critical nephrology and dialysis care expertise as part of the Clinical Leadership Team. She will
assist with the evaluation design report, the data analysis, and the quarterly and annual reports.
Expertise: Jennifer Flythe, MD, MPH is a clinician scientist focused on investigating chronic dialysis procedural risk factors
and patient-reported outcomes. She is a member of the American Society of Nephrology Dialysis Advisory Group and was the
Associate Medical Director of the Brigham and Women’s Hospital dialysis unit, serving on its quality improvement and
governance committees. Her clinical duties concentrate exclusively on the care of chronic dialysis patients. As a dialysis
outcomes researcher, she has extensive experience working with dialysis-specific claims data and other large dialysis
database data and will provide study design, analytic, and interpretation support. As a leading national expert on dialysis
treatment-related fluid complications, she will provide unique insight and guidance regarding important clinical outcomes,
practice patterns, and other dialysis-specific knowledge relevant to ESCOs. Additionally, she has invaluable experience in the
development and administration of validated survey instruments in dialysis-dependent patient populations, and she will be
instrumental in developing qualitative and patient-reported outcomes data standards and data collection instruments for this
contract. Her medical degree is from UNC and her Master of Public Health (MPH) is from the Harvard School of Public Health.
Steve Garfinkel, Ph.D., M.P.H. (10%)
Evaluation Leader
AIR
Role: Dr. Garfinkel will provide input into the analytical plans as part of the Evaluation Leadership Team and will be the
corporate quality reviewer for all deliverables.
Expertise: Steven Garfinkel has 40 years’ experience in health services research, with a particular focus on health insurance,
including Medicare, Medicaid, and private health insurance; health care outcomes and quality, especially the development of
interventions and the evaluation of their impact; and health care organization. His technical work includes the design and
evaluation of financing, quality improvement, and health information technology demonstration programs; the design and
implementation of surveys intended to address health policy issues and measure quality of care; the design and
implementation of qualitative and quantitative studies of health care organization and communications; and the analysis of
data derived from surveys, controlled experiments, medical records, and health insurance claims. He has worked on over 20
delivery system and reimbursement redesign evaluations sponsored by CMS, AHRQ, the Centers for Disease Control and
Prevention, the Robert Wood Johnson Foundation, and the California HealthCare Foundation. He is a member of AIR’s
Institutional Review Board and was previously a member of the Biomedical IRB at UNC. He is a member of the editorial board of
Medical Care Research and Review. He received his MPH and doctorate from the UNC School of Public Health.
Tom Reilly, Ph.D., M.A. (20%)
Evaluation Leader
AIR
Role: Dr. Reilly will provide input into the analytical plans as part of the Evaluation Leadership Team.
Expertise: Dr. Reilly joined AIR in 2013 as a Managing Researcher. Prior to joining AIR, Dr. Reilly served for 23 years in a
variety of analytic and research management positions at CMS. In his last position at CMS he served as the Deputy Director
for Operations at the Innovation Center, where he was a member of the Senior Executive Service. He also served as the
Director of the Data Development and Services Group in the Center for Strategic Planning; the Deputy Director of the Office of
Research, Development, and Information; the Deputy Director of the Beneficiary Education and Analysis Group and Director
of the Division of Beneficiary Analysis in the Center for Beneficiary Choices. Dr. Reilly also worked at AHRQ, where he
was the Director of the National Healthcare Quality Report. He also served in the Program Evaluation and Methodology
Division of the U.S. Government Accountability Office and the Statistical Research Division of the U.S. Census Bureau. His main
areas of specialization are Medicare and Medicaid programs and data, performance measurement and reporting, program
evaluation, and project management. Dr. Reilly received his master's degree in Sociology from the University of Akron and his Ph.D. in
Sociology from Johns Hopkins University.
Tandrea Hilliard, M.P.H. (75%)
Project Manager and Researcher
AIR
Role: Ms. Hilliard will manage the project and will coordinate, track, monitor, and align project activities. In addition, she will
participate in cleaning and analyzing data in Task 2, supporting the analyses, and conducting interviews in Task 7.
Expertise: She is skilled in both quantitative and qualitative research methods, and currently leads analysis and reporting
tasks for several large-scale projects. She has an extensive mixed-methods research background in the areas of chronic
disease prevention and management and health disparities. Ms. Hilliard is experienced in study design and implementation,
primary data collection, and managing and analyzing large databases. Further, she has applied experience in conducting
biomedical, social, and behavioral research with the ESRD patient population. She received her MPH from East Carolina
University and is completing a PhD in health policy and management at UNC.
Doug Bradham, Dr.P.H., M.A., M.P.H. (75%)
Task 4 Lead
AIR
Role: Dr. Bradham will lead the quantitative portion of the data analysis and support Tasks 5 and 6, the reports.
Expertise: Dr. Bradham has more than 30 years of experience conducting empirical comparative effectiveness analyses,
interventional impact studies, outcome studies, and quality improvement studies with cost-benefit and cost-effectiveness program
evaluations, retrospective observational cohorts in large claims databases, and randomized clinical trials. During his academic
and applied research career, Dr. Bradham has participated and guided numerous projects for state- and county-level agencies
and VA quality improvement initiatives, seeking to document the economic impact of policy-related interventions, as noted in
more than 80 publications. He has led numerous similar pilot projects investigating the economic costs and benefits of
interventions for quality improvement in nursing, public health, pharmacy, geriatrics, pediatrics, radiology, gerontology, cancer
care, preventive interventions, and other services. He received his Doctor of Public Health and both of his master's degrees from UNC.
HarmoniJoie Noel, Ph.D., M.A. (25%)
Task 3 Lead
AIR
Role: Dr. Noel will lead all aspects of the survey, including sampling, cognitive testing as needed, managing the data
collection process, analysis, and writing the results for Tasks 5 and 6.
Expertise: Dr. Noel has extensive experience in survey design, including writing and pretesting survey questions, designing
the sampling strategy and data collection procedures, and conducting psychometric and complex statistical analyses. Dr. Noel
has experience analyzing survey data using a variety of software tools such as SAS, Stata, and Mplus. She leads
questionnaire development, sampling and data collection design, and data analysis for projects related to health care reform,
electronic health records, and patient centered outcomes research. She is co-directing the survey development and field
testing of two surveys to measure experiences with the recently created Health Insurance Marketplaces and Qualified Health
Plans under the ACA for CMS. She received her M.A. and Ph.D. in Sociology from the University of Nebraska–Lincoln.
Brandy Farrar, Ph.D. (50%)
Tasks 7 and 8 Lead
AIR
Role: Dr. Farrar will develop the research plan for qualitative research, develop interviewer guides, develop observation
research protocols, oversee the site visits, and lead the qualitative analysis for both tasks.
Expertise: Dr. Farrar currently leads several case study evaluations of programs designed to strengthen health care delivery
systems, including programs promoting widespread adoption of health information technology through regional extension centers
and the Strong Start II Evaluation of innovative models of maternity care for CMS. Dr. Farrar is skilled in the design and
implementation of semi-structured interviews and focus groups to assess the implementation process, systems changes, resource
use, and strategies to enhance the efficiency and outcomes of complex innovations. She has used Qualitative Comparative Analysis
to evaluate which programmatic conditions of the Jobs to Careers Initiative were associated with improved career self-efficacy for
frontline health care workers. Dr. Farrar received her Ph.D. in Sociology from North Carolina State University.
Sean McClellan, Ph.D. (90%)
Task 9 Lead
AIR
Role: Dr. McClellan will manage all data, including cleaning and merging datasets for the team to use to analyze claims and
other quantitative data. He will also analyze quantitative data in Task 4 and support the development of the quarterly and
annual reports and dashboards (Tasks 5 and 6).
Expertise: Dr. McClellan has seven years of experience conducting research and analysis on health care services and policy,
and has worked with a broad variety of data sources and types, including Medicare and Medicaid claims and surveys from
patients and physician practices. He has expertise in the use of health IT, quantitative study design,
organizational behavior, and the analysis of survey, claims, and electronic health record data. He received a doctorate from the
University of California at Berkeley in Health Services and Policy Analysis and completed a post-doctoral fellowship at the
Palo Alto Medical Foundation Research Institute.
Roger Akers, M.S. (65%)
Database Manager
UNC
Role: Mr. Akers will be responsible for oversight of the UNC Sheps Center dedicated servers and file space for handling
large-scale, sensitive research datasets. He will assist in preparing Data Use Agreements (DUAs) with CMS, preparing the
analytic files used for the quarterly and annual reports, and documenting those files and evidence of DUA compliance for CMS.
Expertise: Mr. Akers is the Deputy Director of Data Management and Information Technology for the UNC Sheps Center. Mr.
Akers received a master's degree in Information Science from UNC.
Alan Brookhart, Ph.D., M.A. (25%)
Statistician
UNC
Role: Dr. Brookhart will participate in the creation and editing of the EDR and model refinement of primary and secondary data
analysis to address the study research questions and provide guidance on the quarterly and annual reports.
Expertise: Dr. Alan Brookhart is an Associate Professor who conducts methods-oriented healthcare epidemiologic
research focusing primarily on the development and application of novel statistical methods and study designs for
comparative effectiveness research using large healthcare utilization databases. He has made significant
contributions to the development of instrumental variable approaches that can be used to estimate causal effects in
the presence of unmeasured or poorly recorded confounding variables. He received his doctorate and master of arts in
biostatistics from the University of California, Berkeley, and his master's degree in applied mathematics from the
University of Georgia.
Marisa Domino, Ph.D. (25%)
UNC Site Lead
UNC
Role: Dr. Domino will oversee all aspects of work conducted by the UNC team on the project, will coordinate efforts with the
AIR team, and will contribute to creating and editing the EDR. She will participate in kickoff and annual meetings and will
serve as UNC's key point of contact for the AIR team.
Expertise: Dr. Domino is a Professor in the Department of Health Policy and Management with 20 years of research
expertise. Her research focuses on health economics, health care and health insurance for low-income populations, agency
relationships in health care, and medical provider behavior, as noted in her 70+ publications. Specifically, she has led
research projects for the Robert Wood Johnson Foundation, AHRQ, and the Health Resources and Services Administration
examining the quality and cost implications of new health care models, such as patient-centered medical homes,
as well as clinical issues such as depression. She received her doctorate in health economics from Johns Hopkins University.
Elizabeth Frentzel, M.P.H. (25%)
Qualitative Researcher
UNC
Role: Ms. Frentzel will support the qualitative research and conduct interviews, focus groups, and site visits in Task 7.
Expertise: Ms. Frentzel is a Principal Research Scientist with almost 20 years of experience in qualitative research and
program evaluation. She develops research interview guides and protocols; conducts in-depth interviews, focus groups, and
cognitive interviews; analyzes the results of qualitative research; writes reports; and directs projects. Previously, she
participated in the development of the ICH-CAHPS reports for providers, conducting cognitive testing of the materials at
dialysis facilities. She received an MPH from UNC.
Margarita Hurtado, Ph.D. (XX%)
Translation Consultant
AIR
Role:
Expertise:
Dennis Nalty, Ph.D. (45%)
Data Visualizer
AIR
Role: Dr. Nalty will be responsible for visualizing the data for the quarterly and annual reports.
Expertise: Dr. Nalty is a principal research scientist for performance measurement and management. An expert in managing
and analyzing health and consumer service research, he has developed performance and quality monitoring systems for
Medicare and substance abuse treatment programs at the national, state, and local levels. He has developed national, state,
and local executive dashboards highlighting key performance indicators for management monitoring for Medicaid and
Medicare. Dr. Nalty received his Ph.D. in Sensory Sciences & Statistics from the University of Texas at Austin.
Christopher Pugliese, M.P.P. (75%)
Research Associate
AIR
Role: Mr. Pugliese will assist with analysis of the survey data.
Expertise: Christopher Pugliese has expertise in both qualitative and quantitative research methods with a background in
econometrics and survey analysis methods. He has experience with a variety of data analysis and management tools,
including Stata, Mplus, and NVivo. Mr. Pugliese received a Master of Public Policy from Georgetown University.
Charles Ragin, Ph.D. (XX%)
QCA Expert
AIR
Role:
Expertise:
Bryce Reeve, Ph.D. (10%)
Survey Design Expert
UNC
Role: Dr. Reeve will serve as an advisor on the design and methods associated with measuring quality of care, patient
experience with care, and patient-reported quality of life.
Expertise: Dr. Reeve is an Associate Professor trained in psychometrics, and his 20 years of work focus on enhancing
the application of patient-reported outcomes (PROs) in clinical research and practice to improve the quality of care for
pediatric and adult cancer patients. This includes the development of PRO measures using qualitative and quantitative
methodologies and integration of PRO data in research and healthcare delivery to inform decision-making. Prior to his faculty
position with UNC, Dr. Reeve served as a Program Director for the National Cancer Institute from 2000 to 2010. Dr. Reeve
received his PhD from UNC.
Chris Shea, Ph.D. (15%)
HIT Expert
UNC
Role: Dr. Shea will lead aspects of the evaluation focused on health information technology (HIT) and “meaningful use.”
Expertise: Dr. Shea is an Assistant Professor with 15 years of research focusing on evaluating innovations within health care
settings, particularly innovations supported by HIT for the purpose of improving care quality. He has led studies assessing
capacity and readiness for implementing “meaningful use” of electronic health records (EHR) in ambulatory practice settings
within the UNC Health Care System. He also has led projects aimed at developing valid, reliable, and pragmatic survey
measures of health organization variables that historically have been difficult to measure. Dr. Shea received his Ph.D. from
North Carolina State University.
Paula Song, Ph.D., M.H.S.A, M.A. (10%)
Health Care Finance and ACO expert
UNC
Role: Dr. Song will participate in the primary data collection via key informant interviews, focus groups, and surveys of ESCO
officials, providers, and stakeholders.
Expertise: Dr. Song is an Associate Professor with expertise in health care finance, ACOs, payment reform, community
benefit, and utilization and access for vulnerable populations including the underinsured and children with disabilities. Dr. Song
is assessing care coordination for children with disabilities in an ACO, where she will conduct key informant interviews and focus
groups with patients and caregivers, administer a caregiver survey, and conduct claims data analysis. She also conducts case
studies of commercial ACOs that operate in the private sector. Dr. Song received her Ph.D. in health services organization
and policy from the University of Michigan.
Marielle Weindorf
Survey Director
DataStat
Role: Ms. Weindorf will lead the DataStat effort and oversee the staff who will administer the survey(s). She will be
DataStat's key contact for the AIR team. Upon completion of each survey, she will provide the raw and cleaned data back to AIR.
Expertise: Ms. Weindorf is the Health Care Research Director at DataStat. She has over 15 years of experience with directing
large-scale survey research projects and has directed all of DataStat's major CAHPS-related survey projects, with a special
focus on large coalition multi-stakeholder projects. Ms. Weindorf directed and managed all aspects of a large-scale
CAHPS-based survey project sponsored by the California Cooperative Healthcare Reporting Initiative and the annual California
Managed Risk Medical Insurance Board CAHPS Projects. Ms. Weindorf has a bachelor's degree in Political Science from the
University of Michigan.
Mark Whelan (55%)
Database Programmer
UNC
Role: Mr. Whelan will oversee the setup, support, and maintenance of enhanced research systems and tools to provide
secure workspace for project communication and collaboration, encrypted database systems for remote data collection and
project tracking, and the security and integrity of research data.
Expertise: Mr. Whelan is the Systems Architect & Administrator for the Sheps Center for Health Services Research at UNC.
He has a bachelor's degree in Psychology from Davidson College and a Global Information Assurance Certification – Security
Essentials.
Lily Wong (XX%)
Programmer/ Analyst
UNC
Role: Ms. Wong will work collaboratively with AIR programmers to conduct all analyses for Task 4 on primary and secondary
data, assist in preparing the analytic files used for the quarterly and annual reports, and help document those files and
evidence of DUA compliance for CMS.
Expertise: Ms. Wong has expertise in secondary data analysis, including the analysis of claims data and data sets related to
ESRD. NEED EDUCATION INFORMATION.
Manshu Yang, Ph.D. (65%)
Survey Design
AIR
Role: Dr. Yang will provide sampling and data collection guidance for Task 3.
Expertise: Dr. Yang has been involved in survey instrument development; the sampling, data collection, and analysis of
survey data; experimental and quasi-experimental design; and health program evaluation for various research projects.
She is the lead for data management and quality assurance and lead analyst for the monitoring data for the Strong Start
project with Urban Institute and CMS. She has extensive experience in psychometric and statistical analyses for quantitative
data and developing survey measurement tools, using software such as SAS, WPS, R, Mplus, and SPSS. She received her
Ph.D. in quantitative psychology from the University of Notre Dame.
Chapter 3 – Management Plan and Facilities
AIR’s management and control procedures, refined over decades of project management and
program implementation experience, will support milestone achievement and completion of
project deliverables of the highest quality.
3.1
Project Management and Organization
We will manage this task order using a combination of strategies based on organizational and
leadership theories, Lean principles, management information and monitoring systems, and
communications. We will use high-performing management systems tailored to this project to
monitor activities, quickly disseminate information to enhance processes, and take corrective
action as needed. Our guiding principle is to perform at the highest level and meet our obligations to
CMS, yet remain flexible so that, together with CMS, we can adapt to the uncertainties involved
in the implementation and evaluation of a complex Innovation Center model.
Dr. Jacobson Vann and Ms. Hilliard will be responsible for the day-to-day project management,
working closely with UNC's leadership (Drs. Domino and Flythe). They will hold periodic and
as-needed meetings with the clinical and evaluation leadership teams. Dr. Jacobson Vann will
lead the development of the Evaluation Design Report (Subtasks 2.1, 2.1.1, and 2.2), which will
be updated and used throughout the project to guide our team’s efforts to execute the project.
Task Leaders and technical experts will contribute sections to the Plan and be responsible for
revisions and execution under Dr. Jacobson Vann’s leadership.
The task leaders will use a Project Planning Template to document, contrast, and critique
alternative strategies for accomplishing project tasks. This structured tool will be used to
centralize all brainstorming ideas related to a specific task in order to facilitate project planning.
The existing template is used to document the purpose, background information, alternative
strategies, recommended strategy, and a matrix for noting features, advantages, and
disadvantages of each alternative strategy. The respective task leader will initiate the tool when
relatively complex discussions occur and require informed decision-making by all or part of the
team.
Effective communication strategies are essential components of a comprehensive performance
management system. Our project team will communicate among themselves and with CMS
through email, telephone calls, routine and ad hoc meetings, and sharing of written and
electronic documentation. The PD will lead routine internal meetings with the project team
approximately every one to two weeks to discuss progress and challenges and to brainstorm solutions.
Meetings will be supported with specific written agendas developed with team input, and
meeting minutes with follow-up action items and assigned personnel. The PD or PM will follow
up on assigned action items. Our project organization is displayed in Exhibit 2.1 in Chapter 2,
Personnel. The proposed initial labor allocations are displayed in Exhibit 3.1, and the schedule in
Exhibit 3.2.
Exhibit 3.1. Labor Allocation Chart by Task
[Exhibit 3.1 presents the proposed labor hours by task (Tasks 1–9) and in total, along with FTE and percent availability, for
each team member: AIR (Jacobson Vann, Bradham, Farrar, Frentzel, Garfinkel, Hilliard, McClellan, Nalty, Noel, Pugliese,
Reilly, Yang); DataStat (Weindorf, Senior Analyst, Database Manager, Junior Analyst, IT Support, Survey Support); UNC (Falk,
Domino, Flythe, Brookhart, Atkins, Whelan, Song, Shea, Wong); and Consultants (Hurtado, Ragin).]
Exhibit 3.2 Project Schedule
[Exhibit 3.2 is a quarter-by-quarter schedule chart covering Years 1 through 5 for Tasks 1–9 (1: Project Management and
Administration; 2: Evaluation Design Report; 3: Beneficiary Surveys; 4: Data Analysis; 5: Quarterly Reports of ESCO
Performance; 6: Annual Reports; 7: Qualitative Data Collection; 8: Learning Network & Prepare Reports; 9: Prepare and
Deliver Analytic Files). Chart markers denote in-person or telephone meetings and deliverables. The schedule assumes an
April 1, 2015 contract start date.]
3.2
Quality Assurance
For the last three years, AIR has operated a formal quality assurance (QA) program with a
corporate vice president in charge and quality champions in each research program. The QA
program requires that every deliverable, and many additional products provided to clients, be
reviewed by an independent expert in the topic before it is completed. The QA reviewer is
usually an AIR employee who does not work on the project, but for particularly important and
high-stakes projects we might also engage an external reviewer, as we did for AHRQ's $10
million Community Forum project, for which we engaged a well-known clinical trials statistician to
review our complex randomization design. We attribute our excellent performance on CMS's $25
million Marketplace enrollee satisfaction survey project (see Ch. 4) to this system. Since 2012,
AIR has received 5 points out of 5 in all domains (except one domain for which we received a
4) on our annual XXX assessments by CCSQ staff.
3.3
Plan for Effective Value Management
To efficiently and effectively manage the project, we will develop a project-specific management
information and tracking system to monitor all tasks and subtasks for AIR staff and all
subcontractors and subcontractor staff. In addition, AIR uses Deltek's Costpoint Reporting
System and Time Collection and Expense system and our internal Project Planning and
Reporting System (PPRS) to manage complex projects with multiple, simultaneous, and
overlapping tasks in an efficient manner and to stay on schedule and budget. The project will
have charge codes by task and subtask and by year, which allows us to effectively and efficiently
manage concurrent tasks and subtasks. As Costpoint reports become available each month, the
management team updates their labor allocations and other direct costs in PPRS to assure that
sufficient staff and hours are available within the remaining budget to complete the project. Dr.
Jacobson Vann and Ms. Hilliard will identify variances monthly and be able to report projected
Evaluation of the Comprehensive End-Stage Renal Disease (ESRD) Care (CEC) Initiative—37
Use or disclosure of data contained on this sheet is subject to the restriction on the cover of this proposal.
staffing, schedule, and budget issues to the COR as soon as they become apparent. If CMS
chooses to use an Earned Value Management system for this project, as suggested in the TORP,
we will work with the COR to tailor our reporting systems to support the requirements of the
EVM system as seamlessly as possible.
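To illustrate the kind of reporting an EVM approach entails, the following minimal sketch (illustrative only, with hypothetical
dollar figures; not part of AIR's or CMS's systems) computes the standard earned value indices that task-level tracking
would rest on.

# Minimal EVM sketch in Python, using hypothetical monthly figures for one task.
planned_value = 120_000.0   # PV: budgeted cost of work scheduled to date
earned_value = 105_000.0    # EV: budgeted cost of work actually completed
actual_cost = 112_000.0     # AC: actual cost incurred to date

schedule_variance = earned_value - planned_value   # SV < 0: behind schedule
cost_variance = earned_value - actual_cost         # CV < 0: over budget
spi = earned_value / planned_value                 # Schedule Performance Index
cpi = earned_value / actual_cost                   # Cost Performance Index

print(f"SV = {schedule_variance:,.0f}, CV = {cost_variance:,.0f}, "
      f"SPI = {spi:.2f}, CPI = {cpi:.2f}")

Variances and indices of this kind, computed monthly from the charge-code data described above, are what would be mapped
into whatever EVM reporting format CMS specifies.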
3.4
Corporate Capacity
The facilities and resource capabilities of each of the team’s organizations provide the
environment and tools that contribute to our outstanding services. Our facilities enable us to
carry out most tasks in-house, and our technical capabilities and resources reflect both the needs
of our clients and the most innovative, state-of-the-art advances. Exhibit 3.3 describes our team’s
facilities and resources and how they will support and enhance our research and deliverables.
Exhibit 3.3 Corporate Capacity of Each Organization
American Institutes for Research. AIR is based in Washington, DC, and leases more than 425,000 square feet in 20
locations across the country, including offices, warehouses, and a processing center. We have almost 2,000 research,
technical, administrative, and clerical personnel. Our staff includes nurses, physicians, health economists, sociologists, health
services researchers, political scientists, education researchers, industrial psychologists, computer experts, systems analysts,
statisticians, engineers, linguists, communications experts, conference coordinators, writers, editors, and graphic artists. Nearly
60% of our program staff holds advanced degrees, and 39% of these hold PhDs or equivalent terminal degrees. More than 1,500
workstations and 400 virtualized servers run across a secure, redundant, modern network with very high-speed internet
connectivity and robust connections to all AIR sites. Project staff have access to multiple data management and analysis products
such as SPSS, Stata, M-Plus, R, and WPS. In addition, we use Tableau for data visualizations.
University of North Carolina Sheps Center. The Sheps Center is an interdisciplinary health services research center within the
University of North Carolina at Chapel Hill. The Sheps Center is located in its own 35,000 square foot building less than a mile
from the center of the UNC-Chapel Hill campus. The Center can access faculty from multiple departments across campus without
the need for subcontracts, which allows it to function as a “single point of contact” for agencies that are funding what is often
interdisciplinary health services research. With respect to corporate capabilities, UNC-Chapel Hill has five health science schools
on one campus: Public Health, Nursing, Medicine, Pharmacy, and Dentistry, providing access to all clinical specialties that might
be needed for this project. Extensive clinical and health services research is conducted by faculty in these schools. An internal
information systems staff provides daily administration and technical support for more than 200 high-end personal computers and
a cluster of servers. Within the Center, programmers have SAS, SPSS, Stata, and LISREL available. SUDAAN,
Limdep, S-Plus and numerous other software packages are available through the UNC centralized computing facility.
DataStat. DataStat specializes in survey data collection services and advanced reporting, specifically in support of health services
research and public policy research. No other survey organization in the country exceeds our combined level of quality and
efficiency in this area. DataStat employs over 100 staff members, including the professional research staff, Computer-Assisted
Telephone Interviewing (CATI) facility interviewers, supervisors, monitors and trainers, and staff in our automated printing and
mailing facility. Our professional staff are organized around project teams, similar to academic research units. Our highest-level
researchers, the Senior Research Directors, oversee project teams and provide coordination and consultation. DataStat is housed
in a 16,000 square foot building approximately three miles from the University of Michigan campus.
3.5
Subcontractor Management
Our organizational and management structure and supporting role descriptions will delineate
clear lines of responsibility, authority, and communication for the full project team, including
AIR staff and all subcontractors, vendors, and consultants. AIR will maintain technical and
fiduciary responsibility, including project planning, monitoring technical performance, and
monitoring budgets, for all subcontractors, consultants, and vendors. AIR has required each
subcontractor, noted in the staffing table above, to identify one senior person to work directly
with Dr. Jacobson Vann and lead his or her organization's involvement in the project. Each of
these individuals will be accountable for producing high-quality deliverables in a timely fashion
and resolving any performance issues. The subcontract leaders will be Marisa Domino for UNC,
Marielle Weindorf for DataStat, and XXXXX Gupta for PHE.
Chapter 4 - Past Performance of the Organization
This collection of projects highlights how AIR and our partners successfully execute projects of
significance that integrate qualitative and quantitative data and classical and rapid-cycle
evaluation to address research questions with real-world application for health care quality
improvement, reimbursement and delivery system reform, and redesign. We have selected three
evaluations, one major CMS survey project, and one project focusing on ESRD knowledge.
Contract Information
AIR: Evaluation of the Health
Information Technology for
Economic and Clinical Health
(HITECH) Regional Extension
Centers
Client: U.S. Department of Health and
Human Services (HHS), Office of the National
Coordinator for Health Information
Technology
Contract Number:
HHSP23320095626WC
Contract Value: $4,277,831
Period of Performance:
3/31/2010 – 3/27/2015
Technical Contact:
Dustin Charles, M.P.H.
Dustin.Charles@hhs.gov
202-690-3893
Key Project Staff: David Schneider,
Brandy Farrar, HarmoniJoie Noel,
Grace Wang, Johannes Bos, Steven
Garfinkel
Project Summary
Under contract with the HHS Office of the National Coordinator for Health Information
Technology, AIR is conducting an evaluation to measure the effectiveness of 62
Regional Extension Centers (RECs) and the Health Information Technology
Research Center (HITRC) in meeting the requirements of the Health Information
Technology for Economic and Clinical Health (HITECH) Act of 2009. This act is
designed to promote the national adoption and meaningful use of electronic health
records (EHRs). This evaluation has two primary objectives: (1) to document the
implementation and effects of the initiative, and (2) to support HHS, its HITRC, and
individual RECs. Information from this evaluation will provide timely feedback and
information to help continuously improve the centers’ ability to support adoption by
providers. The mixed-methods evaluation utilizes qualitative and quantitative methods
to measure the effectiveness and efficiency of the REC program in promoting
adoption and meaningful use of electronic health records among targeted providers.
The purpose of the evaluation is to assess the implementation and impact of the
REC program. Our conceptual model is similar to the CEC Evaluation model,
although patient outcomes and characteristics are outside the scope of this
evaluation. Our research questions are somewhat similar as well, examining the
relationship of characteristics of the REC grantees to implementation and outcomes,
whether the RECs support provider access to and use of information regarding
EHRs, whether the RECs have improved provider participation in EHRs, and
whether that participation results in meaningful use of HIT.
We answer these questions using four distinct but interrelated studies that employ
both qualitative and quantitative methods: Typology (quantitative), HITRC User
Experience Study (quantitative), Case Studies (qualitative), and the Impact Study
(quantitative). The findings of each study are integrated to elaborate, enhance,
illustrate, and clarify relevant results.
Relevance to RFTO: AIR’s evaluation of the REC Program exemplifies AIR’s
extensive experience with evaluating health policy interventions, demonstrations,
and initiatives. This project highlights AIR's experience conducting observational,
non-randomized evaluations, as is required for the evaluation of the CEC Initiative. The
project also provides additional examples of AIR’s work in designing and conducting
survey research as well as analyzing survey data.
AIR: Development of an Enrollee Satisfaction Survey for Use in the Health Insurance Marketplace
Client: Centers for Medicare & Medicaid Services
Contract & Task Order Number: GS10F-0112J / HHSM-500-2012-00100G
Contract Value: $24,260,553
Period of Performance: 8/15/2012 – 2/28/2017
Technical Contact: Kathleen Jack, CMS, 410-786-7214, Kathleen.Jack@cms.hhs.gov
Key Project Staff: Steven Garfinkel, Nancy Gay, Thomas Reilly, Julie Jacobson Vann, HarmoniJoie Noel, Tandrea Hilliard, Graciela Castillo, Manshu Yang, Dennis Nalty, Kathy Paez, Christian Evensen, Emily Elstad, Susan (San) Keller, Coretta Mallery
The Affordable Care Act (ACA) authorized the creation of Health Insurance Marketplaces (Marketplaces) to help individuals and small employers shop for, select, and enroll in high quality, affordable private health plans. Section 1311(c)(4) of the ACA requires the Department of Health and Human Services to develop an enrollee satisfaction survey system that assesses consumer experience with qualified health plans (QHPs) offered through a Marketplace. It also requires public display of enrollee satisfaction information by the Marketplace to allow individuals to easily compare enrollee satisfaction levels between comparable plans. To respond to these requirements, CMS asked AIR to develop and test the Health Insurance Marketplace Experience Survey and QHP Enrollee Experience Survey and to provide technical assistance to the Health Insurance Marketplaces.
As part of the evaluation process for these new surveys, AIR conducted a field test of both surveys in 2014 to allow AIR to perform numerous reliability and validity assessments. This analysis included confirmatory factor analysis, exploratory factor analysis, driver analysis, case-mix adjustment, and multivariate logistic regression (a brief illustrative sketch of case-mix adjustment follows this project entry). Additional analyses are being performed to identify methodologies that maximize response rates. The AIR-led team is responsible for implementing the Marketplace and QHP Enrollee surveys through 2017, including analyzing the data and reporting the survey results in a consumer-friendly format.
Relevance to RFTO: This project highlights AIR's experience with survey research methods, particularly CAHPS surveys, including designing questionnaires, rigorous qualitative research, developing and implementing data collection procedures, and analyzing survey data.
The HIM CES project also highlights AIR's experience with collecting and analyzing data that are used to provide publicly available scores that have business and financial implications for issuers of QHPs.
HIM CES serves as an excellent example of AIR's experience managing large-scale projects, including overseeing the work of numerous subcontractors.
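As context for the case-mix adjustment mentioned above, the following minimal sketch (illustrative only; the input file and the
column names rating, plan_id, age, education, and self_rated_health are hypothetical, and this is not AIR's production code)
shows one common way such adjustment is implemented: regress each survey rating on plan indicators plus respondent
characteristics, then report plan scores predicted at the overall mean case mix.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("enrollee_survey.csv")  # hypothetical field-test extract

# Overall plan rating modeled as a function of plan plus case-mix adjusters.
model = smf.ols("rating ~ C(plan_id) + age + education + self_rated_health",
                data=df).fit()

# Case-mix-adjusted plan scores: predictions for each plan with the adjusters
# held at their overall means, so remaining differences reflect the plans.
baseline = df[["age", "education", "self_rated_health"]].mean().to_dict()
adjusted_scores = {
    plan: model.predict(pd.DataFrame([{**baseline, "plan_id": plan}])).iloc[0]
    for plan in df["plan_id"].unique()
}
print(adjusted_scores)

Differences among the adjusted scores then reflect plan effects rather than differences in who happened to respond to each
plan's survey.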
AIR: Standardizing Antibiotic Use in Long-Term Care Settings (SAUL)
Agency: AHRQ
Contract & Task Order Number: HHSA290200600019I / HHSA29032002T
Contract Value: $1,199,206
Period of Performance: 9/29/2009 – 8/15/2012
Technical Contact: Deborah G. Perfetto, AHRQ, 301-427-1295, Deborah.Perfetto@AHRQ.hhs.gov
Key Project Staff: Steven Garfinkel, Elizabeth Frentzel, Julie Jacobson Vann
The AIR SAUL project created a communication tool focused on improving antibiotic stewardship around urinary tract infections (UTI) in nursing homes: the Suspected UTI Situation, Background, Assessment, and Recommendation (SBAR). The AIR team found that 25 to 75 percent of antibiotics prescribed for UTIs were prescribed in the absence of signs or symptoms of infection, that is, for asymptomatic bacteriuria (ASB). For the field test, the AIR-led team used a pre- and post-implementation interrupted time series analysis, with control, to determine the effect of the Suspected UTI SBAR tool on prescriptions for ASB. In addition, interviews were conducted prior to and after the implementation to understand the characteristics of each nursing home as well as the level of implementation.
When implemented, the Suspected UTI SBAR tool was associated with a one-third reduction in antibiotic prescriptions for suspected ASB, from 73 percent to 49 percent of total prescriptions for suspected UTIs. Similarly, the likelihood of a prescription being written for ASB decreased significantly in the homes that implemented the Suspected UTI SBAR tool (OR = 0.35; 95% CI, 0.16 to 0.76) compared to homes that did not implement it.
Relevance to RFTO: The SAUL project highlights AIR's experience with a mixed-methods approach, using medical record data, infection log data, and the Minimum Data Set 2.0 and 3.0 to evaluate whether interventions improve the quality and safety of care that patients receive while reducing the cost of care. The SAUL project also is an example of AIR's experience in working with a vulnerable population, where it is critical to monitor for unintended consequences during the implementation.
AIR: Strong Start for Mothers and Newborns Evaluation (Strong Start II) (Subcontractor to Urban Institute)
Clients: Centers for Medicare & Medicaid Services, Urban Institute
Prime Contract & Task Order Number: HHSM-500-2010-00024I / HHSM-500T0004
Subcontract Number: 08575-004-00-AIR-01
Contract Value: $1,209,183 (Currently Funded), $3,385,241 (Total Contract Value)
Period of Performance: 8/12/2013 – 12/15/2014, with four option years that go through 08/11/2018
Technical Contacts: Ian Hill, Urban Institute, 202-261-5422, ihill@urban.org; Caitlin Cross-Barnet, CMS, 410-786-4912, caitlin.cross-barnet@cms.hhs.gov
Key Project Staff: Kathy Paez, Julie Jacobson Vann, Brandy Farrar, Jennifer Lucado, Ushma Patel
The Strong Start for Mothers and Newborns initiative, funded under the Affordable Care Act, aims to improve maternal and infant outcomes for women enrolled in Medicaid and the Children's Health Insurance Program (CHIP). The initiative is currently supporting service delivery through 27 awardees and 191 provider sites, across 30 states, the District of Columbia, and Puerto Rico, and will serve up to 80,000 women. The Innovation Center contracted with the Urban Institute and its subcontractor partner, AIR, to conduct a 5-year cross-site evaluation of the program, which will be critical to determining whether wider dissemination should be supported.
As part of the evaluation project team, AIR is responsible for conducting qualitative case studies, collecting participant-level process data, and providing state technical assistance (TA) for the linkage of Medicaid and vital records data as part of an impact analysis. As part of this effort, AIR has conducted focus groups, one-on-one interviews, and observational studies, which were subsequently coded and distilled into site-specific memos to provide an in-depth understanding of individual Strong Start sites.
Additionally, AIR is responsible for collecting quantitative data from implementation sites quarterly to provide timely feedback to CMMI, the evaluation team, and Strong Start awardees and sites on key indicators of performance and interim outcomes.
Relevance to RFTO: Our work on Strong Start II is an example of AIR's experience with evaluating large demonstration projects, integrating qualitative and quantitative data to provide clients with a more complete understanding of the effectiveness and the key indicators of performance across three maternity care models.
This project illustrates AIR's experience in collecting large-scale survey and clinical outcome data in paper and electronic formats across multiple organizations and sites. It also demonstrates AIR's experience in utilizing rapid-cycle evaluation and monitoring tools to identify data quality issues on a quarterly basis, provide timely feedback to sites, and permit continuous improvement in performance and outcomes.
UNC-DEcIDE: Comparative Effectiveness of IV Iron Formulations in ESRD-Anemia
Client: AHRQ
Contract & Task Order Number: HHSA29020050040I, Task Order #6
Contract Value: $2,836,647
Period of Performance: 7/14/2010 – 7/14/2013
Technical Contact: Barbara Bartman, MD, MPH, AHRQ, 301-427-1515, Barbara.Bartman@AHRQ.hhs.gov
Key Project Staff: Alan Brookhart, Alan Ellis, Janet Freburger, Anne Jackman, Abhi Kshirsagar, Lily Wang, Wolfgang Winklemayer
Anemia is a highly prevalent condition among the approximately 500,000 people in the United States with ESRD and is associated with increased morbidity, mortality, and health care costs. The anemia of ESRD is managed primarily through treatment with recombinant human erythropoietin and the administration of intravenous iron. Currently, two formulations of iron are in widespread use in dialysis patients: iron sucrose and sodium ferric gluconate. There are no data from large populations on the head-to-head safety or effectiveness of these formulations. There is also little evidence regarding the optimal dosing of intravenous iron. This task order contract addressed these important evidence gaps through a large-scale observational study of two large cohorts of dialysis patients over a 3-year period.
This study analyzed data from patients who received dialysis from a DaVita clinic from 2004-2009 or a Renal Research Institute clinic from 2001-2009 where the primary payer was Medicare. The research used propensity score analysis, marginal structural models, case-crossover analysis, and a natural experiment analysis to estimate treatment effects (an illustrative sketch of propensity-score weighting follows this project entry). Ultimately, the study found that patients who received bolus versus maintenance iron were at increased risk of infection-related hospitalization, which suggests that the use of maintenance iron would result in fewer infections.
Relevance to RFTO: This project highlights the AIR team's (UNC's) experience in analyzing Medicare claims data, clinical quality measures, and medical records to improve patient safety and reduce costs among patients receiving dialysis.
This research also demonstrates our experience in performing statistical analyses that control for potentially confounding variables within a non-randomized study design.
This project also exhibits our experience in conducting research with dialysis patients that results in better care for Medicare beneficiaries, improves health outcomes, and reduces the costs of care.
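As context for the propensity score methods referenced in the project summary above, the following minimal sketch
(illustrative only; the input file and the column names treated, outcome, age, female, and diabetes are hypothetical, and this
is not the study's actual code) shows inverse-probability-of-treatment weighting for a binary exposure such as bolus versus
maintenance iron dosing.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("analytic_file.csv")  # hypothetical claims-derived analytic file

# 1. Propensity score: probability of the exposure given baseline covariates.
X = sm.add_constant(df[["age", "female", "diabetes"]])
ps = sm.Logit(df["treated"], X).fit(disp=0).predict(X)

# 2. Stabilized inverse-probability-of-treatment weights.
p_treat = df["treated"].mean()
df["iptw"] = df["treated"] * p_treat / ps + (1 - df["treated"]) * (1 - p_treat) / (1 - ps)

# 3. Weighted outcome model for a binary outcome (e.g., infection-related
#    hospitalization); robust or bootstrapped SEs would be used in practice.
outcome_model = sm.GLM(df["outcome"], sm.add_constant(df[["treated"]]),
                       family=sm.families.Binomial(),
                       var_weights=df["iptw"]).fit()
print(outcome_model.summary())

In practice, such an analysis would also include diagnostics of covariate balance in the weighted sample before the treatment
effect is interpreted.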
References
Introduction
Centers for Medicare & Medicaid Services. (2014). Pioneer ACO Model. Retrieved from
http://innovation.cms.gov/initiatives/Pioneer-ACO-Model/
Centers for Medicare & Medicaid Services. (2014). Fact sheets: Medicare ACOs continue to
succeed in improving care, lowering cost growth. Retrieved from
http://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2014-Fact-sheetsitems/2014-09-16.htm
Berwick, D. M. (2011). Launching accountable care organizations — the proposed rule for the
Medicare Shared Savings Program. The New England Journal of Medicine, 364(16).
Stecker, E. C. (2013). The Oregon ACO experiment — bold design, challenging execution. New
England Journal of Medicine, 368(11), 982-985.
Gadegbeku, C., Freeman, M., & Agodoa, L. (2002). Racial disparities in renal replacement
therapy. Journal of the National Medical Association, 94(8), 45S-54S.
McClellan, W.M., Newsome, B.B., McClure, L.A., Howard, G., Volkova, N., Audhya, P., &
Warnock, D.G. (2010). Poverty and racial disparities in kidney disease: The REGARDS
study. American Journal of Nephrology, 32(1), 38-46.
Miles, M. & Huberman, A. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.).
Thousand Oaks, CA: Sage Publications.
Gagnon, M. L. (2011). Moving knowledge to action through dissemination and exchange.
Journal of Clinical Epidemiology, 64 (1), 25-31.
Area Health Resources Files (AHRF). (2013-2014). US Department of Health and Human
Services, Health Resources and Services Administration, Bureau of Health Workforce,
Rockville, MD.
Office of the Assistant Secretary for Planning and Evaluation. (2014). Issue Brief: Health
Insurance Marketplace: March Enrollment Report, October 1, 2013 – March 1, 2014.
Retrieved from
http://aspe.hhs.gov/health/reports/2014/MarketPlaceEnrollment/Mar2014/ib_2014mar_en
rollment.pdf
Austin, P. C. (2011). An Introduction to Propensity Score Methods for Reducing the Effects of
Confounding in Observational Studies. Multivariate Behavioral Research, 46(3), 399–
424. doi:10.1080/00273171.2011.568786
Duncan, D. F. (2007). Epidemiology: Basis for disease prevention and health promotion.
Multicausality and webs of causation. Retrieved from
http://duncansepidemiology.tripod.com/id9.html.
Fahey, D. F., & Burbridge, G. (2008). Application of diffusion of innovations models in hospital
knowledge management systems: lessons to be learned in complex organizations.
Hospital Topics, 86(2), 21-31.
Fisher, E. S., Shortell, S. M., Kreindler, S. A., Van Citters, A. D. & Larson, B. K. (2012). A
framework for evaluating the formation, implementation, and performance of
accountable care organizations. Health Affairs, 31(11), 2368-2378.
Hogan, J. W., Roy, J. & Korkontzelou, C. (2004). Handling drop-out in longitudinal studies.
Statistics in Medicine, 23(9), 1455-97.
Jones, A. M. (2010) Models For Health Care. Health Econometrics and Data Group (HEDG)
Working Paper. Retrieved from
http://www.york.ac.uk/media/economics/documents/herc/wp/10_01.pdf
Laird, N. M. (1988). Missing data in longitudinal studies. Statistics in Medicine, 7(1–2):305–
315.
Lee, A. J., Garfinkel, S. A., Khandker, R. & Norton, E. C. (1997). The Impact of Medicare
SELECT on Cost and Utilization in Eleven States. Health Care Financing Review, 19(1),
19–40.
Li, F., Zaslavsky, A. M. & Landrum, M. B. (2013). Propensity score weighting with multilevel
data. Statistics in Medicine, 32(19), 3373-3387.
Manning, W. G., Basu, A. & Mullahy, J. (2005). Generalized modeling approaches to risk
adjustment of skewed outcomes data. Journal of health economics, 24(3), 465-488
MacMahon, B., Pugh, T. F. & Ipsen, J. (1960). Epidemiologic Methods. London: J. & A.
Churchill.
Deb, P., Manning, W. G. & Norton, E. C. (2013). Modeling Health Care Costs and Counts.
MiniCourse. iHEA World Congress in Sydney, Australia, 2013. Retrieved from
http://harris.uchicago.edu/sites/default/files/iHEA_Sydney_minicourse.pdf
Shrank, W. (2013). The Center for Medicare and Medicaid Innovation's Blueprint for Rapid-Cycle Evaluation of New Care and Payment Models. Health Affairs, 32(4), 807–812.
Stuart, E.A., DuGoff, E., Abrams, M., Salkever, D. & Steinwachs, D. (2013). Estimating Causal
Effects in Observational Studies Using Electronic Health Data: Challenges and (some)
Solutions. eGEMS (Generating Evidence & Methods to improve patient outcomes),1(3),
4.
McWilliams, J. M., Landon, B. E., Chernew, M. E., & Zaslavsky, A. M. (2014). Changes in
Patients' Experiences in Medicare Accountable Care Organizations. New England
Journal of Medicine, 371(18), 1715-1724.
?? Sean isn’t sure
Spybrook, J., Raudenbush, S. W., Xiao-feng, L., Congdon, R. & Martínez, A. (2011). Optimal
Design Software for Multi-level and Longitudinal Research (Version 3.01) [Software].
Available from www.wtgrantfoundation.org.
Appendixes
Appendix A.
Résumés
Appendix B.
Xxxxx
1
U.S. Renal Data System. (2013). USRDS 2013 Annual Data Report: Atlas of Chronic Kidney
Disease and End-Stage Renal Disease in the United States. National Institutes of Health,
National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2013.
Retrieved from http://www.usrds.org/2013/pdf/v1_00_intro_13.pdf
2
French, D. D., LaMantia, M. A., Livin, L. R., Herceg, D., Alder, C. A., Boustani, M. A. (2014).
Healthy aging brain center improved care coordination and produced net savings. Health
Affairs, 33(4), 613-8.
3
Pham, H. H., Cohen, M., & Conway, P. H. (2014). The Pioneer Accountable Care Organization
Model: Improving Quality and Lowering Costs. The Journal of the American Medical
Association, 312(16), 1635-1636.
4
Centers for Medicare & Medicaid Services. (2014). Fact sheets: Medicare ACOs continue to
succeed in improving care, lowering cost growth. Retrieved from
http://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2014-Fact-sheetsitems/2014-09-16.htm
6
Kaiser Health News. (2014). One-Quarter Of ACOs Save Enough Money To Earn Bonuses.
Retrieved from http://kaiserhealthnews.org/news/one-quarter-of-acos-save-enough-money-to-earn-bonuses/
7
Evans, M. (2014). BREAKING: 89 ACOs will join Medicare Shared Savings Program in
January. Modern Healthcare. Available from
http://www.modernhealthcare.com/article/20141222/NEWS/312229929?utm_source=link20141222-NEWS-312229929&utm_medium=email&utm_campaign=mh-alert
8
L&M Research, LLC. (2013). Evaluation of CMMI accountable care organization initiatives:
Effect of pioneer ACOs on Medicare spending in the first year. A report developed under
Contract # HHSM-500-2011-0009i/HHSM-500-T0002 for the Centers for Medicare & Medicaid
Services. Retrieved from http://innovation.cms.gov/Files/reports/PioneerACOEvalReport1.pdf
9
Lewis, V. A., McClurg, A. B., Smith, J., Fisher, E. S., & Bynum, J. P. (2013). Attributing
patients to accountable care organizations: performance year approach aligns
stakeholders’ interests. Health Affairs, 32(3), 587-595.
10
Luft, H. S. (2010). Becoming accountable—opportunities and obstacles for ACOs. New England
Journal of Medicine, 363(15), 1389-91. doi: 10.1056/NEJMp1009380
11
McWilliams, J. M., Chernew, M. E., Dalton, J. B., & Landon, B. E. (2014). Outpatient care
patterns and organizational accountability in Medicare. JAMA Internal Medicine, 174(6), 938-45.
12
Chiu, Y., Teitelbaum, I., Madhukar, M., Marie de Leon, E., Adzize, T., & Mehrotra, R. (2009).
Pill burden, adherence, hyperphosphatemia, and quality of life in maintenance dialysis patients.
Clinical Journal of the American Society of Nephrology, 4, 1089-1096.
13
Manley, H. J., Cannella, C.A. (2005). Nondialysis (home) medication utilization and cost in
diabetic and nondiabetic hemodialysis patients. Nephrology News Issues, 19(2):27-8, 33-4, 36-8.
14
Roach, J. L., Turenne, M. N., Hirth, R. A., Wheeler, J.R., Sleeman, K. S., & Messana, J. M.
(2010). Using race as a case-mix adjustment factor in a renal dialysis payment system: potential
and pitfalls. American Journal of Kidney Disease,56(5):928-36. doi:
10.1053/j.ajkd.2010.08.006.
15
Turenne, M. N., Cope, E. L., Porenta, S., Mukhopadhyay, P., Fuller, D. S., Pearson, J. M…
Robinson, B. M. (2014 Oct 9). Has Dialysis Payment Reform Led to Initial Racial Disparities in
Anemia and Mineral Metabolism Management? Journal of American Society of Nephrology.
16
Saunders, M. R. & Chin, M. H. (2013). Variation in Dialysis Quality Measures by Facility,
Neighborhood, and Region. Medical Care. 51(5):413-417.
17
Huang, X. & Rosenthal M, B. (2014). Transforming specialty practice--the patient-centered
medical neighborhood. New England Journal of Medicine,370(15):1376-9
18
Berenson, R. A., Hammons, T., Gans, D. N., Zuckerman, S., Merrell, K….Williams, A. F.
(2008).
A house is not a home: keeping patients at the center of practice redesign. Health
Affairs,27(5):1219-30. doi: 10.1377/hlthaff.27.5.1219
19
Shortell, S. M., McClellan, S. R., Ramsay, P. P., Casalino, L. P., Ryan, A. M., & Copeland, K.
R. (2014). Physician practice participation in Accountable Care Organizations: The
emergence of the unicorn. Health Services Research, 49(5), 1519-36.
20
Weiss, E. S., Anderson, R. M., & Lasker, R. D. (2002). Making the Most of Collaboration:
Exploring the Relationship Between Partnership Synergy and Partnership Functioning. Health
Education & Behavior, 29(6), 683–698.
21
Gawande, A. (2011). The hot spotters: Can we lower medical costs by giving the neediest
patients better care? The New Yorker; 40-51. Available at:
http://www.newyorker.com/magazine/2011/01/24/the-hot-spotters
22
Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st
Century. Washington DC: National Academy of Sciences.
23
Berwick, D. M., Nolan, T. W. & Whittington, J. (2008). The triple aim: Care, health, and cost.
Health Affairs27(3):759-69. doi: 10.1377/hlthaff.27.3.759.
24
Shrank, W. (2013). The Center for Medicare and Medicaid Innovation's Blueprint for Rapid-Cycle Evaluation of New Care and Payment Models. Health Affairs, 32(4), 807–812.
25
Weidmer, B. A., Cleary, P. D., Keller, S. D., Evensen, C., Hurtado, M. P., … Hays, R. D.
(2014). Development and evaluation of the CAHPS® survey for in-center hemodialysis
patients. American Journal of Kidney Diseases; 64(5):753-760.
26. Fisher, E. S., Shortell, S. M., Kreindler, S. A., Van Citters, A. D., & Larson, B. K. (2012). A framework for evaluating the formation, implementation, and performance of accountable care organizations. Health Affairs, 31(11), 2368-2378.
27. Wagner, E. H., Austin, B. T., Davis, C., Hindmarsh, M., Schaefer, J., & Bonomi, A. (2001). Improving chronic illness care: Translating evidence into action. Health Affairs, 20(6), 64-78.
28. Fahey, D. F., & Burbridge, G. (2008). Application of diffusion of innovations models in hospital knowledge management systems: lessons to be learned in complex organizations. Hospital Topics, 86(2), 21-31.
29. MacMahon, B., Pugh, T. F., & Ipsen, J. (1960). Epidemiologic Methods. London: J. & A. Churchill.
30. Duncan, D. F. (2007). Epidemiology: Basis for disease prevention and health promotion. Multicausality and webs of causation. Retrieved from http://duncansepidemiology.tripod.com/id9.html
31. Clark, V. L. P., & Creswell, J. W. (2011). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.
32. Morse, J. M., & Niehaus, L. (2009). Mixed method design: Principles and procedures (Vol. 4). Walnut Creek, CA: Left Coast Press.
33. Stuart, E. A., DuGoff, E., Abrams, M., Salkever, D., & Steinwachs, D. (2013). Estimating Causal Effects in Observational Studies Using Electronic Health Data: Challenges and (some) Solutions. eGEMS (Generating Evidence & Methods to improve patient outcomes), 1(3), 4.
34. Austin, P. C. (2011). An Introduction to Propensity Score Methods for Reducing the Effects of Confounding in Observational Studies. Multivariate Behavioral Research, 46(3), 399-424. doi: 10.1080/00273171.2011.568786
35. Li, F., Zaslavsky, A. M., & Landrum, M. B. (2013). Propensity score weighting with multilevel data. Statistics in Medicine, 32(19), 3373-3387.
36. Area Health Resources Files (AHRF). (2013-2014). US Department of Health and Human Services, Health Resources and Services Administration, Bureau of Health Workforce, Rockville, MD.
37. Office of the Assistant Secretary for Planning and Evaluation. (2014). Issue Brief: Health Insurance Marketplace: March Enrollment Report, October 1, 2013 – March 1, 2014. Retrieved from http://aspe.hhs.gov/health/reports/2014/MarketPlaceEnrollment/Mar2014/ib_2014mar_enrollment.pdf
38. McWilliams, J. M., Landon, B. E., Chernew, M. E., & Zaslavsky, A. M. (2014). Changes in Patients' Experiences in Medicare Accountable Care Organizations. New England Journal of Medicine, 371(18), 1715-1724.
39. Wood, R., Paoli, C. J., Hays, R. D., Taylor-Stokes, G., Piercy, J., & Gitlin, M. (2014). Evaluation of the consumer assessment of healthcare providers and systems in-center hemodialysis survey. Clinical Journal of the American Society of Nephrology, 9(6), 1099-1108.
40. Lee, A. J., Garfinkel, S. A., Khandker, R., & Norton, E. C. (1997). The Impact of Medicare SELECT on Cost and Utilization in Eleven States. Health Care Financing Review, 19(1), 19-40.
41. Manning, W. G., Basu, A., & Mullahy, J. (2005). Generalized modeling approaches to risk adjustment of skewed outcomes data. Journal of Health Economics, 24(3), 465-488.
42. Jones, A. M. (2010). Models for Health Care. Health Econometrics and Data Group (HEDG) Working Paper. Retrieved from http://www.york.ac.uk/media/economics/documents/herc/wp/10_01.pdf
43. Deb, P., Manning, W. G., & Norton, E. C. (2013). Modeling Health Care Costs and Counts. Mini-course, iHEA World Congress, Sydney, Australia. Retrieved from http://harris.uchicago.edu/sites/default/files/iHEA_Sydney_minicourse.pdf
44. Diggle, P. J., Sousa, I., & Chetwynd, A. G. (2008). Joint modelling of repeated measurements and time-to-event outcomes: The fourth Armitage lecture. Statistics in Medicine, 27(16), 2981-2998.
45. Murphy, T. E., Han, L., Allore, H. G., Peduzzi, P. N., Gill, T. M., & Lin, H. (2011). Treatment of death in the analysis of longitudinal studies of gerontological outcomes. The Journals of Gerontology, Series A, Biological Sciences and Medical Sciences, 66(1), 109-114.
46. Kurland, B. F., Johnson, L. L., Egleston, B. L., & Diehr, P. H. (2009). Longitudinal data with follow-up truncated by death: Match the analysis method to research aims. Statistical Science, 24(2), 211-222.
47. Shortell, S. M. (1990). Effective Hospital-Physician Relationships. Ann Arbor, MI: Health Administration Press Perspectives.
48. Hogan, J. W., Roy, J., & Korkontzelou, C. (2004). Handling drop-out in longitudinal studies. Statistics in Medicine, 23(9), 1455-1497.
49. Laird, N. M. (1988). Missing data in longitudinal studies. Statistics in Medicine, 7(1-2), 305-315.
50. QSR International Pty Ltd. (2012). NVivo qualitative data analysis software (Version 10).
51. Creswell, J. W. (2013). Qualitative inquiry and research design: Choosing among five approaches (3rd ed.). Los Angeles, CA: SAGE Publications.
52. Ragin, C. C. (2008). Qualitative Comparative Analysis Using Fuzzy Sets (fsQCA). In B. Rihoux & C. C. Ragin (Eds.), Configurational Comparative Analysis (pp. 87-121). Thousand Oaks, CA, and London: Sage Publications. Retrieved from http://www.u.arizona.edu/~cragin/fsQCA/software.shtml
53. Glaser, B. G. (1965). The Constant Comparative Method of Qualitative Analysis. Social Problems, 12(4), 436-445.
54. Gagnon, M. L. (2011). Moving knowledge to action through dissemination and exchange. Journal of Clinical Epidemiology, 64(1), 25-31.