Program Assessments Study
Work Plan
2010-2012 Nonresidential Program Process Evaluation
Submitted to:
Energy Division
California Public Utilities Commission
505 Van Ness Ave.
San Francisco, CA 94102
Submitted by:
Itron, Inc.
1111 Broadway, Suite 1800
Oakland, CA 94607
(510) 844-2800
November 10, 2011
Table of Contents
1 Introduction ........................................................................................................... 1
1.1 General Approach .......................................................................................... 2
1.2 Evaluation Objectives and Researchable Issues ........................................... 3
1.3 Study Areas and Program Groups ................................................................. 5
2 Methodology .......................................................................................................... 8
2.1 Definition of Terms ......................................................................................... 8
2.2 Program Decomposition Model .................................................................... 10
2.3 Quantitative Outcome Metrics ...................................................................... 13
2.4 Program Context Characteristics ................................................................. 15
2.5 Benchmarking Metrics ................................................................................. 17
3 Coordination with EM&V Portfolio ..................................................................... 20
3.1.1 Coordination with the Audit Impact Study and the Evaluability Assessment for
LGP and Third Party Programs (Work Order 36) ..........................................................20
3.1.2 Coordination with the Portfolio Strategy and Management Assessment Project .........21
3.1.3 Coordination with Nonresidential Downstream Lighting Impact Evaluation (Work
Order 29) .......................................................................................................................21
3.1.4 Coordination with Custom Impact Evaluation (Work Order 33) ...................................23
3.1.5 Participant Survey Coordination ...................................................................26
4 Scope of Work ..................................................................................................... 27
4.1 Program Group Assessment Scope of Work ............................................... 27
4.2 Summary Report and Administrative Model Comparison ............................ 32
4.3 Database and Documentation ..................................................................... 34
4.4 Draft Program Assessment Group Report Outline ....................................... 34
4.5 Draft Summary Report and Administrative Model Comparison .................... 36
5 Work Plan Timeline and Budget......................................................................... 37
6 Appendices .......................................................................................................... 38
6.1 Program List with Mapping to Program Assessment Groups....................... 38
6.2 Previous Itron Best Practices Study Data Collection Instrument .................. 38
6.3 Previous Itron Best Practices Program Benchmarking Tool ........................ 38
6.4 Net-to-Gross Survey Instrument .................................................................. 38
6.5 Downstream Lighting Impact Evaluation Programs and Measure Group
Detail ........................................................................................................... 39
6.6 Custom Impact Evaluation Programs and Measure Group Detail ................ 42
List of Tables
Table 1-1: Summary of Study Areas .......................................................................... 6
Table 3-1: WO29 Nonresidential Downstream Lighting Impact Evaluation
Timeline............................................................................................................. 22
Table 3-2: Custom Impact Study Timeline .............................................................. 25
Table 5-1: Summary of Project Timeline .................................................................. 37
Table 5-2: Summary of Preliminary Budgets
Table 6-1: Gross Savings and NTG Program Groupings for PG&E's Programs ....... 39
Table 6-2: Gross Savings and NTG Program Groupings for SCE’s Programs ........ 40
Table 6-3: Gross Savings and NTG Program Groupings for SDG&E’s Programs .. 41
Table 6-4: WO033 2010 Savings Claims by Program ............................................. 43
List of Figures
Figure 6-1: 2010 WO33 Savings Claim by Aggregate Measure Group ................... 46
1 Introduction

This research plan addresses the Nonresidential Program Assessments Project, which covers the 2010-2012 California investor-owned utilities' (IOU) energy efficiency programs.
Program evaluation adds value by providing feedback and actionable recommendations to
improve program strategies, portfolio strategies, and implementation. Feedback is also valuable
as evidence to stakeholders that programs achieve intended outcomes. Ideally, evaluation
considers the energy impacts and the market effects of a program; it does so in light of the
strategies employed and the policy objectives and characteristics of the target markets. The
nonresidential portfolio of programs is a large and varied set. It includes over 200 programs with
various administrative models, a range of objectives, a large number of different markets, a mix
of strategies, and a diverse set of technologies. Consequently, it is neither feasible nor
necessarily cost effective to perform impact evaluation, or even process evaluation, on each individual program.
This project is designed to cost effectively and systematically provide meaningful feedback and
lessons learned associated with this large pool of programs. This project seeks to integrate
available quantitative measures of program outcomes, such as impact, cost effectiveness, and net-to-gross ratios, with comprehensive reviews of design, management and implementation; all
within the context of program policy objectives and markets of operation. To accomplish this,
the project will draw from the techniques and findings developed for the Itron Best Practices
Study (see www.eebestpractices.com). This project will complete due-diligence reviews of
programs against known best practices and identify lessons learned and new best practices for
the California IOUs' primarily nonresidential energy efficiency programs. In addition, the study
will leverage other concurrent evaluation efforts to supplement the primary data collection for
analysis.
Parties urge the Commission to increase the use of third party and local government partnership
programs. At the same time there is a call to lessen the number and complexity of energy
efficiency programs. This study seeks to provide input on which programs might best support
the high priority objectives of cost effective, comprehensive long term savings. This study is
designed to assist in determining which programs have best helped, or are in the best position to
help, achieve deeper and more comprehensive retrofits and to provide long term market effects
and non-energy benefits. The Commission seeks input on how to optimally restructure the
portfolio in the bridge period (2013-2014) and beyond. This Study will assist stakeholders in
understanding the performance of programs, program types, and administrative models. It will
examine and consider how this performance relates to the marketplace, to designs, to
management and implementation, and to the unique characteristics of the administrative models.
1.1 General Approach
For this project the programs will be grouped into categories of similar design and orientation,
and assessments will be performed for each Group. The similarities in objectives and design
support a much greater degree of comparability from which to assess best practices. As stated
above, the assessments will leverage techniques and findings from the previous Best Practice
Benchmarking for EE Programs (see, for example, www.eebestpractices.com), build off the
lessons learned from that effort, and make adjustments as necessary given any unique features or
objectives of particular program Groups.
The Program Assessments will be performed at the Group level. However, each Group
assessment will contain common elements addressing a common set of research questions and
use a similar set of tools. Moreover, reports from each Group assessment will have a common
structure set forth by the Project Coordination Group. In addition to Group specific reports,
there will be a Summary Report presenting key findings across the Group Reports. The
Summary Report will make informative comparisons to answer a number of higher-order
questions. This will include a review and comparison across ‘administrative models’ by which
we mean the particular combination of administration and implementation represented by
government partnerships, third party programs and IOU Core programs. The objective is to
explore potential comparative advantages in meeting different policy objectives in different
markets. The need for such comparisons is driven by a desire among policy makers and the
IOUs to evolve these models and to further optimize their deployment to improve portfolio
performance.
This document provides the guidelines and work plan governing the project as a whole as well as
the common elements of each Program Group Assessment. Each individual Program Group
Assessment Team will adapt these guidelines into their own customized plan for Assessment,
taking into account the particular features and research needs relating to the Group. Each
Program Group Assessment Plan will involve a set of program reviews. The Assessment results
will not yield a ‘score’ for each program, but may summarize the ordinal ranking of programs
within a Group or category.
The Group Assessments will be based on interviews with program managers and reviews of
program documentation and collateral, as well as statistics related to the ‘outcome’ of programs,
such as dollars per kWh saved and market penetration. It may also rely on trade ally interviews,
interviews with Account Executives, and input from other EM&V activities to provide a more
comprehensive review of programs. The Assessment will provide a characterization of each
program, a review against known best practices, and the development of new best practices.
These assessments will focus on the four “controllable” program components: design,
management, implementation, and evaluation. Each of these is described in detail in Section 2.2.
1.2 Evaluation Objectives and Researchable Issues
This study is designed to improve understanding of program performance, to document lessons
learned, to improve current and future program performance, and to better optimize future
portfolio strategies. As such, the study will provide timely feedback to the IOUs' program personnel to
support real-time corrective actions, as necessary. It will also provide feedback to portfolio
strategists and high level-decision makers regarding the relative performance of program types,
and the particulars of their relative strengths and weaknesses. In particular, this study will
provide feedback to the CPUC Energy Division to support the refinement of policy governing
the mix of programs offered, as well as program design, evaluation and reporting requirements.
The Program Assessment objectives and the researchable issues it seeks to examine are
described as follows:
Program Group Operating Landscape:

- What are the recent trends in markets and within technologies that may change the relative importance of the barriers to energy efficiency addressed by this Program Group?
- What are the projected or recent changes in codes and standards that may alter the design objectives or efficacy of design strategies?
  o What implications do these have on optimal program design?
- What are the critical policy objectives supported by the programs? Are there trends or changes in policy that affect the program objectives and outcomes?
- What are the new and innovative programmatic strategies and do they represent improved adaptation to markets or recent trends?
Alignment to Best Practices:

- For each program group, program category and individual program examined:
  o How do program practices align to the known set of best practices?
  o What are the areas of most opportunity for improvement with respect to best practices?
  o What are the important differentiating program context characteristics¹?
  o What evidence emerges from a review of practices, outcomes and contextual data that confirms or contradicts the supposition that known best practices lead to success?
- What, if any, updates or refinements of best practices are implied by these findings?
- What changes are recommended for programs to rise to their highest potentials?
Performance Characterization, Lessons Learned and New Best Practices:

- For each Program group or category, what are the generalized and transferable best practices implied by reviews of program practices, outcomes and contextual data?
- Who is excellent at what they do? How are they doing it and why does it work?
- What are the implications of newly identified best practices on resource allocations to programs by category and type?
- Based on these findings:
  o What, if any, changes are recommended to the design and implementation of specific programs and program groups?
  o What, if any, changes are recommended to overall portfolio strategy and management?
  o What, if any, changes are recommended to policies governing portfolio design, evaluation and reporting requirements?
Administrative Model Comparison
This Study element explores potential comparative advantages of the three program
“administrative models,” by which we mean the particular combination of administration and
implementation represented by local government partnerships, third party programs and IOU
Core programs. Research into potential comparative advantages must consider variations in
policy objectives, strategies, and the particular characteristics of the customers and markets
addressed. In the face of the wide range of possible combinations and permutations of these
variables, this study must rely on a directed and inductive approach. More precisely, this study
component will identify a set of hypotheses that can be tested, and where the results from such
tests will help to inform how models evolve, rather than to determine a static state that may be
presumed. The hypotheses selected for testing will be identified through a process involving
¹ See Section 2.4, Program Context Characteristics, for a full discussion.
conversations with stakeholders as well as a careful look back at the origins and evolution of the
various ‘models’.
1.3 Study Areas and Program Groups
This section describes the main Study Areas and Program Groups addressed by the Program
Assessments Study.
Table 1-1 below presents the primary study areas. As discussed above, the PA Project is made
up of multiple Program Group Assessments, as well as a Summary Study and Administrative
Model Comparison. Most of the main study areas are dedicated to Program Groups, but there is
also one dedicated to management and coordination, and another to the Summary Study/Administrative Model piece.
The Group Assessments are bound together by a common set of guidelines and tools. Although
this document presents a fairly complete starting point for the data collection and benchmarking
tools, there will likely be some refinement of these as the PCG considers them carefully over the
next several weeks. The management and coordination activity will support the ongoing
management and coordination of guidelines, as well as the development of reporting templates
and the assurance of cross-group consistency and comparability.
The Program Groups were created through a basic design review of all the nonresidential and
cross cutting programs. Through this high level review, all programs were placed into Program
Groups. Following this grouping exercise, a subset of the groups was selected for study. Those
Groups are shown in Table 1-1 below. Also shown are the percentage of the total nonresidential
and cross cutting portfolio that the Group represents of certain key metrics, including spending
and savings figures. A full list of the nonresidential and cross-cutting programs and their
associated groupings is attached in the Appendices, Section 6.1.
Table 1-1: Summary of Study Areas

Study Area                                        | Number of Programs | Spent-to-date | kWh to date | kW to date | Therms to date
Program Assessments Core Tools/Mgt & Coordination | -   | -   | -   | -   | -
Core: Audits & Pump Test                          | 16  | 2%  | 1%  | 2%  | 1%
Core: Calculated                                  | 12  | 17% | 17% | 12% | 55%
Core: Deemed                                      | 14  | 26% | 34% | 41% | 14%
LGP: SCE/SCG                                      | 37  | 1%  | 1%  | 1%  | 0%
LGP: PG&E Energy Watch                            | 22  | 6%  | 4%  | 4%  | 0%
LGP: SDG&E                                        | 9   | 1%  | 0%  | 0%  | 0%
LGP: Gov’t/Institutional Partnerships             | 17  | 5%  | 2%  | 2%  | 3%
Third Party Resource: Agriculture                 | 8   | 1%  | 0%  | 0%  | 1%
Third Party Resource: Commercial                  | 49  | 13% | 10% | 9%  | 5%
Third Party Resource: Industrial                  | 20  | 6%  | 4%  | 2%  | 17%
Administrative Model Comparison Study             | -   | -   | -   | -   | -
Total                                             | 204 | 78% | 75% | 73% | 96%
In general, the Program Groups with a comprehensive ongoing evaluation effort were not
selected for Study. Program groups excluded for this reason, and the related ongoing studies are
as follows:
- New Construction. Ongoing Studies: 1037, 1057
- HVAC. Ongoing Studies: WO14, WO32², 1041³
- Pilots related to Emerging Technologies (ETP). Ongoing Study: WO15
- Workforce Education and Training (WET). Ongoing Study: 1035
- Marketing, Education and Outreach (MEO). Ongoing Study: 1043
- Zero-Net-Energy (ZNE). Ongoing Studies: 1039, 1040, 1045
- Lighting Market Transformation. Ongoing Studies: 1017, 1018
- Codes & Standards. Ongoing Studies: 1038, 1050, 1051, 1053
- CEI. Ongoing Study: 1023
- OBF. Ongoing Study: WO12
- IDSM. Ongoing Study: WO16
² Impact evaluation of residential and small commercial HVAC.
³ This is a market characterization/behavioral study.
Some groups are excluded because their program features may be more difficult to fit into the
relatively standardized format that we propose, given overall priorities and budget constraints.
Pilots and the non-resource third party program groups fall into this category.
2 Methodology
This section presents the core concepts, principles and techniques that will govern the Program
Group Assessments.
2.1 Definition of Terms
The list below provides definitions of terms used extensively in this research plan to describe the
study methodology. These items are discussed in detail in the remainder of Section 2.
Program Decomposition – refers to the process of disaggregating programs into underlying
subparts to allow for analysis of specific program features. Two levels of decomposition are
planned – a primary decomposition into components and a secondary decomposition into sub-components.
Program Component – refers to the first level of our program decomposition, which we further
disaggregate into sub-components. We will decompose programs into four primary components:
program design, program management, program implementation, and evaluation.
Program Sub-component – is a further disaggregation of a program component. The program
decomposition model consists of the following sub-components:

- Program Design: Program Theory/Linkages & Partnerships, Structure/Steps/Processes & Procedures
- Program Management: Project Management, Reporting & Tracking, Quality Control & Verification
- Program Implementation: Outreach/Marketing/Advertising, Participation Process & Customer Service, Installation & Delivery

These sub-components are further defined in Section 2.2.
Crosscutting Outcome Metrics – are the quantitative measures of program performance at the
overall program level. Crosscutting metrics include:
- Dollars per kWh and kW saved; Non-incentive cost per customer; Market Penetration, Adoption, and Saturation Rates; and Sustainability/Market Effects.
Some crosscutting metrics, such as $ per kWh saved, are directly quantitative. Other
crosscutting metrics, such as sustainability and market effects, will require a judgmental scoring
based on the information available to the study team. Each crosscutting metric for each in-scope
program will ultimately receive an ordinal ranking of first, second, or third tier. In some cases
no ranking will be provided if the metric is related to an element not incorporated in the
program's design. For example, some programs may not be designed to achieve sustainability,
while others may not have measured or measurable energy savings. Scores will not be published
with program names, i.e., associations will be anonymous (e.g., Program 1, Program 2, etc.).
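
To make the ordinal tiering concrete, the sketch below assigns hypothetical programs to first, second, or third tier on a single quantitative metric and anonymizes the labels before reporting. The tercile cut-off rule, program names, and metric values are illustrative assumptions, not prescriptions from this plan.

```python
# Minimal sketch of tier assignment on a hypothetical $/kWh-saved metric.
# Lower cost per kWh is better, so the lowest third lands in Tier 1.

def assign_tiers(values):
    """Map {program: metric} to {program: tier}, tier in {1, 2, 3}."""
    ranked = sorted(values, key=values.get)            # best (lowest) first
    cut = max(1, len(ranked) // 3)
    return {name: (1 if i < cut else 2 if i < 2 * cut else 3)
            for i, name in enumerate(ranked)}

# Hypothetical metric values; None marks a metric that does not apply to a
# program's design, so no tier is assigned (per the rule stated above).
cost_per_kwh = {"Audit Pilot": None, "Retrofit A": 0.021,
                "Retrofit B": 0.034, "Retrofit C": 0.058}
scorable = {k: v for k, v in cost_per_kwh.items() if v is not None}
tiers = assign_tiers(scorable)

# Published results are anonymized (Program 1, Program 2, ...).
for i, name in enumerate(sorted(tiers), start=1):
    print(f"Program {i}: Tier {tiers[name]}")
```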
Component/Sub-component Metrics – Many individual sub-components will also have their
own metrics, which will be used for ordinal scoring. Each sub-component for each in-scope
program will receive a ranking of first, second, or third tier. The basis for these scores may be
quantitative, qualitative, or a combination. Scores will not be published with program names.
The scores will be used by the Project Team to help organize the analysis.
Best Practice – as defined in this study, will refer to the specific and generalized features of
program sub-components that receive a Tier 1 score. The focus of this study is on the
sub-component, not the overall program, with the intention of providing recommendations to align
current programs with best practices, and to identify best practices that can be generalized and
have a high likelihood of transferability to other programs.
Program Context Characteristics – the outcome of a program also depends on the context in
which it operates. Understanding that context will be critical to the analysis process: wherever
possible, we will analyze the changeable decomposed program elements in light of a program's
less mutable context. To facilitate this process, we identified several contextual elements to
include in the data collection process and consider during the analysis. As described later in this
section, we divide these characteristics into two categories: program design policy elements, and
socio-economic and other immutable factors.
Program Categories – are the basis for grouping “like” programs to compare across components
and sub-components. There may be multiple Program Categories within a single Program
Assessments Group. The categories will be used in the process of selecting which programs to
benchmark, in the specific benchmarking approach and in interpretation and analysis of results.
Program categories may be defined in a number of ways, for example, as a function of target
market (e.g., sector, vintage, segment, end use, value chain, urban/rural); approach (e.g.,
information-focused, incentive-focused [prescriptive; custom/performance based], etc.);
objective (e.g., resource acquisition, market transformation, equity, etc.), and geographic scope
(e.g., local, utility service territory, state, region, nation); among other possible dimensions.
2.2 Program Decomposition Model
Program decomposition refers to the process of disaggregating programs into underlying
subparts to allow for analysis of specific program features. Two levels of decomposition are
planned – a primary decomposition into components and a secondary decomposition into sub-components. Our approach utilizes systematic decomposition to define and analyze components
and sub-components for each program. We plan to decompose programs into four components:
program design, program management, program implementation, and evaluation. Each of these
is further decomposed into sub-components (as discussed in the following sub-sections).
Decomposition into components and sub-components will serve several purposes. First, the goal
of the project is to characterize current program practices and benchmark those practices against
best practices within specific program elements such as marketing, tracking systems,
participation processes, etc. The results will be a better understanding of program performance
and opportunities for improvement, as well as an improved understanding of practices likely to
have transferable value to others. The decomposition provides a uniform approach to compare
programs and is well suited to developing new or refining existing programs. These
programmatic building blocks will also permit cross comparison of program components from
multiple sectors, and across administrative models where similarities and differences can be
isolated and understood within the context of program outcomes and contextual characteristics.
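
As one way to picture the two-level decomposition, the sketch below encodes the components and sub-components named in this plan as a simple nested mapping; the dictionary form itself is an illustrative choice, not a schema prescribed by the study.

```python
# The four program components and their sub-components, as named in this
# work plan, expressed as a nested mapping for cross-program comparison.
PROGRAM_DECOMPOSITION = {
    "Program Design": [
        "Program Theory/Linkages & Partnerships",
        "Structure/Steps/Processes & Procedures",
    ],
    "Program Management": [
        "Project Management",
        "Reporting & Tracking",
        "Quality Control & Verification",
    ],
    "Program Implementation": [
        "Outreach/Marketing/Advertising",
        "Participation Process & Customer Service",
        "Installation & Delivery",
    ],
    "Evaluation": [
        "Evaluation & Adaptability",
    ],
}

# Each (component, sub-component) pair becomes one unit of review, which is
# what permits cross comparison across sectors and administrative models.
review_units = [(c, s) for c, subs in PROGRAM_DECOMPOSITION.items()
                for s in subs]
```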
Program Decomposition – Program Design
Program design subcomponents are focused on laying a solid foundation for a successful
program. Good program design begins with good program theory and a complete understanding
of the marketplace. Baselines are also important when evaluating success, while contingency
planning can stop projects from stalling indefinitely. The program design category is
decomposed according to the elements described below.
Program Theory, Linkages & Partnerships – Program theory and related design elements are
subjective in nature and cannot be measured by a quantitative metric such as $/kWh. However,
projects that demonstrate a clear “story” and understanding of the market, and have developed
the right linkages and partnerships to successfully target that market, are likely to be more
successful than programs that lack such characteristics. In addition, programs that have
developed contingency plans as part of the design are also more likely to be successful.
Program Steps, Processes & Procedures – Like any complex project, successful energy
efficiency programs require well thought-out processes and procedures. Programs that clearly
articulate the steps involved in implementation, as well as clearly delineate management
responsibilities and structures have a higher likelihood of succeeding relative to those that do
not. Design processes likely to be among the best practices will be those that fully describe the
management and organizational structures necessary to optimize program performance and
include testing of procedures.
Program Decomposition – Program Management
We decompose program management into the following subcomponents:
Project Management – A key function of program management is project management. Project
management effectiveness is likely to be correlated with the effectiveness of the
management/organizational structure plan developed during program design. Project
management represents the ability of the implementer to cost-effectively manage all aspects of
the programmatic process by effectively executing the management/organizational plan. Project
management effectiveness is especially critical for implementers of large, complex programs or
programs with multiple sub-contractors or other partners.
Project Reporting & Tracking – For the purposes of this effort, tracking is defined as the systems
and units of measure that provide an indication of program participation, budgets, markets and
other program data. Reporting is defined as the products associated with accessing and using the
information in the tracking systems to communicate and improve the program, both internally
and externally. Clear concise reports that track, for example, progress towards milestones and
current expenses compared to projected levels are invaluable to program managers. Programs
with standardized, comprehensive, and periodic reports will be more likely to identify problems
early than those that lack such systems. In addition, choosing the right unit of measure to track a
program can also be a predictor of success.
Quality Control and Verification – We take a broad definition of the term quality control,
meaning it to encompass both the quality control of the program processes as well as the quality
control of program equipment or measures. Verification is more narrowly defined as ensuring
that measures were actually installed, audits were actually performed etc. Systems for assessing
the quality of program delivery, and for verifying the accuracy and prudence of tracking data,
equipment and payments, are key to satisfied customers and successful programs. Programs that
lack comprehensive quality control procedures are more likely to suffer from errors (such as
tracking and payment) that reduce overall program effectiveness or result in poor customer
satisfaction, which can reduce participation by word-of-mouth to other potential participants.
Program Decomposition – Program Implementation
Implementation can be broken into a number of subcomponents; our decomposition consists of
outreach/marketing/advertising, the participation process and customer service, and installation
& delivery mechanisms.
Program Outreach/Marketing/Advertising – Program marketing and outreach approaches can be
compared for their effectiveness. For example, measures of $/participant or participation rate
could be used to compare (ordinal) one program versus another. To further assess marketing
effectiveness, indicators of $ per end user made aware or knowledgeable about a program or
service could also be benchmarked, to the extent that such data can be consistently collected
across programs. Note the need to give careful consideration to the populations being reached
and their relative similarity. Clearly, hard to reach (HTR) and non-HTR populations are
different, as are training and incentive programs.
Participation Process & Customer Service – The ease or difficulty of a program’s participation
process, and the associated customer service support, can both be critically important indicators
of ultimate program success. The participation process and customer service element is
comprised of the procedures, forms, communications, and other interactions that occur among
prospective and ultimate participants and program implementers. Some programs that may have
all of the other attributes of success may be sub-optimal simply because the process of
participation is unduly burdensome, or because the customers are not getting high levels of
responsiveness from program administrators.
Installation & Delivery Mechanisms – Installation & Delivery picks up the implementation
process at its finale. To what extent do the program’s implementation and design features carry
through to the implementation process? Some programs may score well on outreach and
marketing but result in few actual installations of efficiency measures or other ultimate indicators
of success (e.g., increase in knowledge of efficiency options for trade allies participating in a
training program). The effectiveness of any financial incentives would be captured under this
sub-component; however, financial incentives may be important enough to decompose into their
own separate subcomponent (at a minimum, incentive levels will be tracked explicitly in the
program database). Programs that finish strongly with respect to installation and delivery of
final products and services will score higher on this indicator than those that do not.
Program Decomposition – Evaluation and Adaptability
In addition to the design, management and implementation components, we believe that
programs should also be screened for the effort that has been put into evaluating their
effectiveness, and for their effectiveness at adapting to evaluation findings and changing market
conditions. For example, programs that are carefully evaluated and adjusted to ensure their
effectiveness, and that can rapidly adapt to actual and changing market conditions, are more
likely to be effective. Rigid programs that are designed, managed or implemented in such a way
as to make adaptability impossible are more likely to fail. We include this element in the
analysis to capture program features that promote adaptability.
2.3 Quantitative Outcome Metrics
The program components and subcomponents provide the breakdown of the various aspects of
the program that program implementers can modify and improve to create better programs. The
overall outcome of a program, however, is often measured through high-level metrics such as
$/kWh saved. Each Program Assessment will collect, track, and analyze quantitative outcome
metrics for member programs to help determine the impact of different subcomponents on the
overall impact of a program. Note, however, that these outcome measures, by themselves, are
often poor proxies for programmatic best practices because of the many confounding contextual
and other variables that underlie them as well as the significant differences in budget and
program impact tracking and measurement around the country. We will attempt to collect data
on the following outcome metrics: cost effectiveness (e.g., $/kWh saved, TRC, etc.); net market
penetration rates, participant adoption rates, and measure saturation levels; and
sustainability/market effects. Each quantitative metric for each in-scope program will ultimately
receive an ordinal ranking of first, second, or third tier. Metrics will not be published with
program names, i.e., associations will be anonymous (e.g., Program 1, Program 2, etc.).
Cost Effectiveness Indicators ($/kWh or $/kW Saved, Benefit-Cost Ratios)
These indicators are very attractive as overall quantitative measures of a program’s effectiveness
because total program impacts can often be compared with total dollars spent. Unfortunately, in
practice, extreme care and caution must be applied to collecting and assessing this indicator. A
key limitation on the usefulness of these indicators is the extent to which all costs and impacts
are properly and consistently accounted for across programs. At a minimum, this figure should
be tracked on both a net and gross impact basis, where possible. The Portfolio Strategy and
Management Assessment project may provide more refined cost numbers, or insight into the
available cost figures.
In addition, while cost effectiveness is usually a discrete, quantitative number, it needs to be
analyzed within the context of a program’s environment and goals. For example consider two
commercial programs, one focused on cost-effectiveness, and another on equity. Sole
consideration of cost-effectiveness would imply targeting the largest commercial customers,
while equity would imply targeting smaller hard-to-reach customers. Correlating program
outcomes to help determine best practice components would depend of the contextual definition
of what is “best”. Because their objectives are negatively correlated with respect to costeffectiveness, one would not want to directly compare these programs against each other for this
indicator. One could, however, make comparisons relative to other programs with the same
objective. Inappropriately using metrics without consideration of cross-purpose goals will result
in erroneous comparisons and inaccurate policy conclusions.
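
A short worked sketch of the indicator follows, tracking $/kWh on both a gross and a net basis as recommended above; all dollar, savings, and NTG figures are hypothetical.

```python
# Hypothetical cost-effectiveness calculation on gross and net bases.
total_program_cost = 1_200_000      # total $ spent to date
gross_kwh_savings = 15_000_000      # claimed (ex ante gross) kWh
ntg_ratio = 0.62                    # evaluated net-to-gross ratio

net_kwh_savings = gross_kwh_savings * ntg_ratio       # 9,300,000 kWh

cost_per_gross_kwh = total_program_cost / gross_kwh_savings  # $0.080/kWh
cost_per_net_kwh = total_program_cost / net_kwh_savings      # ~$0.129/kWh
```

The gap between the two figures is exactly why the plan calls for tracking this indicator on both bases wherever possible.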
Net Penetration Rates, Participant Adoption Rates, and Measure Saturation Levels
These can be some of the most important indicators of the effectiveness of resource acquisition
programs; unfortunately, they are also some of the least well tracked and, surprisingly, often
poorly understood. As discussed above under cost-effectiveness, $ per unit of net impact
generally provides a more robust indicator of success than does $ per unit of gross impact.
Although important and helpful to understanding program effectiveness, net impacts alone do
not tell the whole story. Ideally, one wants to be able to examine the rate and level of efficiency
adoptions as well. For example, a program may have a reasonable net-to-gross ratio but still
have a relatively low (and slow) rate of market penetration. As a result, one program may be
more cost-effective than another but be less likely to result in any significant change in
efficiency market share over a given period of time. Key challenges with these indicators are
defining and collecting data on the denominator needed for their calculation (e.g., what is the
appropriate population or subpopulation that should be used to divide the efficiency actions).
Few programs track all of the in-program and out-of-program data needed to measure these
indicators.
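
A minimal sketch of the calculation, under hypothetical values, shows both the indicator and the denominator problem noted above: the eligible-population figure is the out-of-program data point that few programs track.

```python
# Hypothetical penetration-rate calculation for a resource program.
gross_participants = 2_400          # tracked in-program adopters
ntg_ratio = 0.75                    # evaluated net-to-gross ratio
net_adopters = gross_participants * ntg_ratio          # 1,800

# The denominator requires out-of-program market data that is rarely tracked.
eligible_population = 42_000        # customers who could adopt the measure

penetration_rate = net_adopters / eligible_population  # ~4.3%
```

As the text notes, a program can pair a reasonable NTG ratio (here 0.75) with a low penetration rate (here about 4%), so neither number alone tells the whole story.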
Sustainability/Market Effects
Sustainability is an important crosscutting indicator of program effectiveness. Programs that
create lasting market effects are more beneficial than those that do not, all else being equal.
Persistence of savings can also be an element of sustainability. The proportion of evaluation
effort placed on examining market change sustainability versus persistence of savings may
depend upon the desire for resource acquisition versus market transformation at any point in time
in a jurisdiction. Currently in California there is a renewed interest in market transformation and
sustainability. Consideration of these program elements will be a high priority area for the
Study.
Program Performance Metrics (PPMs) and Market Transformation Indicators (MTIs)
The CPUC has worked in conjunction with the IOUs to develop program performance metrics
and market transformation indicators to complement and enhance other quantitative measures of
program outcome. The PPMs and MTIs are intended to support comparisons across years and
across programs on a variety of program objectives, including penetration and transformation
accomplishments. This evaluation will leverage these statistics as another quantitative
measurement of program accomplishments. As with all quantitative measurements, PPM and
MTI statistic interpretation and weight will be tempered by the program context characteristics
discussed next.
2.4 Program Context Characteristics
In addition to the changeable program elements outlined in Section 2.2 Program Decomposition
Model, the outcome of a program also depends on the context in which it operates.
Understanding that context is critical to the analysis process: wherever possible, each Program
Group will track and analyze the changeable decomposed program elements in light of a
program's less mutable context. To facilitate this process, several key contextual elements are
identified for tracking. These elements can be organized into two broad categories: program
design policy elements, and socio-economic and other immutable factors.
Program Design Policy Elements
Energy efficiency programs and portfolios are often designed with specific policy objectives in
mind, and those objectives can often impact the outcome of a program. For example, programs
that target hard-to-reach areas may not exhibit the same rates of participation as those that do
not. A correct analysis should take that design policy element into account. Below is a list of
the types of design policy elements we will attempt to track and consider:
Energy efficiency policy objectives – policies that emphasize different goals such as market
transformation, resource acquisition, equity, etc. will drive different program designs and
program objectives.
Market barriers addressed – programs that seek to mitigate difficult barriers may have poorer
performance-related metrics because they attack tough problems, in contrast to programs that
may have excellent ostensible metrics because of cream skimming.
Measure mix – the mix of measures installed in a program can significantly affect a program’s
cost-effectiveness. For example, residential program cost-effectiveness can vary several-fold
simply as a function of the year-to-year mix of CFLs as compared to other measures.
Demand/energy – the extent of peak demand versus energy focus of the program can, by
definition, affect the cost-effectiveness of the indicator in question (e.g., a peak demand oriented
program may score poorly on an $/kWh metric). This can be considered a part of the measure
mix factor listed above.
Multi-year policy objectives – if consistent, help programs to achieve goals that require medium
to long-term market presence and extensive program infrastructure; if inconsistent, make
achievement of such goals more difficult.
Multi-year funding levels – if consistent, allow programs to set multi-year goals and maintain
consistent presence and messages among end-users and supply-side market actors; if
inconsistent, make maintaining a stable market presence more difficult.
Program/Market Lifecycle – where a program or key measure is in its product lifecycle will
affect its cost-effectiveness. For example, a program seeking impacts from the last 50 percent of
the market to adopt a product that has penetrated the first 50 percent of the market should be
expected to be more costly than one attacking a market with a low or insignificant saturation
level. There are at least two reasons for this. First, in more highly saturated markets, it is more
difficult to find the remaining measure opportunities and, second, the remaining market is
typically characterized by late majority and laggard organizations that are more resistant to
adopting new products and practices. In addition, a program in the first-year of a multi-year plan
to impact a market may have poor first-year metrics because of the associated startup costs and
time it takes to create awareness and other program effects.
Socio-Economic and Other Immutable Factors
Beyond program design policy elements, there are many broader socio-economic factors and
other immutable factors that can affect the outcome of the program. The team has identified the
following, though this list is not meant to be comprehensive:
Climate – for example, HVAC measures are more cost-effective in severe climates than in mild
climates because absolute savings are strongly a function of base usage levels.
Customer/target market actor mix – the mix of customers and trade allies often plays a role in
cost-effectiveness, for example, a program in a market with larger commercial customers will
tend to be more cost effective than an identical program in a market of smaller commercial
customers, all other things being equal; similarly, programs with customer segments with longer
full-load equivalent hours will be more cost-effective than those with lower average full-load
hours of operation.
Customer density – delivering an energy efficiency program to a relatively dense population base
will be less costly than delivering to a sparser population, all other things being equal.
Customer Energy Rates – higher electricity rates should lead to higher levels of measure
adoption, all else being equal.
Economic Conditions – willingness to invest in new products and practices changes in response
to short-term economic conditions, which may vary across regions.
Customer Values – efficiency program effectiveness can vary as a function of differences in
customer values, again, all else being equal.
2.5 Benchmarking Metrics
Program reviews will address each sub-component in the decomposition model using a set of
qualitative and quantitative benchmarking metrics. The scores on these metrics will determine
the ordinal ranking of a program in a category and sub-component. While some metrics may be
quantitative in nature, we expect that most of the underlying information supporting the scoring
process will be qualitative. The following subsections define for each sub-component the critical
elements of successful programs. The areas described below form the basis for measuring and
comparing programs across sub-components. A benchmarking tool representing reviews across
these key areas was developed as part of the previous Itron Best Practices Study. This tool will
serve as a starting point guideline for the benchmarking of programs. The Project Coordination
Group will have a chance to review and update this tool over the next several weeks. The tool is
presented in the Appendix, Section 6.3.
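
As a sketch of how one sub-component review might be recorded for the benchmarking tool, the structure below pairs qualitative evidence with any supporting metrics and the resulting tier; the field names and the example entry are illustrative assumptions, not the actual Appendix 6.3 tool.

```python
# Illustrative record for one program/sub-component benchmarking review.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SubComponentReview:
    program_id: str                  # anonymized before publication
    sub_component: str               # e.g., "Quality Control & Verification"
    evidence: list = field(default_factory=list)   # interview/document notes
    metrics: dict = field(default_factory=dict)    # quantitative support, if any
    tier: Optional[int] = None       # 1, 2, or 3 once scored

review = SubComponentReview(
    program_id="Program 7",
    sub_component="Quality Control & Verification",
    evidence=["On-site verification sampled on a fixed share of projects",
              "Documented QC procedures for tracking-data entry"],
    metrics={"verified_install_rate": 0.97},
    tier=1,
)
```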
Program Design: Theory, Linkages & Partnerships
Successful program design starts with a good program theory. Assessment activities will seek
evidence of a well-thought out and documented program theory that includes buy-in from
planners, implementers and other key players. Program theory should address potential barriers
to adoption and methods to overcome those barriers. A program's theory and design should also
leverage appropriate linkages & partnerships in multiple areas, and should incorporate these
linkages and partnerships at the design stage.
Program Design: Structure, Policies and Procedures
Good program structure, policies and procedures begin with a well thought-out "process plan"
that describes both the program structure and the associated policies and procedures. We will
look for process plans that clearly illustrate a step-by-step participation process. These
processes should be tested for effectiveness and contingencies. We will look for evidence of a
program process plan that is both used and updated.
Program Management: Project Management
Assessments will look for evidence of a clear and reasonable organization plan, with clearly
defined responsibilities. There will be an examination of the appropriateness of matches
between resources and tasks. The project plan should be used and updated regularly.
Program Management: Reporting & Tracking
Best practices in this arena entail the cost-effective tracking of useful and appropriate metrics
that can efficiently be translated into reporting information. The tracked variables should
generate useful information at appropriate intervals, and this information should be used to
maintain program effectiveness.
Program Management: Quality Control & Verification
Successful programs should have a verification process in place that is part of both the
implementation and evaluation phases. The precision level of the verification should be
balanced against cost to ensure overall cost-effectiveness. Verification should be accompanied
by a comprehensive quality control process that addresses both the quality of the implementation
process, as well as that of equipment or measures installed as part of the program.
Program Implementation: Outreach, Marketing & Advertising
In evaluating outreach, marketing and advertising efforts, we will seek measures of marketing
effectiveness such as total marketing costs and marketing costs per participant made aware of the
program. Good outreach, marketing and advertising efforts should result in relatively high
program awareness, knowledge, and participation levels. We will look for evidence of
innovative or successful marketing and outreach mechanisms, and assess the appropriateness of
the marketing strategies for the program objectives and targeted populations.
Program Implementation: Participation Process
The participation process is a critically important element of a program's ultimate success.
Standard measures of customer satisfaction provide one indication of a program's effectiveness at
enrolling and processing customers. Good programs should measure satisfaction with multiple
aspects of the participation process, and should collect sufficient information at every stage to
support evaluation, tracking and reporting needs. Programs should also check for and limit to
the extent possible the administrative burden they place on customers (some burdens may be
necessary to fulfill good practice requirements for other sub-components such as quality control
and verification). We will look for evidence of successful mechanisms to streamline the
customer participation process, and check to see whether the program results in many callbacks,
reinstalls, and quality control problems. We will also review the customer inquiry and complaint
process, and look for evidence that the participation process encourages a higher adoption of
measures among its targeted participants. The time it takes to make it through the entire
participation process, including receiving any incentives, is another indicator we will investigate.
Program Implementation: Installation & Delivery
Delivery and/or installation objectives will be reviewed against outcomes. Successful programs
should demonstrate evidence of installation and delivery follow-through on marketing and
outreach efforts. Programs will be assessed for their ability to address installation and delivery
problems, ability to work with subcontractors, and use of partners and recruitment resources to
ensure a smooth delivery process. The effectiveness of any incentives in inducing measure
installations will also be assessed here.
Program Evaluation: Evaluation & Adaptability
Good programs should obtain feedback from both participants and non-participants and measure
program accomplishments and progress relative to a program theory. This would usually be
accomplished through a thorough program evaluation; however, some programs may achieve the
equivalent result through activities that are built into the implementation process and carried out
by the program manager. Each Group Assessment will endeavor to assess how programs use
evaluation results or other feedback mechanisms to improve over time. Assessments will seek to
identify flexibility and adaptability in the program design and implementation that facilitates
rapid readjustments.
3 Coordination with EM&V Portfolio
The Program Assessments Team will be expected to coordinate extensively across project
activities, and with a variety of other EM&V activities occurring in parallel. Data from other
evaluation efforts are expected to both serve and benefit from activities conducted within the
Program Assessments project.
There are two nonresidential impact evaluations that have significant overlap with the programs
addressed in this Program Assessments Study Work Plan. These include the Work Order 29
Downstream Lighting Impact Evaluation, and Work Order 33 Custom Impact Evaluation. There
will be extensive coordination with these two studies, particularly the Custom Impact Evaluation,
where activities are planned specifically to support the Program Assessments Project. There will
be research questions related to or arising from work completed within the impact evaluations
that may be best addressed as part of the PA project; some of these are already known and are
discussed in the subsections that follow.
Other studies that will be coordinated very closely with the Program Assessments Project are the
Portfolio Strategy and Management Assessment and possibly an Audit Evaluability Assessment
for LGP and Third Party programs. Key areas of coordination with each of these studies are
outlined below:
3.1.1 Coordination with the Audit Impact Study and the Evaluability Assessment for LGP and Third Party Programs (Work Order 36)
The Audit Impact Study recently completed an Evaluability Assessment of the statewide Core
NRA programs. The findings from this Assessment should be reviewed by the corresponding
Program Assessments Team assigned to the Core NRA programs to provide additional insight and
to avoid redundancy across the two investigations.
In addition, the Energy Division is currently reviewing a proposal to complete an Evaluability
Assessment focused on audit activities within the LGP and Third Party programs. If this work is
approved, this effort will provide input to, and benefit from, the Program Assessments Project. This
Assessment is focused on the nonresidential LGP and Third Party programs offering energy
audit services. The assessment will assist the Energy Division in understanding the scale and
scope of the audit activities performed through these program types, as well as the quality and
detail surrounding the respective record keeping. If the Energy Division chooses to pursue the
LGP and Third Party Audit Evaluability Assessment, the Evaluability Assessment team will
coordinate with the Program Assessment team in developing the data collection instrument. The
Evaluability Assessment and the Program Assessment scope overlap with respect to their mutual
concern with understanding the type, number and scope of audit services performed, the rate at
which audit services lead to participation, the range and scope of tools used to generate audit
recommendations, the types of follow up activities that program staff perform to encourage
outcomes, and finally, and perhaps most importantly, the quality of the audit related record
keeping and tracking system content.
3.1.2 Coordination with the Portfolio Strategy and Management Assessment Project
The Portfolio Strategy and Management Assessment Project is another activity that will be
coordinated with the Program Assessments. This project will be probing questions relating to
program costs, which will be useful to the PA project by providing more comprehensive and
refined cost effectiveness figures, or it may provide a breakdown of costs into areas that will
provide some insight into the effectiveness of programs in different activity areas. Moreover, the
Program Assessments project may support the work of the Management Assessment by
providing timely feedback relating to the operational soundness of programs, their management
and communication within the management chain. The Management Assessments work is also
concerned with comparative advantages in administration, and the administrative model
comparison work planned within the Program Assessment may provide useful support for
addressing that research area.
3.1.3 Coordination with Nonresidential Downstream Lighting Impact Evaluation (Work Order 29)
The Nonresidential Downstream Lighting Impact Evaluation will estimate unit energy savings
(UES) for selected lighting measure groups, and realization rates for custom lighting measures.
The study will also measure net-to-gross ratios for the programs offering incentives for lighting
retrofits.
There are nearly 100 different programs serving nonresidential customers that offer incentives
for nonresidential downstream lighting measures. In 2010, energy savings from downstream
lighting measures represented 18% of the overall ex ante gross kWh savings portfolio for the
IOUs' energy efficiency programs. Gross impact measurement will be a statewide, measure-focused evaluation. The statewide results will be applied back to the program level, rather than
performing separate impact evaluations for key programs. For net-to-gross ratios, however,
program level (or groups of programs) estimates will be developed.
Key areas of coordination between the downstream lighting study and program assessments
include the sharing of net-to-gross results and gross realization rates as they become available.
In addition, the downstream lighting evaluation will be able to provide distributions by space
type and building type within each program and program group. This input can enhance the
context characterization for the lighting-focused programs.
The lighting evaluation will look to the Program Assessments on a number of fronts as well.
There is one research issue that has already been identified by the lighting team that can be
addressed via the Program Assessments Study. In Program Year 2010, a significant portion of
lighting projects was incented through a calculated mechanism rather than deemed,
particularly among third party programs. It is not entirely clear why these projects required custom
savings calculations.
The Program Assessments can help to identify advantages or
disadvantages of this new practice. Why are third parties using the calculated approach? Is the
approach more accurate? How do the assumed operating hours in the custom incentive
applications for lighting compare to those assumed in deemed savings algorithms? From an
evaluation standpoint, a disadvantage to the custom approach is that evaluators do not receive
measure counts in tracking systems, because the measure quantity is recorded in kWh. This
means that a UES approach cannot be used to estimate ex post savings, only a realization rate
approach. Realization rates are not as useful as UES results for future planning.
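The distinction can be written out directly. The following is a generic sketch of the two estimation forms, not the lighting study's specific estimators:

\[ \widehat{kWh}_{ex\text{-}post}^{\,UES} = \widehat{UES} \times N_{units} \qquad\qquad \widehat{kWh}_{ex\text{-}post}^{\,RR} = \widehat{RR} \times kWh_{ex\text{-}ante} \]

When a tracking system records only claimed kWh and no unit count \(N_{units}\), the first form cannot be applied, leaving only the realization rate form.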
Specific programs and program groups addressed by the Downstream Lighting Impact Evaluation are presented in the Appendix, Section 6.5.
Timing of Downstream Lighting Impact Study Results
Table 3-1 below presents the timeline and key dates for the Downstream Lighting Impact Evaluation. The timeline shows that net-to-gross and gross impact draft results will be available in July 2012. To support the June deliverables for the Program Assessments project, real-time sharing of results will need to occur during both projects' reporting periods, in late spring of 2012.
Table 3-1: WO29 Nonresidential Downstream Lighting Impact Evaluation Timeline

Activity                                                                  Start Date           End Date
2010 Impact Evaluation
  Collect field data and phone surveys for Gross and NTG                  Jan 2011             Apr 2012
  Analyze field data and document findings, analyze NTG phone
  survey results                                                          Apr 2012             June 2012
  Draft NTG values based on 2010 Participant Data Collection              -                    July 2012
  Draft Gross Savings values based on 2010 Participant Data Collection    -                    July 2012
2011 Impact Evaluation
  Update Evaluation Research plan based on 2011 participation             Early November 2011  -
  Collect field data and phone surveys for Gross and NTG                  March 2012           Sept 2012
  Analyze field data and document findings, analyze NTG phone
  survey results                                                          July 2012            Dec 2012
  Draft NTG values based on 2011 Participant Data Collection              -                    Jan 2013
  Draft Gross Savings values based on 2011 Participant Data Collection    -                    Jan 2013
2012 Impact Evaluation
  Update Evaluation Research plan based on 2012 participation             -                    March 2013
  Collect field data and phone surveys for Gross and NTG                  Feb 2013             Sept 2013
  Analyze field data and document findings, analyze NTG phone
  survey results                                                          Apr 2013             Dec 2013
  Draft NTG values based on 2012 Participant Data Collection              -                    Jan 2014
  Draft Gross Savings values based on 2012 Participant Data Collection    -                    Jan 2014
Final 2010-12 Impact Report
  Integrate 2010-2012 results                                             July 2011            Mar 2014
  Prepare and submit draft report to ED contract manager                  Apr 2014             May 2014
  ED, DMQC, evaluation contractor discussion of revisions to report       -                    May 2014
  Final report submitted to ED contract manager                           -                    Jun 2014
  Public posting and review of final report                               -                    Jun 2014
3.1.4 Coordination with Custom Impact Evaluation (Work Order 33)
The Custom Impact Evaluation is focused on the measurement of impacts from projects receiving custom or calculated incentives. The Custom Impact study is unusual in that it is concerned not only with rigorous measurement of energy impacts, but also with components designed to provide qualitative, program-level feedback relating to impact performance. As such, these study activities are of particular interest to the Program Assessments Project.
The priorities for the custom impact evaluation effort and the researchable issues it seeks to examine are as follows (a note on the net-to-gross arithmetic follows the list):
1. Estimating the level of achieved gross impact savings, determining what factors characterize gross realization rates, and, as necessary, assessing how realization rates can be improved.
2. Estimating the level of free ridership, determining the factors that characterize free ridership, and, as necessary, providing recommendations on how free ridership might be reduced.
3. Estimating participant spillover from efficient equipment installations made as a result of the program but without the provision of an incentive.
4. Providing timely feedback to IOUs to improve program effectiveness.
5. Analyzing the extent to which recommendations from the 2006-2008 Impact Evaluations have been implemented by the IOUs, and making a qualitative assessment of whether the implementation of those recommendations resulted in higher net and gross realization rates.
6. Determining whether the impact estimation methods, inputs, and procedures used by the IOUs and implementers are consistent with the CPUC's policy and decisions and best practices. The evaluation will identify issues with respect to impact methods, inputs, and procedures and make recommendations to improve IOU savings estimates and realization rates.
7. Improving baseline specification, including collecting and reporting on dual baselines. Estimating the extent of any program-induced acceleration of replacement of existing equipment and, in such cases, the RUL of the pre-existing equipment.
8. Collecting data and information to assist with other research or study areas, which could include measure cost estimation, cost effectiveness analysis, load shapes for measures, updates to DEER, strategic planning and future program planning.
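For reference, items 2 and 3 combine into the net-to-gross ratio under one common convention (a generic formula, not necessarily the exact estimator the Custom Impact team will apply):

\[ NTGR = 1 - FR + SO \]

where \(FR\) is the free-ridership fraction of gross savings and \(SO\) is the participant spillover fraction.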
Custom Impact Data Collection and Sampling Plan
During the 2010 program year, a total of approximately 117 programs incented measures covered by the WO033 Custom Impact Evaluation. The study sample plan calls for 600 high rigor M&V sample points to support gross realization rate ("GRR") estimation. Due in part to the high cost of each sample point, the sampling domains for the GRR estimation include IOU and fuel type, and do not extend to the program level. However, the study also provides for an additional 300 Lower Rigor ("LR") points that will be deployed specifically to provide program-level feedback and to support the Program Assessments Project. The feedback based on the LR points will be more qualitative, and will not necessarily yield program-specific gross impact estimates. In addition to the 600 GRR and 300 LR points, the study will also collect a supplemental sample of 960 net-to-gross only points.
The 300 LR points will be less costly per point than the GRR points and, as a result, will contribute to a larger overall sample, providing feedback on more programs and measures than would otherwise be the case. However, this additional information will be primarily qualitative in nature. That is, the lower rigor points will generally not be used to estimate gross realization rates, although that may be possible for some portion of them. The lower rigor points will generally involve desk reviews of ex-ante application and program documents and, in some instances, on-site or phone verification, to assess ex-ante savings and baseline-related methods, procedures, estimates, and assumptions. Results from the lower rigor sample points will support the Program Assessment Project. At the same time, the results of the LR and GRR reviews will be provided to the IOUs to give timely feedback on the impact-related practices of specific programs.
Another critical factor in the Custom Impact study design is the distribution of data collection and reporting across the cycle timeline. A decision issued on July 14, 2011 (Decision No. 11-07-030) regarding the ex-ante review and freezing process[4] changes the methods for determining final energy savings credit. Savings claims for custom and non-DEER measures are subject to different treatment depending on whether they were reported before or after the date of that decision. For this reason, the Custom Impact research plan sets two reporting periods, a before-decision (BD) period and an after-decision (AD) period, with the changeover in June of 2011.
For the 900 GRR and LR points, the sample plan calls for 1/3 of the points to be collected over the BD period and 2/3 over the AD period. The net-to-gross only points will be evenly divided over the BD and AD periods.
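A minimal sketch of the resulting counts follows; the point totals and split fractions come from the plan text, while the rounding rule is an illustrative assumption (the plan does not prescribe one):

    # Sample allocation across the before-decision (BD) and after-decision (AD)
    # periods: 1/3 BD / 2/3 AD for GRR and LR points, 50/50 for NTG-only points.
    samples = {"GRR": 600, "LR": 300, "NTG-only": 960}
    bd_fraction = {"GRR": 1 / 3, "LR": 1 / 3, "NTG-only": 1 / 2}

    def split_bd_ad(n, frac):
        bd = round(n * frac)  # illustrative rounding; not specified in the plan
        return {"BD": bd, "AD": n - bd}

    for group, n in samples.items():
        print(group, split_bd_ad(n, bd_fraction[group]))
    # GRR {'BD': 200, 'AD': 400}
    # LR {'BD': 100, 'AD': 200}
    # NTG-only {'BD': 480, 'AD': 480}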
There are 36 programs that each account for 1% or more of an IOU's 2010 custom impact claims in kWh, kW or therms. These 36 programs make up 92% of the custom[5] 2010 kWh savings claim and 98% of the therm claim. Further details regarding the measures and programs addressed in the Custom Impact Evaluation are presented in the Appendix, Section 6.6.
Timing of Custom Impact Study Results
Most of the results of the gross realization rate (GRR) sampling, the lower rigor (LR) points, and the NTG points will become available during the July 2012 through January 2013 period, with the last set of final reports scheduled for July 2014. Table 3-2 below presents the schedule for key Custom Impact Evaluation tasks and reporting of study results.
Table 3-2: Custom Impact Study Timeline

Activity                                                                  Start Date   End Date
Evaluation Reporting and Feedback Memoranda
  First annual impact report                                              May 2012     July 2012
  Feedback memorandum on program rules, procedures and influence          Dec 2012     Jan 2013
  Second annual impact report                                             May 2013     July 2013
  Final evaluation impact report                                          Mar 2014     July 2014
Activities Supporting First Annual Impact Report
  Collect field data and conduct document reviews, conduct phone
  surveys for NTG                                                         Oct 2011     Mar 2012
  Analyze field data and document findings, analyze NTG phone
  survey results                                                          Nov 2011     May 2012
  Prepare and submit first annual draft impact report to ED contract
  manager                                                                 Apr 2012     Jun 2012
  Public posting and review of first annual impact report                 Jun 2012     Jul 2012
  Incorporate revisions into first annual impact report for public
  release and final ED approval                                           -            -
Early Feedback on Program Procedures and Freeridership
  Analyze program changes since 2006-08 evaluations, assess compliance
  with program rules / procedures, free ridership and program
  influence (presented in evaluation reports)                             Nov 2011     Jul 2012 / Jul 2013 / Jul 2014
  Assess compliance with program rules / procedures, free ridership
  and program influence (via memorandum)                                  Nov 2011     Jan 2013
Activities Supporting Second Annual Impact Report
  Collect field data and conduct document reviews, conduct phone
  surveys for NTG                                                         May 2012     Mar 2013
  Analyze field data and document findings, analyze NTG phone
  survey results                                                          June 2012    Apr 2013
  Prepare and submit second annual draft impact report to ED contract
  manager                                                                 Apr 2013     May 2013
  Public posting and review of second annual impact report                Jun 2013     Jul 2013
  Incorporate revisions into second annual impact report for public
  release and final ED approval                                           -            -
Activities Supporting Final 2010-2012 Impact Report
  Collect field data and conduct document reviews, conduct phone
  surveys for NTG                                                         May 2013     Mar 2014
  Analyze field data and document findings, analyze NTG phone
  survey results                                                          Jun 2013     Apr 2014
  Prepare and submit draft final report to ED contract manager            Apr 2014     May 2014
  Public posting and review of final report                               Jun 2014     Jul 2014
  Incorporate revisions into report for public release and final ED
  approval                                                                -            -

[4] Ex-ante review and freezing work efforts are managed and implemented under WO002. WO033 is providing support for WO002 ex-ante review by using evaluation methods that may be used to inform ex-ante review of future projects.
[5] Measures falling within the WO033 scope.
3.1.5 Participant Survey Coordination
Although no participant surveys are planned for the Program Assessments, some participant survey data will be collected in support of the assessments through other work orders. In particular, the Net-to-Gross survey fielded by the Custom Impact Evaluation team will include a battery of market and process questions that can be leveraged to provide some additional insight to the Program Assessments. Please see the Appendices, Section 6.4.
At this time it is unclear whether the participant surveys fielded in support of the Downstream Lighting Impact Evaluation will have the flexibility to include additional questions in support of the Program Assessments. There may be an opportunity to include a short battery, which would likely include the same questions presented in the "Process" section of the Custom Impact net-to-gross survey.
4 Scope of Work
This section begins with a description of the tasks and scope involved in completing each
Program Group Assessment. This is followed by a description of the tasks and scope required
for the Summary Report and the Administrative Models Comparison.
4.1 Program Group Assessment Scope of Work
Get to know the programs
• Review and summarize data in the Program Characterization database
• Gather and review program theory and design documents
• Use the information above for an initial clustering of member programs into categories (see the sketch following this list) by:
  o target market,
  o program objectives,
  o market actor support roles,
  o technologies,
  o incentive strategies,
  o and/or non-incentive services
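A minimal sketch of this initial clustering step, assuming a simple in-memory representation of the Program Characterization database; the attribute names here are illustrative, not the database's actual schema:

    # Group programs that share the same values on the chosen clustering attributes.
    from collections import defaultdict

    programs = [
        {"id": "PGE21011", "target_market": "commercial", "incentive_strategy": "calculated"},
        {"id": "PGE21012", "target_market": "commercial", "incentive_strategy": "deemed"},
        {"id": "SCE-SW-004B", "target_market": "agricultural", "incentive_strategy": "calculated"},
    ]

    def cluster(programs, keys=("target_market", "incentive_strategy")):
        clusters = defaultdict(list)
        for p in programs:
            clusters[tuple(p[k] for k in keys)].append(p["id"])
        return dict(clusters)

    print(cluster(programs))
    # {('commercial', 'calculated'): ['PGE21011'],
    #  ('commercial', 'deemed'): ['PGE21012'],
    #  ('agricultural', 'calculated'): ['SCE-SW-004B']}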
Needs Assessment
The Program Group Assessment needs assessment will be coordinated closely with the administrative model needs assessment, as described in Section 4.2.
• Conduct in-depth interviews with key personnel holding high level knowledge and perspective cutting across the Program Group
  o Interview IOU staff
  o Interview PUC staff
  o Interview Project Managers and Contractors/Project staff for related EM&V activities:
    - Custom and Lighting Impact evaluations
    - Audit Evaluability Assessment
    - Portfolio Strategy and Management Assessment
• Refine and inform subdivision of the PA Group into categories
  o Prioritize and refine research needs unique to the Group, or for a program category within a group.
Subdivide Program Assessment Group into Program Categories
It may be appropriate to subdivide Program Groups into categories where there is a natural
clustering that can support more informative comparison, or a reduced variation within a
sampling strategy. The specific criteria used to determine categories within a PA Group should be tailored to the specific set of programs considered, and are left to the PA Group teams to determine. Ideally, program categories would consist of programs with similar theory and logic
models operating in similar markets. However, in some cases, such category definitions may not
be possible or desirable.
Construct a Sampling Strategy for Each Program Category
For some program categories, there may not be sufficient budget, or sufficient variation in design and delivery, to warrant program-specific analysis of each member program. Individual Group Assessment leads, in conjunction with the Project PCG, will determine an optimal sampling strategy for each Program Category. The strategy will specify which programs will undergo reviews based on secondary sources, which will be reviewed using feedback gained through synergies with other EM&V efforts, and which will also include primary data collection activities.
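One possible form of such a sampling rule, sketched under the assumption that programs within a category are ranked by ex ante savings; the tier labels and top-N cutoff are illustrative, not the plan's prescribed method:

    # Assign each program in a category to a review tier under a fixed budget
    # for primary data collection.
    def assign_review_tiers(programs_by_savings, n_primary=3):
        """programs_by_savings: program IDs sorted by descending ex ante savings."""
        tiers = {}
        for rank, prog_id in enumerate(programs_by_savings):
            if rank < n_primary:
                tiers[prog_id] = "secondary + synergy + primary data collection"
            else:
                tiers[prog_id] = "secondary + synergy review only"
        return tiers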
Define Cross-Category Research Objectives specific to the Program Assessment Group
• What research questions will be addressed that draw on comparisons across program categories within a Program Group?
• Identify the administrative model comparison research questions that can be addressed within or across the program categories in the PA Group.
• Identify the gaps in known best practices for the program categories represented in the PA Group.
Program Category Operational Landscape Review
• Perform a secondary source and literature review to identify important trends within key markets and technologies, evolution in codes and standards, and relevant changes in the policy environment.
• Consider the various roles of key participating trade allies and review recent market research or characterization reports.
Trade Ally and Account Executive Interviews
Trade ally interviews can be an important source of feedback on the efficacy and relevance of
program strategies and their management. The need for and design of Trade Ally interviews
may take on different characteristics within each Program Assessment Group. In some cases, the
interviews may support a specific program review, and in others they may provide feedback
across many programs or confirmation of a market barrier or important trend. It will be important that Trade Ally interview plans are coordinated across Program Assessment Groups and collapsed into the minimum number of unique interviews, avoiding multiple interview requests of the same entity.
Similarly, IOU Account Executives have a unique perspective on program processes, customer
perceptions, and overall delivery and efficacy. Account Executives are typically assigned to
larger customers. Program groups with some emphasis on larger customer segments and/or
involvement with Account Executives may consider including these interviews in primary data
collection plans.
Program Reviews
Consistent with Program Category sampling strategy, program specific reviews will be
conducted. Data sources to support reviews fall into three categories: those based on secondary sources, those based on synergies with the current EM&V portfolio, and those based on primary
data collection. Generally, and particularly for program categories with many members, a
greater subset of programs will undergo secondary review and reviews via synergy than will
undergo reviews that also include primary data collection.
Program Reviews: Secondary Sources
• Review and document program context characteristics (see Section 2.4)
• Gather quantitative program outcome metrics (see Section 2.3)
• Review previous program evaluation results
• Gather and review program filings and related regulatory documents
Program Reviews: Synergy Sources
• Identify the relevant set of current evaluation efforts and what information they can contribute to the Assessment. Also identify what information the Program Assessments activity can provide to other evaluation efforts.
  o Custom Impact Study
  o Downstream Lighting Impact Study
  o Audit Impact Study
  o Portfolio Strategy and Management Assessment
Program Reviews: Primary Sources
The Program Group Assessment teams will work closely together, with the PCG, and with other related EM&V project teams to finalize the data collection instrument. It is anticipated that the data collection instrument will differ somewhat for each program group, but will remain largely the same, in support of consistency and comparability across reports. The data collection instrument developed for the previous Itron Best Practices study will serve as a starting point for the instrument development. This instrument can be found in the Appendices, Section 6.2.
For each program sampled for primary source review, the PA Team members will gather
program information primarily through interviews with program representatives.
Step 1: Contact Program Representatives
This initial contact will explain to program representatives the purpose of the study, and will ask for the representative's participation, or for a go-ahead to contact members of their organization. Requests will be made for any readily available information not already found in reviews of secondary sources (regulatory filings, procedures manuals, marketing materials, evaluations, etc.), and to schedule a time and date for an in-depth interview.
Step 2: Integrate Existing Documentation
Prior to an interview, we will integrate all existing information sources into the Program
Assessments Database and into our data collection instrument. We will try to resolve or flag any
data issues or inconsistencies.
Step 3: Conduct Interviews
During our in-depth interviews with program representatives, we will focus on collecting
information not found during our initial research. We will also attempt to resolve any data
inconsistencies.
In addition to collecting information germane to the program, program representatives will be questioned regarding their general knowledge of program development and tools that they have found useful when conceiving and constructing their own programs. Additionally, program managers will be queried about some of the best and worst practices they have seen in the industry in their program area. A comprehensive data collection instrument used in the previous Itron Best Practices Study will serve as a starting point, from which some customization can take place in order to:
• confirm key findings from secondary source review, as needed
• address research needs unique to the program, program category or program group
• fill gaps remaining after secondary source review
• remove unnecessary or redundant elements
Step 4: Update Program Best Practices Database
Once the interview is completed, the results will be transferred to a database. The database will house all of the data collected through the survey instrument, as well as quantitative program outcome metrics. The database will support greater flexibility during the analysis stage and highlight areas of missing or inconsistent data.
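A minimal sketch of what one record in such a database might look like; all field names are illustrative assumptions rather than the actual design:

    # One program's entry in the Program Assessments database.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ProgramRecord:
        program_id: str                     # e.g., "PGE2196"
        program_group: str                  # PA Group assignment
        category: Optional[str] = None      # category within the group
        outcome_metrics: dict = field(default_factory=dict)      # quantitative metrics
        interview_responses: dict = field(default_factory=dict)  # survey-instrument answers
        flags: list = field(default_factory=list)                # missing/inconsistent items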
Step 5: Perform Final Check-In with Program Representatives
The PA team will circle back one last time with representatives to discuss the final data that is in
our Best Practices Database. Efforts will be made to resolve any discrepancies with the program
manager. Once all interviews are completed, we will finalize all data in our Best Practices
Database to prepare for the analysis phase of the study.
Benchmark Programs Using Best Practices Benchmarking Tool
In this step the teams will use the data sources described above to benchmark programs against known best practices. The previous best practices study developed a self-benchmarking tool which provides a starting list of benchmarking criteria reflecting the set of known best practices. Some of these items will be updated by the project teams, in conjunction with, and with the express consent of, the PCG. Although the benchmarking process may result in an ordinal ranking of programs by program component and sub-component, scores will not be reported or made public at the program level.
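A minimal sketch of how criterion-level tier scores might roll up to component-level results; the criteria, components, and averaging rule are illustrative assumptions, not the actual Benchmarking Tool logic:

    # Roll ordinal tier scores up from benchmarking criteria to program components.
    def benchmark(program_scores, criteria):
        """program_scores: {criterion: tier}; criteria: {criterion: component}."""
        by_component = {}
        for criterion, component in criteria.items():
            by_component.setdefault(component, []).append(program_scores.get(criterion, 0))
        return {c: sum(v) / len(v) for c, v in by_component.items()}

    criteria = {
        "documented program theory": "Design",
        "tracking data QC process": "Management",
        "trade ally training": "Implementation",
    }
    scores = {"documented program theory": 3, "tracking data QC process": 2, "trade ally training": 1}
    print(benchmark(scores, criteria))
    # {'Design': 3.0, 'Management': 2.0, 'Implementation': 1.0}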
Identify New Best Practices
In this step, the Program Assessments Team will integrate all of the data elements to identify new best practices or updates to existing best practices. Critical data elements for the integrated review include program context characteristics, quantitative outcome metrics, and recent trends in market, technology and policy environments, together with benchmarking results.
The integrated review should include a comparison of practices to known best practices and to program outcomes. This should be done in light of an understanding of the relevant context characteristics in order to:
o Confirm or refine/redefine known best practices
o Develop new best practices targeted to specific contextual characteristics and program objectives
Summarize Program Group Assessment Findings
In order to provide support for Bridge funding decisions, a memorandum of key findings will be
issued at the end of May 2012. Draft and Final Reports will follow in June and July, 2012. The
impact studies have a more drawn-out timeline, so not all of the impact findings can be
incorporated into the June reports. The schedule for post-Bridge portfolio decisions is the third
quarter of 2013. It may be useful to provide an update to the June reports incorporating
additional impact data and perhaps conducting some select follow up data collection. At this
time, the need and scope for such efforts is unclear. The issue will be revisited upon completion
of the June reporting.
4.2 Summary Report and Administrative Model Comparison
Review and document the history and rationale behind the introduction of each administrative
model
Each administrative model has a different origin and a different reason for introduction into the
portfolio. The history and evolution of the rationale behind each model is crucial grounding for
this study area. As with all histories, a variety of perspectives may emerge relating to the evolution of the different 'models'. A review of these perspectives, and of the common themes and language that arise from it, will provide context for this Study component.
Identify Study Hypotheses
Research into potential comparative advantages of administrative models must consider
variations in policy objectives, strategies, and the particular characteristics of the customers and
markets addressed. In the face of the wide range of possible combinations and permutations of
these variables, this study must rely on a directed and inductive approach. More precisely, this
study component should identify a set of hypotheses that can be tested. These hypotheses need
to be selected such that their testing will help to inform how models evolve to improve
performance of the overall portfolio.
The hypotheses addressed here will be identified through a process involving conversations with
policy and strategic decision-makers, CPUC personnel and IOU personnel with an interest in
issues related to the comparative performance of administrative models, as well as a careful look
back at the origins and evolution of the various ‘models’ completed in step one above.
Work with each Program Group Assessment team to address administrative model hypotheses
testing in their respective scope and data collection instruments.
It will be necessary to coordinate closely with all Program Assessment Groups to ensure that
each Group Assessment supports the testing of administrative model hypotheses.
Conduct Interviews and Literature Reviews to Draw Additional Insight from Experience in Other
States
Experiences in other states may be a source of additional insight in the administrative models
study. Once hypotheses are in place, the team may conduct a literature review and selected
interviews with out-of-state portfolio or program managers, policy makers or other key decision
makers.
Summarize Overarching Findings and Results of the Administrative Model Study
In order to provide support for Bridge funding decisions, a memorandum of key findings will be
issued at the end of May 2012. Draft and Final Reports will follow in June and July, 2012. The
impact studies have a more drawn-out timeline, so not all of the impact findings can be
incorporated into the June reports. The schedule for post-Bridge portfolio decisions is the third
quarter of 2013. It may be useful to provide an update to the June reports incorporating
additional impact data and perhaps conducting some select follow up data collection. At this
time, the need and scope for such efforts is unclear. The issue will be revisited upon completion
of the June reporting.
4.3 Database and Documentation
All of the data collected in support of this study will be housed in a master database. The
database will hold fields for quantitative outcome metrics; it will summarize related secondary
source documents; and it will contain all of the data collected through the program representative
interviews. This database documentation will be a valuable tool for the evaluation, planning and
policy communities in its ability to provide a consistent and up-to-date summary of key program
characteristics and practices. To preserve the integrity of data collection, the identity of the
specific programs and individuals referenced in the database may be withheld from public view.
4.4 Draft Program Assessment Group Report Outline
A preliminary draft outline for each Program Assessments Group Report is presented below.
The outline for the Summary Report and the Administrative Model Comparison piece will differ
somewhat and will be specified at a later date. Major sections are described below with their
sub-section headings.
Executive Summary – Addresses major findings and most significant recommendations. The Executive Summary will be sufficient to be a stand-alone document.
  a. Study Findings
     i. Study Approach – Brief overview of study approach
     ii. Data Collection Summary – Brief summary of data collection activities
     iii. Program Groups and Categories – Summary of the PA Group and its mapping to categories
  b. Key Findings
     i. Benchmarking results and related recommendations
     ii. Summary of Best Practices identified in the study
1. Introduction – Background for the study, including discussion of research objectives.
  a. Overview of Research Objectives
  b. Overview of Report
  c. Data Assessment – Overview of data used in the project
  d. Primary Data Sources
  e. Secondary Data Sources
  f. Synergy Sources (from concurrent/ongoing EM&V efforts)
2. Methodology – Presents the methodology for the Study, with focus on the decomposition model (including program categories, components, and sub-components) and the program sampling and benchmarking approach. Data sources and uses are also presented.
  a. Methodology – Program Categorization and Sampling Approach
  b. Methodology – Decomposition Model
     i. Program Components/Sub-components and Metrics
  c. Program Benchmarking Approach
3. Results – Overview of study results with general themes.
  a. Program Assessment Group and Category Characterization
     i. Summarize key program attributes and objectives, by Group and Category
     ii. If relevant, describe origin of program designs and policy intent
  b. Contextual Characterization Findings – Presentation of contextual data for programs
     i. Program Design Policy Elements
     ii. Socio-Economic and Other Immutable Factors
  c. Discussion of Operational Landscape
     i. Technology trends
     ii. Market trends
     iii. Evolution of Codes and Standards
     iv. Notable innovations in program design
     v. Policy environment; recent and projected changes
4. Benchmarking Results
  a. The following would apply for each program category and for each program component and sub-component (Program Design/Management/Implementation):
     i. Describe current practices and their relative frequencies by program category
     ii. Discussion of relevant contextual characteristics
     iii. Summarize program benchmarking results for this category
        1. Assess overall performance by program category
           a. Summarize Tier scores for each benchmarking criterion
     iv. Identify opportunities for improvement
     v. Describe particularly successful practices
     vi. Describe notable practices:
        1. Current practices that are best avoided, or
        2. Innovative practices where the outcome remains uncertain
5. Identification of New Best Practices
  a. Summary of program performance and areas of excellence
     i. Profile programs that are excellent at what they do within each program category.
     ii. Highlight the policy objectives best addressed by the programs.
  b. Identify new generalized and transferable best practices.
     i. Provide rationale, description of the identification process and supporting data
  c. Describe relationships between specific practices, benchmarking results, quantitative outcome metrics, and context characterization.
4.5 Draft Summary Report and Administrative Model Comparison
[Will be provided at a later date]
5 Work Plan Timeline
Table 5-1 below presents the timeline and key dates for the Program Assessments Study. The timeline shows that the database of results will be available at the end of May 2012; draft Program Group Assessment reports and the draft Administrative Models Comparison and Summary Report will be available at the end of June 2012; and all final reports will be completed by the end of July 2012.
IOU Bridge Applications are scheduled for late April 2012, with comments and replies scheduled for June. In support of this schedule, a memorandum of key study findings will be issued at the end of May. The ability of this study to support Bridge funding decisions is dependent on adhering to the timetable below. The IOUs' Post-Bridge Portfolio Applications are scheduled for the third quarter of 2013. The additional period provides an opportunity, if it is desired at the time, to update the June reports with additional impact performance data available from the Custom Impact Evaluation and the Downstream Lighting Impact Evaluation in particular, as well as findings from the Portfolio Strategy and Management Assessment.
Table 5-1: Summary of Project Timeline

Activity                                                         Start Date   End Date
Contractor Selection                                             10/28/11     11/30/11
Program Group Plans (adaptation of workplan)                     11/30/11     1/30/12
Data Collection                                                  2/1/12       4/30/12
Delivery of Database                                             2/1/12       5/21/12
Memorandum of Key Findings                                       5/21/12      5/31/12
Program Group Report Draft                                       2/1/12       6/30/12
Program Group Reports Final                                      7/2/12       7/31/12
Administrative Models Comparison and Summary Report Draft        2/1/12       6/30/12
Administrative Models Comparison and Summary Report Final        7/2/12       7/31/12
Optional: Update to Incorporate Additional Impact Study Data     Jan 2013     March 2013
Optional: Selected Follow-up Data Collection and Update to
Reports                                                          Jan 2013     March 2013
6 Appendices

6.1 Program List with Mapping to Program Assessment Groups
[Attachment: Program Mapping to PA Groups.xlsx]

6.2 Previous Best Practice Benchmarking Study Data Collection Instrument
[Attachment: In-depth V10 Form.doc]

6.3 Previous Best Practice Benchmarking Tool
[Attachment: BP_Benchmarking_Tool_Draft.xlsx]

6.4 Draft Net-to-Gross Survey Instrument
[Attachment: NR Participant NTG Draft Survey.xlsx]
6.5 Downstream Lighting Impact Evaluation Programs and Measure
Group Detail
Table 6-1 below summarizes the programs offering nonresidential downstream lighting, and how
they will be grouped in the lighting evaluation for net impact and gross impact measurement.
Groupings are shown for each IOU. Also shown is the distribution of 2010 gross ex ante kWh
and kW savings claim for all nonresidential downstream lighting across all programs.
Table 6-1: Gross Savings and NTG Program Groupings for PG&E's Programs
(Percentages are each program's share of the 2010 gross ex ante nonresidential downstream lighting savings claim: % kWh / % kW.)

Core/Statewide Custom
  PGE Calculated Ag
    PGE21031    AG CALCULATED INCENTIVES                   2% / 1%
  PGE Calculated Com
    PGE21011    COM CALCULATED INCENTIVES                 14% / 9%
  PGE Calculated Ind
    PGE21021    IND CALCULATED INCENTIVES                  1% / 0%
Core/Statewide Deemed
  PGE Deemed*
    PGE21032    Agricultural Programs - Deemed             4% / 4%
    PGE21012    Commercial Programs - Deemed              22% / 26%
    PGE21022    Industrial Programs - Deemed               5% / 6%
  PGE Energy Fitness
    PGE2194     Energy Fitness Program                     4% / 6%
  PGE RightLights
    PGE2196     RightLights                                5% / 5%
Local Government Partnership
  PGE DI Ecology, PGE DI RHA, PGE DI Staples, PGE DI Staples/RHA**, PGE DI Synergy, PGE DI TEAA
    PGE2130     AMBAG ENERGY WATCH                         2% / 1%
    PGE2142     SAN MATEO COUNTY ENERGY WATCH              1% / 1%
    PGE2146     SILICON VALLEY ENERGY WATCH                1% / 1%
    PGE2131     CITY OF SAN JOAQUIN ENERGY WATCH           0% / 0%
    PGE2133     FRESNO COUNTY ENERGY WATCH                 3% / 4%
    PGE2135     MADERA COUNTY ENERGY WATCH                 0% / 0%
    PGE2134     KERN COUNTY ENERGY WATCH                   2% / 2%
    PGE2143     SAN LUIS OBISPO COUNTY ENERGY WATCH
    PGE2144     SANTA BARBARA COUNTY ENERGY WATCH
    PGE2140     SIERRA NEVADA ENERGY WATCH
    PGE2137     SAN JOAQUIN COUNTY ENERGY WATCH
    PGE2138     MENDOCINO COUNTY ENERGY WATCH
    PGE2145     NAPA COUNTY ENERGY WATCH
    PGE2141     SONOMA COUNTY ENERGY WATCH
  PGE LGP East Bay
    PGE2132     EAST BAY ENERGY WATCH
  PGE LGP LGEAR
    PGE2125     LGEAR
  PGE LGP Marin
    PGE2136     MARIN COUNTY ENERGY WATCH
  PGE LGP Redwood
    PGE2139     REDWOOD COAST ENERGY WATCH
  PGE LGP SF
    PGE2147     SAN FRANCISCO ENERGY WATCH
  PGE SW Partnership
    PGE21261    CALIFORNIA COMMUNITY COLLEGES
    PGE21262    UC/CSU
    PGE21263    STATE OF CALIFORNIA
    PGE21264    DEPT OF CORRECTIONS & REHABILITATION
Third/Local Party Implementer
  PGE 3P
    PGE2183     Comprehensive Retail Energy Management
    PGE2185     EnergySmart Grocer
    PGE2189     Cool Controls Plus
    PGE2190     LodgingSavers
    PGE2193     School Energy Efficiency
    PGE2195     Energy Savers
    PGE2197     SCCR
    PGE2199     Energy-Efficient Parking Garage
    PGE2200     Retail Furniture Store Energy Efficiency Program
    PGE2202     LED Accelerator
    PGE2205     Casino Green
    PGE2212     California Preschool Energy Efficiency Program
    PGE2213     K-12 Private Schools and Colleges Audit Retrofit
    PGE2223     Heavy Industry Energy Efficiency Program
    PGE2232     Light Exchange Program
    PGE2233     Wine Industry Efficiency Solutions
    PGE2235     Dairy Industry Resource Advantage Pgm

* Note that PG&E's Deemed programs will be further broken out by those vendors using the Trade Pro tool.
** Note that PGE2144 will be split between PGE DI Staples and PGE DI RHA based on the vendor performing the retrofit.
Table 6-2: Gross Savings and NTG Program Groupings for SCE's Programs
(Percentages are each program's share of the 2010 gross ex ante nonresidential downstream lighting savings claim: % kWh / % kW.)

Core/Statewide Custom
  SCE Calculated Com
    SCE-SW-002B   Commercial Energy Efficiency Program          11% / 6%
  SCE Calculated Ind_Ag
    SCE-SW-003B   Industrial Energy Efficiency Program           4% / 3%
    SCE-SW-004B   Agriculture Energy Efficiency Program          0% / 0%
Core/Statewide Deemed
  SCE Deemed Ag
    SCE-SW-004C   Agriculture Energy Efficiency Program          0% / 0%
  SCE Deemed Com
    SCE-SW-002C   Commercial Energy Efficiency Program          42% / 56%
  SCE Deemed Ind
    SCE-SW-003C   Industrial Energy Efficiency Program          10% / 11%
Direct Install
  SCE Direct Install
    SCE-SW-002D   Commercial Energy Efficiency Program          23% / 18%
Local Government Partnership
  SCE LGP ELP
    SCE-L-004a, -004c, -004e, -004f, -004g, -004h, -004i, -004m, -004n, -004o, -004p, -004q, -004r
                  Energy Leader Partnership Program              0% / 0% (each)
  SCE LGP Institutional
    SCE-L-005a, -005b, -005d, -005e, -005f
                  Institutional and Gov't Core EE Partnership    0% / 0% (each)
    SCE-L-005g    Institutional and Gov't Core EE Partnership    1% / 0%
Third/Local Party Implementer
  SCE 3P
    SCE-TP-006    Healthcare EE Program                          0% / 0%
    SCE-TP-013    Food & Kindred Products                        0% / 0%
    SCE-TP-014    Primary and Fabricated Metals                  0% / 0%
    SCE-TP-016    Nonmetallic Minerals and Products              0% / 0%
    SCE-TP-025    Retail Energy Action Program                   0% / 0%
    SCE-TP-033    Automatic Energy Review for Schools Program    0% / 0%
    SCE-TP-036    Energy Efficiency for Entertainment Centers    0% / 0%
    SCE-TP-037    Private Schools and Colleges Program           0% / 0%
    SCE-TP-0608   Coin Operated Laundry Program                  0% / 0%
  SCE 3P Mgmt Affiliates
    SCE-TP-031    Management Affiliates Program                  1% / 1%
  SCE 3P Preschool
    SCE-TP-038    California Preschools Program                  0% / 0%
  SCE 3P Schools
    SCE-TP-024    Public Pre, Elementary and High Schools        1% / 0%
Table 6-3: Gross Savings and NTG Program Groupings for SDG&E's Programs
(Percentages are each program's share of the 2010 gross ex ante nonresidential downstream lighting savings claim: % kWh / % kW.)

Core/Statewide Custom
  SDGE Calculated
    SDGE3105   SW-ComA - Calculated                       11% / 6%
    SDGE3109   SW-IndA - Calculated                        0% / 0%
Core/Statewide Deemed
  SDGE Deemed COM
    SDGE3106   SW-ComB - Deemed                           39% / 49%
  SDGE Deemed IND_AG
    SDGE3101   SW-AgB - Deemed                             0% / 0%
    SDGE3110   SW-IndB - Deemed                            5% / 6%
Third/Local Party Implementer
  SDGE BID
    SDGE3117   Local03 - Local Non-Residential (BID)      38% / 32%
  SDGE MEC
    SDGE3167   3PNRes09 - Mobile Energy Clinic (MEC)       1% / 1%
Measure Groups Studied, Nonresidential Downstream Lighting Evaluation
Gross and net impact results will reflect the study of selected high impact measure groups. The
nonresidential downstream lighting impact evaluation selected the following measure groups for
study:
• CFL – Advanced
• CFL – Basic
• Controls: Occupancy Sensor
• High Intensity Discharge (HID)
• High Bay Fluorescent
• Linear Fluorescent
• Linear Fluorescent Delamping
6.6 Custom Impact Evaluation Programs and Measure Group Detail
The 36 programs identified in Section 3.1.4 and their custom savings claims are shown in Table 6-4. The savings are expressed as a percent of total IOU custom claims in 2010, as well as a percent of all claims.
Table 6-4: WO033 2010 Savings Claims by Program

                                                                     WO033 2010 Claimed Savings
Program ID      Program Name                                         kWh             kW        Therms
2010 Claim
PGE21011        COM Calculated Incentives Program                    83,788,860      9,967     1,009,868
PGE21012        Commercial Program – Deemed                          39,643,065      7,942     1,539,473
PGE21021        IND Calculated Incentives Program                    53,803,305      6,282     14,659,709
PGE21022        Industrial Programs – Deemed                         4,602,249       828       1,327,992
PGE21031        AGR Calculated Incentives Program                    35,624,784      6,523     3,323,997
PGE21032        Agricultural Program – Deemed                        89,510,349      20,426    1,381,690
PGE21035        Pump Efficiency Services                             18,778,036      2,186     -
PGE21042        Savings by Design Coml NC                            48,137,830      11,072    765,890
PGE21261        CA Community Colleges EE Partnership                 2,758,408       1,525     257,938
PGE21262        UC/CSU EE Partnership                                14,475,411      2,179     1,316,413
PGE2147         San Francisco EW Partnership                         1,722,340       266       298,878
PGE2182         Boiler EE Program                                    120,605         12        654,358
PGE2185         Energy Smart Grocer                                  16,892,656      1,197     3,885
PGE2186         Enhanced Automotive Initiative                       -               -         183,340
PGE2222         EE Services for Oil & Gas Production                 44,248,407      4,245     -
PGE2223         Heavy Industry EE Program                            22,490,505      3,012     1,656,890
PGE2225         Refinery EE Program                                  2,661,480       335       895,239
Other Programs  Other PG&E Programs                                  68,377,393      11,069    635,815
Total PG&E Custom Savings                                            547,635,681     89,065    29,911,377
PG&E Total Portfolio Savings                                         1,671,530,335   290,482   17,503,953
SCE-SW-002B     Commercial Energy Efficiency Program – Calculated    41,332,777      6,641     -
SCE-SW-002C     Commercial Energy Efficiency Program – Deemed        20,074,458      3,582     (132,470)
SCE-SW-003B     Industrial Energy Efficiency Program – Calculated    109,660,429     13,299    -
SCE-SW-004B     Agriculture Energy Efficiency Program – Calculated   19,246,001      2,704     -
SCE-SW-005A     New Construction Program                             53,053,962      11,104    147,462
Other Programs  Other SCE Programs                                   59,078,149      10,147    12,558
Total SCE Custom Savings                                             302,445,776     47,478    27,550
SCE Total Portfolio Savings                                          1,963,140,252   373,422   3,099,439

Table 6-4 (Cont'd): WO033 2010 Savings Claims by Program

SCG3603         SW-AgB – Deemed                                      -               -         343,531
SCG3607         SW-ComA – Calculated                                 -               -         993,269
SCG3608         SW-ComB – Deemed                                     -               -         699,017
SCG3611         SW-IndA – Calculated                                 -               -         9,101,839
SCG3612         SW-IndB – Deemed                                     -               -         4,077,913
Other Programs  Other SCG Programs                                   -               -         398,054
Total SCG Custom Savings                                             -               -         15,613,624
SCG Total Portfolio Savings                                          3,080,937       1,918     23,088,770
SDGE3101        SW-AgB – Deemed                                      89,107          10        223,664
SDGE3105        SW-ComA – Calculated                                 6,601,312       901       128,200
SDGE3106        SW-ComB – Deemed                                     8,273,491       939       64,532
SDGE3109        SW-IndA – Calculated                                 1,500,226       181       8,820
SDGE3110        SW-IndB – Deemed                                     454,074         21        9,805
SDGE3117        Local03 - Local Non-Residential (BID)                39,456,447      4,321     502,597
SDGE3118        SW-NCNR - NRNC Savings By Design                     8,142,219       1,746     755,285
SDGE3162        3P-NRes02 - SaveGas - Hot Water Control              -               -         9,625
SDGE3170        3P-NRes13 - Retro commissioning (RCx)                955,426         111       12,364
Other Programs  Other SDG&E Programs                                 2,056,327       126       3,639
Total SDG&E Custom Savings                                           67,528,628      8,358     1,718,529
SDG&E Total Portfolio Savings                                        268,479,825     43,650    661,839
Total WO033 Claimed Savings                                          917,610,086     144,900   47,271,080
2010 Statewide Portfolio Claimed Savings                             3,906,231,349   709,471   44,354,001

2010 WO033 Savings Claim Relative to 2010 Portfolio Goals
2010 Total Portfolio Goals                                           2,276,000,000   502,000   47,100,000
WO033 % of 2010 Portfolio Goals                                      40.3%           28.9%     100.4%
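The goal-coverage percentages in the last row can be verified directly from the figures above:

    # Check of the WO033 share of 2010 portfolio goals (values from Table 6-4).
    claimed = {"kWh": 917_610_086, "kW": 144_900, "therms": 47_271_080}
    goals   = {"kWh": 2_276_000_000, "kW": 502_000, "therms": 47_100_000}
    for unit in claimed:
        print(f"{unit}: {claimed[unit] / goals[unit]:.1%}")
    # kWh: 40.3%, kW: 28.9%, therms: 100.4%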
Among these 36 programs, the largest 10 to 20 programs account for 65% to 85% of the WO033
savings (based on 2010 tracking data). There remains uncertainty surrounding the particular
clustering of the Custom Impact study sample points across the programs, but the bulk of the 600
GRR points are expected to be concentrated in the largest 10-20 programs.
The 300 LR points will be allocated to supplement the GRR points for a portion of the programs with few or zero GRR points. To meet the objectives of the LR points, a statistically robust sample is not necessarily required. For the programs selected into the LR sample frame, a small number of projects will be randomly selected for review by the evaluation team.
Setting a threshold on the order of 10 to 15 projects for program-level feedback, and using the 300 LR points to address the next set of programs beyond the 10 to 20 likely covered at this threshold by the GRR sample (that is, allocating LR points only to those programs not reaching 15 points in the GRR sample), we estimate that a total of 30 to 50 programs could be provided impact-related feedback over the life of the Custom Impact Evaluation project, covering both gross impact procedures and net-to-gross findings.
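A minimal sketch of this allocation rule; the 15-point threshold comes from the text above, while the fill-smallest-first ordering is an illustrative assumption:

    # Allocate the LR budget to programs whose GRR sample falls short of the
    # program-level feedback threshold.
    def allocate_lr_points(grr_counts, lr_budget=300, threshold=15):
        """grr_counts: {program_id: number of GRR points}; returns {program_id: LR points}."""
        allocation = {}
        # Address programs with the fewest GRR points first.
        for prog, n_grr in sorted(grr_counts.items(), key=lambda kv: kv[1]):
            if n_grr >= threshold or lr_budget <= 0:
                continue
            n_lr = min(threshold - n_grr, lr_budget)
            allocation[prog] = n_lr
            lr_budget -= n_lr
        return allocation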
The number of programs that will likely have impact-related feedback available over the first
half of 2012 will be substantially smaller—in the range of 10-20 programs. The PA Team will
work closely with the custom impact team in designing the sample plan for the LR points, as
well as for transferring findings to the PA Teams for analysis.
Measure Groups Studied, Custom Impact Evaluation
The CPUC Energy Division and their consultants initially identified and grouped measures with similar characteristics into 74 measure groups. Some measure mapping and grouping will continue to develop throughout the project, for reasons that include changes in emphasis in measure participation levels over time. For program year 2010, a total of 53 measure groups have been mapped to the Custom Impact Evaluation. Note that this excludes custom lighting, which is mapped to the Nonresidential Lighting Impact Study.
To present the distribution of measure savings, these 53 measure groups are further collapsed into 11 aggregate measure groups. Figure 6-1 presents the fraction of the total 2010 Custom Impact savings claim that is accounted for by each aggregate measure group, by fuel type. Process savings represent the greatest savings fraction for both electric and gas, at 38% and 62%, respectively. The next greatest electric savings claims are for the refrigeration and HVAC aggregate measure groups, at 26% and 19%, respectively. Those top three electric groups combined represent 84% of electric savings. The next greatest gas savings claims are for the steam trap, whole building and pipe / tank insulation aggregate measure groups, at 14%, 6% and 6%, respectively. Those top four gas groups combined represent 88% of savings.
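The combined shares cited above can be checked against the Figure 6-1 values directly:

    # Combined shares of the top aggregate measure groups (values from Figure 6-1).
    electric_top3 = 38.2 + 26.4 + 19.3  # process, refrigeration, HVAC (% of kWh claim)
    gas_top4 = 62 + 14 + 6 + 6          # process, steam traps, whole building, pipe/tank (% of therm claim)
    print(round(electric_top3), gas_top4)  # 84 88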
Figure 6-1: 2010 WO33 Savings Claim by Aggregate Measure Group
[Charts of savings shares by aggregate measure group and fuel type. Electric total: 918 GWh; gas total: 47 million therms. Aggregate measure groups: Building Envelope, HVAC, Motor, Other, Pipe & Tank Insulation, Process, Refrigeration, Retrocommissioning, Steam Traps, Water Heat, Whole Building.]