Draft Paper: Integrating Performance Measures in Project Portfolios

Title
Integrating Performance Measures to Exert Effective Leadership in Managing Project
Portfolios
Frank T. Anbari, Denis F. Cioffi, and Ernest H. Forman
Department of Decision Sciences, School of Business
The George Washington University
Funger Hall, Suite 415
2201 G Street, NW
Washington, DC 20052
Abstract
The analytic hierarchy process (AHP) is used to integrate measures from the relatively objective earned value management method with other, more subjective, evaluation methods to enhance the likelihood of success in exerting leadership in the management of a portfolio of projects. AHP is used not only to measure the performance of individual projects, but also to evaluate the contribution of these projects toward an organization’s
tactical and strategic objectives. These measures are crucial components of the iterative
process of selecting projects for a portfolio, monitoring and controlling their progress,
allocating and re-allocating resources, and terminating projects when they under-perform
or are no longer competitive in light of new opportunities or because of shifts in
organizational strategy or tactics.
Keywords
Analytic hierarchy process (AHP), earned value management method (EVM), project
and portfolio performance measures, project portfolio management, success factors,
measurement, integration, synthesis.
Introduction
In many organizations, the selection of projects that constitute the organization’s portfolio, and the regular adjustment, continual refinement, and possible termination of those projects, are important, recurring efforts. These efforts involve effective prioritization and adjustment of resource allocations among projects within the portfolio, and they often profoundly influence the future of the organization. In this paper, we show how the analytic hierarchy process (AHP) can be used to integrate project measures from the earned value management method (EVM) (Anbari, 2003) and other sources to enhance the likelihood of successful portfolio management and leadership. The prioritization and selection of projects for an organization’s portfolio has been discussed elsewhere (Forman & Gass, 2001; Forman & Selly, 2001). Here we focus on the measurement
of individual project performance and the synthesis of the performance of the portfolio’s
constituent projects into measures reflecting the performance of units at higher levels in
the organization.
Our understanding of a successful project has evolved throughout the past 40 years;
Jugdev & Müller (2005) offer a “Retrospective Look” at this evolution. Often judgment
occurs after the completion of a project, and we are reminded to differentiate end-project
deliverables from the processes, i.e., the management, required to produce them
(Wateridge, 1995; Wateridge, 1998; de Wit, 1988; Lim & Mohamed, 1999). We should
also differentiate success criteria — how we measure success — from the factors that
generate success (Cooke-Davies, 2002). Leaders need to understand these factors and the
relevant criteria both during the active life of a project and after its completion so they
can implement their organization’s strategy by guiding a dynamic portfolio of projects.
Here we demonstrate that, through AHP, disparate measures and managerial judgment can
be integrated to synthesize a cohesive view of individual project performance as well as
the performance of a portfolio of projects.
The Analytic Hierarchy Process
The Analytic Hierarchy Process (AHP) is a method for structuring complexity,
measurement and synthesis. The AHP has been applied to a wide range of problems,
including selecting among competing strategic and tactical alternatives, the allocation of
scarce resources, and forecasting. It is based on the well-defined mathematical structure
of consistent matrices and the ability to generate true or approximate ratio scale priorities
using them (Mirkin & Fishburn, 1979; Saaty, 1980 and 1994). Forman & Gass (2001)
discuss the objectives of AHP as a general method for a variety of decisions and other
applications, briefly describe successful applications of AHP, and elaborate on the
efficacy and applicability of AHP compared to competing methods.
We will illustrate where ratio measures produced by AHP are instrumental in deriving
sound, mathematically meaningful measures of individual projects as well as measures of
a portfolio of projects. According to Stevens (1946), there are four levels of
measurement. The levels, ranging from lowest to highest in terms of meaning, are
nominal, ordinal, interval, and ratio. Each scale has all of the properties (both meaning and statistical) of the levels below it, plus additional ones. For example, a ratio measure has ratio, interval, ordinal, and nominal properties. An interval measure does not have ratio properties, but it does have interval, ordinal, and nominal properties. Ratio measures
are necessary to represent proportion and are fundamental to good physical measurement.
Prior Applications of AHP in Project, Program, and
Portfolio Management
Dyer & Forman (1992) discussed the benefits of AHP as a synthesizing mechanism in group decision making and explained why AHP is so well-suited to group decision-making. Because AHP is structured yet flexible, it is a powerful and straightforward
method that can be brought into almost any group decision support system and applied in
a variety of group decision contexts. Archer & Ghasemzadeh (1999) highlighted the
importance and recurrence of project portfolio selection in many organizations. They
indicated that many individual techniques are available to assist in this process, but that, apart from AHP, they saw no integrated framework to effect it. They therefore developed a
framework that separates the work into distinct stages to simplify the project portfolio
selection process. Each stage accomplishes a particular objective and produces inputs for
the next stage. Users are free to choose the techniques they find most suitable for each
stage or to omit or modify a stage to expedite the process or tailor it to their individual
specifications. The framework may be implemented in the form of a decision support
system, and Archer & Ghasemzadeh described a prototype system that supports many
related decision-making activities.
Al-Harbi (2001) discussed the potential use of AHP as a decision-making method in
project management and used contractor prequalification as an example. He constructed a
hierarchical structure for the prequalification criteria and the contractors wishing to prequalify for a project. He applied AHP to prioritize prequalification criteria and generated
a descending-order list of contractors to select the best contractors for performing the
project. He performed a sensitivity analysis to check the sensitivity of the final decisions
to minor changes in judgments, and pointed out that AHP implementation would be
simplified with Expert Choice software, which is available commercially.
Mahdi & Alreshaid (2005) examined the compatibility of various project delivery
methods with specific types of owners and projects. Options for project delivery include
design-bid-build, construction management, and design-build methods. Depending on the
requirements of the project, one method may be better suited than another. Project
requirements should be evaluated to determine the option most likely to produce the best
outcome for the owners. Mahdi & Alreshaid used AHP as a multi-criterion decision-making method to assist decision-makers in selecting the proper delivery method for their
projects, and they provided an example of selecting the proper project delivery method
for an actual project.
Selecting an Organization’s Portfolio of Projects
Deciding what projects to include in an organization’s portfolio of projects is extremely
important and entails a variety of challenges. Forman & Gass (2001) described how
AHP is used in allocating scarce resources to optimize the achievement of the organization’s objectives within budgetary constraints and project dependencies:
An effective allocation of resources is instrumental to achieving an organization’s
strategic and tactical objectives. Information about what resources are available to
management is usually easy to determine. Much more difficult to ascertain is the
relative effectiveness of resources toward the achievement of the organization’s
goals, since all organizations have multiple objectives. Resource allocation decisions
are perhaps the most political aspect of organizational behavior. Because there are
multiple perspectives, multiple objectives, and numerous resource allocation
alternatives, a process such as the AHP is necessary to measure and to synthesize the
often conflicting objective and subjective information. An organization must be able
to:

• Identify and structure its goals into objectives, sub-objectives, sub-sub-objectives, and so on
• Identify design alternatives (e.g., alternative R&D projects, or operational plans for alternative levels of funding for each of the organization's departments)
• Measure (on a ratio scale) the relative importance of the objectives and sub-objectives as well as how well each alternative is expected to contribute to each of the lowest-level sub-objectives
• Find the best combination of alternatives, subject to budgetary, environmental, and organizational constraints (a minimal sketch of this step follows the list).
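The last step is a constrained combinatorial choice. As a minimal sketch, assuming four hypothetical candidate projects whose ratio-scale benefit priorities and costs are invented for illustration, a brute-force search under a single budget constraint could look like this in Python:

```python
from itertools import combinations

# Hypothetical candidate projects: (ratio-scale benefit priority, cost in $K).
# All numbers are invented for illustration.
projects = {"A": (0.40, 300), "B": (0.25, 150), "C": (0.20, 200), "D": (0.15, 100)}
budget = 450

best_combo, best_benefit = (), 0.0
for r in range(1, len(projects) + 1):
    for combo in combinations(projects, r):
        benefit = sum(projects[p][0] for p in combo)
        cost = sum(projects[p][1] for p in combo)
        # Keep the feasible combination with the greatest total benefit.
        if cost <= budget and benefit > best_benefit:
            best_combo, best_benefit = combo, benefit

print(best_combo, best_benefit)  # ('A', 'B') with benefit 0.65 here
```

A real portfolio optimization would use integer programming and add dependency and organizational constraints, but the ratio-scale benefits are what make the summed objective mathematically meaningful.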
Deriving Priorities for Organizational Objectives
Priorities for the elements in the organization’s objectives hierarchy (see Figure 1 for the hierarchy of objectives of the example used in this paper) are typically derived by teams, the compositions and participation of which are determined by the knowledge, experience, and responsibilities of the participants.
Figure 1 – Example Objectives Hierarchy
For example, top-level executives (e.g., at the vice presidential or chief level) typically
make pairwise comparisons of the relative importance of the organization’s top level
objectives. The actual procedure often occurs at one or more meetings where face-to-face discussion and an exchange of ideas are important. Electronic keypads can be used to record judgments about the relative importance of the top-level objectives. The judgments can be anonymous at first, and then shared so that individuals can see other perspectives. Figure 2 shows judgments about the relative
importance of the top-level objectives from five executives. Since there may be
considerable difference of opinion about the relative importance of the objectives, a
meeting facilitator is often employed to lead a discussion to bring out more fully what the
executives had in mind, including definitions, assumptions, and information that might
not be commonly available or expressed. This discourse, which most often leads to a
high degree of consensus, is an important part of the process. Because of AHP’s reciprocity axiom (if A is 5 times B, then B is 1/5 of A), the geometric mean is used to calculate a combined judgment for the group. Referring again to Figure 2, two of the executives thought that leveraging knowledge was more important
than improving organizational efficiency, two thought just the opposite, and one thought
the two objectives were equally important. The geometric average of these judgments
was that leveraging knowledge was just slightly more important. If desired, a supporting
evaluation can be used to weight each executive’s judgment based on criteria such as
knowledge, experience, and responsibility. This extra, outside step is rarely practiced
because discussion and eventual consensus lead to more buy-in from the participants.
Figure 2 – Pairwise Comparisons of Relative Importance of Two Top Level Objectives
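As a minimal sketch of the combination step, the following Python fragment applies the geometric mean to five hypothetical judgments (two favoring one objective, two favoring the other, one neutral; the values are ours, not those in Figure 2):

```python
from math import prod

# Hypothetical pairwise judgments from five executives on "Leveraging
# Knowledge" vs. "Improving Organizational Efficiency": two favor knowledge
# (5, 2), two favor efficiency (1/3, 1/2), and one is neutral (1).
judgments = [5, 2, 1/3, 1/2, 1]

# The geometric mean respects the reciprocity axiom: combining the
# reciprocal judgments (1/5, 1/2, 3, 2, 1) gives exactly the reciprocal result.
combined = prod(judgments) ** (1 / len(judgments))
print(round(combined, 3))  # ~1.108: knowledge judged slightly more important
```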
Priorities derived from the complete set of combined judgments for the objectives in a cluster are then calculated with standard AHP mathematics (using the “normalized principal right eigenvector”). Figure 3 shows the calculated priorities for the top-level cluster.
Figure 3 – Derived Priorities for Top Level Corporate Objectives
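A minimal sketch of that calculation, using power iteration to extract the normalized principal right eigenvector of a hypothetical 3 × 3 reciprocal judgment matrix (the entries are illustrative, not the study's):

```python
import numpy as np

# Hypothetical reciprocal judgment matrix for three objectives:
# A[i][j] is how many times more important objective i is than objective j.
A = np.array([[1.0,     1.108, 3.0],
              [1/1.108, 1.0,   2.0],
              [1/3.0,   1/2.0, 1.0]])

# Power iteration converges to the principal right eigenvector of a
# positive matrix; normalizing makes the local priorities sum to one.
w = np.ones(A.shape[0])
for _ in range(50):
    w = A @ w
    w = w / w.sum()

print(np.round(w, 3))  # ratio-scale local priorities for the cluster
```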
The local and global priorities for the organization’s objectives are shown in Figure 4. The derived priorities within any cluster (such as the top-level cluster shown in Figure 3) are called local priorities; they always sum to one and distribute the total cluster priority among its elements.
Furthermore, because all these priorities are ratio level measures,
any element may then be further subdivided into smaller elements that conserve the
parent element’s total priority, and their fraction of the global priority is easily calculated
through simple multiplication.
Figure 4 -- Prioritized Corporate Objectives
For example, “Leveraging Knowledge” represents a global priority of 0.278 of the total
priority with respect to “Measuring Project Portfolio Performance.” We can further
divide this fraction of the total priority into three sub-elements that have priorities 0.324,
0.276, and 0.400 with respect to “Leveraging Knowledge,” i.e., these new sub-elements
together must retain the total priority of their parent and so sum to 1. If, however, we
want to understand these sub-elements as a fraction of the whole (i.e., as grandchildren of
“Measuring Project Portfolio Performance”), we simply multiply their sub-element proportions by
the global priority of their parent, “Leveraging Knowledge,” to obtain global proportions
of 0.324*0.278= 0.090; 0.276*0.278=0.077; and 0.400*0.278=0.111. We stress again
that this calculation is possible only because all these numbers represent ratio scale
measures provided by the priority derivation process of AHP.
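The same conservation property can be checked mechanically with the numbers above:

```python
parent_global = 0.278                    # global priority of "Leveraging Knowledge"
children_local = [0.324, 0.276, 0.400]   # local priorities; they sum to 1
children_global = [round(parent_global * p, 3) for p in children_local]
print(children_global, sum(children_global))  # [0.09, 0.077, 0.111] -> ~0.278
```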
Evaluating Anticipated Project Benefits
Evaluating the anticipated benefits of projects toward an organization’s objectives is
useful for at least two purposes: (1) to decide which projects should and should not be
included in the portfolio of projects — a resource allocation problem that is not discussed
in detail in this paper; and (2) to roll up the individual project’s performance to derive
measures of how well the portfolio is performing at various levels of the organization.
Measures of the anticipated benefits as well as the priorities of the objectives must be
ratio scale measures if they are to be multiplied and rolled up to derive integrated or
synthesized performance measures for higher levels in the organization, such as
Vendor/Partner Access, Leveraging Knowledge, and Project Portfolio Performance
shown in Figure 4.
Project Alignment
Some projects are designed with a single objective in mind while others may have
multiple objectives. The lowest-level elements of the organization’s objectives hierarchy
(for example, Vendor Partner Access and Improve Service Efficiencies in Figure 4) are called “covering objectives,” and a project’s anticipated benefit is
the sum of its anticipated contributions to those covering objectives to which it
contributes. Each of these, in turn, is the product of the covering objective’s relative importance and the relative contribution of the project toward that covering objective.
The relative contribution of a project toward a covering objective can be evaluated using
either pairwise comparisons, or a ratings scale of intensities that possess ratio level
priorities. (The derivation of ratio scale priorities for rating intensities will be illustrated
later.) Figure 5 shows that the AS/400 Replacements project was judged to make a very good (0.722) anticipated contribution toward the Leveraging Proven Technology objective.
Figure 5 – Rating the Anticipated Benefit of AS/400 Replacements to Leveraging Proven Technology
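A minimal sketch of this benefit computation; only the 0.722 rating comes from Figure 5, and the covering-objective weights and second rating are invented for illustration:

```python
# (importance of covering objective, rated contribution of the project).
# Only the 0.722 "very good" rating is from Figure 5; the rest is invented.
covering = {
    "Leveraging Proven Technology": (0.111, 0.722),
    "Vendor Partner Access":        (0.077, 0.550),
}

# Anticipated benefit = sum of importance x contribution over the covering
# objectives; valid only because both factors are ratio-scale priorities.
benefit = sum(imp * contrib for imp, contrib in covering.values())
print(round(benefit, 4))  # 0.1225 with these illustrative numbers
```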
This priority is incorporated in two ways. First, it is used to derive the overall anticipated
benefit of the AS/400 Replacements project toward the organization’s objectives, which in turn is used in an optimization to determine which projects should be included in the organization’s portfolio of projects (which we will not discuss in this paper). Second, as discussed above, it is used in rolling up the performance of the portfolio’s individual projects
to derive measures of how well the portfolio is performing at various levels of the
organization.
Evaluating Project Performance
We next turn our attention to the evaluation of a project’s performance once it has been
funded, after which we will look at how to ‘roll up’ individual project performance
derive measures of how well the portfolio is performing at various levels of the
organization. A meaningful measure of a project’s performance cannot be made in
isolation, but must reflect how well the project is performing relative to the organization’s goals. This entails more than an “earned value” computation. It requires
an integration of multiple objective numerical measures as well as factors requiring
subjective judgment that may originally be expressed non-numerically. AHP is well
suited to eliciting subjective judgments and producing accurate ratio scale measures from
those judgments, thereby enabling integration of all the factors relevant to the
performance of a project.
Project performance measures have widened beyond the three traditional baselines of planned budget, schedule, and scope. For example, Anbari (2004)
maintains that project performance needs to be measured against the quadruple
measures/objectives of scope, time, cost, and quality. Similarly, customer satisfaction has
become an essential ingredient of success, although it “remains a nebulous and complex
concept” (Jugdev & Müller, 2005) that might be largely explained simply by bringing
projects in at cost (Cioffi & Voetsch, 2006). Whatever the collected criteria, measuring
project success now demands a “diversified understanding” at both the project
management and executive levels of the organization (Jugdev & Müller, 2005). Atkinson
(1999), for example, suggested three categories for measuring a project's success after it
has been completed: the “technical strength” of the project deliverable; “direct benefits,”
i.e., to the organization itself; and “indirect benefits” to a “wider stakeholder
community.” AHP can help with these measures too.
Project Performance Measurement Components
Project success, as discussed above, is some combination of the project’s management
performance, its deliverables, and its contribution to the organization's objectives.
Project performance measurement usually involves multiple measurement components.
The complexity and details of these components are, in general, a function of the size of
the project as well as its importance to the organization. For large projects, formal,
standardized, more objective performance indicators such as those provided by the
Earned Value Method (EVM) are becoming more common. Kim, Wells, & Duffey
(2003) found that EVM is gaining wider acceptance as perceived problems with the method diminish and its perceived utility improves. They also found that a broader approach considering users, methodology, project environment, and implementation process can significantly improve the acceptance and performance of EVM in different
types of organizations and projects. EVM performance indicators may be supplemented
by other factors, such as project quality, that may require more subjective judgments.
Judgment is also required to integrate the various performance components into one
measure of a project's performance. This integration can be accomplished using ratio
scale priorities derived from pairwise comparisons, as is typical when using AHP (but not
necessarily typical with other methods). Small projects may not warrant the effort (and
thus the expense) needed to implement EVM, and one or several more subjective factors
may play a greater role in evaluating project performance.
Example of a Measure and Its Components for a Large Project
Instead of using the same set of performance measurement components to evaluate every project, we propose defining a set of measures, each with one or more components, such that the performance of each project is evaluated with the measure (and its constituent components) most suited to the size, impact, type (e.g., product or service), environment (e.g., international or domestic), or other characteristics of the project. Figure 6 shows one such measure, consisting of a hierarchy of measurement components that could be applied to a class of projects large enough to merit the expense:
Figure 6 – Hierarchy of Component Measures
Each of the lowest-level elements in the above hierarchy represents something that is
measured either objectively or subjectively. In either case, we transform the measure into
a value between 0 and 1 using a direct rating scale, a step function or an increasing or
decreasing utility curve that may be linear, concave, or convex. For example, a concave
increasing utility curve, such as that shown in Figure 7, might be appropriate for transforming a project’s earned value cost performance index (CPI) into a priority value between 0 and 1, where a CPI of 0.1 or less maps to a priority value of 0 and a CPI of 2 or more maps to a priority value of 1.0.
Figure 7. A Possible Utility Curve for EVM’s Cost Performance Index
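A minimal sketch of such a transformation; the paper does not specify the functional form of the curve in Figure 7, so the logarithmic concave shape below is our assumption:

```python
import math

def cpi_utility(cpi: float) -> float:
    """Concave increasing utility for the EVM cost performance index.

    CPI <= 0.1 maps to 0 and CPI >= 2.0 maps to 1.0; the logarithmic
    shape is our assumption, not the exact curve of Figure 7.
    """
    lo, hi = 0.1, 2.0
    c = min(max(cpi, lo), hi)                    # clamp to the defined range
    return math.log(c / lo) / math.log(hi / lo)  # 0 at lo, 1 at hi, concave

print(round(cpi_utility(1.0), 3))  # ~0.769 with this illustrative curve
```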
Figure 8 shows a linear utility curve for a project’s schedule variance percentage. It is defined such that a project that is 200% or more behind schedule receives a priority value of 0, whereas one that is 200% or more ahead of schedule receives a priority value of 100%.
Figure 8. A Possible Utility Curve for Schedule Variance Percentage
Applying this linear function to the AS/400 Replacements project in our example, which is 15% behind schedule, yields a value of about 46% (185/400 = 0.4625).
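The linear curve of Figure 8 is fully determined by its endpoints, so it can be sketched directly (the clamping and straight-line interpolation follow the description above):

```python
def sv_utility(sv_percent: float) -> float:
    """Linear utility for schedule variance percentage.

    -200% (or worse) maps to 0.0 and +200% (or better) maps to 1.0,
    matching the endpoints described for Figure 8.
    """
    clamped = min(max(sv_percent, -200.0), 200.0)
    return (clamped + 200.0) / 400.0

print(sv_utility(-15.0))  # 0.4625, i.e., ~46% for the AS/400 project
```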
Component Priorities
To integrate or synthesize the various measure components for a project, managers and
team members must obtain ratio scale priorities that represent the relative importance of
the measure components. These priorities can best be derived using the traditional AHP
pairwise comparison process. Humans are more capable of making relative, rather than absolute, judgments, and much of the AHP process involves making pairwise relative
judgments. We illustrate this next.
In any given organization, some projects are more time-sensitive than others; some are so
heavily schedule driven that “time is of the essence” is often used in the contract to
highlight this mandate. Figure 9 shows an example of the
pairwise comparisons of the relative importance of the Earned Value components of
projects where meeting schedule is mandatory, such as systems remediation projects or
projects that will implement an organization's compliance with some governmental
regulations going into effect at a specific, near-term date. These judgments were provided by two experts with more than 50 years of combined project management experience; the diagonal in the figure below shows the numerical representations of their verbal judgments, which are discussed next.
Figure 9 – Pairwise Relative Comparisons
To start, because project schedule is so important, the Earned Value Schedule
Performance Index was judged “strongly” more important than the Earned Value Cost
Performance Index. (Entries in the table are red if the column element is more important
than the row element). Despite this emphasis, the Schedule Performance Index is only
“moderately” more important than Cost Variance because performance indices are less
commonly used and therefore less readily understood. Schedule Variance was judged
“very strongly” more important than Cost Variance because this project is so "very
strongly" schedule driven. Nonetheless, Schedule Variance is only “moderately” more
important than the cost Variance at Completion because although this project is schedule
driven, expected cost overruns at the end of the project cannot be ignored completely.
The verbal judgments for elements not on the diagonal (and not discussed here) are not
necessary for calculating the relative numerical priorities of measure components.
However, they are important because they provide redundancy that leads to derived
priorities that more accurately approximate the ratio-scale priorities in the decision-makers’ minds. Although the fundamental verbal scale used to elicit judgments is an ordinal scale, Saaty’s (1980) empirical research showed that the principal eigenvector of
a pairwise verbal judgment matrix often produces priorities that approximate the true
priorities seen in ratio scales of common physical parameters such as distance, area, and
brightness (because, as Saaty showed, the eigenvector calculation has an averaging effect
– corresponding to finding the dominance of each alternative along all walks of length k,
as k goes to infinity). Therefore, if there is enough variety and redundancy, errors in
judgment, such as those introduced by using an ordinal verbal scale, can be reduced
greatly (Forman & Gass, 2001).
The priorities resulting from the judgments shown above (as well as their proportions
represented in bar graph form) are exhibited in Figure 10. An
important advantage of AHP is its ability to measure the extent to which an expert’s
judgments are consistent, as shown by the inconsistency ratio. (The inconsistency of this
set of judgments is a bit high, but the experts felt that each judgment was warranted and
the resulting priorities accurately reflected what they thought at the time.)
Figure 10 – Resulting Priorities
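A minimal sketch of how that inconsistency ratio is computed, following Saaty's standard consistency index and random-index table; the 4 × 4 judgment matrix here is hypothetical, not the one in Figure 9:

```python
import numpy as np

# Hypothetical 4 x 4 reciprocal judgment matrix over EVM components;
# the paper's actual judgments appear in Figure 9.
A = np.array([[1.0, 5.0, 0.5, 3.0],
              [1/5, 1.0, 1/7, 1/3],
              [2.0, 7.0, 1.0, 3.0],
              [1/3, 3.0, 1/3, 1.0]])

n = A.shape[0]
lam_max = max(np.linalg.eigvals(A).real)  # principal eigenvalue (>= n)
ci = (lam_max - n) / (n - 1)              # Saaty's consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # random index for matrices of size n
print(round(ci / ri, 3))                  # ratios above ~0.10 suggest revisiting judgments
```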
Integrating Component Measures
Once we have ratio scale measures of a project's performance with respect to each of the
components, as well as ratio scale measures of the relative importance of the components,
we can roll up the performance to higher levels in the component hierarchy. For
example, in Figure 11 we see the subcomponents of earned value, along with their importance priorities from Figure 10 and their performance priorities. The schedule variance subcomponent in Figure 11, for example, has a performance priority of 49% and an importance priority of 46%, which, when multiplied and added to the corresponding products of the other sub-components, results in a performance priority of 49.92% for the earned value
component. We emphasize that the multiplication of the priorities of the components by
the project’s performance measures in the roll up process is mathematically meaningful
only because the measures are ratio scale measures.
Figure 11 – Integrated Earned Value Measure of a Project’s Performance
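A minimal sketch of this one-level roll-up; only the schedule variance pair (importance 46%, performance 49%) is taken from the text, and the remaining rows are invented so that the weights sum to one:

```python
# (importance priority, performance priority) for each EVM subcomponent.
# Only the schedule variance pair (0.46, 0.49) is from the text; the other
# rows are invented placeholders whose importance weights sum to one.
subcomponents = {
    "Schedule Variance":          (0.46, 0.49),
    "Schedule Performance Index": (0.27, 0.55),
    "Cost Performance Index":     (0.17, 0.48),
    "Variance at Completion":     (0.10, 0.45),
}

# Weighted sum of ratio-scale performances rolls the measure up one level.
earned_value_perf = sum(w * p for w, p in subcomponents.values())
print(round(earned_value_perf, 4))  # 0.5005 here; the paper's data yield 49.92%
```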
Moving up one more level in the hierarchy of measure components (Figure 6 above), we see earned value measures integrated or synthesized with
the other measure components for the AS/400 project in Figure 12. The 16.15% priority
for the AS/400 project represents the relative contribution of this project toward one of
the organization's objectives and will be discussed below.
Figure 12 – Integrated Project Performance Measure
Example of Measure for a Small Project
Small projects (in some organizations, $20,000 or less) may not warrant measure
components as involved as those shown above for a large project. The simplest measure
might be a verbal ratings scale, consisting of rating adjectives such as High Performance,
Strong Performance, Moderate Performance, and so forth. However, the priorities
associated with these adjectives must be ratio scale measures if they are to be combined
with the other measures to produce an integrated measure that is mathematically
meaningful and in proportion to the project's performance. This is accomplished by first
performing pairwise comparisons of the rating intensities themselves, e.g., comparing “High Performance” to “Strong Performance” (see the relative lengths of the bars in Figure 13), and “Strong Performance” to “Moderate Performance” (see Figure 14), which
results in ratio scale priorities for the rating intensities (see Figure 15) that are then used
to evaluate one or more projects (see Figure 16).
Figure 13 – Relative Preference for a High Performance vs. Strong Performance Project
Figure 14 – Relative Preference for a Strong Performance vs. Moderate Performance Project
Figure 15 – Ratio Scale Priorities for Rating Intensities
Figure 16 – Rating a Project’s Performance with Only One Component Measure
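A minimal sketch of rating with such a scale; the priority attached to each adjective below is an invented stand-in for the values derived in Figure 15, and the project name is hypothetical:

```python
# Invented ratio-scale priorities for the rating intensities; real values
# would come from pairwise comparisons as in Figures 13-15.
intensities = {
    "High Performance":     1.00,
    "Strong Performance":   0.72,
    "Moderate Performance": 0.45,
    "Weak Performance":     0.20,
}

def rate(project: str, adjective: str) -> float:
    """Return the ratio-scale performance priority for a verbally rated project."""
    return intensities[adjective]

print(rate("Office Move", "Strong Performance"))  # 0.72 (hypothetical project)
```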
Management Action at Project Level
The results obtained thus far can be used for management action at the project level, to
understand better the progress of the project in light of the stated priorities. Consequently,
one may analyze tradeoffs among the traditional project dimensions (scope, schedule,
budget) and other project objectives (quality, customer satisfaction, repeat business);
request additional resources; reassign personnel; crash selected project activities; adjust
scope; conduct an audit; and so forth.
Evaluating Portfolio Performance -- Synthesizing to
Derive Performance Measures above the Project Level
We have described how to measure the performance of individual projects, taking into
account multiple measure components. While this information is important to project
managers, it alone gives organizational leadership above the project management level no way to understand and track the performance of the entire portfolio of projects. Some projects
may be performing well and some not so well. Some of those performing well may be
relatively important or relatively unimportant, and similarly for those projects not
performing well. The question arises – how do we aggregate the performance of the
individual projects to derive composite measures of performance at higher levels in the
organization to produce a “performance dashboard”?
The answer lies with the ratio scale measures of the anticipated project benefits toward
the organization’s objectives (that were derived when the projects were considered for
funding in the resource allocation process) and the ratio scale measures of the actual
performance of each of the projects that were selected to be in the organization’s portfolio (such as the performance illustrated for the AS/400 project in Figure 12). These ratio scale measures can be summed (“rolled up”) to determine performance toward meeting the higher-level organizational objectives and to obtain a single integrated measure of the performance of the entire project portfolio. The rollup is illustrated in Figure 17, which is a view of the organization’s objectives hierarchy (Figure 1) expanded to show nodes below Leverage Knowledge and Vendor Partner Access. We see in this example that there are two projects in the portfolio that contribute to the Vendor Partner Access objective, one of which is the AS/400 Replacements project and the other of which is the Cisco Routers project, each shown with its derived ratio scale performance measure of 51.80% (from Figure 12) and 95.17%, respectively. The relative ratio scale priorities of these two projects (16.15% and 83.85%) that are used to ‘roll up’ the performances to the next higher level are determined by normalizing each project’s anticipated contribution to the Vendor Partner Access objective, as derived during the resource allocation process (the details of which are not shown in this paper).
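A minimal sketch of this roll-up for the Vendor Partner Access node, using the weights and performance values quoted above; the resulting node value is our own arithmetic, not a number reported in the paper:

```python
# (normalized anticipated-contribution weight, measured performance) for the
# two projects under the Vendor Partner Access objective, from the text.
projects = {
    "AS/400 Replacements": (0.1615, 0.5180),
    "Cisco Routers":       (0.8385, 0.9517),
}

# The objective-level performance is the contribution-weighted sum of the
# projects' ratio-scale performance measures.
node_perf = sum(w * p for w, p in projects.values())
print(round(node_perf * 100, 2))  # 88.17 (our arithmetic from the quoted values)
```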
Figure 17 – Dashboard Expanded to Show Projects Contributing to
Leverage Knowledge > Vendor Partner Access
The “dashboard” in Figure 17 contains colors according to an adjustable legend. The colors enable management to get a fast visual impression of performance throughout the organization. (A more elaborate ‘wallboard’ view, also containing these colors, is presented below.) However, the colors are ordinal measures subject to the arbitrary ranges selected for the legend, whereas the ratio scale measures of performance (such as 71.77% overall and 85.26% for Leveraging Knowledge) are more meaningful. If, for example, one measure were just below the arbitrary cutoff for yellow and another just above it, they would show as yellow and green respectively, even though their performances might be almost the same. Thus, a manager should examine the actual ratio performance values and not just the colors.
The contribution of a specific objective to organizational performance can differ
depending on the objective to which it contributes. For example, a different dashboard
view is shown in Figure 18, where it can be seen that even though the AS/400 Project’s
performance is the same as that in Figure 17, its relative contribution toward the
Leveraging Proven Technology objective is 48.45% as compared to only 16.15% toward
the Vendor Partner Access objective.
Figure 18 – Dashboard Expanded to Show Projects Contributing to
> Minimize Risks > Leveraging Proven Technology
Wallboard
Whereas a picture of the performance toward the organization’s entire hierarchy of
objectives would require many of the dashboard views shown in Figure 17 and Figure 18,
a “wallboard” view, even more elaborate than that shown in Figure 19 (the details of
which are not important here), can be posted on the wall of a room to depict the
performance of the portfolio of projects toward the entire hierarchy of objectives.
Figure 19 – Wallboard Showing Objectives/Sub-Objectives and All Projects
Leadership through Management Action at the Portfolio
Level
In a business world where change is constant, projects — which by definition effect
change — represent the major tactical mechanisms for implementing an organization’s
strategy. Projects exist at all levels in an organization and should align with
organizational goals. Thus, just as individual project plans need to be integrated to make
the best use of organizational resources, an ensemble of projects in a portfolio should be
viewed as an integrated unit that contributes to advancing organizational strategy.
Leadership is required to move away from established plans when such change is
needed.
The combination of project performance measures and organizational priorities, as
described above, determines the actions to be contemplated. Often resources need to be
re-allocated. Modern project management, when properly performed, allows tradeoffs
among schedule, budget, and scope while maintaining an integrated project, i.e., the
project’s schedule, budget, and scope remain consistent. For example, with careful,
iterative planning, schedules can be lengthened or shortened (“crashed,” if necessary) by
temporarily removing or adding resources. This resource re-allocation will affect a
project’s short-term budget, but it need not always change the total project budget. The
priorities of the organization determine the proper mix of short-term and long-term
goals, and these change, too.
Resource re-allocation can result in termination of projects. Terminating a project before
its planned completion may seem to represent an extreme or punitive action, but if a
project’s performance measure indicates poor achievement and it is not adequately
contributing to important organizational objectives, that project’s resources can best be
used elsewhere. The so-called sunk costs of a project should not be allowed to affect an
organization’s future. The proper questions to ask are what costs are necessary to
complete a project, what are the currently anticipated benefits, and what other project
opportunities are competing for scarce resources? The original plans serve only as
mileposts against which to measure progress. An organization’s strategic direction may
change due to changes in competition. New opportunities continually emerge.
Terminating a project forcefully requires courage, but too often we see projects allowed
to continue until they die by a variety of means such as attrition. A decision-maker’s job
is to use available (or to-be-available) resources in a way that maximizes the
organization’s objectives. This maximization can and should include qualitative
‘morale’ costs of terminating projects that are meeting their goals but are no longer
competitive for the organization.
The mechanism that we have described in this paper allows decision makers to measure
their project portfolios against the objectives that they themselves have carefully prioritized.
Conclusions
We have shown how the analytic hierarchy process (AHP) can be used to (1) measure
and integrate project performance from EVM and other sources, as well as (2) roll up
project performance to derive measures of performance at higher levels in the organization.
This ability enhances the likelihood of success in leading project portfolio management.
The approach presented in this paper is an extension of the process of selecting projects
for an organization’s portfolio. It brings consistency and rationality to the efforts
subsequently required: the continual refinement of projects, their effective prioritization,
adjustments of resource allocations among them, and possible termination of projects that
no longer are in optimal alignment with the organization’s goals (some of which may
have changed since the original project concept). In a business world that depends more
and more on successful project implementation for tactical and ultimately strategic
achievement, these important management and leadership efforts are vital to the future of any organization.
References
Al-Harbi, K. M. Al-S. (2001). Application of the AHP in project management.
International Journal of Project Management, 19 (1), 19-27.
Anbari, F. T. (2004). A Systems Approach to Six Sigma Quality, Innovation, and Project
Management. Proceedings of the 7th International Conference on Systems Thinking,
Innovation, Quality, Entrepreneurship and Environment (STIQE 2004), 7-12,
Maribor, Slovenia.
Anbari, F. T. (2003). Earned Value Project Management Method and Extensions. Project
Management Journal, 34 (4), 12-23.
Archer, N. P. & Ghasemzadeh, F. (1999). An integrated framework for project portfolio
selection. International Journal of Project Management, 17 (4), 207-216.
Atkinson, R. (1999). Project management: cost, time and quality, two best guesses and a
phenomenon, its time to accept other success criteria. International Journal of Project
Management, 17 (6), 337–342.
Cioffi, D. F. & Voetsch, J. (2006). A Single Constraint? Evidence That Budgets Drive
Reported Customer Satisfaction, submitted to the European Journal of Operational
Research, 10 December 2005.
Cleland, D. I. (Editor). (2004). Field Guide to Project Management, (2nd Ed.). New York,
NY: John Wiley & Sons.
Cooke-Davies, T. (2002). The “real” success factors on projects. International Journal of
Project Management, 20 (3), 185–190.
de Wit, A. (1988). Measurement of project success. International Journal of Project
Management, 6 (3), 164–170.
Dyer, R. F. & Forman, E. H. (1992). Group Decision Support with the Analytic Hierarchy
Process. Decision Support Systems, 8(2), 99-124.
Fleming, Q. W. & Koppelman, J. M. (2005). Earned Value Project Management, (3rd Ed.).
Newtown Square, PA: Project Management Institute.
Forman, E. H. & Gass, S. I. (2001). The Analytic Hierarchy Process - An exposition.
Operations Research, 49 (4), 469-486.
Forman, E. H. & Selly, M. A. (2001) Decision by Objectives, River Ridge NJ: World
Scientific Press.
Jugdev, K. & Müller, R. (2005). Project success: A retrospective look at our evolving
understanding of the concept. Project Management Journal, in press.
Kerzner, H. (2006). Project Management: A Systems Approach to Planning, Scheduling, and Controlling, (9th Ed.). New York, NY: John Wiley & Sons.
Kim, E., Wells, W. G., Jr., & Duffey, M. R. (2003). A model for effective implementation of Earned Value Management methodology. International Journal of Project Management, 21 (5), 375-382.
Lewis, J. P. (2001). Project Planning, Scheduling, & Control: A Hands-On Guide to Bringing Projects In On Time and On Budget, (3rd Ed.). New York, NY: McGraw-Hill.
Lim, C. S. & Mohamed, M. Z. (1999). Criteria of project success: an exploratory re-examination. International Journal of Project Management, 17 (4), 243-248.
Mahdi, I. M. & Alreshaid, K. (2005). Decision support system for selecting the proper project delivery method using analytical hierarchy process (AHP). International Journal of Project Management, 23 (7), 564-572.
Mirkin, B. G. & Fishburn, P. C. (1979). Group Choice. Washington, DC: V. H. Winston; distributed by Halsted Press.
Project Management Institute (PMI®). (2004). A Guide to the Project Management Body
of Knowledge (PMBOK® Guide), (3rd Ed.). Newtown Square, PA: Project
Management Institute.
Saaty, T. L. (1980). The Analytic Hierarchy Process. New York, NY: McGraw-Hill.
Saaty, T. L. (1994). How to make a decision: The analytic hierarchy process. Interfaces,
24 (6), 19-43.
Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677-680.
Wateridge, J. (1995). IT projects: a basis for success. International Journal of Project
Management, 13 (3), 169–172.
Wateridge, J. (1998). How can IS/IT projects be measured for success? International
Journal of Project Management, 16 (1), 59–63.