Establishing effective metrics for new product development success

Siemens PLM Software
best practice brief
www.siemens.com/plm
Companies that are unable to measure the performance of their product
development processes have little or no chance of successfully competing with
today’s best-in-class product makers. Metrics-driven improvement programs
differentiate industry leaders from the rest of the pack. Companies must be able
to understand how well they perform and how this performance affects their
financial bottom line if their improvement initiatives are to deliver a meaningful
return on investment.
Table of contents
Overview of effective metrics
Challenges
Best practice solutions
Key Siemens solution capabilities
Appendix A – Commonly used program metrics
Overview of effective metrics
The value of effective metrics. A recent study of 940 executives by The Boston
Consulting Group1 found that 51 percent of the respondents expressed dissatisfaction
with the financial return on investment (ROI) they are receiving from their innovation
initiatives. Yet they continue to invest, despite additional research showing that “there is no
correlation” between R&D spending and sales growth, earnings or shareholder returns.2
As these studies indicate, it is highly important to understand and optimize today’s
new product development processes. In essence, “how you spend is far more
important than how much you spend.”3
However, as you might expect, it is impossible to optimize a process if you do not know
how to measure it. AMR Research examined this issue in detail and discovered that
while 79 percent of the companies it surveyed had formal new product development
processes, only 52 percent had actually applied metrics to these processes.4
In brief:
Best-in-class companies are far more likely than their competitors to use key performance indicators to regularly measure their new product development projects. Metrics-driven programs enable companies to identify gaps in their new product development capabilities, define how much improvement is still needed and how their improvement initiatives should be prioritized.
As the old adage suggests, you can only manage what you can measure. It should
come as no surprise that best-in-class companies are three times more likely than
their peers to use key performance indicators to measure their new product
development projects on a monthly basis.5 In fact, industry leaders generally measure
performance more frequently and on a broader scale than their competition.
While many companies struggle to measure the results of their R&D spending, the
focus on correcting this deficiency is evident in the popularity of improvement
initiatives such as Six Sigma, as well as the rapid growth of copycat approaches.
Many observers believe that today’s biggest challenge is convincing people to get on
board with a cross-functional approach to decision making. It is commonly asserted
that “we have a good process if only we would follow it.” This complaint is
symptomatic of poor organizational commitment. In many ways, using the right
metrics encourages companies to align their efforts across functional boundaries.
As The PDMA Handbook of New Product Development indicates, metrics-driven
programs enable companies to identify the gaps in their new product development
capabilities, as well as to define how much improvement is still needed and how these
improvement initiatives should be prioritized.6
In essence, effective, visible metrics that are consistently and constantly measured
drive a variety of business benefits.
Benefits of using metrics to drive improvement programs7

Benefit: Assess overall development performance
Why it matters: Enables companies to evaluate their product development capabilities, gauge their effectiveness and identify performance gaps

Benefit: Prioritize improvement investment
Why it matters: Allows companies to prioritize their improvement initiatives and assess their alignment with established strategies, investment requirements and associated returns

Benefit: Monitor industry best practices
Why it matters: Enables companies to establish external benchmarks they can use to evaluate their competitiveness and compare their performance against best-in-class companies

Benefit: Improve operational reliability
Why it matters: Helps companies establish a set of predictive measures they can use to anticipate development-related performance problems and take corrective actions

Benefit: Facilitate behavioral change
Why it matters: Allows companies to clearly define their metrics in terms of organizational performance goals; helps individuals understand how personal performance relates to overall business performance; creates a basis for aligning company incentives with performance goals
Stakeholders in metrics-driven improvement. Value-chain participants from
multiple organizations need to work together as a single team and use metrics to
align and drive their daily activities.
Value-chain participants benefiting from metrics-driven programs

Participant: Executive management
Why metrics matter: The CEO is ultimately responsible for ensuring that R&D investment delivers acceptable revenue returns; the CIO is responsible for making certain that an organizational framework is in place to facilitate effective teamwork across the product development cycle.

Participant: Product management
Why metrics matter: Product management is responsible for channeling early marketing input (such as forecasts) into specific development projects. Once the product has been launched, the product manager needs to track the product’s level of marketplace success.

Participant: R&D, design and engineering
Why metrics matter: Since the product development organization represents the major investment for most development projects, its managers are frequently asked to minimize the cost of their operations.

Participant: Marketing and sales management
Why metrics matter: Marketing and sales organizations represent the sharp end of the new product development process; they are responsible for ensuring that the product’s marketing and sales forecasts are accurate and that product sales meet these expectations.
For effective new product development, these value-chain participants need to
understand the hierarchical relationships between each program’s drivers and goals.
In a seminal study on business management, The Human Side of Enterprise identified
two major management styles and labeled them Theory X and Theory Y.8 Theory X
represented a classic command-and-control structure that stressed authoritarian
principles and exemplified “an underlying belief that management must counteract an
inherent human tendency to avoid work”. In contrast, Theory Y “assumes that people
will exercise self-direction and self-control in the achievement of organizational
objectives to the degree to which they are committed to those objectives.”
In brief:
Executive management, product managers, marketing and sales management and R&D, design and engineering organizations use metrics management to understand the hierarchical relationships between each program’s business goals and the functional capabilities that drive the program’s development processes.
Challenges
Falling short of full value. Many companies are not using metrics management to
drive improvement programs to their fullest advantage. Recent surveys indicate that
even though 70 percent of companies use metrics to review their project results, only
55 percent use metrics for performance and goal setting.9 Equally important, only 41
percent of these companies use metrics for external benchmarking and only 38
percent use them to link their strategies to individual goals.
Researchers explain this shortfall by grouping the reasons their respondents give into
four primary categories.
Reasons for failing to fully leverage program metrics

Reason: Wrong metrics
Underlying causes: Effective performance metrics need to reinforce the organization’s adherence to agreed-upon business objectives and practices. These relationships also need to be balanced across multiple dimensions of the business. Metrics need to support fact-based decision making – rather than intuitive decisions – by providing irrefutable evidence that problems exist and improvements can be precisely targeted. These metrics also need to be easily understood, communicated, quantified and recorded.

Reason: Inadequate tracking mechanisms
Underlying causes: Companies can only leverage metrics that their current processes are able to support. For example, a company cannot measure budget variance (i.e., planned cost vs. actual project cost) if it does not have project accounting processes in place (see the sketch following this table). In essence, processes must exist to collect and support the required metrics in a meaningful and practical way. Generally, it is good practice to leverage data that is a natural byproduct of the organization’s new product development processes. Recent research confirms this view, as best-in-class companies measure key performance indicators for new product development at the enterprise level 60 percent of the time – while laggards do not use this measure at all (0 percent).10

In brief:
Even today, companies are not using program-driven metrics to their fullest advantage. Effective metrics management is limited by the use of the wrong metrics, the failure to track performance, the inability to tie performance to bottom-line results, or the lack of an actionable management process.
Reason: No bottom-line implications
Underlying causes: Successful improvement programs are grounded in hard facts that provide clear linkage to valued business results. Unfortunately, product-related decisions usually are based on data about immediate assets, liabilities and revenues that is not linked to process-based capabilities and competencies. This overreliance on base measures only promotes linear, incremental improvements. To achieve the full benefit of a metrics-driven program, metrics performance needs to be tied as directly as possible to the company’s income statement or balance sheet. Recent research by the Aberdeen Group indicates that 80 percent of the best practice companies it surveyed coordinated their innovation strategies with their operational organizations.11
Reason: Missing management process
Underlying causes: While it is important to tie metrics to clearly understood processes and to have supporting tracking mechanisms in place, it is also crucial to implement project-level processes that identify major milestones, timings, assigned actions and ownership responsibilities. Companies with portfolio and project-level processes have a framework and common language that is critical for effective metrics management. The active participation of senior management in product innovation is a key differentiator for best-in-class companies.
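To make the budget variance example above concrete, here is a minimal sketch, in Python, of how planned versus actual project cost might be rolled up from project accounting records. The record layout, field names and figures are illustrative assumptions; they are not part of the PDMA material or any Siemens product.

```python
# Minimal sketch: computing budget variance from hypothetical project
# accounting records. Field names and figures are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CostRecord:
    project: str
    planned_cost: float   # budgeted cost for the work package
    actual_cost: float    # cost booked by project accounting

def budget_variance(records):
    """Return {project: planned - actual}; a positive value means under budget."""
    variance = {}
    for r in records:
        variance[r.project] = variance.get(r.project, 0.0) + (r.planned_cost - r.actual_cost)
    return variance

if __name__ == "__main__":
    records = [
        CostRecord("Project A", planned_cost=120_000, actual_cost=135_000),
        CostRecord("Project A", planned_cost=80_000, actual_cost=70_000),
        CostRecord("Project B", planned_cost=200_000, actual_cost=198_000),
    ]
    for project, var in budget_variance(records).items():
        print(f"{project}: budget variance = {var:,.0f}")
    # Project A: -5,000 (over budget); Project B: 2,000 (under budget)
```

Without a project accounting process feeding records like these, the metric simply cannot be produced, which is the point the table makes.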
Avoiding common pitfalls. Metrics-driven improvement programs should abide by
the following guidelines to ensure their real-world success:
• Make certain that major stakeholders agree upon the program’s metric
measurements
• Tie the program’s metric measurements to clear goals, assigned actions and defined
consequences
• Develop the program’s metrics so that they measure the right performance and
cause people to act in their company’s best interest (in contrast with simply “making
their numbers”)
• Develop program metrics that can be accurately, completely and efficiently collected
• Avoid developing excessive metrics that promote bureaucracy at the expense of
innovation
• Focus on gathering fewer, more meaningful metrics, such as measurements that drive
best-in-class performance, productivity and time, cost and quality improvement
• Ensure that the program’s metrics are clearly visible by using management
dashboards
• Make certain that the metrics’ details can be benchmarked for comparative
purposes
• Tie individual, group, project and enterprise metrics together to reflect the best
interest of the organization
• Make certain that the cause and effect of the program’s metrics are understood and
that a business-driven balance is achieved among the program’s participating groups
• Avoid developing complex metrics that are difficult to explain
• Understand the difference between performance metrics (which define what is going
on in a process) and diagnostic metrics (which explain why a process performs the
way it does)
In brief:
To avoid common metrics management pitfalls, companies need to secure agreement on the program’s metrics, tie their metrics to clear business goals, develop easily understandable measurements, provide highly visible dashboards, establish competitive benchmarks, and understand the cause and effect relationships that drive each metric.
Best practice solutions
Successful metrics programs. A successful metrics-driven program
comprehensively defines the decision-making structure, organizational responsibilities,
business processes, program metrics, tracking mechanisms and reporting templates
that are used to analyze, improve and control the product development process.
The PDMA Handbook of New Product Development outlines a 10-step, three-phased
approach for addressing the challenges facing companies that want to implement
successful metrics-driven programs.
PDMA 10-step performance measurement approach

In brief:
PDMA’s 10-step approach to performance measurement provides companies with a best practice tool for defining, implementing and deploying a metrics-driven program for improving their product development processes.

Phase 1 – Define detail definitions
1. Define metrics program – Define the metrics program charter and work plan; define metrics program objectives to articulate how the organization benefits from these metrics
2. Define strategy and high-level objectives – Link organizational strategy and high-level objectives to performance metrics
3. Define balanced performance metrics – Measure the right metrics
4. Determine current process capabilities – Assess current metrics and leverage where possible

Phase 2 – Implementation
5. Define decision-making structure – Define who will review current performance and identify improvement opportunities; ensure timely, fact-based operational decisions
6. Establish data collection and reporting process – Define tasks and responsibilities for data analysis and reporting (facilitates efficient metrics tracking)
7. Define metrics tracking systems – Ensure appropriate tracking

Phase 3 – Rollout
8. Establish pilot metric process – Identify the first set of improvement targets and test the new performance measurement process
9. Conduct ongoing performance reviews – Identify improvement opportunities; take corrective actions
10. Implement continuous improvement – Review and improve metrics and their measurement processes as necessary; track the bottom-line impact of the improvement program
Phase 1 identifies the metrics needed to evaluate current performance and establish
targets that will drive the product development organization. Successful companies
typically choose metrics that balance four key dimensions: quality, time, productivity
and cost. These metrics should extend beyond traditional performance measurements
and include key predictive measures. Similarly, organizations need to differentiate
between performance and diagnostic metrics.
Phase 2 defines the program’s data collection mechanisms and the management
responsibilities that need to be in place to collect, support and track the program’s
metrics. During this phase, organizations decide:
• Who will collect their data?
• Who will prepare and distribute the program’s periodic reports?
• What systems will be used to facilitate these tasks?
• Who will review the data?
• Who will communicate the conclusions drawn after the data has been analyzed?
Phase 3 identifies key opportunities for performance improvement and establishes
new processes that can be implemented in targeted operational units.
It is important to establish a balance between metrics that apply to different groups
and to understand how these metrics interact with each other. Two highly respected
business advisors offer a hierarchy of metric measurements that organizations can
use to tie their metrics together and foster better overall business behavior. PDMA has
outlined a ladder of abstraction that defines four levels of business metrics.12 DRM
Associates takes a broadly similar approach.13
PDMA’s ladder of abstraction14

4th order metrics – Enterprise metrics/capabilities (e.g., stock price, core competencies, growth, breakeven time, percent of revenue from new products, proposal win percent, development cycle time trend)
3rd order metrics – Portfolio/product metrics (e.g., unit production costs, weight, range, mean time between failures, vintage NPD process metrics)
2nd order metrics – Integrated project/program metrics (e.g., schedule performance, cost of delay, time-to-market, program/project cost performance, balanced team scorecards)
1st order metrics – Functional departmental and process metrics (e.g., milestones, throughput, patents, sales, product quality, product cost, efficiency, staffing vs. plan, turnover rate, errors per 1,000 lines of code). The function’s mission can be used to derive individual measurements.
Both approaches identify a hierarchy of drivers for different parts of the organization that
can be used to coordinate the actions of an entire enterprise to meet today’s
business needs. These methodologies alleviate many of the communication issues that
occur when individuals use different business languages with different levels of detail.
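To illustrate the roll-up idea behind these hierarchies, the following is a minimal sketch in Python of a metrics tree in which lower-order (functional) measurements aggregate into higher-order indicators. The class, the level assignments and the sample figures are assumptions made for illustration; they are not taken from the PDMA or DRM Associates material.

```python
# Illustrative sketch of a four-level metrics hierarchy (order 1 = functional,
# order 4 = enterprise). Names and sample values are assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MetricNode:
    name: str
    order: int                        # 1 = functional ... 4 = enterprise
    value: float = 0.0                # measured value for leaf nodes
    children: List["MetricNode"] = field(default_factory=list)

    def rolled_up(self) -> float:
        """Leaf nodes report their own value; parent nodes aggregate their children."""
        if not self.children:
            return self.value
        return sum(child.rolled_up() for child in self.children)

# Example: functional milestone slippage rolls up through a program-level metric
# into an enterprise-level schedule indicator.
mechanical = MetricNode("Mechanical design milestones missed", order=1, value=3)
software = MetricNode("Software milestones missed", order=1, value=5)
program = MetricNode("Program schedule slippage", order=2, children=[mechanical, software])
enterprise = MetricNode("Enterprise schedule health", order=4, children=[program])

print(enterprise.rolled_up())   # 8 missed milestones rolled up to the top level
```

In practice the aggregation rule would differ per metric (sums, ratios, weighted scores), but the point of the ladder is that every higher-order number can be traced back to the functional measurements beneath it.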
Common program metrics. A variety of formal program execution management
techniques – including the stage gate process – require that key decisions be made
throughout the product lifecycle. The complexity of this requirement is compounded
by the need to incorporate an increasing number of engineering and development
disciplines within the product development cycle. The table in Appendix A provides a
list of program metrics commonly used by today’s product development organizations.
In brief:
Effective metrics management requires that companies understand how different metrics influence the actions of different business functions. PDMA’s ladder of abstraction uses four levels of metrics to address the unique needs of company executives, portfolio and product managers, project and program managers, and functional unit managers.
Effective metrics. Developing effective metrics may appear simple at first. However, few companies have succeeded in using these practices to make a substantial business difference day in and day out. The Business Process Reengineering website uses the following outline to define the SMART approach to metrics management.

In brief:
Using metrics only for their own sake is not an effective form of metrics management. Two best practices help companies develop effective metrics to measure product development performance:
• Business Process Reengineering’s SMART method for metrics management
• PDMA’s eight principles for effective metrics

Specific – ensure that program metrics are specific and targeted to the area being measured
Measurable – make certain that collected data is accurate and complete
Actionable – make the program’s metrics easy to understand and clearly chart performance over time so that decision makers know which direction is “good” and which direction is “bad”
Relevant – measure what is important and avoid metrics that are not
Timely – ensure that program metrics produce data when it is needed
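One simple way to apply the SMART checklist is to record each metric definition as data, so the definition itself states what is measured, how it is calculated, which direction is “good,” who owns it and when it is reported. The Python sketch below is only an illustration with hypothetical field names; the sample metric (requirements creep) is taken from Appendix A.

```python
# Illustrative sketch: capturing a SMART metric definition as data.
# Field names and the owner/reporting cycle are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class MetricDefinition:
    name: str             # Specific: what exactly is measured and where
    formula: str          # Measurable: how the value is calculated
    good_direction: str   # Actionable: whether "up" or "down" is good
    owner: str            # Relevant: who acts on the result
    reporting_cycle: str  # Timely: when the data is produced

requirements_creep = MetricDefinition(
    name="Requirements creep",
    formula="new requirements / total number of requirements",
    good_direction="down",
    owner="Program manager",
    reporting_cycle="monthly",
)

print(f"{requirements_creep.name}: lower is better, reviewed {requirements_creep.reporting_cycle}")
```

Writing definitions down this explicitly also supports the PDMA principle, listed below, of creating simple, explicit and understandable metric definitions.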
The PDMA Handbook of New Product Development takes this advice even further by
making the following suggestions:
• Align metrics with the organization’s success criteria – so that everyone from the CEO
to individual engineers understands how their personal performance can affect the
organization’s overall performance
• Define a manageable set of metrics – so that the organization can balance quality,
productivity, time and cost in a manner that enables each metric to have a positive
impact on the business as a whole
• Create simple, explicit and understandable metric definitions – so that everyone in the
organization can easily communicate by using the same business language
• Define program objectives up front – so that the organization can align these objectives
with the needs of the business and clearly identify what the program is trying
to achieve
• Identify ongoing improvement opportunities using program metrics – so that the
organization can monitor the program’s bottom-line impact, maintain the program’s
momentum and reestablish performance targets whenever necessary
• Increase visibility into the program’s operational metrics – so that the organization has a
complete solution for improving its decision-making structure, organization
structure, business processes, tracking mechanisms and reporting templates
• Benchmark regularly – so that the organization can establish internal standards and
predictively measure external success
• Review metric results regularly – so that senior management is able to review
organizational performance and rapidly correct rising dysfunctions
Key Siemens solution capabilities
Basic Siemens PLM Software capabilities. Siemens PLM Software is the leading
global provider of product lifecycle management (PLM) software and services with
5.5 million licensed seats and 51,000 worldwide customers. Siemens’ vision is to
enable organizations and their partners to collaborate using global innovation
networks to deliver products that meet today’s most compelling business imperatives.
Siemens’ digital lifecycle management, digital product development and digital
manufacturing solutions allow organizations to manage all of the product lifecycle’s
diverse processes, including processes that pertain to the planning, development,
manufacturing and service/support functions in today’s extended enterprises.
Individual Siemens solutions help organizations outline their operational structures
and develop best practices to maximize enterprise effectiveness. Once this foundation is
in place, organizations can optimize formal processes around these best practices. At
this point, each of these solutions can extract, summarize and report key performance
indices on a process-specific and customer-specific basis. Equally important, these
indices can be applied across multiple processes to improve the organization’s overall
business performance.
These Siemens solutions are bolstered by Teamcenter® software – the industry’s de
facto standard for deploying PLM on an enterprise basis. Teamcenter is able to bring
all Siemens solutions into a cohesive framework that organizations can use to manage
all of their planning, development, manufacturing and service/support processes.
Using Teamcenter to measure performance. As part of this framework,
Teamcenter provides program execution management capabilities that executives,
product managers and program managers can use to increase their visibility into program
performance and broaden their control over program execution. Organizations can
use these capabilities to automate the extraction and reporting of performance metrics
and treat them as an intrinsic component of their work. To facilitate this unique
approach, Teamcenter combines its knowledge management, process orchestration
and project management capabilities with real-time dashboard functionality.
This approach enables organizations to extract key process information from
individual groups and gain visibility into their enterprise’s up-to-date and
comprehensive program information. In addition to providing key process information,
these techniques provide access to the program’s rolled-up performance metrics,
process metrics, customized strategy-specific KPIs and risk analysis metrics.
Organizations also leverage Teamcenter capabilities to make certain that all product
lifecycle participants are fully aware of the organization’s business goals and the individual
roles they play in meeting these objectives. Teamcenter can roll up the day-to-day
activities of these participants into dashboards that summarize the most important
metrics in the organization’s metrics hierarchy.
Teamcenter’s dashboard capabilities. As part of these capabilities, Teamcenter
dashboards provide executives and managers with extensive reporting and tracking
capabilities. Executives can receive summary updates that highlight the status of all of
their organization’s teams and projects. All entitled stakeholders can request big picture
views of multiple project schedules in easy-to-read Gantt format. They can also access
cross-project management reports that present organization-wide project information.
Executive dashboards facilitate visibility into consolidated resource and cost
information, as well as into best practice analyses. By being able to view numerous
reports in dashboard format, executive management has access to all of the vital
product development metrics required to assess the performance of their most costly
investment – including custom views uniquely oriented for executive decision-making.

In brief:
At a basic level, Siemens solutions enable companies to implement digital product development, digital lifecycle management and digital manufacturing solutions to transform and improve all of the stages in today’s product lifecycle.

Siemens’ Teamcenter software enables companies to establish a PLM foundation, where they can inject best practices into their metrics-driven product development processes. Teamcenter enables companies to extract, summarize and report on key performance-related indicators.

Teamcenter excels at enabling individuals and groups to understand how their performance affects a given project, as well as the company’s strategic business goals.

Teamcenter dashboards allow executives and managers to track team performance, generate project summaries, request big picture business views, consolidate resource/cost data and perform best practice analyses.
In essence, Teamcenter makes it easier to determine which processes are on track,
which are missing their targets and where improvements can be made. This crucial
functionality enables companies to use highly favored closed-loop processes to
manage the product lifecycle’s entire set of processes. Siemens solutions help support
numerous continuous improvement initiatives that focus on improving customer
value, including Six Sigma, kaizen and lean projects.
Together, these capabilities provide deeper visibility into and control over the activities that
comprise most product development and manufacturing processes, while maintaining a
holistic view that covers the entirety of both processes.
The following table lists program metrics commonly used by today’s product
development organizations. This list was originally outlined by DRM Associates’
Product Development Forum.
Appendix A – Commonly used program metrics
In brief:
Different functions within the product development organization use unique metrics to measure and compare their performance. The following functional groups employ their own commonly used metrics: requirements and specifications, mechanical design, electrical design, program management, enterprise management, product assurance, software engineering, parts procurement, product definition, organizational/team management and technology management.

Requirements and specifications
• Number of customer needs identified
• Number of discrete requirements identified (overall system and by subsystem)
• Number of requirements/specification changes (cumulative or per unit of time)
• Requirements creep (new requirements/total number of requirements)
• Requirements change rate (requirements changes accepted/total number of requirements)
• Percent of requirement deficiencies at qualification testing
• Number of to-be-determined requirements/total requirements
• Verification percentage (number of requirements verified/total number of requirements)
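The ratio metrics above (requirements creep, requirements change rate and verification percentage) are simple to compute once requirement counts are tracked. A minimal sketch in Python, assuming purely illustrative counts:

```python
# Minimal sketch of the ratio metrics listed above.
# Sample counts are illustrative assumptions, not data from this brief.

def requirements_creep(new_requirements: int, total_requirements: int) -> float:
    return new_requirements / total_requirements

def change_rate(changes_accepted: int, total_requirements: int) -> float:
    return changes_accepted / total_requirements

def verification_percentage(verified: int, total_requirements: int) -> float:
    return 100.0 * verified / total_requirements

total = 400  # discrete requirements identified
print(requirements_creep(new_requirements=32, total_requirements=total))   # 0.08
print(change_rate(changes_accepted=48, total_requirements=total))          # 0.12
print(verification_percentage(verified=352, total_requirements=total))     # 88.0
```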
Mechanical design
• Number of in-process design changes/number of parts
• Number of design review deficiencies/number of parts
• Number of drafting errors/number of sheets
• Number of print changes/total print features
• Drawing growth (unplanned drawings/total planned
drawings)
• Producibility rating or assembly efficiency
• Number of prototype iterations
• Percent of parts modeled in solids
Electrical design
• Number of design review changes/total terminations or
connections
• Number of post-design release changes/total
terminations or connections
• Percent fault coverage or number of faults
detectable/total number of possible faults
• Percent fault isolation
• Percent hand assembled parts
• Transistors or gates designed per engineering man-month
• Number of prototype iterations
• First silicon success rate
Portfolio and pipeline
• Number of approved projects
• Ongoing development work-in-progress (non-recurring, cumulative investment in approved development projects, including internal labor and overhead and external development expenditures and capital investment, such as tooling and prototypes)
• Development turnover (annual sales divided by annual
average development work-in-progress)
• Pipeline throughput rate
• New products completed/released to production last
12 months
• Cancelled projects and/or wasted spending last 12 months
• Percent R&D resources/investment devoted to new
products (versus total of new products plus sustaining
and administrative)
• Portfolio balance by project/development type (percent
of each type of project: new platform/new market, new
product, product upgrade)
• Percent of projects approved at each gate review
• Number of ideas/proposed product in pipeline or
investigation phase (prior to formal approval)
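Development turnover, defined above as annual sales divided by annual average development work-in-progress, can be illustrated with a short calculation. The figures below are invented purely for the example:

```python
# Illustrative calculation of development turnover as defined above.
# All figures are invented for the example.

annual_sales = 240_000_000.0          # annual sales attributable to the portfolio
avg_development_wip = 40_000_000.0    # annual average development work-in-progress

development_turnover = annual_sales / avg_development_wip
print(f"Development turnover: {development_turnover:.1f}")   # 6.0
```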
Program management
• Actual staffing (hours or headcount) vs. plan
• Personnel turnover rate
• Percent of milestone dates met
• Schedule performance
• Personnel ratios
• Cost performance
• Milestone or task completion vs. plan
• On-schedule task start rate
• Phase cycle time vs. plan
• Time-to-market or time-to-volume
Enterprise
• Breakeven time or time-to-profitability
• Development cycle time trend (normalized to program
complexity)
• Current year percent of revenue from products
developed in the last “x” years (where “x” is typically
the normal development cycle time or the average
product lifecycle period)
• Percent of products capturing 50 percent or more of
the market
• R&D expense as a percent of revenue
• Average engineering change cycle time
• Proposal win rate
• Total patents filed/pending/awarded per year
• R&D headcount and percent increase/decrease in R&D
headcount
Product assurance
• Actual MTBF/predicted MTBF
• Percent of build-to-packages released without errors
• Percent of testable requirements
• Process capability (Cp or Cpk)
• Product yield
• Field failure rate
• Design review cycle time
• Open action items
• System availability
• Percent of parts with no engineering change orders
Software engineering
• Man-hours per 1,000 software lines of code (KSLOC)
• Man-hours per function point
• Software problem reports (SPRs) before release per
1,000 software lines of code (KSLOC)
• SPRs after release per KSLOC
• Design review errors per KSLOC
• Code review errors per KSLOC
• Number of software defects per week
• SPR fix response time
Parts procurement
• Number of suppliers
• Parts per supplier (number of parts/number of suppliers)
• Percent of standard or preferred parts
• Percent of certified suppliers
• Percent of suppliers engaged in collaborative design
Product
• Unit production cost/target cost
• Labor hours or labor hours/target labor hours
• Material cost or material cost/target material cost
• Product performance or product performance/target
product performance or technical performance measures
(e.g., power output, mileage, weight, power consumption,
range, payload, sensitivity, noise, CPU frequency)
• Mean time between failures (MTBF)
• Mean time to repair (MTTR)
• System availability
• Number of parts or number of parts/number of parts
for last generation product
• Defects per million opportunities or per unit
• Production yield
• Field failure rates or failure rates per unit of time or
hours of operation
• Engineering changes after release by time period
• Design/build/test iterations
• Production ramp-up time
• Product ship date vs. announced ship date or planned
ship date
• Product general availability (GA) date vs. announced GA
date or planned GA date
• Percent of parts or part characteristics
analyzed/simulated
• Net present value of cash outflows for development
and commercialization and the inflows from sales
• Breakeven time
• Expected commercial value (equals the net present
value of product cash flows multiplied by the probability
of commercial success minus the commercialization
cost; this is multiplied by the probability of technical
success minus the development costs)
• Percent of parts that can be recycled
• Percent of parts used in multiple products
• Average number of components per product
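The expected commercial value metric described above can be expressed as a short calculation. The Python sketch below follows the wording of the definition given in this list; the function name and all figures are illustrative assumptions:

```python
# Sketch of the expected commercial value (ECV) calculation as described above:
# ECV = ((NPV x probability of commercial success - commercialization cost)
#        x probability of technical success) - development cost.
# All figures are invented for illustration.

def expected_commercial_value(npv: float,
                              p_commercial: float,
                              commercialization_cost: float,
                              p_technical: float,
                              development_cost: float) -> float:
    return ((npv * p_commercial - commercialization_cost) * p_technical
            - development_cost)

ecv = expected_commercial_value(
    npv=10_000_000.0,             # net present value of future product cash flows
    p_commercial=0.8,             # probability of commercial success
    commercialization_cost=1_000_000.0,
    p_technical=0.5,              # probability of technical success
    development_cost=2_000_000.0,
)
print(f"ECV: {ecv:,.0f}")         # ECV: 1,500,000
```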
Organization/team
• Balanced team scorecard
• Percent project personnel receiving team building/team
launch training/facilitation
• Average training hours per person per year or percent
of payroll cost for annual training
• IPT/PDT turnover rate or average IPT/PDT turnover rate
• Percent core team members physically co-located
• Staffing ratios (ratio of each discipline’s headcount on
project to number of design engineers)
Technology
• Percent team members with full access to product data
and product models
• CAD workstation ratio (CAD workstations/number of
team members)
• Analysis/simulation intensity (analysis/simulation runs
per model)
• Percent of team members with video-conferencing/
desktop collaboration access/tools
Footnotes
1. Innovation 2005, The Boston Consulting Group.
2. “Money Isn’t Everything,” The Booz Allen Hamilton Global Innovation 1000, Barry Jaruzelski, Kevin Dehoff, Rakesh Bordia, 2005.
3. Ibid.
4. Unmanaged R&D Spending is the Leak that Shareholders Want Plugged, Kevin O’Marah, Laura Carrillo, AMR Research, 2004.
5. Ibid.
6. The PDMA Handbook of New Product Development, Kenneth B. Kahn, George Castellion, Abbie Griffin, 2005.
7. Ibid.
8. The Human Side of Enterprise, Douglas McGregor, 1960.
9. The Performance Measurement Group, 2002.
10. Op. cit., Brown, Aberdeen Group.
11. Ibid.
12. Op. cit., PDMA Handbook.
13. Product Development Metrics, Kenneth Crow, DRM Associates, 2001.
14. The originator of the ladder of abstraction is S.I. Hayakawa, who presented it in his famous book Language in Thought and Action, Fourth Edition, 1990.
About Siemens PLM Software
Siemens PLM Software, a business unit of the Siemens Industry
Automation Division, is a leading global provider of product
lifecycle management (PLM) software and services with nearly
six million licensed seats and 56,000 customers worldwide.
Headquartered in Plano, Texas, Siemens PLM Software works
collaboratively with companies to deliver open solutions that
help them turn more ideas into successful products. For more
information on Siemens PLM Software products and services,
visit www.siemens.com/plm.
Siemens PLM Software
Headquarters
Granite Park One
5800 Granite Parkway
Suite 600
Plano, TX 75024
USA
972 987 3000
Fax 972 987 3398
Americas
Granite Park One
5800 Granite Parkway
Suite 600
Plano, TX 75024
USA
800 498 5351
Fax 972 987 3398
Europe
3 Knoll Road
Camberley
Surrey GU15 3SY
United Kingdom
44 (0) 1276 702000
Fax 44 (0) 1276 702130
Asia-Pacific
Suites 6804-8, 68/F
Central Plaza
18 Harbour Road
WanChai
Hong Kong
852 2230 3333
Fax 852 2230 3210
© 2009 Siemens Product Lifecycle Management
Software Inc. All rights reserved. Siemens and the
Siemens logo are registered trademarks of Siemens AG.
Teamcenter, NX, Solid Edge, Tecnomatix, Parasolid,
Femap, I-deas and Velocity Series are trademarks
or registered trademarks of Siemens Product Lifecycle
Management Software Inc. or its subsidiaries in
the United States and in other countries. All other
logos, trademarks, registered trademarks or service
marks used herein are the property of their
respective holders.
www.siemens.com/plm
8/09