IMPROVING THE PRODUCTIVITY OF AN R&D ORGANIZATION
By
Armando Miguel Hurtado Schwarck
B.S. Mechanical Engineering
Universidad Simón Bolívar, Caracas, Venezuela, 2005
Submitted to the Systems Design and Management Program in partial fulfillment of the
requirements for the degree of
Master of Science in Engineering and Management
at the
Massachusetts Institute of Technology
May 2013
© Massachusetts Institute of Technology. All rights reserved
Signature of Author:
Signature redacted
Armando Miguel Hurtado Schwarck
System Design and Management Program
Certified by:
Signature redacted
Ralph Katz
Senior Lecturer, Technological Innovation, Entrepreneurship and Strategic Management.
Thesis Supervisor
Accepted by:
Signature redacted
Patrick Hale
Director, Program of System Design and Management
IMPROVING THE PRODUCTIVITY OF AN R&D ORGANIZATION
by Armando Miguel Hurtado Schwarck
Submitted to the System Design and Management Program in partial fulfillment of the
requirements of the degree of
Master of Science in Engineering and Management
Abstract
This research demonstrates, through a comprehensive case study, the application of
Lean manufacturing techniques, specifically Value Stream Mapping, to a product
development organization in the mass consumer products industry. With the guidance of
a methodology for Value Stream Mapping adapted to Product Development by Dr. Hugh
McManus (McManus, 2005), I map specific processes related to the approval of tests
and the spending of R&D funds. This mapping allows the identification of wastes and
improvements in the cycle time and productivity of the process under study.
In order to achieve the results above, a precise definition of value and productivity was
needed. I derived this from the combination of R&D productivity concepts extracted from
the literature that were useful in this application (Tipping, Zeffren, & Fusfeld, 1995),
(Steven, Mytelka, et al., 2010), (Meyer, Tertzakian, & Utterback, 1997).
Value Stream Mapping also requires the process under study to be precisely bound. In
order to narrow the scope of study from the overall product development process to
something more manageable, a combination of qualitative interviews with employees
and quantitative data from seven past projects was analyzed. This analysis showed that
a significant amount of time was spent by the organization on approval processes.
Additionally, procurement processes were highlighted as needing potential
improvements.
An important conclusion of the work is that approval processes, which are meant to
manage and maximize the returns on variable R&D spending, might be
counterproductive if we consider their impact on cycle time and the utilization of fixed
R&D assets.
Thesis Supervisor: Ralph Katz
Title: Senior Lecturer, Technological Innovation, Entrepreneurship and Strategic
Management.
Acknowledgements
I would like to thank most of all my unnamed peers at WCPco. who helped me in this
ambiguous journey towards greater R&D productivity. I am also grateful for the guidance
and support of Ralph Katz and James Utterback, who were able to nudge me in the right
direction at critical times during this research. Lastly, I would like to acknowledge the
support of my family, on whom I could always count, and of my girlfriend Kareem, who
pushed me forward through many a late night.
Table of Contents

Abstract
Acknowledgements
Introduction
    Research Motivation
    Hypothesis
    Methodology
Case Study Background
    Company Overview
    Current Development Process
        Discovery Stage
        Design Stage
        Qualify Stage
        Readiness Stage
        Launch Stage
    Breakdown of Deliverables by Function
    Information Flow Structure in a Product Initiative Team
Problem Statement
    Management Goals
    Organizational Culture Sensing
        Methodology
        Relevant Cultural Themes
            R&D Processes
            Time and Resources
            Trust
            I Wish I Could....
    Specifics of the Problem Statement
Definitions for R&D Productivity
    General Problems with Measuring R&D Productivity
    The "Purist" or Holistic Metrics
    Counting Metrics
    Subjective Metrics
    Metrics in the Context of a Product Family
    Conclusions on R&D Metrics in the Literature
Introduction to Product Development Value Stream Mapping (PDVSM)
    Methodology Background
    Value Stream Mapping Overview
        Scoping of the Problem
        Mapping the Current State
        Identifying Waste
        Improving the Process
        Implementation and Future State
    Applicability of the Methodology to the Problem at Hand
Application of PDVSM to the Case Study
    Identification of Specific Process to Improve
        Quantitative Analysis of Past Initiative Gantt Charts
            Description of the Data Set
            Analysis of the Data Set
        Final Selection of Specific Process to Improve
    The Product Testing Approval, Build and Placement Process
        Stakeholders in the Product Testing Approval, Build and Placement Process
        Bounding the Problem
        Definition of Value
        Productivity of the Process
        Secondary Metrics
        Value Creation in the Process
    Mapping the Current State Value Stream
        Process Data of the Current Value Stream
        Evaluation of Value
    Identification of Waste
    Improving the Process
        Elimination of Inefficient Reviews and Approvals
        Breaking Down Monuments
        Exploiting Underutilized Analysis
        Eliminating Unnecessary Motion
        The Future State Value Map
Conclusions
Bibliography
List of Tables, Figures and Equations
Tables

Table 1: Deliverables of Product Related Functions
Table 3: Interaction of Different Functions
Table 4: Product Initiatives Studied
Table 5: Data for the Current State Value Stream
Table 6: Future State Value Metrics
Table 7: Comparison of Cycle Time Reductions without Considering Re-Loops

Figures

Figure 1: Stakeholders in a Product Initiative
Figure 2: Product Initiative
Figure 3: Scope of PDVSM as outlined in the Manual (McManus, 2005)
Figure 4: Counts of Activities by Type
Figure 5: Total Duration in Days of the Sum of Activities
Figure 6: Distribution of the Duration in Days of Approval Activities
Figure 7: Sum of Duration of Tasks per Activity Type Removing Regulatory Market Clearance Approval
Figure 8: Bounding of the Process
Figure 9: Value Stream Mapping Nomenclature (McManus, 2005)
Figure 10: Current State Value Stream
Figure 11: Current State Value Stream Map with Metrics
Figure 12: Management Approval Process
Figure 13: Component Procurement Process
Figure 14: Future State Value Stream Map

Equations

Equation 1: R&D Productivity as Defined by WCPco.
Equation 2: General R&D Return
Equation 3: R&D Productivity on a Per Cycle Basis
Equation 4: Platform Efficiency
Equation 5: Platform Effectiveness
Equation 6: Productivity of the Process
Introduction
Research Motivation
Innovation is the lifeblood of many businesses and a pillar of the modern economy. The
word is ever-present in management books and magazines and for good reason; in a
competitive marketplace, innovation is widely held as the key for long term survival of a
firm. Yet even with great commitment from corporations and governments in fuelling
R&D, a small but growing number of experts believe that the productivity of current
innovation might actually be declining compared to decades past (Economist, 2012).
What interests me particularly about this topic is that when it comes to productivity in
manufacturing, the data shows continued and unquestionable growth over the decades
(Cobet & Wilson, 2002). Improvements in computing, statistics, operations research and
automation have all helped to sustain manufacturing productivity growth. Many of these
same developments are applicable in an R&D setting, so how could it be that
productivity of innovation might be flat or even in decline? Part of the difficulty in this
puzzle is the fact that there is no objective or agreed-upon definition of what R&D
productivity actually means or how it can be measured (Meyer, Tertzakian, & Utterback,
1997).
On a more personal note, I have worked in two different product development
organizations over the last nine years and I have been witness to how innovative
products are developed and brought to market in the context of mass consumer goods.
During my time in these organizations, the productivity of our work could anecdotally be
classified as inconsistent. I have witnessed productivity gains derived from investments
in development of models and more powerful simulations, analytics techniques, rapid
prototyping and live consumer testing enabled by mobile devices. But at the same time,
growing product and market complexity and the challenge of having to continuously
develop breakthrough technologies makes it difficult to sustain gains in innovation
productivity.
The objective and motivation behind this research is to find and apply cutting edge
operations research techniques for product development productivity to my current R&D
organization. I will be basing my work on methodologies developed under MIT's Lean
Advancement Initiative, specifically those related to Lean Product Development.
Hypothesis
Development and Manufacturing are two very different environments. In manufacturing,
the objective is to obtain, with increasing accuracy and efficiency, the same results (a
product) given a particular process. When engaging in development work, the objective
is to obtain results that are new, different and superior to anything else achieved in the
past. R&D work is, by its very nature, unpredictable and it does not easily lend itself to
being quantified. Given all of the above, I still believe that many of the techniques that
have resulted in the manufacturing productivity revolution of the last fifty years can still
be applied in a limited manner in a research and development process.
Methodology
The proposed hypothesis will be tested in the context of an actual product development
organization in which I am currently employed and will use as a case study. The
approach will begin with a literature review and the establishment of a useful definition
for R&D productivity in the context of my organization. It will then follow the use of
qualitative interviews of peers to further define the problem and potential wastes in the
product development process. Afterwards, I will gather quantitative data of the product
development process by analyzing the performance of past product development
initiatives. A total of seven projects will be analyzed. Following this analysis, I will apply
Value Stream Mapping to visualize the product development process, brainstorm
improvements and evaluate the potential gains in productivity that could be achieved.
The Lean Advancement Initiative, headed out of the Massachusetts Institute of
Technology, is viewed by many in the field as a leading organization in the development
and application of Lean principles. With regards to product development, the body of
application has been summarized in the Product Development Value Stream Mapping
Manual (McManus, 2005). This manual provides a useful roadmap for those familiar with
Lean principles and looking to apply them to a Product Development Process. These
techniques will provide a valuable framework for the current research.
Case Study Background
Company Overview
The Worldwide Consumer Products Company (WCPco.) is a Fortune 500 multinational
corporation. This company specializes in developing, manufacturing and distributing
mass produced consumer products sold through major retail outlets. Its main
organization structure consists of a matrix of business units (organized by product
category) and market development organizations (in charge of sales and marketing
execution and organized regionally). The case study for this research will focus on the
R&D organization inside one of the largest business units of the company. This business
unit was acquired by the company less than ten years ago and complete cultural
assimilation of the business unit into the overall company has not yet concluded.
The R&D organization of this business unit has over 1000 employees in 5 sites spread
over North America, Europe, Latin America and Asia. The core divisions of this
organization include:
* Front End Development (FED): This organization focuses on proof of principle technology development. Their main goal is to develop the next generation of foundational technologies that will fuel the business in the future; they are not constrained by "go to market" timetables or by current process and product technologies.

* Materials Development: There are four major categories of materials that are integrated into the products of this business unit. This organization collaborates with suppliers to develop the right formulations and supply chain capabilities to meet the performance and cost targets of the products.

* Product Design and Development (PDD): This organization has the core capability of synthesizing the specific product design based on consumer needs, available technologies and constraints such as cost and manufacturability.

* Products Research (PR): This organization focuses on finding the key consumer needs and translating them into technical product requirements specific enough to be useful to the rest of the technical development functions.

* Process and Equipment Development (P&E): WCPco. considers manufacturing one of its key competencies and keeps the development of most of its manufacturing processes in-house. The process and equipment organization works in close collaboration with the product development group to design the processes for making the product based on capital, cost and supply constraints.

* Analytical: This organization specializes in the development of measurement methods and equipment to be used for either development or quality assurance purposes.

* Industrial Design (ID): The aesthetic look and ergonomics of the product and packaging are considered by WCPco. as key to driving sales. Industrial Design defines the overall look, shape and material finish of the product and the packaging.

* Quality Assurance (QA): From an R&D perspective, quality assurance is responsible for the quality of the data that drives the recommendations produced by the rest of the technical organization. QA establishes, maintains and audits internal R&D processes that ensure that the work is being carried out in compliance with corporate and governmental standards.

* Packaging Development: As the name implies, this organization integrates the need to protect and showcase the product with the in-store shelf requirements of the retailer and the cost constraints of the project.

* Product Safety and Regulatory (PS&R): Assures safety compliance of the product in accordance with governmental regulatory standards for both market release and developmental consumer testing. Also assures that packaging standards meet all labeling regulations.
In addition to the functions outlined above, the business unit has several other functions that are key collaborators in any project but are not specifically part of R&D. These include:

* Marketing: Responsible for bringing together the entire commercial business proposition. In the company lingo, marketing defines "What" the overall project should be (while R&D defines "How" to do it).

* Consumer and Market Knowledge (CMK): This organization has a core competency of understanding the market on a macroeconomic scale. It is their job to define "Who" the project should target.

* Finance: Responsible for overall program economics.

* Program and Project Management: Leads the overall coordination and execution of the project by all functions. This group also maintains the project schedule and budget.

* Product Supply: Operates the entire supply chain once a product and process are validated.

* Legal: Leads the development of the intellectual property protection strategy and ensures that product claims can be supported legally.
The above list is not all-encompassing of the functional roles inside the business unit but
is instead an outline of all the main stakeholders in a particular product initiative. All of
the above can be visually summarized in Figure 1. For the purposes of this research
work, a "Product Initiative" is the main organizing process via which new products are
developed and introduced into the market.
Figure 1: Stakeholders in a Product Initiative
Current Development Process
As explained above, the main organizing vehicle for developing products is a Product
Initiative. This vehicle is a gated development process composed of the following stages:
Discovery Stage
In this stage, feasibility for an initiative is verified on several fronts. These include a right
to win with consumers and retailers, and a right to succeed from a technical,
commercial, financial and proprietary point of view. A small lead team from various
functions is formed and headed by program management to define the project scope
that will satisfy business strategy and objectives based on the feasibility study mentioned
above. At this stage, funding is low and any testing is only to demonstrate feasibility.
Existing technologies or technologies being proven out by the Front End Development
Organization are bundled to create an initial technical scope.
Design Stage
During this stage, funding for testing and overall resources are increased with the
objective of transforming the initial project scope into a locked product, process, supply
and commercial proposition. Large scale, quantitative testing is used to create
confidence in approving major funding for production capital equipment purchases. A
commitment of launch date and volume forecasts is agreed to with the Market
Development Organization.
Qualify Stage
After committing to major funding at the conclusion of the design stage, manufacturing
equipment is purchased, installed and qualified at plants. Packaging and product
performance are validated to meet the original design intent on a mass production scale.
Marketing and communications plans are concluded. A commitment to the trade and
external partners on launch timings and volumes is made at this time.
Readiness Stage
At this stage, all development work has concluded and initial launch volumes are
produced. Specific execution of sales and marketing plans on a local level is agreed to
at the conclusion of this phase.
Launch Stage
During the Launch Stage, sales of the product in the initial market begin and are tracked
by the organization. Expansion into secondary regions and the overall launch sequence
is followed per the initiative scope.
The different stages of a product initiative described above are visualized in Figure 2.
[Figure 2 summarizes the five stages: Discovery (team formation, initial feasibility, establishment of scope and success criteria); Design (technical and commercial proposition locked via quantitative testing; funding, timing and volumes committed internally); Qualify (installation and qualification of manufacturing equipment; product and packaging qualified; timing and volumes committed externally); Readiness (initial launch volumes produced; specific local marketing and sales plans completed); and Launch (start of sales in first market; preparation for global roll out and monitoring).]
Figure 2: Product Initiative
Another important detail of product initiatives is the fact that they are not all treated
equally by the company. There are three classifications of Product Initiatives:
* Stream 1 initiatives are the most highly resourced and technically complex. They are typically expected to bring disruptive new technologies and products to market. The business counts on these initiatives for driving long-term growth goals (5+ years).

* Stream 2 initiatives have lower risk and staffing than Stream 1 initiatives. They bring improved products to market without disrupting or reinventing the underlying technology sustaining the business. The company counts on these initiatives to meet midterm competitive challenges or increase profits of the current portfolio.

* Stream 3 initiatives have minimal R&D staffing and budget. They look to refresh the product portfolio in the short term by introducing new aesthetics or leveraging existing technologies in a new way that is not disruptive.
Breakdown of Deliverables by Function
Above, I have described both the stakeholders in a product initiative and an overview of
the development process. I will now summarize some of the major deliverables and
interactions of those stakeholders throughout the development process. Table 1
showcases some of the major deliverables per phase for the product related functions.
[Table 1 maps major deliverables per development stage for the product related functions: Products Research (PR), Product Design & Development (PDD), Industrial Design (ID) and Product Safety and Regulatory (PS&R). Deliverables range from consumer benefit identification, design briefs and proof of principle consumer testing in the early stages, through validated measurement methods, locked claims and demos, and a final manufacturable design, to final specifications validated on production equipment and final PS&R assessment with approved market clearance.]

Table 1: Deliverables of Product Related Functions
In each of the major deliverables described above, there are a series of interim activities
that each function must define in order to reach the deliverable. These activities are
carried out in collaboration with other functions and are compiled into what is known as
the Holistic Learning Plan of the initiative. This plan is a comprehensive document that
allows other functions to visualize the deliverables of each other and to also determine
how they need to interact in order to achieve those deliverables.
Information Flow Structure in a Product Initiative Team
Now that I have described the functions involved in a product initiative, the stages of
development and the deliverables by function per stage, I also think it is important to
document how the work gets coordinated and completed. I have captured via interviews
with members of different teams, the overall structure of team meetings and sub
meetings in which the execution of work gets coordinated. This will be important as it is
the formal physical structure of how information flows through the system.
Because formal information flows are easily documented, I will present them, while
remaining conscious of the existence of informal information flow structures. These
informal flows, such as enterprise social networks, individual phone calls, ad-hoc
meetings between members and lunch meet-ups, are just as important in
the overall innovation process (Cross, Borgatti, & Parker, 2002).
The most basic instance of information sharing is the multifunctional team. These
teams are composed of members of different functions at different levels and operate
permanently on a weekly, biweekly, monthly or quarterly frequency. For the purposes
of this case study, I will focus on the teams where R&D functions are present. These
teams are:
* Steering Team: Comprised of program management and senior management of the different functions. Meets on a monthly or quarterly frequency. Initiative success criteria and business objectives are aligned, and major changes in scope are approved at this level.

* Program Team: Comprised of the lead team members for each function from both the commercial and R&D sides of the organization. Meets biweekly; learning plans across the organization are shared and aligned, as are decisions that do not violate mandates from the steering team. This team is where the commercial and technical sides of the business exchange information.

* Core Team: Comprised mostly of the technical functions (R&D and PS) and Program Management. This team focuses on technical execution of learning plans across all R&D functions and meets on a weekly or biweekly basis.

* Implementation Team: This team is led by the process and equipment side of the organization and focuses on equipment design, implementation timings and start of production. Packaging and Product Design and Development are permanently invited to this meeting. Meets on a weekly basis.

* R&D Schedule Team: All R&D functions meet to discuss progress on activities and realign on schedules and timings. Meets on a weekly basis. This meeting is led by project management.

* Product Design Team: Lead PDD engineers meet to discuss the execution of the design learning plan in collaboration with other functions. Designs are peer reviewed and input is gathered from other functions. Meets weekly.

* Process Design Team: Lead engineers for all equipment process development meet to discuss execution of learning plans. Other functions participate when needed. Meets on a weekly basis.

* Packaging Focus Team: In this team, requirements from sales and commercial counterparts are incorporated into the specifics of packaging development. This team is where stock keeping units (SKUs) are detailed and aligned. This team meets biweekly.

* Aesthetics Design Team: Led by Industrial Design, this meeting aligns the overall aesthetic look of the product and packaging in accordance with the brand strategy.

* Platform Teams: Although not formally part of the specific initiative team, platform teams align on technical standardization across the different product initiatives. Platform Teams ensure that technologies can be leveraged across various initiatives in accordance with platforming strategies outlined by senior management.
A visual representation of the interaction of the different functions in the teams described
above can be seen in Table 3.
[The table indicates which functions participate in each of the teams described above: Steering Team, Program Team, Core Team, Implementation Team, R&D Schedule Team, Product Design Team, Process Design Team, Packaging Focus Team, Aesthetic Design Team and Platform Teams.]

Table 3: Interaction of Different Functions
Problem Statement
Management Goals
The increase of productivity of any organization or process is always a desired outcome
for management. Increasing productivity, as it is plainly understood in business literature,
results in higher profits given that fewer resources (however these are defined) are spent
on a given desired outcome. For an R&D organization, as I will showcase later in this
work, productivity is rather difficult to precisely define. Notwithstanding, corporate R&D
leadership does have goals in place to improve the productivity of R&D as a whole. In
discussions with senior corporate management, it was understood that the company
defines R&D productivity on a corporate level as the ratio of Net Outside Sales to R&D
spending, as defined in the formula:
R&D Productivity = (Net Outside Sales / R&D Spending) × 100
Equation 1: R&D Productivity as Defined by WCPco.
The above metric is meant to be interpreted as a long term metric. The logic behind
using this as a gross long term metric stems from the fact that if the R&D organization is
increasing in productivity (as defined by this formula), then Net Outside Sales should
grow at a faster pace than R&D spending. It is important to point out the long term intent
of this metric given that in the short term, measures such as cutting R&D budget would
immediately increase the above metric, but could have an unknown effect on longer term
Net Outside Sales. Due to the above, this particular metric performs a monitoring
function and is not particularly actionable in the short term.
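To make that short-term distortion concrete, here is a minimal sketch in Python using invented figures (not WCPco. data) for two consecutive years:

    # Illustrative figures only (not WCPco. data), in billions of dollars.
    sales = [10.0, 10.6]      # Net Outside Sales: +6% in year 2
    spending = [0.50, 0.51]   # R&D spending: +2% in year 2

    for year, (nos, rd) in enumerate(zip(sales, spending), start=1):
        productivity = nos / rd * 100  # Equation 1
        print(f"Year {year}: R&D Productivity = {productivity:.0f}")
    # Year 1: 2000, Year 2: ~2078 -- the metric rises because sales grew
    # faster than spending. Cutting year 2 spending to 0.40 would push the
    # metric to 2650 immediately, regardless of the long term effect on
    # sales, which is why the metric is only meaningful over the long term.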
Inside the specific business unit that is being used as a case study, there exists a strong
belief that a key to a healthy and productive R&D organization is rooted in elements of
organizational culture. The company is not alone in this belief, as it is also heavily
referenced in the literature of innovation (Tellis, Prabhu, & Chandy, 2009).
To this end, the Business Leadership sanctioned an internal study of R&D organizational
culture, specifically aimed towards understanding if there existed a strong "Learning
Culture" inside the organization. The term "Lerning Culture" is used to describe a culture
in which the emphasis of R&D work is to learn fundamental truths about our consumers
and products that will create value for the company. This term is meant to be contrasted
to that of a "Qualification Culture" in which tests are performed only to establish if a
product initiative can succeed in the market without any true understanding of why it was
successful or not. The R&D leadership in the business unit strongly believes that in the
long term, a "Learning Culture" will be more productive than a "Qualification Culture".
Organizational Culture Sensing
Methodology
The Culture sensing study was based on qualitative techniques and not large scale
employee surveys that could be used for quantitative analysis. R&D employees from
different levels and functions of the organization were placed together in small focus
groups of 5 or 6 employees where they were encouraged to speak freely about their work
and challenges that they faced. Employees were segregated into focus groups
according to their organizational level. Focus groups were created for Senior Engineers,
Engineers and Technicians. From these focus groups, consistent themes were
highlighted and representative verbatim statements showcased.
Relevant Cultural Themes
Among discussions in the focus groups about the day to day work, four relevant themes
emerged around which employees expressed strong opinions. These themes are:
R&D Processes
With regard to this theme, employees highlighted several things:

* There is not enough documentation in the handoff of information between departments. Often, final results are passed down to stakeholders without a record of other alternatives tested or of failed tests. Ultimately, the valuable knowledge that can be gleaned from failures is lost.

* With regard to decision making, important decisions are often made at high levels without consulting those involved in the actual work. Information flows up to management well, but has more difficulty flowing down to the engineers involved in the work. Often, small details that are known to those involved in the work make high level decisions unviable.

* Project timings are too tight to allow in-depth "learning". Focusing too much on developing a product that works, without understanding why it works, means that long term learning cannot occur. Project timings only allow "qualification" type work. Many projects are scrapped when they encounter their first failure due to this.

* Too much time is spent on obtaining all of the right approvals to carry out the work. The internal R&D processes have more checks and balances than they necessarily require.

* The complexity of the organizational structure, combined with the constant evolution of said structure, makes it at times difficult to understand who has ownership of deliverables. This can be distracting and ultimately demotivating. During day to day work, teams do not fully know how to interact with each other.
Time and Resources
* There is a lack of resources to obtain complete learnings, especially when it comes to budget (the same is not as true when it comes to people). Qualification work is highly prioritized over learning so as to not impact project timings.

* Good tools for exploration and learning are overutilized, causing long queues. These include specialized consumer test panels and resources for modeling and simulation.

* Time spent in team meetings directly reduces the amount of time that employees can spend actually doing the work. There are too many meetings to attend that do not necessarily add value.
Trust
* A learning culture requires an environment where trial and error, and ultimately failure, can be embraced. The current mindset is focused too much on execution and results. Failed tests are not viewed positively.

* Employees rarely take the initiative to learn because detailed decisions on project scope are taken at high levels without consultation. This places any additional learning in a position to "challenge" the current scope as going against the mandates from higher level management.

* The competitive employee rating system can create incentives not to collaborate. Data and knowledge sharing is less straightforward when employees are in direct competition against each other for ratings in performance reviews.
I Wish I Could....
Some relevant employee verbatim statements were highlighted to create a less abstract
conversation. These include:
"I wish I could use other methodologies than those used in the past to uncover deeper
knowledge without so much resistance from management"
"I wish I could have more decision making ability on test approvals so that I could
execute them faster without so many approval loops"
"I wish I could be free to innovate in a way that could be auctioned regardless of whether
the learning is tied to a specific initiative"
"I wish I could have more clarity and visibility to the business problem being address so I
have the freedom to solve against the problem"
"I wish I could have more influence on initiative master plans and on leadership
decisions"
"I wish I could speed up the approval process needed for me to do my work"
Specifics of the Problem Statement
Given all of the organizational information and context outlined above, the purpose of this
research is to delve deep into the innovation process of the business unit and outline
actionable recommendations to improve productivity. Improved innovation productivity, as
defined by the company, would manifest as faster growth of Net Outside Sales than of R&D
spending.
In order to better guide the research work, I will delve into an exposition of metrics for
R&D productivity that will be more actionable than that outlined in Equation 1. More
actionable metrics are necessary because in order to judge if a recommendation will in
fact improve R&D productivity, we currently have no reliable way of linking R&D
spending to Net Outside Sales other than by empirical observation. This empirical
observation would require years to validate a single proposal and is thus not actionable.
Definitions for R&D Productivity
As part of the literature review of this research, I dived into the use of R&D productivity
metrics in research and industry. I will now outline some metrics, how they are used and
recommend useful aspects of these metrics applicable to the research topic at hand.
R&D metrics in general are a broad research topic in and of themselves, and I intend only to
provide a basic general discussion of the topic.
General Problems with Measuring R&D Productivity
Unlike in a manufacturing environment, when one talks of productivity in an R&D setting,
there is no clear cut definition. In general, productivity can be defined as the ratio
of production output to what is required to produce it, or input.
One of the most frequently used metrics of productivity is Labor productivity in a
manufacturing environment. The output in this case is "goods meeting quality specs" and
the input is the labor time of the employees in manufacturing. This definition is
straightforward and unambiguous. Trying to translate this analogy to an R&D setting
presents many challenges. In R&D, the input can easily enough be defined as overall
R&D spending including both fixed (labor and facilities) and variable (testing and
prototyping) spending, however the output of an R&D organization is much more elusive
to define.
My review of the literature on R&D metrics is based on previous summaries (Brown &
Svenson, 1998), (Meyer, Tertzakian, & Utterback, 1997), (Tipping, Zeffren, & Fusfeld,
1995), plus additional studies documenting their use in real industries. In general, I have
chosen to group R&D productivity metrics into four fundamental camps of thought:
The "Purist" or Holistic Metrics
These sets of metrics focus on obtaining unbiased, clearly quantifiable values for the
productivity of R&D. Tipping et al. (Tipping, Zeffren, & Fusfeld, 1995) provide a good
example of this type of metric in what they refer to as R&D Return:
R&D Return = R&D Yield / R&D Effort
Equation 2: General R&D Return
Where:
R&D Yield = Gross Profits from Sale of New Products + Gross Profits from Lower Costs of Goods
And
R&D Effort = Annual Expenditure on R&D
The "Gross Profits from Sales of New Products" relates to additional profits seen by the
company due to new product launches that were the result of direct R&D Investments.
Gross Profits from Lower Costs of Goods makes reference to the fact that not all R&D
work results in new product sales; a significant portion of R&D investments are
dedicated to product cost savings and process efficiency improvements, both of which
result in greater profits.
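As a worked example under invented figures: if new products contributed $30M in gross profits in a given year, cost of goods reductions contributed another $10M, and the annual R&D expenditure was $20M, then R&D Return = (30 + 10) / 20 = 2.0, i.e., two dollars of yield per dollar of R&D effort.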
There are various derivations of this basic holistic formula. The metric used by Corporate
R&D in WCPco. in Equation 1 is a variation on this formula except for the fact that it
focuses on Net Outside Sales as the only significant R&D output. Focusing on Net
Outside Sales instead of profits simplifies the metric, as profits are more difficult to
derive from an accounting perspective. It is important to note that cost savings profits are
not currently accounted for in Equation 1.
One business example of where this type of thought process works well comes from
Paul et al. (Steven, Mytelka, et al., 2010). They defined R&D productivity by taking the
approach that the drug discovery process is an assembly line and focusing on the use of
Little's Law in queuing theory. They assume that each drug candidate is a Work in
Progress (WIP) with a specific Probability of success assigned to it depending on its
stage of development. More specifically, their proposed definition is:
R&D Productivity = (WIP × p(TS) × V) / (CT × C)

Equation 3: R&D Productivity on a Per Cycle Basis
Where WIP is the Work in Process (number of drug candidates), p(TS) is the probability
of technical success of those drug candidates under development, CT is the cycle time
or number of years that it takes to bring a drug candidate through the development
process, V is the value or sales that can be derived from a drug in market and C is the
total costs to bring a drug to market.
This second derivation of a "Holistic" R&D productivity metric does not consider costs
savings as a potential output of R&D, but it does introduce some interesting concepts
that could be applicable. For example, the term "Work in Progress" is very analogous to
"Product Initiatives". Probability of success is also an interesting concept that I might use
moving forward as well as that of Cycle Time.
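For illustration, the per-cycle metric can be computed for a hypothetical pipeline; all figures below are invented and are not from Paul et al. or WCPco.:

    # Hypothetical pipeline, illustrating Equation 3 (invented figures).
    WIP = 10      # candidates (or, by analogy, product initiatives) in process
    p_TS = 0.2    # average probability of technical success
    CT = 4.0      # cycle time in years from start to launch
    V = 50.0      # value of a successful launch, $M
    C = 8.0       # total cost to carry one candidate through, $M

    productivity = (WIP * p_TS * V) / (CT * C)
    print("R&D productivity:", productivity)          # 3.125

    # Halving cycle time doubles the metric, which is one reason cycle
    # time is such an actionable lever in this research.
    print((WIP * p_TS * V) / ((CT / 2) * C))          # 6.25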
Overall, Holistic measures such as these carry the advantage of providing a good
answer to the question of the productivity of an R&D organization with a minimal number
of assumptions. However, they require empirical data that can only be gathered over
various years and thus can be limited in their use for day to day application. Additionally,
this empirical data might be very difficult to collect in an unbiased way; Profits from new
products and increases of Net Outside Sales are not only the result of R&D work but can
be highly influenced by marketing spending or by changes in the overall economic or
competitive landscape.
Counting Metrics
A second style of metric that I found during my literature review focuses on counting
measurable instances that occur inside of the product development process. These
metrics do not attempt to answer the question of overall system productivity but simply
take an approach that "all else being equal", increasing or decreasing certain counts of
instances will improve productivity. Brown (Brown & Svenson, 1998) makes a good
outline of these types of metrics. In his words, they include things like: "Research
proposals written, Papers published, Designs produced, Products designed,
Presentations made, Books written, Patents received, Awards won, Projects completed,
among others."
There are also additional metrics of the product development process itself that would
fall into this style and are commonly used in industry. A study from The Corporate
Executive Board's Research and Technology Executive Council (Council, 2010) that
surveyed industry showcases the common use of metrics such as: Number of Ideas
Entering the Innovation Process, Percentage of ideas Funded, Idea to Concept Time,
Spending on External Collaboration, Percentage of New Products Containing External
Technologies, Number of Active Projects in the Pipeline, On-Time Performance of
Projects, On-Budget Performance of Projects, Concept to Market Time, New Product
Success Rate and Cannibalization of existing Sales.
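As a sketch of how mechanical these metrics are to compute, assume a simple log of dated R&D outputs (the record format below is invented for illustration):

    from collections import Counter

    # Invented R&D output log for illustration.
    outputs = [
        {"year": 2012, "type": "patent"},
        {"year": 2012, "type": "paper"},
        {"year": 2012, "type": "patent"},
        {"year": 2013, "type": "design"},
        {"year": 2013, "type": "patent"},
    ]

    # Counting metrics are simple tallies per period...
    per_year = Counter((o["year"], o["type"]) for o in outputs)
    print(per_year[(2012, "patent")])   # 2

    # ...which is also their weakness: a tally says nothing about the
    # quality or business impact of each patent, paper or design.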
Overall, many of the counting metrics can be quantified easily without waiting for years
as in the case of the Holistic metrics. However, these metrics do come with a big
assumption: as mentioned above, they assume that "all else being equal" increasing or
decreasing these metrics means success. Brown (Brown & Svenson, 1998) argues,
however, that counting metrics lack a key assessment of quality. Placing a great
deal of emphasis on these metrics might result in lower quality outputs. Just
because many more patents and papers and reports are being written, it does not mean
that their quality has remained the same or that their business impact is comparable.
Moving forward, I will look to use metrics of this kind when I feel they will add value and
not create disruptions or imbalances in the conclusions.
Subjective Metrics
Another sub group of metrics focuses not on objective quantifiable data, but on
subjective opinions and ratings from employees and managers. These metrics are
derived mostly from internal company and industry surveys. The culture survey
described in the problem statement is a good example of this kind of metric. The
overall philosophy is that for the purposes of managing the organization, survey
information is enough to know that progress is being made.
There are various surveys that are popular. Robert Szakonyi (Szakonyi, 1994) proposes
a good survey that can be used to compare and contrast across different organizations.
This survey focuses on questions that can be calibrated across industries and
companies. Other surveys (Rao & Weintraub, 2013) focus on highlighting areas of
strength or weaknesses within an organization.
Overall, these types of metrics help pinpoint areas of improvement, but do not assist with
day to day measurement of progress. They have the advantage of being relatively
straightforward to obtain, yet as the name implies, are inherently subjective.
Metrics in the Context of a Product Family
One additional train of thought about how to measure R&D productivity takes into
account the fact that once a technology is developed, it can be further leveraged many
times in the future to yield additional benefits. This is the methodology suggested by
Meyer et al. (Meyer, Tertzakian, & Utterback, 1997), and it considers the output of R&D
differently than that espoused by Paul et al. (Steven, Mytelka, et al., 2010).
Instead of considering the output of R&D as a single successful product launch,
Meyer et al try to measure the output as the launch of a successful product family or
"Platform" that can be expanded and improved in the future. This point of view adds an
additional dimension to the output of R&D work: the fact that it is reused and
improved upon far into the future. They introduce two specific metrics of interest:
Platform Efficiency = R&D Costs for Derivative Product / R&D Costs for Platform Version

Equation 4: Platform Efficiency
Where the "R&D Costs for a Derivative Product" correspond to the costs of subsequent
product initiatives based on an underlying technology platform and the "R&D Costs for
Platform Version" are the costs of developing that underlying platform in the first place.
Also, they outline the metric:
Platform Effectiveness = Adjusted Aggregate Sales of a Product Platform / Adjusted Aggregate Costs of Developing the Platform

Equation 5: Platform Effectiveness
Where "Adjusted Aggregate Sales of a Product Platform" refers to the time adjusted
aggregate sales from all products derived from the underlying technology platform and
the "Adjusted Aggregate Costs of Developing the Platform" refers to the time adjusted
aggregate development costs of the underlying platform and all the derivative products
made thereafter.
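A minimal sketch of both platform metrics, assuming one platform and three derivative products with invented cost and sales figures:

    # Invented platform data for illustration, in millions of dollars.
    platform_rd_cost = 40.0
    platform_sales = 200.0
    derivatives = [
        {"rd_cost": 8.0,  "sales": 120.0},
        {"rd_cost": 5.0,  "sales": 90.0},
        {"rd_cost": 12.0, "sales": 150.0},
    ]

    # Equation 4: one efficiency ratio per derivative product; ratios well
    # below 1 indicate that the platform investment is being leveraged.
    for d in derivatives:
        print(f"Platform Efficiency: {d['rd_cost'] / platform_rd_cost:.2f}")

    # Equation 5: aggregate sales over aggregate development costs.
    # (The time adjustment, e.g. discounting, is omitted from this sketch.)
    total_sales = platform_sales + sum(d["sales"] for d in derivatives)
    total_costs = platform_rd_cost + sum(d["rd_cost"] for d in derivatives)
    print(f"Platform Effectiveness: {total_sales / total_costs:.1f}")   # 8.6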
In practice, both of these metrics have some of the challenges associated with the
holistic metrics described above in that they require quantitative data that takes years to
compile in order to be useful. However, the concept of aggregating the overall value over
time of a particular R&D output is a novel idea that I believe I can use moving forward.
Without this concept, it is usually difficult to see the benefit of large, long term R&D
technology investment projects.
Conclusions on R&D Metrics in the Literature
Overall, I would conclude from my literature review that R&D productivity in practice is
generally defined in the way that is most useful for the end user. R&D Productivity is a
very elusive concept due to the nature of the output of R&D work and in application, it is
best to remain pragmatic. From the four trains of thought outlined above, there are
useful concepts that can be extracted and used when they are deemed to provide good
guidance.
Introduction to Product Development Value Stream Mapping
(PDVSM)
Now that I have discussed in detail the specific problem of WCPco. management
wishing to improve the productivity of its R&D Organization and have also defined useful
metrics around what R&D productivity means, I will proceed to outline a methodology for
attacking the issue. This methodology is that of Lean Product Development Value
Stream Mapping. I will base this work on the guidelines set out by the Massachusetts
Institute of Technology's Lean Aerospace Initiative's Product Development Value Stream
Mapping Manual (McManus, 2005).
Methodology Background
Lean and Lean Manufacturing are concepts that refer to the practices of the Toyota
Production System, which focuses on providing value and reducing waste (Womack,
Jones, & Roos, 1990). This way of thinking has been credited as one of the key drivers
in transforming Toyota from a relatively small company to the world's largest automaker.
In a 2009 USA Today survey (Davidson, 2009), up to 61% of manufacturing companies
polled from various industries had adopted concepts derived from Lean Manufacturing.
Although there are some detractors of Lean concepts, for the most part, it is a
methodology that could be described as highly successful in improving processes.
The roots of Lean thinking are in a manufacturing context; despite this, aspects of it
have been applied to the product development process. Based primarily on the master's
thesis of Rich Millard (Millard, 2001) and the work of others, Dr. Hugh McManus
compiled best practices for Lean Value Stream Mapping in a Product Development
context into an application oriented manual (McManus, 2005).
Value Stream Mapping Overview
Lean engineering process improvements have the objective of recognizing value
creation in a process and also highlighting wastes. In the context of a Product
Development Process, what moves through the system is information, instead
of physical products as in manufacturing. Value Stream Mapping is the methodology
through which value is visualized and wastes identified in a given process. In this sense,
Value Stream Mapping is a tool for achieving efficient processes.
The specific process which this research will focus on will be the Product Development
Process of the WCPco. in the business unit described in the case study background of
this paper. By recommendation of the PDVSM manual, a process of the scope and
complexity as that discussed in the case study background is much too broad for the
direct applications of the principles outlined in the manual and I will need to re-scope to a
particular application. The manual outlines the recommended scope as follows:
[Figure 3: Scope of PDVSM as outlined in the Manual (McManus, 2005). © Massachusetts Institute of Technology. The recommended scope lies between the individual task and the enterprise value stream, at the level of "your process".]
The details of the application of the methodology are outlined in the manual, but for the clarity of the reader I will give a brief overview of the steps behind this methodology. These steps are:
Scoping of the Problem
In this initial stage, the first step is to identify the relevant stakeholders in the process.
Any process improvement initiative needs to be undertaken with the right stakeholders
present. Additionally, the specific problem needs to be bounded and defined
unambiguously in this scoping phase. Inputs and outputs of the process should be
identified. Of pivotal importance is also a clear definition of value for each of the
stakeholders, an understanding of how that value is created and a way to measure that
value.
Mapping the Current State
In mapping the current process, the methodology proposes three steps. The first is to arrange the process steps (defined as tasks) and the information flows of the process. The second is to collect performance data on those tasks and, if possible, on the information flows. The third is to evaluate the creation of value through the process, from both tasks and information flows.
Identifying Waste
The identification of wastes in the process is the key step for outlining improvements. Wastes in the process are gleaned from evaluation of the current state process map. The original Lean Manufacturing methodology outlines eight different types of waste in a manufacturing process, which have been reinterpreted for a product development process. These wastes are:
* Over Production: Too much information, data or testing is generated, or it is distributed more than necessary.
* Waiting: Waiting for decisions, approvals or inputs.
* Inventory: Work that is queued up waiting for testing or analysis.
* Excessive Processing: Unneeded re-processing of data, re-inventing solutions, excessive approvals.
* Transportation: Inefficient data sharing or re-formatting. Unnecessary handoffs of data.
* Defective Outputs: Erroneous data collected, incorrect testing executed, or information permanently lost.
* Unnecessary Motion: Switching too often between tasks, more meetings than necessary.
* Unused Employee Creativity: Not fully leveraging the engineering talent.
Improving the Process
To improve the process, it is recommended to follow the subsequent steps:

* Identify all of the relevant wastes in the process map.
* Establish a Takt Time: This is a concept from manufacturing that would need to be creatively adapted to the development process at hand. Takt times allow smooth coordination of activities between customers.
* Assure the availability of information to all who need it, thus eliminating waiting.
* Balance the Line: In another concept borrowed from manufacturing, we are asked to visualize the product development process as a value stream. Balancing all of the elements in that value stream will reduce bottlenecks and wait times.
* Eliminate unnecessary or inefficient reviews and approvals.
* Eliminate unnecessary Monuments and break down Silos: In the terminology of this methodology, "monuments" refers to legacy requirements or mindsets that originally served a purpose but are no longer adding value. "Silos" refers to organizational barriers to the free sharing of information.
* Eliminate unnecessary Motion: This refers to inefficiencies created by switching between tasks or transporting objects or information unnecessarily.
* Eliminate unnecessary Analysis; exploit underutilized analysis.
* Eliminate unnecessary documents and re-formatting.
* Draw the Future State Map: This last step showcases what the final process should look like after the elimination of all the wastes.
Implementation and Future State
Implementation will be left out of this particular research paper and will be up to the management of WCPco. This is not, however, meant to understate the challenge of implementation, nor does it imply that implementation is straightforward.
Applicability of the Methodology to the Problem at Hand
From the beginning of this research work, I have placed special emphasis on seeking to define and improve the productivity of R&D inside a case study organization. Product Development Value Stream Mapping was not the only methodology applicable to this problem; other methodologies were considered, such as Business Process Modeling. Aspects of PDVSM that favored its selection were:

* A proven and accepted track record of success in manufacturing environments.
* Ample application and support in the context of Product Development, provided by institutions with the prestige of the Massachusetts Institute of Technology.
* Open-source and non-proprietary nature.
* Straightforward application.
In addition to the reasons outlined above, I also wished to make sure that the methodology could tackle the problem in terms of the productivity metrics discussed. From the outset, there are various things that PDVSM focuses on improving:

* Cycle time of the process to which it is applied.
* Elimination of unnecessary work and approvals.
* Further leverage of existing information.
Looking back on our discussion of R&D productivity, cycle time figures prominently as a good metric to improve. Reducing the cycle time of the process results in better utilization of fixed R&D assets. If this is combined with no drop in the quality of the R&D output, then it increases productivity, as cost per unit of R&D output is reduced in accordance with the definition of R&D productivity in Equation 3. This equation refers to productivity in a scenario where reduced cycle time results in more work being completed and more value extracted from that work:

$$\text{R\&D Productivity} = \frac{WIP \times p(TS)}{CT} \times \frac{V}{C}$$
However, there is a very important caveat to this assumption: reducing cycle time would increase variable R&D spending in a given fiscal year, because more tests would be placed and more money spent. If we look at Equation 2, we see that increasing R&D spending might actually reduce productivity (because R&D Effort increases) if said increase does not translate into a greater increase in bankable yield:

$$\text{R\&D Return} = \frac{\text{R\&D Yield}}{\text{R\&D Effort}}$$
Due to the above, we can state that reducing the cycle time of R&D processes alone will not guarantee an increase in R&D productivity unless value can be extracted from the additional work completed. This might seem like a trivial point, but it is very important to highlight in the context of the product initiative process of WCPco. The company does not have unlimited capacity to sell and market new products, even if R&D were able to design and qualify them. In this sense, a reduction of cycle time in the product initiative process cannot be taken to mean that more initiatives are launched and marketed. In order to say that, in this context, reduced R&D process cycle time will increase R&D productivity, we must interpret this to mean that a reduction in R&D cycle time frees up capacity to increase or broaden the scope and potential impact (measured in profits) of a given product initiative.
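To make the caveat concrete, the numeric sketch below contrasts the two equations; every figure in it is hypothetical and chosen only to show that a cycle time reduction raises R&D Return only when the extra output can be turned into bankable yield.

```python
# Hypothetical sketch of the cycle-time caveat: halving CT raises throughput
# (Equation 3) and variable spending; R&D Return (Equation 2) improves only
# if the added output can actually be marketed.

def rd_return(rd_yield, rd_effort):
    return rd_yield / rd_effort

wip, p_ts = 10, 0.5           # tests in process, probability of technical success
fixed_cost = 100              # fixed R&D cost per period (assumed)
cost_per_test = 20            # variable cost per test placed (assumed)
value_per_success = 90        # bankable yield per marketable success (assumed)
market_capacity = 45          # yield the business can absorb per period (assumed)

for ct in (10, 5):                                  # before and after improvement
    tests_placed = wip / ct                         # tests completed per period
    successes = tests_placed * p_ts                 # successful outputs per period
    effort = fixed_cost + tests_placed * cost_per_test
    unconstrained = successes * value_per_success   # every success is marketed
    constrained = min(unconstrained, market_capacity)
    print(f"CT={ct:>2}: Return with full value extraction = "
          f"{rd_return(unconstrained, effort):.2f}, "
          f"Return at market capacity = {rd_return(constrained, effort):.2f}")
```

Under these assumed numbers, halving cycle time improves the return when the added successes are fully marketed, but lowers it when the business cannot absorb the extra output, which is exactly the point made above.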
On a separate topic, the elimination of unnecessary work and testing would improve productivity, as unnecessary testing increases variable R&D costs and unnecessary work consumes the capacity of fixed R&D assets. As for the elimination of unnecessary approvals, the cultural sensing survey of the organization has already highlighted this as a potential improvement area.
Increasing the reusability of information, designs and knowledge would also increase productivity under the definition outlined as relevant in the context of product families. Because of all the statements outlined above, I believe that PDVSM can provide value to the problem at hand.
Application of PDVSM to the Case Study
In accordance with the methodology, the first thing we need to do is bound the problem appropriately. As was already mentioned, the scope of the entire product development process is too broad and complex to be fully tackled by this method alone. However, choosing which part of the product development process to focus on is not straightforward either. Below I describe my methodology for selecting the specific process to focus on.
Identification of Specific Process to Improve
As was shown in Figure 2 and Table 1 in the case study background, there are many parts to the overall product development process, and in order to select the right focus area, I will rely on two sources of information.

The first source of information is quantitative data gathered from past product initiatives. This takes the form of initiative Gantt charts and critical path schedules prepared by project managers on past initiatives. This data can help outline quantitatively where issues might lie. In addition, to give guidance on the final selection of a specific process, I will also turn to the qualitative Cultural Sensing Survey, described previously, in which grievances voiced by employees might signal areas that are ripe for improvement.
Quantitative Analysis of Past Initiative Gantt Charts
Description of the Data Set:
A total of seven recent product initiatives were used to create the data set. Characteristics of these product initiatives are summarized in Table 3:
Initiative   Stream   Total Tasks   Duration (days)   Type of Initiative
A            1        437           846               New Product and Process
B            2        49            360               New Materials
C            3        37            176               New Aesthetics
D            3        62            150               New Aesthetics
E            2        225           394               Updated Aesthetics for entire Product Family
F            2        101           428               New Components
G            3        30            220               Cost Savings

Table 3: Product Initiatives Studied
In the table above, the "Stream" of an initiative refers to the term described in the case study background; it is a relative measure of the development complexity of the initiative. Stream 1 is the most complex, followed by Streams 2 and 3.

This data set is representative of the types of product initiatives that the business unit typically works on. Stream 1 initiatives are less common, and we only see one reflected, while we have three Stream 2 initiatives and three Stream 3 initiatives. This overall mix is representative of the frequency with which each kind of initiative presents itself in the product development process.

The total task counts of the initiatives are not a standardized metric, as the initiatives were managed by different people and the work breakdown structure was not performed uniformly across all initiatives. Naming conventions for the different tasks were also non-standard.
Analysis of the Data Set:
Due to the inconsistencies in the naming of tasks and in the work breakdown structure across initiatives, in order to properly analyze the data, I reclassified all 941 tasks. This reclassification was done by placing each task into one of the following categories:
* Product Builds: This category comprises all activities directly related to the making or assembling of products or components for testing or qualification.
* Tests: This category comprises tasks that are considered the direct execution of a type of test. Tests can be technical lab tests, virtual tests, consumer tests, process trials, or commissioning and qualification tests.
* Analysis: This bucket houses all tasks related to the analysis of data or test results, or that are required to prepare reports once data or information exists.
* Peer Review: Tasks under this category are directly related to peer or management review of product designs, prints, specifications, reports or any other kind of information, with the specific intent to comment and modify, as opposed to seeking approval.
* Approval: This category buckets all processes and tasks related to the approval of designs, test designs, materials, specifications or any other output in the product development process.
* Procurement: These tasks are directly related to the acquisition and procurement of raw materials for the production of test, qualification or start-of-sales products.
* Equipment Build: As the name implies, these tasks are related to the construction and installation of manufacturing equipment for the purposes of making test products or products bound for the start of sales.
* Design: This category comprises all activities related directly to the design of products, components or process equipment. Typically, this specifically relates to the time it takes to create the virtual CAD models of the products or process equipment.
* Information Transfer: In this category, we find activities of transfer or conversion of data and information.
* Pack/Ship: These activities refer to the packing and shipping of test products to their experimentation sites, or the shipping of raw materials or equipment to factories and labs.
* Other: Captures all other tasks that could not easily be classified into one of the above categories.
After the proper categorization I analyzed the overall data. Figure 4 below shows the
overall count of activity types.
Activity Type       Count   Prob
Analysis              76    0.08077
Approval             143    0.15197
Design               105    0.11158
Equipment Build       86    0.09139
Info                  27    0.02869
Other                  1    0.00106
Pack/Ship             50    0.05313
Peer Review           11    0.01169
Procurement           55    0.05845
Product Build        244    0.25930
Test                 143    0.15197
Total                941    1.00000

Figure 4: Counts of Activities by Type
As I would have expected, in the R&D process products are frequently built and tests frequently placed. What I found particularly interesting was how often teams were required to undertake an approval process. We can see above that "Approval" is tied with "Test" as the second most frequent activity undertaken. Figure 4 shows how often a type of activity is undertaken, but I also chose to view the total amount of time spent by summing the duration of each task by activity type. Figure 5 is a representation of these sums.
Activity Type       Sum of Durations (days)
Analysis              644
Approval             1742
Design               1343
Equipment Build      4700
Info                  151
Other                   1
Pack/Ship             298
Peer Review            58
Procurement          1236
Product Build        1768
Test                 1981

Figure 5: Total Duration in Days of the Sum of Activities
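The tallies behind Figures 4 and 5 are straightforward to reproduce once each task carries a category label. Below is a minimal pandas sketch; the file name and column names are placeholders for however the reclassified Gantt data happens to be stored.

```python
import pandas as pd

# Placeholder file and column names; the real dataset holds the 941 reclassified
# tasks from the seven initiatives, each with an activity type and a duration.
tasks = pd.read_csv("reclassified_tasks.csv")  # columns: task, activity_type, duration_days

counts = tasks["activity_type"].value_counts()                       # basis of Figure 4
proportions = counts / counts.sum()
total_days = tasks.groupby("activity_type")["duration_days"].sum()   # basis of Figure 5

print(pd.DataFrame({"count": counts, "prob": proportions.round(5)}))
print(total_days.sort_values(ascending=False))
```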
I would like to point out that, across all of these product initiatives, the majority of time was spent on the construction of equipment for testing and for production. This figure is not surprising, as interviews with project managers mentioned that equipment lead times tend to dominate the critical chain of product initiatives. It might seem at first glance that I would choose to focus on the equipment build process, but feedback from many involved in initiatives has steered me away from this, given that many of the long lead times in equipment builds cannot be modified.
As expected as well, a significant amount of time is spent on "Test" and "Product Build" activities. It is interesting to highlight that "Approval" activities not only occur frequently, but also require a significant amount of time. The business unit spends about as much time building products for testing as it does acquiring approvals to build them. To underscore the point that the current process requires an inordinate amount of time for approvals, I would also highlight that more time is spent on these activities than on actually designing products or analyzing data (the core of R&D).

At this point in the analysis, it seems that "Approval" activities could be a particular area of interest to focus on. Figure 6 shows the distribution of the durations of those approval activities.
[Figure 6: Distribution of the Duration in Days of Approval Activities. Summary statistics: N = 143; mean = 12.18 days; std dev = 28.11; std err of the mean = 2.35; 95% confidence interval for the mean = 7.53 to 16.83. Quantiles: minimum = 1; 25th percentile = 1; median = 5; 75th percentile = 10; 90th percentile = 20; 97.5th percentile = 124; maximum = 185.]
It is evident that there is a long tail to the distribution of the duration of these activities. The median duration is five days (a work week), while the long tail pulls the mean above twelve days, with some activities reported to have lasted up to 185 days. I went back to the original Gantt charts of the initiatives and tracked down the approval activities that were lasting so long (over three weeks). I came to realize that these approval activities were related to market clearance for product launch from governmental regulatory bodies. After discussions with experts, I came to the conclusion that the approval activities of very long duration are, for the most part, out of the direct control of the company. For the purposes of my analysis moving forward, I will remove these regulatory market clearance approval activities, as there is not much we can improve about them. Figure 7 re-tabulates the total sum of durations for each activity type when removing regulatory market clearance approval tasks.
Activity Type       Sum of Durations (days)
Analysis              644
Approval              803
Design               1343
Equipment Build      4700
Info                  151
Other                   1
Pack/Ship             298
Peer Review            58
Procurement          1236
Product Build        1768
Test                 1981

Figure 7: Sum of Duration of Tasks per Activity Type, Removing Regulatory Market Clearance Approvals
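The re-tabulation in Figure 7 amounts to dropping the approval tasks tied to regulatory market clearance and summing again. Continuing the hypothetical dataset from the earlier sketch, and assuming the manually identified regulatory tasks were flagged in a boolean is_regulatory column:

```python
# Drop the regulatory market-clearance approvals (identified manually from the
# Gantt charts as approval tasks lasting over three weeks) and re-sum durations.
in_scope = tasks[~((tasks["activity_type"] == "Approval") & tasks["is_regulatory"])]
print(in_scope.groupby("activity_type")["duration_days"].sum())  # basis of Figure 7
```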
Although the amount of time spent on approvals was reduced, this still accounts for a
significant amount of time. The organization spends more time waiting for approvals than
performing analysis on data.
Based on the above analysis, it appears that focusing the PDVSM effort specifically on
approval processes could yield results. Additionally, I will be mindful of procurement
processes when they appear as they also seem to account for a significant amount of
time.
Final Selection of Specific Process to Improve
The final selection of the specific process to apply the PDVSM methodology to is based on the following rationale:

* Feedback from the Culture Sensing Survey already provided a clear indication that the organization believes too much time is spent on approval processes.
* Quantitative analysis of product initiative Gantt charts suggests that a non-trivial amount of time is spent on approval processes.
* Approval processes are not core to R&D work when compared to design, product build, test and analysis processes.

Due to the above rationale, I will focus specifically on approval processes. I will also be mindful that procurement and product build processes are not core R&D activities, yet take up a significant amount of time. These last considerations will help me identify specifically what kind of approval process to focus on.
Of all the approval processes undertaken, one in particular is central to the R&D process and also includes aspects of both procurement and product build activities. This specific process is the "Product Testing Approval, Build and Placement Process".
The Product Testing Approval, Build and Placement Process
The selected process for improvement is central to everything that happens in R&D, particularly in the "Discovery" and "Design" phases of the overall product initiative process. The development of a product is based on placing and analyzing tests, none of which can occur without first going through an approval process for said test. This approval process also includes the approval of funds for the procurement of raw materials and the construction of pilot equipment needed for building the product to be tested. Lastly, the test product is placed in a study, the results of which are analyzed and stored for future analysis.
Stakeholders in the Product Testing Approval, Build and Placement Process
Not all of the stakeholders identified in Figure 1 take part in this process, but those that do are:

* Products Research: They are responsible for driving the process and for taking a proposed test all the way through approval.
* Product Design and Development: They are responsible for the product design, procuring raw materials, building equipment and producing the product to be placed for testing.
* R&D Management: They approve the spending needed to build equipment and products for the test, as well as the costs of the test itself.
* Quality Assurance: They approve that the test being proposed meets the R&D corporate standards for quality with regard to producing data that will drive important business decisions.
* Product Safety and Regulatory: They approve the test from the point of view of product safety, making sure panelists in the consumer test will not be presented with a safety risk.
* Project Management: They manage the overall schedule of tasks that need to be completed, from inception of the test, to procurement of materials, construction of pilot equipment, build of product and the shipment of said product for testing.
* Multifunctional Team: This team is comprised of Materials, Process and Equipment, and Packaging Development counterparts who have a vested interest in the characteristics of the product to be tested and who assist in the tasks necessary to build the test product. Additionally, counterparts from product platform teams participate in order to help with re-utilization of testing results.
* Program Management: They represent the commercial and business side of the organization and have a vested interest in ensuring that the test being proposed will provide confidence that the product can meet the business objectives of the initiative.
* Future Product Initiatives: Other initiative groups will reference data generated by this test for further value creation.
Bounding the Problem
Following the PDVSM methodology, I now proceed to bound the specific process under study in as much detail as possible. Specifics of the Product Testing Approval Process are as follows:

* The process owner is the responsible Products Research representative for the initiative.
* There are two products that move through the process: one is the written test proposal, and the second is the virtual product design that transforms into a physical product that eventually gets placed in a test.
* The initial inputs to the process are a written test proposal and a completed virtual design of a product to be tested. Additionally, the process requires the expenditure of R&D effort (as defined in Equation 2) throughout, as a result of the utilization of fixed R&D assets and the expenditure of the variable R&D budget.
* Additional inputs are: feedback on the product design and on the proposed test plan from the multifunctional team; product success criteria based on initiative business needs; existing historical test data; and virtual modeling and simulation of the design.
* Constraints include: the budget available for equipment and product builds as well as for testing; test panel availability; prototyping capabilities; and initiative timing needs.
* The output of the process is product, on quality and ready to place in a test, that will provide additional confidence that the product initiative will be able to fulfill the business need. Additional outputs are proper documentation of the test and of the products placed.
* Customers of this process are R&D management and the initiative program management, who will use the data from the test to reduce uncertainty. Future product initiative groups are also customers, as they will in turn use data from this test to draft additional recommendations.
Figure 8 is a visual representation of the bounding of the process.
[Figure 8: Bounding of the Process. Process owner: Products Research. Inputs: learning plan; virtual product design; R&D effort. Constraints: R&D budget; test panel availability; prototype capabilities; timing needs. Additional info: product success criteria; feedback from peers; historical data; modeling and simulation. Outputs: test results that add value to the business; proper test documentation.]
Definition of Value
The process as bounded above increases R&D effort as defined in Equation 2. This expenditure of effort makes it necessary for the process to generate value in excess of the effort expended in order for it to be productive. The precise quantification of the value of the output of this process is rather difficult to obtain. As recommended by the PDVSM manual, I will take a pragmatic approach to the definition of value. Specifically, I define the value of the output as follows:

* The value of the output is the degree to which the test results produced increase the probability of the product initiative meeting the business success criteria, as assessed by R&D Management and Program Management.
* Additional value of the output is the degree to which the test results increase the probability of success of future product initiatives, as assessed by the initiative teams at those future instances.
Productivity of the Process
Above, we have discussed the value of the output, yet our purpose is to help increase R&D productivity, not just output. I will therefore discuss specifically how to define the productivity of the specific process under study. I borrow concepts from Equations 2, 3 and 5 as the basis for my measure of the productivity of the process:

$$\text{Productivity of the Process} = \frac{\text{Projected Profits} \times \Delta[p(TS)]' + \text{Projected Future Profits} \times \Delta[p(TS)]''}{\text{R\&D Effort in process}}$$

Equation 6: Productivity of the Process
Where:

Projected Profits are the incremental profits to the business that are projected for the current product initiative;

Projected Future Profits are the incremental profits to the business that are projected from future initiatives leveraging the test results;

Δ[p(TS)]' is the increase in the probability of technical success, as assessed by management, of the current initiative that comes as a result of the execution of the test;

Δ[p(TS)]'' is the increase in the probability of technical success, as assessed by management, of a future initiative that comes as a result of the execution of the test;

R&D Effort in process is the effort used up in the process. This effort takes the form of both fixed R&D costs, as a result of utilizing R&D assets, and variable R&D costs, as a result of direct spending on testing, equipment and product builds.
The objective of defining the productivity of the process so rigorously is not to actually calculate the value of productivity, given that assumptions such as the "increase in probability of technical success" of an initiative are quite subjective, and calculating the utilization of fixed R&D assets is not straightforward. The purpose of this rigorous definition is to guide us in deriving more useful secondary metrics that we know can increase the productivity of this process.
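Although the point of Equation 6 is guidance rather than calculation, it can still be written down executably. The sketch below uses entirely hypothetical inputs to show how the pieces combine.

```python
# Hypothetical evaluation of Equation 6. All inputs are illustrative: the
# increases in p(TS) are management judgments, and effort combines fixed
# asset utilization with variable test spending.

def process_productivity(profits_now, d_pts_now, profits_future, d_pts_future,
                         fixed_effort, variable_effort):
    value = profits_now * d_pts_now + profits_future * d_pts_future
    return value / (fixed_effort + variable_effort)

p = process_productivity(
    profits_now=2_000_000,     # projected incremental profits, current initiative
    d_pts_now=0.10,            # increase in p(TS) attributed to this test
    profits_future=1_000_000,  # projected profits of future initiatives reusing the data
    d_pts_future=0.05,         # increase in p(TS) for those future initiatives
    fixed_effort=150_000,      # cost of fixed R&D assets tied up for the cycle time
    variable_effort=60_000,    # testing, equipment and product build spending
)
print(f"Productivity of the process: {p:.2f}")  # value per dollar of effort
```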
Secondary Metrics
* Cycle time (CT) of the process will figure prominently in the analysis moving forward. Reductions in cycle time will reduce the utilization of fixed R&D assets, thus reducing R&D effort in the process and increasing productivity.
* Variable R&D costs incurred during the process impact R&D effort and reduce productivity. These costs usually account for the procurement of materials, the building of pilot equipment and test product, plus the expenditures on actual testing.
* Fixed R&D costs incurred during the process also impact productivity. These include labor and facilities costs. They are also a function of the cycle time of the process.
* Increase in probability of technical success (Δ[p(TS)]), as described above, is a subjective measure and is assessed by management. This metric is important because it expresses the usefulness (in a business sense) of the data generated by the process.
* Quality of test documentation (QTD) is also a subjective measure. It relates to how well the entire test and its results are documented so as to be useful in the future. Increasing the quality of this documentation will increase the productivity of the process if the test results are later used by future initiatives.
* Quality of test results (QTR) is another subjective measure. It represents the quality of the test results data generated, regardless of its business usefulness. Test results data would be said to be of high quality if the test design met corporate standards and if the test was executed as planned using products of the desired quality. In other words, test results data of high quality would be said to be "reliable". Test results of bad quality would simply not be useful in increasing the probability of technical success.
Value Creation in the Process
Even though we have yet to fully map the process, I want to elaborate more specifically on how value is created in the process as it relates to the metrics discussed above. Creation of value occurs through the following tasks:

* Approvals play a key part in value creation. Management approvals of test plans ensure that these will meet their expectations with respect to increasing the probability of success of the initiative. Quality Assurance approvals of test plans make sure that the quality of the test results will be high, and also ensure that the quality of test documentation is up to par. Product Safety and Regulatory approvals ensure that liabilities to the company are not incurred.
* Peer reviews of test plans and designs also increase the probability of success of the initiative if test results are positive.
* The procurement of materials, construction of equipment and build of product are all necessary steps to get a test placed.
Mapping the Current State Value Stream
Based on interviews with members of the business unit, my own personal experience and the Gantt charts of the initiatives already collected, I was able to map the current state of the value stream. The mapping process was iterative and included both processes that are explicitly included in Gantt charts and processes that occur informally and were gleaned from the interviews. To start my description of the process, I will first list the specific tasks:
"
Drafting a Test Proposal: This particular task is lead by Products Research and it
begins with a clear understanding of the product success criteria and the
specifics of the product design that wish to be tested. A test proposal is drafted in
accordance to what the engineer believes is required.
*
Schedule a Meeting: Review and approval meetings are scheduled by project
management. Team meetings occur weekly so are not difficult to schedule, but
59
meetings with management can be more difficult and time consuming to
schedule.
*
R&D Team Review: The multifunctional R&D team will meet to review both a test
proposal and a test product design. For the production of the test product,
assistance is usually required by both Materials Development and Process &
Equipment Development and this assistance is coordinated at the meeting. Input
to the product design is also given and will improve downstream success of the
initiative.
*
Ordering Components: This task is lead by Product Design and Development
and it requires finding suitable vendors (in-house or external to the company) for
the purchasing of raw materials or components that will be needed to make the
product intended for testing.
*
Scheduling an EO: This task is also owned by PDD. Once an order for
components comes it, the site making them needs to schedule time to run the
"Experimental Order" (or EO) of components. These components are often not
"off the shelf" and experimental orders can vary in complexity. Sometimes, the
equipment required to make the components does not exists and needs to be
constructed.
*
Build Equipment: If the components required are of a nature that the equipment
to make them does not exists, then that equipment needs to be built. This task is
owned by PDD and is performed in collaboration with Process & Equipment
Development. Usually, construction of equipment does not begin until funds are
approved.
" Making Components. Also owned by PDD, once an EO is scheduled, the
responsible vendor makes the required components.
60
*
Shipping Components: Often, the components need to be shipped to the
centralized prototyping lab of the business unit. This is coordinated by PDD.
*
Assemble Product: Once components are centralized, they are assembled by the
prototype lab into finished products ready for testing. The prototype lab does not
have unlimited capacity and services other initiatives, so queuing of builds can
occur. This task is managed by PDD.
*
Test Quality: Once products are assembles, they are tested for quality to ensure
that they meet the specifications required for testing. This task is performed by
PDD.
*
1st
Level up Management Review: This task is lead by Products Research. In
this management review, the test plan and designs are reviewed and approved.
At this stage, budget and spending on the testing is discussed and approved if
the design is seen to be in line with what is required to increase the probability of
success of the initiative.
"
Director Review: Depending on the importance of the test or if the amount
needed to approve for spending is large, a review at the director level is
scheduled. Once again, the test proposal, the product design and the required
funds are reviewed and approved. Often, directors have greater visibility of other
projects and future strategy and will provide input that can increase the
reusability of the product design or the test data. This task is also managed by
Products Research.
*
Create PS&R Request: Once there is general approval to proceed with the test
from management, a request for product safety and regulatory review of the test
plan and product is created. This request is entered into a database that also
61
allows the commencement of the assembly of test product by the prototype lab.
This task is lead by Products Research.
*
Review Request: The PS&R request contains all of the test documentation
including product traceability details. This request is reviewed by the prototype
lab and by the PDD and Quality Assurance organization to ensure all of the
product information is captured correctly and that the test meets corporate
guidelines for data generation.
*
Rout for Approval: Once all of the quality testing of the product is completed, it is
uploaded into the request and submitted for approval. This task is lead by
Products Research.
*
Wait for Approval of PS&R Request: The responsible members of both the
Product Safety and Regulatory organization together with the R&D Quality
Assurance organization approve the request to proceed with the test if it meets
all the requirements.
"
Ship Product to Agency: Testing is usually managed by an external agency.
Once all approvals for the test are completed and the product is packed, it gets
shipped to the agency for testing.
" Recruit Panelists: Once the agency receives the product, they begin the process
of recruiting panelists and delivering the test product to them for use and
evaluation.
" Testing and Results: Once the test is completed, results are analyzed by
Products Research with the input of PDD and later shared with the broader
organization.
*
Test Documentation: All documentation for the test, including results are stored
in the PS&R request database where it can be accessed for future analysis.
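Before drawing the map itself, it is worth noting that the current state can be captured compactly as an ordered list of tasks with owners and expected cycle times. A minimal sketch follows; the task subset and durations here are placeholders, and the measured values appear in Table 4 below.

```python
# Minimal representation of a slice of the current-state value stream: each
# task carries an owner and an expected cycle time. Durations are placeholders.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    owner: str
    cycle_time_days: float

stream = [
    Task("Draft the test proposal", "Products Research", 5),
    Task("Schedule R&D team review", "Project Management", 3),
    Task("R&D team review", "Multifunctional Team", 1),
    Task("Order components", "PDD", 10),
    Task("Assemble product", "Prototype Lab", 10),
    Task("Route PS&R request for approval", "Products Research", 1),
]

total_ct = sum(task.cycle_time_days for task in stream)
print(f"Sequential cycle time of this slice: {total_ct} days")
```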
Before presenting the map of the current state value stream, I would like to provide the reader with clarity on the process mapping nomenclature described by the PDVSM methodology. Figure 9 below explains this nomenclature. Figure 10 is a representation of the current state value stream.
[Figure 9: Value Stream Mapping Nomenclature (McManus, 2005). © Massachusetts Institute of Technology. The notation uses rectangles for action tasks (e.g., "Designer Creates 2-D Drawing"), ovals for review tasks (e.g., "Panel Reviews 2-D Drawing"), diamonds for decision tasks (e.g., "Drawings Meet Requirements?"), symbols for external factors (e.g., "Interface Constraints"), inventory symbols representing waiting queues, storage symbols representing archiving and knowledge base additions, major and minor information flows, and bursts that draw attention to a special feature or problem. Angle brackets indicate optional elements.]
[Figure 10: Current State Value Stream Map. The map traces the flow from drafting the research proposal, through R&D team review, management and director reviews, component ordering, EO scheduling, equipment build, component making and shipping, product assembly and quality testing, PS&R request creation, review and approval, shipment to the agency, panelist recruitment, and testing and results, through to test documentation. External factors include prototype capability, equipment, component and test panel availability, product success criteria, historical data, and modeling and simulation. Legend: SM = Schedule Meeting; SEO = Schedule EO; RP = Recruit Panelists.]
The current state value map is not a completely rigid process. Depending on the test, there might be variations, but what I have chosen to show is the best representation of the overall process in aggregate. I would like to highlight in general how hectic and chaotic this process is in its current state.
Process Data of the Current Value Stream
In order to continue the analysis, I gathered data on the steps of the process. The sources of this data were:

* For the cycle time of each activity, I used the median duration from the Gantt chart dataset where the information existed. When the activity was not formally documented in a Gantt chart, I used the judgment of team members.
* For variable R&D costs, I used standard average values that are documented as part of product initiatives. These values represent historical averages, as suggested by project managers.
* For the increase in the probability of technical success (increase in p(TS)), I used a subjective valuation. If a test is perfectly designed to deliver the greatest increase in p(TS) that it could, then this value is 100. Each activity that inputs into the design of the test increases the metric.
* Quality of test documentation (QTD) works similarly to the increase in p(TS). If, at the end of the process, the quality of the test documentation is perfect, then it is assigned a value of 100. I subjectively valuated how each activity can increase the QTD based on discussions with team members.
* Quality of test results (QTR) again has a maximum of 100 if the test is executed meeting all the required standards for test data excellence.

Table 4 below shows the above metrics for the different tasks in the process.
Table 4: Data for the Current State Value Stream (for each task of the current process: Draft the research proposal; Schedule an R&D Team Review; R&D Team Review; Schedule a 1-Up-Level Management Review; 1-Up-Level Management Review; Schedule Director Review; Director Review; Order Components; Schedule EO; Build Equipment; Make Components; Ship Components; Assemble Product; Test Product Quality; Create PS&R Request; Review Request; Route PS&R Request for Approval; Obtain PS&R Approval; Ship Product to Agency; Recruit Panel; Testing and Results; Test Documentation, the table lists the cycle time in days, the variable R&D cost, and the contributions to the increase in p(TS), QTD and QTR)
There are important biases in the evaluation of metrics like the increase in p(TS), QTD and QTR. Depending on who is asked, values can vary substantially. An important assumption we have made is that, lacking any input, a Products Research engineer can probably design a test that will obtain 50% of the total value possible without significant input from others. This assumption could easily be challenged, but I will not hold up the entire PDVSM process on it. The manual suggests that we approach valuation pragmatically.

One other detail that is important to stress is the fact that the process can vary slightly from test to test. Most noticeably, it will vary if equipment needs to be built to manufacture components (which has a cycle time of 60 days). From the Gantt chart data, most tests do not require equipment to be built. Similarly, not all tests need to be approved at the director level; some can be approved by the one-level-up manager.
I then updated the current process map to show some of these metrics (excluding
variable costs) in Figure 11.
[Figure 11: Current State Value Stream Map with Metrics. The same map as Figure 10, annotated with cycle time, increase in p(TS), QTD and QTR for each task (variable costs excluded). Legend: SM = Schedule Meeting; SEO = Schedule EO; RP = Recruit Panelists.]
Evaluation of Value
The purpose of mapping the current value stream and visually attaching metrics to it is to obtain a clear visual understanding of how tasks create value in the process. The methodology suggests that we might be able to spot non-value-adding tasks from this initial observation, but more likely than not, they will not be plainly visible. In our case, I do not believe that any particular task is completely non-value-added, though there might be non-value-added activities inside some of the current buckets. For example, the equipment build wait time is 60 work days, the largest in the map, and it is quite possible that we could dive into this process more specifically. However, equipment builds have always been known to be an issue, a great deal of attention has already been expended in streamlining these processes, and I believe this will not be a good area to focus on.
Just from a glance at the process map, we see some tasks that are clearly very important. Evaluation of the quality of the test product produces a lot of value for relatively little cost and little time. The initial drafting of the research proposal is also extremely important. Some of the review and approval tasks surprisingly do not add much value; for example, the final PS&R approval adds very little value in general. This might be an interesting area to investigate further.

An additional area of worry comes from the possibility of a research proposal not receiving approval at the director level. By the time a proposal arrives before a director for review, 15 work days have already been spent, and activities such as component procurement have already been started. Not receiving approval from the director restarts the entire process from the beginning and creates a great deal of rework.
Identification of Waste
Now that I have the current state well mapped out and understood, I will proceed to uncover wastes in the process. Specifically, I will call out the wastes from the eight categories identified in the methodology. Category by category, the identified wastes are:
" Waiting: Scheduling approval meetings is a significant waste when it comes to
waiting. Scheduling experimental orders for component manufacturing is also an
important waste in waiting. Shipment of components are also a "waiting" waist
as well as the final PS&R approval wait time.
*
Inventory: With regards to the waste in storage of information, it is not visible in
the value stream map, but discussions with members of teams reveal that the
test documentation stored in PS&R requests often does not include the test data
or test results themselves since these results are acquired after the request is
approved and then never uploaded. This database is used to store all test
documentation and the fact that the actual test results are not always uploaded is
a major waste.
"
Excessive Processing: In general, the research proposal itself is excessively
processed and is one of the biggest waists. This research proposal gets drafted,
reviewed by the team, reviewed by 1 up management, reviewed at the director
level, reviewed by QA and finally reviewed by PS&R. All of these reviews and
approvals add value in ensuring the right output, but many of these requirements
could be stated upfront and forgo much of the reviews and approvals.
" Over Production: Research proposals that are later rejected by management
because they are not deemed to be able to increase the probability of technical
success of the initiative (frivolous testing) is a form of overproduction and I will
70
consider this a waste. The control of this waste, prior to spending R&D budged is
the reason the management approvals exist to begin with.
*
Transportation: In the sense of the PDVSM methodology, transportation refers to
wastes due to transportation of data (data hunting, or reformatting, etc). An
identified waste in the system is the fact that the PS&R database (where all test
documentation is finally stored) is very difficult to search in posterity. This
database does not allow users to search for tests by topic and, unless you know
the author of a particular test, it is very difficult to find relevant information.
*
Unnecessary Motion: The constant review and approval meetings can create
unnecessary motion when the requirements are not known up front and multiple
review and approve loops occur. Additionally, the physical shipment of
components and products are "motions" that could be minimized.
*
Defects: This waste relates mostly to a lack quality in the test data. It can stem
from defective product being tested or wrong product being placed by the
agency. It is also a defect if the test is not documented properly and the results
are lost for posterity.
*
Unused Employee Creativity: The potential waste in this case is not so much
unused employee creativity, but unused employee judgment. Many of the
approval processes are deemed excessive by the actual employees as was
clearly verbalized in the Organizational Culture Sensing Survey. In general,
experienced employees will have the judgment and common sense to design test
and execute research proposals without a significant loss in quality of the output
if the right expectations are established up front.
Improving the Process
There are various areas suggested by the PDVSM manual when it comes to improving the process. I believe that the most relevant areas are the elimination of inefficient reviews and approvals; breaking down monuments; exploiting underutilized analyses; and the elimination of unnecessary motion. We explicitly did not explore the establishment of a takt time, as recommended by the PDVSM manual, because it was deemed too difficult to create a constant rhythm of testing in this process.
Elimination of Inefficient Reviews and Approvals
I have mentioned before that the approvals in this process are deemed excessive by employees. However, as we can see from how they are scored on adding value, it is clear that some value does come from the different approval processes, and we cannot simply eliminate them altogether. The main reason that management requires the reviews is to ensure that all testing will truly be value-added. Since management is the judge of value (by assigning how much they believe a particular test will increase the probability of success of an initiative), their review and approval ensures testing will provide value, especially when there are significant costs involved (requiring building equipment or placing large-scale, expensive tests).

Outright removing the approvals could result in money being spent on frivolous or unnecessary tests (increasing R&D effort for no yield), while the approval process itself increases the cycle time of the process, which also increases R&D effort through greater utilization of fixed R&D assets. In both instances, we risk decreasing the productivity of the process. We can see this more clearly when looking only at the management approval process in Figure 12.
In that figure, we see that the majority of the value (increasing p(TS)) is added at the research proposal drafting stage. The technical judgment of the drafting engineer is the biggest contributor to value. The problem is that engineers drafting the proposals do not always know what managers are looking for, nor do they have the complete technical picture that the review team can provide.
[Figure 12: Management Approval Process. The value stream segment runs from drafting the research proposal through the R&D team review and on to the management reviews, with a meeting-scheduling step before each review. Legend: SM = Schedule Meeting.]
One proposal to circumvent this review and approval process is to increase the amount of information available to the drafting engineer so that he can consider these variables without the necessity of a meeting. The drafting engineers need to know and understand the criteria that managers use to judge whether a test will add value, without having to meet directly for their approval. Some key information that should be made available:

* Specific expectations about the sample size of tests and the target panelists required.
* Continuous visibility into the business success criteria of the product, and specifically into changes in project direction.
* Up-front information regarding the willingness to spend R&D budget at the time.
* Up-front information regarding the priorities of other work being managed through the system.
Knowing all of the above, together with the judgment of experienced and trained engineers, would greatly reduce the chances of approval not being granted, or eliminate the need for approvals altogether. The approval process, all the way to the director level, is costing the company 15 work days of delay even if there are no re-loops. This cost is currently not as visible to managers, who are more focused on controlling variable R&D costs (plainly visible in budgets). Some ground rules could be established to help facilitate a new process:

* Any study that will cost less than the equivalent of 15 work days of R&D fixed asset utilization should not require any kind of management approval (see the sketch after this list).
* Management can shift its focus from approving proposals to making their criteria and considerations known beforehand. Communication vehicles need to be created for this.
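The first ground rule can be made operational with a simple check; the daily cost of fixed R&D asset utilization below is an assumed figure, not a company number.

```python
# Hypothetical implementation of the proposed ground rule: studies cheaper than
# the cost of the approval delay itself skip formal management approval.
FIXED_ASSET_COST_PER_DAY = 4_000   # assumed daily cost of tied-up R&D assets
APPROVAL_DELAY_DAYS = 15           # current delay through director-level approval

def requires_management_approval(study_cost: float) -> bool:
    return study_cost >= APPROVAL_DELAY_DAYS * FIXED_ASSET_COST_PER_DAY

print(requires_management_approval(25_000))   # False: cheaper than the delay itself
print(requires_management_approval(120_000))  # True: large spend still gets reviewed
```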
With regard to Quality Assurance and Product Safety & Regulatory approval, a similar approach can be taken. Both functions could shift from a mentality of having to approve every request to one where the requirements are known up front and audits after the fact are performed to ensure compliance by the engineers.
Breaking Down Monuments
The strict Quality Assurance and Product Safety and Regulatory approval process set up for every test was established a few years ago in the business unit. As was briefly mentioned in the case study background, this business unit is a recent acquisition. During the acquisition process, it was deemed that operating procedures for test placement and documentation did not meet the standards of the acquiring company (in this case, WCPco.). The opinion at the time was that testing was done too "fast and furious" and that mistakes were made. There were instances of wrong products being placed in tests, of bad-quality products, and of products with no traceability of origin. The QA and PS&R approval process was instituted to change the culture of the organization with regard to test documentation rigor.
Over the last couple of years, this process has been followed and the culture shift has taken hold among the engineers. Notwithstanding, all tests still need to go through this process before product can be released to the agency. Here, we could state that the QA and PS&R approval process is a potential monument that has outlasted the needs of the original designers of the system. I would in this case propose that the organization change its mentality to one of "release and audit". Engineers already know what the test documentation requirements are. Tests should not be held up for formal approval. I propose that all tests be released and audited after the fact to ensure compliance, instead of requiring approval on 100% of tests before placement.

One other advantage of the "release and audit" approach is that, the way the system works today, the PS&R request gets approved before the placement of the study, and once the study is conducted, there is no check to ensure that the results of the test (the most valuable data) are uploaded to the database. As a result, it is common that PS&R requests in the system have all the information except the actual test results! If we changed the approach to training and auditing, then engineers would be more likely to comply in uploading the actual test data after the end of the test.
Exploiting Underutilized Analysis
In the way I defined the productivity of the process in Equation 6, a key component of productivity comes from the reutilization of test results to advance future initiatives. In order to do this, an improvement in the way information can be searched in the PS&R test database would be required. As it stands today, tests are simply logged by test number and by initiative, regardless of context. Contextual search is what we would need to add to the system to facilitate searching for this data and increase how much the business unit can exploit test results in the future.
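One lightweight way to add the missing contextual search is to build a simple keyword index over the stored test descriptions. The sketch below uses made-up records; the identifiers and fields are hypothetical, not the actual schema of the PS&R database.

```python
# Sketch of a contextual keyword index over PS&R test records (records are
# invented; the real database keys on test number and initiative only).
from collections import defaultdict

records = {
    "TST-0417": "foam density consumer test, new materials, stream 2",
    "TST-0532": "aesthetics panel, updated product family, packaging",
}

index = defaultdict(set)
for test_id, description in records.items():
    for word in description.replace(",", " ").split():
        index[word.lower()].add(test_id)

print(sorted(index["materials"]))  # -> ['TST-0417']
```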
Eliminating Unnecessary Motion
The component procurement process was outlined in the Gantt chart data analysis as taking up a significant amount of time. For the process we decided to study, the procurement process is outlined in Figure 13.
[Figure 13: Component Procurement Process. After the research proposal is drafted and reviewed by the R&D team, components are ordered, an experimental order is scheduled, components are made and shipped, and the product is assembled, subject to equipment and component availability. Legend: SM = Schedule Meeting; SEO = Schedule EO.]
As we can see, the process of ordering components, scheduling an experimental order,
making the components and shipping them to be assembled takes a significant amount
of time. A total of 27 work days is spent in the component procurement process. All of
this "motion" can at first glance be deemed necessary. However, in detailed discussion
with team members many of the components that projects use are similar.
An alternative approach would be to warehouse on site a large amount of various
components that could be used in testing instead of ordering them when the need
arises. This approach seems counterproductive by Lean Manufacturing standards as
having inventories is deemed a waste. However, in the Product Development Process,
the focus is not on minimizing inventories of products, but on increasing the efficiency of
producing data and information. Ordering components at hoc is costing 27 work days,
which could be eliminated if components were on site to begin with. Of course, there will
be fixed costs associated with the warehousing which would need to be compared to the
gain of 27 workdays on project timings.
This solution would not help in all cases, as it is impossible to warehouse every
component that might at some point be needed for testing (given the unpredictability of
testing requests), but for those components that are repeatedly and routinely ordered, a
warehouse and inventory approach could be used.
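A back-of-the-envelope comparison can frame this trade-off. In the sketch below, only
the 27 saved work days comes from the analysis above; the project count, the daily
fixed-asset cost, and the carrying cost are assumptions for illustration.

    # Assumed figures for illustration; only the 27 saved work days comes
    # from the procurement analysis above.
    saved_days_per_project = 27
    projects_per_year = 6            # assumed projects per year using stocked parts
    fixed_asset_cost_per_day = 2000  # assumed daily cost of idle R&D labor/facilities
    warehousing_cost_per_year = 150_000  # assumed inventory carrying cost

    annual_savings = (saved_days_per_project * projects_per_year
                      * fixed_asset_cost_per_day)
    print(f"Annual savings from avoided waiting: ${annual_savings:,}")
    print(f"Warehousing pays off: {annual_savings > warehousing_cost_per_year}")
    # -> $324,000 vs $150,000: warehousing pays off under these assumptions.

The point of the sketch is the structure of the comparison, not the numbers: the
inventory is justified whenever the value of the recovered fixed-asset days exceeds the
carrying cost.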
The Future State Value Map
Given all the solutions outlined above, I drew the future state value map. It can be
seen in detail in Figure 14. We can see in the diagram that the process is much less
complicated. The general operating principle of the new process is that information
about requirements is available at the start, and greater trust is placed on the
judgment of the responsible engineers to proceed with testing without approval
processes. A post-test audit has been established in order to maintain QA and PS&R
compliance. I am also assuming that commonly used components are now warehoused, so the
component procurement process can be bypassed altogether when components are on hand.
[Figure 14: Future State Value Stream Map. The redesigned process begins with drafting
the research proposal with an upfront understanding of management, QA and PS&R
considerations, followed by the R&D team review, joint management review, creation of
the PS&R request, a component availability check (procurement only if components are
not on hand), product assembly, quality testing, shipment to the agency, panel
recruitment, testing and results, and a post-test audit; a failed management review
sends the process back to the start. Legend: SM: Schedule Meeting; RP: Recruit
Panelists.]
Table 5 below shows some updated metrics for the Future State Value Stream.
    Activity                              Duration (work days)     Cost ($)
    Draft the research proposal                    5                   -
    Schedule an R&D Team Review                    3                   -
    R&D Team Review                                1                   -
    Schedule Joint Management Review              10                   -
    Joint Management Review                        1                   -
    Create PS&R Request                            1                   -
    Procure Components                        27 or 75       5,000 or 60,000
    Assemble Product                               5                   -
    Test Product Quality                          10                   -
    Ship Product to Agency                        10                  500
    Recruit Panel                                 10                   -
    Testing and Results                           40               50,000
    Test Documentation                             5                   -
    Test Audit                                     5                   -

Table 5: Future State Value Metrics
For procurement of components, I considered that timings and costs would vary greatly
if equipment for those components needs to be constructed.
One important metric that needs to be compared is the reduction in cycle time for the
entire process relative to the original. Table 6 compares those values assuming no
re-loop in the approval process and also assuming one re-loop in approval.
Current process cycle times (work days):

                                                  No re-loop   One re-loop
    Not requiring Director-level approval
      Not requiring equipment to be built             117          132
      Requiring equipment to be built                 176          191
    Requiring Director-level approval
      Not requiring equipment to be built             117          143
      Requiring equipment to be built                 187          213
Future state cycle times (work days):

                                                  No re-loop   One re-loop
    Not requiring approval
      Not requiring procurement of components          90            -
      Requiring procurement without equipment         117            -
      Requiring procurement with new equipment        165            -
    Requiring approval
      Not requiring procurement of components         101          121
      Requiring procurement without equipment         128          148
      Requiring procurement with new equipment        176          196
Table 6: Comparison of Cycle Time Reductions, with and without Approval Re-Loops
Overall, we can see gains in cycle time reduction depending on the situation. The
biggest gains occur in instances where we do not require equipment to be built and
where components can be found in the proposed inventory. We also see that better
up-front information should lead to fewer approval re-loops, which yields further cycle
time gains. For example, the new process has a minimum cycle time of 90 work days;
compared with today's process with one approval re-loop (common in practice) at 132
work days, that is a 32% reduction in cycle time. Of course, with the reduction in
approvals, we need to ensure that the process is working correctly and that
non-value-added tests are not being conducted.
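The scenario values in Table 6 decompose additively, which the sketch below uses as a
consistency check; the base cycle time and the procurement, equipment, approval, and
re-loop increments are read off the reconstructed tables above rather than measured
independently.

    BASE = 90        # minimum future-state cycle time (work days)
    PROCURE = 27     # added days when components must be ordered
    EQUIPMENT = 48   # further days when equipment must be built (27 + 48 = 75)
    APPROVAL = 11    # added days when management approval is required
    RELOOP = 20      # added days per approval re-loop

    def cycle_time(approval=False, procurement=False, equipment=False, reloops=0):
        """Compose a scenario's cycle time; equipment presumes procurement."""
        days = BASE
        days += PROCURE if procurement else 0
        days += EQUIPMENT if equipment else 0
        days += (APPROVAL + RELOOP * reloops) if approval else 0
        return days

    print(cycle_time())                                  # 90
    print(cycle_time(procurement=True))                  # 117
    print(cycle_time(approval=True, procurement=True,
                     equipment=True, reloops=1))         # 196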
Returning to our definition of process efficiency (Equation 6):

\[
\text{Productivity of the Process} =
\frac{\text{Projected Profits} \times a\,[p(TS)]' \;+\;
      \text{Projected Future Profits} \times a\,[p(TS)]''}
     {\text{R\&D Effort in process}}
\]

where R&D Effort in process = Expenditure in R&D (fixed and variable costs).
A reduction of cycle time will decrease R&D effort in process by reducing the
utilization of fixed R&D assets. However, we have also increased variable R&D costs by
maintaining a new inventory of components that could potentially be tested. In
addition, reducing approvals might, if not done carefully, increase the chance of tests
that do not add value. All of these elements would need to be balanced to ensure that
productivity has indeed improved.
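To see how these opposing effects might net out, here is a minimal numerical sketch of
Equation 6 before and after the redesign; every figure is an assumption for
illustration, not WCPco. data.

    # All values are illustrative assumptions, not actual WCPco. figures.
    projected_profits = 1_000_000   # risk-adjusted profits from the initiative
    future_profits = 200_000        # risk-adjusted profits from reused test data

    # Before: the longer cycle consumes more fixed-asset days per process.
    effort_before = 132 * 2000 + 50_000            # fixed assets + variable spend
    # After: shorter cycle, but an added inventory carrying cost per process.
    effort_after = 90 * 2000 + 50_000 + 25_000

    print(round((projected_profits + future_profits) / effort_before, 2))  # ~3.82
    print(round((projected_profits + future_profits) / effort_after, 2))   # ~4.71

In practice, the future-profits term should also grow as test data is reused more,
which would further favor the redesigned process.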
The recommendation to audit test documentation at the conclusion of the process will
also encourage more engineers to upload their final test reports and data. This,
together with the recommendation to improve the contextual search capabilities of the
PS&R database, will increase how much data is reused and leveraged in the future.
Ultimately, this will yield more of the expected future profits and increase
productivity.
Conclusions
I began this research journey with the goal of understanding if techniques developed to
increase the productivity of manufacturing processes could be applied in an R&D setting.
I also set out to understand how to define what R&D productivity is and if it could be
tracked, measured and improved. From this work, I can conclude the following:
R&D Productivity is not universally defined. Above all, the literature seems to stress a
practical application of the term and metrics that can be adapted to a specific need. The
work to precisely define and measure R&D productivity is ongoing and still has many
challenges ahead.
Regardless of the challenges in defining R&D productivity, trying to define it is not a
fruitless exercise. I was able to come up with a definition that was useful for what I
required and that did assist in the evaluation of recommendations. This practical
approach does yield value.
Value Stream Process Mapping, a technique pioneered by Lean Manufacturing
organizations, can be adapted, with some limitations, to the product development
process. I was able to apply this technique to the case study of Worldwide Consumer
Products Co. and redesign a process with a lower cycle time and a greater ability to
reuse test data for future purposes (pending implementation), both of which result in a
more productive process.
During this research into R&D productivity, I also concluded that there is a great
interdependency between variable costs, fixed costs and productivity. To be productive,
an R&D organization needs to extract as much value as it can from these expenses.
Variable R&D costs (in the form of spending budget) are easily visible and under the
direct control of managers, who try to maximize their return through thorough approval
processes for funding. The utilization of fixed R&D assets (labor and facilities) is
not visible in the way that variable costs are. In setting up approval processes for
variable spending, managers need to recognize that the approval process itself results
in the underutilization of fixed assets, which sit idle while waiting for funding
approvals.
When possible, approval processes should be replaced by providing greater up-front
visibility into management considerations and by giving greater responsibility and
accountability to the engineers involved in the execution of the work.
Bibliography
Brown, M. G., & Svenson, R. A. (1998). Measuring R&D Productivity. Research-Technology
Management, 30.

Cobet, A. E., & Wilson, G. A. (2002). Comparing 50 years of labor productivity in US
and foreign manufacturing. Monthly Labor Review.

Council, R. a. (2010). Enhance the Commercial Impact of Innovation Investments.
Corporate Executive Board.

Cross, R., Borgatti, S., & Parker, A. (2002). Making Invisible Work Visible: Using
Social Network Analysis to Support Strategic Collaboration. California Management
Review, 25-46.

Davidson, P. (2009, November 3). Retrieved from USA Today:
http://usatoday30.usatoday.com/money/industries/manufacturing/2009-11-01-leanmanufacturing-recessionN.htm

Economist, T. (2012, January 12). Has the ideas machine broken down?

McManus, H. L. (2005). Product Development Value Stream Mapping (PDVSM) Manual 1.0.
Cambridge, MA: Lean Aerospace Initiative.

Meyer, M. H., Tertzakian, P., & Utterback, J. M. (1997). Metrics for Managing Research
and Development in the Context of the Product Family. Management Science, 88-111.

Millard, R. L. (2001, June). Value Stream Analysis and Mapping for Product Development.
Master's Thesis in Aeronautics and Astronautics, Massachusetts Institute of Technology.

Rao, J., & Weintraub, J. (2013, Spring). How Innovative is Your Company's Culture? MIT
Sloan Management Review, pp. 29-37.

Steven, P. M., Mytelka, D. S., et al. (2010). How to Improve R&D Productivity: The
Pharmaceutical Industry's Grand Challenge. Nature Reviews, 203.

Szakonyi, R. (1994). Measuring R&D Effectiveness. Research-Technology Management.

Tellis, G. J., Prabhu, J. C., & Chandy, R. K. (2009). Radical Innovation Across
Nations: The Preeminence of Corporate Culture. Journal of Marketing, 3-23.

Tipping, J. W., Zeffren, E., & Fusfeld, A. R. (1995). Assessing the Value of Your
Technology. Research-Technology Management.

Womack, J. P., Jones, D. T., & Roos, D. (1990). The Machine that Changed the World. New
York, London, Toronto, Sydney: Free Press.