
Summary of Workshops on Integrating Systems and
Software Engineering Conducted at University of
Southern California on 30 October 2007
Introduction
Three workshops were conducted on 30 October 2007 at the University of Southern
California (USC) to identify issues and recommendations with respect to Integrating Systems
and Software Engineering (IS&SE). The overall IS&SE Workshop was sponsored by Kristen
Baldwin, the Office of the Secretary of Defense (OSD) Deputy Director of Software
Engineering and Systems Assurance. The objectives of each workshop were to:

 Identify the biggest issues and opportunities with respect to better integrating systems and software engineering
 Identify inhibitors to progress and how to overcome them
 Evaluate the ability of the Incremental Commitment Model (ICM) principles to improve the integration of systems and software engineering
 Evaluate the ability of the ICM itself to improve the integration of systems and software engineering
 Identify what else is needed (e.g., technology/management research, education and training, regulations/specifications/standards).
Each of the three workshops had a different focus:

 Technical aspects (facilitated by Rick Selby)
 Management aspects (facilitated by Art Pyster)
 Complex systems aspects (facilitated by Jo Ann Lane).
Each workshop developed a set of slides that attempted to capture the key ideas discussed in
each session. These outbriefing packages will be available in the near future at
http://csse.usc.edu. The list of attendees for each workshop may be found in Appendix A to
this report. The information in these packages is the raw data used to develop the key
findings and recommendations provided in this report. Note that the findings are based upon
the expert judgment and observations of the participants from industry and government, not
on actual studies of representative software-intensive system development projects.
Key Findings
All of the workshops indicated that a half-day session was not sufficient to do a “deep dive”
into the issues and opportunities associated with the integration of systems and software
engineering and to come up with a set of well-formulated recommendations. However, each
session made an attempt to do this. This section integrates and summarizes the key findings
from the three workshops (technical, management, and complex systems). In addition, after
the workshops were completed, Art Pyster developed several scenarios, listed in Appendix B,
that help to illustrate many of these findings.
Although there are good examples of successful integration of hardware, software, and
human factors in some past projects, there are also numerous examples of troubled projects
due to shortfalls in integrating their hardware, software, and human factors. Senior systems
engineering leadership in government, industry, and academia has been doing a
commendable job in identifying these shortfalls and their root causes, but a great deal of
improvement in current acquisition and management practices, integrated life cycle models,
processes, communications between systems and software engineering, methods, tools,
education, and training is needed to address not only current shortfalls but future trends
toward more complex, human-intensive, and software-intensive systems and systems of
systems.
Management and Personnel: Three primary management issues seem to best capture many
of the detailed issues discussed during the management workshop: 1) key acquisition
personnel do not understand how important software engineering and software technology
are in making critical decisions throughout the entire acquisition lifecycle; 2) throughout the
entire acquisition lifecycle, critical decisions are made by key acquisition personnel who lack
the necessary competencies in software engineering and software technology; and 3) too few
software engineers have an adequate combination of systems engineering background,
domain expertise, and physical sciences understanding to effectively participate in integrated
systems and software decision processes.
Some workshop attendees observed that some of the most successful software-intensive
system development projects had key individuals who, while primarily systems or software
engineers, had a good grounding in both disciplines. It was also pointed out that such
individuals are rare and that efforts are needed to encourage this “crossover” between the
two disciplines.
Integrated Life Cycle Models: Having dissimilar life cycle models for systems engineering
(single V) and software engineering (iterative, incremental, spiral) creates considerable
problems and disconnects between the two groups.
Processes, Methods, Measurements, and Tools: Current functional (owned-by) hierarchies
used in project management structures, work breakdown structures, and system architectures
are incompatible with layered (served-by) architectures better matched to most software
capabilities.
Current earned-value achievement milestones are frequently software-insensitive and create a
false sense of project progress. More generally, inadequate data is captured and analyzed on
multiple aspects of project progress (solution feasibility evidence, risk resolution progress,
system and software change and defect resolution progress), creating serious management
visibility problems and a lack of data on which to base process improvement initiatives.
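To make the “false sense of progress” concrete, here is a minimal sketch (all task names and numbers below are invented for illustration, not drawn from any workshop data) comparing cost-based earned value with a risk-exposure view of the same build:

```python
# Hypothetical illustration: cost-based earned value can overstate progress
# when the unfinished work carries most of the project's risk.
# All task names and numbers are invented for this sketch.

tasks = [
    # (name, budgeted_cost, fraction_complete, risk_probability, risk_loss)
    ("hardware drawings",        400, 1.0, 0.05, 100),
    ("interface documents",      200, 1.0, 0.10, 150),
    ("COTS integration glue",    150, 0.2, 0.60, 800),
    ("safety-critical software", 250, 0.1, 0.50, 900),
]

budget = sum(cost for _, cost, _, _, _ in tasks)
earned = sum(cost * done for _, cost, done, _, _ in tasks)

# Risk exposure = probability of loss x size of loss; treat the completed
# fraction of each task as exposure already retired.
total_re   = sum(p * loss for *_, p, loss in tasks)
retired_re = sum(p * loss * done for _, _, done, p, loss in tasks)

print(f"cost-based EV:       {100 * earned / budget:.0f}% complete")
print(f"risk-exposure basis: {100 * retired_re / total_re:.0f}% of exposure retired")
```

In this invented example, cost-based earned value reports roughly two-thirds complete while well under a quarter of the total risk exposure has been retired, because the unfinished software tasks carry most of the risk.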
Systems and software integration attempts have been primarily driven by software
engineering (using a “push” approach). However, this is often not “pulled” into the systems
engineering processes and, as a result, not much has yet been accomplished.
There were some discussions on the investigation of commercial best practices, but it was
pointed out that for commercial development, there are different incentives and
contractual/acquisition methods that make it difficult to adopt commercial practices for DoD
systems.
There are also innovative system development approaches currently emerging in the
European community and these may make some of today’s systems engineering activities
obsolete.
Once mechanisms are in place to facilitate the actual integration of the systems and software
engineering activities, these mechanisms need to be reflected in the project
planning/monitoring/measurement activities.
Communications Between Systems and Software Engineering: There seems to be a
considerable lack of understanding by many systems engineering personnel that software
engineering provides an essential technology for system integration and evolution. Key
system decisions are made without the participation of experienced software people. In
addition, experienced systems engineers do not stay with a project throughout the full life
cycle and therefore are not available to explain key decisions or appropriately adjust
decisions as problems arise. As a result, systems and software engineering each
misunderstand the other’s role and the impacts of key decisions. The teams need to
understand that systems engineering is more than defining fixed interfaces to appropriate
levels of detail; they need to share ownership of problems (e.g., system architecture issues)
and to focus more on the system mission(s), orchestrating for desired global behaviors and
required scalability.
Recommendations
Many recommendations for better integrating systems and software engineering were
discussed within each of the workshops and captured in the workshop outbriefing
packages. This section integrates and summarizes the key recommendations from these
workshops (technical, management, and complex systems). These key recommendations are
organized by theme. Due to a lack of time, no priorities were assigned across the classes of
recommendations.
Management and Personnel
Develop education, training, mentoring, career pathing, and incentives to create more
software-capable systems engineers and systems-capable software engineers. Create and
retain as many “software-intensive system engineers” as possible. A primary program
viability review criterion for software-intensive systems should be the adequacy of the
staffing by such personnel committed to the program for both the acquirer and the developer.
Some initial steps are to determine the validity of the three primary management issues
through the analysis of some representative MDAP programs. Based on findings of the
analysis, provide new guidance on a) the qualifications of key staff for MDAP programs, b)
the processes those key staff use, and c) ways to grow key staff through such means as
education at the Defense Acquisition University (DAU) and elsewhere.
Integrated Life Cycle Models
An incremental, iterative approach is critical to system and software engineering
integration. It provides the systems engineer with the opportunity to experiment (and fail), in
order to better understand feasibility, and provides more frequent opportunities for systems
and software engineering touch-points. The principles underlying the Incremental
Commitment Model (ICM) provide strong support for this approach. Initiatives should be
established to supplement and/or reinterpret current approaches with these principles, and to
proceed to refine, tailor, pilot and explore the further refinement and more general use of the
ICM. Examples of areas needing tailoring and refinement are provided in the ICM questions
generated at the Workshop and their current answers provided in Appendix C.
Processes, Methods, Measurements, and Tools
 Strongly support current initiatives to better integrate systems and software aspects of system architectures, work breakdown structures, management structures, and earned value systems.
 Define integrated measures for systems and software engineering.
 Capture lessons learned to avoid mistakes and repeat successes.
 Define a common core set of metrics (common across the acquisition organization, prime contractor, and significant subcontractors) for software development that is visible at the system level (build on the ASN RDA initiative).
 Strongly support the initiatives on cost and risk assessment, quality attribute measurement and analysis, and improved requirements engineering capabilities established at the recent DoD Software Acquisition Workshop.
 Augment the current Technology Readiness Assessment (TRA) Deskbook with a second set of software technology readiness levels (STRLs).
 In general, prototype and pilot proposed process, measurement, and other changes, and determine their domain of applicability before establishing them as recommended or required practices.
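As one minimal sketch of what such a common core could look like (the record fields below are illustrative assumptions, not the ASN RDA definitions), a single report shape that acquirer, prime, and subcontractors all populate makes system-level rollup straightforward:

```python
from dataclasses import dataclass

# Illustrative sketch only: these field names are assumptions, not the
# ASN RDA metric definitions. The point is one record shape that the
# acquirer, prime, and subcontractors all report against.

@dataclass
class SoftwareStatus:
    organization: str        # acquirer, prime, or subcontractor
    build: str
    sloc_planned: int
    sloc_integrated: int
    defects_open: int
    defects_closed: int
    risks_open: int

    def integration_progress(self) -> float:
        return self.sloc_integrated / self.sloc_planned

def system_rollup(reports: list) -> float:
    """Roll subcontractor and prime reports up to one system-level figure."""
    planned = sum(r.sloc_planned for r in reports)
    done = sum(r.sloc_integrated for r in reports)
    return done / planned

reports = [
    SoftwareStatus("prime", "B2", 200_000, 150_000, 40, 310, 6),
    SoftwareStatus("sub-A", "B2", 80_000, 20_000, 25, 90, 9),
]
print(f"system-level integration: {100 * system_rollup(reports):.0f}%")
```

The design choice worth noting is that the rollup is only meaningful because every organization reports against the same fields with the same definitions; that is exactly what a "common core" metric set buys.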
Communications Between Systems and Software Engineering
 Develop a sharable representation and language between systems engineering and software engineering (shared models/artifacts, not just data dictionaries or boundary objects).
 View development as collaboration among system elements, not traditional decomposition into functions.
 Develop better methods to support explicit tradeoffs among different partitionings of the system and different levels of resolution, in order to support better reconciliation between systems engineering and software engineering.
Systems and Software Engineering Research
Communicate to DDR&E and related efforts such as the NRC Software Producibility
Study the importance of further research in the areas above, particularly in the integration of
systems and software engineering capabilities (a key determinant of software producibility),
and the scalability and extendibility of current software engineering capabilities needed for
ultralarge software-intensive systems requiring simultaneous high assurance and rapid
adaptability to unforeseen change.
 Continue to support DoD initiatives to fund research centers for systems and software engineering to help address the issues surrounding IS&SE and to develop better-formulated approaches to resolving these issues.
 Further investigate innovative complex/adaptive systems engineering approaches emerging in the European community for applicability to DoD system development problems.
APPENDIX A
List of Workshop Attendees
Technical Aspects:
Rick Selby, NGC/USC (Session Facilitator)
Anthony Peterson, Raytheon
Karen Owens, Aerospace Corporation
George Huling, LA Spin
JinBo Chen, NGC
Wilson Rosa, Air Force Cost Analysis Agency
Rich Turner, Stevens Institute of Technology
John Miller, MITRE
Elliot Axelband, USC/RAND
Jared Fortune, Aerospace/USC
Shawn Rahmani, Boeing
Daniel Winton, Aerospace
Winsor Brown, USC
Sue Koolmanojwong, USC (Scribe)
Management Aspects:
Art Pyster, Stevens (Session Facilitator)
William Bail, MITRE
Jim Cain, BAE Systems
Stan Settles, USC
Brandon Gautney, Dynetics
Brian Gallagher, SEI
Don Reifer, Reifer Consultants
David Seaver, Price System
Chris McCauley, EMsolve
Stuart Glickman, LM
Dewitt Latimer, USC (Scribe)
Complex Systems:
Jo Ann Lane, USC (Session Facilitator)
John Rieff, Raytheon
Tom Schroeder, BAE Systems
Kirstie Bellman, Aerospace
Bruce Amato, OSD
Bruce Kassan, exos Services
Barry Boehm, USC
Lee Whitt, NGC
Darrell Maxwell, Navy Rep
Steven Wong, NGC
Ali Nikolai, SAIC
Cynthia Nikolai, UND
Dan Ingold, USC (Scribe)
APPENDIX B
Scenarios Illustrating Problems When Systems and
Software Engineering Are Not Well-Integrated
Scenario #1: Missing Software Engineers
1. The Acquisition Strategy and Analysis of Alternatives assume a large degree of reuse
of an existing software component, or use of a COTS software package, or reuse of a
hardware component whose behavior is driven largely by embedded software.
2. Software engineers with the necessary domain knowledge are not actively engaged in
developing the acquisition strategy or performing the analysis of alternatives.
3. The assumptions about how much reuse is possible, about the suitability of the COTS
software package, or about how difficult it will be to modify the reused components
turn out to be false.
4. Program cost and schedule are negatively impacted and possibly performance as well.
Scenario #2: Technical Naiveté
1. A Service Oriented Architecture (SOA) is selected as the technical approach for a
mission planning system to allow for the greatest flexibility in adding new types of
analyses and new sources of information after initial deployment.
2. The PM and Chief Engineer worked on a software-intensive program before, but were
never deeply immersed in software engineering and software technologies. They
believe they know more about software than they actually do.
3. The PM and Chief Engineer do not appreciate the policy, performance, security, and
other “ility” challenges inherent in current SOA technology.
4. Because they lack insight into the technical challenges of SOA, the PM and Chief
Engineer do not involve software engineers that have the appropriate SOA skills to
help them address those challenges.
5. The PM and Chief Engineer build a schedule and budget that do not reflect the
engineering work necessary to achieve the “ility” related KPPs.
6. The program appears to make excellent progress until integration reveals the
engineering challenges in realizing the “ility” related KPPs.
Scenario #3: Lost Opportunity
1. A platform program requires extensive multi-year hardware development, but much
of the functionality and value of the platform will be realized through software.
2. The PM and Chief Engineer do not appreciate the opportunity to continually
demonstrate and refine software and system capability by integrating software with
emulated hardware.
3. The PM and Chief Engineer build a schedule that delays software development,
depriving them of an opportunity to gain invaluable insight into system performance,
human factors, algorithm performance, and hardware/software integration mismatches
as well as to build a stronger relationship with the customer through continual visible
demonstration of progress.
Scenario #4: Unrealistic Requirements
1. The concept for a communications system identifies very demanding non-functional
requirements, such as security and availability.
2. The ability to deliver those non-functional requirements will be driven largely by
software.
3. The implementation of those non-functional requirements will not fit within the
anticipated cost and schedule, but the software engineers who could help determine
the true cost and schedule implications of those requirements are left off the team that
develops the RFP.
4. The acquisition proceeds with infeasible cost and schedule, which is recognized later
in the development.
APPENDIX C
ICM Questions and Answers
During the Workshop, Rich Turner identified a number of significant questions about
the Incremental Commitment Model (ICM) and compiled others specifically raised at the
Workshop. The summary answers below identify which issues appear to be covered by the
current version of the ICM, and which appear to need further attention, prototyping, and
piloting to ensure the ICM’s ability to support general use.
1) What about long-lead hardware? How to address risk of delay or erroneous/unusable
specification?
Risk of delay for system elements with long-lead hardware would be one of the key
risks assessed and traded off vs. performance, etc., in the ICM Definition Stage.
Longer Development Stage increments for such elements would be synchronized with
shorter increments for less-constrained elements.
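The synchronization idea can be sketched with a toy calculation (the increment lengths are invented for illustration): long-lead hardware elements run longer increments, and natural touch-points fall wherever the two increment cycles end together.

```python
from math import gcd

# Toy sketch of synchronizing increment cycles; the lengths (in months)
# are invented. A long-lead hardware element runs longer increments than
# a software element; sync points fall where both boundaries align.

hw_increment = 9   # months per long-lead hardware increment
sw_increment = 3   # months per software increment

def sync_points(horizon_months: int) -> list:
    """Months within the horizon where both increment cycles end together."""
    period = hw_increment * sw_increment // gcd(hw_increment, sw_increment)
    return list(range(period, horizon_months + 1, period))

print(sync_points(36))  # → [9, 18, 27, 36]
```

At each of those boundaries, the long-lead element and the less-constrained elements can rebaseline together, which is the synchronization the answer above describes.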
2) Within each phase, are the increments still on the order of weeks? Delivery cycles
longer, but demonstrable progress at each increment?
The right-hand column in the “Common Risk-Driven Special Cases of the ICM”
decision table provides suggested lengths of builds and delivery increments for the
various special cases. The numbers are intended to be a stimulus to and not a
substitute for thought.
3) Where is hardware-software-wetware allocation made? How are COTS, GOTS,
legacy constraints addressed?
It is done concurrently in each phase of the Development Stage, and in the Agile
Rebaselining activity in each Development Stage increment. Deferring detailed
commitments in areas of high uncertainty and risk is preferable to making premature
commitments. Addressing COTS, GOTS, and legacy constraints is part of the
concurrent tradeoff, risk, and stakeholder satisficing process.
4) How to account for changing budget priorities/allocations?
Budget cuts are a risk that can be best handled by cost-boxing: including and
architecting for deferrable software features that can be postponed when necessary,
as in row 5 of the “Common Risk-Driven Special Cases of the ICM” decision table.
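A minimal sketch of cost-boxing (the feature names, costs, and priorities below are invented): if the deferrable features are architected as separable, a budget cut simply drops the lowest-priority items rather than forcing a replan.

```python
# Minimal cost-boxing sketch; feature names, costs, and priorities are
# invented. Features are architected as deferrable, so a budget cut drops
# the lowest-priority items cleanly.

features = [
    # (name, cost, priority: lower number = more essential)
    ("core messaging",   50, 1),
    ("crypto upgrade",   30, 1),
    ("map overlays",     25, 2),
    ("report generator", 20, 3),
    ("theming",          10, 4),
]

def fit_cost_box(features, budget):
    """Keep features in priority order until the cost box is full."""
    selected, spent = [], 0
    for name, cost, _ in sorted(features, key=lambda f: f[2]):
        if spent + cost <= budget:
            selected.append(name)
            spent += cost
    return selected

print(fit_cost_box(features, budget=110))  # after a cut from 135 to 110
```

With the invented numbers, a cut from 135 to 110 drops the two lowest-priority features and nothing else, which is the behavior the decision table's row 5 case is aiming for.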
5) Can portfolio management be adopted within this concept?
Portfolio management is addressed in row 9 (Family of Systems) of the “Common
Risk-Driven Special Cases of the ICM” decision table.
6) Macro scale seems great, but it’s the details that are scary. Need an example with real
activities (maybe several) that shows the flexibility this approach gives at working
level.
Agreed. The largest (1M SLOC; 3 HW installations) well-documented example I
know of is still CCPDS-R in Royce’s Software Project Management book. It would be
valuable to get a large commercial supply chain success story such as Wal-Mart,
Dell, or FedEx; or maybe more details of some of the CrossTalk Top-5 success
stories.
7) Who manages the stakeholders (e.g., identifies them, engages them, deals with
volatility, facilitates the satisficing)? How is it funded? How does this fit into existing OSD/Service
acquisition infrastructure? Are current organizations flexible enough to implement
this?
The closer one can get to the ideal OneTeam concept of Govt. personnel, LSIs,
primes, subs, and interoperators; the better. Some aspects like interagency MOAs
need to be done by the Govt. personnel; others are better done at lower levels. Some
systems of systems may just be too hard for this, such as a DHS integrated crisis
management system.
8) Stakeholders must include Congress, industrial health and competition (e.g. helicopter
manufacturer viability), public support, Service acceptance and needs
Agreed. These are real-world constraints that need to be satisficed, but without
sacrificing other real-world constraints such as budget and schedule achievability, and
by basing commitments on evidence vs. wishful thinking. For a recent example, see
http://www.nytimes.com/2007/11/11/washington/11satellite.html?_r=1&hp=&adxnnl=1&oref=slogin&adxnnlx=1194811545-snd0MbJZMOVEGrIl94Kh2A#step1
9) Major technical issues of architecture, assurance, and quality attributes trades. How
are these handled in the FR?
By developing best-possible feasibility rationales within need constraints (e.g., rapid-
response schedules), identifying risks due to shortfalls in the evidence, providing best-
possible risk management plans for the risks, and having success-critical stakeholders
determine whether to commit to go forward based on the risks vs. needs.
10) How to manage/budget/defend event-driven milestones
An example plan for managing and budgeting Evidence-driven milestones is
appended at the end of this document. The best defense is probably to cite the
consequences of not doing it, as in the response to question 8.
11) How to budget for evidence development
It will necessarily involve engineering judgment. The COSYSMO cost drivers can help
scope and rationalize the costs.
12) Artifacts/methods for anticipating change, contingency planning
Keeping inside your adversary’s OODA loop requires investments in Observing and
Orienting. These would include various forms of risk-driven technology, COTS,
marketplace, and adversary intelligence, surveillance, and reconnaissance activities
(i.e., a C4ISR model vs. a purchasing-agent model of system acquisition), and proactive efforts to influence technology trends (e.g., via research support, COTS user-group and standards-group participation, etc.).
13) Ways to continuously capture stakeholder and sys/sw/hf needs/win conditions that
allow satisficing
This is another tricky issue of balance between being open for all inputs vs. setting
thresholds and rapidly filtering out off-the-wall suggestions. More research is needed
in use of search engines, selective information push, and social networking
technology. We are now experimenting with a wiki version of our win-win satisficing
tool.
14) How to measure trust in a team (use psycho-social measures?)
Some work has been done in this area, but more needs to be done, particularly in a
government-acquisition context. See the “soft gauges” article in this month’s (Nov.
2007) CrossTalk.
15) Is ICM economically viable for contractors?
Several things would make it more so: higher fees for participating in competitive
downselects, consolation prizes such as IV&V contracts for nonselected competitors,
multiple contract types with award fees based on things like maintainability of build-to-spec increments and adaptability of agile-rebaselining teams (cf. the Reifer-Boehm Award Fee Plan paper in the 2006 Defense Acquisition Review Journal).
16) How is the number of contracting activities managed to reduce the effort required?
Tailoring separate contracts for the three-team Development and Operations Stage of
the ICM would make things incrementally better, but a great deal of experimentation
and re-education would be required to get to a really efficient operating point.
17) How to handle changing/emergent requirements given that the anchor point requires
demonstration that within the architecture “requirements” and “ops concept” can be
met?
The best that one can do about this is to treat these sources of uncertainty as risks,
develop risk management plans (including budgets, as in the OODA loop/ISR
discussion in question 12), and let stakeholders decide whether to commit to them
based on their risk/reward ratios.
18) Need budget example of where $$ are needed beyond traditional means (sort of like
the UAV example)
Agreed. The UAV example could be expanded upon.
19) ICM seems to require the frequent making of difficult decisions that in older models
are consistently deferred…
Agreed. Consistent deferral tends to lead to examples like the one in Question 8.
20) Tom: Feasibility rationale is key – prototypes, benchmarks, exercises, early working
versions – drive risks down. Finding independent experts is difficult. Must be really
convincing.
Agreed. DoD (and its components) need to grow and operate pools of experts such as
AT&T did for their Architecture Review Boards. Dave Castellano’s pool of
assessment people would be a good start.
21) Need better description of how ICM will impact technical work, especially integrating
software and systems engineering
Agreed. We are in the process of migrating our small-project Lean MBASE
guidelines into large project ICM guidelines. This will take a good deal of work (Rich
– you could help a good deal here).
22) Is ICM an inseparable process, or can certain parts be deployed as practices within
existing processes? (Analogous to agile method question) For example, is feasibility
rationale applicable on its own merits?
Definitely separable. One of our top near term recommendations is to strengthen
traditional SRRs and PDRs by adding Feasibility Rationales to them.
23) Are there a set of invariants (beyond the principles) for the ICM?
Here’s a start (needs more thought): (1) Identification, satisficing, and incremental
commitment of success-critical stakeholders; (2) Anchor point milestones and
feasibility rationales (implies incremental/evolutionary system definition,
development, and operation); (3) concurrent determination of processes,
requirements, and hardware/software/human factor solutions; (4) risk-driven degree
of detail of processes and products.
24) Requires a culture that admits risk is part of management, and that eliminating risk is
improbable if not impossible
Agreed. For some managers, mentioning risk to them is risky, but for those
managers, denying risk is even more risky.
25) Feasibility rationales are essentially independent of milestone definitions – is the set of
milestones provided a one-size-fits-all set, or can different milestones be defined based
on program context and risk?
Particularly for systems of multi-owner, multi-timescale, independently evolving
systems, there will have to be specially-tailored milestones. The principles and
invariants should help guide the tailoring.
26) Practical guidance on the development and use of feasibility rationale must be
provided.
Agreed. Besides the diagram below, I’ve appended a counterpart set of criteria and
tasks that JoAnn and I came up with that will help here. Also we need to rework the
FR DID that is now part of the FCS SDP.
27) Could/should FRs be developed in the same way as safety or assurance cases? Can the
guidance for those support FR development? Could the FR be seen as an über-case
that organizes and gathers the information from both system and software specific
attribute cases as well as the business and stakeholder satisfaction cases?
The über-case perspective fits our current FR document organization around
stakeholder and business feasibility, product feasibility, process feasibility, and
resulting risk identification.
28) MGT Group: There is a lack of an incremental demonstration paradigm; we should be
demonstrating capability in iterations of months, not years.
This will vary by class of system as in the “Common Risk-Driven Special Cases of the
ICM” decision table. See answer to Question 2.
29) MGT Group: The ops concept, complete with non-functional (performance, security,
all the -ilities) requirements, is not done adequately.
The OCD and other guidelines that we will start from in getting these and other
things done adequately are at
http://greenbay.usc.edu/csci577/fall2007/site/guidelines/LeanMBASE_Guidelines_V1.9.pdf
These will give you a feel for what is and isn’t covered; -ilities are covered in the
OCD and SSRD, and emphasize Desired and Acceptable levels of service, in order
to provide a trade space for developers to work within.
30) MGT Group: Milestone exit criteria underemphasize satisfaction of the ops concept
and performance requirements.
Agree that these need more work.
This top-level plan identifies the major roles, responsibilities, activities, and milestones
involved in creating a successful system of systems (SoS) Life Cycle Architecture Package
evaluation.
LCA Feasibility Rationale Assessment Criteria
Feasibility Rationale: What is needed to assure feasibility?
 Results of prototypes, models, simulations, benchmarks, analyses, early products,
previous experience
 Representative operational scenarios
 Based on achieved/achievable performance parameters
 Using sufficiently mature technology of the desired scale
 Under realistic environmental assumptions
Questions to answer in assessing ability to demonstrate feasibility:
 What has been done so far?
 What is the rate of progress toward achievement?
 What are some intermediate achievable benchmarks?
 What further needs to be done?
o Concise risk mitigation plans for key uncertainty areas
o Why-What-When-Who-Where-How-How Much
o Including benchmarks
o Including required resources
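The assessment questions above can be reduced to a simple check: any criterion without supporting evidence becomes an identified risk to be covered by a mitigation plan before the LCA Review. The sketch below paraphrases the criteria, and the evidence entries are invented for illustration.

```python
# Sketch of turning the feasibility-evidence checklist into identified
# risks: any criterion lacking evidence gets flagged for a mitigation
# plan. The evidence entries are invented for illustration.

criteria = [
    "representative operational scenarios",
    "achieved/achievable performance parameters",
    "sufficiently mature technology at the desired scale",
    "realistic environmental assumptions",
]

evidence = {
    "representative operational scenarios": "prototype exercised on 3 scenarios",
    "achieved/achievable performance parameters": "benchmark results, build 2",
    # no evidence yet for technology maturity or environment realism
}

def shortfalls(criteria, evidence):
    """Criteria without evidence are shortfalls to be covered by risk
    mitigation plans before the LCA Review."""
    return [c for c in criteria if c not in evidence]

for risk in shortfalls(criteria, evidence):
    print(f"risk: no feasibility evidence for {risk!r}")
```

This mirrors step I below: shortfalls are identified as risks and must be covered by mitigation plans before proceeding to the LCA Review.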
**********************************************************************
Steps for Developing an LCA Feasibility Rationale
Each organization should address these at every level of the supplier hierarchy. The steps
are denoted by letters rather than numbers to indicate that they are done concurrently.
A. Develop LCA Package components
-- OCD, SSRD, SSAD, LCP, prototypes, high-risk elements
B. Determine most critical feasibility assurance issues
-- Issues for which lack of feasibility evidence is program-critical
C. Evaluate feasibility assessment options
-- Cost-effectiveness, risk reduction leverage/ROI, rework avoidance
-- Tool, data, scenario availability
D. Select options, develop feasibility assessment plans
E. Prepare FR assessment plans and earned value milestones
-- Try to relate earned value to risk-exposure avoided rather than
budgeted cost
F. Begin monitoring progress with respect to plans
-- Also monitor project/technology/objectives changes and adapt plans
G. Prepare evidence-generation enablers
-- Assessment criteria
-- Parametric models, parameter values, bases of estimate
-- COTS assessment criteria and plans
-- Benchmarking candidates, test cases
-- Prototypes/simulations, evaluation plans, subjects, and scenarios
-- Instrumentation, data analysis capabilities
H. Perform pilot assessments; evaluate and iterate plans and enablers
I. Assess readiness for LCA Review
-- Shortfalls identified as risks and covered by risk mitigation plans
-- Proceed to LCA Review if ready
J. Hold LCA Review when ready; adjust plans based on review outcomes
K. Celebrate passage of successful LCA Review