University of Southern California
Center for Systems and Software Engineering

Integrating Software and Systems Engineering (IS&SE): Study Results to Date

Barry Boehm and Jo Ann Lane
University of Southern California
October 29, 2007
©USC-CSSE

Outline
• Study and Workshop Scope and Context
• IS&SE Assessment: Technical Factors
• IS&SE Assessment: Management Factors
• IS&SE Assessment: Complex Systems Challenges
• Incremental Commitment Model (ICM) Content and Assessment
• Conclusions and Recommendations

Project Summary: Integrating Systems and Software Engineering (IS&SE)
• Statement of Work
– Evaluate current DoD SysE and SwE processes and standards
• Identify deltas, issue areas, recommendations
– Evaluate ICM for integration applicability
– Evaluate relevant DoD guidance, education, and training
• Provide recommendations based on evaluations
• Focus on addressing future DoD S&SE challenges
• Key Milestones
– October 19: Draft Results to Date briefing
– October 29-30: USC Workshop
– December 31: Final Report

Future DoD S&SE Challenges
• Multi-owner, multi-mission systems of systems
• Rapid pace of change
• Emergence and human-intensiveness
• Reused components
• Always-on, never-fail systems
• Succeeding at all of these on short schedules
• Need to turn within adversaries' OODA loop
– Observe, orient, decide, act
• Within asymmetric adversary constraints

Asymmetric Conflict and OODA Loops
• Adversary
– Picks time and place
– Little to lose
– Lightweight, simple systems and processes
– Can reuse anything
• Defender
– Ready for anything
– Much to lose
– More heavy, complex systems and processes
– Reuse requires trust
[Diagram: OODA cycle – Observe new/updated objectives, constraints, alternatives; Orient with respect to stakeholders' priorities, feasibility, risks; Decide on next-cycle capabilities, architecture upgrades, plans; Act on plans, specifications; Life Cycle Architecture Milestone for each cycle]

Software Can Help DoD Systems Adapt to Change, but Subject to Future Challenges…
[Chart: share of system functionality provided by software, rising steadily from the F-4 (1960) and A-7 (1964) through the F-111, F-15, F-16, and B-2 to the F/A-22 (2000); Ref: Defense Systems Management College]
• Multi-year delays associated with software and system stability
• Software and testing delays push costs above Congressional ceiling

Rapid Change: Ripple Effects of Changes Across Complex Systems of Systems: Breadth, Depth, and Length
[Chart: Breadth spans Platform 1 … Platform N, infrastructure, and C4ISR; Length spans versions 1.0 (2008) through 5.0 (2016); Depth spans C4ISR, Situation Assessment, Info Fusion, Sensor Data Management, Sensor Data Integration, Sensors, Sensor Components]
Legend: DOTMLPF – Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities

Average Change Processing Time: 2 Systems of Systems
[Bar chart: average workdays to process changes, on a 0–160 scale, increasing from changes within groups, to changes across contract groups, to mods]

IS&SE Workshop Objectives: Identify
• Biggest issues and opportunities?
– In technology
– In management
– In complex systems of systems
• Inhibitors to progress and how to overcome them?
• Ability of six principles to improve IS&SE?
• Ability of Incremental Commitment Model to improve IS&SE?
• What else is needed?
– Technology/management research
– Education and training
– Regulations, specifications and standards
– Other?

IS&SE Assessment: Technical and Management Factors
• Implications of differing phenomenology
– Hardware, software, human factors
– Integrating systems and software architecture
– Economic aspects
• Current IS&SE Management Guidance
– Examples:
• Systems Engineering Plan Preparation Guide
• Defense Acquisition Guide Chapter 4 (SysE)
• DoDI 5000.2
– Shortfalls in current guidance

Underlying HwE, SwE, HFE Differences
• Major life-cycle cost source
– Hardware: development, manufacturing
– Software: life-cycle evolution
– Human factors: training and operations labor
• Ease of changes
– Hardware: generally difficult
– Software: good within architectural framework
– Human factors: very good, but people-dependent
• Nature of changes
– Hardware: manual, labor-intensive, expensive
– Software: electronic, inexpensive
– Human factors: need personnel retraining; can be expensive
• User-tailorability
– Hardware: generally difficult, limited options
– Software: technically easy; mission-driven
– Human factors: technically easy; mission-driven
• Indivisibility
– Hardware: inflexible lower limit
– Software: flexible lower limit
– Human factors: smaller increments easier to introduce
• Underlying science
– Hardware: physics, chemistry, continuous mathematics
– Software: discrete mathematics, linguistics
– Human factors: behavioral sciences
• Testing
– Hardware: by test organization; much analytic continuity
– Software: by test organization; little analytic continuity
– Human factors: by users

Implications for Integrating SysE and SwE: Current SysE Guidelines Emphasize Hardware Issues
• Focus on early hardware decisions may lead to
– Selecting hardware components with incompatible software
– Inadequate hardware support for software functionality
– Inadequate human operator capability
– Late start of software development
• Difficulty of hardware changes may lead to
– High rate of change traffic assigned to software without addressing critical-path risks
• Indivisibility may lead to single-increment system acquisition
• Different test phenomena may lead to inadequate budget and schedule for testing software and human factors

System/Software Architecture Mismatches (Maier, 2006)
• System hierarchy
– Part-of relationships; no shared parts
– Function-centric; single data dictionary
– Interface dataflows
– Static functional-physical allocation
• Software hierarchy
– Uses relationships; layered multi-access
– Data-centric; class-object data relations
– Interface protocols; concurrency challenges
– Dynamic functional-physical migration

Examples of Architecture Mismatches
• Fractionated, incompatible sensor data management
[Diagram: Sensor 1 … Sensor n, each feeding its own sensor data management system, SDMS1 … SDMSn]
• "Touch football" interface definition earned value
– Full earned value taken for defining interface dataflow
– No earned value left for defining interface dynamics
• Joining/leaving network, publish-subscribe, interrupt handling, security protocols, exception handling, mode transitions
– Result: all-green EVMS turns red in integration

Effect of Software Underrepresentation
• Software risks discovered too late
• Slow, buggy change management
• Recent large project reorganization
[Diagram: original WBS-based organization – PM over C4ISR, Sys Engr, Sensors, Networks, WMI, and Platforms, with software buried one level down in each branch]

Software Development Schedule Trends
#Years ≈ 0.4 × cube root (KSLOC)
[Log-log chart: years to develop software and hardware vs. size in thousands of source lines of code (KSLOC), from 10 to 100,000]

How Much Architecting is Enough?
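As an aside, the schedule-trend rule of thumb above (#Years ≈ 0.4 × cube root of KSLOC) is easy to check numerically. This snippet is our own illustration, not part of the deck; the function name is invented.

```python
def sw_schedule_years(ksloc: float) -> float:
    """Rule of thumb from the schedule-trends chart: years ~ 0.4 * KSLOC^(1/3)."""
    return 0.4 * ksloc ** (1.0 / 3.0)

# Cube-number sizes give round answers:
# 1,000 KSLOC -> 0.4 * 10 = 4.0 years; 8,000 KSLOC -> 0.4 * 20 = 8.0 years
for size in (10, 100, 1000, 8000):
    print(f"{size:>6} KSLOC: {sw_schedule_years(size):.1f} years")
```

Note how weak the size dependence is: a hundredfold growth in code size (100 to 10,000 KSLOC) less than quintuples the nominal calendar time, which is why large projects turn to incremental development rather than longer schedules.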
[Chart: percent of time added to the overall schedule vs. percent of time added for initial architecture and risk resolution (the COCOMO II RESL factor), for 10, 100, and 10,000 KSLOC projects; added architecting time plus added rework time yields a total with a "sweet spot" minimum, which lies further right for larger projects]
• Sweet spot drivers
– Rapid change: moves the sweet spot leftward
– High assurance: moves the sweet spot rightward

Review of SysE Plan (SEP) Guidelines: Advances Over Previous Approaches
• Tailoring to milestones A, B, C (all)
• Better linkages to acquisition, program management (A2.5, 3, 6)
• More explicit focus on risk management (A4.3, 6.3)
• More explicit focus on Technology Readiness Levels (A2.3, 4.3)
• More explicit addressal of full range of stakeholders (A3.2-5, 5.4)
• Addressal of Family/Systems of Systems considerations (A1.1, 3.5, B, C)
• Focus on critical technologies, fallback strategies (A2.3)
• Focus on event-based vs. schedule-based milestones (A5.1)
• Identification of IPT critical success factors (A3.2, 3.3, 3.4)

SEP Guidelines: Risky Assumptions I
Sometimes OK for hardware; generally not for software
• An omniscient authority pre-establishes the requirements, preferred system concept, etc. (A4.1, 4.2, 4.4)
• Technical solutions are mapped and verified with respect to these (A4.1, 4.2, 4.4)
• They and technology do not change very often (A4.2, 4.5, 6.2)
– Emphasis on rigor vs. adaptability
• The program has stable and controllable external interfaces (A4.5)
• MRLs and TRLs exist independent of program scale and complexity (A2.3, 4.3, 4.5)
• Systems do not include humans (Pref., A4.4, 3.2)
– Emphasis on material requirements, solutions (A3.1, 3.2, 6.4, 6.5)
• Confusion in satisfying users vs. all stakeholders (A2.1, 3.4)

SEP Guidelines: Risky Assumptions II
Sometimes OK for hardware; generally not for software
• Project organization is function-hierarchical and WBS-oriented (A3.1, 3.2)
• Contractual, MOA arrangements are separable from technical issues (A3.5, 6.3)
• All requirements are equally important (A4.2)
• Reviews event/product-based vs. evidence-based (A5.2)
• Producibility is important for manufacturing but not software (A6.1, in 6.4)
• The program's critical path can be accurately identified up front (A6.1)
• Program managers only make decisions at review points (A6.2)
• One can achieve optimum readiness at minimum life-cycle cost (A6.5)
– No slack for contingencies
• Most importantly, overfocus on technology maturity (A4)
– Many more sources of risk in OSD/AT&L root cause analysis

DAG Chapter 4 SysE Evaluation: Summary
(Details in backup, CD charts)
• Defense Acquisition Guide Chapter 4 strengths
– Major improvement over previous sequential, reductionist approach
– Addressal of systems of systems
– Good emphasis on early V&V, trade spaces, thorough upfront effort
• Shortfalls
– Still some sequential, reductionist, hardware holdovers: complete requirements, top-down decomposition, RAM
– Oversimplified addressal of systems of systems
• Better in SoS SysE Guidebook
– Reluctance to reinterpret related regulations, specifications, and standards
• Inheritance of conflicts (people internal/external to system)
– Minimal addressal of change analysis, external systems evolution, SysE support of acquisition management

Review of Current DoDI 5000.2 (2003)
• Strengths
– Good set of commitment milestones
– Emphasizes evolutionary and incremental acquisition
• Including technology development strategy for next increment
– Emphasizes maturing technology before committing to develop
– Spiral development with risk management, demos, user feedback
• Shortfalls
– Underemphasizes pre-Milestone B risk management
• Covers technology maturity, but not other feasibility issues: requirements/architecture, plans, cost-schedule, staffing, acquisition, contracting, operational concept feasibility
– Needs updating for recent issues
• Systems of systems, time-defined acquisition, COTS-based systems, multi-timeline systems

Milestone B Focus on Technology Maturity Misses Most OSD/AT&L Systemic Root Causes
1. Technical process (35 instances) – V&V, integration, modeling & simulation
2. Management process (31)
3. Acquisition practices (26)
4. Requirements process (25)
5. Competing priorities (23)
6. Lack of appropriate staff (23)
7. Ineffective organization (22)
8. Ineffective communication (21)
9. Program realism (21)
10. Contract structure (20)
• Can address these via evidence-based Milestone B exit criteria
– Technology Development Strategy
– Capability Development Document
– Evidence of affordability, KPP satisfaction, program achievability

Why Look at SoSs and Other Large Complex Systems
• Many traditional systems engineers believe that SE is pretty much the same in these domains
– The only differences are with respect to size and complexity
– Others would add that today's systems, SoSs, and other complex systems are much more software-intensive than in the past, and that processes are evolving to better address this aspect of systems
• Many in the SoS world have examples of how SE has evolved to better support SoSE, and further state that SoSE is considerably different from traditional SE
• Both are probably right…
– Surveys and studies have shown that both traditional SE and SoSE perform the traditional SE activities/processes
– However, SoSE is finding better ways, and there may be lessons learned that can be applied to most software-intensive system developments

Relationships Between Traditional Systems, SoSs, and Complex Systems
[Diagram: traditional simple systems, traditional systems and SoSs, and large, complex systems and SoSs shown as progressively broader categories; based on Kuras and White work, INCOSE, 2005]

Recent SoS-Related Research Findings*
• Many types of SoS
• SoS engineering teams: varying degrees of responsibility and authority
• Incorporating many agile-like approaches to handle
– Multiple constituent systems
– Asynchronous activities and events
– Quickly taking advantage of opportunities as they appear
• SoSE
– Must support multiple purposes and visions
– Requires significantly more negotiation
– Is content to satisfice rather than optimize
• SoSE activities map to traditional SE activities (e.g., DAG and EIA 632), but take on a different focus and scope
* Based on OSD SoS SE pilot studies and USC CSSE SoSE cost model research

Relationship Among Core Elements of SoSE
[Diagram: seven interacting core elements – Translating capability objectives; Understanding systems & relationships (includes plans); Assessing (actual) performance to capability objectives; Developing, evolving and maintaining SoS design/arch; Monitoring & assessing changes; Addressing new requirements & options; Orchestrating upgrades to SoS (includes plans). Annotations: typically not the role of the SE but key to SoS; [assumes these are fixed]; block upgrade process for SoS & options; persistent framework overlay on systems in SoS [architecture]; large role of external influences; external environment]

SoSE Core Element Descriptions
• Translating capability objectives
– Developing a basic understanding of the expectations of the SoS and the core requirements for meeting these expectations, independent of the systems that comprise the SoS
• Understanding systems and relationships
– In an SoS, the focus is on the systems which contribute to SoS capabilities and their interrelationships (as opposed to a system, where the focus is on boundaries and interfaces)
• Assessing actual performance to capability objectives
– Establishing SoS metrics and methods for assessing performance, and conducting evaluations of actual performance using those metrics and methods
• Developing, evolving, and maintaining an SoS architecture/design
– Establishing and maintaining a persistent framework for addressing the evolution of the SoS to meet user needs, including possible changes in systems functionality, performance, or interfaces
SoSE Core Element Descriptions (continued)
• Monitoring and assessing changes
– Monitoring proposed or potential changes and assessing their impacts to:
• Identify opportunities for enhanced functionality and performance, and
• Preclude or mitigate problems for the SoS and constituent systems (this may include negotiating with the constituent system over how the change is made, in order to preclude SoS impacts)
• Addressing new requirements and options
– Reviewing, prioritizing, and determining which SoS requirements to implement next
• Orchestrating upgrades to SoS
– Planning, facilitating, integrating, and testing changes in systems to meet SoS needs

What is Working in SoSE?
• Address organizational as well as technical perspectives
• Focus on areas critical to the SoS
– Leave the rest (as much as possible) to the SEs of the systems
• Technical management approach reflects need for transparency and trust, with focused active participation
• SoS architectures are best when open and loosely coupled
– Impinge on the existing systems as little as possible
– Are extensible, flexible, and persistent over time
• Continuous ("up front") analysis which anticipates change
– Design strategy and trades performed upfront and throughout
– Based on robust understanding of internal and external sources of change

Relationship Between Systems Engineering and Software Engineering in SoS Environment
• Focus of SoS SEs: primarily attempting to identify a set of options for incorporating functions to support a desired new capability
• Core element activities do not tend to segregate systems engineering, software engineering, and human factors engineering
– SEs take more of a holistic point of view in analyses and trade-offs
– SEs look for options and opportunities within the desired timeframe
• Success in integrating systems and software engineering is heavily influenced by the fact that SoS development seldom starts with a "clean sheet of paper"
• Current challenge: what can we learn from this for new system developments starting with a fairly clean sheet of paper?

Comparison of Top-10 Risks
• Software-Intensive Systems of Systems (CrossTalk, May 2004)
1. Acquisition management and staffing
2. Requirements/architecture feasibility
3. Achievable software schedules
4. Supplier integration
5. Adaptation to rapid change
6. Quality factor achievability and tradeoffs
7. Product integration and electronic upgrade
8. Software COTS and reuse feasibility
9. External interoperability
10. Technology readiness
• System and Software Risks (CSSE 2006-07 Top-10 Survey)
1. Architecture complexity, quality tradeoffs
2. Requirements volatility
3. Acquisition and contracting process mismatches
4. Budget and schedule
5. Customer-developer-user
6. Requirements mismatch
7. Personnel shortfalls
8. COTS
9. Technology maturity
10. Migration complexity
• Many similarities… Updated risk survey information currently being analyzed…

SoSE Lessons Learned to Date
• SoSs provide examples of how systems and software engineering can be better integrated when evolving existing systems to meet new needs
• Net-centricity and collaboration-intensiveness of SoSs have created more emphasis on integrating hardware, software, and human factors engineering
• Focus is on
– Flexibility
– Adaptability
– Use of creative approaches, experimentation, and tradeoffs
– Consideration of non-optimal approaches that are satisfactory to key stakeholders
• SoS process adaptations have much in common with the Incremental Commitment Model

ICM Practices and Assessment
• From the spiral model to the ICM
– Principles and example
• Risk-driven incremental definition: ICM Stage I
– Buying information to reduce risk
• Risk-driven incremental development: ICM Stage II
– Achieving both rapid change and high assurance
• Multiple views of the ICM
– Viewpoints and examples
• ICM Assessment

From the Spiral Model to the ICM
• Need for intermediate milestones
– Anchor Point Milestones (1996)
• Avoid stakeholder success model clashes
– WinWin Spiral Model (1998)
• Avoid model misinterpretations
– Essentials and variants (2000-2005)
• Clarify usage in DoD Instruction 5000.2
– Initial phased version (2005)
• Explain system of systems spiral usage to GAO
– Underlying spiral principles (2006)
• Provide framework for human-systems integration
– National Research Council report (2007)

Process Model Principles
1. Commitment and accountability
2. Success-critical stakeholder satisficing
3. Incremental growth of system definition and stakeholder commitment
4, 5. Concurrent, iterative system definition and development cycles
– Cycles can be viewed as sequential concurrently-performed phases or spiral growth of system definition
6. Risk-based activity levels and anchor point commitment milestones

Shared Commitments are Needed to Build Trust
• New partnerships are increasingly frequent
– They start with relatively little built-up trust
• Group performance is built on a bedrock of trust
– Without trust, partners must specify and verify details
– Increasingly untenable in a world of rapid change
• Trust is built on a bedrock of honored commitments
• Once trust is built up, processes can become more fluid
– But they need to be monitored as situations change
• Competitive downselect is better than a cold RFP at building trust

Incremental Commitment in Gambling
• Total commitment: roulette
– Put your chips on a number
• E.g., a value of a key performance parameter
– Wait and see if you win or lose
• Incremental commitment: poker, blackjack
– Put some chips in
– See your cards, some of others' cards
– Decide whether, and how much, to commit to proceed

Scalable Remotely Controlled Operations
[Figure]

Total vs. Incremental Commitment – 4:1 RPV
• Total commitment
– Agent technology demo and PR: can do 4:1 for $1B
– Winning bidder: $800M; PDR in 120 days; 4:1 capability in 40 months
– PDR: many outstanding risks, undefined interfaces
– $800M, 40 months: "halfway" through integration and test
– 1:1 IOC after $3B, 80 months
• Incremental commitment [number of competing teams]
– $25M, 6 mo. to VCR [4]: may beat 1:2 with agent technology, but not 4:1
– $75M, 8 mo. to ACR [3]: agent technology may do 1:1; some risks
– $225M, 10 mo. to DCR [2]: validated architecture, high-risk elements
– $675M, 18 mo. to IOC [1]: viable 1:1 capability
– 1:1 IOC after $1B, 42 months

The Cone of Uncertainty: Usual Result of Total Commitment
[Chart: relative cost estimate range (90% confidence limits, pessimistic/optimistic) narrowing across phases and milestones, from 4x/0.25x during feasibility through 2x/0.5x, 1.5x/0.67x, and 1.25x/0.8x at successive specifications (Concept of Operation, Rqts. Spec., Product Design Spec., Detail Design Spec.) down to x at Accepted Software; an inadequate PDR leaves the range wide]
• Better to buy information to reduce risk

The Incremental Commitment Life Cycle Process: Overview
• Stage I: Definition
• Stage II: Development and Operations
• Anchor Point Milestones
• Synchronize, stabilize concurrency via FRs (Feasibility Rationales)
• Risk patterns determine life cycle process

Anchor Point Feasibility Rationales
• Evidence provided by the developer and validated by independent experts that, if the system is built to the specified architecture, it will
– Satisfy the requirements: capability, interfaces, level of service, and evolution
– Support the operational concept
– Be buildable within the budgets and schedules in the plan
– Generate a viable return on investment
– Generate satisfactory outcomes for all of the success-critical stakeholders
• All major risks resolved or covered by risk management plans
• Serves as basis for stakeholders' commitment to proceed

There is Another Cone of Uncertainty: Shorter Increments are Better
• Uncertainties in competition, technology, organizations, mission priorities
[Chart: same cone-of-uncertainty phases and milestones as above; shorter increments traverse the cone more quickly]

The Incremental Commitment Life Cycle Process: Overview (concurrency view)
• Stage I: concurrently engineer OpCon, requirements, architecture, plans, and prototypes
• Stage II: concurrently engineer Increment N (operations), N+1 (development), N+2 (architecture)

ICM Stage II: Increment View
• Rapid change: short development increments
• High assurance: stable development increments
[Diagram: foreseeable change (plan) feeds the Increment N baseline into short, stabilized development of Increment N, then Increment N transition/O&M]

ICM Stage II: Increment View (A Radical Idea?)
[Diagram adds: unforeseeable change (adapt) handled by agile rebaselining for future increments, with deferrals feeding agile future increment baselines; current V&V resources perform continuous V&V of Increment N, passing artifacts and concerns, with future V&V resources to follow]
• No; a commercial best practice, and part of DoDI 5000.2

RUP/ICM Anchor Points Enable Concurrent Engineering
[Diagram: anchor point milestones VCR, ACR, DCR, CCD, IOC, and OCR across concurrently-performed phases]
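The dollar and month figures in the 4:1 RPV example earlier add up exactly as the slide claims; this small sketch (our own illustration, using only numbers from the example) totals the incremental-commitment route:

```python
# Incremental-commitment steps from the 4:1 RPV example: (cost in $M, months)
rpv_steps = {"VCR": (25, 6), "ACR": (75, 8), "DCR": (225, 10), "IOC": (675, 18)}

total_cost_m = sum(cost for cost, _ in rpv_steps.values())
total_months = sum(months for _, months in rpv_steps.values())

print(f"Incremental route: ${total_cost_m}M over {total_months} months")  # $1000M, 42 months
print("Total-commitment route (from the example): $3000M over 80 months")
```

The point of the comparison: four successively larger commitments reach a viable 1:1 IOC for $1B in 42 months, versus $3B and 80 months when the whole $800M development was committed up front on unresolved risks.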
ICM HSI Levels of Activity for Complex Systems
[Figure only]

Different Risk Patterns Yield Different Processes
[Figure only]

Common Risk-Driven Special Cases of the Incremental Commitment Model (ICM)
1. Use NDI – Example: small accounting system. NDI support: complete. Stage I (incremental definition): acquire NDI. Stage II (incremental development, operations): use NDI.
2. Agile – Example: e-services. Size/complexity: low; change rate: 1-30%/month; criticality: low-med; NDI support: good, in place; personnel: agile-ready, med-high. Stage I: skip Valuation and Architecting phases. Stage II: Scrum plus agile methods of choice. Time per build; per increment: <= 1 day; 2-6 weeks.
3. Scrum of Scrums – Example: business data processing. Size/complexity: med; change rate: 1-10%/month; criticality: med-high; NDI support: good, most in place; personnel: agile-ready, med-high. Stage I: combine Valuation and Architecting phases; complete NDI preparation. Stage II: architecture-based Scrum of Scrums. Time: 2-4 weeks; 2-6 months.
4. SW-embedded HW component – Example: multisensor control device. Size/complexity: low; change rate: 0.3-1%/month; criticality: med-very high; NDI support: good, in place; personnel: experienced, med-high. Stage I: concurrent HW/SW engineering; CDR-level ICM DCR. Stage II: IOC development, LRIP, FRP; concurrent Version N+1 engineering. Time: SW: 1-5 days; market-driven.
5. Indivisible IOC – Example: complete vehicle platform. Size/complexity: med-high; change rate: 0.3-1%/month; criticality: high-very high; NDI support: some in place; personnel: experienced, med-high. Stage I: determine likely minimum IOC and conservative cost; add deferrable SW features as risk reserve. Stage II: drop deferrable features to meet conservative cost; strong award fee for features not dropped. Time: SW: 2-6 weeks; platform: 6-18 months.
6. NDI-intensive – Example: supply chain management. Size/complexity: med-high; change rate: 0.3-3%/month; criticality: med-very high; NDI support: NDI-driven architecture; personnel: NDI-experienced, med-high. Stage I: thorough NDI-suite life cycle cost-benefit analysis, selection, and concurrent requirements/architecture definition. Stage II: proactive NDI evolution influencing; NDI upgrade synchronization. Time: SW: 1-4 weeks; system: 6-18 months.
7. Hybrid agile/plan-driven system – Example: C4ISR. Size/complexity: med-very high; change rate: mixed parts, 1-10%/month; criticality: mixed parts, med-very high; NDI support: mixed parts; personnel: mixed parts. Stage I: full ICM; encapsulated agile in high-change, low-to-medium criticality parts (often HMI, external interfaces). Stage II: full ICM; three-team incremental development, concurrent V&V, next-increment rebaselining. Time: 1-2 months; 9-18 months.
8. Multi-owner system of systems – Example: net-centric military operations. Size/complexity: very high; change rate: mixed parts, 1-10%/month; criticality: very high; NDI support: many NDIs, some in place; personnel: related experience, med-high. Stage I: full ICM; extensive multi-owner team building and negotiation. Stage II: full ICM; large ongoing system/software engineering effort. Time: 2-4 months; 18-24 months.
9. Family of systems – Example: medical device product line. Size/complexity: med-very high; change rate: 1-3%/month; criticality: med-very high; NDI support: some in place; personnel: related experience, med-high. Stage I: full ICM; full stakeholder participation in product line scoping; strong business case. Stage II: full ICM; extra resources for first system, version control, multi-stakeholder support. Time: 1-2 months; 9-18 months.

Legend: C4ISR: Command, Control, Computing, Communications, Intelligence, Surveillance, Reconnaissance. CDR: Critical Design Review. DCR: Development Commitment Review. FRP: Full-Rate Production. HMI: Human-Machine Interface. HW: Hardware. IOC: Initial Operational Capability. LRIP: Low-Rate Initial Production. NDI: Non-Development Item. SW: Software.
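One way to read the special-case table is as a rough selector from a few of its columns to a recommended case. The thresholds and return strings below paraphrase table rows and are illustrative only, not published decision rules:

```python
# Illustrative selector over the ICM special-case table. A real determination
# would weigh all columns (personnel, NDI support, time per build) and treat
# mismatches as risks to resolve, not as hard rules.

def suggest_special_case(size, change_rate_pct_month, criticality, ndi_covers_need):
    """size and criticality are 'low'/'med'/'high'/'very high'; change rate is %/month."""
    if ndi_covers_need:
        return "1. Use NDI"
    if criticality in ("low", "med") and size == "low" and change_rate_pct_month >= 1:
        return "2. Agile"
    if size == "med" and change_rate_pct_month >= 1:
        return "3. Architected agile (Scrum of Scrums)"
    if criticality in ("high", "very high") and size == "low":
        return "4. SW-embedded HW component"
    return "7-8. Hybrid agile/plan-driven or full ICM"

print(suggest_special_case("low", 5, "med", False))   # a small, fast-changing e-service
```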
Examples of Risk-Driven Special Cases

4. Software-Embedded Hardware Component
Example: Multisensor control device
• Biggest risks: device recall, lawsuits, production line rework, hardware-software integration
  – DCR carried to Critical Design Review level
  – Concurrent hardware-software design
• Criticality makes agile too risky
  – Continuous hardware-software integration, initially with simulated hardware
• Low risk of overrun
  – Low complexity, stable requirements and NDI
  – Little need for risk reserve
• Likely single-supplier software makes daily-to-weekly builds feasible

Example ICM HCI Application: Symbiq Medical Infusion Pump
• Winner of the 2006 HFES Best New Design Award
• Described in the NRC HSI Report, Chapter 5

Symbiq IV Pump ICM Process - I
• Exploration Phase
  – Stakeholder needs interviews, field observations
  – Initial user interface prototypes
  – Competitive analysis, system scoping
  – Commitment to proceed
• Valuation Phase
  – Feature analysis and prioritization
  – Display vendor option prototyping and analysis
  – Top-level life cycle plan, business case analysis
  – Safety and business risk assessment
  – Commitment to proceed while addressing risks

Symbiq IV Pump ICM Process - II
• Architecting Phase
  – Modularity of pumping channels
  – Safety feature and alarms prototyping and iteration
  – Programmable therapy types, touchscreen analysis
  – Failure modes and effects analyses (FMEAs)
  – Prototype usage in a teaching hospital
  – Commitment to proceed into development
• Development Phase
  – Extensive usability criteria and testing
  – Iterated FMEAs and safety analyses
  – Patient-simulator testing; adaptation to concerns
  – Commitment to production and business plans
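Each Symbiq phase above ends with a stakeholder commitment decision, and the ICM treats feasibility-evidence shortfalls as risks. A minimal sketch of that gate logic, with invented evidence items and thresholds (the 0.2/0.6 cutoffs and the evidence dictionary are assumptions for illustration):

```python
# Hedged sketch of an ICM anchor-point commitment review: each feasibility
# claim carries a shortfall risk in [0, 1], and the worst shortfall drives
# the stakeholders' commitment decision.

def commitment_decision(evidence):
    """evidence: dict mapping a feasibility claim to its shortfall risk in [0, 1]."""
    worst = max(evidence.values())
    if worst < 0.2:
        return "proceed"
    if worst < 0.6:
        return "proceed, with risk mitigation plans for shortfall items"
    return "rescope or discontinue"

# Hypothetical Valuation-phase evidence for an infusion-pump-like product.
valuation_evidence = {
    "business case closes": 0.10,
    "safety hazards addressed": 0.50,   # a shortfall, carried forward as a risk
    "display vendor option viable": 0.15,
}
print(commitment_decision(valuation_evidence))
```

This mirrors the pattern in the slides: the Valuation phase ended with "commitment to proceed while addressing risks" rather than a binary go/no-go.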
ICM Assessment
• ICM principles and process are not revolutionary
• They repackage current good principles and practices to make it easier to:
  – Determine what kind of process fits your project
  – Keep your process on track and adaptive to change
• And harder to:
  – Misinterpret in dangerous ways
  – Gloss over key practices
  – Neglect key stakeholders and disciplines
  – Avoid accountability for your commitments
• They provide enablers for further progress
• They are only partially proven in DoD practice
  – Need further tailoring and piloting

Outline
• Study and Workshop Scope and Context
• IS&SE Assessment: Technical Factors
• IS&SE Assessment: Management Factors
• IS&SE Assessment: Complex Systems Challenges
• Incremental Commitment Model (ICM) Content and Assessment
• Conclusions and Recommendations

Draft Conclusions
• Current SysE guidance is much better than before
  – Still major shortfalls in integrating software and human factors
  – Especially with respect to future challenges: emergent, rapidly changing requirements; high assurance of scalable performance and qualities
• ICM principles address the challenges
  – Commitment and accountability, stakeholder satisficing, incremental growth, concurrent engineering, iterative development, risk-based activities and milestones
  – Can be applied to other process models as well
  – Assurance via evidence-based milestone commitment reviews and stabilized incremental builds with concurrent V&V; evidence shortfalls treated as risks
  – Adaptability via a concurrent agile team handling change traffic

Draft Study Recommendations
• Opportunistically apply ICM principles on current and upcoming projects
  – Evidence-based Feasibility Rationale at SRR, PDR
  – Concurrent Increment N development, Increment N+1 SysE
  – Risk-driven process determination
  – Continuous V&V, including an early integration facility and an executing architecture skeleton
• Elaborate and prepare to pilot current ICM Guidelines
  – Based on workshop feedback
  – In concert with pilot early adopters
• Iterate based on experience
• Develop and apply related education and training materials

Backup Charts

General References
Boehm, B., "Some Future Trends and Implications for Systems and Software Engineering Processes," Systems Engineering, Vol. 9, No. 1, pp. 1-19, 2006.
Boehm, B., Brown, W., Basili, V., and Turner, R., "Spiral Acquisition of Software-Intensive Systems of Systems," CrossTalk, Vol. 17, No. 5, pp. 4-9, 2004.
Boehm, B. and Lane, J., "21st Century Processes for Acquiring 21st Century Software-Intensive Systems of Systems," CrossTalk, Vol. 19, No. 5, pp. 4-9, 2006.
Boehm, B. and Lane, J., "Using the ICM to Integrate System Acquisition, Systems Engineering, and Software Engineering," CrossTalk, October 2007 (to appear).
Carlock, P. and Fenton, R., "System of Systems (SoS) Enterprise Systems for Information-Intensive Organizations," Systems Engineering, Vol. 4, No. 4, pp. 242-261, 2001.
Carlock, P. and Lane, J., "System of Systems Enterprise Systems Engineering, the Enterprise Architecture Management Framework, and System of Systems Cost Estimation," 21st International Forum on COCOMO and Systems/Software Cost Modeling, 2006.
Lane, J. and Boehm, B., "System of Systems Cost Estimation: Analysis of Lead System Integrator Engineering Activities," Information Resources Management Journal, Vol. 20, No. 2, pp. 23-32, 2007.
Lane, J. and Valerdi, R., "Synthesizing SoS Concepts for Use in Cost Estimation," Proceedings of the IEEE Systems, Man, and Cybernetics Conference, 2005.
Madachy, R., Boehm, B., and Lane, J., "Assessing Hybrid Incremental Processes for SISOS Development," USC CSSE Technical Report USC-CSSE-2006-623, 2006.
Northrop, L., et al., Ultra-Large-Scale Systems: The Software Challenge of the Future, Software Engineering Institute, 2006.
Pew, R. W. and Mavor, A. S., Human-System Integration in the System Development Process: A New Look, National Academy Press, 2007.

SoSE-Related References
Carlock, P. G. and Fenton, R. E., "System of Systems (SoS) Enterprise Systems for Information-Intensive Organizations," Systems Engineering, Vol. 4, No. 4, pp. 242-261, 2001.
Department of Defense, Defense Acquisition Guidebook, version 1.6, http://akss.dau.mil/dag/, 2006.
Department of Defense, System of Systems Engineering Guide: Considerations for Systems Engineering in a System of Systems Environment, draft version 0.9, 2006.
DiMario, M., "System of Systems Characteristics and Interoperability in Joint Command Control," Proceedings of the 2nd Annual System of Systems Engineering Conference, 2006.
Electronic Industries Alliance, EIA Standard 632: Processes for Engineering a System, 1999.
Finley, J., "Keynote Address," Proceedings of the 2nd Annual System of Systems Engineering Conference, 2006.
Garber, V., "Keynote Presentation," Proceedings of the 2nd Annual System of Systems Engineering Conference, 2006.
INCOSE, Systems Engineering Handbook, Version 3, INCOSE-TP-2003-002-03, 2006.
Kuras, M. L. and White, B. E., "Engineering Enterprises Using Complex-System Engineering," INCOSE Symposium, 2005.
Krygiel, A., Behind the Wizard's Curtain, CCRP Publication Series, July 1999, p. 33.
Maier, M., "Architecting Principles for Systems-of-Systems," Systems Engineering, Vol. 1, No. 4, pp. 267-284, 1998.
Meilich, A., "System of Systems Engineering (SoSE) and Architecture Challenges in a Net Centric Environment," Proceedings of the 2nd Annual System of Systems Engineering Conference, 2006.
Pair, Major General C., "Keynote Presentation," Proceedings of the 2nd Annual System of Systems Engineering Conference, 2006.
Proceedings of the AFOSR SoSE Workshop, sponsored by Purdue University, 17-18 May 2006.
Proceedings of the 2nd Annual System of Systems Engineering Conference, sponsored by the System of Systems Engineering Center of Excellence (SoSECE), http://www.sosece.org/, 2006.
Proceedings of the Society for Design and Process Science 9th World Conference on Integrated Design and Process Technology, San Diego, CA, 25-30 June 2006.
Siel, C., "Keynote Presentation," Proceedings of the 2nd Annual System of Systems Engineering Conference, 2006.
United States Air Force Scientific Advisory Board, Report on System-of-Systems Engineering for Air Force Capability Development, Public Release SAB-TR-05-04, 2005.

List of Acronyms
ACR – Architecting Commitment Review
B/L – Baselined
CCD – Core Capability Drive-Through
COTS – Commercial Off-the-Shelf
DCR – Development Commitment Review
DI – Development Increment
DOTMLPF – Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities
ECR – Exploration Commitment Review
FMEA – Failure Modes and Effects Analysis
GUI – Graphical User Interface
HSI – Human-System Interface
ICM – Incremental Commitment Model
IOC – Initial Operational Capability
IRR – Inception Readiness Review

List of Acronyms (continued)
LCA – Life Cycle Architecture
LCO – Life Cycle Objectives
OC – Operational Capability
OCR – Operations Commitment Review
OO&D – Observe, Orient and Decide
OODA – Observe, Orient, Decide, Act
O&M – Operations and Maintenance
PRR – Product Release Review
RACRS – Regional Area Crisis Response System
SoS – System of Systems
SoSE – System of Systems Engineering
TSE – Traditional Systems Engineering
VCR – Validation Commitment Review
V&V – Verification and Validation

Well-Meaning but Generally Obsolete Suggestions for Integrating Systems and Software Engineering
• Use this diagram (spiral, V, waterfall, helix, …)
• Use this method (classical SysE, agile, OOD, RUP, …)
• Lock down the plans and requirements
• Work the (hardware, software, human factors) first
• Tailor your approach down from a monolithic standard

Problems with "Use This Diagram"
• Too easy to misinterpret
  – Often by trying to interpret it literally
• Only so much you can put in a 2-D diagram
  – Contingencies, concurrency, environment, change
• Effectively creates a one-size-fits-all solution
  – Too many situations don't fit

Problems with Classical Systems Engineering
• Good for classical systems
  – Railroads, classical vehicles, engines, water and power distribution
• Tries to impose a modernist approach
  – Timeless, universal, general, written
• On an increasingly postmodern world
  – Timely, local, specific, audiovisual
• Summary of the current SEP Guide

Agile and Plan-Driven Home Grounds: Five Critical Decision Factors
• Size, Criticality, Dynamism, Personnel, Culture
[Figure: polar chart with five axes – Personnel (% Level 1B, 40 down to 0; % Level 2&3, 15 up to 35), Criticality (loss due to impact of defects, from many lives to comfort), Dynamism (% requirements change/month), Size (number of personnel, 3 to 300), and Culture (% thriving on chaos vs. order, 10 to 90). Profiles near the center lie in the agile home ground; profiles toward the periphery lie in the plan-driven home ground.]
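Three of the five decision factors above can be turned into a rough home-ground score. The axis endpoints echo the chart; the normalization, the equal weighting, and the 0.5 cutoff are assumptions for illustration, not part of the published model:

```python
# Sketch of the five-factor "home ground" chart as a score: each factor is
# normalized so 0 = agile home ground and 1 = plan-driven home ground.
# Only three of the five axes are modeled here, for brevity.

AXES = {                      # (agile end, plan-driven end) of each axis
    "size":     (3, 300),     # number of personnel
    "dynamism": (50, 1),      # % requirements change/month (high change favors agile)
    "culture":  (90, 10),     # % of staff thriving on chaos vs. order
}

def normalize(value, agile_end, plan_end):
    """Map a raw axis value to [0, 1], where 1 is the plan-driven end."""
    lo, hi = sorted((agile_end, plan_end))
    frac = (min(max(value, lo), hi) - lo) / (hi - lo)
    return frac if plan_end > agile_end else 1 - frac

def home_ground(size, dynamism, culture):
    scores = [normalize(v, *AXES[k])
              for k, v in (("size", size), ("dynamism", dynamism), ("culture", culture))]
    return "agile" if sum(scores) / len(scores) < 0.5 else "plan-driven"

print(home_ground(size=10, dynamism=30, culture=70))   # a small, volatile project
```

A fuller version would also score criticality and personnel capability, and (as the slides argue) a mixed profile suggests a hybrid process rather than a pure choice.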
Problems with "Lock Down the Plans and Requirements"
• GAO 2000: "Don't proceed if you don't know your requirements"
• For people-intensive systems, requirements emerge through use
• For COTS-intensive systems, the selected COTS capabilities determine the requirements
  – It's not a requirement if you can't afford the custom solution
• Can't pre-specify detailed plans for COTS-based systems
  – Step 1: Choose best-of-breed COTS products
  – Step 2: Compensate for shortfalls in the chosen COTS products
• Rapid change will rapidly obsolesce locked-down plans and requirements
• However, plans and requirements can largely be locked down for short increments
  – Subject to time-determined acquisition

Problems with "Work the (Hardware, Software, Human Factors) First"
• Each has different decision drivers (see table)
• Working one first sub-optimizes on its decision drivers
  – Often in conflict with accomplishing the other two

Problems with "Tailor Your Approach Down"
• Works for the experts who developed the monolithic standard
  – They know what they can tailor out
• Less expert managers tend to choose the path of least resistance
  – Leave everything in
  – Often leads to much wasted effort and over-constrained solutions

Project Issue Summary - I
From the NDIA Defense Software Summit
• Dr. James Finley, DUSD (A&T)
  – Software demand and requirements increasing
  – Many software performance, schedule, and cost issues
• NDIA SysE Top Software Issues Report
  – Weak integration of SysE and SwE
  – Fundamental SysE decisions made without software input
  – Ineffective software life-cycle planning and management, especially for COTS/NDI decisions
  – Inadequate software engineering expertise
  – Inadequate software verification and distributed software engineering capability

DAG Chapter 4 SysE Evaluation: Concurrent Engineering
• Strengths
  – Use of Integrated Product Teams, IPPD (4.1.5)
  – NDI evaluation during requirements development (4.2.4.1)
  – Emphasized for systems of systems (4.2.6)
• Shortfalls
  – Logical analysis overly top-down, sequential (4.2.4.2)
  – V-diagrams too sequential, heavyweight (4.3.1, 4.3.2)
  – Functional decomposition incompatible with NDI and service/object-oriented approaches
  – Cost not considered in 4.3.1.3; highlighted in 4.3.1.4
  – Evolutionary acquisition SysE too sequential (4.3.6)
  – Some hardware-only sections (4.4.8, 4.4.10), e.g., "RAM system requirements address all elements of the system, including support and training equipment, technical manuals, spare parts, and tools"

DAG Chapter 4 SysE Evaluation: Stakeholder Satisficing
• Strengths
  – Balanced, user-adaptive approach (4.1.1)
  – Use of Integrated Product Teams (4.1.5)
• Shortfalls
  – Emphasis on "optimizing" (4.0, 4.1.3, 4.1.5)
  – "System" excludes people (4.1.1)
  – Overfocus on users vs.
other stakeholders (4.3.1, 4.3.2)
    • Owners, maintainers, interoperators, sponsors

DAG Chapter 4 SysE Evaluation: Incremental/Evolutionary Definition and Commitment
• Strengths
  – Evolutionary acquisition (4.1.3, 4.1.4)
  – Emphasized for systems of systems (4.2.6)
  – Good lists of ASR, SRR content (4.3.1.4.2, 4.3.2.4.1)
• Shortfalls
  – IPPD emphasis on "optimizing" (4.1.5)
  – System Design makes all life cycle decisions (4.2.3.1)
  – ASR tries to minimize future change (4.3.1.4.2)
  – SRR overemphasis on fully-defined requirements (4.3.2.4.1)

DAG Chapter 4 SysE Evaluation: Risk-Driven Activities
• Strengths
  – Good general approach, use of P(L), C(L) (4.2.3.5)
  – Good hierarchical risk approach (4.2.3.5)
  – Good overall emphasis on risk (4.3.1, 4.3.2)
• Shortfalls
  – Underemphasis on reviewing feasibility evidence as a major source of risk (4.2.3.5)
  – ASR underemphasis on feasibility evidence and risk (4.3.1.4.3)
  – No risk reserve approach; no identification of top risk sources
  – No approach to risk-driven level of detail, level of activity, or earned value management

Incremental Commitment in Systems and Life: Stage I (Definition) Anchor Point Milestones
• Common system/software stakeholder commitment points
  – Defined in concert with Government and industry organizations
  – Initially coordinated with Rational's Unified Software Development Process
• Exploration Commitment Review (ECR)
  – Stakeholders' commitment to support initial system scoping
  – Like dating
• Valuation Commitment Review (VCR)
  – Stakeholders' commitment to support system concept definition and investment analysis
  – Like going steady
• Architecting Commitment Review (ACR)
  – Stakeholders' commitment to support system architecting
  – Like getting engaged
• Development Commitment Review (DCR)
  – Stakeholders' commitment to support system development
  – Like getting married

Incremental Commitment in Systems and Life: Stage II (Development and Operations) Anchor Points
• Increment N Operations Commitment Review (OCR)
  – Stakeholders' commitment to support operations
  – Like having children
• Concurrent Increment N+1 Development Commitment Review (DCR)
  – Stakeholders' commitment to support Increment N+1 development
  – Based on a feasibility-validated Increment N+1 architecture and plans
    • Rebaselined during Increment N development
    • Accommodating changes in requirements, priorities, NDI

Examples of Risk-Driven Special Cases

5. Indivisible IOC
Example: Complete vehicle platform
• Biggest risk: complexity and NDI uncertainties cause cost-schedule overrun
  – Similar strategies to Case 4 for criticality (CDR-level DCR, concurrent HW-SW design, continuous integration)
  – Add deferrable software features as a risk reserve
    • Adopt a conservative (90%) cost and schedule
    • Drop software features as needed to meet cost and schedule
    • Strong award fee for features not dropped
  – Likely multiple-supplier software makes multi-weekly builds more feasible

Spiral View of the Incremental Commitment Model
• Cumulative level of understanding, cost, time, product, and process detail (risk-driven)
• Concurrent engineering of products and processes
• Phases spiral outward: Exploration, Valuation, Architecting, Development, Operation1, Operation2
• Stakeholder commitment review points, each an opportunity to proceed, skip phases, backtrack, or terminate:
  1. Exploration Commitment Review
  2. Valuation Commitment Review
  3. Architecture Commitment Review
  4. Development Commitment Review
  5. Operations1 and Development2 Commitment Review
  6. Operations2 and Development3 Commitment Review

Use of Key Process Principles: Annual CrossTalk Top-5 Projects
Year            Concurrent Engineering   Risk-Driven   Evolutionary Growth
2002            4                        3             3
2003            5                        4             3
2004            3                        3             4
2005            4                        4             5
Total (of 20)   16                       14            15

Process Principles in CrossTalk 2002 Top-5 Software Projects
(Per project: spiral degree; concurrent requirements/solution development; risk-driven activities; evolutionary increment delivery)
• STARS Air Traffic Control – *; yes; HCI, safety; for multiple sites
• Minuteman III Messaging (HAC/RMPE) – *; yes; safety; yes (block upgrades)
• FA-18 Upgrades – *; not described; yes; yes (block upgrades)
• Census Digital Imaging (DCS2000) – **; yes; yes; no (fixed delivery date)
• FBCB2 Army Tactical C3I – **; yes; yes; yes

NRC HSI Study Policy Recommendations for DoD, Other Government and Private Organizations
• Refine and develop a system development process that embodies the five principles illustrated by the Incremental Commitment Model
  – Principles trump diagrams
• Put HSI requirements on an equal footing with traditional systems engineering requirements
  – Not human factors first, but balanced with hardware and software factors
• Account for HSI considerations when developing the technical, cost, and schedule aspects of the business offer
  – Also need better HSI cost and schedule estimation methods