
Travel Report from 23rd International Conference on
Software Engineering (ICSE’2001), 14-18 May 2001,
Toronto, Canada.
Also SCM’10 and CBSE’4 workshops.
Scribe: Reidar Conradi, IDI, NTNU, conradi@idi.ntnu.no
1. ICSE’2001 reminders
- Open Source: processes, organization and development tools.
- EasyWinWin from USC/Barry Boehm: process and tools using the GroupSystems CSCW tool; relation to reuse.
- Telelogic DOORS tool for requirements engineering: how it is connected to use cases and to traceability in general, and its impact on facilitating reuse.
- Impact project to study the industrial impact of software technologies. There is an SCM follow-up by Jacky Estublier. Note that SCM needs process support (both low-level and collaborative), so point out the relation to similar Process Technology work.
- Impact project, theme Languages and SE: hint to Mary Lou Soffa at SEI on the relation between Simula-67 and ADTs ("Structured Programming" book from 1972).
- ISERN: follow-up of the shared inspection experiment, follow-up of the session on experience bases. Next meeting: 20-22 Aug. 2001 in Glasgow.
- The European Conference on Software Quality, 8-12 June 2002, Helsinki: finalize the CfP and disseminate it.
- Relation between reuse, CBD, COTS and product lines.
- Introduce Clustra (Norwegian database technology company) to Gabby Silberman, IBM Software, regarding middleware and high-capacity databases.
- COTS: Commercial Off-The-Shelf; try to define the term.
- Look at web-sites around discussion groups and components.
- Software Engineering Curriculum: check with CMU and others, and look for articles or studies of the relevance of university courses.
- Invite Jan Bosch to Trondheim and Oslo.
- Invite Manny Lehman similarly.
- Discuss evolution/maintenance studies, e.g. with Mary Shaw and David Notkin.
- Bought extra proceedings of ICSE'2001 and proceedings for workshops W2, W3, W10, W16 and tutorials T6, T7, T9 and T20 (and will get T12 at FCMD).
- Our SCM'10 paper (by Westfechtel/Conradi): planning a journal upgrade.
2. SCM'10 Workshop
See André van der Hoek (ed.): “Proc. 10th Workshop on Software Configuration
Management: New Practices, New Challenges and New Boundaries (SCM'10), 14-15 May 2001", at ICSE'2001, Toronto.
The workshop attracted 31 participants. (I partly attended CBSE also, in parallel.)
2.1 Session 1: “OpenSource and CM”, Karl Fogel and Jim Blandy
Information growth started with the printing press and associated copyright laws
(enforced by distributors, not authors). This implies standardization of texts. The Internet makes it easier to copy and modify copyrighted material, and to use collaborative work tools and practices.
Emacs: Different branches and versions, using CVS.
Open Source: No imposed development methodologies; a chaotic process seems to
work because of idealism and distributed contributions. Needs good change logs.
What changes will be approved? – Using a vote. Tools must not facilitate
authoritative intervention. CVS is not good enough: now a new "Subversion" SCM tool, but
with no major conceptual changes.
Need re-integration from distributed changes.
New requirements:
- No heavy central facilities, like the ClearCase dynamic file system; need light clients with most functionality available locally (e.g. doing "diffs" between local and global versions).
- Post-work merges, gradual "commit access".
- Loose process, evolving.
- Dream question: low threshold to start working.
Cf. published book - using a copy-editor, but having competing views and changes.
Need also to evolve and customize the tools: emacs, CVS, etc.
2.2 Session 2: “Versioning and Hypermedia”, David Hicks, Aarhus University
Versioning: Both of data and structure (as links).
Four layers: application-functionality in clients, hypermedia abstract views,
versioning, and repository. Links with simple revision numbers: not good enough.
Relations in XML documents as “X-links”?
Use “effectivity” on links taken from CAD area?
Activity-based changes: groups of related changes.
2.3 Session 5: Jan Bosch, University of Groningen: "Variability in
Product Families"
Identify and isolate variability – delay related design decisions on how to represent
variability; i.e. decide at “variation points”.
Ex.:
- Traditional: decide at the requirements level.
- Current: decide at the architectural level.
- Future: decide at run-time.
Domain vs. architectural engineering.
Variation point: implicit, designed, bound, open/closed, permanent (bound forever).
Recurring pattern: Variant vs. optional entity.
Binding times: Design vs. coding vs. run-time.
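As an illustration of the binding-time distinction, a minimal Java sketch (hypothetical names, not from the talk) of one variation point that is bound either at design/coding time or at run-time:

    // Variation point expressed as an interface; variants are implementations.
    interface Compression {
        byte[] compress(byte[] data);
    }
    class ZipCompression implements Compression {
        public byte[] compress(byte[] data) { return data; }   // placeholder logic
    }
    class NoCompression implements Compression {
        public byte[] compress(byte[] data) { return data; }
    }

    class ProductConfiguration {
        // Design/coding-time binding: the variant is fixed when this product is built.
        static final Compression BOUND_AT_BUILD = new ZipCompression();

        // Run-time binding: the variant is chosen from configuration at start-up.
        static Compression bindAtRunTime(String variant) {
            return "zip".equals(variant) ? new ZipCompression() : new NoCompression();
        }
    }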
First-class representation of the variability mechanism and of dependencies, but several such mechanisms cannot be managed together.
Lifecycle management of variation points: different technologies allow flexible bindings, but how to manage such product lines (the process)?
How do variation points vary? – need empirical studies!
Ex.: Adele's instrumentation for the Airbus 300: 3 lines of specification with synthesized attributes guide all later choices.
Evaluation of variability:
- New variation point
- Change binding time (delay decisions?)
- Change variant adding time.
I.e. need more “uniform model” for managing variation in software systems, e.g.
partial evaluation. Or do we need several models?
Traceability of variability through all artefacts should be made explicit.
Try to encourage cooperation between communities: SCM, software architecture,…
Use an embedded scripting or domain-specific language, e.g. as in emacs, OpenOffice
etc.
European ComponentPlus project: How to test configurable components?
Cf. programming language: Both high-level abstractions (away from the machine),
and delaying decisions to run-time (polymorphism, dynamic binding of libraries,
incremental compilation, …).
2.4 Session 6: Bernhard Westfechtel and Reidar Conradi: “Software
Architecture and Software Configuration Management”
Architecture Description Languages (ADLs) vs. UML and SQL vs. System Description Languages in SCM systems: unavoidable redundancies.
Planned vs. unplanned evolution: Planned covers both variants and revisions.
SCM systems: Generic, semantics-free.
But always “versioning” outside the ADL world.
Bottom-up (reuse with re-engineering) vs. top-down (start from scratch).
Need ADL tools to offer useful functionality.
Cannot expect ADL descriptions to be created and maintained without offering
practical (i.e. generative) tool support.
3. Component Workshop (CBSE’4)
Judith Stafford, SEI, et al. (eds.): "Proc. 4th ICSE Workshop on Component-Based
Software Engineering (CBSE'4): Component Certification and System Prediction, 14–15 May 2001, Toronto", ICSE'2001, 97 p.
Ca. 40 participants.
Session 3 on Relevant System Properties, Betty Cheng, Michigan State University, Chair
3.1 Mark Woodman, Middlesex University: “Issues of CBD Product Quality and
Process Quality”
Cf. OO/CBD meta-modelling: book by B. Henderson-Sellers and B. Unhelkar:
"OPEN Modeling with UML", Addison-Wesley, 2000.
3.2 Steve Riddle, University of Middlesex: “Protective Wrapping of OTS
Components”
COTS => OTS, including open-source software.
Wrappers are complex to build.
Total criteria: domain-specific or not?
Stability and confidence are important.
3.3 Parastoo Mohagheghi and Reidar Conradi, NTNU/Ericsson
“Experiences with certification of reusable components in the GSN project in
Ericsson, Norway”.
The hard problem is the non-functional requirements.
3.4. David Garlan and Bradley Schmerl, Carnegie-Mellon University:
“Component-Based Software Engineering in a Pervasive Environment”
Move towards net-centric computing, “move” software towards environment – e.g.
guessing user intent.
New interpretation of good SE ideas:
1. Information hiding – but expose resource dependences.
2. Correct computation is the primary concern – but relative to environment.
3. System requirements – negotiable and adaptable.
4. Confirm errors – rather self-repair.
5. Preplan evaluation – rather adaptive.
Components should:
- Expose requirements and usage of resources.
- Provide a monitoring interface (or several or extendable interfaces).
- Have “good enough” quality.
CB systems should:
- Provide mechanisms for system adaptation.
- Support utility-based component composition.
- Include mechanism for understanding the health of a system.
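A rough sketch (hypothetical names, not from the talk) of what "expose resource usage and provide a monitoring interface" could look like for a single component:

    interface RouteFinder {                      // functional interface
        String findRoute(String from, String to);
    }
    interface Monitorable {                      // separate monitoring interface
        long requestsServed();
        long memoryInUseBytes();                 // exposed resource usage
    }
    class SimpleRouteFinder implements RouteFinder, Monitorable {
        private long requests = 0;
        public String findRoute(String from, String to) {
            requests++;
            return from + " -> " + to;           // placeholder computation
        }
        public long requestsServed()   { return requests; }
        public long memoryInUseBytes() {
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }
    }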
Session 4 on “Compositional Reasoning”, Murali Sitaraman, Chair.
3.5 Bruce W. Weide, Ohio State University: “Modular Regression Testing –
Connections to Component-Based Software”
Reason using information hiding, with or without explicit interfaces, using a
verification language and tools.
Reason using "value semantics".
The technique can be used in courses and has a demonstrated effect on the reliability and maintainability of systems.
3.6 Thomas Genssler and Christian Zeidler, FZI, Karlsruhe and ABB
Heidelberg: “Rule-driven component composition for embedded systems”
In PECOS project. Little actual reuse now.
Here: Simple ADL w/ constraints.
Should reason about dynamic behaviour.
3.7 Dave Mason, Ryerson Polytechnic University, Toronto: “Probability Density
Functions in Program Analysis”
Theoretical basis for statistical analysis of program execution.
Mathematics: Not sufficient!
Simulation appears also to be intractable!!
Components behaviour: Depends on use (external properties).
3.8 Heinz Schmidt, Monash University, Melbourne: “Trusted Components –
Towards Automated Assembly with Predictable Properties”
Looking at distributed real-time systems.
Ex. From car industry: Bosch common software architecture, needing simulation,
with code synthesis and VLSI compilation. Complex hardware/software-mechanical
systems, lots of processors, partly networked. Dynamic => organic. Need
compositional proof systems.
Dependability = reliability + safety + robustness + availability + security.
Using contracts, state machines, Petri nets.
Challenges: how to combine architecture and trust, behavioural/non-functional
requirements, and loose coupling; how to make trusted systems from non-trusted components.
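As an illustration of the "contracts" point above, a minimal Java sketch (hypothetical names and limits, not from the talk) of a component operation guarded by pre- and post-condition checks:

    // Hypothetical component interface with an explicit contract.
    interface SpeedController {
        // Pre: 0 <= requested <= MAX; Post: returned value never exceeds MAX.
        double limit(double requested);
    }

    class SafeSpeedController implements SpeedController {
        static final double MAX = 250.0;   // assumed domain limit

        public double limit(double requested) {
            if (requested < 0 || requested > MAX)                // pre-condition check
                throw new IllegalArgumentException("requested out of range");
            double granted = Math.min(requested, MAX);
            assert granted <= MAX : "post-condition violated";   // post-condition check
            return granted;
        }
    }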
Session 7: "Modelling and Specification", Clemens Szyperski, Chair.
3.9 Päivi Kallio, VTT: “How to document components properly?”
Building a web portal for component trading (an EU project).
Two American portals are already in place.
3.10 Marlon Vieira, University of California, Irvine: "Describing Component Access Points",
using rule-based approaches to describe component dependencies
Need classification of component dependences.
3.11 Dimitra Giannakopoulou, NASA: NASA work on CBSE
Intends to specify component interfaces.
How to express system properties in terms of component properties?
Modular separation in time and space.
Increase reuse by shared, domain-specific components.
Conclusion by Heinz Schmidt:
- Composition specification at different abstraction levels; when is it good enough?
- Composition operators?
- Compositional reasoning: Measurement and prediction – non-functional
properties, trust and fitness?
- Predict “new” properties such as cost and risk.
- Need 2-4 reference problems from different domains for the next CBD workshop: aerospace, telecom, dot.com (e-commerce), ... (discuss criteria?)
- Validation
4. ICSE'2001 conference, Toronto, 14-18 May 2001
1100 participants in Westin Harbour Castle hotel.
Toronto: 3 mill. people, 60,000 new immigrants per year, over 100 ethnic
nationalities.
Hausi A. Müller (general chair), and Wilhelm Schäfer and Mary Jean Harrold
(program co-chairs).
Next ICSE is 19–23 May 2002 in Buenos Aires.
4.1 ICSE opening
Gabby Silberman, IBM Centre for Advanced Studies (CAS), case@ibm.org.
New national software engineering laboratory in Ottawa, supported by National
Research Council of Canada.
4.2 Keynote by Daniel Sabbah, IBM Software, Canada
IBM: 2nd largest software producer in the world.
Much middleware focus. 9 major platforms, including 4 for Unix.
Brands: WebSphere, MQ, VisualAge.
4,000 software engineers; 50 million lines of code, 10% new/changed per year. 2-year
development cycles, also 3-month web.com releases.
Using standards, also OpenSource. But Java “standards” move very fast.
TPF: Transaction server for e.g. United Airlines, 99.99% availability, 7,000
transactions/second.
Ex.: CICS: 18-24 month development cycles.
WebSphere: 6-month cycles, 3-month refreshes, 12-month total substitution.
Need customer understanding, founded up-scaling of system parts.
Time is money.
Relevance testing, slim user documentation. Much OO, iterative, concurrent
engineering. 5-10x better productivity than in the early 70's.
Error rates: Still high.
Take calculated risks, quality is in the eyes of the beholder. New business models
require new technologies:
- New interfaces: GUIs, voices, gestures.
- Middleware as glue towards 9 underlying platforms.
- Users want less end-user programming, rather turn-key systems.
Deficit: 1 mill. vacant IT jobs in the USA in 2000; 47% of European companies have vacant IT positions.
- Just enough technology vs. specification completeness.
- Absolute vs. perceived measures:
  . Dynamic vs. static processes.
  . Scenario-based design and testing.
- Dynamic Triumvirate:
  . Marketplace understanding.
  . Understand architecture (patterns, OO).
  . Incremental development.
Ex. Start simple - grow fast.
Using XML-based web-applications. Also role patterns for human users, as scenarios.
All software architects have to do coding!
Model/view separation à la Smalltalk: New GUI over old applications.
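The model/view idea can be sketched roughly as follows (hypothetical names, not IBM's code): the old application logic stays untouched as the model, and a new GUI only observes it.

    // Old application logic, unchanged ("model").
    class Account {
        private double balance = 0;
        private final java.util.List<Runnable> observers = new java.util.ArrayList<>();
        void addObserver(Runnable r) { observers.add(r); }
        void deposit(double amount) {
            balance += amount;
            observers.forEach(Runnable::run);   // notify any registered views
        }
        double balance() { return balance; }
    }

    // New GUI layered on top ("view"): only reads the model, never duplicates its logic.
    class BalanceView {
        BalanceView(Account model) {
            model.addObserver(() -> System.out.println("Balance: " + model.balance()));
        }
    }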
Lastly: Uses 30% of the time on process changes!
5 Global collaboration, Dewayne Perry, Chair
5.1 James Herbsleb, Lucent Technologies: Distributed development
Distribution slows down the work. Process and tool dependences: try to reduce them.
Communication is low across sites; lots of informal communication that is important.
Time zones and cultural differences do matter.
Ex. 30 m distance: Reduces communication drastically.
Hints: Partition work/processes. Travel early, establish liaisons, collaborative
sessions. Use CSCW technology?
6. Process Improvement, Daniel M. Berry, Chair
6.1 P. Abrahamsson, University of Oulu: "Commitment Development in Software
Process Improvement: Critical Misconceptions"
2/3 of SPI programs fail – why? Often due to organizational factors.
Commitment: affective (I want to), continuance (I need to), normative (I ought to).
Model selection: Commitment to change (Conner and Patterson, 1982).
Model Misconception 1: Linear causality.
Model Misconception 2: Controllability (cannot force).
Model Misconception 3: Singular construct.
Model Misconception 4: All-positiveness (must be reasonable and with relevant goals).
Implications for practice:
- Voluntary involvement
- Environment
- Commitment-enabling environment
- Embedded SPI
Should combine different forms of commitment.
6.2 James Herbsleb et al., Lucent: ”An Empirical Study of Global Software
Development: Distance and Speed”
Modification requests (MRs) drive the process, so extra data were automatically collected for these.
I.e. 35% longer time for multi-site vs. local work.
Work duration is related to the number of people involved (which in turn is related to multi-site work) and to the impact on other modules; i.e. people coordination is crucial.
If extra information is needed: takes 1 day locally, 2.5 days multi-site.
Explanations: Differences in finding right person, getting relevant information,…
More help is given in the multi-site setting, but it is not being received.
6.3 Jan Bosch, University of Groningen: “Software Product Lines:
Organizational Alternatives”
1. Small organization: < 30 developers, doing domain or application engineering.
Simple, not scalable.
2. Business units share a product (line), e.g. with 10 OO frameworks.
3. Need a dedicated asset responsible over time, not mixed responsibilities. Gradually a
domain-engineering unit, possibly with component units.
Possibly a hierarchical domain-engineering unit, e.g. in total over 100 sub-domains in Nokia.
Influencing factors:
- Geographical distribution
- Project management maturity
- Organizational culture
- Type of systems.
7 Effectiveness of Inspections, Lionel Briand, Chair
7.1 A. Dunsmore et al., University of Strathclyde: “Object-Oriented
Inspections”
OO: Decentralized information, so need “ordered” reading.
Systematic vs. as-needed techniques. Read in what sequence?
Make mini-abstractions for each method (as a side effect).
Two-student groups, two artifacts, two methods.
Ad-hoc done before systematic: slightly more defects found by latter method, but with
less effect.
Ad-hoc techniques did not find all defects.
Worst students did better with systematic method, but best students possibly worse.
7.2 Stefan Biffl et al., Technical University of Vienna: “Evaluating the Accuracy
of Defect Estimation Models”
Using capture-recapture models.
Ex. Requirement document (35 pages), 169 inspectors in 31 teams, 86 seeded defects.
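The capture-recapture idea behind such models can be illustrated with a simple made-up example (not the study's numbers): if inspector A finds n1 = 20 defects, inspector B finds n2 = 15, and m = 10 of these are found by both, the basic Lincoln-Petersen estimate of the total number of defects is N ≈ n1*n2/m = 20*15/10 = 30. Since 20 + 15 - 10 = 25 distinct defects were actually found, roughly 5 defects are estimated to remain undetected.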
7.3 Stefan Biffl et al., Technical University of Vienna: "Investigating the Cost-Effectiveness of Re-inspections in Software Development"
Declining in 2nd inspection, but still cost-effective.
8 “Construction of Component-Based Systems”, Don Batory, University of
Texas, Chair
8.1 E. Truyen et al., University of Leuven: "Dynamic and Selective
Combinations of Extensions in Component-Based Applications"
Somewhat hard to understand …
8.2 Eric Wohlstadter, University of California, Davis: “Generating Wrappers for
Command-line Legacy Systems”
Embed an existing application in a CORBA framework, e.g. for JDB watch:
Client talks to Server. Wrapper has Parser and Printer functions to, respectively,
analyze the request and “stringify” the necessary command-line.
Many command initiations may be waiting, and each request is described by an IDL/CORBA object.
Also a simple specification language to generate such wrappers; it describes exceptions, synchronization and multiplexing, and data conversions.
Much related work on interface adaptation.
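The parser/printer split can be sketched roughly as follows (hypothetical tool and names; the CORBA plumbing is left out):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // What an IDL-described request might carry.
    class WatchRequest {
        final String variable;
        WatchRequest(String variable) { this.variable = variable; }
    }

    class LegacyToolWrapper {
        // "Printer": turn the typed request into the legacy command line.
        String stringify(WatchRequest r) {
            return "legacytool watch " + r.variable;    // hypothetical command line
        }
        // "Parser": run the command line and turn its text output into a reply.
        String invoke(WatchRequest r) throws Exception {
            Process p = Runtime.getRuntime().exec(stringify(r));
            try (BufferedReader out = new BufferedReader(
                     new InputStreamReader(p.getInputStream()))) {
                return out.readLine();   // a real wrapper would also map errors/exceptions
            }
        }
    }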
8.3 David H. Lorenz et al., Northeastern University: "Components vs. Objects –
a Transformational Approach"
Transform OOD to CBD: large design space.
Ex. map classes to events and methods to components!
No direct analog in existing OOD methods.
The number of properties (methods) of a JavaBeans class must be decided statically.
Introducing a dataflow-oriented taxonomy of objects according to main (implicit)
invocation models: Transmitter, receiver,….
Design possibilities: Hard-wired, reflection beans, parameterized.
Using this classification to guide the design.
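The transmitter/receiver distinction can be illustrated with a small Java sketch (hypothetical names): a transmitter pushes values out through events without knowing its consumers, while a receiver only reacts.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    class TemperatureTransmitter {                 // pushes data via implicit invocation
        private final List<Consumer<Double>> listeners = new ArrayList<>();
        void addListener(Consumer<Double> l) { listeners.add(l); }
        void newReading(double celsius) { listeners.forEach(l -> l.accept(celsius)); }
    }

    class DisplayReceiver {                        // only consumes events
        void register(TemperatureTransmitter t) {
            t.addListener(c -> System.out.println("temp = " + c));
        }
    }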
9 Mary Shaw, Carnegie-Mellon University: "Keynote – The Coming-of-Age of
Software Architecture Research"
Engineering: How to make useful things/solutions with limited time, resources and
knowledge.
Software architecture: Study of relations between components and systems.
15-20 years from idea to applicable results.
Software architecture: First informal box diagrams. Later ADLs and certain styles,
and analysis of such ADL-descriptions.
Using NEC Bibliographic citation counts.
New IEEE recommended practice for describing software architectures; also one for
making product families.
Ex. How to express domain descriptions, how to express architectures, how to
analyze/predict these?
Research results (Brooks): Finding, observations, rules-of-thumb.
Research validation: Persuasion, implementation, evaluation, analysis (predictive
models), experiences (case studies).
Her framework relates research questions, strategies/results, and validation:
- Question: feasibility, characterization, method, selection, generalization, compare X and Y, prediction/estimation.
- Strategy/Result: qualitative model, technique, system, analytic model, empirical model.
- Validation: persuasion, implementation, case study.
Challenges: Highly distributed/mobile systems, dependability, humane systems, more
emphasis on validation.
Heineman's law: "The Architecture reflects the organization that has developed it".
10 Bob Balzer: "Tolerating Inconsistency" – best ICSE'13 paper from 1991
Exception rules on DB data: a pragmatic approach to managing inconsistent data in a
technology-neutral way.
Underlying theme: Tolerating the world as it exists, e.g. adapting legacy (COTS)
systems, such as debuggers, graphical editors and safe email attachments.
NB: COTS is an important SE area.
11 New Paradigms in SE
11.1 Matthias Müller et al., University of Karlsruhe: "Extreme programming – a
case study"
XP: Cf. book by Kent Beck.
Pair programming and testing: rather chaotic, but works OK in the end.
6-8 developers is best. Designing in small increments is hard.
Coaching is needed.
XP: Is it effective? Do the students learn SE from it?
11.2 Chris Ebert, Alcatel: “Improving Validation in a Global Team”
Focus on inspections and product-lines.
Focus on PDCA cycle, with specific quality targets.
Remote inspections: Only ½ effectiveness, ¼ cost-efficiency. Optimum reading
speed: 50-150 statements/hour. How to predict SPI results?
11.3 Hoh In et al., USC: “Applying EasyWinWin to quality requirements“
Four factors: Win condition – Issue – Option – Agreement. Give value to each
requirement: Cost, schedule.
Many dependencies. EasyWinWin tool using GroupSystems.
Use AI agents to help resolve conflicts.
12 Impact Project, Leon J. Osterweil, Chair
This is a project initiated by ICSE on the impact of SE Research on practice.
Goal: set of 18-20 reports, or 25-30 journal-quality articles.
ICSE’2001: Preliminary reports.
ICSE’2002: Special track?
ICSE’2003: Final reports.
Reviews/walkthroughs: Dieter Rombach, Dewayne Perry.
Configuration Management: Jacky Estublier, David Leblang (with Reidar Conradi
and others as co-authors).
Testing and Analysis: Lori Clarke, David Rosenblum.
Middleware: Wolfgang Emmerich.
Process Models: Volker Gruhn.
12.1 Jacky Estublier, University of Grenoble: “SCM”
2 bill. $ annual market. 30-40% increase/year. Mature, major technology.
Good cooperation between research and industry.
Versioning starting point: APOLLO project (fire incident) in 1960’s.
12.2 Dieter Rombach, University of Kaiserslautern: “Inspections”
Starter: the DEW Line "discovery" of the moon instead of a missile in the late 50's.
Human-based approach, cf. Cleanroom.
High potential for quality increase and cost reduction.
Differences: Focus, lifecycle phases, processes.
117 case studies: 20 on requirements, 31 on design, 54 on code, 12 on test cases.
Research on: Techniques, effects, process dependencies.
Systematic look at industrial results, e.g. from NASA-SEL, Allianz, AT&T, IBM, etc.
NASA: had to modify inspection process to succeed.
Research needed to achieve industrial successes.
12.3 Mary Lou Soffa, SEI: “Modern Programming Languages”
Basic concepts: classes, OO, abstraction, concurrency, exception handling.
13. Birds-of-a-Feather CeBASE meeting, Victor Basili, UMD, www.cebase.org.
Themes: Inspections, COTS, empirical methods, experience bases, SE education.
10 hottest questions in COTS: IEEE Computer, May 2001.
Experience base: “Raw” data, FAQs, Chats and discussion groups.
Next “web-meeting”: Monday 16th July 1130 EST, an e-workshop.
How to represent context/meta-data: Size of project, people background, and
application domain, type of organization and product, type of process, time period,
data confidence, empirical method. There is little control over the projects from which data is collected. Planning a tutorial and a workshop in about a year. Common experience database project (UMD et al.), also a Digital Library at USC.
14 Perspectives on Software Engineering, Marc Donner, Morgan Stanley
Insurance. Chair: David Notkin, University of Washington, Seattle
Maintain/evolve/operate + substitute/shut-down: more than classic lifecycles.
Missing skills: Debug, read, maintain and reuse code.
Dangerous attitudes: Only design and build, only new code, code more important than
data.
How to evolve old programs? 20 M lines in “Natural” language, 5 TB of data, 1,000
programmers.
Ideas:
- Promote more reuse, also based on outdated languages.
- How to "untie" entangled systems that were previously well-architected.
- Microsoft is not evil.
- Unix has great development tools.
- Mainframes have a great execution environment.
NASA space shuttle: no lack of money to ensure software dependability.
Empirical work and evaluation: Important, make new communities – don’t give up,
ally with industry.
Beginners: Start with large programs, not “30 liners”.
15 Bernd Vogt, Lufthansa: "A CEO's View on Software Engineering"
Lufthansa: Privatized in early 90’s. Organizing its businesses into core areas:
Passenger services, catering, ground services, …
Star Alliance: 13 airlines, how to merge their IT systems into STAR Net?
New middleware: Eland technologies, TCP/IP, …
Basic project management processes.
From databases to data warehouses, used by middleware.
From client/server applications to ERP systems (e.g. SAP).
From browsers to portals (with email, agendas,….): The desktop of the future,
customized to me, running on different platforms (PCs, PDAs, mobile devices). I.e.
my private garden in a public workspace.
Uniform security solutions – now 7 different technologies.
How to ensure dependability?
15.1 Collaborative SE, Alan Brown, Rational
Changes: deregulation, multiple partners, decreasing time-to-market, outsourcing.
Maturing distributed technology platforms:
- Component-based design technologies
- Multi-tier architectures.
Enterprise–specific vs. global and reusable solutions.
Consequences: more flexible, distributed teams.
Need coordination, collaboration, community building. Technologies to support
cooperation: e.g. web-based, artifact-centric (BSCW), CSCW tools, …
Ex. Information services: acm.org, ibm.com/developerworks, theserverside.com (Java), webmonkey.com, search engines (many), groups.yahoo.com (discussion groups), slashdot.com, wikiweb for XP, groove.net, sourceforge.net (open system development, 20,000 projects), sourcecost.com (setting up collaborative networks), componentsource.com (for CBD, 6,000 components).
Apply it for training and education.
16 Reuse with product lines, Linda Northrop, SEI.
Ex. Cummins Inc.: diesel engine producer, with control systems. 20 product groups, over 1,000 different products. Reusable architecture across products, i.e. product lines.
Ex. Raytheon toolkit for the National Reconnaissance Office: 10x increase in productivity and reliability.
Ex. Market Maker GmbH: Merger business systems.
Reuse: Recognized already in NATO 1969 conference. More than single components,
rather product lines: Combines business opportunities and technological solutions.
I.e. not only software code, also test data, processes, domain insights.
Scope: How many products? 2-3 seems OK – as for reuse.
From opportunistic reuse to strategic reuse.
Activities: Develop core assets, configure products, management.