SE 2730 Final Review
1. Introduction
1) What is software: programs, associated documentations and data
2) Three types of software products: generic, custom, semi-custom
 Why are semi-custom products more and more popular?
3) What is software engineering: an engineering discipline concerned with all aspects of software production.
4) Software engineering vs. computer science vs. system engineering
5) Why is software engineering so important?
6) What is good software?
 maintainable, dependable, efficient, acceptable OR
 F.U.R.P.S.+: Functionality, Usability, Reliability, Performance, Supportability
2. Generic Software Process
1) What is a software process: a set of activities whose goal is the development or evolution of software
2) Four generic activities: specification, development, validation and evolution
 Specification answers the question of WHAT instead of HOW!
 Development: high-level design, detailed design, implementation (coding, unit test, integration test, system test)
 Validation vs. Verification
 Evolution: fix, add, change, port, improve
3) CASE: Computer-Aided Software Engineering
3. Time Management (Personal Software Process)
1) How to avoid time overrun? Target and Estimate
2) The problem with targets: unknown feasibility, unpredictable results, not repeatable, pressure
3) Basis for team estimation: previous project experience; assume “average” developers
4) Basis for individual work: time record
 record time in major activity categories
 record time in a standard way
 record completed tasks in standard units: for future estimation!
5) Work Breakdown Structure (WBS): continue decomposing until all tasks are under 2-4 hours duration
6) Constructive Cost Model (COCOMO) I:
 What do we need for the estimation? program size in KLOC
 What can we estimate? total effort in person-month, development time and people required
 Problem with COCOMO: requires a good estimate of the project size up front
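The estimates above can be sketched with the published basic-COCOMO formulas (shown here for the organic project class; the coefficients below are Boehm's standard organic-mode values, not numbers from this course):

```python
# Basic COCOMO (organic mode): estimate total effort, development time,
# and average staffing from program size in KLOC.

def cocomo_basic(kloc):
    effort = 2.4 * kloc ** 1.05          # total effort in person-months
    dev_time = 2.5 * effort ** 0.38      # development time in months
    people = effort / dev_time           # average number of people required
    return effort, dev_time, people

effort, dev_time, people = cocomo_basic(32)  # a hypothetical 32 KLOC project
print(f"{effort:.1f} person-months over {dev_time:.1f} months "
      f"with ~{people:.1f} people")
```

Note that every output depends on the KLOC input, which is exactly the "need a good size estimate" problem listed above.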
4. Requirements
1) User requirement (definition) vs. System requirement (specification)
2) functional requirements vs. non-functional requirements vs. domain requirements
3) SMART requirements: specific, measurable, attainable, realizable, traceable
5. Requirement Engineering
1) Requirement engineering process: feasibility study → elicitation and analysis → specification → validation
2) feasibility study:
 a short focused study (2-3 weeks)
 Organizational objective, technical, economic, operational, schedule, legal and political
3) Elicitation and analysis: work with customers on gathering domain, services and constraints information
 stakeholder
 problems
 Elicitation and analysis process: discovery → classification and organization → prioritization and negotiation → documentation
 Iterative: business req. → user req. → system req.
 Interview: closed, open, mixed
 Ethnography: observations and contextual interview
i. Benefits?
ii. Issues?
 Scenarios: real-life example of how a system can be used
i. basic structure
ii. How to write a good scenario? simple sentence; no conditions; observables only; specific
message
4) Specification: requirement specification items and use case model
 use case: generalization of a collection of scenarios
i. actors vs. stakeholders
ii. Basic structure: how to document exceptions?
iii. Writing guide: verb-noun name; distinguish actors’ actions from system’s actions; active voice;
complete; NO user interface; 2-3 pages; scenario writing guides also apply
 use case diagram:
i. UML: Unified Modeling Language
ii. four components: actors, use cases, system boundary box, relationships
iii. primary actors on the left, secondary actors on the right
iv. <<include>>:
(1). multiple base cases share the same inclusion case OR
(2). the inclusion case is an important part of the base case
v. <<extend>>: the extension case consists of additional behavior that can incrementally augment
the behavior of the base use case.
 Goal of refining use cases: completeness and correctness
5) Requirement validation:
 Review: SMART requirements
i. guidelines for effective review
 Prototyping: quickly generate something that can be validated by users
i. throw away
ii. fast
iii. for requirement elicitation, validation, design, etc.
6. HCI Design
1) CLI, UI, MCI, HCI, GUI
2) What is HCI: input and display devices
3) Limitations of people: memory, stress, physical limitation
4) Three golden rules
 Place users in control
i. Use modes judiciously (modeless)
(1). application and system modal
ii. Allow users to use either the keyboard or mouse (flexible)
iii. Allow users to change focus (interruptible)
iv. Display descriptive messages and text (helpful)
v. Provide immediate and reversible actions, and feedback (forgiving)
vi. Provide meaningful paths and exits (navigable)
vii. Accommodate users with different skill levels (accessible)
viii. Make the user interface transparent (facilitative)
ix. Allow users to customize the interface (preferences)
x. Allow users to directly manipulate interface objects (interactive).
 Reduce users’ memory load
i. Relieve short-term memory (remember)
ii. Rely on recognition, not recall (recognition)
iii. Provide visual cues (inform)
iv. Provide defaults, undo, and redo (forgiving)
v. Provide interface shortcuts (frequency)
vi. Use real-world metaphors (transfer)
vii. Use progressive disclosure (context)
viii. Promote visual clarity (organize)
 Make the user interface consistent
i. Sustain the context of users’ tasks (continuity)
ii. Maintain consistency within and across products (experience)
iii. Keep interaction results the same (expectations)
iv. Provide aesthetic appeal and integrity (attitude)
v. Encourage exploration (predictable)
7. Architectural Design
1) What is architectural design: decompose into sub-systems and identify the control and communications among
sub-systems
2) When should it be performed: early stage of design; in parallel with later stage of specification
3) Non-functional requirement is the key to make architectural design decisions!
 Performance: Localize critical operations and minimize communications; Use large rather than fine-grain
components.
 Security: Use a layered architecture with critical assets in the inner layers.
 Safety: Localize safety-critical features in a small number of sub-systems.
 Availability: Include redundant components and mechanisms for fault tolerance.
 Maintainability: Use fine-grain, replaceable components.
4) Architectural design conflicts: examples and what to do
5) How to use a block diagram to demonstrate your architecture design
6) Architecture style: generic reusable architectural models
 Repository
 Client server
 Layered
For each: features, examples, advantages and disadvantages
7) How to combine architecture styles
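The layered style listed above can be sketched in code. This is a minimal illustration, not anything from the course: all class and method names are hypothetical, and the point is only that each layer depends solely on the layer directly below it.

```python
# Layered architecture sketch: presentation -> business logic -> data.
# Each layer calls only the layer immediately beneath it.

class DataLayer:
    def __init__(self):
        self._rows = {}                      # in-memory stand-in for storage
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows.get(key)

class BusinessLayer:
    def __init__(self, data):
        self.data = data                     # depends only on the layer below
    def register_user(self, name):
        if not name:
            raise ValueError("name required")  # domain rule lives here
        self.data.save(name, {"name": name})
        return name

class PresentationLayer:
    def __init__(self, logic):
        self.logic = logic
    def handle_request(self, name):
        try:
            return f"created {self.logic.register_user(name)}"
        except ValueError as e:
            return f"error: {e}"

ui = PresentationLayer(BusinessLayer(DataLayer()))
print(ui.handle_request("ada"))    # created ada
print(ui.handle_request(""))       # error: name required
```

Swapping the `DataLayer` for a real database touches only one layer — the maintainability advantage of the style; the extra indirection on every call is its usual performance disadvantage.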
8. Class Diagrams
1) What are analysis and design phase?
2) Object models: object classes and their relationships
3) Differences among domain, design and implementation level class diagrams
4) How to draw a domain level class diagram given the requirements?
 Step 1: identify objects
i. Name identification method: how to perform it?
ii. Class name: nouns
 Step 2: identify associations
i. description: one or two words verb phrase
ii. navigation: open arrow pointing to B if A knows B and B doesn’t know A
iii. multiplicity on each end of the association: 0..1, 1, *, 1..*
 Step 3: refine relationships
i. inheritance: “is-a” relationship
(1). make sure the child is not simply an instance of the parent
(2). problems with multiple inheritance
ii. aggregation: “part-of” relationship
(1). still need to show multiplicities
(2). composition: if A doesn’t exist, B cannot exist.
 Step 4: add major attributes and operations
i. attributes are noun_phrases connected by _
ii. most attributes can be identified during name identification process
iii. if A and B have association/aggregation relationship, don’t list B as an attribute of A!
iv. operations are verb phrases or nouns if value-returning
v. use “verb identification” method to identify operations
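The aggregation-vs-composition distinction from step 3 can be sketched in code. All class names here are illustrative, not from the course:

```python
# Composition vs aggregation, expressed through object lifetime.

class Engine:
    def __init__(self, power_kw):
        self.power_kw = power_kw

class Car:
    """Composition: the Car creates and owns its Engine, so the Engine
    cannot exist without the Car."""
    def __init__(self, power_kw):
        self.engine = Engine(power_kw)   # part is created inside the whole

class Driver:
    def __init__(self, name):
        self.name = name

class Fleet:
    """Aggregation: Drivers exist independently; the Fleet only refers
    to them (multiplicity: Fleet 1 -- * Driver)."""
    def __init__(self):
        self.drivers = []
    def add(self, driver):
        self.drivers.append(driver)

d = Driver("ada")          # created independently of any Fleet
fleet = Fleet()
fleet.add(d)               # the Driver outlives any particular Fleet
car = Car(power_kw=90)
print(car.engine.power_kw)
```

Note that `Driver` is not listed as an attribute of `Fleet` in a class diagram — the association already captures it, which is exactly guideline iii above.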
5) CRC (Class Responsibility Collaboration) method
 What should be included in a CRC card
 CRC design process
6) Object oriented design process
7) High cohesion and low coupling:
 Cohesion:
i. Coincidental cohesion (worst): partition randomly.
ii. Logical cohesion: logically categorized to do the same thing.
iii. Temporal cohesion: grouped by when they are processed.
iv. Procedural cohesion: grouped because they always follow a certain sequence of execution.
v. Communicational cohesion: operate on the same data.
vi. Functional cohesion (best): single well defined task per module
 Coupling:
i. Content coupling (Worst/highest): public attributes or goto statement
ii. Common coupling: global variable
iii. Control coupling: module A controls B via control flag
iv. Stamp coupling: pass composite data structure and use only part of it
v. Data coupling (Best/lowest): a list of simple parameters and use all of them
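The difference between stamp coupling and data coupling can be made concrete with a small sketch (function and field names are hypothetical):

```python
# Stamp coupling: pass a composite data structure but use only part of it.
def area_stamp(shape):
    return shape["width"] * shape["height"]   # ignores every other field

# Data coupling: pass a list of simple parameters and use all of them.
def area_data(width, height):
    return width * height

shape = {"width": 3, "height": 4, "color": "red", "owner": "ada"}
print(area_stamp(shape))   # 12 — but callers must build the whole composite
print(area_data(3, 4))     # 12 — lowest coupling, easiest to reuse and test
```

The stamp-coupled version forces every caller (and every test) to construct a full `shape` record even though only two fields matter, which is why data coupling sits at the "best" end of the scale.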
9. Software testing, verification and validation
1) Two V&V approaches:
 Static: inspections and review.
 Dynamic: software testing; execution based.
2) Human makes error; a fault is the representation of an error; failure occurs when a fault executes.
 Three conditions necessary for a failure to be observed: reachability, infection and propagation.
3) Testing vs. debugging: testing finds failures while debugging looks for the faults causing those failures.
4) Cost of fixing a defect grows exponentially.
5) You should not test your own code!
6) What is a test case?
 inputs and expected outputs
 Also include ID, purpose, pre and post conditions in test documents.
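A test case with the fields listed above could be documented as a small record (field names here are one possible template, not the course's official one):

```python
# A test case documented with ID, purpose, pre/post conditions,
# inputs, and expected outputs.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    id: str
    purpose: str
    preconditions: list
    inputs: dict
    expected_output: object
    postconditions: list = field(default_factory=list)

tc = TestCase(
    id="TC-001",
    purpose="Reject a withdrawal larger than the balance",
    preconditions=["account exists", "balance == 100"],
    inputs={"amount": 150},
    expected_output="error: insufficient funds",
    postconditions=["balance unchanged"],
)
print(tc.id, "-", tc.purpose)
```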
7) Activities of test engineering:
 Test design: know the requirements and/or the structure of the system
 Test automation: write scripts, build test harness and stubs
 Test execution
 Test evaluation: compare actual outputs with expected outputs; also need automation.
8) Test case design techniques:
 Functional testing (black-box):
i. need to know requirement specification
ii. should be able to trace back to all requirements: traceability
iii. start early: right after requirement specification is done.
iv. Partition testing:
(1). Partition inputs with the same behavior into one equivalence class.
(2). ONLY one test case is generated from each equivalence class.
v. Boundary testing: testing on the edge and both sides of the edge of legal input values.
 Structural testing (white-box)
i. need to know the structure of the program
ii. use different types of coverage criteria
iii. Benefits:
(1). Explicitly states the extent to which the software is tested
(2). Makes testing management more meaningful
 Black-box vs. white-box: neither is perfect; we should use both.
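The partition and boundary techniques above can be sketched for a hypothetical validator that accepts ages 0..120 (the function and its range are made up for illustration):

```python
# Equivalence partitioning + boundary testing for a simple range check.

def is_valid_age(age):
    return 0 <= age <= 120

# Three equivalence classes — below range, in range, above range —
# with one representative test case per class.
assert is_valid_age(-5) is False     # class: age < 0
assert is_valid_age(30) is True      # class: 0 <= age <= 120
assert is_valid_age(500) is False    # class: age > 120

# Boundary testing: the edges and both sides of each edge.
for age, expected in [(-1, False), (0, True), (1, True),
                      (119, True), (120, True), (121, False)]:
    assert is_valid_age(age) is expected

print("all partition and boundary tests passed")
```

Boundary tests are worth the extra cases because off-by-one faults (e.g. writing `<` instead of `<=`) survive partition testing but fail at the edges.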
9) V model: the design of different levels of testing is based on different stages of development.
10) Testing levels:
 Unit testing:
i. done together with coding
ii. mainly white-box testing
iii. test harness: simulating other parts of the system (the environment)
iv. test stub: minimal function implementations to check the logic of calling functions and
interfaces between calling and called functions.
 Module/subsystem testing:
i. test the subsystem as a whole.
ii. starting from this level, we mainly use black-box testing.
 Integration testing:
i. done incrementally as new subsystems are added.
ii. focus on the interfacing
iii. a good time to tune for performance
 System testing:
i. after the system is completed
ii. focus on non-functional requirements (also should validate the functional requirements)
 Acceptance testing:
i. customer testing
ii. alpha test vs. beta test
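The unit-testing stub idea from level one above can be sketched as follows. All names are hypothetical: the real payment gateway is replaced by a minimal implementation so the calling function can be tested in isolation.

```python
# Unit testing with a test stub: checkout() is the unit under test;
# GatewayStub stands in for the not-yet-integrated payment subsystem.

def checkout(cart_total, gateway):
    if cart_total <= 0:
        return "nothing to pay"
    return "paid" if gateway.charge(cart_total) else "declined"

class GatewayStub:
    """Minimal function implementation: just enough logic to check the
    caller's behavior and the interface between caller and callee."""
    def __init__(self, approve):
        self.approve = approve
        self.charged = []            # record calls so tests can inspect them
    def charge(self, amount):
        self.charged.append(amount)
        return self.approve

stub = GatewayStub(approve=True)
assert checkout(50, stub) == "paid"
assert stub.charged == [50]          # the interface was exercised correctly
assert checkout(50, GatewayStub(approve=False)) == "declined"
assert checkout(0, GatewayStub(approve=True)) == "nothing to pay"
print("stub-based unit tests passed")
```

Because the stub records every `charge` call, the test checks both the unit's logic and the calling interface — the two things the outline says stubs are for.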
11) Regression testing: ensure fixing a bug does not introduce new bugs.
 often automated: nightly build and test
 regression test suites are growing as new ways to break a system are discovered
12) Test plan and management:
 Start early!
 Make sure there is enough time scheduled for testing.
 Traceability is important!
10. Software process models (A.K.A. software life cycle models)
1) Software process: structured set of activities required to develop a software system
2) software process model: an abstract representation of a software process
3) Waterfall model:
 The result of each phase is document(s) that are approved (“signed off”).
 The following phase will not start until the previous phase is finished.
 Freeze output document(s) after one phase is finished.
 No going back.
 In reality: phases may overlap; may rewind back to the previous phase.
 Advantages:
i. simple
ii. easy to manage. why?
iii. similar to other engineering process models
 Problem: hard to adapt to changes!
 When to use: requirements are fixed; changes are limited; part of a larger project with other
engineering fields involved.
4) Evolutionary development:
 Develop the system in many small but complete iterations.
 Each iteration results in a working deliverable.
 Present each deliverable to the customer for feedback.
 Allow intermediate corrections of requirements, specifications, and plans to make sure that the project
is targeting the right direction.
 Examples:
i. Exploratory development
(1). Start with the well-understood part of the requirements
(2). Add new features as proposed by the customer.
ii. throwaway prototyping
(1). mainly for experimenting when
a. Customers don’t know exactly what they want
b. We want to analyze the usability of an HCI
c. We are not sure if a particular approach to solving a problem will work
d. We have multiple approaches to solving a problem and we want to experiment to see which is better.
(2). NEVER use the prototype in the actual deliverable!
 Problems:
i. lack of process visibility
ii. systems are often poorly structured
iii. may need special skills
5) Component-based development:
 Based on systematic reuse where systems are integrated from existing components or COTS
(Commercial-off-the-shelf) systems.
 Process stages: Requirement analysis → Component assessment and specification → Requirements modification → System design with reuse → Development, integration and testing.
 Advantages:
i. Reduce the amount of software to be developed
ii. Reduce costs and risks
iii. Usually faster delivery of the software
 Problem: requirement compromises are inevitable.
6) Costs of software engineering activities:
 Roughly 60% of costs are development costs, 40% are testing costs.
 For custom software, evolution costs often exceed development costs.
 Costs vary depending on the type of system being developed and the requirements of system attributes
such as performance and system reliability.
 Distribution of costs depends on the process model that is used.
11. Ethics
1) Issues of professional responsibility:
 Confidentiality
 Competence
 Intellectual property right
 Computer misuse
2) ACM/IEEE Code of Ethics: 8 principles