Grading Effectiveness of Software and Processes
Reading
• On the web site:
– Essay on Objects and Modeling
– CMM.pdf - Capability Maturity Model (download the PDF file)
• READ EVERY SLIDE IN THIS LECTURE
Team Issues
• If any team member deserves less-than-full credit, please let me know.
SEI CMM Levels –
in practical terms
• I - Ad hoc (Chaos) - hero-based, unrepeatable
• II - Software Project Management - schedules and critical paths
• III - Sanctioned Software Engineering - architecture and lifecycle stages
• IV - Measurement
• V - Closed-loop (measurement improves process)
Some have added
• Level 0 - stupidity and self destruction
Facts about the Levels
• Can’t start at Level 3
• Can’t skip levels
• “C” encourages Level 1 Behavior
• Ada encourages Level 3 Behavior / Ada is dead
• Most companies at Level 1
• CASE tools do not help
Level V - 1995
• 6 companies:
– 1 division of NASA
– 1 division of Lockheed
– 1 division of Motorola
– 3 in India
Level V - 1999
• 7 companies:
– 1 division of NASA
– 1 division of Lockheed
– 1 division of Motorola
– 3 in India
– 1 in Japan
Level V - 2003
• 53 companies
4 aspects of SW “Effectiveness”
1. Defects:
– Defect – a variance from intent
– Failure – an error in operation caused by a variance from intent
– Error – an error in operation not caused by a variance from intent (a “bug”)
There is so much attention paid to design, intent, and anticipation of operation that anything that works as planned, even if unsuccessful, is considered a success.
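To make the three terms concrete, here is a minimal sketch (not from the slides); the spec wording, function name, and inputs are invented for illustration:

```python
# Hypothetical intent (spec): "round_to_int(x) returns x rounded to the
# NEAREST integer."

def round_to_int(x: float) -> int:
    return int(x)  # DEFECT: truncates instead of rounding -- the code
                   # varies from the stated intent.

print(round_to_int(2.7))  # FAILURE: prints 2, not 3 -- an error in
                          # operation caused by the variance above.

try:
    round_to_int(float("nan"))
except ValueError:
    # ERROR (a "bug"): an error in operation NOT caused by a variance
    # from intent -- the intent never said what to do with NaN at all.
    print("bug: unanticipated input crashed the routine")
```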
2. Verification and Validation
Verification – all activities throughout the
lifecycle that ensure that interim steps
support the specification
Validation – verifying that the end product
meets specification and answers the
requirements.
Again – does it work? Doesn’t matter.
3. Grades – SUBJECTIVE opinions about software
4. Metrics – software measurements,
whether based on subjective opinions or
not.
Metrics participate heavily in “audits”.
An organization in Chaos will judge software
subjectively by Grade.
McCall’s Grades
Widely used in the industry
1. Auditability – can it be checked?
2. Accuracy – precision of results
3. Commonality – use of industry standards
4. Completeness – all requirements met?
5. Conciseness – no more than needed
6. Consistency – company standards used
7. Data Commonality – strong data typing
8. Error Tolerance – ability to recover
(completely) from an extreme condition,
not a graceful shutdown
9. Execution Efficiency – is it a dog?
10. Expandability – at what point does the
architecture stop supporting changes:
• House of Cards – Structural Limit
• Justification Limit
11. Generality – the utility of its components
12. Hardware Independence
13. Instrumentation – monitoring,
diagnostics, and annunciation
14. Modularity – coupling and cohesion
15. Operability – ease of use
16. Security 1 – protection of data
17. Security 2 – protection of programs
18. Self-documentation – in use and design
19. Simplicity – clarity, freedom from
unneeded cleverness
20. SW System Independence –
independence from an isolated approach
– speaks to the likelihood that the
industry can and will contribute to the
system’s quality
21. Traceability – requirements origin of
modules
22. Training – effort needed to become
proficient in its theory (not in its use,
which is “Operability”)
McCall’s Attributes
What’s the difference? You tell me…
1. Correctness – does it work?
2. Reliability – with precision? All the time?
3. Efficiency – as well as it could?
4. Integrity – is it secure against unintended use?
5. Usability – can it be run with no complications?
6. Maintainability – can it be fixed?
7. Flexibility / Enhance-ability – can it be changed?
8. Testability – can its inner workings be audited?
9. Portability – is it usable on another platform?
10. Reusability – usable after its first deployment?
11. Interoperability – can it interface to another system?
Grades vs. Attributes
• Attributes are collections of grades which
combine to form “more practical” criteria.
• Not just Q/A people should care –
everyone should
Personnel
• Project Manager – the accountant
• Systems Engineer – problem domain expert
• Software Project Manager – development &
methodology
• Software Architect – doesn’t dirty his/her
hands with code
• Lead Software Designer – coders’ point of
contact
• Programmers – lowest on the food chain
Project Planning Grades
• Scope defined
• Adequate schedule and budget
• Adequate resources available and
allocated
• Good basis for estimates (what’s been
guessed?)
• Critical Path is real
• Realistic schedule and budget
System Engineering Grades
Concerns over the software’s contribution to
a problem solution
• Well partitioned – HW and SW and
environment
• All interfaces defined
• Precision and performance bounds
reasonable and adequate
• Design constraints in place – everything
targets the solution
• Best solution?
• Technically Feasible?
• Mechanisms for validation in place?
SEI Grades
• Engineering principles applied to design
• Requirements compliance
• Reviews, walkthroughs
• Documentation
• Modularity
• Coding standards
• Simplicity and clarity
• Control over changes
• Extensive test – throughout (not after)
SW Design Grades
• Architecture matches requirements
• Modularity / cohesion / coupling
• All module interfaces designed
• Data dictionary – structure, content, type, range, defaults, domain, flow (sketched below)
• Maintainable
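One way to picture the data dictionary grade above: a minimal sketch of what a single entry might record. The field layout and the example values are assumptions, not anything the slides prescribe:

```python
from dataclasses import dataclass

@dataclass
class DataDictionaryEntry:
    """One entry in a design-level data dictionary (hypothetical layout)."""
    name: str     # structure / content: what the item is called and holds
    type: str     # declared type
    range: str    # legal value range
    default: str  # default value, if any
    domain: str   # problem-domain meaning
    flow: str     # where it originates and which modules consume it

# Hypothetical example entry:
engine_temp = DataDictionaryEntry(
    name="engine_temp",
    type="float (degrees C)",
    range="-40.0 .. 150.0",
    default="0.0",
    domain="coolant temperature reported by the engine sensor",
    flow="sensor driver -> monitoring module -> operator display",
)
```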
Implementation Grades
• Accomplish desired function
• Consistent with the architecture
• Clarity, not cleverness or complexity
• Error handling
• I/O to the module extensively designed
• Debuggable, testable, maintainable
• Design properly translated into code
• Coding standards used
• Documented
• Global TYPES, not variables
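The last grade above ("Global TYPES, not variables") can be read as: share type definitions across modules rather than mutable global state. A minimal sketch of that reading, with invented names; the slides do not prescribe any particular construct:

```python
from dataclasses import dataclass

# Discouraged: a mutable global VARIABLE that any module can overwrite.
current_sensor_reading = 0.0

# Preferred reading of the grade: a global TYPE that modules share,
# while each instance stays local to the code that owns it.
@dataclass(frozen=True)
class SensorReading:
    channel: int
    value: float      # engineering units
    timestamp: float  # seconds since start

def sample(channel: int) -> SensorReading:
    # Hypothetical acquisition; returns a typed value instead of
    # updating shared global state.
    return SensorReading(channel=channel, value=42.0, timestamp=0.0)
```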
Maintenance Grades
• Have side effects of change been
considered?
• Have RFCs / STRs been properly documented?
• Relies on the existence of a maintenance
procedure
McCabe’s Complexity Measure
• Uses the count of decision paths within a
single module to gauge complexity
• Assumes complexity by that measure
reflects unreliability
• Assumes complexity by that measure
reflects testing difficulty
• Appears to be a practical limit for module
size
• Useless in OO and ultra-powerful systems
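A rough sketch of the measure, assuming V(G) = number of decision points + 1 and a C-like source text; the keyword counting below is a crude stand-in for building the real control-flow graph, not McCabe's published tooling:

```python
import re

# Decision-introducing keywords, as a crude stand-in for the real
# control-flow graph (assumption: C-like source, no preprocessing).
DECISION_KEYWORDS = ("if", "for", "while", "case", "&&", "||", "?")

def cyclomatic_complexity(source: str) -> int:
    """Approximate V(G) = number of decision points + 1 for one module."""
    decisions = 0
    for kw in DECISION_KEYWORDS:
        if kw.isalpha():
            # Whole-word match for keywords like "if" and "while".
            decisions += len(re.findall(r"\b" + kw + r"\b", source))
        else:
            # Operators like "&&" counted as plain substrings.
            decisions += source.count(kw)
    return decisions + 1

module = """
int clamp(int x, int lo, int hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}
"""
print(cyclomatic_complexity(module))  # 3: two decisions + 1
```

The "practical limit for module size" on the slide is usually quoted as a complexity threshold of around 10 per module.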
SW Failures show themselves…
• Unintended effects
• Over budget
• Exceeded schedule
• Under spec.
• Unsafe
• Undebuggable
• Unenhanceable
• Unmaintainable
• Performance varies and is unmeasurable
• Un-decipherable in theory of operation
• Works under too-narrow a range of conditions
• User difficulties
• Unable to dependably reconstruct an executable
Failure Types
• Errors in specification
• Errors in design
• Errors in implementation
• Errors in measurement
• Errors in customer expectation
• Errors in administration – documentation and configuration management
Testing
• Two-thirds of all errors occur before coding
• Theoretically, non-execution-based tests can eliminate 100% of errors
• Pressure to complete is an anti-testing message
• Organizations in Chaos experience
– failure to define testing objectives
– testing at the wrong lifecycle phase
– ineffective testing techniques
• The biggest problem with testing is boredom
Metrics
• Not subjective, but measurable.
Complexity Metrics
Measure the complexity of…
• Computation – efficiency of the design
• Psychology – what affects programmers’
performance in composing, comprehending,
and modifying the software
• Problem – how difficult is the problem to be
solved?
• Design – how complex is the solution?
• Process – development tools, experience, and
team dynamics
SW Engineering Metrics
measure…
• Mental capacity needed (volume and
difficulty)
• Language proficiency
• Experience needed
• Cost and time for each lifecycle phase
• Productivity (LOC/person/day)
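For the productivity measure above, a minimal sketch of the arithmetic; the project figures are invented:

```python
def productivity_loc_per_person_day(loc: int, people: int, days: int) -> float:
    """Productivity as lines of code per person per day (LOC/person/day)."""
    return loc / (people * days)

# Hypothetical project: 12,000 delivered LOC, 4 people, 60 working days.
print(productivity_loc_per_person_day(12_000, 4, 60))  # 50.0
```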
Safety Metrics
measure…
• Deaths per 1000
• Extent and characteristic of people contact
• Mechanical vs. Electrical vs. SW Control
(technology scale)
• Worst case failure
• Probability of worst case failure
Lifecycle metrics
measure…
• Improperly interpreted requirements
• User-specified errors
• Requirements improperly transcribed
• Analysis omissions (hard to measure)
• Req-to-Design, and Design-to-Code errors
• Coding errors (especially boundary values)
• # of system recompilations (a 70’s thing)
• Data typing errors
Testing Metrics
measure…
• Testing plan and procedure error count
• Testing in wrong lifecycle phase
• Testing setup error count
• Error characterization error count
• Error correction error count
• # of corrected condition injected mistakes
Cost Metrics
measure…
• Labor hours
• Money
• Delay in delivery (calendar)
• Turnover and burnout
Reliability Metrics
• MTBF – mean time between failures
• MTTR – mean time to repair
• Total Time = MTBF + MTTR
• Availability = MTBF / Total Time as a percentage – the chance that it’s operating (worked example below)
• Hard failures requiring restart
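A minimal worked sketch of the availability formula above; the MTBF and MTTR figures are invented:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Availability = MTBF / (MTBF + MTTR), expressed as a percentage."""
    total_time = mtbf_hours + mttr_hours
    return 100.0 * mtbf_hours / total_time

# Hypothetical system: fails every 400 hours on average, takes 2 hours to repair.
print(f"{availability(400.0, 2.0):.2f}%")  # 99.50%
```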
Relevance Metrics
• In the company’s product line
• In the company’s business plan
• Evolution or revision of current product
General form of a metric - everything
you need to know about a measurement
• Factor – reliability
• Characteristic – availability
• Measurement – hrs. available / total hrs.
• Impact – production halted
• Tolerance – < 1% unavailability
• Probability – 80%
• Risk – High
• Candidate for Test – Yes
• Probable Error Lifecycle Stage – Spec & Design
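One way to hold all of these fields together is a single record per metric. A minimal sketch, filled in with the reliability example from this slide; the field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """General form of a metric: everything you need to know about one measurement."""
    factor: str
    characteristic: str
    measurement: str
    impact: str
    tolerance: str
    probability: str
    risk: str
    candidate_for_test: bool
    probable_error_lifecycle_stage: str

# The reliability example from this slide, recorded as one Metric:
reliability = Metric(
    factor="reliability",
    characteristic="availability",
    measurement="hrs. available / total hrs.",
    impact="production halted",
    tolerance="< 1% unavailability",
    probability="80%",
    risk="High",
    candidate_for_test=True,
    probable_error_lifecycle_stage="Spec & Design",
)
```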
let’s do one for safety
• Factor – safety
• Characteristic – Unscheduled landings
• Measurement – lost planes
• Impact – reduced revenues
• Tolerance – 0%
• Probability (if you don’t test) – 50%
• Risk – medium risk
• Candidate for Test – Yes
• Probable Error Lifecycle Stage – All
so what is important about your
project?
cost? safety? ease of use? reliability?
ubiquity? market share? high tech? low
tech? backward compatibility? industry
standard compliance?