May 2011
Q1] (a) Construct class diagram & sequence diagram for Engineering College.
Q1] (b) Construct Use Case diagram & activity diagram for online Airline Reservation System.
Q2] (a) Explain software configuration management and change control management in detail.
Software configuration management (SCM) is an umbrella activity that is applied throughout the
software process. Because change can occur at any time, SCM activities are developed to (1) identify change,
(2) control change, (3) ensure that change is being properly implemented, and (4) report changes to others who
may have an interest. It is important to make a clear distinction between software support and software
configuration management. Support is a set of software engineering activities that occur after software has been
delivered to the customer and put into operation. Software configuration management is a set of tracking and
control activities that begin when a software engineering project begins and terminate only when the software is
taken out of operation. A primary goal of software engineering is to improve the ease with which changes can
be accommodated and reduce the amount of effort expended when changes must be made. In this chapter, we
discuss the specific activities that enable us to manage change.
The output of the software process is information that may be divided into three broad categories: (1)
computer programs (both source level and executable forms); (2) documents that describe the computer
programs (targeted at both technical practitioners and users), and (3) data (contained within the program or
external to it). The items that comprise all information produced as part of the software process are collectively
called a software configuration. As the software process progresses, the number of software configuration items
(SCIs) grows rapidly. A System Specification spawns a Software Project Plan and Software Requirements
Specification (as well as hardware related documents). These in turn spawn other documents to create a
hierarchy of information. If each SCI simply spawned other SCIs, little confusion would result. Unfortunately,
another variable enters the process—change. Change may occur at any time, for any reason. In fact, the First
Law of System Engineering states: “No matter where you are in the system life cycle, the system will change,
and the desire to change it will persist throughout the life cycle.” What is the origin of these changes? The
answer to this question is as varied as the changes themselves. However, there are four fundamental sources of
change:
 New business or market conditions dictate changes in product requirements or business rules.
 New customer needs demand modification of data produced by information systems, functionality
delivered by products, or services delivered by a computer-based system.
 Reorganization or business growth/downsizing causes changes in project priorities or software
engineering team structure.
 Budgetary or scheduling constraints cause a redefinition of the system or product.
Software configuration management is a set of activities that have been developed to manage change
throughout the life cycle of computer software. SCM can be viewed as a software quality assurance activity that
is applied throughout the software process. In the sections that follow, we examine major SCM tasks and
important concepts that help us to manage change.
Change Control:
Change control is vital. But the forces that make it necessary also make it annoying. We worry about
change because a tiny perturbation in the code can create a big failure in the product. But it can also fix a big
failure or enable wonderful new capabilities. We worry about change because a single rogue developer could
sink the project; yet brilliant ideas originate in the minds of those rogues, and a burdensome change control
process could effectively discourage them from doing creative work.
Bach recognizes that we face a balancing act. Too much change control and we create problems. Too
little, and we create other problems. For a large software engineering project, uncontrolled change rapidly leads
to chaos. For such projects, change control combines human procedures and automated tools to provide a
mechanism for the control of change. The change control process is illustrated schematically in Figure 9.5. A
change request is submitted and evaluated to assess technical merit, potential side effects, overall impact on
other configuration objects and system functions, and the projected cost of the change. The results of the
evaluation are presented as a change report, which is used by a change control authority (CCA)—a person or
group who makes a final decision on the status and priority of the change. An engineering change order (ECO)
is generated for each approved change. The ECO describes the change to be made, the constraints that must be
respected, and the criteria for review and audit. The object to be changed is "checked out" of the project
database, the change is made, and appropriate SQA activities are applied. The object is then "checked in" to the
database and appropriate version control mechanisms are used to create the next version of the software.
The "check-in" and "check-out" process implements two important elements of change control—access
control and synchronization control. Access control governs which software engineers have the authority to
access and modify a particular configuration object. Synchronization control helps to ensure that parallel
changes, performed by two different people, don't overwrite one another.
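As a minimal sketch of how the check-out/check-in mechanism can combine these two controls, consider the following (every name here is illustrative, not taken from any real SCM tool):

# Illustrative sketch of check-out/check-in with access control and
# synchronization control; all names are hypothetical.

class ConfigObject:
    def __init__(self, name, authorized):
        self.name = name
        self.authorized = set(authorized)  # engineers allowed to modify (access control)
        self.locked_by = None              # current holder of the object (synchronization control)
        self.version = 1

    def check_out(self, engineer):
        if engineer not in self.authorized:
            # access control: only authorized engineers may modify the object
            raise PermissionError(f"{engineer} may not modify {self.name}")
        if self.locked_by is not None:
            # synchronization control: parallel changes cannot overwrite one another
            raise RuntimeError(f"{self.name} is already checked out by {self.locked_by}")
        self.locked_by = engineer

    def check_in(self, engineer):
        if self.locked_by != engineer:
            raise RuntimeError("only the engineer who checked the object out may check it in")
        self.locked_by = None
        self.version += 1  # version control creates the next version

obj = ConfigObject("login_module.c", authorized={"alice", "bob"})
obj.check_out("alice")
obj.check_in("alice")   # version advances; only now may bob check the object out

While alice holds the object, bob's check_out would raise an error, which is exactly the overwrite protection that synchronization control provides.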
Q2] (b) Explain the open source software life cycle model.
This model demands a systematic and sequential approach to software development that begins
at the system level and progresses through analysis, design, coding, testing and maintenance. Figure 1.1 shows a
diagrammatic representation of this model.
The life-cycle paradigm incorporates the following activities:
System engineering and analysis: Work on software development begins by establishing the requirements for all
elements of the system. System engineering and analysis involves gathering of requirements at the system level,
as well as basic top-level design and analysis. The requirement gathering focuses especially on the software.
The analyst must understand the information domain of the software as well as the required function,
performance and interfacing. Requirements are documented and reviewed with the client.
Design: Software design is a multi-step process that focuses on data structures, software
architecture, procedural detail, and interface characterization. The design process translates requirements into a
representation of the software that can be assessed for quality before coding begins. The design phase is also
documented and becomes a part of the software configuration.
Coding: The design must be translated into a machine-readable form. Coding performs this task. If the design
phase is dealt with in detail, the coding can be done mechanically.
Testing: Once code is generated, it has to be tested. Testing focuses on the logic as well as the function of the
program to ensure that the code is error free and that output matches the requirement specifications.
Maintenance: Software undergoes change with time. Changes may occur on account of errors encountered, to
adapt to changes in the external environment or to enhance the functionality and / or performance. Software
maintenance reapplies each of the preceding life cycle steps to the existing program.
The classic life cycle is one of the oldest models in use. However, there are a few associated problems. Some of
the disadvantages are given below.
Disadvantages:
 Real projects rarely follow the sequential flow that the model proposes. Iteration always occurs and
creates problems in the application of the model.
 It is difficult for the client to state all requirements explicitly. The classic life cycle requires this and it is
thus difficult to accommodate the natural uncertainty that occurs at the beginning of any new project.
 A working version of the program is not available until late in the project time span. A major blunder
may remain undetected until the working program is reviewed, which is potentially disastrous.
In spite of these problems the life-cycle method has an important place in software engineering work.
Some of the reasons are given below.
Advantages:
 The model provides a template into which methods for analysis, design, coding, testing and maintenance
can be placed.
 The steps of this model are very similar to the generic steps that are applicable to all software
engineering models.
 It is significantly preferable to a haphazard approach to software development.
Q3] (a) Why is FTR necessary? How is FTR conducted?
The FTR is a software quality assurance activity whose objectives are to uncover errors in function, logic or
implementation for any representation of the software; to verify that the software under review meets its
requirements; to ensure that the software has been represented according to predefined standards; to achieve
software that is developed in a uniform manner; and to make projects more manageable.
FTR (Formal Technical Review) is also a learning ground for junior developers to learn more about
different approaches to software analysis, design and implementation. It also provides backup and continuity
for people who have not yet been exposed to the software under development. FTR (Formal Technical Review)
activities include walkthroughs, inspection and round robin reviews and other technical assessments. The
above-mentioned methods are different FTR formats.
Formal methods allow a software engineer to create a specification that is more complete, consistent,
and unambiguous than those produced using conventional or object oriented methods. Set theory and logic
notation are used to create a clear statement of facts (requirements). This mathematical specification can then be
analyzed to prove correctness and consistency. Because the specification is created using mathematical
notation, it is inherently less ambiguous than informal modes of representation.
In safety-critical or mission critical systems, failure can have a high price. Lives may be lost or severe
economic consequences can arise when computer software fails. In such situations, it is essential that errors are
uncovered before software is put into operation. Formal methods reduce specification errors dramatically and,
as a consequence, serve as the basis for software that has very few errors once the customer begins using it.
The first step in the application of formal methods is to define the data invariant, state, and operations
for a system function. The data invariant is a condition that is true throughout the execution of a function that
contains a collection of data. The state is the stored data that a function accesses and alters; and operations are
actions that take place in a system as it reads or writes data to a state. An operation is associated with two
conditions: a precondition and a postcondition. The notation and heuristics of sets and constructive
specification—set operators, logic operators, and sequences—form the basis of formal methods.
A specification represented in a formal language such as Z or VDM is produced when formal methods
are applied. Because formal methods use discrete mathematics as the specification mechanism, logic proofs can
be applied to each system function to demonstrate that the specification is correct.
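As a small executable illustration of a data invariant, precondition, and postcondition (plain assertions rather than Z or VDM; the account example is invented):

# Hypothetical example: a data invariant, precondition and postcondition
# written as executable assertions.

class Account:
    def __init__(self, balance):
        self.balance = balance
        self._check_invariant()

    def _check_invariant(self):
        # data invariant: must hold before and after every operation
        assert self.balance >= 0, "invariant violated: negative balance"

    def withdraw(self, amount):
        # precondition: the withdrawal must be covered by the current balance
        assert 0 < amount <= self.balance, "precondition violated"
        old = self.balance
        self.balance -= amount
        # postcondition: the state changed exactly as specified
        assert self.balance == old - amount, "postcondition violated"
        self._check_invariant()

a = Account(100)
a.withdraw(40)   # succeeds; a.withdraw(1000) would fail the precondition

A proof obligation in Z or VDM plays the same role as these assertions, but is discharged mathematically before the system runs rather than checked at runtime.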
Q3] (b) Differentiate between static modeling and dynamic modeling in detail.
Static modeling is used to specify structure of the objects that exist in the problem domain. These are
expressed using class, object and use case diagrams. Dynamic modeling refers to representing the object
interactions during runtime. It is represented by sequence, activity, collaboration and state chart diagrams.
Static modeling is a time-independent view of a system. It describes what is supposed to happen rather
than the numerous possibilities of what might happen; it is therefore more rigid and cannot easily be changed,
which is why it is called static modeling. A use case can be part of both static and dynamic modeling depending
on how it is designed, though normally it is part of static modeling. Dynamic modeling is time dependent; it
shows what an object actually does, with the many possibilities that may arise. It is flexible, but its flexibility is
limited by the design of the system. Interaction, state chart and collaboration diagrams are good examples of
dynamic modeling.
The most notable difference between static and dynamic models of a system is that while a dynamic
model refers to runtime model of the system, static model is the model of the system not during runtime.
Another difference lies in the use of differential equations in dynamic model which are conspicuous by their
absence in static model.
Dynamic models keep changing with reference to time whereas static models are at equilibrium or in a
steady state. Static model is more structural than behavioral while dynamic model is a representation of the
behavior of the static components of the system.
Q4] (a) What is Requirement? Explain different types of requirements.
A requirement can range from a high-level abstract statement of a service or of a constraint to a detailed
mathematical functional specification. Requirements engineering provides the appropriate mechanism for
understanding what the customer wants, analyzing need, assessing feasibility, negotiating a reasonable solution,
specifying the solution unambiguously, validating the specification, and managing the requirements as they are
transformed into an operational system. The requirements engineering process can be described in six distinct
steps:
 Requirements elicitation
 Requirements analysis and negotiation
 Requirements specification
 System modeling
 Requirements validation
 Requirements management
Requirements elicited from different stakeholders frequently conflict. The system engineer must reconcile these conflicts through a process of negotiation. Customers, users
and stakeholders are asked to rank requirements and then discuss conflicts in priority. Risks associated with
each requirement are identified and analyzed. Rough estimates of development effort are made and used to
assess the impact of each requirement on project cost and delivery time. Using an iterative approach,
requirements are eliminated, combined, and/or modified so that each party achieves some measure of
satisfaction.
In the context of computer-based systems (and software), the term specification means different things
to different people. A specification can be a written document, a graphical model, a formal mathematical model,
a collection of usage scenarios, a prototype, or any combination of these. Some suggest that a “standard
template” should be developed and used for a system specification, arguing that this leads to requirements that
are presented in a consistent and therefore more understandable manner. However, it is sometimes necessary to
remain flexible when a specification is to be developed. For large systems, a written document, combining
natural language descriptions and graphical models may be the best approach. However, usage scenarios may
be all that are required for smaller products or systems that reside within well-understood technical
environments.
The System Specification is the final work product produced by the system and requirements engineer.
It serves as the foundation for hardware engineering, software engineering, database engineering, and human
engineering. It describes the function and performance of a computer-based system and the constraints that will
govern its development. The specification bounds each allocated system element. The System Specification also
describes the information (data and control) that is input to and output from the system.
Q4] (b) State different types of coupling and cohesion. Explain any 4 types of coupling and cohesion.
Cohesion:
A cohesive module performs a single task within a software procedure, requiring little interaction with
procedures being performed in other parts of a program. Stated simply, a cohesive module should (ideally) do
just one thing. Cohesion may be represented as a "spectrum." We always strive for high cohesion, although the
mid-range of the spectrum is often acceptable. The scale for cohesion is nonlinear. That is, low-end
cohesiveness is much "worse" than middle range, which is nearly as "good" as high-end cohesion. In practice, a
designer need not be concerned with categorizing cohesion in a specific module. Rather, the overall concept
should be understood and low levels of cohesion should be avoided when modules are designed.
At the low (undesirable) end of the spectrum, we encounter a module that performs a set of tasks that
relate to each other loosely, if at all. Such modules are termed coincidentally cohesive. A module that performs
tasks that are related logically (e.g., a module that produces all output regardless of type) is logically cohesive.
When a module contains tasks that are related by the fact that all must be executed within the same span of time,
the module exhibits temporal cohesion.
As an example of low cohesion, consider a module that performs error processing for an engineering
analysis package. The module is called when computed data exceed prespecified bounds. It performs the
following tasks: (1) computes supplementary data based on original computed data, (2) produces an error report
(with graphical content) on the user's workstation, (3) performs follow-up calculations requested by the user, (4)
updates a database, and (5) enables menu selection for subsequent processing. Although the preceding tasks are
loosely related, each is an independent functional entity that might best be performed as a separate module.
Combining the functions into a single module can serve only to increase the likelihood of error propagation
when a modification is made to one of its processing tasks.
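A sketch of the refactoring this example implies, with each of the five tasks pulled into its own cohesive function (all names and bodies are invented placeholders):

# Hypothetical sketch: the five loosely related tasks split out of one
# "error processing" module into separate, cohesive functions.

def compute_supplementary_data(data):      # task 1
    return [x * 1.05 for x in data]        # placeholder computation

def produce_error_report(data):            # task 2
    print(f"error report: {len(data)} out-of-bounds values")

def perform_followup_calculations(data):   # task 3
    return sum(data) / len(data)

def update_database(data):                 # task 4
    pass  # persistence elided

def enable_menu_selection():               # task 5
    pass  # UI concern elided

def handle_bounds_error(data):
    # a thin coordinator; each function can now change independently
    extra = compute_supplementary_data(data)
    produce_error_report(extra)
    perform_followup_calculations(extra)
    update_database(extra)
    enable_menu_selection()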
Moderate levels of cohesion are relatively close to one another in the degree of module independence.
When processing elements of a module are related and must be executed in a specific order, procedural
cohesion exists. When all processing elements concentrate on one area of a data structure, communicational
cohesion is present. High cohesion is characterized by a module that performs one distinct procedural
task.
Coupling:
Coupling is a measure of interconnection among modules in a software structure. Coupling depends on
the interface complexity between modules, the point at which entry or reference is made to a module, and what
data pass across the interface. In software design, we strive for lowest possible coupling. Simple connectivity
among modules results in software that is easier to understand and less prone to a "ripple effect", caused when
errors occur at one location and propagate through a system. As long as a simple argument list is present (i.e.,
simple data are passed; a one-to-one correspondence of items exists), low coupling (called data coupling) is
exhibited in this portion of structure. A variation of data coupling, called stamp coupling is found when a
portion of a data structure (rather than simple arguments) is passed via a module interface.
Relatively high levels of coupling occur when modules are tied to an environment external to software.
For example, I/O couples a module to specific devices, formats, and communication protocols. External
coupling is essential, but should be limited to a small number of modules within a structure. High coupling also
occurs when a number of modules reference a global data area. The highest degree of coupling, content
coupling, occurs when one module makes use of data or control information maintained within the boundary of
another module. Secondarily, content coupling occurs when branches are made into the middle of a module.
This mode of coupling can and should be avoided.
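The following sketch (hypothetical payroll functions) contrasts data coupling, stamp coupling and global (common) coupling:

# Hypothetical illustration of three coupling levels.
from dataclasses import dataclass

@dataclass
class EmployeeRecord:
    name: str
    hours: float
    rate: float
    department: str   # irrelevant to pay, but exposed by the interface anyway

def compute_pay(hours: float, rate: float) -> float:
    # data coupling: only the simple arguments actually needed are passed
    return hours * rate

def compute_pay_stamp(rec: EmployeeRecord) -> float:
    # stamp coupling: a whole data structure crosses the interface,
    # tying this function to the record's layout
    return rec.hours * rec.rate

CURRENT_RATE = 25.0   # module-level global data area

def compute_pay_common(hours: float) -> float:
    # common (global) coupling: a hidden dependency on shared global state
    return hours * CURRENT_RATE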
Q5] (a) Explain how project scheduling and tracking is done for a software development project.
The system under test should be measured by its compliance to the requirements and the user acceptance
criteria. Each requirement and acceptance criterion must be mapped to the specific test plans that validate
and measure the expected results for each test being performed. The objectives should be listed in order of
importance and weighted by risk.
Feature and functions to be tested:
Every feature and function must be listed for test inclusion or exclusion, along with a description of the
exceptions. Some features may not be testable due to lack of hardware or lack of control, etc. The list should be
grouped by functional area to add clarity. The following is a basic list of functional areas:
1. Backup and recovery
2. Workflow
3. Interface design
4. Installation
5. Procedures
6. Requirements and design
7. Messaging
8. Notifications
9. Error handling
10. System exceptions and third party application faults
Testing approach:
The approach provides the detail necessary to describe the levels and types of testing. The basic V-model
shows which types of testing are needed to validate the system.
Specific test types include functionality testing, performance testing, backup and recovery, security
testing, environmental testing, conversion testing, usability testing, installation and regression testing.
The specific testing methodology should be described and the entry/exit criteria for each phase noted in matrix
by phase.
Testing Process and Procedures:
The order of test execution and the steps necessary to perform each type of test should be described in
sufficient details to provide clear input into the creation of test plans and test cases. Procedures should include
how test data is created, managed and loaded.
Test Compliance:
Every level of testing must have a defined set of entry/exit criteria to validate that all the prerequisites
for a valid test have been met. All mainstream methodologies provide an extensive list of entry/exit criteria and
checklists. In addition to the standard list, additional items should be added based on specific testing
needs.
Testing Tools:
All testing tools should be identified and their use, ownership and dependencies defined. The tool
categories include manual tools, such as templates in spreadsheets and documents, as well as automated tools
for test management, defect tracking, regression testing and performance/load testing.
Q5] (b) Explain Re-Engineering in detail.
Reengineering is radical redesign of an organization's processes, especially its business processes.
Rather than organizing a firm into functional specialties (like production, accounting, marketing, etc.) and
looking at the tasks that each function performs, we should, according to the reengineering theory, be looking at
complete processes from materials acquisition, to production, to marketing and distribution. The firm should be
re-engineered into a series of processes.
Re-engineering is the basis for many recent developments in management. The cross-functional team,
for example, has become popular because of the desire to re-engineer separate functional tasks into complete
cross-functional processes. Also, many recent management information systems developments aim to integrate
a wide number of business functions. Enterprise resource planning, supply chain management, knowledge
management systems, groupware and collaborative systems, Human Resource Management Systems and
customer relationship management systems all owe a debt to re-engineering theory.
Criticisms of re-engineering:
Reengineering has earned a bad reputation because such projects have often resulted in massive layoffs.
This reputation is not altogether unwarranted, since companies have often downsized under the banner of
reengineering. Further, reengineering has not always lived up to its expectations. The main reasons seem to be
that:
 Reengineering assumes that the factor that limits an organization's performance is the ineffectiveness of
its processes (which may or may not be true) and offers no means of validating that assumption.
 Reengineering assumes the need to start the process of performance improvement with a "clean slate,"
i.e. totally disregarding the status quo.
Q6] (a) Compare waterfall model and spiral model of software development.
In the waterfall model the process moves to the next step only after completion of the previous step: first
requirements, then design, then coding, then implementation, then maintenance. No end-user feedback is taken
into consideration along the way; any change in the SRS sends the process back to the first step, to proceed step
by step again.
In the spiral model, by contrast, each step includes testing carried out as that step is completed, so any
error is easy to detect and fix on the spot, and work need not restart from the beginning. In the waterfall model
the process flows from top to bottom like a flow of water, and new changes cannot be incorporated in the
middle of the project development.
The spiral model, on the other hand, is best suited for projects associated with risk. The waterfall model
is cascaded in nature: it goes from one stage to the next and cannot come back to the previous stage. It follows
steps like:
1: Requirement gathering
2: Analysis of the requirement
3: Design
4: Coding
5: Maintenance
In the spiral model the concept of the waterfall model is combined with iterations. Its design also differs
from the waterfall model in that, after completing one stage, we can easily come back to the previous stage. The
spiral model is useful when requirements are changing.
In the waterfall model each step is distinct. After completing one step you can move to the next, and once
you are through with a step you cannot move back; hence the waterfall model is useful only for projects where
the requirements are well understood.
Waterfall Model: (diagram)
Spiral Model: (diagram)
Q6] (b) Explain the COCOMO model used for software estimation.
Most information for initial cost estimation comes from the feasibility study and requirement analysis.
When we estimate costs very early, there is less information upon which to base estimates and, therefore, this
information is less detailed and the estimates may be less accurate. Software cost estimation is important for
making good management decisions. It is also connected to determining how much effort and time a software
project requires.
Cost estimation has several uses:
 It establishes a firm, reliable budget for an in-house project.
 It facilitates competitive contract bids.
 It determines what is most cost effective for the organization.
Software cost estimation provides the vital link between the general concepts and techniques of
economic analysis and the particular world of software engineering. There is no good way to perform a software
cost-benefit analysis, break-even analysis, or make-or-buy analysis without some reasonably accurate method of
estimating software costs, and their sensitivity to various product, project, and environmental factors. These
methods provide an essential part of the foundation for good software management. Without a reasonably
accurate cost-estimation capability, software projects often experience the following problems: Software project
personnel have no firm basis for telling a manager, customer, or salesperson that their proposed budget and
schedule are unrealistic. This leads to optimistic overpromising on software development, low-balling on
competitive software contract bids, and the inevitable overruns and performance compromises as a
consequence.
Software analysts have no firm basis for making realistic hardware-software trade-off analysis during
the system design phase. This often leads to a design in which the hardware cost is decreased, at the expense of
an even larger increase in the software cost.
Project managers have no firm basis for determining how much time and effort each software phase and
activity should take. This leaves managers with no way to tell whether or not the software is proceeding
according to plan. That basically means that the software portion of the project is out of control from its
beginning.
Size Estimation:
Function Points is one method that has been developed from empirical evidence. Many projects were
examined with respect to their different characteristics and the size of the final products, and finally a model
was produced to fit the data.
The Function Points method works best with business-type information processing applications, since those
are the projects that were examined to produce the model. It is also not as widely known, used, or trusted for
estimating SIZE as COCOMO is for estimating EFFORT. Five items determine the ultimate complexity of an
application; take the weighted sum of their counts to get the number of function points (FP) in a system:
Characteristic           Weight
Number of Inputs         4
Number of Outputs        5
Number of Inquiries      4
Number of Files          10
Number of Interfaces     7
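A one-off sketch of the weighted-sum computation using the weights above; the counts are invented for illustration:

# Unadjusted function-point count: weighted sum of the five characteristics.
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}
counts  = {"inputs": 20, "outputs": 12, "inquiries": 8, "files": 6, "interfaces": 3}

fp = sum(counts[k] * WEIGHTS[k] for k in WEIGHTS)
print(fp)   # 20*4 + 12*5 + 8*4 + 6*10 + 3*7 = 253 function points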
Software Cost Modeling Accuracy:
Software cost and effort estimation will never be an exact science. Too many parameters such as
technical, environmental, personal or political can affect the ultimate cost of software and the effort required to
develop it. It is important to recognize that we can't estimate the cost of producing 100,000 source instructions
of software as accurately as we can estimate the cost of producing 100,000 transistor radios. There are many
reasons for this; some of the main ones are:
 Source instructions are not a uniform commodity, nor are they the essence of the desired product.
 Software requires the creativity and co-operation of human beings, whose individual and group behavior
is generally hard to predict.
 Software has a much smaller base of relevant quantitative historical experience and it is hard to add to
the base by performing small controlled experiments.
Today, a software cost estimation model is doing well if it can estimate software development costs
within 20% of the actual costs, 70% of the time, within the class of projects to which it is calibrated. Currently
the Intermediate and Detailed COCOMO models do approximately this well (within 20% of the actual costs, 68
to 70% of the time) over a fairly wide range of applications.
The Basic COCOMO Model:
The Basic model makes its estimates of required effort (measured in staff-months, SM) based
primarily on your estimate of the software project's size (measured in thousands of Delivered Source
Instructions, KDSI):
SM = a * (KDSI)^b
The Basic model also presents an equation for estimating the development schedule (Time to Develop,
TDEV) of the project in months:
TDEV = c * (SM)^d
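For example, with Boehm's published Basic-model coefficients for an organic-mode project (a = 2.4, b = 1.05, c = 2.5, d = 0.38), the two equations can be evaluated as follows:

# Basic COCOMO for an organic-mode project (Boehm's published coefficients).
def basic_cocomo(kdsi, a=2.4, b=1.05, c=2.5, d=0.38):
    sm = a * kdsi ** b       # effort in staff-months
    tdev = c * sm ** d       # development schedule in months
    return sm, tdev

sm, tdev = basic_cocomo(32)  # a 32 KDSI project
print(f"effort = {sm:.0f} SM, schedule = {tdev:.0f} months")   # ~91 SM, ~14 months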
Intermediate COCOMO Model:
The Intermediate COCOMO is an extension of the basic COCOMO model. Here we use the same basic
equation for the model. But coefficients are slightly different for the effort equation. Also in addition to the size
as the basic cost driver we use 15 more predictor variables. These added cost drivers help to estimate effort and
cost with more accuracy. An estimator looks closely at many factors of a project such as amount of external
storage required, execution speed constraints, experience of the programmers on the team, experience with the
implementation language, use of software tools, etc. For each characteristic, the estimator decides where on the
scale of "Very Low", "Low", "Nominal", "High", "Very High" and "Extra High" the project falls. Each
characteristic gives an adjustment factor (from Table 7) and all factors are multiplied together to give an Effort
Adjustment Factor (EAF). If a project is judged nominal in some characteristic, the adjustment factor will be 1
for that characteristic (the Nominal column in Table 7), which means that that factor has no effect on the overall EAF.
The effort equation for the intermediate model has the form of:
SM = EAF * a * (KDSI)^b
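A sketch of the Intermediate computation; a = 3.2 is Boehm's organic-mode coefficient, and the three non-nominal multipliers below are illustrative stand-ins for entries from the cost-driver table:

# Intermediate COCOMO: EAF is the product of the 15 cost-driver multipliers;
# drivers rated Nominal contribute 1.00 and can be omitted from the product.
from math import prod

def intermediate_cocomo(kdsi, multipliers, a=3.2, b=1.05):
    eaf = prod(multipliers)        # Effort Adjustment Factor
    return eaf * a * kdsi ** b     # SM = EAF * a * (KDSI)^b

# e.g. high reliability (1.15), tight timing (1.11), very capable programmers (0.70)
sm = intermediate_cocomo(32, [1.15, 1.11, 0.70])
print(f"{sm:.0f} staff-months")    # ~109 SM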
The Detailed COCOMO Model:
The detailed model differs from the Intermediate model in only one major aspect: the Detailed model
uses different Effort Multipliers for each phase of a project. These phase dependent Effort Multipliers yield
better estimates than the Intermediate model. The six phases COCOMO defines are:
Abbreviation    Phase
RQ              Requirements
PD              Product Design
DD              Detailed Design
CT              Code & Unit Test
IT              Integrate & Test
MN              Maintenance
The phases from Product Design through Integrate & Test are called the Development phases. Estimates
for the Requirements phase and for the Maintenance phase are performed in a different way than estimates for
the four Development phases.
The Programmer Capability cost driver is a good example of a phase dependent cost driver. The Very
High rating for the Programmer Capability Cost Driver corresponds to an Effort Multiplier of 1.00 (no
influence) for the Product Design phase of a project, but an Effort Multiplier of 0.65 is used for the Detailed
Design phase. These ratings indicate that good programmers can save time and money on the later phases of the
project, but they don't have an impact on the Product Design phase because they aren't involved.
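A sketch of the phase-dependent arithmetic, using the Programmer Capability (Very High) figures quoted above for Product Design and Detailed Design; the remaining multipliers and the per-phase nominal efforts are invented for illustration:

# Phase-dependent Effort Multipliers for one cost driver (PCAP rated Very High).
# PD and DD values are the ones quoted above; CT and IT are assumed equal to DD
# purely for illustration.
PCAP_VERY_HIGH = {"PD": 1.00, "DD": 0.65, "CT": 0.65, "IT": 0.65}

# hypothetical nominal effort (staff-months) for the four Development phases
nominal = {"PD": 18.0, "DD": 25.0, "CT": 30.0, "IT": 20.0}

adjusted = {phase: nominal[phase] * PCAP_VERY_HIGH[phase] for phase in nominal}
print(adjusted)                 # Product Design unchanged, later phases reduced
print(sum(adjusted.values()))   # 18.0 + 16.25 + 19.5 + 13.0 = 66.75 SM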
Q7] Write short notes on:
(a) Software Architectural Style:
An architectural style, sometimes called an architectural pattern, is a set of principles: a coarse-grained
pattern that provides an abstract framework for a family of systems. An architectural style
improves partitioning and promotes design reuse by providing solutions to frequently recurring
problems. You can think of architectural styles and patterns as sets of principles that shape an
application. Garlan and Shaw define an architectural style as a vocabulary of component and connector
types, together with a set of constraints on how they can be combined.
1] Client/Server:
Segregates the system into two applications, where the client makes requests to the server. In
many cases, the server is a database with application logic represented as stored procedures.
2] Component-Based Architecture:
Decomposes application design into reusable functional or logical components, which expose
well-defined communication interfaces.
3] Domain Driven Design:
An object-oriented architectural style focused on modeling a business domain and defining
business objects based on entities within the business domain.
4] Layered Architecture:
Partitions the concerns of the application into stacked groups (layers).
5] Message Bus:
An architecture style that prescribes use of a software system that can receive and send messages
using one or more communication channels, so that applications can interact without needing to know
specific details about each other.
6] Service-Oriented Architecture (SOA):
Refers to applications which expose and consume functionality as a service, using contracts and
messages.
(b) Software Testing Strategies:
The system under test should be measured by its compliance to the requirements and the user
acceptance criteria. Each requirement and acceptance criterion must be mapped to the specific test
plans that validate and measure the expected results for each test being performed. The objectives should
be listed in order of importance and weighted by risk.
Feature and functions to be tested:
Every feature and function must be listed for test inclusion or exclusion, along with a description
of the exceptions. Some features may not be testable due to lack of hardware or lack of control, etc. The
list should be grouped by functional area to add clarity. The following is a basic list of functional areas:
1. Backup and recovery
2. Workflow
3. Interface design
4. Installation
5. Procedures
6. Requirements and design
7. Messaging
8. Notifications
9. Error handling
10. System exceptions and third party application faults
Testing approach:
The approach provides the detail necessary to describe the levels and types of testing. The basic
V-model shows which types of testing are needed to validate the system.
Specific test types include functionality testing, performance testing, backup and recovery, security
testing, environmental testing, conversion testing, usability testing, installation and regression testing.
The specific testing methodology should be described and the entry/exit criteria for each phase noted in
matrix by phase.
Testing Process and Procedures:
The order of test execution and the steps necessary to perform each type of test should be
described in sufficient details to provide clear input into the creation of test plans and test cases.
Procedures should include how test data is created, managed and loaded.
Test Compliance:
Every level of testing must have a defined set of entry/exit criteria to validate that all the
prerequisites for a valid test have been met. All mainstream methodologies provide an extensive list
of entry/exit criteria and checklists. In addition to the standard list, additional items should be added
based on specific testing needs.
Testing Tools:
All testing tools should be identified and their use, ownership and dependencies defined. The
tool categories include manual tools, such as templates in spreadsheets and documents, as well as
automated tools for test management, defect tracking, regression testing and performance/load testing.
(c) Types of Maintenance:
All of the software metrics introduced in this chapter can be used for the development of new
software and the maintenance of existing software. However, metrics designed explicitly for
maintenance activities have been proposed. IEEE Std. 982.1-1988 [IEE94] suggests a software maturity
index (SMI) that provides an indication of the stability of a software product (based on changes that
occur for each release of the product). The following information is determined:
MT = the number of modules in the current release
Fc = the number of modules in the current release that have been changed
Fa = the number of modules in the current release that have been added
Fd = the number of modules from the preceding release that were deleted in the current release.
The software maturity index is computed in the following manner:
SMI = [MT - (Fa + Fc + Fd)] / MT
As SMI approaches 1.0, the product begins to stabilize. SMI may also be used as a metric for planning
software maintenance activities. The mean time to produce a release of a software product can be
correlated with SMI and empirical models for maintenance effort can be developed.
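With invented release counts, the index is computed as follows:

# Software maturity index with hypothetical release counts.
def smi(mt, fa, fc, fd):
    return (mt - (fa + fc + fd)) / mt   # SMI = [MT - (Fa + Fc + Fd)] / MT

print(round(smi(mt=940, fa=90, fc=40, fd=12), 3))   # 0.849, approaching 1.0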
Software metrics provide a quantitative way to assess the quality of internal product attributes,
thereby enabling the software engineer to assess quality before the product is built. Metrics provide the
insight necessary to create effective analysis and design models, solid code, and thorough tests.
To be useful in a real world context, the software metric must be simple and computable,
persuasive, consistent, and objective. It should be programming language independent and provide
effective feedback to the software engineer. Metrics for the analysis model focus on function, data, and
behavior—the three components of the analysis model. The function point and the bang metric each
provide a quantitative means for evaluating the analysis model. Metrics for design consider architecture,
component-level design, and interface design issues. Architectural design metrics consider the structural
aspects of the design model. Component-level design metrics provide an indication of module quality by
establishing indirect measures for cohesion, coupling, and complexity. Interface design metrics provide
an indication of layout appropriateness for a GUI.
Software science provides an intriguing set of metrics at the source code level. Using the number
of operators and operands present in the code, software science provides a variety of metrics that can be
used to assess program quality. Few technical metrics have been proposed for direct use in software
testing and maintenance. However, many other technical metrics can be used to guide the testing process
and as a mechanism for assessing the maintainability of a computer program.