CS-6209 Software Engineering 1
Week 1: The Scope of Software Engineering
Module 001: The Scope of Software Engineering
Course Learning Outcomes:
1. Understand what software engineering is and why it is important;
2. Understand that the development of different types of software systems
may require different software engineering techniques;
3. Understand some ethical and professional issues that are important for
software engineers;
4. Understand the scope of software engineering.
Introduction
The term software engineering is composed of two words: software and engineering.
Software is more than just program code. A program is executable code that serves
some computational purpose. Software is a collection of executable programming
code, associated libraries, and documentation. Software made for a specific
requirement is called a software product. Engineering, on the other hand, is about
developing products using well-defined scientific principles and methods. So we can
define software engineering as an engineering branch associated with the development of
software products using well-defined scientific principles, methods, and procedures.
The outcome of software engineering is an efficient and reliable software product. IEEE
defines software engineering as: the application of a systematic, disciplined, quantifiable
approach to the development, operation, and maintenance of software. We can alternatively
view it as a systematic collection of past experience, arranged in the form of
methodologies and guidelines. A small program can be written without using software
engineering principles, but if one wants to develop a large software product, software
engineering principles are absolutely necessary to achieve good-quality software cost
effectively. Without software engineering principles it would be difficult to develop
large programs. In industry, large programs are usually needed to accommodate
multiple functions. A problem with developing such large commercial programs is that their
complexity and difficulty increase exponentially with their size.
Software engineering helps to reduce this programming complexity. Software engineering
principles use two important techniques to reduce problem complexity: abstraction and
decomposition.
The principle of abstraction implies that a problem can be simplified by omitting irrelevant
details. In other words, the main purpose of abstraction is to consider only those aspects of
the problem that are relevant for a given purpose and to suppress the aspects that are not.
Once the simpler problem is solved, the omitted details can be taken into consideration
to solve the next lower level of abstraction, and so on.
Abstraction is a powerful way of reducing the complexity of a problem. The other
approach to tackling problem complexity is decomposition.
In this technique, a complex problem is divided into several smaller problems, and the
smaller problems are then solved one by one. However, an arbitrary
decomposition of a problem into smaller parts will not help. The problem has to be
decomposed so that each component of the decomposed problem can be solved
independently; the solutions of the different components are then combined to get
the full solution.
A good decomposition of a problem should minimize interactions among its
components. If the subcomponents are interrelated, then they cannot be solved
separately and the desired reduction in complexity will not be realized.
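The decomposition idea can be sketched in a few lines of code. The sketch below is illustrative only (none of these function names come from the module text): a small "report" problem is split into three components that can be written and tested independently, then combined into the full solution.

```python
# Decomposition sketch: each function solves one independent subproblem;
# the top-level function only combines their solutions.

def parse_scores(raw: str) -> list[int]:
    """Component 1: turn raw input into data."""
    return [int(tok) for tok in raw.split(",")]

def average(scores: list[int]) -> float:
    """Component 2: compute the result."""
    return sum(scores) / len(scores)

def format_report(avg: float) -> str:
    """Component 3: present the result."""
    return f"Average score: {avg:.1f}"

def report(raw: str) -> str:
    """Full solution: combine the independently solvable components."""
    return format_report(average(parse_scores(raw)))

print(report("70,80,90"))  # Average score: 80.0
```

Note that the components interact only through their return values, which is exactly the minimized interaction a good decomposition aims for: each function can be replaced or tested without touching the others.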
Need for Software Engineering
The need for software engineering arises because of the high rate of change in user
requirements and in the environment in which the software operates.
Large software - It is easier to build a wall than a house or building; likewise,
as the size of software becomes large, engineering has to step in to give its
development a scientific process.
Scalability - If the software process were not based on scientific and engineering
concepts, it would be easier to re-create new software than to scale an existing
one.
Cost - The hardware industry has shown its manufacturing skill, and mass
production has lowered the price of computers and electronic hardware. But the
cost of software remains high if a proper process is not followed.
Dynamic nature - The ever-growing and adapting nature of software depends
heavily on the environment in which the user works. If the nature of the
software keeps changing, new enhancements need to be made to the existing
one. This is where software engineering plays a good role.
Quality management - A better software development process provides a better-quality
software product.
Characteristics of Good Software
A software product can be judged by what it offers and how well it can be used.
Software must satisfy on the following grounds:
Operational
Transitional
Maintenance

Operational
This tells us how well the software works in operation. It can be measured on:
Budget
Usability
Efficiency
Correctness
Functionality
Dependability
Security
Safety

Transitional
This aspect is important when the software is moved from one platform to another:
Portability
Interoperability
Reusability
Adaptability

Maintenance
This aspect describes how well the software can maintain itself in
the ever-changing environment:
Modularity
Maintainability
Flexibility
Scalability
In short, software engineering is a branch of computer science which uses well-defined engineering concepts to produce efficient, durable, scalable, in-budget, and on-time software products.
Software engineering diversity
Software engineering is a systematic approach to the production of software
that takes into account practical cost, schedule, and dependability issues, as
well as the needs of software customers and producers. How this systematic
approach is actually implemented varies dramatically depending on the
organization developing the software, the type of software, and the people
involved in the development process. There are no universal software
engineering methods and techniques that are suitable for all systems and all
companies. Rather, a diverse set of software engineering methods and tools
has evolved over the past 50 years.
Perhaps the most significant factor in determining which software engineering
methods and techniques are most important is the type of application that is
being developed. There are many different types of application including:
1. Stand-alone applications. These are application systems that run on a
local computer, such as a PC. They include all necessary functionality
and do not need to be connected to a network. Examples of such
applications are office applications on a PC, CAD programs, photo
manipulation software, etc.
2. Interactive transaction-based applications. These are applications that
execute on a remote computer and that are accessed by users from their
own PCs or terminals. Obviously, these include web applications such as
e-commerce applications where you can interact with a remote system
to buy goods and services. This class of application also includes
business systems, where a business provides access to its systems
through a web browser or special-purpose client program, and cloud-based services, such as mail and photo sharing. Interactive applications
often incorporate a large data store that is accessed and updated in each
transaction.
3. Embedded control systems. These are software control systems that
control and manage hardware devices. Numerically, there are probably
more embedded systems than any other type of system. Examples of
embedded systems include the software in a mobile (cell) phone,
software that controls anti-lock braking in a car, and software in a
microwave oven to control the cooking process.
4. Batch processing systems. These are business systems that are designed
to process data in large batches. They process large numbers of
individual inputs to create corresponding outputs. Examples of batch
systems include periodic billing systems, such as phone billing systems,
and salary payment systems.
5. Entertainment systems. These are systems that are primarily for personal
use and which are intended to entertain the user. Most of these systems
are games of one kind or another. The quality of the user interaction
offered is the most important distinguishing characteristic of
entertainment systems.
6. Systems for modeling and simulation. These are systems that are
developed by scientists and engineers to model physical processes or
situations, which include many separate, interacting objects. These are
often computationally intensive and require high-performance parallel
systems for execution.
7. Data collection systems. These are systems that collect data from their
environment using a set of sensors and send that data to other systems
for processing. The software has to interact with sensors and often is
installed in a hostile environment such as inside an engine or in a
remote location.
8. Systems of systems. These are systems that are composed of a number of
other software systems. Some of these may be generic software
products, such as a spreadsheet program. Other systems in the assembly
may be specially written for that environment.
Of course, the boundaries between these system types are blurred. If you
develop a game for a mobile (cell) phone, you have to take into account the
same constraints (power, hardware interaction) as the developers of the phone
software. Batch processing systems are often used in conjunction with web-based systems. For example, in a company, travel expense claims may be
submitted through a web application but processed in a batch application for
monthly payment.
You use different software engineering techniques for each type of system
because the software has quite different characteristics. For example, an
embedded control system in an automobile is safety-critical and is burned into
ROM when installed in the vehicle. It is therefore very expensive to change.
Such a system needs very extensive verification and validation so that the
chances of having to recall cars after sale to fix software problems are
minimized. User interaction is minimal (or perhaps nonexistent) so there is no
need to use a development process that relies on user interface prototyping.
For a web-based system, an approach based on iterative development and
delivery may be appropriate, with the system being composed of reusable
components. However, such an approach may be impractical for a system of
systems, where detailed specifications of the system interactions have to be
specified in advance so that each system can be separately developed.
Nevertheless, there are software engineering fundamentals that apply to all
types of software system:
1. They should be developed using a managed and understood
development process. The organization developing the software should
plan the development process and have clear ideas of what will be
produced and when it will be completed. Of course, different processes
are used for different types of software.
2. Dependability and performance are important for all types of systems.
Software should behave as expected, without failures and should be
available for use when it is required. It should be safe in its operation
and, as far as possible, should be secure against external attack. The
system should perform efficiently and should not waste resources.
3. Understanding and managing the software specification and
requirements (what the software should do) are important. You have to
know what different customers and users of the system expect from it
and you have to manage their expectations so that a useful system can
be delivered within budget and to schedule.
4. You should make as effective use as possible of existing resources. This
means that, where appropriate, you should reuse software that has
already been developed rather than write new software.
These fundamental notions of process, dependability, requirements,
management, and reuse are important themes of this book. Different
methods reflect them in different ways but they underlie all professional
software development.
You should notice that these fundamentals do not cover implementation
and programming. I don’t cover specific programming techniques in this
book because these vary dramatically from one type of system to another.
For example, a scripting language such as Ruby is used for web-based
system programming but would be completely inappropriate for embedded
systems engineering.
Software engineering ethics
Like other engineering disciplines, software engineering is carried out within a
social and legal framework that limits the freedom of people working in that
area. As a software engineer, you must accept that your job involves wider
responsibilities than simply the application of technical skills. You must also
behave in an ethical and morally responsible way if you are to be respected as
a professional engineer.
It goes without saying that you should uphold normal standards of honesty
and integrity. You should not use your skills and abilities to behave in a
dishonest way or in a way that will bring disrepute to the software engineering
profession. However, there are areas where standards of acceptable behavior
are not bound by laws but by the more tenuous notion of professional
responsibility. Some of these are:
1. Confidentiality. You should normally respect the confidentiality of your
employers or clients irrespective of whether or not a formal
confidentiality agreement has been signed.
2. Competence. You should not misrepresent your level of competence.
You should not knowingly accept work that is outside your
competence.
3. Intellectual property rights. You should be aware of local laws governing
the use of intellectual property such as patents and copyright. You
should be careful to ensure that the intellectual property of employers
and clients is protected.
4. Computer misuse. You should not use your technical skills to misuse
other people’s computers. Computer misuse ranges from relatively
trivial (game playing on an employer’s machine, say) to extremely
serious (dissemination of viruses or other malware).
Software Engineering Code of Ethics and Professional Practice
ACM/IEEE-CS Joint Task Force on Software Engineering Ethics and Professional
Practices
PREAMBLE
The short version of the code summarizes aspirations at a high level of
abstraction; the clauses that are included in the full version give examples
and details of how these aspirations change the way we act as software
engineering professionals. Without the aspirations, the details can become
legalistic and tedious; without the details, the aspirations can become
high-sounding but empty; together, the aspirations and the details form a
cohesive code.
Software engineers shall commit themselves to making the analysis,
specification, design, development, testing and maintenance of software a
beneficial and respected profession. In accordance with their commitment
to the health, safety and welfare of the public, software engineers shall
adhere to the following Eight Principles:
PUBLIC — Software engineers shall act consistently with the public interest.
CLIENT AND EMPLOYER — Software engineers shall act in a manner that is in
the best interests of their client and employer consistent with the public
interest.
PRODUCT — Software engineers shall ensure that their products and
related modifications meet the highest professional standards possible.
JUDGMENT — Software engineers shall maintain integrity and independence
in their professional judgment.
MANAGEMENT — Software engineering managers and leaders shall
subscribe to and promote an ethical approach to the management of
software development and maintenance.
PROFESSION — Software engineers shall advance the integrity and
reputation of the profession consistent with the public interest.
COLLEAGUES — Software engineers shall be fair to and supportive of
their colleagues.
SELF — Software engineers shall participate in lifelong learning regarding
the practice of their profession and shall promote an ethical approach to the
practice of the profession.
Professional societies and institutions have an important role to play in setting
ethical standards. Organizations such as the ACM, the IEEE (Institute of Electrical
and Electronics Engineers), and the British Computer Society publish a code of
professional conduct or code of ethics. Members of these organizations
undertake to follow that code when they sign up for membership. These codes of
conduct are generally concerned with fundamental ethical behavior.
Professional associations, notably the ACM and the IEEE, have cooperated to
produce a joint code of ethics and professional practice. This code exists in both a
short form, shown in Figure 1.3, and a longer form (Gotterbarn et al., 1999) that
adds detail and substance to the shorter version. The rationale behind this code
is summarized in the first two paragraphs of the longer form:
Computers have a central and growing role in commerce, industry, government,
medicine, education, entertainment and society at large. Software engineers are
those who contribute by direct participation or by teaching, to the analysis,
specification, design, development, certification, maintenance and testing of
software systems. Because of their roles in developing software systems, software
engineers have significant opportunities to do good or cause harm, to enable others
to do good or cause harm, or to influence others to do good or cause harm. To
ensure, as much as possible, that their efforts will be used for good, software
engineers must commit themselves to making software engineering a beneficial and
respected profession. In accordance with that commitment, software engineers
shall adhere to the following Code of Ethics and Professional Practice.
The Code contains eight Principles related to the behaviour of and decisions made
by professional software engineers, including practitioners, educators, managers,
supervisors and policy makers, as well as trainees and students of the profession.
The Principles identify the ethically responsible relationships in which individuals,
groups, and organizations participate and the primary obligations within these
relationships. The Clauses of each Principle are illustrations of some of the
obligations included in these relationships. These obligations are founded in the
software engineer’s humanity, in special care owed to people affected by the work
of software engineers, and the unique elements of the practice of software
engineering. The Code prescribes these as obligations of anyone claiming to be or
aspiring to be a software engineer.
In any situation where different people have different views and objectives you
are likely to be faced with ethical dilemmas. For example, if you disagree, in
principle, with the policies of more senior management in the company, how
should you react? Clearly, this depends on the particular individuals and the
nature of the disagreement. Is it best to argue a case for your position from
within the organization or to resign in principle? If you feel that there are
problems with a software project, when do you reveal these to management? If
you discuss these while they are just a suspicion, you may be overreacting to a
situation; if you leave it too late, it may be impossible to resolve the difficulties.
Such ethical dilemmas face all of us in our professional lives and, fortunately, in
most cases they are either relatively minor or can be resolved without too much
difficulty. Where they cannot be resolved, the engineer is faced with, perhaps,
another problem. The principled action may be to resign from their job but this
may well affect others such as their partner or their children.
A particularly difficult situation for professional engineers arises when their
employer acts in an unethical way. Say a company is responsible for developing a
safety-critical system and, because of time pressure, falsifies the safety validation
records. Is the engineer’s responsibility to maintain confidentiality or to alert the
customer or publicize, in some way, that the delivered system may be unsafe?
The problem here is that there are no absolutes when it comes to safety.
Although the system may not have been validated according to predefined
criteria, these criteria may be too strict. The system may actually operate safely
throughout its lifetime. It is also the case that, even when properly validated, the
system may fail and cause an accident. Early disclosure of problems may result in
damage to the employer and other employees; failure to disclose problems may
result in damage to others.
You must make up your own mind in these matters. The appropriate ethical
position here depends entirely on the views of the individuals who are involved.
In this case, the potential for damage, the extent of the damage, and the people
affected by the damage should influence the decision. If the situation is very
dangerous, it may be justified to publicize it using the national press (say).
However, you should always try to resolve the situation while respecting the
rights of your employer.
Another ethical issue is participation in the development of military and nuclear
systems. Some people feel strongly about these issues and do not wish to
participate in any systems development associated with military systems. Others
will work on military systems but not on weapons systems. Yet others feel that
national security is an overriding principle and have no ethical objections to
working on weapons systems.
In this situation, it is important that both employers and employees should make
their views known to each other in advance. Where an organization is involved in
military or nuclear work, they should be able to specify that employees must be
willing to accept any work assignment. Equally, if an employee is taken on and
makes clear that they do not wish to work on such systems, employers should
not put pressure on them to do so at some later date.
The general area of ethics and professional responsibility is becoming more
important as software-intensive systems pervade every aspect of work and
everyday life. It can be considered from a philosophical standpoint where the
basic principles of ethics are considered and software engineering ethics are
discussed with reference to these basic principles. This is the approach taken by
Laudon (1995) and to a lesser extent by Huff and Martin (1995). Johnson’s text
on computer ethics (2001) also approaches the topic from a philosophical
perspective.
However, I find that this philosophical approach is too abstract and difficult to
relate to everyday experience. I prefer the more concrete approach embodied in
codes of conduct and practice. I think that ethics are best discussed in a software
engineering context and not as a subject in their own right. In this book,
therefore, I do not include abstract ethical discussions but, where appropriate,
include examples in the exercises that can be the starting point for a group
discussion on ethical issues.
References and Supplementary Materials
Books and Journals
1. SOFTWARE ENGINEERING, 9th Edition; Ian Sommerville
Online Supplementary Reading Materials
1. Fundamentals of Software Engineering;
https://www.academia.edu/2586877/Lecture_01_Fundamentals_of_Software_Engine
ering; October 14, 2019
2. Introduction to Software Engineering;
https://en.wikibooks.org/wiki/Introduction_to_Software_Engineering; October 14,
2019
Online Instructional Videos
CS-6209 Software Engineering 1
Week 2: Software Engineering Principles
Module 002: Software Engineering Principles
Course Learning Outcomes:
1. Ability to identify, formulate, and solve complex engineering problems by
applying principles of engineering, science, and mathematics.
2. Understand the importance of the stages in the software life cycle,
including a range of software development methodologies.
3. Understand the principles of software engineering.
Introduction
Software engineering is concerned with all aspects of software production from the early
stages of system specification through to maintaining the system after it has been used.
As a discipline, software engineering has progressed very far in a very short period of time,
particularly when compared to classical engineering fields (like civil or electrical
engineering). In the early days of computing, not more than 50 years ago, computerized
systems were quite small. Most of the programming was done by scientists trying to solve
specific, relatively small mathematical problems. Errors in those systems generally had
only “annoying” consequences to the mathematician who was trying to find “the answer.”
Today we often build monstrous systems, in terms of size and complexity.
Principles of Software Engineering
Separation of Concerns
When specifying the behavior of a data structure component, there are often two
concerns that need to be dealt with: basic functionality and support for data
integrity. A data structure component is often easier to use if these two concerns are
divided as much as possible into separate sets of client functions. It is certainly
helpful to clients if the client documentation treats the two concerns separately.
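A minimal sketch may help here, assuming a bounded stack as the data structure component (the class and method names are illustrative, not from the module text). The two concerns the paragraph names are kept in separate sets of client functions: push/pop provide the basic functionality, while is_empty/is_full support data integrity.

```python
# Separation of concerns sketch: functionality and integrity support
# are exposed to clients as separate groups of methods.

class BoundedStack:
    def __init__(self, capacity: int):
        self._items: list = []
        self._capacity = capacity

    # --- Concern 1: basic functionality ---
    def push(self, item) -> None:
        assert not self.is_full(), "push on full stack"
        self._items.append(item)

    def pop(self):
        assert not self.is_empty(), "pop on empty stack"
        return self._items.pop()

    # --- Concern 2: data-integrity queries, offered separately ---
    def is_empty(self) -> bool:
        return not self._items

    def is_full(self) -> bool:
        return len(self._items) == self._capacity

s = BoundedStack(2)
s.push(1)
s.push(2)
print(s.is_full())  # True
print(s.pop())      # 2
```

Because the integrity queries are separate functions, client documentation can also treat the two concerns separately, as the text recommends: one section on what push/pop do, another on when they are safe to call.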
Modularity
The principle of modularity is a specialization of the principle of separation of
concerns. Following the principle of modularity implies separating software into
components according to functionality and responsibility; a responsibility-driven
methodology is commonly used for modularization in an object-oriented context.
Abstraction
The principle of abstraction is another specialization of the principle of separation
of concerns. Following the principle of abstraction implies separating the behavior
of software components from their implementation. It requires learning to look at
software and software components from two points of view: what it does, and how
it does it.
Failure to separate behavior from implementation is a common cause of
unnecessary coupling. For example, it is common in recursive algorithms to
introduce extra parameters to make the recursion work. When this is done, the
recursion should be called through a non-recursive shell that provides the proper
initial values for the extra parameters. Otherwise, the caller must deal with a more
complex behavior that requires specifying the extra parameters. If the
implementation is later converted to a non-recursive algorithm, then the client code
will also need to be changed.
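The non-recursive shell described above can be sketched as follows, using binary search as a stand-in example (the function names are illustrative). The recursive worker needs the extra index parameters lo and hi; clients call only the shell, which supplies their proper initial values, so the worker could later be rewritten as a loop without changing any client code.

```python
# Non-recursive shell sketch: the extra parameters the recursion needs
# are hidden from callers behind a simple wrapper function.

def _search(items, target, lo, hi):
    """Recursive worker: binary search over items[lo..hi]."""
    if lo > hi:
        return -1                       # not found
    mid = (lo + hi) // 2
    if items[mid] == target:
        return mid
    if items[mid] < target:
        return _search(items, target, mid + 1, hi)
    return _search(items, target, lo, mid - 1)

def index_of(items, target):
    """Shell: provides the proper initial values for the extra parameters."""
    return _search(items, target, 0, len(items) - 1)

print(index_of([2, 3, 5, 7, 11], 7))   # 3
print(index_of([2, 3, 5, 7, 11], 4))   # -1
```

Without the shell, every caller would have to pass 0 and len(items) - 1 itself, coupling client code to the recursive implementation in exactly the way the paragraph warns against.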
Anticipation of Change
Computer software is an automated solution to a problem. The problem arises in
some context or domain that is familiar to the users of the software. The domain
defines the types of data that the users need to work with and relationships
between the types of data.
Software developers, on the other hand, are familiar with a technology that deals
with data in an abstract way. They deal with structures and algorithms without
regard for the meaning or importance of the data that is involved. A software
developer can think in terms of graphs and graph algorithms without attaching
concrete meaning to vertices and edges.
Working out an automated solution to a problem is a learning experience for both
the software developers and their clients. Software developers are learning the
domain that the clients work in. They are also learning the values of the client: what
form of data presentation is most useful to the client, what kinds of data are crucial
and require special protective measures.
The clients are learning to see the range of possible solutions that software
technology can provide. They are also learning to evaluate the possible solutions
with regard to their effectiveness in meeting the client’s needs.
Cohesiveness has a positive effect on ease of change. Cohesive components are
easier to reuse when requirements change. If a component has several tasks rolled
up into one package, it is likely that it needs to be split up when changes are made.
Generality
The principle of generality is closely related to the principle of anticipation of
change. It is important in designing software that is free from unnatural
restrictions and limitations. One excellent example of an unnatural restriction or
limitation is the use of two-digit year numbers, which led to the "year 2000"
problem: software that threatened to garble record keeping at the turn of the century.
Although the two-digit limitation appeared reasonable at the time, good software
frequently survives beyond its expected lifetime.
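The two-digit-year pitfall can be shown in a couple of lines (the function names here are illustrative). The restricted representation wraps around at the century boundary and becomes ambiguous, while the general four-digit form carries no such artificial limit.

```python
# Generality sketch: an unnaturally limited representation vs. a general one.

def next_year_2digit(yy: int) -> int:
    """Restricted form: two-digit years wrap at the century."""
    return (yy + 1) % 100

def next_year(year: int) -> int:
    """General form: no artificial limit on the year."""
    return year + 1

print(next_year_2digit(99))  # 0 -- but is that 1900 or 2000?
print(next_year(1999))       # 2000
```

The saved storage of the restricted form looked reasonable when memory was scarce; the general form costs almost nothing today and avoids the ambiguity entirely.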
Incremental Development
An incremental software development process simplifies verification. If you develop
software by adding small increments of functionality, then, for verification, you only
need to deal with the added portion. If there are any errors detected, they are
already partly isolated so they are much easier to correct.
A carefully planned incremental development process can also ease the handling of
changes in requirements. To do this, the planning must identify use cases that are
most likely to be changed and put them towards the end of the development
process.
Consistency
The principle of consistency is a recognition of the fact that it is easier to do things
in a familiar context. For example, coding style is a consistent manner of laying out
code text. This serves two purposes. First, it makes reading the code easier. Second,
it allows programmers to automate part of the skills required in code entry, freeing
the programmer's mind to deal with more important issues.
Consistency serves two purposes in designing graphical user interfaces. First, a
consistent look and feel makes it easier for users to learn to use software. Once the
basic elements of dealing with an interface are learned, they do not have to be
relearned for a different software application. Second, a consistent user interface
promotes reuse of the interface components. Graphical user interface systems have
a collection of frames, panes, and other view components that support the common
look. They also have a collection of controllers for responding to user input,
supporting the common feel. Often, both look and feel are combined, as in pop-up
menus.
CS-6209 Software Engineering 1
Week 2: Software Engineering Principles
Is Software Development an Engineering Discipline?
Engineering involves the systematic application of scientific knowledge to the
solution of problems. The problems that are tackled today are not
that different from those tackled by an electrical engineer creating a circuit,
a chemical engineer devising a manufacturing process, or a mechanical engineer
creating a device.
The fact that developers also take a hands-on role in carrying out the plans
(development, in this case) simply mirrors other fields, where somebody
else executes those plans (e.g., the construction worker).
It is true that most developers also carry out software engineering tasks, and that
our education is often not in programming as such but in software engineering. So
we get our hands dirty, whereas a civil engineer would not.
Information System
An Information System (IS) is a system composed of people and computers that
processes or interprets information. The term is also sometimes used in more
restricted senses, referring only to the software used to run a computerized
database, or to a computer system alone.
Information systems is also an academic field that studies systems with specific
reference to information and the complementary networks of hardware and software
that people and organizations use to collect, filter, process, create and
distribute data. An emphasis is placed on an information system having a
definitive boundary, users, processors, stores, inputs, outputs and the
aforementioned communication networks.
Any specific information system aims to support operations, management and
decision making. An information system is the information and communication
technology (ICT) that an organization uses, and also the way in which people
interact with this technology in support of business processes.
The six components that must come together to produce an information system are:
Hardware
The term hardware refers to machinery. This category includes the computer itself,
which is often referred to as the central processing unit (CPU), and all of its support
equipment. Among the support equipment are input and output devices, storage
devices and communication devices.
Software
The term software refers to computer programs and the manuals (if any) that
support them. Computer programs are machine-readable instructions that direct
the circuitry within the hardware parts of the system to function in ways that
produce useful information from data.
Data
Data are facts that are used by programs to produce useful information. Like
programs, data are generally stored in machine-readable form on disk or tape until
the computer needs them.
Procedures
Procedures are the policies that govern the operation of a computer system.
"Procedures are to people what software is to hardware" is a common analogy that
is used to illustrate the role of procedures in a system.
People
Every system needs people if it is to be useful. People are often the most
overlooked element of a system, when in fact they are probably the component
that most influences the success or failure of information systems. This includes
not only the users, but those who operate and service the computers, those who
maintain the data, and those who support the network of computers.
Feedback
Feedback is another component of the IS, reflecting the fact that an IS may be
provided with feedback on its outputs.
Data is the bridge between hardware and people. This means that the data we
collect is only data until we involve people; at that point, data becomes information.
Types of information system
[Figure: four-level pyramid model of information system types]
The image above is a four level pyramid model of different types of information
systems based on the different levels of hierarchy in an organization.
The "classic" view of Information systems found in the textbooks in the 1980s was
of a pyramid of systems that reflected the hierarchy of the organization. Usually,
transaction processing systems is at the bottom of the pyramid, followed by
management information systems, decision support systems, and ending with
executive information systems at the top. Although the pyramid model remains
useful, since it was first formulated a number of new technologies have been
developed and new categories of information systems have emerged, some of
which no longer fit easily into the original pyramid model.
Some examples of such systems are:
- data warehouses
- enterprise resource planning
- enterprise systems
- expert systems
- search engines
- geographic information system
- global information system
- office automation
A computer-based information system is essentially an IS using computer
technology to carry out some or all of its planned tasks. The basic components of
a computer-based information system are:
Hardware: devices such as the monitor, processor, printer and keyboard, all of
which work together to accept, process, and display data and information.
Software: the programs that allow the hardware to process the data.
Databases: collections of associated files or tables containing related data.
Networks: connecting systems that allow diverse computers to distribute
resources.
Procedures: the commands for combining the components above to process
information and produce the preferred output.
The first four components (hardware, software, database, and network) make up
what is known as the information technology platform. Information technology
workers could use these components to create information systems that watch over
safety measures, risk and the management of data. These actions are known as
information technology services.
Information System Development
Information technology departments in larger organizations tend to strongly
influence the development, use, and application of information technology in the
organizations. A series of methodologies and processes can be used to develop and
use an information system. Many developers now use an engineering approach
such as the system development life cycle (SDLC), which is a systematic procedure of
developing an information system through stages that occur in sequence. An
information system can be developed in house (within the organization) or
outsourced. This can be accomplished by outsourcing certain components or the
entire system. A specific case is the geographical distribution of the development
team (offshoring, global information system).
Geographic information systems, land information systems, and disaster
information systems are examples of emerging information systems, but they
can be broadly considered as spatial information systems. System development
is done in stages which include:
- Problem recognition and specification
- Information gathering
- Requirements specification for the new system
- System design
- System construction
- System implementation
- Review and maintenance
The academic discipline
The field of study called information systems encompasses a variety of topics
including systems analysis and design, computer networking, information security,
database management and decision support systems. Information management
deals with the practical and theoretical problems of collecting and analyzing
information in a business function area including business productivity tools,
applications programming and implementation, electronic commerce, digital media
production, data mining, and decision support. Communications and networking
deal with telecommunication technologies. Information systems bridges
business and computer science, using the theoretical foundations of information
and computation to study various business models and related algorithmic
processes within a computer science discipline.
Computer information system(s) (CIS) is a field studying computers and
algorithmic processes, including their principles, their software and hardware
designs, their applications, and their impact on society, whereas IS emphasizes
functionality over design.
Several IS scholars have debated the nature and foundations of Information
Systems which has its roots in other reference disciplines such as Computer
Science, Engineering, Mathematics, Management Science, Cybernetics, and others.
Information systems can also be defined as a collection of hardware, software,
data, people and procedures that work together to produce quality information.
Information Systems have a number of different areas of work:
- IS strategy
- IS management
- IS development
- IS iteration
- IS organization
There is a wide variety of career paths in the information systems discipline.
"Workers with specialized technical knowledge and strong communications skills
will have the best prospects. Workers with management skills and an
understanding of business practices and principles will have excellent
opportunities, as companies are increasingly looking to technology to drive their
revenue."
Information technology is important to the operation of contemporary businesses,
and it offers many employment opportunities. The information systems field
includes the people in organizations who design and build information systems, the
people who use those systems, and the people responsible for managing those
systems. The demand for traditional IT staff such as programmers, business
analysts, systems analysts, and designer is significant. Many well-paid jobs exist in
areas of Information technology. At the top of the list is the chief information officer
(CIO).
The CIO is the executive who is in charge of the IS function. In most organizations,
the CIO works with the chief executive officer (CEO), the chief financial officer
(CFO), and other senior executives. Therefore, he or she actively participates in the
organization's strategic planning process.
Information systems research
Information systems research is generally interdisciplinary, concerned with the
study of the effects of information systems on the behavior of individuals, groups,
and organizations.
Salvatore March and Gerald Smith proposed a framework for researching different
aspects of Information Technology including outputs of the research (research
outputs) and activities to carry out this research (research activities). They
identified research outputs as follows:
1. Constructs are concepts that form the vocabulary of a domain. They
constitute a conceptualization used to describe problems within the domain
and to specify their solutions.
2. A model is a set of propositions or statements expressing relationships
among constructs.
3. A method is a set of steps (an algorithm or guideline) used to perform a
task. Methods are based on a set of underlying constructs and a
representation (model) of the solution space.
4. An instantiation is the realization of an artifact in its environment.
Furthermore, research activities include:
1. Building an artifact to perform a specific task.
2. Evaluating the artifact to determine if any progress has been achieved.
3. Given an artifact whose performance has been evaluated, it is important to
determine why and how the artifact worked or did not work within its
environment. Therefore, another research activity is theorizing about and
justifying theories of IT artifacts.
References and Supplementary Materials
Books and Journals
Online Supplementary Reading Materials
1. Principles of Software Engineering;
https://www.d.umn.edu/~gshute/softeng/principles.html; October 15, 2019
2. Seven Basic Principles of Software Engineering;
https://csse.usc.edu/TECHRPTS/1983/usccse83-500/usccse83-500.pdf; October 15,
2019
Online Instructional Videos
CS-6209 Software Engineering 1
Week 3: Technical Development
Module 003: Technical Development
Course Learning Outcomes:
1. Identify the life cycle of a software development project;
2. Understand requirements gathering;
3. Understand the different types of testing in developing software;
4. Know the components of developing a system.
Introduction
Software development is a complicated process. It requires careful planning and execution
to meet the goals. Sometimes a developer must react quickly and aggressively to meet
ever-changing market demands. Maintaining software quality can hinder fast-paced software
development, as many testing cycles are necessary to ensure quality products.
This chapter provides an introduction to the software development process. As you will
learn, there are many stages of any software development project. A commercial software
product is usually derived from market demands. Sales and marketing people have firsthand knowledge of their customers’ requirements. Based upon these market requirements,
senior software developers create architecture for the products along with functional and
design specifications. Then the development process starts. After the initial development
phase, software testing begins, and many times it is done in parallel with the development
process. Documentation is also part of the development process because a product cannot
be brought to market without manuals. Once development and testing are done, the
software is released and the support cycle begins. This phase may include bug fixes and
new releases.
Life Cycle of a Software Development Project
Software development is a complicated process comprising many stages. Each stage
requires a lot of paperwork and documentation in addition to the development and
planning process. This is in contrast to the common thinking of newcomers to the
software industry who believe that software development is just “writing code.”
Each software development project has to go through at least the following stages:
- Requirement gathering
- Writing functional specifications
- Creating architecture and design documents
- Implementation and coding
- Testing and quality assurance
- Software release
- Documentation
- Support and new features
Figure 3-1 shows a typical development process for a new product.
There may be many additional steps and stages depending upon the nature of the
software product. You may have to go through multiple cycles during the testing
phase as software testers find problems and bugs and developers fix them before a
software product is officially released. Let us go into some detail of these stages.
Requirement Gathering
Requirement gathering is usually the first part of any software product. This stage
starts when you are thinking about developing software. In this phase, you meet
customers or prospective customers to analyze market requirements and the features
that are in demand. You also find out if there is a real need in the market for the
software product you are trying to develop.
In this stage, marketing and sales people or people who have direct contact with the
customers do most of the work. These people talk to these customers and try to
understand what they need. A comprehensive understanding of the customers’
needs and writing down features of the proposed software product are the keys to
success in this phase. This phase is actually a base for the whole development effort.
If the base is not laid correctly, the product will not find a place in the market. If you
develop a very good software product which is not required in the market, it does
not matter how well you build it. You can find many stories about software products
that failed in the market because the customers did not require them. The marketing
people usually create a Marketing Requirement Document, or MRD, that contains
a formal representation of the market data gathered.
Spend some time doing market research and analysis. Consider your competitors’
products (if any), a process called competitive analysis. List the features required by
the product. You should also think about the economics of software creation at this
point. Is there a market? Can I make money? Will the revenue justify the cost of
development?
Figure 3-1 Typical processes for software development projects.
Writing Functional Specifications
Functional specifications may consist of one or more documents. Functional
specification documents show the behavior or functionality of a software product on
an abstract level. Assuming the product is a black box, the functional specifications
define its input/output behavior. Functional specifications are based upon the
product requirements documentation put forward by people who have contact with
the end user of the product or the customers.
In larger products, functional specifications may consist of separate documents for
each feature of the product. For example, in a router product, you may have a
functional specification document for RIP (Routing Information Protocol), another
for security features, and so on.
Functional specifications are important because developers use them to create
design documents. The documentation people also use them when they create
manuals for end users. If different groups are working in different physical places,
functional specifications and architecture documents (discussed next) are also a
means to communicate among them. Keep in mind that sometimes during the
product development phase you may need to amend the functional specifications
in view of new marketing requirements.
Creating Architecture and Design Documents
When you have all of the requirements collected and arranged, it is the turn of the
technical architecture team, consisting of highly qualified technical specialists, to
create the architecture of the product. The architecture defines different
components of the product and how they interact with each other. In many cases the
architecture also defines the technologies used to build the product. While creating
the architecture documents of the project, the team also needs to consider the
timelines of the project. This refers to the target date when the product is required
to be on the market. Many excellent products fail because they reach the market
either too early or too late. The marketing and sales people usually decide a suitable time
frame to bring the product to market. Based on the timeline, the architecture team
may drop some features of the product if it is not possible to bring the full-featured
product to market within the required time limits.
Once components of the product have been decided and their functionality defined,
interfaces are designed for these components to work together. In most cases, no
component works in isolation; each one has to coordinate with other components of
the product. Interfaces are the rules and regulations that define how these
components will interact with each other. There may be major problems down the
road if these interfaces are not designed properly and in a detailed way. Different
people will work on different components of any large software development
project and if they don’t fully understand how a particular component will
communicate with others, integration becomes a major problem.
For some products, new hardware is also required to make use of technology
advancements. The architects of the product also need to consider this aspect of the
product.
After defining architecture, software components and their interfaces, the next
phase of development is the creation of design documents. At the architecture level,
a component is defined as a black box that provides certain functionality. At the
design documents stage, you have to define what is in that black box. Senior
software developers usually create design documents and these documents define
individual software components to the level of functions and procedures. The design
document is the last document completed before development of the software
begins. These design documents are passed on to software developers and they start
coding. Architecture documents and MRDs typically need to stay in sync, as sales
and marketing will work from MRDs while engineering works from engineering
documents.
Implementation and Coding
The software developers take the design documents and development tools (editors,
compilers, debuggers etc.) and start writing software. This is usually the longest
phase in the product life cycle. Each developer has to write his/her own code and
collaborate with other developers to make sure that different components can
interoperate with each other. A revision control system such as CVS (Concurrent
Versions System) is needed in this phase. There are a few other open source
revision control systems as well as commercial options. The version control system
provides a central repository to store individual files. A typical software project
may contain anywhere from hundreds to thousands of files. In large and complex
projects, someone also needs to decide the directory hierarchy so that files are
stored in appropriate locations. During the development cycle, multiple people may
modify files. If everyone does not follow the rules, this may easily break the whole
compilation and build process. For example, duplicate definitions of the same
variables may cause problems. Similarly, if included files are not written properly,
you can easily create include loops. Other problems pop up when multiple
files are included in a single file with conflicting definitions of variables and
functions.
Coding guidelines should also be defined by architects or senior software
developers. For example, if the software is intended to be ported to other platforms
as well, it should be written to a standard such as ANSI C.
During the implementation process, developers must write enough comments
inside the code so that if anybody starts working on the code later on, he/she is able
to understand what has already been written. Writing good comments is very
important as all other documents, no matter how good they are, will be lost
eventually. Ten years after the initial work, you may find that the only
information still available is what is present inside the code in the form of comments.
Development tools also play an important role in this phase of the project. Good
development tools save a lot of time for the developers, as well as saving money in
terms of improved productivity. The most important development tools for time
saving are editors and debuggers. A good editor helps a developer to write code
quickly. A good debugger helps make the written code operational in a short period
of time. Before starting the coding process, you should spend some time choosing
good development tools.
Review meetings during the development phase also prove helpful. Potential
problems are caught earlier in development. These meetings are also helpful for
keeping track of whether the product is on time or whether more effort is needed
to complete it in the required time frame. Sometimes you may also need to make some changes in
the design of some components because of new requirements from the marketing
and sales people. Review meetings are a great tool to convey these new
requirements. Again, architecture documents and MRDs are kept in sync with any
changes/problems encountered during development.
Testing
Testing is probably the most important phase for long-term support as well as for
the reputation of the company. If you don’t control the quality of the software, it will
not be able to compete with other products on the market. If software crashes at the
customer site, your customer loses productivity as well as money, and you lose
credibility. Sometimes these losses are huge. Unhappy customers will not buy your
other products and will not refer other customers to you. You can avoid this
situation by doing extensive testing. This testing is referred to as Quality Assurance,
or QA, in most of the software world.
Usually testing starts as soon as the initial parts of the software are available.
There are multiple types of testing, and these are explained in this section. Each
of these has its own importance.
Unit Testing
Unit testing is testing one part or one component of the product. The developer
usually does this when he/she has completed writing code for that part of the
product. This makes sure that the component is doing what it is intended to do. This
also saves a lot of time for software testers as well as developers by eliminating
many cycles of software being passed back and forth between the developer and the
tester.
When a developer is confident that a particular part of the software is ready, he/she
can write test cases to test functionality of this part of the software. The component
is then forwarded to the software testing people who run test cases to make sure
that the unit is working properly.
Sanity Testing
Sanity testing is a very basic check to see that all software components compile
together without problems. This is just to make sure that developers have not
introduced conflicting or duplicate definitions of functions or global variables.
Regression or Stress Testing
Regression or stress testing is a process done in some projects to carry out a test
over a longer period of time. This type of testing is used to determine the behavior
of a product when used continuously over a period of time. It can reveal some bugs
in software related to memory leakage. In some cases developers allocate memory but
forget to release it, a problem known as a memory leak. When a test is
conducted for many days or weeks, this problem results in allocation of all of the
available memory until no memory is left. This is the point where your software
starts showing abnormal behavior.
Another potential problem in long-term operation is counter overflow. This occurs
when you increment a counter but forget to decrement it, resulting in an overflow
when the product is used for longer periods.
Regression testing may be started as soon as some components are ready. This
testing process requires a very long period of time by its very nature. The process
should be continued as more components of the product are integrated, because the
integration process and communication through interfaces may create new bugs in
the code.
Functional Testing
Functional testing is carried out to make sure that the software is doing exactly
what it is supposed to do. This type of testing is a must before any software is
released to customers. Functional testing is done by people whose primary job is
software testing, not the developers themselves. In small software projects where a
company can't afford dedicated testers, other developers may also do functional
testing. The key point to keep in mind is that the person who wrote a software
component should not be the person who tests it. A developer will tend to test the
software the way he/she has written it and may easily miss problems in the
software.
The software testers need to prepare a testing plan for each component of the
software. A testing plan consists of test cases that are run on the software. The
software tester can prepare these test cases using functional specifications
documents. The tester may also get help from the developer to create test cases.
Each test case should include methodology used for testing and expected results.
In addition to test cases, the tester may also need to create a certain infrastructure
or environment to test the functionality of a piece of code. For example, you may
simulate a network to test routing algorithms that may be part of a routing product.
The next important job of the tester is to create a service request if an anomaly is
found. The tester should include as much information in the service request as
possible. Typical information included in reporting bugs includes:
- Test case description
- How the test was carried out
- Expected results
- Results obtained
- If a particular environment was created for testing, a description of that
environment
The service request should be forwarded to the developer so that the developer may
correct the problem. Many software packages are available in the market to track
bugs and fix problems in software. There are many web-based tools as well. For a
list of freely available open source projects, go to http://www.osdn.org or
http://www.sourceforge.net and search for “bug track”. OSDN (Open Source
Developers Network) hosts many open source software development projects. You
can find software packages that work with CVS also.
Software Releases
Before you start selling a software product, it is officially released. This means
that you record a state of the software in your repository, make sure that it has
been tested for functionality, and freeze the code. A version number is assigned to
the released software. After releasing the software, development may continue, but
it will not make any change in the released software. The development is usually
carried on in a new branch, and it may contain new features of the product. The
released software is updated only if a bug-fix version is released.
Usually companies assign incremental version numbers following some scheme
when the next release of the software is sent to market. The change in version
number depends on whether the new software contains a major change to the
previous version or only bug fixes and enhancements to existing functionality.
Releases are also important because they are typically compiled versions of a
particular version of the code, and thus provide a stable set of binaries for testing.
Branches
In almost all serious software development projects, a revision or version control
system is used. This version control system keeps a record of changes in source
code files and is usually built in a tree-like structure. When software is released, the
state of each file that is part of the release should be recorded. Future developments
are made by creating branches to this state. Sometimes special branches may also be
created that are solely used for bug fixing.
Release Notes
Every software version contains release notes. These release notes are prepared by
people releasing the software version with the help of developers. Release notes
show what happened in this software version. Typically the information includes:
- Bug fixes
- New functionality
- Details of new features added to the software
- Any bugs that are not yet fixed
- Future enhancements
- Any change a user needs to make in the configuration process
Typically a user should be given enough information to understand the new release
enhancements and decide whether an upgrade is required or not.
Documentation
There are three broad categories of documentation related to software development
processes.
1. Technical documentation developed during the development process. This
includes architecture, functional and design documents.
2. Technical documentation prepared for technical support staff. This includes
technical manuals that support staff use to provide customer support.
3. End-user manuals and guides. This is the documentation for the end user to
assist the user getting started with the product and using it.
All three types of documents are necessary for different aspects of product support.
Technical documents are necessary for future development, bug fixes, and adding
new features. Technical documentation for technical support staff contains
information that is too complicated for the end user to understand and use. The
support staff needs this information in addition to user manuals to better support
customers. Finally each product must contain user manuals.
Technical writers usually develop user manuals which are based on functional
specifications. In the timelines of most software development projects, functional
specifications are prepared before code development starts. So the technical
writers can start writing user manuals while developers are writing code. By the
time the product is ready, most of the work on user manuals has already been
completed.
Support and New Features
Your customers need support when you start selling a product. This is true
regardless of the size of the product, and even for products that are not software
related. The most common support requests from customers are related to one of the
following:
• The customer needs help in installation and getting started.
• The customer finds a bug and you need to release a patch or update to the whole product.
• The product does not fulfill customer requirements and a new feature is required by the customer.
In addition to that, you may also want to add new features to the product for the
next release because competitor products have other features. Better support will
increase your customer loyalty and will create referral business for you.
You may adopt two strategies to add new features. You may provide an upgrade to
the current release as a patch, or wait until you have compiled and developed a list
of new features and make a new version. Both of these strategies are useful
depending on how urgent the requirement for new features is.
Components of a Development System
Like any other system, a development system is composed of many components that
work together to provide services to the developer for the software development
task. Depending upon the requirements of a project, different types of components
can be chosen. Many commercial companies also sell comprehensive development
tools. On Linux systems, all of the development tools are available and you can
choose some of these depending upon your level of expertise with these tools and
your requirements. Typically each development platform consists of the following
components:
• Hardware platform
• Operating system
• Editor
• Compilers and assemblers
• Debuggers
• Version control system
• Collaboration and bug tracking
Let us take a closer look at these components and what role they play in the
development cycle.
Hardware Platform
This is the tangible part of the development system. A hardware platform is the
choice of your hardware, PC or workstation, for the development system. You can
choose a particular hardware platform depending upon different factors as listed
below:
Cost: Depending upon budget, you may choose different types of hardware. Usually
UNIX workstations are costly to buy and maintain. On the other hand, PC-based
workstations are cheap and the maintenance cost is also low.
Performance: Usually UNIX workstations have high performance and stability as
compared to PC-based solutions.
Tools: You also need to keep in mind availability of development tools on a
particular platform.
Development Type: If the target system is the same as the host system on which
development is done, the development is relatively easy and native tools are cheap
as well, compared to cross-platform development tools.
Depending upon these factors, you may make a choice from the available hardware
platforms for development.
If hardware is part of the final product, selection of hardware platform also depends
upon customer/market requirement.
Operating System
Choice of a particular operating system may be made depending upon:
• Cost
• Availability of development tools
• Hardware platform
• Native or cross compiling
Some operating systems are cheaper than others. Linux is an excellent choice, as far
as cost is concerned. Linux is also a very good operating system as it has all of the
development tools available. Now you can install Linux on high-end workstations
from Sun Microsystems, HP, and IBM as well as commodity PC hardware available
everywhere. It provides stability and most of the people are familiar with
development tools. You can also use the operating system for cross-platform
development using GNU tools.
Editors
Editors play an important role in development work. Easy-to-use and feature-rich
editors, like Emacs, increase developers’ productivity. You should look at a few
things while selecting editors. These features include:
• Understanding syntax of language
• Collapsing of context
• Support of tags
• Opening multiple files
• Easy editing for generally used editing functions like cut, copy, paste, search, replace and so on
• Multiple windows
• Support of user defined functions and macros
If you look at the open source community, you can find a lot of good editors
available to developers. The most commonly used editors are Jed, Emacs and
Xemacs. However, many other variants of these editors are also available. You can
also use X-Windows-based editors available on the Linux platform. A lot of people also
edit in vi or vim, both of which have been very popular historically.
Compilers and Assemblers
Compilers and assemblers are the core development tools because they convert
source code to executable form. Quality of compilers does affect the output code. For
example, some compilers can do much better code optimization compared to others.
If you are doing some cross-platform development, then your compiler should
support code generation for the target machine as well.
The GNU Compiler Collection, more commonly called GCC, is a comprehensive set of
compilers for commonly used languages including the following:
• C
• C++
• Java
• Fortran
In addition to GCC, you can find a lot of other open source compilers available for
Linux.
The GNU utilities set, also known as binutils, includes the GNU assembler and other
utilities that can be used for many tasks. The GNU assembler is used whenever you
compile a program using the GNU compiler.
Debuggers
Debuggers are also an important part of all development systems. You can’t
write a program that is free of bugs. Debugging is a continuous part of software
development and you need good tools for this purpose. GNU debugger, more
commonly known as GDB, is a common debugger. Many other debuggers are also
built on this debugger. The GNU debugger and some other debuggers will be
introduced later in this book.
Version Control Systems
The revision control system is a must for any serious development effort where
multiple developers work on a software product. The most popular version control
system on Linux is known as Concurrent Versions System or CVS. CVS allows many
people to work on files at the same time and provides a central repository to store
files. Developers can check out files from this repository, make changes and check
the files back into the repository. CVS also works with editors like GNU Emacs.
When multiple developers are modifying the same file at the same time, conflict may
occur between different changes made by multiple developers. When a conflict is
detected in the files being checked in, CVS provides a mechanism to merge the files
appropriately.
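The idea behind merging can be illustrated with a deliberately simplified per-line three-way merge. This is not CVS's actual algorithm, which works on diffs rather than aligned lines; the sketch only shows when a change merges cleanly and when a conflict arises:

```python
def three_way_merge(base, mine, theirs):
    """Simplified per-line three-way merge: assumes the three versions
    have the same number of lines. Real tools (diff3, CVS) align lines
    with a diff algorithm instead."""
    merged, conflicts = [], []
    for i, (b, m, t) in enumerate(zip(base, mine, theirs)):
        if m == t:            # both sides agree (or neither changed the line)
            merged.append(m)
        elif m == b:          # only the other developer changed this line
            merged.append(t)
        elif t == b:          # only we changed this line
            merged.append(m)
        else:                 # both changed it differently: a conflict
            merged.append(m)
            conflicts.append(i)
    return merged, conflicts

base   = ["int x = 0;", "print(x);",     "return 0;"]
mine   = ["int x = 1;", "print(x);",     "return 1;"]
theirs = ["int x = 0;", "print(x + 1);", "return 2;"]

merged, conflicts = three_way_merge(base, mine, theirs)
print(merged)      # ['int x = 1;', 'print(x + 1);', 'return 1;']
print(conflicts)   # [2]: both developers changed line 2 differently
```

Lines changed by only one side merge automatically; a line changed by both sides in different ways is exactly what the version control system reports back to the developer as a conflict.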
CVS can be used over secure links as well. This is required when developers are not
physically located at the same place. A server on the Internet can be used to provide
secure access to the central software repository.
There are other version control systems as well which are popular in the software
development community. Examples are Aegis, PRCS, RCS and SCCS.
E-mail and Collaboration
In any software development project, collaboration among developers, designers
and architects as well as marketing people is a must. The objective can be achieved
in many ways. Probably e-mail is the most efficient and cheapest way. Some
collaboration tools provide more functionality than just e-mailing.
X-Windows
X-Windows is much more than just a GUI interface on Linux, but for development
purposes, it provides a very good user interface. This is especially useful for editors
like Emacs.
Miscellaneous Tools
Many miscellaneous tools are also important during the development process. Some
of these tools are listed below:
• The make utility
• The ar program
• The ranlib utility
• The hexdump utility
Information about these tools is provided later in this book.
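Of these, the make utility is the one developers meet first. At its core, make computes a build order from target-prerequisite rules, which can be sketched as follows (the file names and rules are made up for illustration; real make also compares file timestamps to skip up-to-date targets):

```python
# Sketch of the core of the make utility: derive a rebuild order
# from target -> prerequisites rules.

def build_order(rules, target, seen=None):
    """Post-order walk: prerequisites are listed before their target."""
    if seen is None:
        seen = set()
    order = []
    for dep in rules.get(target, []):
        if dep not in seen:
            order.extend(build_order(rules, dep, seen))
    if target not in seen:
        seen.add(target)
        order.append(target)
    return order

rules = {
    "app":    ["main.o", "util.o"],
    "main.o": ["main.c"],
    "util.o": ["util.c"],
}

order = build_order(rules, "app")
print(order)   # ['main.c', 'main.o', 'util.c', 'util.o', 'app']
```

Every prerequisite appears before the target that depends on it, which is the ordering guarantee make provides.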
Selection Criteria for Hardware Platform
The development process needs computers, networks, storage, printing and other
hardware components. However the important hardware decision is the selection of
PCs and workstations for developers. There is no hard and fast rule about how to
select a particular hardware platform. It depends upon the requirements of a
development project. Some factors that you may keep in mind are as follows:
• Cost of the hardware.
• Availability of desired operating system on the hardware. For example, you can’t run HP-UX on PCs.
• Availability of development tools.
• Maintenance cost.
There may be other factors as well and you are the best person to judge what you
need. However, keep in mind that reliability of hardware is one major factor that
people usually overlook. If you buy cheap systems that decrease productivity of
developers, you lose a lot of money.
Selection Criteria for Software Development Tools
After selecting the hardware, software development tools are the next major initial
expense in terms of money and time to set these up. Selection of software
development tools depends upon the choice of hardware and operating system. In
many cases GNU tools are very well suited. Selection of development tools also has a
major effect on the productivity of the whole development team.
Managing Development Process
In large software development projects, management of the development process is
a big task and a dedicated person may be needed to take care of this aspect of the
project. A development manager usually acts as a binding and coordinating force
among different parties with conflicting interests. These parties include:
• Marketing and sales people who put forward requirements, change requirements and come up with new requirements, usually when much of the work is already done!
• Architecture and design people.
• Software developers who always think that they have too little time.
• Release management people.
• Software testers.
• Documentation writers.
• Senior managers who often push to complete the project earlier than the deadline.
Coordinating all of these parties is not an easy task. The manager has to convince
senior management that a new feature needs that much time for development. At
the same time he has to push developers to meet the deadlines. Some of the
important tasks of software management in a real-life project are as follows.
Creating Deadlines
The manager usually coordinates with the software developers to set reasonable
deadlines for certain features. These deadlines must conform to the product
delivery timelines. The manager may have to arrange additional resources to
complete feature development in the allotted time.
Project management software can help a manager to set and meet deadlines and
track completion of different components.
Managing the Development Team
The manager has to keep track of how development among different parts of the
software is going on. If part of the product is behind schedule, she has to rearrange
resources to get it back on track. She may also need to hire new people to finish the
development of a particular component on schedule.
Resolving Dependencies
Usually software components are dependent on one another. If the development of
one component is lagging behind, it may affect development of other components.
The development manager needs to keep an eye on these dependencies and how
these may affect the overall progress of the project. Well-known project
management methods are usually helpful for this task.
References and Supplementary Materials
Books and Journals
Online Supplementary Reading Materials
1. LDPS web site at http://www.freestandards.org/ldps/; October 15, 2019
2. CVS web site at http://www.gnu.org/software/cvs/; October 15, 2019
3. Aegis at its web site http://aegis.sourceforge.net/index.html; October 15, 2019
4. PRCS at its web site http://prcs.sourceforge.net/; October 15, 2019
5. GNU Software at http://www.gnu.org; October 15, 2019
Online Instructional Videos
CS-6209 Software Engineering 1
Week 4: Software Project Management
Module 004: Software Project Management
Course Learning Outcomes:
1. Understand the specific roles within a software organization as related to
project and process management
2. Understand the basic infrastructure competences (e.g., process modeling
and measurement)
3. Understand the basic steps of project planning, project management,
quality assurance, and process management and their relationships
Introduction
Project management has been practiced since early civilization. Until the beginning of
the twentieth century, civil engineering projects were generally managed by creative
architects and engineers; project management as a discipline was not yet accepted.
It was in the 1950s that organizations started to systematically
apply project management tools and techniques to complex projects. As a discipline,
Project Management developed from several fields of application including construction,
engineering, and defense activity. Two forefathers of project management are commonly
known: Henry Gantt, called the father of planning and control techniques who is famous for
his use of the Gantt chart as a project management tool; and Henri Fayol for his creation of
the five management functions which form the foundation of the body of knowledge
associated with project and program management. The 1950s marked the beginning of the
modern Project Management era. Project management became recognized as a distinct
discipline arising from the management discipline.
What is a Project?
All of us have been involved in projects, whether they be personal projects or
projects in business and industry. Typical examples of projects are:
Personal projects:
• obtaining an MCA degree
• writing a report
• planning a party
• planting a garden
Industrial projects:
• Construction of a building
• Provide electricity to an industrial estate
• Building a bridge
• Designing a new airplane
Projects can be of any size and duration. They can be simple, like planning a party,
or complex like launching a space shuttle.
Project Definition:
A project can be defined in many ways:
A project is “a temporary endeavor undertaken to create a unique product, service,
or result.” Operations, on the other hand, is work done in organizations to sustain
the business. Projects are different from operations in that they end when their
objectives have been reached or the project has been terminated.
A project is temporary. A project’s duration might be just one week or it might go
on for years, but every project has an end date. You might not know that end date
when the project begins, but it’s there somewhere in the future. Projects are not the
same as ongoing operations, although the two have a great deal in common.
A project is an endeavor. Resources, such as people and equipment, need to do
work. The endeavor is undertaken by a team or an organization, and therefore
projects have a sense of being intentional, planned events. Successful projects do not
happen spontaneously; some amount of preparation and planning happens first.
Finally, every project creates a unique product or service. This is the deliverable for
the project and the reason why the project was undertaken.
Project Attributes
Projects come in all shapes and sizes. The following attributes help us to define a
project further:
• A project has a unique purpose. Every project should have a well-defined objective. For example, many people hire firms to design and build a new house, but each house, like each person, is unique.
• A project is temporary. A project has a definite beginning and a definite end. For a home construction project, owners usually have a date in mind when they’d like to move into their new homes.
• A project is developed using progressive elaboration or in an iterative fashion. Projects are often defined broadly when they begin, and as time passes, the specific details of the project become clearer. For example, there are many decisions that must be made in planning and building a new house. It works best to draft preliminary plans for owners to approve before more detailed plans are developed.
• A project requires resources, often from various areas. Resources include people, hardware, software, or other assets. Many different types of people, skill sets, and resources are needed to build a home.
• A project should have a primary customer or sponsor. Most projects have many interested parties or stakeholders, but someone must take the primary role of sponsorship. The project sponsor usually provides the direction and funding for the project.
• A project involves uncertainty. Because every project is unique, it is sometimes difficult to define the project’s objectives clearly, estimate exactly how long it will take to complete, or determine how much it will cost. External factors also cause uncertainty, such as a supplier going out of business or a project team member needing unplanned time off. This uncertainty is one of the main reasons project management is so challenging.
Project Constraints
Like any human undertaking, projects need to be performed and delivered under
certain constraints. Traditionally, these constraints have been listed as scope, time,
and cost. These are also referred to as the Project Management Triangle, where each
side represents a constraint. One side of the triangle cannot be changed without
impacting the others. A further refinement of the constraints separates product
'quality' or 'performance' from scope, and turns quality into a fourth constraint.
The time constraint refers to the amount of time available to complete a project. The
cost constraint refers to the budgeted amount available for the project. The scope
constraint refers to what must be done to produce the project's end result. These
three constraints are often competing constraints: increased scope typically means
increased time and increased cost, a tight time constraint could mean increased
costs and reduced scope, and a tight budget could mean increased time and reduced
scope.
The discipline of project management is about providing the tools and techniques
that enable the project team (not just the project manager) to organize their work to
meet these constraints.
Another approach to project management is to consider the three constraints as
finance, time and human resources. If you need to finish a job in a shorter time, you
can allocate more people to the problem, which in turn will raise the cost of the
project, unless doing the task more quickly reduces costs elsewhere in the
project by an equal amount.
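The trade-off described above can be illustrated with a crude model. The figures are invented, and the model assumes work divides perfectly among people, which real projects rarely allow:

```python
# Crude model of the time/cost/people trade-off: total effort is fixed,
# so adding people shortens the schedule but not the total labor cost.
# Assumes perfectly divisible work, an idealization that rarely holds.

def project_cost(effort_person_days, people, rate_per_day):
    duration = effort_person_days / people       # calendar days
    cost = people * duration * rate_per_day      # equals effort * rate
    return duration, cost

d1, c1 = project_cost(effort_person_days=120, people=4, rate_per_day=500)
d2, c2 = project_cost(effort_person_days=120, people=8, rate_per_day=500)

print(d1, c1)   # 30.0 60000.0
print(d2, c2)   # 15.0 60000.0 -> half the time, same cost in this ideal model
```

The idealized model breaks even precisely because of its divisibility assumption; in practice, coordination overhead among more people is what raises the cost, as the paragraph above notes.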
Time:
For analytical purposes, the time required to produce a product or service is
estimated using several techniques. One method is to identify tasks needed to
produce the deliverables documented in a work breakdown structure or WBS. The
work effort for each task is estimated and those estimates are rolled up into the final
deliverable estimate.
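The roll-up described above can be sketched as follows. The WBS and the estimates are invented for illustration; leaf nodes carry effort estimates in days, and interior nodes sum their children:

```python
# Rolling up task estimates from a work breakdown structure (WBS).
# A node is either a leaf estimate (days) or a dict of child nodes.

wbs = {
    "deliverable": {
        "design":   {"draft spec": 3, "review": 1},
        "build":    {"code": 10, "unit tests": 4},
        "handover": 2,
    }
}

def rollup(node):
    """Leaf estimates sum upward into the final deliverable estimate."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

total_effort = rollup(wbs)
print(total_effort)   # 20 days of estimated work effort
```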
The tasks are also prioritized, dependencies between tasks are identified, and this
information is documented in a project schedule. The dependencies between the
tasks can affect the length of the overall project (dependency constraint), as can the
availability of resources (resource constraint). Time is considered neither a cost nor a
resource, since the project manager cannot control the rate at which it is expended.
This makes it different from all other resources and cost categories.
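A minimal sketch of how the dependency constraint stretches the overall project length (tasks, durations and dependencies are hypothetical):

```python
# Earliest-finish scheduling: a task can start only when everything it
# depends on has finished, so the longest dependency chain sets the
# overall project length.

durations = {"spec": 2, "code": 5, "test": 3, "docs": 4}
depends_on = {"code": ["spec"], "test": ["code"], "docs": ["spec"]}

finish = {}  # memoized earliest finish time per task

def earliest_finish(task):
    if task not in finish:
        start = max((earliest_finish(d) for d in depends_on.get(task, [])),
                    default=0)
        finish[task] = start + durations[task]
    return finish[task]

project_length = max(earliest_finish(t) for t in durations)
print(project_length)   # 10: the chain spec -> code -> test sets the length
```

Here "docs" finishes on day 6, but the project still takes 10 days because the spec-code-test chain cannot be compressed, which is exactly the dependency constraint described above.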
Cost:
Cost to develop a project depends on several variables including: labor rates,
material rates, risk management, plant (buildings, machines, etc.), equipment, and
profit. When hiring an independent consultant for a project, cost will typically be
determined by the consultant's or firm's per diem rate multiplied by an estimated
quantity for completion.
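The stated costing rule is simple arithmetic, for example (all figures invented):

```python
# Consultant costing as described above: per diem rate multiplied by an
# estimated quantity of days, plus any other cost items (e.g. materials).

per_diem = 800          # consultant's daily rate
estimated_days = 45     # estimated quantity for completion
materials = 2500        # other cost variables from the list above

total_cost = per_diem * estimated_days + materials
print(total_cost)   # 38500
```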
Figure 4.1: The Project Management Triangle
Scope:
Scope is the requirement specified for the end result. The overall definition of what the
project is supposed to accomplish, and a specific description of what the end result
should be or accomplish can be said to be the scope of the project. A major
component of scope is the quality of the final product. The amount of time put into
individual tasks determines the overall quality of the project. Some tasks may
require a given amount of time to complete adequately, but given more time could
be completed exceptionally. Over the course of a large project, quality can have a
significant impact on time and cost or vice versa.
Together, these three constraints, viz. scope, schedule and resources, have given rise
to the phrase "On Time, On Spec, On Budget". In this case, the term "scope" is
substituted with "spec(ification)".
What is Project Management?
Project management is “the application of knowledge, skills, tools and techniques
to project activities to meet the project requirements.” The effectiveness of project
management is critical in assuring the success of any substantial activity. Areas of
responsibility for the person handling the project include planning, control and
implementation. A project should be initiated with a feasibility study, where a clear
definition of the goals and ultimate benefits need to be determined. Senior
managers' support for projects is important so as to ensure authority and direction
throughout the project's progress and, also to ensure that the goals of the
organization are effectively achieved in this process.
Knowledge, skills, goals and personalities are the factors that need to be considered
within project management. The project manager and his/her team should
collectively possess the necessary and requisite interpersonal and technical skills to
facilitate control over the various activities within the project.
The stages of implementation must be articulated at the project planning phase.
Disaggregating the stages at an early point assists in the successful development of
the project by providing a number of milestones that need to be accomplished for
completion. In addition to planning, the control of the evolving project is also a
prerequisite for its success. Control requires adequate monitoring and feedback
mechanisms by which senior management and project managers can compare
progress against initial projections at each stage of the project. Monitoring and
feedback also enables the project manager to anticipate problems and therefore
take pre-emptive and corrective measures for the benefit of the project.
Projects normally involve the introduction of a new system of some kind and, in
almost all cases, new methods and ways of doing things. This impacts the work of
others: the "users". User interaction is an important factor in the success of projects
and, indeed, the degree of user involvement can influence the extent of support for
the project or its implementation plan. A project manager is responsible for
establishing communication between the project team and the user. Thus one of the
most essential qualities of the project manager is that of being a good communicator,
not just within the project team itself, but with the rest of the organization and the
outside world as well.
Features of projects:
Projects are often carried out by a team of people who have been assembled for that
specific purpose. The activities of this team may be co-ordinated by a project
manager.
Project teams may consist of people from different backgrounds and different parts
of the organization. In some cases project teams may consist of people from
different organizations.
Project teams may be inter-disciplinary groups and are likely to lie outside the
normal organization hierarchies.
The project team will be responsible for delivery of the project end product to some
sponsor within or outside the organization. The full benefit of any project will not
become available until the project has been completed.
Project Classification:
In recent years more and more activities have been tackled on a project basis.
Project teams and a project management approach have become common in most
organizations. The basic approaches to project management remain the same
regardless of the type of project being considered. You may find it useful to consider
projects in relation to a number of major classifications:
a) Engineering and construction
The projects are concerned with producing a clear physical output, such as
roads, bridges or buildings. The requirements of a project team are well defined
in terms of skills and background, as are the main procedures that have to be
undergone. Most of the problems which may confront the project team are likely
to have occurred before and therefore their solution may be based upon past
experiences.
b) Introduction of new systems
These projects would include computerization projects and the introduction of
new systems and procedures including financial systems. The nature and
constitution of a project team may vary with the subject of the project, as
different skills may be required and different end-users may be involved. Major
projects involving a systems analysis approach may incorporate clearly defined
procedures within an organization.
c) Responding to deadlines and change
An example of responding to a deadline is the preparation of an annual report by
a specified date. An increasing number of projects are concerned with designing
organizational or environmental changes, involving developing new products
and services.
Project Management Tools and techniques:
Project planning is at the heart of project management. One can't manage and
control project activities if there is no plan. Without a plan, it is impossible to know
if the correct activities are underway, if the available resources are adequate or if
the project can be completed within the desired time. The plan becomes the
roadmap that the project team members use to guide them through the project
activities. Project management tools and techniques assist project managers and
their teams in carrying out work in all nine knowledge areas. For example, some
popular time-management tools and techniques include Gantt charts, project
network diagrams, and critical path analysis. Table 4.1 lists some commonly used
tools and techniques by knowledge area.
Knowledge Area: Tools & Techniques
Integration management: Project selection methods, project management methodologies, stakeholder analyses, project charters, project management plans, project management software, change requests, change control boards, project review meetings, lessons-learned reports
Scope management: Scope statements, work breakdown structures, mind maps, statements of work, requirements analyses, scope management plans, scope verification techniques, and scope change controls
Cost management: Net present value, return on investment, payback analyses, earned value management, project portfolio management, cost estimates, cost management plans, cost baselines
Time management: Gantt charts, project network diagrams, critical-path analyses, crashing, fast tracking, schedule performance measurements
Human resource management: Motivation techniques, empathic listening, responsibility assignment matrices, project organizational charts, resource histograms, team building exercises
Quality management: Quality metrics, checklists, quality control charts, Pareto diagrams, fishbone diagrams, maturity models, statistical methods
Risk management: Risk management plans, risk registers, probability/impact matrices, risk rankings
Communication management: Communications management plans, kickoff meetings, conflict management, communications media selection, status and progress reports, virtual communications, templates, project Web sites
Procurement management: Make-or-buy analyses, contracts, requests for proposals or quotes, source selections, supplier evaluation matrices
Table 4.1: Project Management Tools and Techniques
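As an illustration of one of the time-management tools named in Table 4.1, a Gantt chart can be rendered even as plain text. The tasks, start days and end days below are invented:

```python
# A minimal text Gantt chart: each task is drawn as a bar from its start
# day to its end day, so overlapping and sequential work is visible.

tasks = [("spec", 0, 2), ("code", 2, 7), ("test", 7, 10), ("docs", 2, 6)]

lines = [f"{name:<5}|{' ' * start}{'#' * (end - start)}"
         for name, start, end in tasks]
print("\n".join(lines))
```

Even this toy chart shows at a glance that "docs" runs in parallel with "code" while "test" must wait, which is the scheduling insight a real Gantt chart gives a project manager.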
Project Success Factors:
The successful design, development, and implementation of information technology
(IT) projects is a very difficult and complex process. However, although developing
IT projects can be difficult, the reality is that a relatively small number of factors
control the success or failure of every IT project, regardless of its size or complexity.
The problem is not that the factors are unknown; it is that they seldom form an
integral part of the IT development process.
Some of the factors that influence projects and may help them succeed are:
• Executive support
• User involvement
• Experienced project managers
• Limited scope
• Clear basic requirements
• Formal methodology
• Reliable estimates
The Role of Project Manager
The project manager is the driving force in the management control loop. This
individual seldom participates directly in the activities that produce the end result,
but rather strives to maintain the progress and productive mutual interaction of
various parties in such a way that overall risk of failure is reduced.
A project manager is often a client representative and has to determine and
implement the exact needs of the client, based on knowledge of the firm he/she is
representing. The ability to adapt to the various internal procedures of the
contracting party, and to form close links with the nominated representatives, is
essential in ensuring that the key issues of cost, time, quality, and above all, client
satisfaction, can be realized.
In whatever field, a successful project manager must be able to envisage the entire
project from start to finish and to have the ability to ensure that this vision is
realized.
When they are appointed, project managers should be given terms of reference that
define their:
 Objectives;
 Responsibilities;
 Limits of authority.
Responsibilities of a Project Manager:
The objective of every project manager is to deliver the product on time, within
budget and with the required quality. Although the precise responsibilities of a
project manager will vary from company to company and from project to project,
they should always include planning and forecasting. Three additional areas of
management responsibility are:
 interpersonal responsibilities, which include:
- leading the project team;
- liaising with initiators, senior management and suppliers;
- being the 'figurehead', i.e. setting the example to the project team and representing the project on formal occasions.
 informational responsibilities, which include:
- monitoring the performance of staff and the implementation of the project plan;
- disseminating information about tasks to the project team;
- disseminating information about project status to initiators and senior management;
- acting as the spokesman for the project team.
 decisional responsibilities, which include:
- allocating resources according to the project plan, and adjusting those allocations when circumstances dictate (i.e. the project manager has responsibility for the budget);
- negotiating with the initiator about the optimum interpretation of contractual obligations, with the company management for resources, and with project staff about their tasks;
- handling disturbances to the smooth progress of the project, such as equipment failures and personnel problems.
Project Life Cycle
The Project Life Cycle refers to a logical sequence of activities to accomplish
the project’s goals or objectives. Regardless of scope or complexity, any project
goes through a series of stages during its life. There is first an Initiation or Starting
phase, in which the outputs and critical success factors are defined, followed by a
Planning phase, characterized by breaking down the project into smaller
parts/tasks, an Execution phase, in which the project plan is executed, and lastly a
Closure or Exit phase, that marks the completion of the project. Project activities
must be grouped into phases because by doing so, the project manager and the core
team can efficiently plan and organize resources for each activity, and also
objectively measure achievement of goals and justify their decisions to move ahead,
correct, or terminate. It is of great importance to organize project phases into
industry-specific project cycles. Why? Not only because each industry sector
involves specific requirements, tasks, and procedures when it comes to projects, but
also because different industry sectors have different needs for life-cycle
management methodology. And paying close attention to such details is the
difference between doing things well and excelling as project managers.
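The sequential phases described above can be sketched as a simple state progression. This is an illustrative model only (the enum and function names are invented for this example), and Monitoring & Controlling is treated as an overlay on Execution rather than a step of its own:

```python
from enum import Enum
from typing import Optional

class Phase(Enum):
    """The sequential life-cycle phases named in the text."""
    INITIATION = 1
    PLANNING = 2
    EXECUTION = 3
    CLOSURE = 4

def next_phase(current: Phase) -> Optional[Phase]:
    """Return the phase that follows `current`, or None once the project is closed."""
    members = list(Phase)
    index = members.index(current)
    return members[index + 1] if index + 1 < len(members) else None
```

A phase-gate decision (move ahead, correct, or terminate) would sit between each pair of phases.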
Diverse project management tools and methodologies prevail in the different
project cycle phases. Let’s take a closer look at what’s important in each one of these
stages:
Project Initiation:
The initiation stage determines the nature and scope of the development. If this
stage is not performed well, it is unlikely that the project will be successful in
meeting the business’s needs. The key project controls needed here are an
understanding of the business environment and making sure that all necessary
controls are incorporated into the project. Any deficiencies should be reported and a
recommendation should be made to fix them.
The initiation stage should include a plan that encompasses the following areas:
 Analyzing the business needs/requirements in measurable goals.
 Reviewing of the current operations.
 Conceptual design of the operation of the final product.
 Equipment and contracting requirements including an assessment of long lead time items.
 Financial analysis of the costs and benefits including a budget.
 Stakeholder analysis, including users, and support personnel for the project.
 Project charter including costs, tasks, deliverables, and schedule.
Figure 4.2: Project Life Cycle (diagram of the Initiating, Planning, Executing, Monitoring & Controlling, and Closing phases)
Planning & Design:
After the initiation stage, the system is designed. Occasionally, a small prototype of
the final product is built and tested. Testing is generally performed by a
combination of testers and end users, and can occur after the prototype is built or
concurrently. Controls should be in place that ensures that the final product will
meet the specifications of the project charter. The results of the design stage should
include a product design that:
 Satisfies the project sponsor (the person who is providing the project budget), end user, and business requirements.
 Functions as it was intended.
 Can be produced within acceptable quality standards.
 Can be produced within time and budget constraints.
Execution & Controlling:
Monitoring and Controlling consists of those processes performed to observe
project execution so that potential problems can be identified in a timely manner
and corrective action can be taken, when necessary, to control the execution of the
project. The key benefit is that project performance is observed and measured
regularly to identify variances from the project management plan.
Monitoring and Controlling includes:
 Measuring the ongoing project activities (where we are);
 Monitoring the project variables (cost, effort, scope, etc.) against the project management plan and the project performance baseline (where we should be);
 Identifying corrective actions to address issues and risks properly (how we can get back on track);
 Influencing the factors that could circumvent integrated change control, so that only approved changes are implemented.
In multi-phase projects, the Monitoring and Controlling process also provides
feedback between project phases, in order to implement corrective or preventive
actions to bring the project into compliance with the project management plan.
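Monitoring the project variables against the baseline can be illustrated with a short sketch. The variable names and the 10% tolerance below are invented for this example:

```python
# Hypothetical baseline and actuals for three tracked project variables.
baseline = {"cost": 100_000, "effort_days": 420, "scope_points": 180}
actuals  = {"cost": 118_000, "effort_days": 400, "scope_points": 180}

def variances(plan, actual, threshold=0.10):
    """Return the variables whose relative deviation from plan exceeds the threshold."""
    flagged = {}
    for name, planned in plan.items():
        deviation = (actual[name] - planned) / planned
        if abs(deviation) > threshold:
            flagged[name] = round(deviation, 3)
    return flagged

print(variances(baseline, actuals))  # → {'cost': 0.18}
```

Here only cost exceeds the tolerance (18% over plan), so it would trigger a corrective action.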
Project Maintenance is an ongoing process, and it includes:
 Continuing support of end users
 Correction of errors
 Updates of the software over time
In this stage, auditors should pay attention to how effectively and quickly user
problems are resolved.
Over the course of any IT project, the work scope may change. Change is a normal and
expected part of the process. Changes can be the result of necessary design
modifications, differing site conditions, material availability, client-requested
changes, value engineering and impacts from third parties, to name a few. Beyond
executing the change in the field, the change normally needs to be documented to
show what was actually developed. This is referred to as Change Management.
Hence, the owner usually requires a final record to show all changes or, more
specifically, any change that modifies the tangible portions of the finished work. The
record is made on the contract documents – usually, but not necessarily limited to,
the design drawings. The end product of this effort is what the industry terms as-built drawings, or more simply, "as built."
When changes are introduced to the project, the viability of the project has to be reassessed. It is important not to lose sight of the initial goals and targets of the
project. When the changes accumulate, the forecasted result may not justify the
original proposed investment in the project.
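A change log of the kind described can be modeled minimally as follows; the record fields are hypothetical, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRecord:
    """One documented scope change; all field names are illustrative."""
    change_id: str
    description: str
    reason: str          # e.g., design modification, site conditions, client request
    approved: bool = False
    logged_on: date = field(default_factory=date.today)

changes = [
    ChangeRecord("CR-001", "Reroute cabling in server room",
                 "differing site conditions", approved=True),
]
```

Such a log, kept against the contract documents, is what ultimately feeds the as-built record.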
Closure:
Closing includes the formal acceptance of the project and the ending
thereof. Administrative activities include the archiving of the files and
documenting lessons learned.
This phase consists of:
 Project close: Finalize all activities across all of the process groups to formally close the project or a project phase.
 Contract closure: Complete and settle each contract (including the resolution of any open items) and close each contract applicable to the project or project phase.
Sample Questions
1. Why is there a new or renewed interest in the field of project
management?
2. What is a project, and what are its main attributes? How is a project
different from what most people do in their day-to-day jobs? What
is the triple constraint?
3. What is project management? Briefly describe the project
management framework, providing examples of stakeholders,
knowledge areas, tools and techniques, and project success factors.
4. Discuss the relationship between project, program, and portfolio
management and their contribution to enterprise success.
5. What are the roles of the project, program, and portfolio
managers? What are suggested skills for project managers? What
additional skills do program and portfolio managers need?
References and Supplementary Materials
Books and Journals
1. Schwalbe, K. Information Technology Project Management. Thomson Publication.
2. Marchewka, J. Information Technology Project Management: Providing Measurable Organizational Value. Wiley India.
3. Stellman, A. & Greene, J. Applied Software Project Management. SPD.
4. Thayer, R. & Yourdon, E. Software Engineering Project Management. Wiley India.
Online Supplementary Reading Materials
Online Instructional Videos
CS-6209 Software Engineering 1
Week 6: Software Quality Management
Module 005: Software Quality Management
Course Learning Outcomes:
1. Understand the issue of Software Quality and the activities present in a
typical Quality Management process.
2. Understand the advantages and difficulties presented by the use of Quality
standards in Software Engineering
3. Understand the concepts of Software Quality Management
Introduction
Computers and software are ubiquitous. Mostly they are embedded and we don’t even
realize where and how we depend on software. We might accept it or not, but software is
governing our world and society and will continue further on. It is difficult to imagine our
world without software. There would be no running water; food supplies, business and
transportation would be disrupted immediately; diseases would spread; and security would be
dramatically reduced – in short, our society would disintegrate rapidly. A key reason our
planet can bear over six billion people is software. Since software is so ubiquitous, we need
to stay in control. We have to make sure that the systems and their software run as we
intend – or better. Only if software has the right quality will we stay in control and not
suddenly realize that things are going awfully wrong. Software quality management is the
discipline that ensures that the software we are using and depending upon is of the right
quality. Only with solid understanding and discipline in software quality management will we
effectively stay in control.
What exactly is software quality management? To address this question we first need to
define the term “quality”. Quality is the ability of a set of inherent characteristics of a
product, service, product component, or process to fulfill requirements of customers [1].
From a management and controlling perspective quality is the degree to which a set of
inherent characteristics fulfills requirements. Quality management is the sum of all planned
systematic activities and processes for creating, controlling and assuring quality [1]. Fig. 1
indicates how quality management relates to the typical product development. We have
used a V-type visualization of the development process to illustrate that different quality
control techniques are applied to each level of abstraction from requirements engineering
to implementation. Quality control questions are mentioned on the right side. They are
addressed by techniques such as reviews or testing. Quality assurance questions are
mentioned in the middle. They are addressed by audits or sample checks. Quality
improvement questions are mentioned on the left side. They are addressed by dedicated
improvement projects and continuous improvement activities.
Quality Concepts
The long-term profitability of a company is heavily impacted by the quality
perceived by customers. Customers view achieving the right balance of reliability,
market window of a product and cost as having the greatest effect on their long-term
link to a company. This has long been articulated, and applies in different
economies and circumstances. Even in restricted competitive situations, such as a
market with few dominant players (e.g., the operating system market of today or the
database market of few years ago), the principle applies and has given rise to open
source development. With the competitor being often only a mouse-click away,
today quality has even higher relevance. This applies to Web sites as well as to
commodity goods with either embedded or dedicated software deliveries. And the
principle certainly applies to investment goods, where suppliers are evaluated by a
long list of different quality attributes.
Methodological approaches to guarantee quality products have led to international
guidelines (e.g., ISO 9001) and widely applied methods to assess the development
processes of software providers (e.g., SEI CMMI). In addition, most companies apply
certain techniques of criticality prediction that focus on identifying and reducing
release risk. Unfortunately, many efforts usually concentrate on testing and
reworking instead of proactive quality management.
Yet there is a problem with quality in the software industry. By quality we mean the
bigger picture, such as delivering according to commitments. While solutions
abound, knowing which solutions work is the big question. What are the most
fundamental underlying principles in successful projects? What can be done right
now? What actually is good or better? What is good enough – considering the
immense market pressure and competition across the globe?
A simple – yet difficult to digest and implement – answer to these questions is that
software quality management is not simply a task, but rather a habit. It must be
engrained in the company culture. It is something that is visible in the way people
are working, independent of their role. It certainly means that every single person
in the organization sees quality as her own business, not that of a quality manager or
a testing team. A simple yet effective test to quickly identify the state of practice
with respect to quality management is to ask around what quality means for an
employee and how he delivers according to this meaning. You will find that
many see it as a bulky and formal approach to be followed in order to achieve necessary
certificates. Few exceptions exist, such as industries with safety and health impacts.
But even there, you will find different approaches to quality, depending on culture.
Those with carrot and stick will not achieve a true quality culture. Quality is a habit.
It is driven by objectives and not based on beliefs. It is primarily achieved when
each person in the organization knows and is aware of her own role in delivering
quality.
Quality management is the responsibility of the entire enterprise. It is
strategically defined, directed and operationally implemented on various
Course Module
CS-6209 Software Engineering 1
Week 6: Software Quality Management
3
organizational levels. Fig. 1 shows in a simplified organizational layout with four
tiers the respective responsibilities to successfully implement quality management.
Note that it is not a top-down approach where management sets unrealistic targets
that must be implemented on the project level. It is even more important that
continuous feedback is provided bottom-up so that decisions can be changed or
directions can be adjusted.
[Figure: four organizational tiers – strategy with improvement objectives and scorecard; quality policy, direction and objectives; operational targets and programs; specific project processes and product objectives – with top-down direction-setting and bottom-up feedback and corrective action.]
Fig. 1: Quality management within the organization
Quality is implemented along the product life-cycle. Fig. 2 shows some pivotal
quality-related activities mapped to the major life-cycle phases. Note that on the left
side strategic directions are set and a respective management system is
implemented. Towards the right side, quality related processes, such as test or
supplier audits are implemented. During evolution of the product with dedicated
services and customer feedback, the product is further optimized and the
management system is adapted where necessary.
It is important to understand that the management system is not specific to a
project, but drives a multitude of projects or even the entire organization. Scale
effects occur with having standardized processes that are systematically applied.
This not only allows moving people to different projects without a long learning
curve but also assures proven quality at the best possible efficiency. A key step along all
these phases is to recognize that all quality requirements can and should be
specified in quantitative terms. This does not mean “counting defects” as it would be
too late and reactive. It means quantifying quality attributes such as security,
portability, adaptability, maintainability, robustness, usability, reliability and
performance as an objective before the design of a product.
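As an illustration of stating quality attributes as quantified objectives before design, the targets and the helper below are invented examples, not standard metrics:

```python
# Invented examples of quantified quality objectives set before design.
quality_targets = {
    "reliability": ("failures_per_month", "<= 1"),
    "performance": ("p95_response_ms", "<= 200"),
    "usability":   ("task_completion_rate", ">= 0.9"),
}

def meets_target(value, target):
    """Check a measured value against a target string such as '<= 200'."""
    op, bound = target.split()
    bound = float(bound)
    return value <= bound if op == "<=" else value >= bound

# A measured 0.8 completion rate would miss the usability objective:
print(meets_target(0.8, quality_targets["usability"][1]))  # → False
```

The point is that each attribute has a measurable target agreed up front, against which the delivered product can later be checked.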
[Figure: quality-related activities mapped to life-cycle phases – strategy: evaluate market and customer needs, define own quality strategy, communicate the quality policy, implement and communicate the management system, walk the talk; planning: agree an optimized feature set, align objectives with estimates, prioritize specific quality requirements, define specific quality processes, agree supplier quality management; execution: assure the project will deliver as committed, execute quality processes (e.g., verification, validation, checks, audits), mitigate risks, achieve optimal market entry with the right quality level; evolution: optimize life-cycle performance (quality, cost, usage, changes), optimize and reduce life-cycle cost, manage repair and replacement, improve services, adjust the management system.]
Fig. 2: Quality management along the product life-cycle
A small example will illustrate this need. A software system might have strict
reliability constraints. Instead of simply stating that reliability should achieve less
than one failure per month in operation, which would be reactive, related quality
requirements should target underlying product and process needs to achieve such
reliability. During the strategy phase, the market or customer needs for reliability
need to be elicited. Is reliability important as an image or is it rather availability?
What is the perceived value of different failure rates? A next step is to determine
how these needs will be broken down to product features, components and
capabilities. Which architecture will deliver the desired reliability and what are
cost impacts? What component and supplier qualification criteria need to be
established and maintained throughout the product life-cycle? Then the underlying
quality processes need to be determined. This should not be done ad-hoc and for
each single project individually but by tailoring organizational processes, such as
product life-cycle, project reviews, or testing to the specific needs of the product.
What test coverage is necessary and how will it be achieved? Which test equipment
and infrastructure for interoperability of components needs to be applied? What
checklists should be used in preparing for reviews and releases? These processes
need to be carefully applied during development. Quality control will be applied by
each single engineer and quality assurance will be done systematically for selected
processes and work products. Finally the evolution phase of the product needs to
establish criteria for service request management and assuring the right quality
level of follow-on releases and potential defect corrections. A key question to
address across all these phases is how to balance quality needs with necessary effort
and availability of skilled people. Both relate to business, but that is at times
overlooked. We have seen companies that due to cost and time constraints would
reduce requirements engineering or early reviews of specifications and later found
out that follow-on costs were higher than what was cut out. A key understanding for
achieving quality, and therefore business performance, was once phrased by
Abraham Lincoln: “If someone gave me eight hours to chop down a tree, I would
spend six hours sharpening the axe.”
Process Maturity and Quality
The quality of a product or service is mostly determined by the processes and
people developing and delivering the product or service. Technology can be bought,
it can be created almost on the spot, and it can be introduced by having good
engineers. What matters to the quality of a product is how they are working and
how they introduce this new technology. Quality is not at a stand-still, it needs to be
continuously questioned and improved. With today’s low entry barriers to software
markets, one thing is sure: There is always a company or entrepreneur just
approaching your home turf and conquering your customers. To continuously
improve and thus stay ahead of competition, organizations need to change in a
deterministic and results-oriented way. They need to know and improve their
process maturity.
The concept of process maturity is not new. Many of the established quality models
in manufacturing use the same concept. This was summarized by Philip Crosby in
his bestselling book “Quality is Free” in 1979. He found from his broad experiences
as a senior manager in different industries that business success depends on quality.
With practical insight and many concrete case studies he could empirically link
process performance to quality. His credo was stated as: “Quality is measured by the
cost of quality which is the expense of nonconformance – the cost of doing things
wrong.”
First organizations must know where they are; they need to assess their processes.
The more detailed the results from such an assessment, the easier and more
straightforward it is to establish a solid improvement plan. That was the basic idea
with the “maturity path” concept proposed by Crosby in the 1970s. He distinguishes
five maturity stages, namely
 Stage 1: Uncertainty
 Stage 2: Awakening
 Stage 3: Enlightening
 Stage 4: Wisdom
 Stage 5: Certainty
Defects – Prediction, Detection, Correction and Prevention
To achieve the right quality level in developing a product it is necessary to understand
what it means to have insufficient quality. Let us start with the concept of defects. A
defect is defined as an imperfection or deficiency in a system or component where that
component does not meet its requirements or specifications, which could yield a failure.
A causal relationship links the failure to the defect that caused it, which itself was caused
by a human error during the design of the product. Defects are not just information about
something wrong in a software system or about the progress in building up quality.
Defects are information about problems in the process that created this software. The
four questions to address are:
1. How many defects are there and have to be removed?
2. How can the critical and relevant defects be detected most efficiently?
3. How can the critical and relevant defects be removed most effectively and
efficiently?
4. How can the process be changed to avoid the defects from reoccurring?
These four questions relate to the four basic quality management techniques of
prediction, detection, correction and prevention. The first step is to identify how
many defects there are and which of those defects are critical to product performance.
The underlying techniques are statistical methods of defect estimation, reliability
prediction and criticality assessment. These defects have to be detected by quality
control activities, such as inspections, reviews, unit test, etc. Each of these techniques
has their strengths and weaknesses which explains why they ought to be combined to
be most efficient. It is of little value to throw lots of people at testing, when in-depth
requirements reviews would be much faster and cheaper. Once defects are detected and
identified, the third step is to remove them. This sounds easier than it actually is due to
the many ripple effects each correction has to a system. Regression tests and reviews of
corrections are absolutely necessary to assure that quality won’t degrade with changes.
A final step is to embark on preventing these defects from re-occurring. Often engineers
and their management state that this actually should be the first and most relevant step.
We agree, but experience tells that again and again, people stumble across defect
avoidance simply because their processes won’t support it. In order to effectively avoid
defects engineering processes must be defined, systematically applied and
quantitatively managed. This being in place, defect prevention is a very cost-effective
means to boost both customer satisfaction and business performance, as many high-maturity organizations such as Motorola, Boeing or Wipro show.
Defect removal is not about assigning blame but about building better quality and
improving the processes to ensure quality. Reliability improvement always needs
measurements on effectiveness (i.e., percentage of removed defects for a given activity)
compared to efficiency (i.e., effort spent for detecting and removing a defect in the
respective activity). Such measurement asks for the number of residual defects at a
given point in time or within the development process.
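The two measurements just defined can be computed directly; the example figures below are invented:

```python
def effectiveness(defects_found, total_defects):
    """Share of all defects that a given quality control activity removed."""
    return defects_found / total_defects

def efficiency(effort_hours, defects_found):
    """Average effort spent per defect detected and removed in the activity."""
    return effort_hours / defects_found

# e.g., a review activity that finds 40 of 100 defects with 60 person-hours:
print(effectiveness(40, 100), efficiency(60, 40))  # → 0.4 1.5
```

Comparing these two numbers across activities (reviews, unit test, system test) shows where defect removal pays off best.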
But how is the number of defects in a piece of software or in a product estimated? We
will outline the approach we follow for up-front estimation of residual defects in any
software that may be merged from various sources with different degrees of stability.
We distinguish between upfront defect estimation which is static by nature as it looks
only on the different components of the system and their inherent quality before the
start of validation activities, and reliability models which look more dynamically during
validation activities at residual defects and failure rates.
Only a few studies have been published. They typically relate static defect estimation
to the number of already detected defects, independently of the activity that produced
them, or to the famous error-seeding approach, which is well known but rarely used:
most software engineers believe there is no use in adding errors to software that still
contains far too many defects, especially when it is known that defect detection costs
several person-hours per defect.
Defects can be easily estimated based on the stability of the underlying software. All
software in a product can be separated into four parts according to its origin:
 Software that is new or changed. This is the standard case where software had been designed especially for this project, either internally or from a supplier.
 Software reused but to be tested (i.e., reused from another project that was never integrated and therefore still contains lots of defects; this includes ported functionality). This holds for reused software with unclear quality status, such as internal libraries.
 Software reused from another project that is in testing (almost) at the same time. This software might be partially tested, and therefore the overlapping of the two test phases of the parallel projects must be accounted for to estimate remaining defects. This is a specific segment of software in product lines or any other parallel usage of the same software without having hardened it so far for field usage.
 Software completely reused from a stable product. This software is considered stable and therefore it has a rather low number of defects. This holds especially for commercial off-the-shelf software components and open source software which is used heavily.
The base of the calculation of new or changed software is the list of modules to be used
in the complete project (i.e., the description of the entire build with all its components).
A defect correction in one of these components typically results in a new version, while
a modification in functionality (in the context of the new project) results in a new
variant. Configuration management tools are used to distinguish the one from the other
while still maintaining a single source.
To statically estimate the number of residual defects in software at the time it is
delivered by the author (i.e., after the author has done all verification activities she can
execute herself), we distinguish four different levels of stability of the software that are
treated independently:
f = a × x + b × y + c × z + d × (w – x – y – z)
with
 x: the number of new or changed KStmt designed and to be tested within this project. This software was specifically designed for that respective project. All other parts of the software are reused with varying stability.
 y: the number of KStmt that are reused but are unstable and not yet tested (based on functionality that was designed in a previous project or release, but was never externally delivered; this includes ported functionality from other projects).
 z: the number of KStmt that are tested in parallel in another project. This software is new or changed for the other project and is entirely reused in the project under consideration.
 w: the number of KStmt in the total software – i.e., the size of this product in its totality.
The factors a-d relate defects in software to size. They depend heavily on the
development environment, project size, maintainability degree and so on. Our
starting point for this initial estimation is actually driven by psychology. Any person
makes roughly one (non-editorial) defect in ten written lines of work. This applies
to code as well as a design document or e-mail, as was observed by the personal
software process (PSP) and many other sources [1,16,17]. The estimation of
remaining defects is language independent because defects are introduced per
thinking and editing activity of the programmer, i.e., visible by written statements.
This translates into 100 defects per KStmt. Half of these defects are found by careful
checking by the author which leaves some 50 defects per KStmt delivered at code
completion. Training, maturity and coding tools can further reduce the number
substantially. We found some 10-50 defects per KStmt depending on the maturity
level of the respective organization. This is based only on new or changed code, not
including any code that is reused or automatically generated.
Most of these original defects are detected by the author before the respective work
product is released. Depending on the underlying individual software process, 40–80%
of these defects are removed by the author immediately. We have experienced in
software that around 10–50 defects per KStmt remain. For the following calculation we
will assume that 30 defects/KStmt are remaining (which is a common value [18]).
Thus, the following factors can be used:


a: 30 defects per KStmt (depending on the engineering methods; should be based on
own data)
b: 30 × 60% defects per KStmt, if defect detection before the start of testing is
60%
c: 30 × 60% × (overlapping degree) × 25% defects per KStmt (depending on
overlapping degree of resources)
d: 30 × 0.1–1% defects per KStmt depending on the number of defects remaining
in a product at the time when it is reused
The percentages are, of course, related to the specific defect detection
distribution in one’s own historical database (Fig. 3). A careful investigation of
stability of reused software is necessary to better substantiate the assumed
percentages.
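These sizes and factors combine into a simple linear estimate of residual defects at code completion. A minimal sketch in Python; the function name, the example sizes, and the chosen overlap and stable-code values are illustrative assumptions, not calibrated data:

```python
# Up-front estimate of residual defects: a*x + b*y + c*z + d*w,
# with x, y, z, w in KStmt (new/changed, reused-untested,
# tested in a parallel project, stable) and factors a-d as above.

def estimate_residual_defects(x, y, z, w,
                              base=30.0,            # residual defects/KStmt in new code
                              detect_before_test=0.60,
                              overlap=0.50,         # assumed overlap of parallel test phases
                              stable_share=0.003):  # assumed point in the 0.1-1% range
    a = base
    b = base * detect_before_test
    c = base * detect_before_test * overlap * 0.25
    d = base * stable_share
    return a * x + b * y + c * z + d * w

# Illustrative product: 20 KStmt new/changed, 5 reused but untested,
# 10 tested in a parallel project, 65 KStmt stable.
print(estimate_residual_defects(20, 5, 10, 65))
```

Calibrating the base rate, detection percentages and overlap against one's own historical database, as the text recommends, is what makes such an estimate credible.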


Fig. 3: Typical benchmark effects of detecting defects earlier in the life-cycle
Since defects can never be entirely avoided, different quality control techniques are
used in combination for detecting defects during the product life-cycle. They are listed
in the sequence in which they are applied throughout the development phases, starting with
requirements and ending with system test:






- Requirements (specification) reviews
- Design (document) reviews
- Compile level checks with tools
- Static code analysis with automatic tools
- Manual code reviews and inspections with checklists based on typical defect
situations or critical areas in the software
- Enhanced reviews and testing of critical areas (in terms of complexity, former
failures, expected defect density, individual change history, customer’s risk and
occurrence probability)
- Unit testing
- Focused testing by tracking the effort spent for analyses, reviews, and
inspections and separating according to requirements to find out areas not
sufficiently covered
- Systematic testing by using test coverage measurements (e.g., C0 and C1
coverage) and improvement
- Operational testing by dynamic execution already during integration testing
- Automatic regression testing of any redelivered code
- System testing by applying operational profiles and usage specifications.
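The C0 and C1 coverage measures used in systematic testing can be computed directly from execution traces. A minimal sketch with hypothetical trace data (statement and branch identifiers are invented):

```python
# C0 coverage: share of statements executed at least once.
# C1 coverage: share of branch outcomes (true/false per decision) taken.

def c0_coverage(executed_stmts, all_stmts):
    return len(set(executed_stmts) & set(all_stmts)) / len(set(all_stmts))

def c1_coverage(taken_branches, all_branches):
    # Branches are (decision_id, outcome) pairs, e.g. (1, True).
    return len(set(taken_branches) & set(all_branches)) / len(set(all_branches))

all_stmts = range(1, 11)  # statements 1..10
all_branches = [(1, True), (1, False), (2, True), (2, False)]

print(c0_coverage([1, 2, 3, 4, 5, 6, 7, 8], all_stmts))               # 0.8
print(c1_coverage([(1, True), (2, True), (2, False)], all_branches))  # 0.75
```

Full C0 coverage does not imply full C1 coverage, which is why the two are tracked separately.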
We will further focus on several selected approaches that are applied for improved
defect detection before starting with integration and system test because those
techniques are most cost-effective.
Note that the starting point for effectively reducing defects and improving reliability is
to track all defects that are detected. Defects must be recorded for each defect
detection activity. Counting defects and deriving the reliability (that is failures over
time) is the most widely applied and accepted method used to determine software
quality. Counting defects during the complete project helps to estimate the duration of
distinct activities (e.g., unit testing or subsystem testing) and improves the underlying
processes. Failures reported during system testing or field application must be traced
back to their primary causes and specific defects in the design (e.g., design decisions or
lack of design reviews).
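Recording every defect per detection activity, as described above, needs only a small data model. A sketch with illustrative field names and example records (none of these identifiers are prescribed by the text):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class DefectRecord:
    defect_id: int
    detection_activity: str  # e.g. "design review", "unit test", "field"
    phase_introduced: str    # traced primary cause, e.g. "design"
    module: str
    severity: str

log = [
    DefectRecord(1, "requirements review", "requirements", "billing", "major"),
    DefectRecord(2, "unit test", "coding", "parser", "minor"),
    DefectRecord(3, "system test", "design", "protocol", "critical"),
]

# Counting defects per detection activity supports estimating the duration
# of distinct activities and tracing failures back to their primary causes.
by_activity = Counter(r.detection_activity for r in log)
print(by_activity["unit test"])  # 1
```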
Quality improvement activities must be driven by a careful look into what they mean
for the bottom line of the overall product cost. It means to continuously investigate
what this best level of quality re- ally means, both for the customers and for the
engineering teams who want to deliver it.
One does not build a sustainable customer relationship by delivering bad quality and
ruining his reputation just to achieve a specific delivery date. And it is useless to spend
an extra amount on improving quality to a level nobody wants to pay for. The optimum
seemingly is in between. It means to achieve the right level of quality and to deliver in
time. Most important yet is to know from the beginning of the project what is actually
relevant for the customer or market and set up the project accordingly. Objectives will
be met if they are driven from the beginning.
We look primarily at factors such as cost of non-quality to follow through this business
reasoning of quality improvements. For this purpose we measure all cost related to
error detection and removal (i.e., cost of non-quality) and normalize by the size of the
product (i.e., normalize defect costs). We take a conservative approach in only
considering those effects that appear inside our engineering activities, i.e., not
considering opportunistic effects or any penalties for delivering insufficient quality.
The most cost-effective techniques for defect detection are requirements reviews. For
code, reviews, inspections and unit test are the most cost-effective techniques, aside
from static code analysis. Detecting defects in architecture and design documents has
considerable benefit from a cost perspective, because these defects are expensive to
correct at later stages. Assuming good quality specifications, major yields in terms of
reliability, however, can be attributed to better code, for the simple reason that there
are many more defects residing in code that were inserted during the coding activity.
We therefore provide more depth on techniques that help to improve the quality of
code, namely peer reviews (i.e., code reviews and formal code inspections) and unit
test (which might include static and dynamic code analysis).
There are six possible paths of combining manual defect detection techniques in the
delivery of a piece of software from code complete until the start of integration test
(Fig. 4). The paths indicate the permutations of doing code reviews alone, performing
code inspections and applying unit test. Each path indicated by the arrows shows
which activities are performed on a piece of code. An arrow crossing a box means that
the activity is not applied. Defect detection effectiveness of a code inspection is much
higher than that of a code review. Unit test finds different types of defects than
reviews. However, cost also varies depending on which technique is used, which
explains why these different permutations are used. In our experience code review is
the cheapest detection technique (with ca. 1-2 PH/defect), while manual unit test is
the most expensive (with ca. 1-5 PH/defect, depending on automation degree). Code
inspections lie somewhere in between. Although the best approach from a mere defect
detection perspective is to apply inspections and unit test, cost considerations and the
objective to reduce elapsed time and thus improve throughput suggest carefully
evaluating which path to follow in order to most efficiently and effectively detect and
remove defects.
Fig. 4: Six possible paths for modules between end of coding and start of
integration test
Unit tests, however, combined with C0 coverage targets, have the highest effectiveness
for regression testing of existing functionality. Inspections, on the other hand, help in
detecting distinct defect classes that can only be found under real load (or even stress)
in the field.
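The person-hour figures quoted above can be used to compare the manual techniques before choosing a path. A rough sketch; the review and unit-test ranges come from the text, while the inspection range and the defect count are assumptions:

```python
# Approximate person-hours (PH) per detected defect, per technique.
cost_ph_per_defect = {
    "code review": (1, 2),      # cheapest, per the text
    "code inspection": (2, 4),  # "somewhere in between" -- assumed range
    "unit test": (1, 5),        # most expensive, depending on automation degree
}

def expected_cost(technique, defects_found):
    lo, hi = cost_ph_per_defect[technique]
    return defects_found * (lo + hi) / 2  # midpoint of the quoted range

# Hypothetical module with 20 detectable defects:
for technique in cost_ph_per_defect:
    print(technique, expected_cost(technique, 20), "PH")
```

Since the techniques also find different defect classes, such a cost view is only one input into choosing which of the six paths to follow.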
Defects are not distributed homogeneously through new or changed code. An analysis
of many projects revealed the applicability of the Pareto rule: 20-30% of the modules
are responsible for 70- 80% of the defects of the whole project. These critical
components need to be identified as early as possible, i.e., in the case of legacy systems
at start of detailed design, and for new software during coding. By concentrating on
these components the effectiveness of code inspections and unit testing is increased
and fewer defects have to be found during test phases. By concentrating on defect-prone
modules, both effectiveness and efficiency are improved. Our main approach to
identify defect-prone software modules is a criticality prediction taking into account
several criteria. One criterion is the analysis of module complexity based on
complexity measurements. Other criteria concern the amount of new or changed code
in a module, and the number of field defects a module had in the preceding project.
Code inspections are first applied to heavily changed modules, in order to optimize
payback of the additional effort that has to be spent compared to the lower effort for
code reading. Formal code reviews are recommended even for very small changes
with a checking time shorter than two hours in order to profit from a good efficiency of
code reading. The effort for know-how transfer to another designer can be saved.
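The three criteria named above can be combined into a single criticality score for ranking modules. A sketch; the weights and module data are invented and would in practice be calibrated against one's own field-defect history:

```python
# Criticality score from module complexity, amount of new/changed code,
# and field defects in the preceding project (weights are assumptions).

def criticality(complexity, changed_kstmt, prior_field_defects,
                w_cplx=0.5, w_chg=0.3, w_hist=0.2):
    return (w_cplx * complexity
            + w_chg * changed_kstmt
            + w_hist * prior_field_defects)

modules = {
    "protocol": criticality(complexity=55, changed_kstmt=5.1, prior_field_defects=12),
    "parser":   criticality(complexity=42, changed_kstmt=3.0, prior_field_defects=7),
    "ui":       criticality(complexity=12, changed_kstmt=0.4, prior_field_defects=1),
}

# Heavily changed, complex modules rank first and get inspections first.
for name, score in sorted(modules.items(), key=lambda kv: -kv[1]):
    print(name, round(score, 2))
```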
It is of great benefit for improved quality management to be able to predict early on in
the development process those components of a software system that are likely to
have a high defect rate or those requiring additional development effort. Criticality
prediction is based on selecting a distinct small share of modules that incorporate sets
of properties that would typically cause defects to be introduced during design more
often than in modules that do not possess such attributes. Criticality prediction is thus
a technique for risk analysis during the design process.

Criticality prediction addresses typical questions often asked in software
engineering projects:
- How can I early identify the relatively small number of critical components that
make significant contribution to defects identified later in the life-cycle?
- Which modules should be redesigned because their maintainability is bad and
their overall criticality to the project’s success is high?
- Are there structural properties that can be measured early in the code to
predict quality attributes?
- If so, what is the benefit of introducing a measurement program that
investigates structural properties of software?
- Can I use the often heuristic design and test know-how on trouble identification
and risk assessment to build up a knowledge base to identify critical
components early in the development process?
Criticality prediction is a multifaceted approach taking into account several criteria.
Complexity is a key influence on quality and productivity. Having uncontrolled
accidental complexity in the product will definitely decrease productivity (e.g., gold
plating, additional rework, more test effort) and quality (more defects). A key to
controlling accidental complexity from creeping into the project is the measurement
and analysis of complexity throughout the life-cycle. Volume, structure, order or the
connections of different objects contribute to complexity. However, do they all account
for it equally? The clear answer is no, because different people with different skills
assign complexity subjectively, according to their experience in the area. Certainly
criticality must be predicted early in the life-cycle to effectively serve as a managerial
instrument for quality improvement, quality control effort estimation and resource
planning as soon as possible in a project. Tracing comparable complexity metrics for
different products throughout the life-cycle is advisable to find out when essential
complexity is overruled by accidental complexity. Care must be taken that the
complexity metrics are comparable, that is, they should measure the same factors of
complexity.
Having identified such overly critical modules, risk management must be applied. The
most critical and most complex, for instance, the top 5% of the analyzed modules are
candidates for a redesign. For cost reasons mitigation is not only achieved with
redesign. The top 20% should have a code inspection instead of the usual code
reading, and the top 80% should be at least entirely (C0 coverage of 100%) unit tested.
By concentrating on these components the effectiveness of code inspections and unit
test is increased and fewer defects have to be found during test phases. To achieve
feedback for improving predictions the approach is integrated into the development
process end-to-end (requirements, design, code, system test, deployment).
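The thresholds above translate directly into a mitigation rule per criticality rank. A sketch, assuming modules have already been sorted from most to least critical; the module names are invented:

```python
# Map a module's criticality percentile (0.0 = most critical) to the
# mitigation suggested above: top 5% redesign candidates, top 20% code
# inspection, top 80% fully (100% C0) unit tested, rest code reading.

def mitigation(percentile):
    if percentile < 0.05:
        return "redesign candidate"
    if percentile < 0.20:
        return "code inspection"
    if percentile < 0.80:
        return "unit test with 100% C0 coverage"
    return "code reading"

ranked = ["protocol", "parser", "scheduler", "ui", "logging"]  # most to least critical
n = len(ranked)
for i, module in enumerate(ranked):
    print(module, "->", mitigation(i / n))
```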
It must be emphasized that using criticality prediction techniques does not mean
attempting to detect all defects. Instead, they belong to the set of managerial
instruments that try to optimize resource allocation by focusing them on areas with
many defects that would affect the utility of the delivered product. The trade-off of
applying complexity-based predictive quality models is estimated based on:
- limited resources are assigned to high-risk jobs or components
- impact analysis and risk assessment of changes is feasible based on affected or
changed complexity
- gray-box testing strategies are applied to identified high-risk components
- fewer customer-reported failures
Our experience shows that, in accordance with other literature, correcting defects in
early phases is more efficient, because the designer is still familiar with the problem
and the correction delay during testing is reduced.
The effect and business case for applying complexity-based criticality prediction to a
new project can be summarized based on results from our own experience database
(taking a very conservative ratio of only 40% defects in critical components):
- 20% of all modules in the project were predicted as most critical (after coding);
- these modules contained over 40% of all defects (up to release time).
Knowing from these and many other projects that
- 60% of all defects can theoretically be detected until the end of unit test, and
- defect correction during unit test and code reading costs less than 10%
compared to defect correction during system test,
it can be calculated that 24% of all defects can be detected early by investigating 20% of
all modules more intensively with 10% of effort compared to late defect correction
during test, therefore yielding a 20% total cost reduction for defect correction.
Additional costs for providing the statistical analysis are in the range of two person-days
per project. Necessary tools are off-the-shelf and account for even less per project.
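The arithmetic behind this business case can be reproduced step by step; all inputs are the conservative figures quoted in the text:

```python
defects_in_critical = 0.40  # share of all defects in the predicted 20% of modules
detectable_early = 0.60     # share detectable by the end of unit test
early_cost_ratio = 0.10     # early correction costs 10% of late correction

# Share of all defects that can be detected early in the critical modules:
early_detected = defects_in_critical * detectable_early
print(round(early_detected, 2))  # 0.24 -> the 24% cited

# Those defects are corrected at 10% of the late-correction cost,
# saving roughly the 20% total correction cost cited.
saving = early_detected * (1 - early_cost_ratio)
print(round(saving, 3))  # 0.216
```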
References and Supplementary Materials
Books and Journals
1. Kathy Schwalbe, Information Technology Project Management, Thomson.
2. Jack Marchewka, Information Technology Project Management: Providing
Measurable Organizational Value, Wiley India.
3. Andrew Stellman and Jennifer Greene, Applied Software Project Management, SPD.
4. Richard Thayer and Edward Yourdon, Software Engineering Project Management,
Wiley India.
Online Supplementary Reading Materials
Online Instructional Videos
CS-6209 Software Engineering 1
Week 6: Software Quality Management
Module 005: Software Quality Management
Course Learning Outcomes:
1. Understand the issue of Software Quality and the activities present in a
typical Quality Management process.
2. Understand the advantages of difficulties presented by the use of Quality
standards in Software Engineering
3. Understand the concepts of Software Quality Management
Introduction
Computers and software are ubiquitous. Mostly they are embedded and we don’t even
realize where and how we depend on software. We might accept it or not, but software is
governing our world and society and will continue further on. It is difficult to imagine our
world without software. There would be no running water, food supplies, business or
transportation would disrupt immediately, diseases would spread, and security would be
dramatically reduced – in short, our society would disintegrate rapidly. A key reason our
planet can bear over six billion people is software. Since software is so ubiquitous, we need
to stay in control. We have to make sure that the systems and their software run as we
intend – or better. Only if software has the right quality will we stay in control and not
suddenly realize that things are going awfully wrong. Software quality management is the
discipline that ensures that the software we are using and depending upon is of right
quality. Only with solid understanding and discipline in software quality management
will we effectively stay in control.
What exactly is software quality management? To address this question we first need to
define the term “quality”. Quality is the ability of a set of inherent characteristics of a
product, service, product component, or process to fulfill requirements of customers [1].
From a management and controlling perspective quality is the degree to which a set of
inherent characteristics fulfills requirements. Quality management is the sum of all planned
systematic activities and processes for creating, controlling and assuring quality [1]. Fig. 1
indicates how quality management relates to the typical product development. We have
used a V-type visualization of the development process to illustrate that different quality
control techniques are applied to each level of abstraction from requirements engineering
to implementation. Quality control questions are mentioned on the right side. They are
addressed by techniques such as reviews or testing. Quality assurance questions are
mentioned in the middle. They are addressed by audits or sample checks. Quality
improvement questions are mentioned on the left side. They are addressed by dedicated
improvement projects and continuous improvement activities.
Quality Concepts
The long-term profitability of a company is heavily impacted by the quality
perceived by customers. Customers view achieving the right balance of reliability,
market window of a product and cost as having the greatest effect on their long-term
link to a company. This has been long articulated, and applies in different
economies and circumstances. Even in restricted competitive situations, such as a
market with few dominant players (e.g., the operating system market of today or the
database market of a few years ago), the principle applies and has given rise to open
source development. With the competitor being often only a mouse-click away,
today quality has even higher relevance. This applies to Web sites as well as to
commodity goods with either embedded or dedicated software deliveries. And the
principle certainly applies to investment goods, where suppliers are evaluated by a
long list of different quality attributes.
Methodological approaches to guarantee quality products have led to international
guidelines (e.g., ISO 9001) and widely applied methods to assess the development
processes of software providers (e.g., SEI CMMI). In addition, most companies apply
certain techniques of criticality prediction that focus on identifying and reducing
release risk. Unfortunately, many efforts usually concentrate on testing and
reworking instead of proactive quality management.
Yet there is a problem with quality in the software industry. By quality we mean the
bigger picture, such as delivering according to commitments. While solutions
abound, knowing which solutions work is the big question. What are the most
fundamental underlying principles in successful projects? What can be done right
now? What actually is good or better? What is good enough – considering the
immense market pressure and competition across the globe?
A simple – yet difficult to digest and implement – answer to these questions is that
software quality management is not simply a task, but rather a habit. It must be
engrained in the company culture. It is something that is visible in the way people
are working, independent of their role. It certainly means that every single person
in the organization sees quality as her own business, not that of a quality manager or
a testing team. A simple yet effective test to quickly identify the state of practice
with respect to quality management is to ask around what quality means for an
employee and how he delivers according to this meaning. You will identify that
many see it as a bulky and formal approach to be done to achieve necessary
certificates. Few exceptions exist, such as industries with safety and health impacts.
But even there, you will find different approaches to quality, depending on culture.
Those with carrot and stick will not achieve a true quality culture. Quality is a habit.
It is driven by objectives and not based on beliefs. It is primarily achieved when
each person in the organization knows and is aware of her own role in delivering
quality.
Quality management is the responsibility of the entire enterprise. It is
strategically defined, directed and operationally implemented on various
organizational levels. Fig. 1 shows in a simplified organizational layout with four
tiers the respective responsibilities to successfully implement quality management.
Note that it is not a top-down approach where management sets unrealistic targets
that must be implemented on the project level. It is even more important that
continuous feedback is provided bottom-up so that decisions can be changed or
directions can be adjusted.
Fig. 1: Quality management within the organization
Quality is implemented along the product life-cycle. Fig. 2 shows some pivotal
quality-related activities mapped to the major life-cycle phases. Note that on the left
side strategic directions are set and a respective management system is
implemented. Towards the right side, quality related processes, such as test or
supplier audits are implemented. During evolution of the product with dedicated
services and customer feedback, the product is further optimized and the
management system is adapted where necessary.
It is important to understand that the management system is not specific to a
project, but drives a multitude of projects or even the entire organization. Scale
effects occur with having standardized processes that are systematically applied.
This not only allows moving people to different projects without long learning
curve but also assures proven quality at best possible efficiency. A key step along all
these phases is to recognize that all quality requirements can and should be
specified in quantitative terms. This does not mean “counting defects” as it would be
too late and reactive. It means quantifying quality attributes such as security,
portability, adaptability, maintainability, robustness, usability, reliability and
performance as an objective before the design of a product.
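Stating quality attributes in quantitative terms before design can be as simple as maintaining a table of measurable targets per attribute. The targets below are purely illustrative, not taken from the text:

```python
# Quantified quality objectives, fixed before design rather than
# counted reactively as defects after the fact.
quality_targets = {
    "reliability":     "<= 1 failure per month in operation",
    "performance":     "95th percentile response time <= 200 ms",
    "usability":       ">= 90% task completion by first-time users",
    "maintainability": "median corrective change <= 2 person-days",
}

for attribute, target in quality_targets.items():
    print(f"{attribute}: {target}")
```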
Fig. 2: Quality management along the product life-cycle
A small example will illustrate this need. A software system might have strict
reliability constraints. Instead of simply stating that reliability should achieve less
than one failure per month in operation, which would be reactive, related quality
requirements should target underlying product and process needs to achieve such
reliability. During the strategy phase, the market or customer needs for reliability
need to be elicited. Is reliability important as an image or is it rather availability?
What is the perceived value of different failure rates? A next step is to determine
how these needs will be broken down to product features, components and
capabilities. Which architecture will deliver the desired reliability and what are
cost impacts? What component and supplier qualification criteria need to be
established and maintained throughout the product life-cycle? Then the underlying
quality processes need to be determined. This should not be done ad-hoc and for
each single project individually but by tailoring organizational processes, such as
product life-cycle, project reviews, or testing to the specific needs of the product.
What test coverage is necessary and how will it be achieved? Which test equipment
and infrastructure for interoperability of components needs to be applied? What
checklists should be used in preparing for reviews and releases? These processes
need to be carefully applied during development. Quality control will be applied by
each single engineer and quality assurance will be done systematically for selected
processes and work products. Finally the evolution phase of the product needs to
establish criteria for service request management and assuring the right quality
level of follow-on releases and potential defect corrections. A key question to
address across all these phases is how to balance quality needs with necessary effort
and availability of skilled people. Both relate to business, but that is at times
overlooked. We have seen companies that due to cost and time constraints would
reduce requirements engineering or early reviews of specifications and later found
out that follow-on costs were higher than what was cut out. A key insight for
achieving quality and therefore business performance was once phrased by
Abraham Lincoln: “If someone gave me eight hours to chop down a tree, I would
spend six hours sharpening the axe.”
Process Maturity and Quality
The quality of a product or service is mostly determined by the processes and
people developing and delivering the product or service. Technology can be bought,
it can be created almost on the spot, and it can be introduced by having good
engineers. What matters to the quality of a product is how they are working and
how they introduce this new technology. Quality is not at a stand-still, it needs to be
continuously questioned and improved. With today’s low entry barriers to software
markets, one thing is sure: There is always a company or entrepreneur just
approaching your home turf and conquering your customers. To continuously
improve and thus stay ahead of competition, organizations need to change in a
deterministic and results-oriented way. They need to know and improve their
process maturity.
The concept of process maturity is not new. Many of the established quality models
in manufacturing use the same concept. This was summarized by Philip Crosby in
his bestselling book “Quality is Free” in 1979. He found from his broad experiences
as a senior manager in different industries that business success depends on quality.
With practical insight and many concrete case studies he could empirically link
process performance to quality. His credo was stated as: “Quality is measured by the
cost of quality which is the expense of nonconformance – the cost of doing things
wrong.”
First organizations must know where they are; they need to assess their processes.
The more detailed the results from such an assessment, the easier and more
straightforward it is to establish a solid improvement plan. That was the basic idea
with the “maturity path” concept proposed by Crosby in the 1970s. He distinguishes
five maturity stages, namely





- Stage 1: Uncertainty
- Stage 2: Awakening
- Stage 3: Enlightening
- Stage 4: Wisdom
- Stage 5: Certainty
Defects – Prediction, Detection, Correction and Prevention
To achieve the right quality level in developing a product it is necessary to understand
what it means to have insufficient quality. Let us start with the concept of defects. A
defect is defined as an imperfection or deficiency in a system or component where that
component does not meet its requirements or specifications, which could yield a failure.
The causal relationship: a human error during design of the product causes a defect,
which in turn may cause a failure. Defects are not just information about
something wrong in a software system or about the progress in building up quality.
Defects are information about problems in the process that created this software. The
four questions to address are:
1. How many defects are there and have to be removed?
2. How can the critical and relevant defects be detected most efficiently?
3. How can the critical and relevant defects be removed most effectively and
efficiently?
4. How can the process be changed to avoid the defects from reoccurring?
These four questions relate to the four basic quality management techniques of
prediction, detection, correction and prevention. The first step is to identify how
many defects there are and which of those defects are critical to product performance.
The underlying techniques are statistical methods of defect estimation, reliability
prediction and criticality assessment. These defects have to be detected by quality
control activities, such as inspections, reviews, unit test, etc. Each of these techniques
has its strengths and weaknesses, which explains why they ought to be combined to
be most efficient. It is of little value to put large numbers of people on test when in-depth
requirements reviews would be much faster and cheaper. Once defects are detected and
identified, the third step is to remove them. This sounds easier than it actually is due to
the many ripple effects each correction has to a system. Regression tests and reviews of
corrections are absolutely necessary to assure that quality won’t degrade with changes.
A final step is to embark on preventing these defects from re-occurring. Often engineers
and their management state that this actually should be the first and most relevant step.
We agree, but experience tells that again and again, people stumble across defect
avoidance simply because their processes won’t support it. In order to effectively avoid
defects engineering processes must be defined, systematically applied and
quantitatively managed. This being in place, defect prevention is a very cost-effective
means to boost both customer satisfaction and business performance, as many high-maturity organizations such as Motorola, Boeing or Wipro show.
Defect removal is not about assigning blame but about building better quality and
improving the processes to ensure quality. Reliability improvement always needs
measurements on effectiveness (i.e., percentage of removed defects for a given activity)
compared to efficiency (i.e., effort spent for detecting and removing a defect in the
respective activity). Such measurement asks for the number of residual defects at a
given point in time or within the development process.
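The effectiveness and efficiency measures defined above are simple ratios. A minimal sketch with hypothetical inspection data:

```python
# Effectiveness: share of the defects present that an activity removed.
# Efficiency: effort spent per defect detected and removed in that activity.

def effectiveness(defects_removed, defects_present):
    return defects_removed / defects_present

def efficiency(effort_hours, defects_removed):
    return effort_hours / defects_removed

# Hypothetical inspection round: 120 defects present, 48 removed, 100 PH spent.
print(effectiveness(48, 120))  # 0.4
print(efficiency(100, 48))     # ~2.08 PH per defect
```

Tracking both ratios per activity is what allows comparing, say, inspections against unit test on equal footing.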
But how is the number of defects in a piece of software or in a product estimated? We
will outline the approach we follow for up-front estimation of residual defects in any
software that may be merged from various sources with different degrees of stability.
We distinguish between upfront defect estimation, which is static by nature as it looks
only at the different components of the system and their inherent quality before the
start of validation activities, and reliability models which look more dynamically during
validation activities at residual defects and failure rates.
Only a few studies have been published on static defect estimation. They typically relate
the estimate to the number of already detected defects, independently of the activity
that produced those defects, or they use the famous error seeding, which is well known
but rarely applied: most software engineers believe it is of no use to add errors to
software while there are still far too many defects in it, and while defect detection is
known to cost several person-hours per defect.
Defects can be easily estimated based on the stability of the underlying software. All
software in a product can be separated into four parts according to its origin:
 Software that is new or changed. This is the standard case where software has
been designed especially for this project, either internally or from a supplier.
 Software reused but to be tested (i.e., reused from another project that was
never integrated and therefore still contains lots of defects; this includes ported
functionality). This holds for reused software with unclear quality status, such as
internal libraries.
 Software reused from another project that is in testing (almost) at the same
time. This software might be partially tested, and therefore the overlapping of
the two test phases of the parallel projects must be accounted for to estimate
remaining defects. This is a specific segment of software in product lines or any
other parallel usage of the same software without having hardened it so far for
field usage.
 Software completely reused from a stable product. This software is considered
stable and therefore has a rather low number of defects. This holds especially
for commercial off-the-shelf software components and open source software
which is used heavily.
The base of the calculation of new or changed software is the list of modules to be used
in the complete project (i.e., the description of the entire build with all its components).
A defect correction in one of these components typically results in a new version, while
a modification in functionality (in the context of the new project) results in a new
variant. Configuration management tools are used to distinguish the one from the other
while still maintaining a single source.
To statically estimate the number of residual defects in software at the time it is
delivered by the author (i.e., after the author has done all verification activities she can
execute herself), we distinguish four different levels of stability of the software that are
treated independently:
f = a × x + b × y + c × z + d × (w – x – y – z)
with
 x: the number of new or changed KStmt designed and to be tested within this
project. This software was specifically designed for that respective project. All
other parts of the software are reused with varying stability.
 y: the number of KStmt that are reused but are unstable and not yet tested
(based on functionality that was designed in a previous project or release, but
was never externally delivered; this includes ported functionality from other
projects).
 z: the number of KStmt that are tested in parallel in another project. This
software is new or changed for the other project and is entirely reused in the
project under consideration.
 w: the number of KStmt in the total software, i.e., the size of this product in its
totality.
The factors a-d relate defects in software to size. They depend heavily on the
development environment, project size, maintainability degree and so on. Our
starting point for this initial estimation is actually driven by psychology. Any person
makes roughly one (non-editorial) defect in ten written lines of work. This applies
to code as well as a design document or e-mail, as was observed by the personal
software process (PSP) and many other sources [1,16,17]. The estimation of
remaining defects is language independent because defects are introduced per
thinking and editing activity of the programmer, i.e., visible by written statements.
This translates into 100 defects per KStmt. Half of these defects are found by careful
checking by the author which leaves some 50 defects per KStmt delivered at code
completion. Training, maturity and coding tools can further reduce the number
substantially. We found some 10-50 defects per KStmt depending on the maturity
level of the respective organization. This is based only on new or changed code, not
including any code that is reused or automatically generated.
Most of these original defects are detected by the author before the respective work
product is released. Depending on the underlying individual software process, 40–80%
of these defects are removed by the author immediately. We have experienced in
software that around 10–50 defects per KStmt remain. For the following calculation we
will assume that 30 defects/KStmt remain (which is a common value [18]). Thus, the
following factors can be used:
 a: 30 defects per KStmt (depending on the engineering methods; should be
based on one's own data)
 b: 30 × 60% defects per KStmt, if defect detection before the start of testing is
60%
 c: 30 × 60% × (overlapping degree) × 25% defects per KStmt (depending on the
overlapping degree of resources)
 d: 30 × 0.1–1% defects per KStmt, depending on the number of defects
remaining in a product at the time when it is reused
The percentages are, of course, related to the specific defect detection distribution in
one's own historical database (Fig. 3). A careful investigation of the stability of reused
software is necessary to better substantiate the assumed percentages.
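As an illustration, the estimation formula above can be written down directly. The function below is a minimal sketch; the default coefficients (30 defects/KStmt, 60% pre-test detection, an assumed 50% overlap and 0.5% residual rate for stable code) are the assumptions from the text and should be replaced by one's own historical data:

```python
def residual_defects(x, y, z, w, base=30.0, pre_test_detection=0.60,
                     overlap=0.50, stable_rate=0.005):
    """Static estimate f = a*x + b*y + c*z + d*(w - x - y - z).

    All sizes are in KStmt:
      x: new or changed code, y: reused but untested code,
      z: code tested in a parallel project, w: total product size.
    Coefficients follow the text; replace them with your own data.
    """
    a = base                                        # new/changed code
    b = base * pre_test_detection                   # reused, untested
    c = base * pre_test_detection * overlap * 0.25  # tested in parallel
    d = base * stable_rate                          # stable, reused code
    return a * x + b * y + c * z + d * (w - x - y - z)
```

For example, a 100 KStmt product with 10 KStmt new code and 5 KStmt reused untested code yields an estimate of roughly 400 residual defects at code delivery under these assumptions.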
Fig. 3: Typical benchmark effects of detecting defects earlier in the life-cycle
Since defects can never be entirely avoided, different quality control techniques are
used in combination for detecting defects during the product life-cycle. They are listed
in sequence when they are applied throughout the development phase, starting with
requirements and ending with system test:
 Requirements (specification) reviews
 Design (document) reviews
 Compile-level checks with tools
 Static code analysis with automatic tools
 Manual code reviews and inspections with checklists based on typical defect
situations or critical areas in the software
 Enhanced reviews and testing of critical areas (in terms of complexity, former
failures, expected defect density, individual change history, customer's risk and
occurrence probability)
 Unit testing
 Focused testing by tracking the effort spent for analyses, reviews, and
inspections and separating according to requirements to find out areas not
sufficiently covered
 Systematic testing by using test coverage measurements (e.g., C0 and C1
coverage) and improvement
 Operational testing by dynamic execution already during integration testing
 Automatic regression testing of any redelivered code
 System testing by applying operational profiles and usage specifications.
We will further focus on several selected approaches that are applied for improved
defect detection before starting with integration and system test because those
techniques are most cost-effective.
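Of the techniques listed above, unit testing is the one most directly visible in code. The fragment below is a generic sketch of how a unit test pins down expected behavior; the `word_count` function is invented purely for illustration:

```python
import unittest

def word_count(text):
    """Function under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    """Each test case records one expected behavior of the unit."""

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

    def test_counts_multiple_words(self):
        self.assertEqual(word_count("quality is free"), 3)

# Run the suite programmatically; in practice a test runner or CI job does this.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Such tests, once written, double as the automatic regression tests mentioned above: they are re-run on every redelivery of the code.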
Note that the starting point for effectively reducing defects and improving reliability is
to track all defects that are detected. Defects must be recorded for each defect
detection activity. Counting defects and deriving the reliability (that is failures over
time) is the most widely applied and accepted method used to determine software
quality. Counting defects during the complete project helps to estimate the duration of
distinct activities (e.g., unit testing or subsystem testing) and improves the underlying
processes. Failures reported during system testing or field application must be traced
back to their primary causes and specific defects in the design (e.g., design decisions or
lack of design reviews).
Quality improvement activities must be driven by a careful look into what they mean
for the bottom line of the overall product cost. It means to continuously investigate
what this best level of quality really means, both for the customers and for the
engineering teams who want to deliver it.
One does not build a sustainable customer relationship by delivering bad quality and
ruining his reputation just to achieve a specific delivery date. And it is useless to spend
an extra amount on improving quality to a level nobody wants to pay for. The optimum
seemingly is in between. It means to achieve the right level of quality and to deliver in
time. Most important yet is to know from the beginning of the project what is actually
relevant for the customer or market and set up the project accordingly. Objectives will
be met if they are driven from the beginning.
We look primarily at factors such as cost of non-quality to follow through this business
reasoning of quality improvements. For this purpose we measure all cost related to
error detection and removal (i.e., cost of non-quality) and normalize by the size of the
product (i.e., normalize defect costs). We take a conservative approach in only
considering those effects that appear inside our engineering activities, i.e., not
considering opportunistic effects or any penalties for delivering insufficient quality.
The most cost-effective techniques for defect detection are requirements reviews. For
code, reviews, inspections and unit test are the most cost-effective techniques, aside
from static code analysis. Detecting defects in architecture and design documents has
considerable benefit from a cost perspective, because these defects are expensive to
correct at later stages. Assuming good quality specifications, major yields in terms of
reliability, however, can be attributed to better code, for the simple reason that there
are many more defects residing in code that were inserted during the coding activity.
We therefore provide more depth on techniques that help to improve the quality of
code, namely peer reviews of code (i.e., code reviews and formal code inspections) and unit
test (which might include static and dynamic code analysis).
There are six possible paths of combining manual defect detection techniques in the
delivery of a piece of software from code complete until the start of integration test
(Fig. 4). The paths indicate the permutations of doing code reviews alone, performing
code inspections and applying unit test. Each path indicated by the arrows shows
which activities are performed on a piece of code. An arrow crossing a box means that
the activity is not applied. Defect detection effectiveness of a code inspection is much
higher than that of a code review. Unit test finds different types of defects than
reviews. However, cost also varies depending on which technique is used, which
explains why these different permutations are used. In our experience code review is
the cheapest detection technique (with ca. 1-2 person-hours per defect), while manual
unit test is the most expensive (with ca. 1-5 person-hours per defect, depending on the
degree of automation). Code
inspections lie somewhere in between. Although the best approach from a mere defect
detection perspective is to apply inspections and unit test, cost considerations and the
objective to reduce elapsed time and thus improve throughput suggest carefully
evaluating which path to follow in order to most efficiently and effectively detect and
remove defects.
Fig. 4: Six possible paths for modules between end of coding and start of
integration test
Unit tests, however, combined with C0 coverage targets, have the highest effectiveness
for regression testing of existing functionality. Inspections, on the other hand, help in
detecting distinct defect classes that otherwise could only be found under real load (or
even stress) in the field.
Defects are not distributed homogeneously through new or changed code. An analysis
of many projects revealed the applicability of the Pareto rule: 20-30% of the modules
are responsible for 70- 80% of the defects of the whole project. These critical
components need to be identified as early as possible, i.e., in the case of legacy systems
at start of detailed design, and for new software during coding. By concentrating on
these components the effectiveness of code inspections and unit testing is increased
and fewer defects have to be found during test phases. By concentrating on defect-prone
modules both effectiveness and efficiency are improved. Our main approach to
identify defect-prone software modules is a criticality prediction taking into account
several criteria. One criterion is the analysis of module complexity based on
complexity measurements. Other criteria concern the amount of new or changed code
in a module, and the number of field defects a module had in the preceding project.
Code inspections are first applied to heavily changed modules, in order to optimize
payback of the additional effort that has to be spent compared to the lower effort for
code reading. Formal code reviews are recommended even for very small changes
with a checking time shorter than two hours in order to profit from a good efficiency of
code reading. The effort for know-how transfer to another designer can be saved.
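The criteria just mentioned (complexity, amount of changed code, field defects of the predecessor) can be combined into a simple ranking, as a minimal sketch. The weights and module data below are purely illustrative, not calibrated values:

```python
def critical_modules(modules, share=0.20):
    """Return the names of the top `share` of modules ranked by a
    weighted criticality score (weights are illustrative only)."""
    def score(m):
        return (0.4 * m["complexity"]
                + 0.4 * m["changed_kstmt"]
                + 0.2 * m["field_defects"])
    ranked = sorted(modules, key=score, reverse=True)
    count = max(1, round(len(ranked) * share))
    return [m["name"] for m in ranked[:count]]

modules = [
    {"name": "parser", "complexity": 90, "changed_kstmt": 12, "field_defects": 7},
    {"name": "net",    "complexity": 55, "changed_kstmt": 5,  "field_defects": 1},
    {"name": "db",     "complexity": 40, "changed_kstmt": 3,  "field_defects": 2},
    {"name": "ui",     "complexity": 20, "changed_kstmt": 1,  "field_defects": 0},
    {"name": "log",    "complexity": 10, "changed_kstmt": 0,  "field_defects": 0},
]
# With five modules, the top 20% is the single most critical module.
```

In practice the scoring function would be calibrated against the organization's own defect history rather than fixed weights.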
It is of great benefit for improved quality management to be able to predict early on in
the development process those components of a software system that are likely to
have a high defect rate or those requiring additional development effort. Criticality
prediction is based on selecting a distinct small share of modules that incorporate sets
of properties that would typically cause defects to be introduced during design more
often than in modules that do not possess such attributes. Criticality prediction is thus
a technique for risk analysis during the design process.
Criticality prediction addresses typical questions often asked in software engineering
projects:
 How can I identify early the relatively small number of critical components that
make a significant contribution to defects identified later in the life-cycle?
 Which modules should be redesigned because their maintainability is bad and
their overall criticality to the project's success is high?
 Are there structural properties that can be measured early in the code to
predict quality attributes?
 If so, what is the benefit of introducing a measurement program that
investigates structural properties of software?
 Can I use the often heuristic design and test know-how on trouble identification
and risk assessment to build up a knowledge base to identify critical
components early in the development process?
Criticality prediction is a multifaceted approach taking into account several criteria.
Complexity is a key influence on quality and productivity. Having uncontrolled
accidental complexity in the product will definitely decrease productivity (e.g., gold
plating, additional rework, more test effort) and quality (more defects). A key to
controlling accidental complexity from creeping into the project is the measurement
and analysis of complexity throughout the life-cycle. Volume, structure, order or the
connections of different objects contribute to complexity. However, do they all account
for it equally? The clear answer is no, because different people with different skills
assign complexity subjectively, according to their experience in the area. Certainly
criticality must be predicted early in the life-cycle to effectively serve as a managerial
instrument for quality improvement, quality control effort estimation and resource
planning as soon as possible in a project. Tracing comparable complexity metrics for
different products throughout the life-cycle is advisable to find out when essential
complexity is overruled by accidental complexity. Care must be taken that the
complexity metrics are comparable, that is, they should measure the same factors of
complexity.
Having identified such overly critical modules, risk management must be applied. The
most critical and most complex, for instance, the top 5% of the analyzed modules are
candidates for a redesign. For cost reasons mitigation is not only achieved with
redesign. The top 20% should have a code inspection instead of the usual code
reading, and the top 80% should be at least entirely (C0 coverage of 100%) unit tested.
By concentrating on these components the effectiveness of code inspections and unit
test is increased and fewer defects have to be found during test phases. To achieve
feedback for improving predictions the approach is integrated into the development
process end-to-end (requirements, design, code, system test, deployment).
It must be emphasized that using criticality prediction techniques does not mean
attempting to detect all defects. Instead, they belong to the set of managerial
instruments that try to optimize resource allocation by focusing them on areas with
many defects that would affect the utility of the delivered product. The trade-off of
applying complexity-based predictive quality models is estimated based on:
 limited resources are assigned to high-risk jobs or components
 impact analysis and risk assessment of changes is feasible based on affected or
changed complexity
 gray-box testing strategies are applied to identified high-risk components
 fewer customer-reported failures
Our experience shows that, in accordance with other literature, correction of defects in
early phases is more efficient, because the designer is still familiar with the problem
and the correction delay during testing is reduced.
The effect and business case for applying complexity-based criticality prediction to a
new project can be summarized based on results from our own experience database
(taking a very conservative ratio of only 40% defects in critical components):
 20% of all modules in the project were predicted as most critical (after coding);
 these modules contained over 40% of all defects (up to release time).
Knowing from these and many other projects that
 60% of all defects can theoretically be detected until the end of unit test, and
 defect correction during unit test and code reading costs less than 10%
compared to defect correction during system test,
it can be calculated that 24% of all defects can be detected early by investigating 20% of
all modules more intensively, with 10% of the effort compared to late defect correction
during test, therefore yielding a 20% total cost reduction for defect correction.
Additional costs for providing the statistical analysis are in the range of two person
days per project. Necessary tools are off-the-shelf and account for even less per project.
References and Supplementary Materials
Books and Journals
1. Information Technology Project Management; Kathy Schwalbe; Thomson
Publication.
2. Information Technology Project Management: Providing Measurable
Organizational Value; Jack Marchewka; Wiley India.
3. Applied Software Project Management; Stellman & Greene; SPD.
4. Software Engineering Project Management; Richard Thayer, Edward Yourdon;
Wiley India.
Online Supplementary Reading Materials
Online Instructional Videos
CS-6209 Software Engineering 1
Week 8-9: Software Implementation and Documentation
Module 007: Software Implementation and
Documentation
Course Learning Outcomes:
1. Implement the Structured Programming in Software Coding and
Implementation
2. Understand the different Programming style and coding guidelines
3. Know the key issues that have to be considered when implementing
software, including software reuse and open-source development.
Software Implementation
In this chapter, we will study about programming methods, documentation and
challenges in software implementation.
Structured Programming
In the process of coding, the lines of code keep multiplying, thus, size of the software
increases. Gradually, it becomes next to impossible to remember the flow of
program. If one forgets how software and its underlying programs, files, procedures
are constructed, it then becomes very difficult to share, debug, and modify the
program. The solution to this is structured programming. It encourages the
developer to use subroutines and loops instead of using simple jumps in the code,
thereby bringing clarity to the code and improving its efficiency. Structured
programming also helps the programmer to reduce coding time and organize code
properly.
Structured programming states how the program shall be coded. It uses three main
concepts:
1. Top-down analysis - A software product is always made to perform some rational
work. This rational work is known as the problem in software parlance. Thus
it is very important that we understand how to solve the problem. Under top-down
analysis, the problem is broken down into small pieces where each one
has some significance. Each piece is individually solved and steps are
clearly stated about how to solve the problem.
2. Modular Programming - While programming, the code is broken down into
smaller groups of instructions. These groups are known as modules,
subprograms, or subroutines. Modular programming is based on the
understanding of top-down analysis. It discourages jumps using 'goto'
statements in the program, which often makes the program flow non-traceable.
Jumps are prohibited and the modular format is encouraged in structured
programming.
3. Structured Coding - In reference to top-down analysis, structured coding
sub-divides the modules into further smaller units of code in terms of their
functionality.
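The three concepts above can be illustrated with a small Python sketch. The payroll problem and all names in it are invented for illustration: the top-level routine only composes sub-solutions, each module solves one piece of the problem, and loops replace jumps:

```python
def read_hours(records):
    """Module 1: extract (name, hours) pairs from raw records."""
    return [(r["name"], r["hours"]) for r in records]

def compute_pay(hours, rate=15.0):
    """Module 2: pay at a (hypothetical) flat hourly rate."""
    return hours * rate

def payroll_report(records):
    """Top-level routine from the top-down analysis: a plain loop
    composes the modules -- no 'goto'-style jumps anywhere."""
    report = {}
    for name, hours in read_hours(records):
        report[name] = compute_pay(hours)
    return report
```

Because each sub-problem lives in its own small function, the flow of the program remains easy to follow, share and debug even as the code grows.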
Functional Programming
Functional programming is a style of programming that uses the concepts
of mathematical functions. A function in mathematics should always produce the
same result on receiving the same argument. In procedural languages, the flow of
the program runs through procedures, i.e. the control of program is transferred to
the called procedure. While control flow is transferring from one procedure to
another, the program changes its state.
In procedural programming, it is possible for a procedure to produce different
results when it is called with the same argument, as the program itself can be in
different state while calling it. This is a property as well as a drawback of procedural
programming, in which the sequence or timing of the procedure execution becomes
important.
Functional programming provides means of computation as mathematical functions,
which produce results irrespective of program state. This makes it possible to
predict the behavior of the program.
Functional programming uses the following concepts:
 First-class and higher-order functions - These functions have the capability to
accept another function as an argument, or they return other functions as results.
 Pure functions - These functions do not include destructive updates, that is,
they do not affect any I/O or memory and if they are not in use, they can
easily be removed without hampering the rest of the program.
 Recursion - Recursion is a programming technique where a function calls
itself and repeats the program code in it unless some pre-defined condition
matches. Recursion is the way of creating loops in functional programming.
 Strict evaluation - It is a method of evaluating the expression passed to a
function as an argument. Functional programming has two types of
evaluation methods, strict (eager) or non-strict (lazy). Strict evaluation
always evaluates the expression before invoking the function. Non-strict
evaluation does not evaluate the expression unless it is needed.
 λ-calculus - Most functional programming languages use λ-calculus as their
type systems. λ-expressions are executed by evaluating them as they occur.
Common Lisp, Scala, Haskell, Erlang, and F# are some examples of functional
programming languages.
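Although Python is not a purely functional language, the concepts above can be sketched in it:

```python
from functools import reduce

def square(n):
    """A pure function: same argument, same result, no side effects."""
    return n * n

def apply_twice(f, value):
    """A higher-order function: it accepts another function as argument."""
    return f(f(value))

def factorial(n):
    """Recursion replaces an explicit loop."""
    return 1 if n <= 1 else n * factorial(n - 1)

def sum_of_squares(numbers):
    """reduce is a higher-order function from the standard library."""
    return reduce(lambda acc, n: acc + square(n), numbers, 0)
```

Python evaluates function arguments strictly (eagerly) by default; generators provide the non-strict (lazy) flavor of evaluation mentioned above.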
Programming style
Programming style is a set of coding rules followed by all the programmers writing
the code. When multiple programmers work on the same software project, they
frequently need to work with the program code written by some other developer.
This becomes tedious or at times impossible, if all developers do not follow some
standard programming style to code the program.
An appropriate programming style includes using function and variable names
relevant to the intended task, using well-placed indentation, commenting code for
the convenience of reader and overall presentation of code. This makes the program
code readable and understandable by all, which in turn makes debugging and error
solving easier. Also, proper coding style helps ease documentation and future updates.
Coding Guidelines
Practice of coding style varies with organizations, operating systems and language
of coding itself.
The following coding elements may be defined under coding guidelines of an
organization:
 Naming conventions - This section defines how to name functions,
variables, constants and global variables.
 Indenting - This is the space left at the beginning of a line, usually 2-8
whitespaces or a single tab.
 Whitespace - It is generally omitted at the end of a line.
 Operators - Defines the rules for writing mathematical, assignment and
logical operators. For example, the assignment operator '=' should have a space
before and after it, as in "x = 2".
 Control Structures - The rules for writing if-then-else, case-switch, while-until
and for control flow statements, solely and in nested fashion.
 Line length and wrapping - Defines how many characters should be in
one line; mostly a line is 80 characters long. Wrapping defines how a line
should be wrapped if it is too long.
 Functions - This defines how functions should be declared and invoked, with
and without parameters.
 Variables - This mentions how variables of different data types are declared
and defined.
 Comments - This is one of the important coding components, as the
comments included in the code describe what the code actually does and all
other associated descriptions. This section also helps in creating help
documentation for other developers.
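A short fragment that follows several of these guideline elements at once (the names are illustrative):

```python
MAX_SCORE = 100  # naming convention: constants in UPPER_SNAKE_CASE

def average_score(scores):
    """Return the mean of a non-empty list of scores.

    The docstring and comments describe what the code does, as the
    Comments guideline above requires.
    """
    total = 0                  # operators: spaces around '=', as in "x = 2"
    for score in scores:       # control structure with consistent indenting
        total = total + score
    return total / len(scores)
```

Whatever concrete rules an organization picks, the value comes from every programmer applying the same ones, so that any developer can read any other developer's code.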
Software Implementation Challenges
There are some challenges faced by the development team while implementing the
software. Some of them are mentioned below:
 Code-reuse - Programming interfaces of present-day languages are very
sophisticated and are equipped with huge library functions. Still, to bring
down the cost of the end product, organization management prefers to re-use
code that was created earlier for some other software. Programmers face huge
issues with compatibility checks and with deciding how much code to re-use.
 Version Management - Every time new software is issued to the customer,
developers have to maintain version- and configuration-related
documentation. This documentation needs to be highly accurate and
available on time.
 Target-Host - The software program being developed in the
organization needs to be designed for host machines at the customer's end.
But at times, it is impossible to design software that works on the target machines.
Software Documentation
Software documentation is an important part of the software process. A well-written
document provides a great tool and a means of information repository necessary to
know about the software process. Software documentation also provides information
about how to use the product.
A well-maintained documentation should involve the following documents:
 Requirement documentation - This documentation works as a key tool for the
software designer, developer, and the test team to carry out their respective
tasks. This document contains all the functional, non-functional and
behavioral descriptions of the intended software.
Sources for this document can be previously stored data about the software,
already running software at the client's end, client interviews,
questionnaires, and research. Generally it is stored in the form of a
spreadsheet or word processing document with the high-end software
management team.
This documentation works as a foundation for the software to be developed
and is mainly used in verification and validation phases. Most test-cases are
built directly from requirement documentation.

Software Design documentation - These documents contain all the
necessary information needed to build the software. They contain:
(a) high-level software architecture, (b) software design details, (c) data
flow diagrams, (d) database design.
These documents work as repository for developers to implement the
software. Though these documents do not give any details on how to code
the program, they give all necessary information that is required for coding
and implementation.

Technical documentation - This documentation is maintained by the
developers and actual coders. These documents, as a whole, represent
information about the code. While writing the code, the programmers also
mention the objective of the code, who wrote it, where it will be required, what it
does and how, what other resources the code uses, etc.
The technical documentation increases the understanding between various
programmers working on the same code. It enhances re-use capability of the
code. It makes debugging easy and traceable.
There are various automated tools available, and some come with the
programming language itself. For example, Java comes with the JavaDoc tool to
generate technical documentation from code.

User documentation - This documentation is different from all the above
explained. All previous documentations are maintained to provide
information about the software and its development process. But user
documentation explains how the software product should work and how it
should be used to get the desired results.
These documents may include software installation procedures, how-to
guides, user guides, uninstallation methods and special references for further
information such as license updates.
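As with JavaDoc for Java, Python generates help pages from docstrings (via `help()` or `pydoc`). The function and file format below are invented for illustration; the docstring records the objective, arguments and behavior, as the technical-documentation item above suggests:

```python
def parse_config(path):
    """Load key=value pairs from a plain-text configuration file.

    Objective: centralize configuration reading for the application.
    Args:
        path: filesystem path to a file of "key=value" lines.
    Returns:
        dict mapping each key to its (string) value; blank lines skipped.
    """
    result = {}
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line:
                key, _, value = line.partition("=")
                result[key] = value
    return result
```

Running `help(parse_config)` renders this docstring for other developers, increasing understanding and the re-use capability of the code in the way the module describes.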
References and Supplementary Materials
Books and Journals
1. SOFTWARE ENGINEERING, 9th Edition; Ian Sommerville
Online Supplementary Reading Materials
1. Software Design and Implementation;
https://courses.cs.washington.edu/courses/cse331/10sp/lectures/lectures.html;
November 4, 2019
Online Instructional Videos
CS-6209 Software Engineering 1
Week 11: Software Testing
Module 008: Software Testing
Course Learning Outcomes:
1. Apply modern software testing processes in relation to software
development and project management.
2. Create test strategies and plans, design test cases, prioritize and execute
them.
3. Manage incidents and risks within a project.
4. Contribute to efficient delivery of software solutions and implement
improvements in the software development processes.
5. Gain expertise in the design, implementation and development of
computer-based systems and IT processes.
Introduction
Software testing is the evaluation of software against the requirements
gathered from users and the system specifications. Testing is conducted at the
phase level in the software development life cycle or at the module level in
program code. Software testing comprises validation and verification.
Software Validation
Validation is the process of examining whether or not the software satisfies
the user requirements. It is carried out at the end of the SDLC. If the software
matches the requirements for which it was made, it is validated.
 Validation ensures the product under development meets the user requirements.
 Validation answers the question: "Are we developing the product that does everything the user needs from this software?"
 Validation emphasizes user requirements.
Software Verification
Verification is the process of confirming that the software meets the business
requirements and is developed adhering to the proper specifications and
methodologies.
 Verification ensures the product being developed is according to the design specifications.
 Verification answers the question: "Are we developing this product by firmly following all design specifications?"
 Verification concentrates on the design and system specifications.
Targets of the test are:
 Errors - These are actual coding mistakes made by developers. In addition, a difference between the output of the software and the desired output is considered an error.
 Fault - When an error exists, a fault occurs. A fault, also known as a bug, is the result of an error and can cause the system to fail.
 Failure - Failure is the inability of the system to perform a desired task. A failure occurs when a fault exists in the system.
Manual Vs. Automated Testing
Testing can either be done manually or using an automated testing tool:
 Manual - This testing is performed without the help of automated testing tools. The software tester prepares test cases for different sections and levels of the code, executes the tests and reports the results to the manager. Manual testing is time- and resource-consuming. The tester needs to confirm whether or not the right test cases are used. A major portion of testing involves manual testing.
 Automated - This testing is performed with the aid of automated testing tools. The limitations of manual testing can be overcome using automated test tools.
For example, a test may need to check whether a webpage can be opened in
Internet Explorer; this can easily be done with manual testing. But checking
whether the web server can take the load of one million users is practically
impossible to test manually. There are software and hardware tools that help
the tester conduct load testing, stress testing and regression testing.
Testing Approaches
Tests can be conducted based on two approaches –
1. Functionality testing
2. Implementation testing
When functionality is tested without taking the actual implementation into
account, it is known as black-box testing. The other side is known as white-box
testing, where not only functionality is tested but the way it is implemented is
also analyzed.
Exhaustive testing is the best-desired method for perfect testing: every single
possible value in the range of the input and output values is tested. However,
in real-world scenarios it is not possible to test each and every value if the
range of values is large.
Black-box testing
It is carried out to test the functionality of the program and is also called
'behavioral' testing. The tester in this case has a set of input values and the
respective desired results. On providing an input, if the output matches the
desired result, the program is tested 'OK', and problematic otherwise.
In this testing method, the design and structure of the code are not known to the
tester, and testing engineers and end users conduct this test on the software.
Black-box testing techniques:
 Equivalence class - The input is divided into similar classes. If one element of a class passes the test, it is assumed that the whole class passes.
 Boundary values - The input is divided into higher- and lower-end values. If these values pass the test, it is assumed that all values in between may pass too.
 Cause-effect graphing - In both previous methods, only one input value is tested at a time. Cause (input) - effect (output) is a testing technique in which combinations of input values are tested in a systematic way.
 Pair-wise testing - The behavior of software depends on multiple parameters. In pairwise testing, the multiple parameters are tested pair-wise for their different values.
 State-based testing - The system changes state on provision of input. These systems are tested based on their states and inputs.
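To illustrate the first two techniques, suppose a hypothetical function accepts ages from 18 to 65 inclusive. Equivalence-class and boundary-value test cases could then be chosen like this (the function and range are invented for illustration):

```python
def is_eligible(age):
    """Hypothetical function under test: valid ages are 18..65 inclusive."""
    return 18 <= age <= 65

# Equivalence classes: one representative value per class is enough.
assert is_eligible(40)       # representative of the valid class
assert not is_eligible(5)    # representative of the below-range class
assert not is_eligible(90)   # representative of the above-range class

# Boundary values: test just inside and just outside each edge.
assert not is_eligible(17)
assert is_eligible(18)
assert is_eligible(65)
assert not is_eligible(66)
```

Seven test cases cover the whole range without testing every value individually.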
White-box testing
It is conducted to test the program and its implementation, in order to improve
code efficiency or structure. It is also known as 'structural' testing.
In this testing method, the design and structure of the code are known to the tester.
Programmers of the code conduct this test on the code.
Below are some white-box testing techniques:
 Control-flow testing - The purpose of control-flow testing is to set up test cases which cover all statements and branch conditions. The branch conditions are tested as both true and false, so that all statements can be covered.
 Data-flow testing - This testing technique emphasizes covering all the data variables included in the program. It tests where the variables were declared and defined, and where they were used or changed.
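As a small sketch of control-flow testing, the branch below is exercised with its condition both true and false so that every statement runs at least once (the function and its values are illustrative):

```python
def shipping_fee(order_total):
    """Illustrative function under test with one branch condition."""
    if order_total >= 100:   # branch condition
        return 0.0           # statement covered when the condition is true
    return 9.99              # statement covered when the condition is false

# One test case per branch outcome gives full statement coverage here.
assert shipping_fee(150) == 0.0    # condition true
assert shipping_fee(20) == 9.99    # condition false
```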
Testing Levels
Testing itself may be defined at various levels of the SDLC. The testing process
runs parallel to software development. Before jumping to the next stage, a
stage is tested, validated and verified.
Testing separately is done just to make sure that there are no hidden bugs or
issues left in the software. Software is tested at various levels:
Unit Testing
While coding, the programmer performs some tests on that unit of the
program to know whether it is error-free. Testing is performed under the
white-box testing approach. Unit testing helps developers decide whether
individual units of the program are working as per requirement and are
error-free.
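A minimal unit test using Python's built-in unittest framework might look like this (the unit under test is illustrative):

```python
import unittest

def word_count(text):
    """Unit under test (illustrative): count the words in a string."""
    return len(text.split())

class WordCountTests(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("hello brave new world"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main(exit=False)  # run the tests without exiting the process
```

Each test method checks one small behavior of the unit, so a failure points directly at the faulty piece of code.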
Integration Testing
Even if the units of the software are working fine individually, there is a need
to find out whether the units, when integrated together, would also work
without errors, for example, in argument passing and data updating.
System Testing
The software is compiled as a product and then tested as a whole. This can be
accomplished using one or more of the following tests:
 Functionality testing - Tests all functionalities of the software against the requirements.
 Performance testing - This test proves how efficient the software is. It tests the effectiveness and the average time taken by the software to do the desired task. Performance testing is done by means of load testing and stress testing, where the software is put under high user and data loads in various environment conditions.
 Security & portability testing - These tests are done when the software is meant to work on various platforms and be accessed by a number of persons.
Acceptance Testing
When the software is ready to be handed over to the customer, it has to go
through the last phase of testing, where it is tested for user interaction and
response. This is important because even if the software matches all user
requirements, it may be rejected if the user does not like the way it appears
or works.
 Alpha testing - The team of developers themselves perform alpha testing by using the system as if it were being used in a work environment. They try to find out how a user would react to some action in the software and how the system should respond to inputs.
 Beta testing - After the software is tested internally, it is handed over to users to use it under their production environment, for testing purposes only. This is not yet the delivered product. Developers expect that users at this stage will report the minute problems that were skipped earlier.
Regression Testing
Whenever a software product is updated with new code, features or
functionality, it is tested thoroughly to detect any negative impact of the
added code. This is known as regression testing.
Testing Documentation
Testing documents are prepared at different stages –
Before Testing
Testing starts with test case generation. The following documents are needed
for reference:
 SRS document - Functional requirements document
 Test Policy document - This describes how far testing should take place before releasing the product.
 Test Strategy document - This mentions detailed aspects of the test team, the responsibility matrix and the rights/responsibilities of the test manager and test engineers.
 Traceability Matrix document - This is an SDLC document related to the requirement gathering process. As new requirements come, they are added to this matrix. These matrices help testers know the source of a requirement. They can be traced forward and backward.
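A traceability matrix can be sketched as a simple mapping; the requirement and test-case identifiers below are hypothetical:

```python
# Each requirement maps to the test cases that cover it, so coverage can
# be traced forward (requirement -> tests) and backward (test -> requirement).
traceability = {
    "REQ-001 user login": ["TC-01", "TC-02"],
    "REQ-002 password reset": ["TC-03"],
}

# Forward tracing: from a requirement to its tests.
assert traceability["REQ-001 user login"] == ["TC-01", "TC-02"]

# Backward tracing: from a test case back to its source requirement.
sources = [req for req, cases in traceability.items() if "TC-03" in cases]
assert sources == ["REQ-002 password reset"]
```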
While Being Tested
The following documents may be required while testing is underway:
 Test Case document - This document contains the list of tests required to be conducted. It includes the unit test plan, integration test plan, system test plan and acceptance test plan.
 Test Description - This document is a detailed description of all test cases and the procedures to execute them.
 Test Case Report - This document contains the test case report as a result of the test.
 Test Logs - This document contains test logs for every test case report.
After Testing
The following documents may be generated after testing:
 Test Summary - The test summary is a collective analysis of all test reports and logs. It summarizes and concludes whether the software is ready to be launched. If it is ready, the software is released under a version control system.
Testing vs. Quality Control & Assurance and Audit
We need to understand that software testing is different from software quality
assurance, software quality control and software auditing.
 Software quality assurance - These are means of monitoring the software development process, by which it is assured that all measures are taken as per the standards of the organization. This monitoring is done to make sure that proper software development methods are followed.
 Software quality control - This is a system to maintain the quality of the software product. It may include functional and non-functional aspects of the software product, which enhance the goodwill of the organization. This system makes sure that the customer is receiving a quality product for their requirements and that the product is certified as 'fit for use'.
 Software audit - This is a review of the procedures used by the organization to develop the software. A team of auditors, independent of the development team, examines the software process, procedures, requirements and other aspects of the SDLC. The purpose of a software audit is to check that the software and its development process both conform to standards, rules and regulations.
References and Supplementary Materials
Books and Journals
1. SOFTWARE ENGINEERING, 9th Edition; Ian Sommerville
Online Supplementary Reading Materials
1. Software Testing Overview; https://www.w3schools.in/softwaretesting/overview/; November 13, 2019
Online Instructional Videos
CS-6209 Software Engineering 1
Week 12: Software Design Strategies
Module 009: Software Design Strategies
Course Learning Outcomes:
1. Assess the impact of a GUI in Software Design
2. Describe Software Structured Design
3. Identify the different design processes of function-oriented design
4. Discuss Software User Interface Design
Introduction
Software design is a process to conceptualize the software requirements into
software implementation. Software design takes the user requirements as
challenges and tries to find optimum solution. While the software is being
conceptualized, a plan is chalked out to find the best possible design for
implementing the intended solution.
There are multiple variants of software design. Let us study them briefly:
Structured Design
Structured design is a conceptualization of a problem into several
well-organized elements of solution. It is basically concerned with the solution
design. The benefit of structured design is that it gives a better understanding
of how the problem is being solved. Structured design also makes it simpler
for the designer to concentrate on the problem more accurately.
Structured design is mostly based on ‘divide and conquer’ strategy where a problem
is broken into several small problems and each small problem is individually solved
until the whole problem is solved.
The small pieces of the problem are solved by means of solution modules.
Structured design emphasizes that these modules be well organized in order
to achieve a precise solution.
These modules are arranged in a hierarchy and communicate with each other.
A good structured design always follows some rules for communication
among multiple modules, namely:
 Cohesion - grouping of all functionally related elements.
 Coupling - communication between different modules.
A good structured design has high cohesion and low coupling arrangements.
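The two rules can be sketched in code; the classes and names below are invented for illustration:

```python
# High cohesion: everything in this class relates to one job, tax rules.
class TaxRules:
    RATE = 0.12  # illustrative flat tax rate

    def tax_for(self, amount):
        return amount * self.RATE

# Low coupling: Invoice communicates with TaxRules through one narrow
# method call instead of reaching into its internal state.
class Invoice:
    def __init__(self, subtotal, rules):
        self.subtotal = subtotal
        self.rules = rules

    def total(self):
        return self.subtotal + self.rules.tax_for(self.subtotal)

print(Invoice(100.0, TaxRules()).total())
```

Because the modules touch only through `tax_for`, the tax rules can change internally without forcing changes in `Invoice`.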
Function Oriented Design
In function-oriented design, the system comprises many smaller sub-systems
known as functions. These functions are capable of performing significant
tasks in the system. The system is considered as the top view of all functions.
Function-oriented design inherits some properties of structured design,
where the divide and conquer methodology is used.
This design mechanism divides the whole system into smaller functions,
which provides means of abstraction by concealing the information and their
operation. These functional modules can share information among themselves
by means of information passing and using information available globally.
Another characteristic of functions is that when a program calls a function,
the function changes the state of the program, which sometimes is not
acceptable to other modules. Function-oriented design works well where the
system state does not matter and the program/functions work on input rather
than on a state.
Design Process
 The whole system is seen as how data flows in the system, by means of a data flow diagram.
 The DFD depicts how functions change data and the state of the entire system.
 The entire system is logically broken down into smaller units known as functions, on the basis of their operation in the system.
 Each function is then described at large.
Object Oriented Design
Object-Oriented Design (OOD) works around the entities and their
characteristics instead of the functions involved in the software system. This
design strategy focuses on entities and their characteristics. The whole
concept of the software solution revolves around the engaged entities.
Let us see the important concepts of Object-Oriented Design:
 Objects - All entities involved in the solution design are known as objects. For example, persons, banks, companies and customers are treated as objects. Every entity has some attributes associated with it and has some methods to perform on those attributes.
 Classes - A class is a generalized description of an object. An object is an instance of a class. A class defines all the attributes an object can have, and methods, which define the functionality of the object. In the solution design, attributes are stored as variables and functionalities are defined by means of methods or procedures.
 Encapsulation - In OOD, bundling the attributes (data variables) and methods (operations on the data) together is called encapsulation. Encapsulation not only bundles the important information of an object together, but also restricts access to the data and methods from the outside world. This is called information hiding.
 Inheritance - OOD allows similar classes to stack up in a hierarchical manner, where the lower or sub-classes can import, implement and re-use allowed variables and methods from their immediate super-classes. This property of OOD is known as inheritance. This makes it easier to define a specific class and to create generalized classes from specific ones.
 Polymorphism - OOD languages provide a mechanism where methods performing similar tasks but varying in arguments can be assigned the same name. This is called polymorphism, which allows a single interface to perform tasks for different types. Depending upon how the function is invoked, the respective portion of the code gets executed.
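The four concepts can be sketched together in a few lines of Python; the account classes are invented for illustration:

```python
class Account:
    """Class: a generalized description; each instance is an object."""

    def __init__(self, owner, balance):
        # Encapsulation: data and the methods that operate on it are
        # bundled; the leading underscore marks _balance as internal.
        self.owner = owner
        self._balance = balance

    def deposit(self, amount):
        self._balance += amount

    def describe(self):
        return f"{self.owner}: {self._balance}"

class SavingsAccount(Account):
    """Inheritance: re-uses Account's variables and methods."""

    def describe(self):
        # Polymorphism: the same method name behaves differently here.
        return f"{self.owner} (savings): {self._balance}"

# Objects: instances of the classes above, used through one interface.
accounts = [Account("Ana", 100), SavingsAccount("Ben", 200)]
for acct in accounts:
    acct.deposit(50)
    print(acct.describe())
```

The loop calls `describe()` through a single interface, and the class of each object decides which version runs.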
Design Process
The software design process can be perceived as a series of well-defined
steps. Though it varies according to the design approach (function-oriented or
object-oriented), it may involve the following steps:
 A solution design is created from the requirements or the previously used system and/or system sequence diagram.
 Objects are identified and grouped into classes on the basis of similarity in attribute characteristics.
 The class hierarchy and the relations among the classes are defined.
 The application framework is defined.
Software Design Approaches
Here are two generic approaches to software design:
Top Down Design
We know that a system is composed of more than one sub-system, and it
contains a number of components. Further, these sub-systems and
components may have their own sets of sub-systems and components,
creating a hierarchical structure in the system.
Top-down design takes the whole software system as one entity and then
decomposes it to achieve more than one sub-system or component based on some
characteristics. Each sub-system or component is then treated as a system and
decomposed further. This process keeps on running until the lowest level of system
in the top-down hierarchy is achieved.
Top-down design starts with a generalized model of system and keeps on defining
the more specific part of it. When all the components are composed the whole
system comes into existence.
Top-down design is more suitable when the software solution needs to be designed
from scratch and specific details are unknown.
Bottom-up Design
The bottom-up design model starts with the most specific and basic
components. It proceeds by composing higher-level components using basic
or lower-level components, and keeps creating higher-level components until
the desired system evolves as one single component. With each higher level,
the amount of abstraction increases.
Bottom-up strategy is more suitable when a system needs to be created from some
existing system, where the basic primitives can be used in the newer system.
Both, top-down and bottom-up approaches are not practical individually. Instead, a
good combination of both is used.
Software User Interface Design
The user interface is the front-end application view with which the user
interacts in order to use the software. The user can manipulate and control the
software as well as the hardware by means of the user interface. Today, user
interfaces are found almost everywhere digital technology exists: computers,
mobile phones, cars, music players, airplanes, ships, etc.
The user interface is part of the software and is designed in such a way that it
is expected to provide the user insight into the software. The UI provides the
fundamental platform for human-computer interaction.
A UI can be graphical, text-based, or audio/video-based, depending upon the
underlying hardware and software combination. A UI can be hardware,
software or a combination of both.
The software becomes more popular if its user interface is:
 Attractive
 Simple to use
 Responsive in short time
 Clear to understand
 Consistent on all interfacing screens
UI is broadly divided into two categories:
 Command Line Interface
 Graphical User Interface
Command Line Interface (CLI)
The CLI was a great tool for interaction with computers until video display
monitors came into existence. The CLI is the first choice of many technical
users and programmers. It is the minimum interface software can provide to
its users.
The CLI provides a command prompt, the place where the user types a
command and feeds it to the system. The user needs to remember the syntax
of each command and its use. Earlier CLIs were not programmed to handle
user errors effectively.
A command is a text-based reference to a set of instructions, which are
expected to be executed by the system. There are methods like macros and
scripts that make it easy for the user to operate.
The CLI uses a smaller amount of computer resources than the GUI.
CLI Elements
A text-based command line interface can have the following elements:
 Command Prompt - A text-based notifier that mostly shows the context in which the user is working. It is generated by the software system.
 Cursor - A small horizontal line, or a vertical bar of the height of a line, representing the position of the character while typing. The cursor is mostly found in a blinking state and moves as the user writes or deletes something.
 Command - A command is an executable instruction. It may have one or more parameters. The output of command execution is shown inline on the screen. When the output is produced, the command prompt is displayed on the next line.
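These elements can be sketched as a toy text interface; the prompt string and command names are invented for illustration:

```python
def run_command(line):
    """Parse a command with optional parameters and return its output."""
    parts = line.split()
    name, args = parts[0], parts[1:]
    if name == "echo":
        return " ".join(args)
    if name == "upper":
        return " ".join(args).upper()
    return f"unknown command: {name}"

# The prompt shows context, the command is typed after it, and the
# output appears inline before the next prompt line.
print("app> echo hello world")
print(run_command("echo hello world"))
```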
Graphical User Interface
A Graphical User Interface (GUI) provides the user graphical means to
interact with the system. A GUI can be a combination of both hardware and
software. Using a GUI, the user interprets the software.
Typically, a GUI is more resource-consuming than a CLI. With advancing
technology, programmers and designers create complex GUI designs that
work with more efficiency, accuracy and speed.
GUI Elements
GUI provides a set of components to interact with software or hardware.
Every graphical component provides a way to work with the system. A GUI
system has elements such as:
Window - An area where contents of application are displayed. Contents in a
window can be displayed in the form of icons or lists, if the window represents file
structure. It is easier for a user to navigate in the file system in an exploring window.
Windows can be minimized, resized or maximized to the size of screen. They can be
moved anywhere on the screen. A window may contain another window of the same
application, called child window.
 Tabs - If an application allows executing multiple instances of itself, they appear on the screen as separate windows. The Tabbed Document Interface has come up to open multiple documents in the same window. This interface also helps in viewing the preference panel in an application. All modern web browsers use this feature.
 Menu - A menu is an array of standard commands, grouped together and placed at a visible place (usually the top) inside the application window. The menu can be programmed to appear or hide on mouse clicks.
 Icon - An icon is a small picture representing an associated application. When these icons are clicked or double-clicked, the application window opens. Icons display the applications and programs installed on a system in the form of small pictures.
 Cursor - Interacting devices such as the mouse, touch pad and digital pen are represented in the GUI as cursors. The on-screen cursor follows the instructions from the hardware in almost real time. Cursors are also named pointers in GUI systems. They are used to select menus, windows and other application features.
Application specific GUI components
A GUI of an application contains one or more of the listed GUI elements:
 Application Window - Most application windows use the constructs supplied by the operating system, but many use their own custom-created windows to contain the contents of the application.
 Dialogue Box - A child window that contains a message for the user and requests some action to be taken. For example, an application generates a dialogue to get confirmation from the user before deleting a file.
 Text-Box - Provides an area for the user to type and enter text-based data.
 Buttons - They imitate real-life buttons and are used to submit inputs to the software.
 Radio-button - Displays the available options for selection. Only one can be selected among all those offered.
 Check-box - Functions similarly to a list-box. When an option is selected, the box is marked as checked. Multiple options represented by check-boxes can be selected.
 List-box - Provides a list of available items for selection. More than one item can be selected.
Other impressive GUI components are:
 Sliders
 Combo-box
 Data-grid
 Drop-down list
User Interface Design Activities
There are a number of activities performed for designing a user interface. The
process of GUI design and implementation is similar to the SDLC. Any model
among the Waterfall, Iterative or Spiral models can be used for GUI
implementation.
A model used for GUI design and development should fulfill these
GUI-specific steps:
 GUI Requirement Gathering - The designers may like to have a list of all functional and non-functional requirements of the GUI. This can be taken from the users and their existing software solution.
 User Analysis - The designer studies who is going to use the software GUI. The target audience matters, as the design details change according to the knowledge and competency level of the user. If the user is technically savvy, an advanced and complex GUI can be incorporated. For a novice user, more information is included on the how-to of the software.
 Task Analysis - Designers have to analyze what task is to be done by the software solution. Here in GUI design, it does not matter how it will be done. Tasks can be represented in a hierarchical manner, taking one major task and dividing it further into smaller sub-tasks. Tasks provide goals for GUI presentation. The flow of information among sub-tasks determines the flow of GUI contents in the software.
 GUI Design and Implementation - Designers, after having information about requirements, tasks and the user environment, design the GUI, implement it in code and embed the GUI with working or dummy software in the background. It is then self-tested by the developers.
 Testing - GUI testing can be done in various ways. In-house inspection, direct involvement of users and release of a beta version are a few of them. Testing may include usability, compatibility, user acceptance, etc.
References and Supplementary Materials
Books and Journals
Online Supplementary Reading Materials
1. Strategies of Software Design; https://www.slideshare.net/Amitgehu/strategy-ofsoftware-design; November 15, 2019
2. Design Strategy and Software Design Effectiveness;
https://www.computer.org/csdl/magazine/so/2012/01/mso2012010051/13rR
UxZ0nZw; November 15, 2019
Online Instructional Videos
CS-6209 Software Engineering 1
Week 13: Software Maintenance
Module 010: Software Maintenance
Course Learning Outcomes:
1. Assess the impact of a change request to an existing product of medium
size.
2. Describe techniques, coding idioms and other mechanisms for
implementing designs that are more maintainable.
3. Refactor an existing software implementation to improve some aspect of
its design.
4. Identify the principal issues associated with software evolution and
explain their impact on the software lifecycle.
5. Discuss the advantages and disadvantages of different types of software
reuse.
Introduction
Software maintenance is now a widely accepted part of the SDLC. It stands for
all the modifications and updates done after the delivery of the software
product. There are a number of reasons why modifications are required; some
of them are briefly mentioned below:
 Market Conditions - Policies which change over time, such as taxation, and newly introduced constraints, such as how to maintain bookkeeping, may trigger a need for modification.
 Client Requirements - Over time, customers may ask for new features or functions in the software.
 Host Modifications - If any of the hardware and/or platform (such as the operating system) of the target host changes, software changes are needed to maintain adaptability.
 Organization Changes - If there is any business-level change at the client end, such as reduction of organization strength, acquiring another company, or the organization venturing into new business, the need to modify the original software may arise.
Types of maintenance
In a software lifetime, the type of maintenance may vary based on its nature.
It may be just a routine maintenance task, such as a bug discovered by some
user, or it may be a large event in itself, based on maintenance size or nature.
The following are some types of maintenance based on their characteristics:
 Corrective Maintenance - This includes modifications and updates done in order to correct or fix problems, which are either discovered by users or concluded from user error reports.
 Adaptive Maintenance - This includes modifications and updates applied to keep the software product up to date and tuned to the ever-changing world of technology and business environment.
 Perfective Maintenance - This includes modifications and updates done in order to keep the software usable over a long period of time. It includes new features and new user requirements for refining the software and improving its reliability and performance.
 Preventive Maintenance - This includes modifications and updates to prevent future problems with the software. It aims to attend to problems which are not significant at this moment but may cause serious issues in the future.
Cost of Maintenance
Reports suggest that the cost of maintenance is high. A study on estimating
software maintenance found that the cost of maintenance can be as high as
67% of the cost of the entire software process cycle.
On average, the cost of software maintenance is more than 50% of all SDLC
phases. There are various factors which drive maintenance costs up, such as:
Real-world factors affecting Maintenance Cost
 The standard age of any software is considered to be up to 10 to 15 years.
 Older software, which was meant to work on slow machines with less memory and storage capacity, cannot remain competitive against newly arriving enhanced software on modern hardware.
 As technology advances, it becomes costly to maintain old software.
 Most maintenance engineers are newbies and use the trial-and-error method to rectify problems.
 Often, changes made can easily hurt the original structure of the software, making any subsequent changes hard.
 Changes are often left undocumented, which may cause more conflicts in the future.
Software-end factors affecting Maintenance Cost
 Structure of Software Program
 Programming Language
 Dependence on external environment
 Staff reliability and availability
Maintenance Activities
IEEE provides a framework for sequential maintenance process activities. It can be
used in iterative manner and can be extended so that customized items and
processes can be included.
These activities go hand in hand with each of the following phases:

Identification & Tracing - This involves activities pertaining to identifying
the need for modification or maintenance. It is generated by the user, or the
system itself may report it via logs or error messages. The maintenance type
is also classified here.
Analysis - The modification is analyzed for its impact on the system,
including safety and security implications. If the probable impact is severe,
an alternative solution is looked for. A set of required modifications is then
materialized into requirement specifications. The cost of the
modification/maintenance is analyzed and an estimate is concluded.
Design - New modules, which need to be replaced or modified, are designed
against the requirement specifications set in the previous stage. Test cases
are created for validation and verification.
Implementation - The new modules are coded with the help of the structured
design created in the design step. Every programmer is expected to do unit
testing in parallel.
System Testing - Integration testing is done among the newly created modules,
and also between the new modules and the system. Finally, the system is tested
as a whole, following regression testing procedures.
Acceptance Testing - After testing the system internally, it is tested for
acceptance with the help of users. If at this stage users complain of issues,
they are addressed, or noted to be addressed in the next iteration.
Delivery - After the acceptance test, the system is deployed all over the
organization, either as a small update package or as a fresh installation of
the system. The final testing takes place at the client end after the software
is delivered. Training is provided if required, in addition to a hard copy of
the user manual.
Maintenance Management - Configuration management is an essential part
of system maintenance. It is aided with version control tools to manage
versions, semi-versions, or patch management.
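The sequence of phases above can be sketched as a simple pipeline that can be run iteratively. The handler mechanism is an illustrative assumption, showing how the IEEE framework can be extended with customized steps:

```python
# Phase names follow the text; each phase is a function that takes and
# returns the working artifact (the change request and its products).
PHASES = [
    "Identification & Tracing",
    "Analysis",
    "Design",
    "Implementation",
    "System Testing",
    "Acceptance Testing",
    "Delivery",
    "Maintenance Management",
]

def run_maintenance_cycle(change_request: dict, handlers: dict) -> dict:
    """Run one iteration of the maintenance process in phase order."""
    artifact = change_request
    for phase in PHASES:
        handler = handlers.get(phase, lambda a: a)  # default: pass through
        artifact = handler(artifact)
    return artifact
```

Calling `run_maintenance_cycle` repeatedly models the iterative use of the framework that the text mentions.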
Software Re-engineering
When we need to update software to keep it current with the market, without
impacting its functionality, this is called software re-engineering. It is a thorough
process in which the design of the software is changed and programs are re-written.
Legacy software cannot keep up with the latest technology available in the
market. As hardware becomes obsolete, updating the software becomes a
headache. Even if software grows old with time, its functionality does not.
For example, Unix was initially developed in assembly language. When the C
language came into existence, Unix was re-engineered in C, because working in
assembly language was difficult.
Besides this, programmers sometimes notice that a few parts of the software need
more maintenance than others, and these also need re-engineering.
Re-Engineering Process
- Decide what to re-engineer. Is it the whole software or a part of it?
- Perform reverse engineering, in order to obtain specifications of the existing software.
- Restructure the program if required, for example, by changing function-oriented programs into object-oriented programs.
- Restructure data as required.
- Apply forward engineering concepts in order to obtain the re-engineered software.
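The steps above can be sketched as a pipeline. The three stage functions here are stubs standing in for real reverse-engineering, restructuring, and forward-engineering tooling:

```python
def reverse_engineer(legacy_code: str) -> dict:
    """Recover an abstract specification from existing code (stub)."""
    return {"spec": f"behavior recovered from {len(legacy_code)} chars of code"}

def restructure(spec: dict) -> dict:
    """Restructure program and data against the recovered spec (stub)."""
    spec["restructured"] = True
    return spec

def forward_engineer(spec: dict) -> str:
    """Produce a new implementation from the specification (stub)."""
    return f"new implementation of: {spec['spec']}"

def reengineer(legacy_code: str) -> str:
    """The chosen software (or part of it) flows through all three stages."""
    return forward_engineer(restructure(reverse_engineer(legacy_code)))
```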
There are a few important terms used in software re-engineering:
Reverse Engineering
It is a process of achieving system specifications by thoroughly analyzing and
understanding an existing system. This process can be seen as a reverse SDLC model,
i.e., we try to reach a higher abstraction level by analyzing lower abstraction levels.
An existing system is a previously implemented design about which we know
nothing. Designers do reverse engineering by looking at the code and trying to
recover the design. With the design in hand, they try to conclude the specifications,
thus going in reverse from code to system specification.
Program Restructuring
It is a process to re-structure and re-construct existing software. It is all about
re-arranging the source code, either in the same programming language or from one
programming language to a different one. Restructuring can involve source-code
restructuring, data restructuring, or both.
Restructuring does not impact the functionality of the software but enhances its
reliability and maintainability. Program components which cause errors very
frequently can be changed or updated through restructuring.
The dependency of software on an obsolete hardware platform can also be removed
via restructuring.
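As a minimal illustration of restructuring that leaves functionality untouched, here is the same account logic written first in a function-oriented style and then restructured into an object-oriented one. Both behave identically; the names are invented for the example:

```python
# Function-oriented original: state is passed around explicitly.
def deposit(balance: float, amount: float) -> float:
    return balance + amount

def withdraw(balance: float, amount: float) -> float:
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Object-oriented restructuring: the same operations grouped with their state.
class Account:
    def __init__(self, balance: float = 0.0):
        self.balance = balance

    def deposit(self, amount: float) -> None:
        self.balance += amount

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
```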
Forward Engineering
Forward engineering is the process of obtaining the desired software from the
specifications in hand, which were brought down by means of reverse engineering. It
assumes that some software engineering was already done in the past.
Forward engineering is the same as the software engineering process, with only one
difference: it is always carried out after reverse engineering.
Component reusability
A component is a part of a software program's code which executes an independent
task in the system. It can be a small module or a sub-system itself.
Example
The login procedures used on the Web can be considered components; the printing
system in software can be seen as a component of the software.
Components have high cohesion of functionality and a low rate of coupling, i.e., they
work independently and can perform tasks without depending on other modules.
In OOP, objects are designed to be very specific to their concern and have fewer
chances of being used in other software.
In modular programming, modules are coded to perform specific tasks and can be
used across a number of other software programs.
There is a whole new vertical based on the re-use of software components, known
as Component Based Software Engineering (CBSE).
Re-use can be done at various levels:
Application level - where an entire application is used as a sub-system of new
software.
Component level - where a sub-system of an application is used.
Module level - where functional modules are re-used.
Software components provide interfaces, which can be used to establish
communication among different components.
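A component in this sense can be sketched as a class behind a narrow interface. The names here (`Authenticator`, `InMemoryAuthenticator`) are hypothetical; the point is that the login component is cohesive, depends on nothing else, and other parts of the system talk to it only through its interface:

```python
from typing import Protocol

class Authenticator(Protocol):
    """The interface other components program against."""
    def login(self, user: str, password: str) -> bool: ...

class InMemoryAuthenticator:
    """A self-contained login component: high cohesion, low coupling."""
    def __init__(self) -> None:
        self._users: dict[str, str] = {}

    def register(self, user: str, password: str) -> None:
        self._users[user] = password

    def login(self, user: str, password: str) -> bool:
        return self._users.get(user) == password

def greet(auth: Authenticator, user: str, password: str) -> str:
    """Any component satisfying the interface can be plugged in here."""
    return f"welcome {user}" if auth.login(user, password) else "access denied"
```

Because `greet` depends only on the interface, the login component can be swapped for another implementation without touching the rest of the system.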
Reuse Process
Two kinds of method can be adopted: either keep the requirements the same and
adjust the components, or keep the components the same and modify the
requirements.
Requirement Specification - The functional and non-functional requirements
which a software product must comply with are specified, with the help of the
existing system, user input, or both.
Design - This is also a standard SDLC process step, where requirements are
defined in terms of software parlance. The basic architecture of the system as a
whole and of its sub-systems is created.
Specify Components - By studying the software design, the designers segregate
the entire system into smaller components or sub-systems. One complete
software design turns into a collection of a huge set of components working
together.
Search Suitable Components - Designers refer to the software component
repository to search for matching components, on the basis of functionality and
intended software requirements.
Incorporate Components - All matched components are packed together to
shape them as complete software.
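The "Search Suitable Components" step can be sketched as a lookup against a component repository keyed by provided functionality. The repository entries here are invented for illustration:

```python
# A toy component repository: each entry advertises the functionality it
# provides, which is what designers match requirements against.
REPOSITORY = [
    {"name": "login", "provides": {"authentication"}},
    {"name": "pdf-printer", "provides": {"printing", "pdf"}},
    {"name": "audit-log", "provides": {"logging"}},
]

def search_components(required: set) -> list:
    """Return components that cover at least one required functionality."""
    return [c["name"] for c in REPOSITORY if c["provides"] & required]
```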
References and Supplementary Materials
Books and Journals
Online Supplementary Reading Materials
1. Software Maintenance and Evolution; https://uclouvain.be/en-cours-2019LINGI2252; November 13, 2019
2. CS302: Software Engineering; https://learn.saylor.org/course/CS302; November
13, 2019
Online Instructional Videos
CS-6209 Software Engineering 1
Week 14: Software Engineering Best Practices
Module 011: Software Engineering Best Practices
Course Learning Outcomes:
1. Function effectively in teams and adapt teaming strategies to improve
productivity;
2. Recognize human, security, social, and entrepreneurial issues and
responsibilities relevant to engineering software and the digitalization of
services;
3. Integrate into a multi-cultural working environment with a practical
orientation and collaborate in professional networks;
4. Acknowledge life-long learning as a way to stay up to date in the
profession.
Introduction
Software is not immune to a failing economy. Many software companies will close,
and thousands of layoffs will occur as companies contract and try to save money.
Historically, software costs have been a major component of corporate expense.
Software costs have also been difficult to control, and have been heavily impacted
by poor quality, marginal security, and other chronic problems.
Poor software engineering, which gave rise to seriously flawed economic models,
helped cause the recession. As the recession deepens, it is urgent that those
concerned with software engineering take a hard look at fundamental issues:
quality, security, measurement of results, and development best practices. This
module will discuss the following topics that are critical during a major recession:
- Minimizing harm from layoffs and downsizing
- Optimizing software quality control
- Optimizing software security control
- Migration from custom development to certified reusable components
- Substituting legacy renovation for new development
- Measuring software economic value and risk
- Planning and estimating to reduce unplanned overruns
What Are “Best Practices” and How Can They Be Evaluated?
A book entitled Software Engineering Best Practices should start by defining exactly
what is meant by the phrase "best practice" and then explain where the data came
from in order to include each practice in the set. A book on best practices should
also provide quantitative data that demonstrates the results of those practices.
Because practices vary by application size and type, evaluating them is difficult. For
example, Agile methods are quite effective for projects below about 2,500
function points, but they lose effectiveness rapidly above 10,000 function points.
Agile has not yet even been attempted for applications in the 100,000-function-point
range and may even be harmful at that size.
To deal with this situation, an approximate scoring method has been developed that
includes both size and type. Methods are scored using a scale that runs from +10 to
–10 using the criteria shown in Table 11-1. Both the approximate impact on
productivity and the approximate impact on quality are included. The scoring
method can be applied to specific ranges such as 1000 function points or 10,000
function points. It can also be applied to specific types of software such as
information technology, web application, commercial software, military software,
and several others. The scoring method runs from a maximum of +10 to a minimum
of –10, as shown in Table 11-1.
The midpoint or "average" against which improvements are measured is
traditional methods such as waterfall development performed by organizations
that either don't use the Software Engineering Institute's Capability Maturity Model
or are at level 1. This fairly primitive combination remains more or less the
most widely used development method, even in 2009.
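The scale can be sketched as follows. Table 11-1 defines the actual criteria; the averaging of the productivity and quality impacts, and the clamping to the ±10 range, are assumptions made purely for illustration:

```python
def practice_score(productivity_impact: float, quality_impact: float) -> float:
    """Combine two impacts, each judged on a -10..+10 scale, into one score.

    0 is the midpoint: traditional waterfall development by a CMM
    level-1 (or non-CMM) organization.
    """
    score = (productivity_impact + quality_impact) / 2
    return max(-10.0, min(10.0, score))  # keep within the defined scale
```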
One important topic needs to be understood. Quality needs to be improved faster
and to a higher level than productivity in order for productivity to improve at all.
The reason for this is that finding and fixing bugs is overall the most expensive
activity in software development. Quality leads and productivity follows. Attempts
to improve productivity without improving quality first are ineffective.
For software engineering, a historically serious problem has been that measurement
practices are so poor that quantified results are scarce. There are many claims for
tools, languages, and methodologies that assert each should be viewed as a best
practice. But empirical data on their actual effectiveness in terms of quality or
productivity has been scarce.
To be described as a best practice, a language, tool, or method needs to be
associated with software projects in the top 15 percent of the applications
measured and studied by the author and his colleagues. To be included in the set of
best practices, a specific method or tool has to demonstrate, using quantitative
data, that it improves schedules, effort, costs, quality, customer satisfaction, or some
combination of these factors. Furthermore, enough data needs to exist to apply the
scoring method shown in Table 11-1.
This criterion brings up three important points:
Point 1: Software applications vary in size by many orders of magnitude. Methods
that might be ranked as best practices for small programs of 1000 function points
may not be equally effective for large systems of 100,000 function points. Therefore
the scoring method uses size as a criterion for judging "best in class" status.
Point 2: Software engineering is not a “one size fits all” kind of occupation. There
are many different forms of software, such as embedded applications, commercial
software packages, information technology projects, games, military applications,
outsourced applications, open source applications, and several others. These
various kinds of software applications do not necessarily use the same languages,
tools, or development methods. Therefore this module considers the approaches that
yield the best results for each type of software application.
Point 3: Tools, languages, and methods are not equally effective or important for all
activities. For example, a powerful programming language such as Objective C will
obviously have beneficial effects on coding speed and code quality. But which
programming language is used has no effect on requirements creep, user
documentation, or project management. Therefore the phrase “best practice” also
has to identify which specific activities are improved. This is complicated because
activities include development, deployment, and post-deployment maintenance and
enhancements. Indeed, for large applications, development can take up to five years,
installation can take up to one year, and usage can last as long as 25 years before the
application is finally retired. Over the course of more than 30 years, hundreds of
activities will occur.
The result of the preceding factors is that selecting a set of best practices for
software engineering is a fairly complicated undertaking. Each method, tool, or
language needs to be evaluated in terms of its effectiveness by size, by application
type, and by activity. Best practices are therefore considered in a variety of contexts:
- Best practices by size of the application
- Best practices by type of software (embedded, web, military, etc.)
- Best practices by activity (development, deployment, and maintenance)
In 2009, software engineering is not yet a true profession with state certification,
licensing, board examinations, formal specialties, and a solid body of empirical facts
about technologies and methods that have proven to be effective. There are, of
course, many international standards. Also, various kinds of certification are
possible on a voluntary basis. Currently, neither standards nor certification have
demonstrated much in the way of tangible improvements in software success rates.
This is not to say that certification or standards have no value, but rather that
proving their value by quantification of quality and productivity is a difficult task.
Several forms of test certification seem to result in higher levels of defect removal
efficiency than observed when uncertified testers work on similar applications.
Certified function-point counters have been shown experimentally to produce more
accurate results than uncertified counters when counting trial examples. However,
much better data is needed to make a convincing case that would prove the value of
certification.
As to standards, the results are very ambiguous. No solid empirical data indicates,
for example, that following ISO quality standards results in either lower levels of
potential defects or higher levels of defect removal efficiency. Some of the security
standards seem to show improvements in reduced numbers of security flaws, but
the data is sparse and unverified by controlled studies.
Multiple Paths for Software Development, Deployment, and Maintenance
One purpose of this module is to illustrate a set of "paths" that can be followed from
the very beginning of a software project all the way through development and that
lead to a successful delivery. After delivery, the paths will continue to lead through
many years of maintenance and enhancements.
Because many paths are based on application size and type, a network of possible
paths exists. The key to successful software engineering is to find the specific path
that will yield the best results for a specific project. Some of the paths will include
Agile development, and some will include the Team Software Process (TSP). Some
paths will include the Rational Unified Process (RUP), and a few might even include
traditional waterfall development methods.
No matter which specific path is used, the destination must include fundamental
goals for the application to reach a successful conclusion:
- Project planning and estimating must be excellent and accurate.
- Quality control must be excellent.
- Change control must be excellent.
- Progress and cost tracking must be excellent.
- Measurement of results must be excellent and accurate.
Examples of typical development paths are shown in Figure 11-1. This figure
illustrates the development methods and quality practices used for three different
size ranges of software applications.
To interpret the paths illustrated by Figure 11-1, the Methods boxes near the top
indicate the methods that have the best success rates. For example, at fewer than
1000 function points, Agile has the most success. But for larger applications, the
Team Software Process (TSP) and Personal Software Process (PSP) have the
greatest success. However, all of the methods in the boxes have been used for
applications of the sizes shown, with reasonable success.
Figure 11-1 Development practices by size of application
Moving down, the Defect Prevention and Defect Removal boxes show the best
combinations of reviews, inspections, and tests. As you can see, larger applications
require much more sophistication and many more kinds of defect removal than
small applications of fewer than 1000 function points.
Paths for Software Deployment
Best practices are not limited to development. A major gap in the literature is that of
best practices for installing or deploying large applications. Readers who use only
personal computer software such as Windows Vista, Microsoft Office, Apple OS X,
Intuit Quicken, and the like may wonder why deployment even matters. For many
applications, installation via download, CD, or DVD may require only a few minutes.
In fact, for Software as a Service (SaaS) applications such as the Google word
processing and spreadsheet applications, downloads do not even occur. These
applications are run on the Google servers and are not in the users’ computers at all.
However, for large mainframe applications such as telephone switching systems,
large mainframe operating systems, and enterprise resource planning (ERP)
packages, deployment or installation can take a year or more. This is because the
applications are not just installed, but require substantial customization to match
local business and technical needs.
Also, training of the users of large applications is an important and time-consuming
activity that might take several courses and several weeks of class time. In addition,
substantial customized documentation may be created for users, maintenance
personnel, customer support personnel, and other ancillary users. Best practices for
installation of large applications are seldom covered in the literature, but they need
to be considered, too.
Not only are paths through software development important, but also paths for
delivery of software to customers, and then paths for maintenance and
enhancements during the active life of software applications. Figure 11-2 shows
typical installation paths for three very different situations: Software as a Service,
self-installed applications, and those requiring consultants and installation
specialists.
Software as a Service (SaaS) requires no installation. For self-installed applications,
either downloads from the Web or physical installation via CD or DVD are common
and usually accomplished with moderate ease. However, occasionally there can be
problems, such as the release of a Norton AntiVirus package that could not be
installed until the previous version was uninstalled. However, the previous version
was so convoluted that the normal Windows uninstall procedure could not remove
it. Eventually, Symantec had to provide a special uninstall tool (which should have
been done in the first place).
Figure 11-2 Deployment practices by form of deployment
However, the really complex installation procedures are those associated with large
mainframe applications that need customization as well as installation. Some large
applications such as ERP packages are so complicated that sometimes it takes install
teams of 25 consultants and 25 in-house personnel a year to complete installation.
Because usage of these large applications spans dozens of different kinds of users in
various organizations (accounting, marketing, customer support, manufacturing,
etc.), a wide variety of custom user manuals and custom classes need to be created.
From the day large software packages are delivered until they are cut over and
begin large-scale usage by all classes of users, as long as a year can go by. Make no
mistake: installation, deployment, and training users of large software applications
is not a trivial undertaking.
Paths for Maintenance and Enhancements
Once software applications are installed and start being used, several kinds of
changes will occur over time:
- All software applications have bugs or defects, and as these are found, they will
need to be repaired.
- As businesses evolve, new features and new requirements will surface, so existing
applications must be updated to keep them current with user needs.
- Government mandates or new laws, such as changes in tax structures, must be
implemented as they occur, sometimes on very short notice.
- As software ages, structural decay always occurs, which may slow down
performance or cause an increase in bugs or defects. Therefore, if the software
continues to have business value, it may be necessary to "renovate" legacy
applications. Renovation consists of topics such as restructuring or refactoring to
lower complexity, identification and removal of error-prone modules, and perhaps
adding features at the same time. Renovation is a special form of maintenance that
needs to be better covered in the literature.
After some years of usage, aging legacy applications may outlive their utility and
need replacement. However, redeveloping an existing application is not the same as
starting a brand-new application. Existing business rules can be extracted from the
code using data-mining techniques, since the original requirements and
specifications usually lag and are not kept current.
Therefore, this module will attempt to show the optimal paths not only for
development, but also for deployment, maintenance, and enhancements. Figure 11-3
illustrates three of the more common and important paths that are followed during
the maintenance period.
As can be seen from Figure 11-3, maintenance is not a “one size fits all” form of
modification. Unfortunately, the literature on software maintenance is very sparse
compared with the literature on software development. Defect repairs,
enhancements, and renovations are very different kinds of activities and need
different skill sets and sometimes different tools.
Figure 11-3 Major forms of maintenance and enhancement
Quantifying Software Development, Deployment, and Maintenance
This module will include productivity benchmarks, quality benchmarks, and data on
the effectiveness of a number of tools, methodologies, and programming practices. It
will also include quantitative data on the costs of training and deployment of
methodologies. The data itself comes from several sources. The largest amount of
data comes from the author's own studies with hundreds of clients between 1973
and 2009.
Other key sources of data include benchmarks gathered by Software Productivity
Research LLC (SPR) and data collected by the nonprofit International Software
Benchmarking Standards Group (ISBSG). In addition, selected data will be brought
in from other sources. Among these other sources are the David Consulting Group,
the Quality/Productivity (Q/P) consulting group, and David Longstreet of
Longstreet consulting. Other information sources on best practices will include the
current literature on software engineering and various portals into the software
engineering domain such as the excellent portal provided by the Information
Technology Metrics and Productivity Institute (ITMPI). Information from the
Software Engineering Institute (SEI) will also be included. Other professional
associations such as the Project Management Institute (PMI) and the American
Society for Quality (ASQ) will be cited, although they do not publish very much
quantitative data.
All of these sources provide benchmark data primarily using function points as
defined by the International Function Point Users Group (IFPUG). This module uses
IFPUG function points for all quantitative data dealing with quality and productivity.
There are several other forms of function point, including COSMIC (Common
Software Measurement International Consortium) function points and Finnish
function points. While data in these alternative metrics will not be discussed at
length in this module, citations to sources of benchmark data will be included. Other
metrics such as use case points, story points, and goal-question metrics will be
mentioned and references provided.
(It is not possible to provide accurate benchmarks using either lines of code metrics
or cost per defect metrics. As will be illustrated later, both of these common metrics
violate the assumptions of standard economics, and both distort historical data so
that real trends are concealed rather than revealed.)
On the opposite end of the spectrum from best practices are worst practices. The
author has been an expert witness in a number of breach-of-contract lawsuits where
depositions and trial documents revealed the major failings that constitute worst
practices. These will be discussed from time to time, to demonstrate the differences
between the best and worst practices.
In between the sets of best practices and worst practices are many methods and
practices that might be called neutral practices. These may provide some benefits for
certain kinds of applications, or they may be slightly harmful for others. But in
neither case does use of the method cause much variation in productivity or quality.
This module attempts to replace unsupported claims with empirical data derived from
careful measurement of results. When the software industry can measure
performance consistently and accurately, can estimate the results of projects with
good accuracy, can build large applications without excessive schedule and cost
overruns, and can achieve excellence in quality and customer satisfaction, then we
can call ourselves “software engineers” without that phrase being a misnomer. Until
our successes far outnumber our failures, software engineering really cannot be
considered to be a serious and legitimate engineering profession.
Yet another major weakness of software engineering is a widespread lack of
measurements. Many software projects measure neither productivity nor quality.
When measurements are attempted, many projects use metrics and measurement
approaches that have serious flaws. For example, the most common metric in the
software world for more than 50 years has been lines of code (LOC). LOC metrics
penalize high-level languages and can’t measure noncode activities at all. In the
author’s view, usage of lines of code for economic studies constitutes professional
malpractice.
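The LOC penalty can be shown with a toy calculation. The figures are invented for illustration: assume noncode work (requirements, design, documentation) costs a fixed $50,000 and coding costs $10 per line, with the same feature needing 10,000 lines in a low-level language but only 2,000 in a high-level one:

```python
def total_cost(loc: int, noncode_cost: float = 50_000,
               cost_per_line: float = 10) -> float:
    """Total project cost: fixed noncode work plus per-line coding work."""
    return noncode_cost + loc * cost_per_line

low_level_total = total_cost(10_000)   # 150_000: more lines, more coding cost
high_level_total = total_cost(2_000)   # 70_000: cheaper overall

# The high-level language is genuinely cheaper, yet cost per LOC makes
# it look worse -- the distortion the text describes.
low_level_cost_per_loc = low_level_total / 10_000    # 15.0
high_level_cost_per_loc = high_level_total / 2_000   # 35.0
```

The same functionality costs less in the high-level language, but the per-line metric ranks it as more expensive.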
Another flawed metric is that of cost per defect for measuring quality. This metric
actually penalizes quality and achieves its lowest values for the buggiest
applications. Cost per defect cannot be used to measure zero-defect applications.
Here, too, the author views cost per defect as professional malpractice if used for
economic study.
Mathematical problems with the cost per defect metric have led to the urban legend
that “it costs 100 times as much to fix a bug after delivery as during development.”
This claim is not based on time and motion studies, but is merely due to the fact that
cost per defect goes up as numbers of defects go down. Defect repairs before and
after deployment take about the same amount of time. Bug repairs at both times
range from 15 minutes to more than eight hours. Fixing a few subtle bugs can take
much longer, but they occur both before and after deployment.
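The arithmetic behind this paradox is easy to reproduce. Assume, illustratively, a fixed $10,000 for writing and running the test suite and a flat $500 repair cost per bug, consistent with the text's point that individual repairs cost about the same regardless of quality:

```python
def cost_per_defect(fixed_test_cost: float, repair_cost_each: float,
                    defects: int) -> float:
    """Total quality cost divided by defect count -- the flawed metric."""
    return (fixed_test_cost + repair_cost_each * defects) / defects

buggy_app = cost_per_defect(10_000, 500, 100)  # 600.0 per defect
clean_app = cost_per_defect(10_000, 500, 10)   # 1500.0 per defect

# The cleaner application spends less on quality in total, yet shows a
# far higher "cost per defect" -- and at zero defects the metric is
# undefined entirely.
```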
Neither lines of code nor cost per defect can be used for economic analysis or to
demonstrate software best practices. Therefore this module will use function point
metrics for economic study and best-practice analysis. As mentioned, the specific form of
function point used is that defined by the International Function Point Users Group
(IFPUG).
There are other metrics in use such as COSMIC function points, use case points,
story points, web object points, Mark II function points, Finnish function points,
feature points, and perhaps 35 other function point variants. However, as of 2008,
only IFPUG function points have enough measured historical data to be useful for
economic and best-practice analysis on a global basis. Finnish function points have
several thousand projects, but most of these are from Finland, where the work
practices are somewhat different from those in the United States. COSMIC function points
are used in many countries, but still lack substantial quantities of benchmark data as
of 2009, although this situation is improving.
This module will offer some suggested conversion rules between other metrics and IFPUG
function points, but the actual data will be expressed in terms of IFPUG function
points using the 4.2 version of the counting rules.
IFPUG function point metrics are far from perfect, but they offer a number of
advantages for economic analysis and identification of best practices. Function
points match the assumptions of standard economics. They can measure
information technology, embedded applications, commercial software, and all other
types of software. IFPUG function points can be used to measure noncode activities
as well as to measure coding work. Function points can be used to measure defects
in requirements and design as well as to measure code defects. Function points can
handle every activity during both development and maintenance. In addition,
benchmark data from more than 20,000 projects is available using IFPUG function
points. No other metric is as stable and versatile as function point metrics.
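An unadjusted IFPUG function point count is the weighted sum of five component types. The complexity weights below are the standard IFPUG values; the component counts in the example application are invented for illustration.

```python
# Sketch of an IFPUG-style unadjusted function point count.
# Weights are the standard IFPUG complexity weights; the example counts
# are hypothetical.

WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},   # external inputs
    "EO":  {"low": 4, "average": 5,  "high": 7},   # external outputs
    "EQ":  {"low": 3, "average": 4,  "high": 6},   # external inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "average": 7,  "high": 10},  # external interface files
}

def unadjusted_fp(counts):
    """counts maps (component, complexity) -> number of occurrences."""
    return sum(WEIGHTS[comp][cplx] * n for (comp, cplx), n in counts.items())

# Hypothetical application: 10 average inputs, 8 average outputs,
# 5 low inquiries, 3 average internal files, 2 low interface files.
example = {
    ("EI", "average"): 10,
    ("EO", "average"): 8,
    ("EQ", "low"): 5,
    ("ILF", "average"): 3,
    ("EIF", "low"): 2,
}
print(unadjusted_fp(example))  # 135
```

Because the count is derived from externally visible functionality rather than from source code, the same calculation applies to requirements, design, documentation, and maintenance work, which is why function points can measure noncode activities.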
One key fact should be obvious, but unfortunately it is not. To demonstrate high
quality levels, high productivity levels, and to identify best practices, it is necessary
to have accurate measurements in place. For more than 50 years, the software
engineering domain has utilized measurement practices and metrics that are
seriously flawed. An occupation that cannot measure its own performance with
accuracy is not qualified to be called an engineering discipline. Therefore another
purpose of this module is to demonstrate how economic analysis can be applied to
software engineering projects. It will demonstrate methods for measuring
productivity and quality with high precision.
Critical Topics in Software Engineering
As of 2009, several important points about software engineering have been proven
beyond a doubt. Successful software projects use state-of-the-art quality control
methods, change control methods, and project management methods. Without
excellence in quality control, there is almost no chance of a successful outcome.
Without excellence in change control, creeping requirements will lead to
unexpected delays and cost overruns. Without excellent project management,
estimates will be inaccurate, plans will be defective, and tracking will miss serious
problems that can cause either outright failure or significant overruns. Quality
control, change control, and project management are the three critical topics that
can lead to either success or failure.
A Preview of Software Development and Maintenance in 2049
Requirements analysis circa 2049
Design in 2049
Software development in 2049
User documentation circa 2049
Customer support in 2049
Maintenance and enhancement in 2049
Deployment and training in 2049
Software outsourcing in 2049
Technology selection and technology transfer in 2049
Software package evaluation and acquisition in 2049
Enterprise architecture and portfolio analysis in 2049
Due diligence in 2049
How Software Personnel Learn New Skills
Evaluation of software learning channels in descending order:
Number 1: Web browsing
Number 2: Webinars, podcasts, and e-learning
Number 3: Electronic books (e-books)
Number 4: In-house education
Number 5: Self-study using CDs and DVDs
Number 6: Commercial education
Number 7: Vendor education
Number 8: Live conferences
Number 9: Wiki sites
Number 10: Simulation web sites
Number 11: Software journals
Number 12: Self-study using books and training materials
Number 13: On-the-job training
Number 14: Mentoring
Number 15: Professional books, monographs, and technical reports
Number 16: Undergraduate university education
Number 17: Graduate university education
Team Organization and Specialization
Large teams and small teams
Finding optimal organization structures
Matrix versus hierarchical organizations
Using project offices
Specialists and generalists
Pair programming
Use of Scrum sessions for local development
Communications for distributed development
In-house development, outsource development, or both
References and Supplementary Materials
Books and Journals
Online Supplementary Reading Materials
1. Best Practices in Software Development;
https://www.cs.utexas.edu/~mitra/csSummer2014/cs312/lectures/bestPractices.html;
November 16, 2019
2. Best Practice Software Engineering;
http://best-practice-software-engineering.ifs.tuwien.ac.at/; November 16, 2019
Online Instructional Videos