Systems Analysis & Design
Eighth Edition
Chapter 11
Systems Implementation
Phase Description
● Systems Implementation is the fourth of
five phases in the systems development
life cycle (SDLC)
● Includes application development,
testing, documentation, training, data
conversion, system changeover, and
post-implementation evaluation of the
results
2
Introduction
● The system design specification serves as a
blueprint for constructing the new system
● The initial task is application development
● Before a changeover can occur, the system
must be tested and documented carefully,
users must be trained, and existing data must
be converted
● A formal evaluation of the results takes place
as part of a final report to management
3
Software Quality Assurance
● Quality assurance
● Software Engineering
– Software Engineering Institute (SEI) at Carnegie Mellon University
Founded to find better, faster, and less expensive methods of software development
– Capability Maturity Model (CMM) (1991)
A set of software development standards to improve software quality, reduce development time, and cut costs
– Capability Maturity Model Integration (CMMI)
Process improvement: integrates software and systems development into a much larger framework
– CMMI tracks an organization's processes, using five maturity levels (p. 501, Fig. 11-3)
4
CMMI levels
5
Software Quality Assurance
● International Organization for
Standardization (ISO)
– Many firms seek assurance that software
systems will meet rigid quality standards
– In 1991, ISO established a set of guidelines
called ISO 9000-3
– ISO requires a specific development plan,
which outlines a step-by-step process for
transforming user requirements into a finished
product
6
Overview of Application Development
● Application development
● Objective is to translate the logical
design into program and code modules
that will function properly
7
Overview of Application Development
● Application Development Steps
– Start by reviewing
documentation from prior
SDLC phases and creating
a set of program designs
– Planning (Ch. 3-10)
– Module: related program
code organized into small
units that are easy to
understand and maintain.
– After the design is created,
coding can begin
8
Overview of Application Development
● Application Development Tasks
– Traditional methods
• Start by reviewing documentation from prior SDLC
phases and creating a set of program designs
• At this point, coding and testing tasks begin
– Agile Methods
• Intense communication and collaboration will now
begin between the IT team and the users or customers
• Objective is to create the system through an iterative
process
9
Overview of Application Development
● System Development
Tools
– Entity-relationship
diagrams
– Flowcharts
– Pseudocode
– Decision tables and decision trees (see the sketch after this slide)
10
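For illustration, a decision table worked out during design can map directly to code. The following is a minimal Python sketch with hypothetical shipping rules that are not from the textbook:

```python
# A minimal sketch (hypothetical rules) of a two-condition decision table
# from the design phase expressed directly as a lookup in code.
def shipping_action(order_total, is_member):
    decision_table = {
        # (total >= 100, member): action
        (True,  True):  "free_express",
        (True,  False): "free_standard",
        (False, True):  "discounted_standard",
        (False, False): "full_price",
    }
    return decision_table[(order_total >= 100, is_member)]

if __name__ == "__main__":
    print(shipping_action(120, True))    # free_express
    print(shipping_action(40, False))    # full_price
```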
Overview of Application Development
● Project Management
– Even a modest-sized project might have
hundreds or even thousands of modules
– Important to set realistic schedules, meet project
deadlines, control costs, and maintain quality
– Should use project management tools and
techniques
11
Structured Application Development
● Structure Charts
– Structure chart: a tool to show the program modules and the relationships among them
• Control module: a
higher-level module
• Subordinate modules:
lower-level modules
• Library module: reusable
code and can be invoked
from more than one
point in the chart
12
Structured Application Development
● Structure Charts
– Data and controls are passed between
modules
– Data Couple: an empty-circle arrow
– Control Couple: a filled-circle arrow
• Flag
• A module uses a flag to signal a specific condition or
action to another module
– Examples: Figures 11-11 and 11-12, pp. 506-507
13
Structured Application Development
● Structure Charts
– Condition: a line with a diamond on one end
• A condition line indicates that a control module
determines which subordinate modules will be
invoked, depending on a specific condition
– Loop
• A loop indicates that one or more modules are
repeated
– Example: Figure 11-13, 11-14
14
Structured Application Development
● General guidelines for good module design: modules should be highly cohesive and loosely coupled
● Cohesion and Coupling
– Cohesion: a module that performs a single function or
task has a high degree of cohesion (good)
– If you need to make a module more cohesive, you can
split it into separate units, each of which performs a
single function
– Example: p. 508, Figure 11-15
– Coupling: Modules that are independent are loosely
coupled (good)
– Loosely coupled: easy to maintain
– Tightly coupled: hard to maintain
– Status flag: an indicator that allows one module to send a message to another module (a sign of poor design; see the code sketch after Figure 11-16 below)
– Example: p. 508, Figure 11-16
15
Cohesion and Coupling (Fig. 11-16)
16
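The contrast between tight and loose coupling can also be seen in code. The Python sketch below is illustrative only (the payroll functions are hypothetical, not taken from Figure 11-16): the first version relies on a shared status flag, while the second keeps each module cohesive and passes only data.

```python
# Tightly coupled (poor design): the module sets a shared status flag that a
# separate module must inspect and interpret before it knows what to do next.
status_flag = None  # shared state between modules

def calculate_pay_tight(hours, rate):
    global status_flag
    if hours > 80:
        status_flag = "NEEDS_APPROVAL"   # signals another module
        return None
    status_flag = "OK"
    return hours * rate

# Loosely coupled, cohesive alternative: each module performs one task and
# communicates only through its parameters and return value (a data couple).
def calculate_pay(hours, rate):
    """Single function: compute gross pay."""
    return hours * rate

def needs_approval(hours, limit=80):
    """Single function: decide whether the timesheet requires approval."""
    return hours > limit

if __name__ == "__main__":
    hours, rate = 45, 20.0
    if needs_approval(hours):
        print("Route timesheet to supervisor")
    else:
        print("Gross pay:", calculate_pay(hours, rate))
```

In the second version each function performs a single task, so it can be tested and maintained on its own.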
Structured Application Development
● Drawing a Structure Chart
– Step 1: Review the DFDs
– Step 2: Identify Modules and Relationships
– Step 3: Add Couples, Loops, and Conditions
– Step 4: Analyze the Structure Chart and the Data Dictionary
– Example: p. 509-510
17
Coding
● Coding
● Programming Environments
– Each IT department has its own programming
environment and standards
– Integrated development environments (IDEs), e.g., Microsoft .NET
● Generating Code
– CASE tools can generate editable program code
directly from macros, keystrokes, or mouse
actions
18
Testing the System
● After coding, a programmer must test
each program to make sure that it
functions correctly
● Syntax errors and semantic errors (see the sketch after this slide)
● Desk checking
– Looking for logic errors
● Structured walkthrough, or code review
19
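As a hypothetical illustration (not from the textbook), the difference between the two error types and how desk checking catches the second:

```python
# Syntax error: the language translator rejects the line outright.
#   def average(values:          <- missing closing parenthesis

# Semantic (logic) error: the code runs but produces the wrong result.
def average(values):
    total = 0
    for v in values:
        total += v
    return total / (len(values) - 1)   # bug: should divide by len(values)

# Desk checking with a simple known case exposes the logic error:
# average([2, 4, 6]) should be 4.0, but this returns 6.0.
print(average([2, 4, 6]))
```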
Testing the System
● Unit Testing
– The testing of an individual program or module
– Programmers must test programs that interact
with other programs and files individually
– Test data: correct and erroneous data
– Stub testing: the programmer simulates each program outcome or result by displaying a message, making it easier to link the module with other programs later (see the unit-test sketch after this slide)
– Regardless of who creates the test plan, the
project manager or a designated analyst also
reviews the final test results
20
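A minimal sketch of a unit test that exercises one module with both correct and erroneous data, using a stub in place of a module that will be linked in later. The invoice functions and names are hypothetical, assuming Python's standard unittest library:

```python
import unittest
from unittest.mock import Mock

def calculate_invoice_total(items, tax_service):
    """Module under test: sums line items and adds tax from another module."""
    subtotal = sum(price * qty for price, qty in items)
    return subtotal + tax_service.tax_for(subtotal)

class CalculateInvoiceTotalTest(unittest.TestCase):
    def test_with_correct_data(self):
        tax_stub = Mock()                      # stub for the not-yet-written tax module
        tax_stub.tax_for.return_value = 5.0
        total = calculate_invoice_total([(10.0, 2), (5.0, 1)], tax_stub)
        self.assertEqual(total, 30.0)

    def test_with_erroneous_data(self):
        tax_stub = Mock()
        tax_stub.tax_for.return_value = 0.0
        # Erroneous test data: an empty invoice should still total zero.
        self.assertEqual(calculate_invoice_total([], tax_stub), 0.0)

if __name__ == "__main__":
    unittest.main()
```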
Testing the System
● Integration Testing
– Integration testing, or link testing
– Testing the programs independently does not guarantee that the data passed between them is correct (see the sketch after this slide)
– A program should not move to the integration test stage unless it has performed properly in all unit tests
21
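A hypothetical sketch (not from the textbook) of why integration, or link, testing is still needed: each module below could pass its own unit tests, yet the data passed between them is wrong, and the link test exposes it.

```python
def order_total(items):
    """Module A: returns the order total in cents."""
    return sum(price_cents * qty for price_cents, qty in items)

def apply_shipping(total_dollars):
    """Module B: expects dollars and adds a flat shipping charge."""
    return total_dollars + 4.99

def link_test_order_pipeline():
    """Integration test: run both modules together on real interface data."""
    total = order_total([(1000, 2)])          # 2000 cents = $20.00
    charged = apply_shipping(total)           # defect: cents passed where dollars expected
    assert charged == 24.99, f"interface mismatch: expected 24.99, got {charged}"

if __name__ == "__main__":
    try:
        link_test_order_pipeline()
        print("link test passed")
    except AssertionError as err:
        print("link test failed:", err)       # the unit-tested modules still fail together
```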
Testing the System
● System Testing
– Also known as acceptance testing
– Major objectives:
• Perform a final test of all programs
• Verify that the system will handle all input
data properly, both valid and invalid
• Ensure that the IT staff has the
documentation and instructions needed to
operate the system properly and that backup
and restart capabilities of the system are
adequate
22
Testing the System
● System Testing
– Major objectives:
• Demonstrate that users can interact with the
system successfully
• Verify that all system components are integrated
properly and that actual processing situations will
be handled correctly
• Confirm that the information system can handle
predicted volumes of data in a timely and efficient
manner
23
Documentation
● Program Documentation
● System Documentation: data dictionary,
DFD,…
● Operations Documentation: scheduling
info for printed output, etc.
● User Documentation
– Online documentation, e.g., FAQs
25
Management Approval
● After system testing is complete, you
present the results to management
● If system testing produced no technical,
economic, or operational problems,
management determines a schedule for
system installation and evaluation
26
System Installation and Evaluation
● Remaining steps in systems
implementation:
– Prepare a separate operational and test
environment
– Provide training for users, managers, and IT
staff
– Perform data conversion and system
changeover
– Carry out post-implementation evaluation of
the system
– Present a final report to management
27
Operational and Test Environments
● The environment for the actual system
operation is called the operational environment
or production environment
● The environment that analysts and
programmers use to develop and maintain
programs is called the test environment
● A separate test environment is necessary to
maintain system security and integrity and
protect the operational environment
28
Operational and Test Environments
29
Training
● Training Plan
– The first step is to identify who should receive
training and what training is needed
– The three main groups for training are users,
managers, and IT staff
– You must determine how the company will
provide training
30
Training
● Vendor Training
– If the system includes the purchase of software or
hardware, then vendor-supplied training is one of
the features you should include in the RFPs
(requests for proposal) and RFQs (requests for
quotation) that you send to potential vendors
31
Training
● Outside Training Resources
– Many training consultants, institutes, and firms
are available that provide either standardized or
customized training packages
– You can contact a training provider and obtain
references from clients
32
Training
● In-House Training
– The IT staff and user departments often share
responsibility
– When developing a training program, you
should keep the following guidelines in mind:
• Train people in groups, with separate training
programs for distinct groups
• Select the most effective place to conduct the
training
• Provide for learning by hearing, seeing, and doing
• Prepare effective training materials, including
interactive tutorials
33
Training
● In-House Training
– When developing a training program, you should
keep the following guidelines in mind:
• Rely on previous trainees
• Train-the-trainer strategy
– When training is complete, many organizations
conduct a full-scale test, or simulation
34
Data Conversion
● Data Conversion Strategies
– The old system might be capable of exporting
data in an acceptable format for the new
system or in a standard format such as ASCII or
ODBC
– If a standard format is not available, you must develop a program to extract the data and convert it (see the sketch after this slide)
– Often requires additional data items, which
might require manual entry
35
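A minimal sketch of such a conversion program, assuming the old system can export an ASCII/CSV file; the file names and field names are hypothetical:

```python
# Extract records exported by the old system as ASCII/CSV, reformat them for
# the new system, and flag rows that need manual entry of missing data items.
import csv

def convert_customers(old_file="old_customers.csv", new_file="new_customers.csv"):
    manual_review = []
    with open(old_file, newline="") as src, open(new_file, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["customer_id", "name", "email"])
        writer.writeheader()
        for row in reader:
            record = {
                "customer_id": row["CUST_NO"].strip(),
                "name": f'{row["FIRST"].strip()} {row["LAST"].strip()}',
                "email": row.get("EMAIL", "").strip(),
            }
            if not record["email"]:          # new required field: enter manually
                manual_review.append(record["customer_id"])
            writer.writerow(record)
    return manual_review

if __name__ == "__main__":
    print("Needs manual entry:", convert_customers())
```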
Data Conversion
● Data Conversion Security and Controls
– You must ensure that all system control
measures are in place and operational to protect
data from unauthorized access and to help
prevent erroneous input
– Some errors will occur
– It is essential that the new system be loaded
with accurate, error-free data
36
System Changeover
37
System Changeover
● Direct Cutover
– Involves more risk than other changeover
methods
– Companies often choose the direct cutover
method for implementing commercial software
packages
– Cyclical information systems usually
are converted using the direct cutover method
at the beginning of a quarter, calendar year, or
fiscal year
38
System Changeover
● Parallel Operation
– Easier to verify that the new system is working
properly under parallel operation than under
direct cutover
– Running both systems might place a burden
on the operating environment and cause
processing delay
– Is not practical if the old and new systems are
incompatible technically
– Also is inappropriate when the two systems
perform different functions
39
System Changeover
● Pilot Operation
– The group that uses the new system first is
called the pilot site
– The old system continues to operate for the
entire organization
– After the system proves successful at the pilot
site, it is implemented in the rest of the
organization, usually using the direct cutover
method
– Is a combination of parallel operation and
direct cutover methods
40
System Changeover
● Phased Operation
– You give a part of the system to all users
– The risk of errors or failures is limited to the
implemented module only
– Is less expensive than full parallel operation
– Is not possible, however, if the system cannot
be separated easily into logical modules or
segments
41
Post-Implementation Tasks
● Post-Implementation Evaluation
– Includes feedback for the following areas:
• Accuracy, completeness, and timeliness of information
system output
• User satisfaction
• System reliability and maintainability
• Adequacy of system controls and security measures
• Hardware efficiency and platform performance
42
Post-Implementation Tasks
● Post-Implementation Evaluation
– Includes feedback for the following areas:
• Effectiveness of database implementation
• Performance of the IT team
• Completeness and quality of documentation
• Quality and effectiveness of training
• Accuracy of cost-benefit estimates and development schedules
43
Post-Implementation Tasks
● Post-Implementation Evaluation
– When evaluating a system, you should:
• Interview members of management and key users
• Observe users and computer operations personnel
actually working with the new information system
• Read all documentation and training materials
• Examine all source documents, output reports, and
screen displays
• Use questionnaires to gather information and opinions
from a large number of users
• Analyze maintenance and help desk logs
44
Post-Implementation Tasks
● Post-Implementation Evaluation
– Users can forget details of the developmental
effort if too much time elapses
– Pressure to finish the project sooner usually
results in an earlier evaluation in order to
allow the IT department to move on to other
tasks
45
Post-Implementation Tasks
● Final Report to Management
– Your report should include the following:
• Final versions of all system documentation
• Planned modifications and enhancements to the
system that have been identified
• Recap of all systems development costs and
schedules
• A comparison of actual costs and schedules to the
original estimates
• Post-implementation evaluation, if it has been
performed
– Marks the end of systems development work
46