Course Presentation
EEL5881, Fall, 2003


Project: Network Reliability Tests
Team: Gladiator



Shuxin Li
Victor Velez
Xin Bai
Overview

- Current system problems
- Proposed system
- Requirements specification:
  - Software Project Management Plan
  - Software Requirements Specification
  - Test Plan
Current system problems

- Server tests are not reliable for a given domain at any random time.
- The servers are monitored manually.
- Neither monitoring nor logging of the machines available in a domain is performed.
- No statistical analysis is performed.
Proposed System

The ping-based network management tool can:
- Monitor the status of network services: telnet, web, ftp, ssh, smtp, and pop3.
- Record every network status event and display it in the services detail view.
- Let the client configure the parameters:
  - CSV logger (comma-separated values)
  - Email notification
  - UI (user interface) notification
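As a sketch of how such a service check typically works (this is not the project's actual code; the `ServiceProbe` name and the timeout value are assumptions for illustration): a service such as web, ftp, ssh, smtp, or pop3 can be considered "up" when a TCP connection to its port succeeds within a timeout.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal sketch of a TCP service probe: a service counts as "up"
// if its port accepts a connection within the timeout.
public class ServiceProbe {
    // Try to open a TCP connection to host:port; true means the service answered.
    public static boolean isUp(String host, int port, int timeoutMillis) {
        Socket socket = new Socket();
        try {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;      // refused, unreachable, or timed out
        } finally {
            try { socket.close(); } catch (IOException ignored) {}
        }
    }

    public static void main(String[] args) {
        // Common well-known ports: 21 ftp, 22 ssh, 23 telnet, 25 smtp, 80 http, 110 pop3
        System.out.println("http on localhost: " + isUp("localhost", 80, 1000));
    }
}
```

A true ICMP ping, by contrast, requires raw sockets that plain Java does not expose, which is one reason a TCP connect probe is a common substitute.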

Operational Scenarios

1. The user loads a new server list.
2. The user presses the "Begin Monitor" button.
3. The system shows the status of all monitored servers in the services detail view.
4. The system shows the statistics chart.
5. The user presses the "Stop Monitor" button.
6. The system stops monitoring.
7. The user can configure the parameters.
8. The system keeps a history of the monitored servers' status.
Operational Features

Must Have:
- Allow the user to import a correct server list file (Config.xml).
- The capacity to monitor all supported types of servers.
- Show all service details.
- Show all current faults (server down, bad ping packet reply, etc.).
- Keep a full history of server status.
- Show real-time statistics.
- Generate a log file.

Would Like to Have:
- The ability for the user to configure the parameters.
- Distinguish different server statuses with differently colored icons.
Expected Impacts

- Automatically monitor different types of servers.
- Display the current service details and faults with no delay.
- Use a statistics chart to give the user visualized results.
- Allow the user to configure the log file and notification method:
  - CSV log
  - E-mail notification
  - UI notification
- A tool designed specifically for collecting server status information.
- Higher effectiveness.
- Friendly user interface design.
Software Environment

System processor:
- Pentium II 233 MHz or above

Operating system:
- Windows 2000 or Windows XP

Other requirements:
- Java JDK 1.4 must be installed

Programming environment:
- Borland JBuilder 9.0
Project Management Plan

- Software life cycle process
- Project team organization:
  - Democratic team organization with egoless programming: no single leader.
  - Shuxin Li, Xin Bai, and Victor Velez are on the team.
  - All technical decisions are discussed and made democratically.
  - Communication is handled through email, with scheduled face-to-face meetings when ideas and designs need to be decided.

Project Management Plan

Quality Assurance:
- Non-execution-based testing: walkthroughs (the participant-driven approach).
- Each team member also acts as a QA member, detecting and recording the others' faults.

Risk Management:
- Client's resistance to change: the iterative process inherent in the Waterfall model will help us manage this risk.
Deliverables Timetable

Artifact                                 | Due Dates (some will have multiple deliveries)
-----------------------------------------|-----------------------------------------------
Meeting Minutes                          | In maintenance; updated approximately once every two weeks
Individual Logs                          | In maintenance; updated when necessary
Group Project Management Reports         | In maintenance; updated when necessary
ConOps                                   | 10/01/03
Project Plan                             | 10/01/03
SRS                                      | 10/01/03
Test Plan                                | 10/01/03
High-Level Design                        | 10/21/03
Detailed Design                          | 10/21/03
User's Manual                            | 11/18/03
Final Test Results                       | 11/18/03
Source, Executable, Build Instructions   | 11/18/03
Project Legacy                           | 11/18/03
Software Requirements Specification

Product Overview:
- The software will run under a Windows environment.
- The software will be able to monitor:
  - Web servers: HTTP and HTTPS
  - Email servers: POP3, SMTP
  - FTP servers
  - Telnet and SSH servers
- The software will report:
  - A real-time statistics chart
  - Service details and current faults

Event Table

Event: Load Server List
- External stimulus: the Open button is pressed.
- External response: the system searches for the server list file and loads it.
- Internal data and state: the server list file is read and recorded.

Event: Monitor the Servers
- External stimulus: the "Begin Monitor" button is pressed.
- External response: the system begins to test the listed servers and shows the monitoring results.
- Internal data and state: any network device with an IP address is monitored using PING (ICMP) or TCP ports.

Event: Stop the Monitoring
- External stimulus: the "Stop Monitor" button is pressed.
- External response: the system stops monitoring immediately.
- Internal data and state: the system stops sending PING (ICMP) packets and probing TCP ports.

Event: Check a Server's Detail Status
- External stimulus: the server name is clicked in the left frame.
- External response: all details for that server are shown immediately.
- Internal data and state: the "show detail" method is executed for each server.

Event: Check the Server Status Statistics
- External stimulus: the Statistics tab is selected and, periodically, the Refresh button is pressed.
- External response: a statistics chart is shown, including time and faults detected.
- Internal data and state: the "draw graphic" method is executed with the given arguments.

Event: Check the History of Server Status
- External stimulus: the History tab is selected.
- External response: a history of recent server status changes is shown.
- Internal data and state: the "show history" method is executed with the given arguments.

Event: Configure the Parameters
- External stimulus: the Config tab is selected.
- External response: the user can configure the CSV logger, email notification, or UI notification.
- Internal data and state: the relevant methods are executed and the configuration is stored in the config.xml file.
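The Load Server List event hinges on parsing Config.xml. The slides do not show the file's actual schema, so the `<server host="..." port="..."/>` layout below is an assumption, as are the class name and the example hostnames; this is a minimal sketch using the JDK's built-in DOM parser (written against a modern JDK rather than JDK 1.4 for brevity).

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Sketch of loading a server list from Config.xml with the JDK's DOM parser.
// The <server host="..." port="..."/> element layout is an assumption.
public class ServerListLoader {
    public static List<String> load(InputStream in) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(in);
            NodeList nodes = doc.getElementsByTagName("server");
            List<String> servers = new ArrayList<String>();
            for (int i = 0; i < nodes.getLength(); i++) {
                Element e = (Element) nodes.item(i);
                servers.add(e.getAttribute("host") + ":" + e.getAttribute("port"));
            }
            return servers;
        } catch (Exception e) {
            // An unparsable file is a "wrong file"; the real tool would alert the user.
            throw new RuntimeException("bad server list file: " + e.getMessage(), e);
        }
    }

    public static void main(String[] args) {
        String xml = "<config>"                              // example hosts only
                + "<server host=\"www.example.edu\" port=\"80\"/>"
                + "<server host=\"mail.example.edu\" port=\"25\"/>"
                + "</config>";
        List<String> servers =
                load(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        System.out.println(servers);   // [www.example.edu:80, mail.example.edu:25]
    }
}
```

Rejecting a malformed file with an exception matches the alert-on-bad-file behavior described in test case 2 of the Test Plan.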
Use Case Diagram
Software Requirements Specification

Use Case Descriptions:
- Load server list: the user imports a server list.
- Start monitoring: the user has the system begin monitoring all required servers.
- Stop monitoring: the user stops monitoring all required servers.
- Configuration: the user configures three parameters:
  - CSV logger: comma-separated-values log file
  - Email notification: configure the SMTP email server
  - UI notification: configure the user-interface notification
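The CSV logger option can be illustrated with a small formatting routine. This is a sketch: the class and method names are invented, and the exact record layout the tool writes is not specified in the slides. Fields containing commas or quotes are quoted, following common CSV practice.

```java
// Sketch of the CSV (comma-separated-values) logger: one line per status event.
// Fields containing commas or quotes are quoted, per common CSV convention.
public class CsvLogger {
    // Build one CSV record, e.g. timestamp,server,service,status.
    public static String toCsvLine(String[] fields) {
        StringBuffer line = new StringBuffer();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) line.append(',');
            line.append(escape(fields[i]));
        }
        return line.toString();
    }

    private static String escape(String field) {
        if (field.indexOf(',') < 0 && field.indexOf('"') < 0) {
            return field;
        }
        // Double any embedded quotes, then wrap the whole field in quotes.
        return '"' + field.replaceAll("\"", "\"\"") + '"';
    }

    public static void main(String[] args) {
        String[] event = {"2003-10-01 12:00:00", "www.example.edu", "http", "alive"};
        System.out.println(toCsvLine(event));
        // 2003-10-01 12:00:00,www.example.edu,http,alive
    }
}
```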
Software Requirements Specification

- Check statistic result: the system shows the user a statistics chart in which the X axis represents time and the Y axis represents the faults detected.
- Check history: the system shows the user the overall server status history, including:
  - Alive
  - Dead
  - Bad reply
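A chart with time on the X axis and fault count on the Y axis implies tallying fault events into fixed-width time buckets. A minimal sketch of that tally follows; the class name, bucket width, and data source are assumptions, since the slides do not specify them.

```java
// Sketch of the data behind the statistics chart: fault timestamps are
// tallied into fixed-width time buckets (X = bucket index, Y = fault count).
public class FaultHistogram {
    // Count faults per bucket; timestamps and bucket width are in milliseconds.
    public static int[] bucketCounts(long[] faultTimes, long start,
                                     long bucketMillis, int buckets) {
        int[] counts = new int[buckets];
        for (int i = 0; i < faultTimes.length; i++) {
            long offset = faultTimes[i] - start;
            if (offset < 0) continue;             // before the chart window
            int b = (int) (offset / bucketMillis);
            if (b < buckets) counts[b]++;         // ignore events past the window
        }
        return counts;
    }

    public static void main(String[] args) {
        // Three faults in minute 0, one in minute 2, over a 3-minute window.
        long[] faults = {5000, 20000, 59000, 130000};
        int[] counts = bucketCounts(faults, 0, 60000, 3);
        System.out.println(counts[0] + "," + counts[1] + "," + counts[2]);   // 3,0,1
    }
}
```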

Test Plan

Overall Objectives:
- Find as many errors as possible before the users of the software find them.
- Ensure good software quality, that is, a robust final product.
- Make sure that our software adheres very closely to the client requirements and specification documents.
Test Cases

Case 1
- Objective: demonstrate the behavioral sanity of the system.
- Description:
  - Whether the software starts properly.
  - Whether the graphical user interface (GUI) displays correctly and its look and feel is consistent with the rest of the application.
  - Whether the GUI displays the fields correctly.
  - Whether the software stops properly.
- Test condition: we will test it in both the development environment and the test environment.
- Expected results: information displays properly in the GUI.
Test Cases

Case 2
- Objective: demonstrate the functional correctness of the "Import the server list" module.
- Description: if a given server list .xml file is correct, the system accepts it and is ready to monitor; if the file is incorrect, the system raises an alert asking for the right file.
- Test condition: we will test it in both the development environment and the test environment.
- Expected results: only a correct .xml file is accepted.
Test Cases

Case 3
- Objective: demonstrate the functional correctness of the "Monitor" module.
- Description:
  - As soon as a correct server list is loaded and the user has pressed Start Monitoring, the software can monitor all the servers simultaneously.
  - Each server's status is displayed and recorded at the same time.
- Test condition: we will test it in both the development environment and the test environment.
- Expected results: the software distinguishes the different types of servers and is able to monitor all of them, showing each server's status.
Test Cases

Case 4
- Objective: demonstrate the functional correctness of the "Statistic" module.
- Description: as soon as a correct server list is loaded and the user has pressed Start Monitoring, the software can draw a real-time fault-detection chart.
- Test condition: we will test it in both the development environment and the test environment.
- Expected results: the software draws the statistics chart with no delay.
Test Cases

Case 5
- Objective: prove that the system works as an integrated unit once all the fixes are complete.
- Description: demonstrates whether all areas of the system interface with each other correctly, and tests whether the software has error-handling behavior for every possible error.
- Test condition: we will test it in both the development environment and the test environment.
- Expected results: the system integrates well and executes the correct functions; it passes the above functional tests as one integrated system.
Test Cases

Case 6
- Objective: demonstrate the compatibility of our software with other hardware.
- Description:
  - We will install our software on other types of computers, with different processors, speeds, and capabilities.
  - We will check our software on these computers.
- Test condition: we will test it in both the development environment and the test environment.
- Expected results: we expect our software to pass these tests in different system environments.
Thank you!