
Portland State University

Systems Engineering

Extending the Life Cycle of Power

Plants through Predictive Maintenance and Performance Testing

Submitted by

Donald Ogilvie

Master's Project - SYSE 506

Submitted August 22, 2010

ACKNOWLEDGEMENT

First, I have to thank my advisor, Dr. Herman Migliore, for his support, guidance and constant encouragement in making this project a reality. I am very grateful that I was given the opportunity to work on such a challenging project, one that has opened new horizons. The continuous support of my family and friends has been the key component of my success at the graduate level. I will be forever grateful to my wife Caron, who stood by me to make this day possible. I must thank my daughter Thandi and my son Hakeem for their understanding while I was pursuing graduate studies. Finally, I must thank Robert Drummonds of Ontario Power Generation, who was always available for technical assistance to make this project a success.


Table of Contents

Abstract

Section 1
1.0 Introduction
1.1 ConOps
1.2 Product Improvement Strategies
1.3 System Engineering Management Plan (SEMP)
1.4 Scope
1.5 System Engineering Process
1.6 Capabilities and Constraints
1.7 Functional Requirements
1.8 Verification Process
1.9 Functional Analysis/Allocation
1.10 Requirement Loop
1.11 Design Synthesis
1.12 Cost & Schedules
1.13 Project Variance Cost and Explanation of Estimates
1.14 PMTE Concept
1.15 Requirement Analysis

Section 2
2.0 Boiler Flow Nozzle Calibration & Gross Boiler Test (GBET)
2.1 Technical Performance Measure
2.2 Technical Reviews and Audit
2.3 Boiler Flow Nozzle Test - Process
2.4 Experimental Arrangement
2.5 Testing Procedure
2.6 Results
2.7 Findings
2.8 Work Environment
2.9 Work Order Task
2.10 Gross Boiler Efficiency Test
2.11 Boiler Test Curve Data
2.12 Post-Overhaul Findings

Section 3
3.0 Thermal Heat Rate
3.1 Test Objective
3.2 Test Programme
3.3 The Process Concepts
3.4 Work Breakdown Structure (WBS)
3.5 Work Task Schedule
3.6 Group-3 Plant
3.7 Group-1 & Group-2 (Coal Fire & Gas Fire) Plants
3.8 Variable Back Pressure Test
3.9 Summary of Readings for Turbine Variable Back-Pressure Test
3.10 The THR Data Extraction System (DES) Flow Diagram

Section 4
4.0 Validating the System
4.1 Validating Test with Computer Network
4.2 Test Plan

Section 5
5.0 Integrating the System
5.1 Software Engineering Project Plan
5.2 Software Engineering Process

Section 6
6.0 System Integration and Verification (SI&V)
6.1 System Requirements Review
6.2 The System Requirements Traceability Matrix
6.3 System Integration and Verification Planning
6.4 System Integration and Verification Development
6.5 System Integration and Verification Execution
6.6 Applying OPC Interface
6.7 OPC Data Access Architecture
6.8 System Integration Test using HMI
6.9 SI&V Test Procedures
6.10 SI&V Test Results
6.11 SI&V Test Case
6.12 Normal Operating State
6.13 Data Acquisition from Plant Instrumentation

Conclusion
Glossary
Appendix A Thermal Performance Curve
Appendix B System Instrumentation
Appendix C System Engineering Master Schedule (SEMS)
Appendix D Maintenance and Training

List of Figures & Tables

Figure 1.1 Context of a Power Plant
Figure 1.2 ConOps Context Framework
Figure 1.3a Pre-Planned Product Improvement
Figure 1.3b Product Improvement Template
Figure 1.4 Integrated & Process Development
Figure 1.5 System Engineering Management Plan Layout
Figure 1.6 System Engineering Development Environment Plan
Figure 1.7 Project Management Process Flow
Figure 1.8 Planning Process
Figure 1.9 The Essential Elements of Risk Management
Figure 1.10 Risk Management Control and Feedback
Figure 1.11 System Engineering Process
Figure 1.12 Flow Diagram
Figure 1.13 Traceability Flow Plan
Figure 1.14 Functional Architecture
Figure 1.15 Functional Flow Block Diagram
Figure 1.16 Design Synthesis
Figure 1.17 PMTE Layout
Figure 1.18 Quality Function Deployment
Figure 2.1 Boiler Flow Nozzle Test - Process
Figure 2.2 Calibrated Instrumentation to Measure Parameters
Figure 2.3 Tools-1
Figure 2.4 Work Environment
Figure 2.5 Work Breakdown Structure-1
Figure 2.6 Work Order Task Schedule-1
Figure 2.7 Discharge Coefficient versus Reynolds Number
Figure 2.8 Gross Boiler Efficiency Test
Figure 2.9 Method for Gross Boiler Efficiency
Figure 2.10 Tools-2
Figure 2.11 Environment
Figure 2.12 Work Breakdown Structure-2
Figure 2.13 Work Order Schedule-2
Figure 2.14 Gross Boiler Efficiency I/O Curve
Figure 2.15 Flow Nozzle Loop
Figure 3.1 Process Concepts
Figure 3.2 Analyzing the Process
Figure 3.3 Analyzing the Method
Figure 3.4 THR Loading
Figure 3.5 Work Order Task Outline & Instructions
Figure 3.6 Work Breakdown Structure
Figure 3.7 Work Task Schedule
Figure 3.8 Power Plant Concept of Description Diagram
Figure 3.9 The System THR I/O Curve
Figure 3.10 Extraction HP, 5 & 6
Figure 3.11 Extraction 2, 3 & 4
Figure 3.12 THR Data Extraction System
Figure 4.1 Data Extraction for System Under Test
Figure 4.2 Hardware Connections
Figure 5.1 Computerized Interface
Figure 5.2 Hybrid Prototype
Figure 6.1 SI&V Sub-Process
Figure 6.2 SI&V Planning Activity
Figure 6.3 SI&V Planning Development Activity
Figure 6.4 SI&V Execution Activity
Figure 6.5 OPC Architecture

Table 1.1 Project Management Plan
Table 1.2 Preliminary Judgements Regarding Risk Classification
Table 1.3 Gantt Timeline
Table 1.4 Requirements Allocation Sheet-1
Table 1.5 Requirements Allocation Sheet-2
Table 1.6 Staffing Cost
Table 1.7 Five Year Project Expenditure
Table 1.8 A Breakdown of Cost Increase
Table 1.9 Design Estimate
Table 1.10 Work Group Cost Estimate
Table 1.11 MOE Matrix for Nuclear Plant - 1
Table 1.12 MOE Matrix for Nuclear Plant - 2
Table 1.13 MOE Matrices for Coal & Gas Fired Plants
Table 2.1 Pre-Overhaul Test Schedule
Table 2.2 Mean Reynolds Numbers vs. ASME PT 16.5 Code
Table 2.3 Mean Reynolds Numbers vs. Manufacturer Recommendation
Table 2.4 System Reviews and Audits - As-Found Data
Table 2.5 Flow Conditions
Table 2.6 Nozzle Coefficient
Table 2.7 Test Setting - Mean Reynolds Number
Table 2.8 Manometer versus Kent Gauge
Table 2.9 Manometer & Kent Gauge Error
Table 2.10 Post Modification Boiler Test
Table 2.11 System Reviews and Audits - As-Left Data
Table 3.1 Turbine Heat Rate Loading Test
Table 3.2 Turbine Back Pressure Test
Table 4.1 Validation Requirements
Table 5.1 Acronyms (Software Development)
Table 5.2 Software Engineering Process Plan
Table 6.1 System Requirements Review Checklist
Table 6.2 System Baseline Requirements
Table 6.3 System Integration and Verification Planning
Table 6.4 SI&V Planning Input Checklist
Table 6.5 Test Case (Start-up & Shutdown)
Table 6.6 System Verification & Test Results
Table 6.7 DAH/SCADA RVM
Table 6.8 Performance Verification Matrix
Table 6.9 Test Case Results

Abstract:

The aim of this paper is to demonstrate the process of extending the life cycle of power plants (also referred to as systems) through predictive maintenance and performance testing.

The plants are divided into three groups, namely group 1, group 2 and group 3, corresponding to Coal Fire, Gas Fire and Nuclear respectively. The problems that led to this project are summarized in the ConOps, which describes the stakeholders' vision of how they will use a contemplated system to address the challenging aspects of their "as is" situation.

This project will apply System Engineering methods, concepts, techniques and practices, such as the System Engineering Management Plan (SEMP), to provide added value to the industrial sponsor. The method used to achieve the desired target is periodic, scheduled testing in the form of instrumentation measurements, calibration, and comparison of system design calculations against test results. By using the System Engineering process, this paper shows that by targeting high-level components such as the boiler, turbine, feed-water system and condenser, the entire system can be successfully tested, analyzed and documented. Therefore, this project will use software programs and computerized networks to monitor and trend the system performance before failure occurs.


1.0 Introduction:

This project is designed to use various tests and equations, such as the boiler feed-water flow nozzle calibration, the gross boiler efficiency test and the thermal heat rate test, to analyze and improve system performance through software engineering. The engineering data will be extracted from the physical system through the tests mentioned, and will be included in computer program source code used to analyze and validate the system.

The validated system will be tied to a network of computers that will be used in this project for major decision making. The results will also allow the operating and maintenance staff to utilize the statistical and other analytical data (collected through practical system tests), empirical table data, test performance and equations in procedures whenever a system performance test is required. These tests will also be used for: (a) component history, (b) monitoring failure trends, and (c) major component replacement/repair decisions that will prevent unscheduled plant outages and help predict system failure.

The Boiler Feed-Water Nozzle Calibration/Test and Gross Boiler Efficiency Test (GBET) will be carried out to provide performance data such as boiler efficiency and the nonlinear regression R-square value. Data obtained from the tests are critical to the day-to-day operation and to the improved service life cycle of this large, complex system.

The Thermal Heat Rate (THR) performance test will be carried out to: (a) provide performance data in the form of an input/output (I/O) equation, (b) determine I/O curves, which are primarily used to calculate the total and incremental running costs, (c) discover deficiencies in the feed-water system, extraction line pressure drops and other information relevant to the turbine cycle, and (d) provide benchmark data for station performance reference, monitoring, Analog Input (AI) and Computer Input (CI).
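To make (a) and (b) concrete, the sketch below fits the quadratic I/O equation used later in this project, Input (GJ/h) = A + B*MWnet + C*MWnet^2, to a handful of test points by least squares, reports the R-square value (the acceptance target named in Section 1.15 is 98%), and differentiates the curve to get the incremental running rate. The data points, and the choice of C++ here, are illustrative assumptions only, not station test results.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Fit Input = A + B*MW + C*MW^2 by least squares (3x3 normal equations,
// solved by Gaussian elimination). All data below are illustrative only.
void fitQuadratic(const std::vector<double>& mw, const std::vector<double>& gj,
                  double c[3]) {
    double S[3][4] = {{0}};                       // augmented normal equations
    for (std::size_t k = 0; k < mw.size(); ++k) {
        double x[3] = {1.0, mw[k], mw[k] * mw[k]};
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) S[i][j] += x[i] * x[j];
            S[i][3] += x[i] * gj[k];
        }
    }
    for (int i = 0; i < 3; ++i)                   // forward elimination
        for (int r = i + 1; r < 3; ++r) {
            double f = S[r][i] / S[i][i];
            for (int j = i; j < 4; ++j) S[r][j] -= f * S[i][j];
        }
    for (int i = 2; i >= 0; --i) {                // back substitution
        c[i] = S[i][3];
        for (int j = i + 1; j < 3; ++j) c[i] -= S[i][j] * c[j];
        c[i] /= S[i][i];
    }
}

int main() {
    // Hypothetical test points: net output (MW) vs. heat input (GJ/h).
    std::vector<double> mw = {200, 250, 300, 350, 400, 450, 500};
    std::vector<double> gj = {2050, 2480, 2930, 3400, 3890, 4400, 4930};
    double c[3];
    fitQuadratic(mw, gj, c);

    // R-square = 1 - SSres/SStot; the project's acceptance target is >= 0.98.
    double mean = 0, ssTot = 0, ssRes = 0;
    for (double g : gj) mean += g;
    mean /= gj.size();
    for (std::size_t k = 0; k < mw.size(); ++k) {
        double fit = c[0] + c[1] * mw[k] + c[2] * mw[k] * mw[k];
        ssRes += (gj[k] - fit) * (gj[k] - fit);
        ssTot += (gj[k] - mean) * (gj[k] - mean);
    }
    std::printf("Input = %.1f + %.3f*MW + %.6f*MW^2, R^2 = %.4f\n",
                c[0], c[1], c[2], 1.0 - ssRes / ssTot);

    // The incremental running rate is the derivative: dInput/dMW = B + 2*C*MW.
    std::printf("Incremental rate at 400 MW = %.3f GJ/MWh\n",
                c[1] + 2.0 * c[2] * 400.0);
}
```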

Another key purpose of this project is to use Programmable Logic Controllers (PLC) and a Human to Machine Interface (HMI)/Supervisory Control and Data Acquisition (SCADA) system to justify replacing aged instrumentation controls and to allow the system to evolve with time through a computerized network. The PLC operation will be linked to the SCADA computer through a data bus interface that will be a part of the system validation, verification and integration network.


The data for the SCADA computer will be extracted from the existing system through the tests mentioned here. Each Data Acquisition Hardware (DAH) Input/Output (I/O) maintenance screen will display the following items for each analog signal: (a) the module ID number, (b) the I/O description, and (c) the current voltage reading and current engineering value, including units. Additional information will be provided to identify the applicable DAH unit and the panel location of the hardware. The DAH I/O maintenance display screen will encompass all Data Extraction System (DES) inputs that are obtained directly from the dedicated DAH connected to the field instrumentation. The Digital Control Computer (DCC) I/O maintenance display screen will encompass all DES inputs obtained from DCC-X through the Gateway-X computer. Only one screen of data will be viewed at a time.
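As a purely illustrative reading of that screen description, the sketch below models one analog point as a record carrying the module ID, I/O description, voltage reading, engineering value with units, and panel location. The record layout, the example ID format, and the 1-5 V scaling (4-20 mA through a 250-ohm shunt) are assumptions for illustration, not the plant's actual DAH data format.

```cpp
#include <cstdio>
#include <string>

// Hypothetical record for one analog point on a DAH I/O maintenance screen.
struct AnalogPoint {
    std::string moduleId;        // e.g. "DAH-07/AI-03" (illustrative format)
    std::string description;     // I/O description shown to the maintainer
    double      volts;           // current voltage reading from the module
    double      engLow, engHigh; // engineering range spanned by 1-5 V
    std::string units;
    std::string panel;           // panel location of the hardware
};

// Convert a 1-5 V reading (4-20 mA on a 250-ohm shunt) to engineering units.
double engValue(const AnalogPoint& p) {
    return p.engLow + (p.volts - 1.0) / 4.0 * (p.engHigh - p.engLow);
}

int main() {
    AnalogPoint bp = {"DAH-07/AI-03", "Boiler pressure, channel D",
                      3.73, 0.0, 7.0, "MPa", "Panel 12, row B"};
    std::printf("%s  %s  %.3f V  %.2f %s  (%s)\n", bp.moduleId.c_str(),
                bp.description.c_str(), bp.volts, engValue(bp),
                bp.units.c_str(), bp.panel.c_str());
}
```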

This project uses a work management and project management software program known as PASSPORT.

Some of the commonly used tasks in PASSPORT are: initiate work request, expedite work order, work planning, scheduling activities, work order completion, work report, equipment history analysis, calibration support, analysis activities, material inventory and reliability centered maintenance.

This program provides a wide range of panels and interfaces, such as project work details and a project management interface. The project work schedule detail panel helps us identify and track project work, project management development and maintenance summary tasks.

The project manager/SE also uses this panel to set and review dates associated with major project events, milestones and tasks during the project. This panel also supports the entry of planned labour and work-percent-completion information, which is used for plan variance reporting and analysis. Furthermore, scheduled percentage-completion information is calculated and displayed based on the dates entered. Summary tasks may be associated with an activity, which is a user-defined field used to categorize the project work task. This panel also carries a field which relates project work to project budgets and actual costs. In addition, PASSPORT provides an interface panel to upload and download scheduled and selected tasks to Microsoft (MS) Project.


1.1 ConOps

The Problem Space

Figure 1.1 Context of a Power Plant

(See Figure 1.1 above.) The Sponsor (Stakeholder) needs the system performance upgrade to:

(a) Minimize forced outages

(b) Address the concern of the End-Users

(c) Meet and exceed the various Standards set by the industry

(d) Meet the needs of the professional organizations that regulate the day to day system operations

(e) Address the public outcry over lost revenue and inadequate system performance

(f) Address the findings from an independent Technical Audit Review group

(g) Make the evolutionary changes that will allow the system to evolve with time, such as the Human to Machine Interface (HMI), Man to Machine Interface (MMI), Programmable Logic Controller (PLC) and SCADA

(h) Maximize the system output capacity in Megawatts and revenue

(i) Extend the Life Cycle of the system


The Sponsor will realize value from the system's upgraded performance and improved operational conditions. A Technical Review and Audit group was used to assess the problems.

Employer: The sponsor typically relies on the employer to manage the project that will improve the system. Furthermore, both sponsor and employer share the risks inherent in making the necessary changes to improve the system.

Project: The project is where time, schedule, resources, competence, goals, status reviews and task activities are mixed together to yield the required results in a given Cycle Time (CT). The project is also seen as the carrier that will deliver the required changes to the system.

Practitioners of Systems Engineering (PSEs): These are the employees who play a major role in the project, collaboratively designing it and documenting the scheme as a Systems Engineering Management Plan (SEMP). The PSEs will operate the project with other work groups, drawing on non-SE inputs as well.

Power Plant “As Is”

In this project, the value added by PSEs is considered latent until the overall project produces the desired output to effect the change that will improve the "As Is" system.

The power plant effort will draw on at least five types of providers:

(1) Industry Standards (ASME)

(2) SE relevant standards bodies

(3) Professional organizations

(4) On the job training (OJT)

(5) Manufacturers of the different sub-systems

These five types of providers form an integral part of the tools and skills required by PSEs to add the desired value that will change the "As Is" power plant into a reliable source of power output and a revenue generator. The output will be evaluated through the Measure of Effectiveness (MOE), which will be discussed in detail in this project using the major system components.

The major components of interest are: boiler, turbine, feed-water and condenser.

These components were chosen because they represent the primary components in the power plant system. Any impairment to the primary components, such as the boiler, turbine and condenser/feed-water system, will reduce capacity or shut the unit down. The same cannot be said of other subsystem components such as turbo-control and governor control. However, the system carries multiple layers of redundancy and spare modules that can be easily substituted or set to operate in manual mode without impairing the final product (output power). In addition, by focusing on these upper-level components, the lower-level components will also be addressed in the process. For example, the condenser is the final destination for most of the steam produced by the boiler. The condenser removes the latent heat from the steam before turning the steam back into water to allow the re-heat process to continue the cycle. All the lower-level subsystems between the condenser and the boiler must be fully functional for the boiler to deliver the required steam to the turbine. Therefore, if the requirements for the boiler and turbine are met, then we know that the related subsystems are functional.

Addressing the problems discovered by the Technical Review Auditors

The problems will be addressed in the following ways:

(a) Using more than one source for obtaining critical data
(b) Analyzing and evaluating the primary components with the present instrumentation, in conjunction with a new computerized system and interfaces that will allow advanced trending and remote data monitoring
(c) Providing the guidelines by which day-to-day and periodic testing will become the new standard
(d) Continuously improving maintenance practices directed through PASSPORT (work management software)
(e) Breaking down all work orders into work tasks, each followed by a work report for trending, history and maintenance traceability

(f) Reducing the forced loss rate to less than 5% annually to optimize revenue and extend the life cycle of systems across the various plants

The forecast benefits of the project are:

(a) To keep forced outages to less than 5%
(b) To meet the peak power demand of the End-Users
(c) To meet and exceed the industry standards
(d) To meet the needs of the regulators
(e) To generate the revenue that the system is capable of
(f) To address the findings of the Technical Review Audit (TRA)
(g) To have a new computer-based system interface that will evolve with time

The tests & controls that will be used to deliver the forecasted benefits are:

(a) Boiler nozzle calibration test & gross boiler efficiency test

(b) Thermal Heat Rate (THR) test

(c) Supervisory Control and Data Acquisition (SCADA)

(d) Programmable Logic Controller & Human Machine Interface (HMI)

The expected relationships among the tests to be performed

There is an interdependent behaviour among the test parameters. For example, the boiler flow nozzle calibration test will directly affect the Gross Boiler Efficiency Test (GBET), because flow rate depends on temperature. Furthermore, if the gross boiler efficiency is low, the thermal heat rate will be higher than normal. Therefore, in order to produce optimum power output, all three tests mentioned above will have to produce results showing maximum performance.
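The direction of that dependence can be illustrated with the standard identity relating heat rate to overall efficiency: since 1 MWh = 3.6 GJ, the heat rate in GJ/MWh is 3.6 divided by overall efficiency, so any drop in gross boiler efficiency shows up directly as a higher THR. The efficiencies in the sketch below are illustrative numbers, not test results.

```cpp
#include <cstdio>

// Heat rate (GJ/MWh) from overall cycle efficiency, using 1 MWh = 3.6 GJ.
double heatRate(double efficiency) { return 3.6 / efficiency; }

int main() {
    // Illustrative only: overall efficiency = boiler eff * turbine-cycle eff.
    double turbineCycle = 0.44;
    for (double boilerEff : {0.92, 0.90, 0.88}) {
        double overall = boilerEff * turbineCycle;
        std::printf("boiler eff %.0f%% -> overall %.1f%% -> THR %.2f GJ/MWh\n",
                    boilerEff * 100, overall * 100, heatRate(overall));
    }
}
```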


The ConOps framework is summarized in Figure-1.2 below, with the three major players interconnected to solve the common problem. The three bodies shown will be elastic in their decision-making schemes in order to create non-hierarchical relationship patterns between each body. The decision-making data derived from the independent TRA group will lead to choices, options and appropriate actions to achieve the desired goal.

Figure-1.2 ConOps Context Framework.


1.2 Product Improvement Strategies

Complex power plant systems do not have to be stagnant in their operational configurations, and the need for change can come from sources such as software engineering. By applying software engineering technology, the operational configuration can be affected in countless ways. The major problem with configuration change is the risk associated with the predicted change and with unknown changes to a large, complex system. Acting on the Technical Review Audit findings in the ConOps, this project will use available technology that allows the system to perform better and be more cost effective throughout its life cycle.

In addition to improving system performance, the project will also focus on reliability and maintainability through data obtained from performance testing and documentation. Performance testing and documentation will be done through re-validation and verification of the existing systems. This project will adopt the Pre-Planned Product Improvement (P3I) approach to establish effective and complete interface requirements for upgrading and improving the existing system performance (see Figure-1.3a below). Using the P3I approach will generally increase the initial cost, configuration management activity and technical complexity. However, the big payoff of this method is that it provides modular equipment upgrades with flexible interface connectivity and open systems development.

A Test Plan will be in place to validate the existing system. This plan will cover the causes of test failures (failure of the system under test, errors in expected results, and failures due to the test script or procedure), the recording of failures in the test report, and tests deferred to subsystem testing. An integration test plan will integrate the validated data into a network of computers, some operating under different protocols and software programs, such as Microsoft C/C++ and Microsoft Visual Basic (VB). A configuration plan will be in place to look at the various computer systems within the network and make sure that their operation meets the baseline requirements.


Product Improvement Strategies

Figure-1.3a Pre-Planned Product Improvement

By embracing the concept that equipment fails when its capability drops below the desired performance, this P3I systems engineering management scheme will target the causes of failures to prevent any drop in performance as it relates to the power plant (system). In order to control performance, this project will test, monitor and analyze the engineering data extracted from the system. The monitoring and analysis will be done through a network of computers.


Product Improvement Strategies

Figure-1.3b Product Improvement Template

Figure-1.3b shown above is used as a template to guide the development of this project to meet the needs of the external and internal stakeholders. PASSPORT is the hub that drives the process: what gets done and the sequence in which all work tasks are done. Furthermore, PASSPORT provides an indispensable traceability path that tracks the status of a device from the manufacturer, through identification codes such as the Cat ID, UTC number and Material Request (MR) number, to the final work report. The business LAN is not a physical link to the process plan. However, all tasks carried out must include a work order with a number, and at the end of each day a work report must be filed against the work order to support the task, even if the task is incomplete.


Product Improvement Strategies

Open System Approach to the Systems Engineering Process

Figure-1.4 – Integrated and Process Development

This open system approach is a combined approach that includes business and technical components to develop a process that results in a system whose components are easier to change, upgrade and replace. This approach allows the project to develop and produce flexible interfaces and software programs that maximize the use of currently available, competitive commercial products, and that will evolve with time and enhance the process for future upgrades. From a business standpoint, this type of operation will directly reduce the system's life cycle cost. Meanwhile, from a technical perspective, the emphasis is on interface control, modular design and design upgradeability.

Furthermore, incremental upgrades are tied to the growing trend in new technology capability and technical maturity in areas such as the Human to Machine Interface and client servers. This concept also adds advantages that will benefit this project: (a) a wide range of commercial products are tested to various industrial standards, and (b) control equipment that is obsolete due to a lack of spare parts can be replaced with interfaces common to the industry standard. Moreover, replacement of obsolete control equipment will be modular, thereby minimizing system downtime and enhancing productivity, as recommended by the Technical Review Audit group. Figure 1.4 above summarizes the open concept approach.


1.3 System Engineering Management Plan (SEMP)

The SEMP layout in Figure-1.5 is a top-level management plan with technical components; it forms a living document that integrates the project activities, operation and expansion. The Project Management Plan is shown in the Table-1.1 matrix, and the various phases, such as Project Identification, Project Initiation, Project Definition and the Execution Phase, will be covered under the Project Management Plan. This project is using the SEMP as a guide to answer questions about the problem we are trying to solve, the sensitivity parameters that will influence the process dynamics, and which metrics will be used to measure technical progress.

System Engineering Management Plan (SEMP) Layout

Figure-1.5 System Engineering Management Plan (SEMP) Layout


System Engineering Development Environment (SEDE) Plan

Figure-1.6 System Engineering Development Environment

The objective of having a SEDE plan is to support the tools and methods with input from the process and the environment (see Figure 1.6 above). In addition, the Systems Engineer will ensure that all the processes and environments on this project are integrated and used properly.

The control and integration of the SEDE embraces the management structure and Systems Engineering policy that create the process, which in turn feeds the tools and methods. The environment, on the other hand, supports the tools and methods with the required utilities such as computer facilities, laboratory facilities, communication and networking. In addition, the SEDE fills the gap that exists in the SEMP.

In this project, the environment is anchored by a work management software package known as PASSPORT. This software covers every facet of work management, such as material inventory, staging, tracking material, work procedures, work orders, work tasks and work reports. The process and methods are tailored to the different tasks that will be performed here, while the tools are common to the system's computerized network, which forms an integrated process for a complete system. The environment will provide each task with the required software/tools to complete the task.


The Project Management Plan

Table 1.1 - The Project Management Plan: Project Estimation & Accuracy Matrix

Stage: Conceptual
  Objectives: (a) Screen the concept with respect to strategic fit; (b) Identify and clarify needs and select projects which maximize long-term value; (c) Identify high-level resource needs
  Estimate Accuracy: (a) Up to +/-60% during the stage; (b) Target +/-40%

Stage: Budgetary
  Objectives: (a) Establish and list alternatives that meet the need or opportunity; (b) Prioritize based on resource availability and needs in the Life Cycle plan; (c) Develop and get approval for a preliminary business case to proceed
  Estimate Accuracy: (a) Total project estimate +30% to -15%

Stage: Release Quality
  Objectives: (a) Define the preferred alternative for the Business Case; (b) Prepare the Project Execution Plan and schedule project milestones for approval
  Estimate Accuracy: (a) Release Quality estimate +15% to -10%; (b) Target to be within +/-5%

Stage: Definitive
  Objectives: (a) Design, build, test and commission; (b) Detailed execution plans to ensure effective conduct and control
  Estimate Accuracy: (a) Definitive estimate +/-5%
  Cost Tracking & Control: (a) Approved definition work is charged to the Project Work Order; (b) Actual cost incurred is reported; (c) Cost charged to Work Order

Stage: Close Out
  Objectives: (a) Assess the efficiency and effectiveness of the project management process; (b) Feedback drive to achieve continuous improvement
  Estimate Accuracy: (a) Review estimate quality at each phase; (b) Search out means for estimate accuracy improvement
  Cost Tracking & Control: (a) Project complete and put in service; (b) Complete in-service report; (c) Transfer to Fixed Assets; (d) Complete Project Closure Report


Project Management Process Flow

Figure-1.7 Project Management Process Flow

Figure 1.7 shows the project management process flow. The needs were identified by a Technical Review Audit (TRA) group, as discussed earlier in the ConOps. However, special attention will be paid to technology insertion and the engineering data extracted from the physical system. This data will be converted into computer program source code for the purposes of documenting, improving, analyzing and validating the system. The engineering data will be made available through the series of tests described here. Each test will provide an interface to the computer network, so that the system can be tracked continuously. The Initiation Phase was done with the internal Stakeholders, external Stakeholders and Project Team coming together to share ownership of the project as a group. This group acts on the findings of the TRA group to determine the high-level deliverables, resource requirements, policies, procedures and objectives needed to obtain initial project approval. The planning process will define the work activities and their sequence, identify work resources, estimate work durations, identify the risks, estimate costs and establish a base budget. Furthermore, this Planning Phase will estimate the project size and produce a schedule with the work breakdown structure (WBS), governed by a System Engineering management plan, as shown below in Figure-1.8.


Planning Phase

Figure-1.8 Planning Process

Figure 1.8 refers to the planning process, which includes a series of activities such as the Boiler Feed-Water flow nozzle calibration, the Gross Boiler Efficiency Test, the Thermal Heat Rate test and software program interfacing. These steps will result in a complete management plan. The process activities will:

(a) Yield a plan that will define how the scope, schedule, deliverables, resources and cost will meet the project objectives

(b) Establish a subsidiary management plan to define a given area of the project. In this project, the subsidiary management plan is the software development plan.

Monitoring & Control process

This group will develop project performance reports, review and track project status as it relates to performance criteria such as scope, cost, schedule and quality assurance. This group will also develop and manage a corrective action plan that will keep the project on track to meet the objectives.

Executing & Close Phase

The Executing Phase will be done with regular schedule and requirement reviews in order to resolve any requirement issues, while the Close-Out Phase will review the lessons learned in all of the phases and sign off on project completion.


Risk Management

Figure-1.9 The Essential Elements of Risk Management

Figure-1.10 Risk Management Control and Feedback

Adapted from DoD


Table-1.2 Preliminary Judgements Regarding Risk Classification

Low Risk
  Consequences: Insignificant cost, schedule or technical impact
  Probability of Occurrence: Little or no likelihood of impact
  Extent of Demonstration: Full-scale, integrated technology has been demonstrated previously
  Existence of Capability: Capability exists in a known product; requires integration into the new system

Moderate Risk
  Consequences: Affects program objectives, cost or schedule; however, cost, schedule and performance are achievable
  Probability of Occurrence: Probability sufficiently high to be of concern to management
  Extent of Demonstration: Has been demonstrated, but design changes and tests in relevant environments are required
  Existence of Capability: Capability exists, but not at the performance levels required for the new system

High Risk
  Consequences: Significant impact, requiring reserves or alternate courses of action to recover
  Probability of Occurrence: High likelihood of occurrence
  Extent of Demonstration: Significant design changes required in order to achieve required/desired results
  Existence of Capability: Capability does not currently exist

Adapted from DoD

Figure-1.9 uses the four essential elements of risk management to identify, analyze, mitigate, control and plan the risk process. These elements provide a continuous feedback process that is used to monitor and control risk management. Figure-1.10 shows an integrated risk management system with feedback control that interacts with the four key elements to identify the activities shown in Table-1.2. By identifying risks and analyzing uncertainty sources and potential drivers, one can transform uncertainty into risk that could affect the life cycle. This project aims at minimizing or avoiding risk, because power operation at this capacity (500/950 MW) means that system downtime can cost a very large sum of money. The matrix in Table-1.2 is used to help select the risk mode: low, moderate or high. In avoiding risk, the project will remove requirements that lead to uncertainty and high risk probability. The control process is built into the software development plan, so the design process follows a low-risk pattern.

For example, software program insertion is considered to be high risk. Therefore, a prototype software program model was used to provide a mock-up operation of the existing power plant system in order to iron out the bugs and minimize the risk.
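One way to picture how Table-1.2 is applied is as a lookup that takes the worse of the consequence and probability ratings. The scoring rule below is a hypothetical sketch of that selection, not the project's actual classification procedure.

```cpp
#include <cstdio>

enum class Level { Low, Moderate, High };

// Hypothetical classifier in the spirit of Table-1.2: the overall risk mode
// is the worse of the consequence rating and the probability rating.
Level classify(Level consequence, Level probability) {
    return consequence > probability ? consequence : probability;
}

const char* name(Level l) {
    switch (l) {
        case Level::Low:      return "low";
        case Level::Moderate: return "moderate";
        default:              return "high";
    }
}

int main() {
    // e.g. software program insertion: high consequence, moderate probability.
    Level risk = classify(Level::High, Level::Moderate);
    std::printf("software insertion risk mode: %s -> prototype first\n", name(risk));
}
```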


1.4 Scope

The scope as it relates to the Systems Engineer is to:

(a) Ensure that the correct technical task gets done during the development and planning phase.

(b) Develop a total system solution that balances cost, schedule, performance and risk

(c) Develop and track technical information needed for decision making.

(d) Develop verification and technical solution procedures to satisfy customer requirements.

(e) Develop baseline and configuration control

(f) Develop a system that can produce economically throughout the full life cycle

Scope and Structure of Document

The Standards and Procedures Guidebook (SPG) for the improvement of the system, as it relates to the three groups of power plants, is used to embrace an open Systems Engineering process. This guide is based on the related activities and recommendations of the TRA group findings shown in the ConOps.

The SPG provides descriptions for:

(a) Validated data from the boiler feed-water flow nozzle, GBET and THR tests, analyzed and displayed through the computerized network
(b) New technology insertion, such as Supervisory Control and Data Acquisition (SCADA) and software programs using Microsoft C/C++ and Visual Basic
(c) The software development plan to track system performance
(d) The software engineering model to mock up the current system functional baseline
(e) The software engineering process to run the different programs
(f) The software organization and interfaces
(g) The verification & integration test plan
(h) A fully qualified, verified & integrated system to meet the needs of the sponsor and the various stakeholders
(i) Training of Maintainers and Operators


1.5 System Engineering Process (Road Map for this Project)

Figure-1.11 System Engineering Process

The System Engineering Process as outlined in Figure 1.11 includes the following:

Process Input:
Customer needs/objectives/requirements:
(a) Missions
(b) Measures of Effectiveness
(c) Environments
(d) Constraints

Process Output (development level dependent):
(a) Decision database
(b) System/configuration item architecture
(c) Specification and baseline

Related terms:
Customer = organization responsible for the primary functions
Primary functions = development, production/construction, verification, deployment, operations, support and disposal
System elements = hardware, software, personnel, services and techniques


1.6 Capabilities and Constraints

Group-1 Coal Fire Capabilities:

(1) Plants are capable of producing 510 MW at Maximum Capacity Rating (MCR)

(2) Using the ASME PTC 6 code standard, plants should be able to:

(a) operate with 4 mills of pulverization, reduced throttle pressure and temperature, full isolation and 4 Governor Valves Wide Open (4VWO) to produce 375 MW at 74% High Pressure (HP) efficiency

(b) operate with 5 mills of pulverization, reduced throttle pressure and temperature, full isolation and 4VWO to produce 395 MW at 77% HP efficiency

(c) operate at the 4VWO condition while a mixed coal blend is burnt in the boiler at a ratio of 85/15 by mass

Group-2 Gas or Oil Fire Capabilities:

Start-up and shut-down in less than 4 hours

Constraints

Design constraints, such as the fineness of atomization and the size and shape of fuel oil particles, are limited by:

(a) Burner capacity which depends on fuel oil pressure and on nozzle diameter

(b) Spiral pitch of fuel oil which is an integral part of the nozzle diameter

(c) Flow resistance due to turbulent flow and friction

(d) The throughput capacity of the mechanical burner is limited by flow-rate through the nozzle. This flow-rate is also directly proportional to the square root of the fuel oil pressure.

(e) The adjustable mechanical atomizer carries a narrow throttling range: because the flow-rate varies with the square root of the fuel oil pressure, doubling the pressure increases the flow-rate by only a factor of sqrt(2), roughly 1.4, i.e. about 40%.


Group-3 Nuclear Capabilities:

Can operate at 36% efficiency, which is above the industry average for power plants

Constraints

(a) Start-up and shut-down are limited by the K-value, where reactivity is defined as ρ = (K - 1)/K.

Starting up a reactor (CANDU) can be a very long process, varying from 7 to 14 days if the outage is forced by an unplanned event during normal operation. The K-value limits the operating state of the reactor to one of the following: supercritical, critical or subcritical.

A reactor with a K-value > 1 has a neutron population that is increasing with time. Since reactor power is directly proportional to the neutron level in the reactor, K > 1 indicates that reactor power is increasing with time, and the reactor is said to be supercritical. When the K-value = 1, the reactor is said to be critical. When the K-value < 1, reactor power is decreasing with time and the reactor is said to be subcritical.
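For concreteness, the reactivity bound can be evaluated directly from the formula above; the K values in this sketch are illustrative only, not operating data.

```cpp
#include <cstdio>

// Reactivity from the neutron multiplication factor: rho = (K - 1) / K.
double reactivity(double k) { return (k - 1.0) / k; }

int main() {
    for (double k : {0.998, 1.000, 1.002}) {
        const char* state = k > 1 ? "supercritical (power rising)"
                          : k < 1 ? "subcritical (power falling)"
                                  : "critical (steady)";
        std::printf("K = %.3f  rho = %+.5f  %s\n", k, reactivity(k), state);
    }
}
```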

(b) Regulatory constraints, such as those set by the Canadian Nuclear Safety Commission (CNSC), federal environmental standards and the worldwide industry standards set by the World Association of Nuclear Operators (WANO)

1.7 Functional Requirements:

1. To measure the Thermal Heat Rate (THR), using the GBET and the Manufacturer Design Specifications as a guide
2. To express the THR test results as a net input/output (I/O) characteristic curve based on the quadratic equation: Input (GJ/h) = A + B*MWnet + C*MWnet^2
3. To measure the steam at the turbine stop valve
4. To measure the steam flow to the re-heater
5. To measure the enthalpy of steam supplied to the IP turbine before the interceptor valve
6. To measure the enthalpy of the feed water at the HP heater outlet
7. To measure the enthalpy of the steam turbine exhaust
8. To measure the net generated power output
9. To calculate the electrical power drawn by the boiler feed pump


10. To calculate the following using the actual test conditions:

(a) Main steam pressure

(b) Re-heat temperature

(c) Re-heat pressure drop

(d) Extraction line pressure drop

(e) Turbine internal efficiency

1.8 Verification Process

The verification process will follow the sequence shown in the top-down flow diagram below. This will be done with the use of the System Classification Index (SCI); the objective of applying this index is to assign every device to a subsystem using PASSPORT (the work management program). Every device will be identified in the format Unit-System-Device; a typical example would be Boiler Feed Pump number 1 (BFP) in Unit 3, for which the SCI would read 3-1101-BFP #1. For calibrated devices, the procedure, specification and data sheet are available in PASSPORT. See Appendix-B for a general overview of the calibration setup and procedures.

This method of verification provides a seamless path to system integration and traceability. In addition, this method should use two persons to carry out correct component/device verification. One person reads the correct component/device from a live document while placing his or her finger on the device. The other person verifies that the reading from the live document corresponds to the device of interest that will be undergoing maintenance, testing or repairs, and indicates a verbal yes or no to the physical device verification. The second person then takes on the role of the first person, reading the live document while pointing to the device. This process is very important in identifying the correct device for verification.
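As a small illustration of the Unit-System-Device format, the sketch below splits an SCI string such as "3-1101-BFP #1" into its three fields. The parsing rule is inferred from the single example given here, not taken from PASSPORT itself.

```cpp
#include <cstdio>
#include <string>

// Split a System Classification Index of the form Unit-System-Device,
// e.g. "3-1101-BFP #1" -> unit "3", system "1101", device "BFP #1".
// The format is assumed from the example in the text.
bool parseSci(const std::string& sci, std::string& unit,
              std::string& system, std::string& device) {
    size_t d1 = sci.find('-');
    size_t d2 = sci.find('-', d1 + 1);
    if (d1 == std::string::npos || d2 == std::string::npos) return false;
    unit   = sci.substr(0, d1);
    system = sci.substr(d1 + 1, d2 - d1 - 1);
    device = sci.substr(d2 + 1);
    return true;
}

int main() {
    std::string u, s, d;
    if (parseSci("3-1101-BFP #1", u, s, d))
        std::printf("unit %s, system %s, device %s\n",
                    u.c_str(), s.c_str(), d.c_str());
}
```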


Figure-1.12 Flow Diagram

The flow diagram shown in Figure-1.12 above describes the steps required to effectively verify a complex system such as a power plant. This process is used to verify the functional and physical architecture of a system, which includes the functional analysis/allocation and synthesis. The physical verification approach is used to coordinate the design, development and test processes. This process also ensures that the requirements are testable and achievable. Variances and conflicts will be identified by this process, allowing it to meet the system configuration baseline that will control the product breakdown structure.


1.9 Functional Analysis/Allocation

The goal of this analysis is to move from higher-level requirements to lower-level requirements with a flow plan for traceability. See Figure 1.13 below.

Figure 1.13 Traceability Flow Plan

Input: The output from the requirements analysis

Output: System configuration

Enablers:

1. Various work groups, such as:
   (a) Mechanical maintainers
   (b) Electro-mechanical maintainers
   (c) Electrical and Instrumentation Control Technicians/Technologists
   (d) Engineering Support
   (e) Assessors & Planners
2. The decision database from calculated input and output data shown on the R-square regression analysis curve
3. Function flow block diagrams and behaviour diagrams
4. Requirement allocation sheets and timelines


Functional Architecture

Figure-1.14 Functional Architecture

This functional requirements allocation (Figure 1.14) is used to document the link between allocated functions and the performance requirements of the physical system. It is used as an indispensable source of traceability analysis and design synthesis. The function numbers are used to match the Functional Flow Block Diagram (FFBD) traceability and indenture numbers. Figure 1.15 shows the FFBD in top-level-down format, matching the operational sequence of events for the system. By using the test performance data collected through the boiler, turbine and generator, we are able to produce the required output. When the required output meets the system requirement, then sub-system levels 1.1 through 1.4.1 are also meeting the system requirements.

Furthermore, every single block component in the first- and second-level blocks can cause the system to fail. Each component in the system is given an SCI that will identify the block and each device in the block, such as a pressure switch, flow switch, flow transmitter or temperature transmitter. This is done to establish a flow diagram with continuous reference for traceability, integration and verification.


Functional Flow Block Diagram (FFBD)

Figure-1.15 Functional Flow Block Diagram

The objectives of the Functional Flow Block Diagram (FFBD) are to ensure that all life-cycle phases are covered and that all elements of the system are identified and defined as specific system functions. The numbering scheme is used as an explicit source for system verification. Furthermore, these numbers introduce an identification that will be present through all Functional Analysis and Allocation activities, from the lower levels to the top level.

See Tables 1.4 and 1.5 Requirements Allocation below.


Table 1.3 Gantt Timeline

Timeline (Gantt), 5 years: January 2009 - December 2013

Task     Start Time   Delay from Start   Delay %   Duration    Duration %
Task 1   02/2009      1 month            0.02      12 months   0.2
Task 2   01/2010      12 months          0.2       12 months   0.2
Task 3   12/2010      23 months          0.38      12 months   0.2
Task 4   01/2012      36 months          0.6       12 months   0.2
Task 5   11/2012      46 months          0.77      13 months   0.22

(Delay and duration percentages are fractions of the 60-month project.)

The Gantt timeline is used to give an immediate update on what should have been achieved at any given point in time. This is in accordance with the Event-Based Detailed Schedule Interrelation shown in the System Engineering Master Schedule (SEMS). See Table 1.3 above, and Appendix-C.


Requirements Allocation Sheet

Table 1.4 Requirements Allocation Sheet-1

Function flow diagram No.: 200.1. Facility requirements: Group-3 Nuclear. Equipment identification (nomenclature): Boiler Pressure Channels D, E and F.

Function 200.1 - provide guidance for boiler pressure
  Performance and design requirements: Boiler pressure must be maintained at the initial calibration pressure of 7 MPa. Initial boiler pressure: 70%-75% of the design requirement.
  Analog Input (AI): 16 mA. Computer Input (CI): 75.

Function 200.1.2 - provide guidance for boiler level
  Performance and design requirements: Boiler level must be maintained between 70% and 75% of the design requirement.
  Analog Input (AI): 14.933 mA. Computer Input (CI): 70.

Function 200.1.3 - provide guidance for boiler temperature
  Performance and design requirements: Boiler temperature must be maintained at the calibration temperature of 300 degrees C. The initial boiler temperature is 280 degrees C.
  Analog Input (AI): 14.933 mA. Computer Input (CI): 70.

Function 200.1.4 - provide guidance for combined parameters: (a) boiler level, (b) feed water, (c) steam flow through the summer amplifier
  Performance and design requirements: The boiler summer amplifier output must be maintained at 7.6 mA. The initial calibration will be between 7.5 mA and 7.6 mA.
  Analog Input (AI): 7.6 mA.


Table 1.5 Requirements Allocation Sheet-2

Function flow diagram No.: 200.1. Facility requirements: Coal & Gas Plants. Equipment identification (nomenclature): Boiler Pressure Channels D, E and F.

Function 200.1 - provide guidance for boiler pressure
  Performance and design requirements: Boiler pressure must be maintained at the initial calibration pressure of 7 MPa. Initial boiler pressure: 70%-75% of the design requirement.
  Analog Input (AI): 16 mA. Computer Input (CI): 75.

Function 200.1.2 - provide guidance for boiler level
  Performance and design requirements: Boiler level must be maintained between 70% and 75% of the design requirement.
  Analog Input (AI): 14.933 mA. Computer Input (CI): 70.

Function 200.1.3 - provide guidance for boiler temperature
  Performance and design requirements: Boiler temperature must be maintained at the calibration temperature of 520 degrees C. The initial boiler temperature is 365 degrees C.
  Analog Input (AI): 14.933 mA. Computer Input (CI): 70.

Function 200.1.4 - provide guidance for combined parameters: (a) boiler level, (b) feed water, (c) steam flow through the summer amplifier
  Performance and design requirements: The boiler summer amplifier output must be maintained at 7.6 mA. The initial calibration will be between 7.5 mA and 7.6 mA.
  Analog Input (AI): 7.6 mA.


1.10 Requirement Loop

This loop is used to identify each function for the purpose of traceability. It also drives the iterative process between the requirements analysis and the functional analysis/allocation.

1.11 The Design Synthesis

This design synthesis is used to combine the physical process with the computer process and software. The physical architecture will provide a base structure for the specification baseline operation. See Figure 1.16 below.

Figure 1.16 Design Synthesis

The output will serve several different forms of interface, such as Openness, Productivity, and Collaboration (OPC). This form of connectivity will allow system parameters such as temperature, pressure and flow to be converted to current or voltage and fed directly into Excel or another operating system. This allows one to write programs, control various subsystems, and trend and monitor the complete system using SCADA in conjunction with PLCs and a Human to Machine Interface (HMI/MMI). In addition, the output will be interfaced to accommodate two independent systems, namely the X and Y computers, which will provide the same data processing for reliability and contingency.
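A minimal sketch of the X/Y redundancy idea described above: the same engineering value is processed by two independent paths and cross-checked before it is trusted. The tolerance and the readings are illustrative assumptions, not plant values.

```cpp
#include <cmath>
#include <cstdio>

// Sketch of X/Y redundancy: the same converted engineering value is handed
// to two independent data-processing paths and cross-checked for agreement.
struct Computer { const char* name; double value; };

bool agree(const Computer& x, const Computer& y, double tolerance) {
    return std::fabs(x.value - y.value) <= tolerance;
}

int main() {
    // Hypothetical: both computers derive boiler pressure from the same loop.
    Computer x = {"X", 5.25};   // MPa as processed by computer X
    Computer y = {"Y", 5.27};   // MPa as processed by computer Y
    if (agree(x, y, 0.05))
        std::printf("X/Y agree: %.2f MPa vs %.2f MPa\n", x.value, y.value);
    else
        std::printf("X/Y disagree -- flag for maintenance\n");
}
```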


1.12 Cost & Schedules

System Engineering Staffing Cost by Management and Technical Staff

Table-1.6 Staffing Cost

Personnel Category      Jan 1, 2009   Jan 1, 2010   Jan 1, 2011   Jan 1, 2012   Jan 1, 2013   Total
Management Staff Cost   $325,000      $325,000      $325,000      $325,000      $325,000      $1,625,000
Technical Staff Cost    $275,000      $275,000      $275,000      $275,000      $275,000      $1,375,000
Grand Total Cost        $600,000      $600,000      $600,000      $600,000      $600,000      $3,000,000

Table-1.6 above shows the $3,000,000 cost breakdown of staffing for both the managerial and technical human resources for the project. The cost values are based on benchmarking against other business units and companies in the same industries. The sum of $325,000 per year will allow the project to operate with a group of five management staff, while the sum of $275,000 per year will allow the project to operate with a group of four technically skilled persons.

Several software packages are available to estimate cost and scheduling, such as Microsoft Project and Timeline. However, while the cost of staffing can be predicted quite safely over the five-year period of this project, the same cannot be said about cost components such as installation, removal and material, shown in Table-1.7. The primary source of cost-related data is actual cost experience from past projects. Using the data collected, the component costs were estimated on the basis of previously set benchmarks.

By comparing the release estimate against the definitive estimate, the project was able to establish a variance cap of 30% between the total release estimate and the definitive estimate. The variance cap allows the budget and schedule to be tracked and controlled to prevent cost overruns or stoppages due to unavailable cash flow.
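The cap itself is simple arithmetic: using the totals from Table-1.7 below, (17.04 - 11.9)/17.04 is roughly 30%, right at the cap. A sketch of that check follows; the flagging message is an illustration of the idea, not project procedure.

```cpp
#include <cstdio>

// Variance between release and definitive estimates, expressed as a share of
// the definitive estimate, checked against the project's 30% cap.
bool withinCap(double release, double definitive, double cap) {
    double variance = (definitive - release) / definitive;
    std::printf("variance = %.1f%% (cap %.0f%%)\n", variance * 100, cap * 100);
    return variance <= cap;
}

int main() {
    // Totals from Table-1.7: release 11.9 M$, definitive 17.04 M$.
    if (!withinCap(11.9, 17.04, 0.30))
        std::printf("at/over the cap -- track and control the budget\n");
}
```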


Table-1.7 Five Year Project Expenditure

Cost Component         Release Estimate (M$)   Definitive Estimate (M$)   Variance (M$)
Engineering            1.30                    1.29                       -0.01
Commissioning          0                       1.5                        1.5
Material               3.50                    3.80                       0.3
Install & Removal      2.60                    4.30                       1.7
Indirect Costs         1.5                     2.50                       1.0
Interest & Overheads   1.25                    1.65                       0.4
Contingency            1.75                    2.0                        0.25
Project Total          11.9                    17.04                      5.14

1.13 Project Variance Cost and Explanation of Estimates

As summarized in Table-1.7, the definitive estimate of 17.04 M$ represents an increase of 5.14 M$ over the release estimate. This increase is broken down in Table-1.8, with explanations provided in the following sections.

Table-1.8 A Breakdown of Cost Increase

Cost Item              Cost Increase (M$)   % of Variance
Engineering            -0.01                1% (decrease)
Commissioning          1.5                  Not in the release
Material               0.3                  7%
Install & Removal      1.7                  40%
Indirect Costs         1.0                  40%
Interest & Overheads   0.4                  24.24%
Contingency            0.25                 12.5%
Project Total          5.14                 30%

Explanation of Decrease in Estimate for Engineering

The forecast Engineering cost of 1.29 M$ is 0.01 M$ (about 1%) under the release estimate of 1.30 M$ and is within the estimated range.


Explanation of Increase in Estimate for Commissioning and Training

The 1.5 M$ increase in costs is due to an omission in the release estimate for commissioning, training and station support functions. No separate work order task for commissioning will be issued for this project.

Explanation of Increase in Estimate for Material

The increase of 0.3 M$ is mainly due to the addition of Allen Bradley Programmable Logic Controller (PLC-5) control unit system modules.

Explanation of Increase in Estimate for Installation and Removal

The 1.7 M$ increase is due to additional work on the SCADA system to allow remote access to power plant live trend data.

Explanation of Increase in Estimate for Indirect Costs

The increase of 1.0 M$ is due to the addition of the Turbine Governor Electro module, which will evolve with current and future computer-based interfaces.

Explanation of Increase in Estimate for Interest, Overheads and Contingency

(a) The increase in interest and overheads of 0.4 M$ is due to increased direct and indirect minor scope creep.
(b) The project is carrying a 12.5% contingency on the balance of the work, resulting from the increase of 0.25 M$ in contingency.
(c) By allowing a variance cap of 30%, the project is preventing the use of the contingency funds, assuming every task executes as planned and scheduled.

Cost Variance Summary

The 5.14 million dollar increase is mainly due to added scope for SCADA-based equipment, as well as indirect cost increases and unplanned repairs.


Design Estimate

This design estimate, Table 1.9, uses the matrix below to identify the man hours for the various tasks. The man hours will be assigned a cost value that will be used as an integral part of the total cost value.

Table 1.9 Design Estimate

Task No.   Task Description                       Deliverable / Document   Hours
1.0        Preliminary Engineering
1.1        Pre-design Problem Definition
1.1.1      Scope Development                      Definition               7
1.1.2      "As built"                             Info Package             14
1.1.3      COMS Review                                                     5
1.1.4      MOD Scoping                            Check Sheet              3
1.1.5      Document Scoping                       Checklist                3
1.1.6      Needs Statement                                                 3
1.2        Assessment of Problem Definition                                5
1.2.1      Develop Alternate Solution                                      30
1.2.3      Assessment Report                                               21
1.3        Stakeholder Review                                              7
1.3.1      Finalize COMS                                                   6


Table 1.9 Design Estimate (continued)

Task No.   Task Description                        Hours
1.3.2      Stakeholder Review / Meeting sign-off   7
1.4        Design Requirements                     31
1.5        Preliminary Design Plan                 21
1.6        Preliminary Flow Diagram Mark-ups       7
1.7        Complete Detailed Design Estimate       7
1.8        Review / Issue                          7
2.0        Contract Management                     7
2.1        Technical Specification                 31
2.2        Contract Tender                         7
2.3        Bid Evaluation                          14
2.4        Award Contract                          30
3.0        Detailed Engineering                    31
3.1        Detailed Design Plan                    14
3.2        Detailed Design Requirements            5
3.3        Review/Update MOD Scoping Sheet         N/A
3.4        Review/Update Doc. Scoping Checklist    31
3.5        Communication Review                    5
3.6        Regulatory Approval
3.6.1      Regulatory Approval Needs Assessment    7
3.6.2      Code Classification                     90

Cost Estimate by Work Group

A summary of the costs in dollars by work group includes contingencies, support, overheads, etc. When adding contingency and support dollars, explanations are required for the values used. Large-ticket items such as material may be broken into separate rows, as shown in Table 1.10 below.

Table 1.10 Work Group Cost Estimate

Work Group                            Total Hours   Rate       Cost ($K)
Support                               3000          $75/hour   225
Mechanical                            6000          $60/hour   360
Civil                                 2500          $60/hour   150
Electrical/I&C                        2000          $60/hour   120
Drawing Office                        5000          $50/hour   250
Contract                              4000          $60/hour   240
Materials                                                      5000
Contingency (state contingency rate)                           1000

Total Cost Summary
Staffing Cost        $3,000,000
Components Cost      $17,040,000
Work Group           $7,097,500
Total Project Cost   $27,137,500


1.14 PMTE Concept

Process, Method, Tools and Environment (PMTE) is the approach used in carrying out this project. The PMTE layout shown in Figures 1.17 and 1.18 is used as the primary path for executing a given task. The process starts with management setting the agenda for a given task. The Methods and Tools used are controlled by the Systems Engineer (SE), and the Environment is controlled by management. The PMTE concept also shows the overlap between the Management and Engineering domains. The method employed to execute the various tasks is the QFD process shown in Figure-1.18 below.

Figure-1.17 PMTE Layout

Figure-1.18 The Quality Function Deployment (QFD) Process


PMTE Environment

The environment concept of the PMTE pyramid is embodied in management software known as PASSPORT. This software is used to manage projects and tasks by setting the agenda. For example, under the work management folder, the various tasks for the project are outlined and put in order of priority. Each task will have a procedure that must be carried out as written in the steps. At the end of the work day or shift, a worker from the other shift should be able to continue from the last step completed. In addition, each task that is assessed to use material will show the material request (MR) number and catalogue identification (Cat ID) number. Both MR and Cat ID numbers are stored in the main computer system to allow traceability of parts and equipment. At the end of every shift a detailed work report with the work order number and task number must be written.

1.15 Requirement Analysis

Customer Mission: Continuous system improvement through performance testing and preventive and predictive maintenance over the system life cycle, at minimum operating cost.

Customer Performance Requirements:

1. Less than 5% forced outages (unscheduled power loss) per annum for each operating power plant unit
2. 100% operating capacity for each power plant unit
3. Less than 1% change in steam enthalpy entering the high pressure turbine from the boiler
4. Condensate flow to the heater must maintain temperature within 1 degree of the design requirements
5. Steam quality from the boiler into the high pressure turbine must be greater than or equal to 95%
6. Condensate cooling water returning to the lake shall not be greater than 30 degrees Celsius
7. Boiler feed pump efficiency shall be less than 88%
8. Carbon pollution to the environment shall not exceed the industry standard ( ppm)
9. The regression analysis R-square value should not be less than 98% for the THR test

Assumptions:

1. All control maintenance procedures (CMP), such as continuous, reference and information, are followed as written.
2. All mechanical maintenance procedures (MMP) are followed as written.
3. All operating procedures are carried out in the required steps.
4. There will be no environmental infraction(s) that will put public health and safety at risk.
5. All data and test results shown in the project are related to a specific group of power plants, or should be viewed as concept only.
6. The author assumes no responsibility for data, procedures, and/or results tailored to other projects.

The Measure of Effectiveness (MOE) shown in Table 1.11 is used to measure the relatively important aspects of the system. The measurements are qualitative in nature, and they describe the customer's expectation of the product or system.

Table 1.11 - Measure of Effectiveness (MOE) Matrix for Nuclear Plant-1
Instrumentation Calibration Requirements - Output Matrix for 950 MW Units

Requirements | Description | Test Script | Test Case | Analog I/P (AI) | Computer I/P (CI)
Boiler pressure (BP) should not be > 5.5 MPa | BP should not be > 75% of maximum pressure | If BP > 75%, then display high pressure alarm | Figure 1.1 current loop output (o/p) = 16 mA | 16 mA | 75
BP should not be < 70% | BP should not be < 70% of maximum pressure | If BP < 70%, then display low pressure alarm | Current loop o/p = 14.933 mA | 14.933 mA | 70
Boiler temperature (BT) should not be > 300 °C | BT should not exceed 75% of the maximum temperature | If BT > 75%, then display high temperature alarm | Figure 1.2 current loop o/p = 16 mA | 16 mA | 75
BT should not be < 280 °C | BT should not drop below 70% of the maximum temperature | If BT < 70%, then display low temperature alarm | Figure 1.2 current loop o/p = 14.933 mA | 14.933 mA | 70
Boiler level (BL) should not be > 540" H2O | BL should not exceed 100% of the maximum height | If BL = 105%, then display high level alarm | Figure 1.3 current loop o/p = 16 mA | 16 mA | 75
BL should not be < 516" H2O | BL should not be < 95% of the maximum level | If BL < 95%, then display low level alarm | Figure 1.3 current loop o/p = 14.933 mA | 15.2 mA | 95

Tables 1.11 to 1.13 show the operating parameters and the corresponding AI and CI values for the MOE.

Table 1.12 - Measure of Effectiveness (MOE) Matrix for Nuclear Plant-2
Instrumentation Calibration Requirements - Output Matrix for 450 MW Units

Requirements | Description | Test Script | Test Case | Analog I/P (AI) | Computer I/P (CI)
Boiler pressure (BP) should not be > 5.5 MPa | BP should not be > 75% of maximum pressure | If BP > 75%, then display high pressure alarm | Figure 1.1 current loop output (o/p) = 16 mA | 16 mA | 75
BP should not be < 70% | BP should not be < 70% of maximum pressure | If BP < 70%, then display low pressure alarm | Current loop o/p = 14.933 mA | 14.933 mA | 70
Boiler temperature (BT) should not be > 300 °C | BT should not exceed 75% of the maximum temperature | If BT > 75%, then display high temperature alarm | Figure 1.2 current loop o/p = 16 mA | 16 mA | 75
BT should not be < 280 °C | BT should not drop below 70% of the maximum temperature | If BT < 70%, then display low temperature alarm | Figure 1.2 current loop o/p = 14.933 mA | 14.933 mA | 70
Boiler level (BL) should not be > 270" H2O | BL should not exceed 75% of the maximum height | If BL = 105%, then display high level alarm | Figure 1.3 current loop o/p = 16 mA | 16 mA | 75
BL should not be < 257" H2O | BL should not be < 95% of the maximum level | If BL < 95%, then display low level alarm | Figure 1.3 current loop o/p = 14.933 mA | 15.2 mA | 95

Table 1.13 - Measure of Effectiveness (MOE) Matrix for Coal & Gas Fired Plants
Calibration Requirements Output Matrix

Requirements | Description | Test Script | Test Case | Analog I/P (AI) | Computer I/P (CI)
Boiler pressure (BP) should not be > 14 MPa | BP should not be > 75% of maximum pressure | If BP > 75%, then display high pressure alarm | Figure 1.1 current loop output (o/p) = 16 mA | 16 mA | 75
BP should not be < 70% | BP should not be < 70% of maximum pressure | If BP < 70%, then display low pressure alarm | Current loop o/p = 14.933 mA | 14.933 mA | 70
Boiler temperature (BT) should not be > 520 °C | BT should not exceed 75% of the maximum temperature | If BT > 75%, then display high temperature alarm | Figure 1.2 current loop o/p = 16 mA | 16 mA | 75
BT should not be < 365 °C | BT should not drop below 70% of the maximum temperature | If BT < 70%, then display low temperature alarm | Figure 1.2 current loop o/p = 14.933 mA | 14.933 mA | 70
Boiler level (BL) should not be > 270" H2O | BL should not exceed 75% of the maximum height | If BL > 105%, then display high level alarm | Figure 1.3 current loop o/p = 16 mA | 16 mA | 75
BL should not be < 256" H2O | BL should not be < 70% of the maximum level | If BL < 100%, then display low level alarm | Figure 1.3 current loop o/p = 14.933 mA | 15.2 mA | 95
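The AI column values are current-loop outputs. Assuming a standard 4-20 mA instrumentation loop (the tables do not state the loop range explicitly, so this is an assumption), percent of span is (I - 4)/16 x 100, which makes 16 mA correspond to the 75% alarm point used above. A minimal C sketch of the boiler pressure alarm logic:

#include <stdio.h>

/* Convert a 4-20 mA current-loop signal to percent of span
   (assumed standard loop range; 16 mA -> 75 %). */
static double loop_to_percent(double i_ma)
{
    return (i_ma - 4.0) / 16.0 * 100.0;
}

static void check_boiler_pressure(double i_ma)
{
    double bp = loop_to_percent(i_ma);   /* % of maximum pressure */

    if (bp > 75.0)
        printf("HIGH PRESSURE ALARM (BP = %.1f %%)\n", bp);
    else if (bp < 70.0)
        printf("LOW PRESSURE ALARM (BP = %.1f %%)\n", bp);
    else
        printf("BP normal (%.1f %%)\n", bp);
}

int main(void)
{
    check_boiler_pressure(16.1);   /* just above 16 mA -> high alarm */
    check_boiler_pressure(15.2);   /* exactly 70 % -> normal         */
    check_boiler_pressure(14.9);   /* below 70 % -> low alarm        */
    return 0;
}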

2.0 Boiler Flow Nozzle Calibration/Test

Test Objectives:

The main objectives of the test were as follows:

(a) To assess the efficiency of the boiler prior to the retrofit outage
(b) To determine the performance of the unit turbine cycle and its major components before and after a planned overhaul, by means of test-grade instruments
(c) To identify the potential controllable operating problems that would degrade the Thermal Heat Rate (THR) test
(d) To ensure that the boiler flow nozzle coefficient is in compliance with ASME PTC 19.5
(e) To ensure that the numerical results for the nozzle coefficient do not deviate from the code values by more than the allowable tolerance of ±0.95%
(f) To ensure that the boiler performance meets the manufacturer's design specifications
(g) To identify test results that can be used for predictive maintenance practices, which will extend the life cycle of the system. See Table 2.1.

Table-2.1 Pre-Overhaul Test Schedule

Test No. | Test Date | Time (EST) | Unit Load (MW) | Test Description
Pre-Overhaul | Feb. 2, 2009 | 0945-1213 | ~440 | Preliminary Test; Instrument Checks
1 | Feb. 3, 2009 | 1001-1245 | ~440 | 4 GV-VWO @ Rated Pressure and Temperature
2 | Feb. 4, 2009 | 1025-1245 | ~440 | Repeat Test 1: 4 GV-VWO

2.1 Technical Performance Measure (TPM)

In order to measure the technical performance of the boiler nozzle calibration and test, the mean value of the Reynolds Number from the four test settings (namely A, B, C & D) will be plotted against the ASME PTC 19.5 nozzle coefficient standard. See Tables 2.2 and 2.3. In addition, the same Reynolds Numbers will be plotted against the manufacturer's recommended mean for the nozzle coefficient. The results from both plots will be used as the standard to validate the boiler calibration/test performance.

Table 2.2 Mean Reynolds Numbers vs ASME PTC 19.5 Code

Reynolds Number (x10³) | ASME PTC 19.5 Code Coefficient
516 | 0.9900
767 | 0.9920
504 | 0.990
769 | 0.992

Table 2.3 Mean Reynolds Numbers vs Manufacturer Recommendation

Reynolds Number (x10³) | Manufacturer Recommended Coefficient
516 | 0.9835
767 | 0.9856
504 | 0.9825
769 | 0.9855

The results from the Mean Reynolds Numbers vs ASME PTC 19.5 Code will be analyzed with a regression analysis tool to produce an R-square value of R² = 0.9988. The results from the Mean Reynolds Numbers vs Manufacturer Recommendation, using the same regression tool, should yield an R-square value of R² = 0.9423. Both plots must meet the acceptable minimum standard for the purpose of system validation.
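As an illustration, the quoted R-square values can be reproduced with an ordinary linear least-squares fit of coefficient against mean Reynolds number, using the data of Tables 2.2 and 2.3. The choice of a linear fit is an assumption, and small differences come from rounding of the tabulated data. A minimal C sketch:

#include <stdio.h>
#include <math.h>

/* R-square of a linear least-squares fit y = a + b*x. */
static double r_square(const double *x, const double *y, int n)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0, syy = 0;
    for (int i = 0; i < n; i++) {
        sx += x[i];  sy += y[i];
        sxx += x[i] * x[i];  sxy += x[i] * y[i];  syy += y[i] * y[i];
    }
    double cov = sxy - sx * sy / n;   /* scaled covariance   */
    double vx  = sxx - sx * sx / n;   /* scaled variance of x */
    double vy  = syy - sy * sy / n;   /* scaled variance of y */
    return (cov * cov) / (vx * vy);
}

int main(void)
{
    double re[]   = { 516, 767, 504, 769 };               /* mean Re_D (x10^3)       */
    double code[] = { 0.9900, 0.9920, 0.9900, 0.9920 };   /* ASME code, Table 2.2    */
    double mfr[]  = { 0.9835, 0.9856, 0.9825, 0.9855 };   /* manufacturer, Table 2.3 */

    printf("R^2 vs ASME code:    %.4f\n", r_square(re, code, 4)); /* ~0.9989, vs 0.9988 above */
    printf("R^2 vs manufacturer: %.4f\n", r_square(re, mfr, 4));  /* ~0.9437, vs 0.9423 above */
    return 0;
}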

2.2 Technical Reviews and Audits (TRA)

This project uses TRA to review and verify all activities at various points in the process, to ensure that operational procedures are followed and that deficiencies are identified and corrected before system integration. See Table 2.4. It also looks at processes such as a major system overhaul and the Input and Output (I/O) curves before the task begins. The information collected before the task begins is recorded as "As Found" data, and the data at the end of the task is classified as "As Left" data.

Table 2.4 System Reviews and Audits

Test Type | Manufacture Requirements | ASME Code | Test Parameter | As Found Data (Current System) | % Error
Boiler Gross | 88% eff. | 86% | Temperature in Deg. C | 76% | 12%
Nozzle Calibration Test | 0.9833 | 0.992 | cfs | Not available | N/A
Coefficient of Discharge (K) | K = 1.049 | K = 1.0550 | - | Not available | N/A
Gross Nominal Load (MW) | 500 | N/A | W | 440 | 12%
Thermal Heat Rate | 4000 GJ/h | N/A | Joules | 4480 GJ/h | 12%
Cycle Efficiency | 35 | 33 | - | 30.8 | 12%
Isentropic Efficiency | 80 | 78 | - | 68 | 15%

2.3 Boiler Flow Nozzle Test - Process:

Figure 2.1 Boiler Flow Nozzle Test – Process

Ontario Hydro performed the boiler flow nozzle test in September 1961, as part of a commissioning test scheme, to check the calibration of an 8.735 inch by 5.028 inch flow nozzle. The results of the test were measured against the ASME Code standard. A Kent flow meter, Serial No. 6/2018A/1, was employed as the primary differential pressure measuring instrument during the calibration. An independent performance check was carried out with an independent manometer. The Boiler Flow Nozzle Test - Process is outlined in Figure 2.1 above.

Today, the same test is used in projects to look at the calibration of the boiler feed flow nozzle as it relates to discharge coefficient versus Reynolds Number. The findings are shown in Tables 2.5 to 2.9, while Figures 2.1 to 2.4 show the process concept. Task-1 in the process flow diagram is ASME PTC 19.5 Section-7, while Task-2 is the local test being performed.

2.4 Experimental Arrangement

Figure 2.2 Calibrated Instrumentation to Measure Parameters

The nozzle assembly, 12 ft. 8½ in. long, was installed in the OPG hydraulics laboratory. The water entered the nozzle test section through a 90 degree elbow which contained straightening vanes to minimize the swirl of the secondary flow produced by the elbow.

A 5 ft. 4 in. length of straight pipe was installed at the outlet from the test assembly. A gate valve installed downstream from the nozzle was used to control the flow and ensure that the pipe flowed full.


The differential pressure across the nozzle (for calibration purposes) was measured by an independently connected, well-type, single-scale mercury manometer. This was connected to one set of pressure taps on the assembly. For the first two series of tests (settings A and B) the Kent flow gauge was connected to the other set of taps on the assembly and its performance checked by the independent manometer. For the third and fourth series of tests (settings C and D) an additional manometer was used and connected in parallel with the Kent gauge. These readings, as well, were checked against the independent manometer. The latter arrangement is to be used in the plant, and it was necessary to determine if any peculiarities would result.

A bypass was provided for each manometer so that the zero point could be checked between tests without altering the flow setting. The pressure measuring instruments were placed in the lower laboratory, approximately 20 feet below the flow nozzle assembly. This minimized problems of air leakage into the instruments.

2.5 Testing Procedure:

Two test settings were chosen. One was the highest flow that could be read on the Kent gauge, and the second was an intermediate flow. For each setting, 5 tests were run, each lasting 15 minutes. Thus the Reynolds number, which remained constant during each series of five 15-minute tests, was considered to be the controlled variable for the calibration.

Discharge was measured with the two weighing tanks situated in the lower laboratory.

The tanks, each having a capacity of 15,000 lbs., were used in series. During each 15 minute test, differential head readings were taken independently upon the instruments every 15 seconds. After each test, the instruments were checked for the zero point without altering the discharge setting. Water and air temperatures were recorded for each test. See Table-2.5.


2.6 Results:

Test data, observations and results are presented in Tables 2.5 to 2.9. The calibration coefficient of the nozzle (C) is a function of the Reynolds number and the diameter ratio β (see Table 2.5). It is defined by the following equation:


Q = C F A_d √(2gH)

Where:
Q = the discharge in cfs
C = the nozzle calibration coefficient
F = the velocity of approach factor
A_d = the nozzle area in sq. ft
g = the gravitational constant in ft/sec²
H = the pressure head across the nozzle in feet of fluid flowing

The calibration of the flow nozzle was based on the differential readings from the independent manometer only, and was calculated for each of the five tests of a setting. The best estimate of C for a setting is the mean of the five individual coefficients. The mean values of the coefficients are compared to the ASME code values in Table-2.7 and are found to be well within the allowable tolerance of ±0.95%. It is noted that for only two test points the coefficient was not within the allowable tolerance (see Table-2.7, test points C-2 and D-1). This may have been due to the fact that the flow had not settled down completely and a lingering effect from turbulence still existed.

The differential readings recorded by the Kent gauge and by the manometer connected in parallel with it are compared with the results from the independent manometer in Table-2.8. The absolute and percent deviations are given in Table-2.9. In each case the independent manometer was chosen as the reference standard. The results show no significant difference in the performance of the Kent gauge when another manometer is connected in parallel with it. The absolute error found in the Kent gauge was approximately constant at +0.043 ft of test water.

The pressure differential recorded by the Kent gauge was primarily used for comparison.

The measurement was converted to feet of test fluid. The following relation was used:

(H_w)_t = [(γ_m)_a - (γ_w)_a] / (γ_w)_t x (h_w)_a / 12

Where:
γ_m = specific weight of mercury (lb/ft³)
γ_w = specific weight of water (lb/ft³)
H_w = feet of water
h_w = inches of water (i.e., gauge reading)
( )_a = at ambient temperature
( )_t = at test water temperature
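A minimal C sketch of this conversion; the specific-weight values below are illustrative assumptions for temperatures near 72 °F, not values taken from the report:

#include <stdio.h>

/* Convert a mercury manometer reading (inches, at ambient temperature)
   to feet of test water, per the relation above:
   (Hw)t = ((gamma_m)a - (gamma_w)a) / (gamma_w)t * (hw)a / 12 */
static double manometer_to_feet(double h_in,
                                double gm_amb, double gw_amb, double gw_test)
{
    return (gm_amb - gw_amb) / gw_test * h_in / 12.0;
}

int main(void)
{
    /* Assumed specific weights near 72 deg F: */
    double gamma_m = 845.9;  /* mercury, lb/ft^3      */
    double gamma_w = 62.3;   /* ambient water, lb/ft^3 */
    double gamma_t = 62.3;   /* test water, lb/ft^3    */

    /* A 6.5-inch gauge reading converts to roughly 6.8 ft of test water */
    printf("%.3f ft of test water\n",
           manometer_to_feet(6.5, gamma_m, gamma_w, gamma_t));
    return 0;
}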

2.7 Findings:

The ASME Code values for the nozzle co-efficient may be used confidently for this nozzle assembly. The Kent flow meter may be used satisfactorily for the pressure differential measurement as long as the absolute error in the instrument is taken into account.

Flow Nozzle Particulars:

D = 8.735 inches (inside pipe diameter)
d = 5.028 inches (nozzle diameter)
β = d/D = 0.5756
A_d = 0.13788 sq. ft (nozzle area)
F = 1/√(1 - β⁴) = 1.0599
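Using these particulars, the calibration coefficient for any test follows directly from the defining equation, rearranged as C = Q / (F A_d √(2gH)). A minimal C sketch (g = 32.174 ft/s² assumed; the result agrees with Table-2.6 to within rounding):

#include <stdio.h>
#include <math.h>

#define G_FT_S2 32.174   /* gravitational constant, ft/s^2 (assumed value) */

/* Discharge coefficient from measured flow and differential head:
   C = Q / (F * Ad * sqrt(2*g*H)) */
static double nozzle_coefficient(double q_cfs, double head_ft,
                                 double f, double ad_sqft)
{
    return q_cfs / (f * ad_sqft * sqrt(2.0 * G_FT_S2 * head_ft));
}

int main(void)
{
    const double F  = 1.0599;    /* velocity-of-approach factor */
    const double Ad = 0.13788;   /* nozzle area, sq. ft         */

    /* Setting A, test 3 from Table-2.6: Q = 3.027 cfs, H = 6.886 ft */
    double c = nozzle_coefficient(3.027, 6.886, F, Ad);
    printf("C = %.4f\n", c);     /* ~0.984, close to the tabulated 0.9836 */
    return 0;
}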

Table-2.5 Flow Conditions

Setting | Air Temperature (Deg. F) | Water Temperature (Deg. F)
A | 74 | 72
B | 76 | 73
C | 71 | 72
D | 71 | 72

Table-2.6 Nozzle Coefficient

Setting | Test | Discharge (cfs) | Reynolds # (Re_D) | Independent Manometer Diff. (ft. test water) | Nozzle Coefficient (C)
A | 1 | 3.010 | 513,500 | 6.807 | 0.9836
A | 2 | 3.021 | 515,400 | 6.846 | 0.9824
A | 3 | 3.027 | 516,400 | 6.886 | 0.9836
A | 4 | 3.031 | 517,100 | 6.899 | 0.9839
A | 5 | 3.032 | 517,200 | 6.913 | 0.9832
B | 1 | 4.461 | 765,100 | 14.940 | 0.9840
B | 2 | 4.843 | 768,800 | 15.039 | 0.9856
B | 3 | 4.484 | 769,000 | 15.085 | 0.9843
B | 4 | 4.476 | 767,600 | 15.042 | 0.9840
B | 5 | 4.473 | 767,100 | 15.003 | 0.9846
C | 1 | 2.949 | 503,100 | 6.559 | 0.9817
C | 2 | 2.9848 | 502,900 | 6.574 | 0.9802 *
C | 3 | 2.959 | 502,900 | 6.582 | 0.9835
C | 4 | 2.960 | 504,800 | 6.589 | 0.9830
C | 5 | 2.960 | 505,000 | 6.591 | 0.9830
D | 1 | 4.487 | 765,500 | 15.184 | 0.9817 *
D | 2 | 4.504 | 768,400 | 15.208 | 0.9847
D | 3 | 4.503 | 768,200 | 15.218 | 0.9842
D | 4 | 4.528 | 772,500 | 15.333 | 0.9858
D | 5 | 4.522 | 771,500 | 15.242 | 0.9875

(*) = Coefficient deviation above the allowable tolerance

Table-2.7 Test Setting - Mean Reynolds Number

Setting | Mean Re_D (x10³) | Mean C | C (code) | % Error
A | 516 | 0.9833 | 0.990 | -0.71
B | 767 | 0.9854 | 0.992 | -0.71
C | 504 | 0.9823 | 0.990 | -0.81
D | 769 | 0.9851 | 0.992 | -0.71

Table-2.8 Manometer vs. Kent Gauge

Setting | Test | Independent Manometer (ft. test water) | Kent Gauge (ft. test water) | Manometer in Parallel with Kent Gauge (ft. test water)
A | 1 | 6.807 | 6.869 | Not connected
A | 2 | 6.846 | 6.902 | Not connected
A | 3 | 6.886 | 6.919 | Not connected
A | 4 | 6.899 | 6.931 | Not connected
A | 5 | 6.913 | 6.939 | Not connected
B | 1 | 14.940 | 14.958 | Not connected
B | 2 | 15.039 | 15.112 | Not connected
B | 3 | 15.085 | 15.112 | Not connected
B | 4 | 15.042 | 15.072 | Not connected
B | 5 | 15.003 | 15.068 | Not connected
C | 1 | 6.599 | 6.533 | 6.595
C | 2 | 6.574 | 6.595 | 6.639
C | 3 | 6.582 | 6.519 | 6.640
C | 4 | 6.589 | 6.544 | 6.655
C | 5 | 6.591 | 6.503 | 6.655
D | 1 | 15.184 | 15.125 | 15.218
D | 2 | 15.208 | 15.186 | 15.236
D | 3 | 15.218 | 15.228 | 15.249
D | 4 | 15.333 | 15.272 | 15.322
D | 5 | 15.242 | 15.231 | 15.284

Table-2.9 Manometer & Kent Gauge Error

Setting | Kent Gauge Difference (ft. test water) | Kent Gauge % Error | Manometer in Parallel with Kent Gauge (ft. test water) | Manometer % Error
A | +0.042 | +0.61 | - | -
B | +0.043 | +0.29 | - | -
C | +0.058 | +0.88 | -0.036 | 0.55
D | +0.029 | +0.19 | -0.029 | 0.18

Figure 2.3 Tools-1: Computer Base


2.8 Work Environment

Figure 2.4 Work Environment

Maintainer: (1) Log into PASSPORT and enter the work order instructions:
a. using the task outline, select the required task and then print a work package
b. identify the required tools, maintenance procedure and instructions to complete the task
c. after the task is completed, attach a calibration sticker with the date, the work order number and the name of the maintainer
d. at the end of each shift, complete a work report so that the subsequent shift maintainer can continue the procedure.

The Work Breakdown Structure (WBS) is a formal exposition of the tasks to be performed, as illustrated in Figure-2.5 and Figure-2.12. This process shows how a major task, such as the Boiler Flow Nozzle test and I/O Curve, is broken down into sub-tasks with a structure of accountability and traceability.

Figure 2.5 Work Breakdown Structure-1

Figure 2.6 outlines the Work Order Task Schedule, used to sustain test records and traceability. From the start date the tasks run in sequence: Work Order Task 2.1 Prepare Test Bench; 2.2 Calibrate Test Equipment; 2.3 Kent Gauge Test; 2.4 Manometer Test; 2.5 Manometer & Kent Gauge Test; 2.6 Input Calibration Data into the Computer; followed by graphing and plotting the results and filing the work report.

Figure 2.6 Work Order Task

2.9 Work Order Task

Work order tasks must be done in the sequence in which they are written. For example, task 2.3 shall be done before task 2.4. If two tasks can be done simultaneously, then this should be written in the procedure. All procedures must be treated as continuous; that is, the procedure must be in the maintainer's or operator's possession at all times and the appropriate steps marked.

Figure-2.7 Discharge Coefficient versus Reynolds Number

The discharge co-efficient (K-value) shown in Figure-2.7 shows the recommended calibration/test results to be better than the American Society of Mechanical Engineers (ASME) code standard. The normal life cycle for boilers of this type is around 20 years ± 5 years. Therefore, this unit boiler easily surpasses the retrofit (refurbishment) mean time to failure period of 15 years.

2.10 Gross Boiler Efficiency Test

The objectives for this test are as follows:

(f) To tabulate the average efficiency of the boiler over a given operating period
(g) To tabulate the net efficiency of the boiler at rated load
(h) To establish the availability factor, i.e. the ratio of operating time and reserve time to the calendar year
(i) To check the manoeuvrability of the monobloc (boiler & steam turbine) unit, such as: variation in start-up and shutdown characteristics, operating range, dynamic properties, and the characteristics at sudden surging of the load and load-shedding
(j) To look at factors that are responsible for internal disturbances, such as: flow rate of BFW, temperature of BFW, fuel consumption rate and combustion air flow rate
(k) To look at the factors that cause external disturbances, such as: the steam pressure at the steam main header, turbo-generator load, and degree of opening of the start-up and shut-down devices. See Figure-2.8 below.

Figure-2.8 Gross Boiler Efficiency Test - The Process

Figures 2.9 to 2.11 feature the operational concept of using PMTE, while Figure-2.12 displays Work Breakdown Structure-2.

Figure-2.9 Method for Gross Boiler Efficiency

Figure-2.10 Tools-2

Figure-2.11 Environment

Figure-2.12 Work Breakdown Structure 2


2.11 Boiler Test Curve Data (Method of Calculation, 0.5% un-accounted loss)

Table-2.10 Post-Modification Boiler Test of February 1, 2009 - Group-1 Unit-6
(Gross corrected efficiency with no credits; reference air temperature at inlet 26.60 deg. C; coal blend 70%/30% USLS/PRB)

Test Number | Gross Nominal Load (MW) | Gross Corrected Efficiency (%)
Test-1 | 500 | 87.45
Test-4 | 500 | 87.47
Test-2 | 400 | 87.94
Test-5 | 250 | 88.18
Test-6 | 120 | 86.83

The boiler efficiency equation Eff = A + B*MWg + C*MWg^2, with coefficients A = 84.9, B = 0.01984 and C = -0.00003, is used to calculate the updated curve. The post-overhaul results yield Y = -3E-05x² + 0.0198x + 84.929. In Figure-2.14, nominal gross power plotted against corrected boiler efficiency yields a very good R-square value (R² = 0.9472), which is better than the industry standard set by ASME and the in-house local computerized estimate. In addition, the calibration results for the boiler feed flow nozzle discharge coefficient versus Reynolds Number shown in Figure-2.7 were consistent with a similar test completed on September 28, 1961, with reference to ASME PTC 19.5, Section-3. Table-2.10 shows the loading data.
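A minimal C sketch that evaluates the post-overhaul curve at the Table-2.10 loads (coefficients taken from the fitted equation above); the computed values track the measured efficiencies to within a few tenths of a percent:

#include <stdio.h>

/* Post-overhaul boiler efficiency curve from Section 2.11:
   eff(MWg) = 84.929 + 0.0198*MWg - 3e-5*MWg^2 */
static double boiler_eff(double mw)
{
    return 84.929 + 0.0198 * mw - 3e-5 * mw * mw;
}

int main(void)
{
    const double loads[] = { 500, 400, 250, 120 };  /* loads from Table-2.10 */

    for (int i = 0; i < 4; i++)
        printf("%3.0f MW -> %.2f %%\n", loads[i], boiler_eff(loads[i]));
    /* e.g. 500 MW -> 87.33 %, versus the measured 87.45/87.47 % */
    return 0;
}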

Figure-2.13 is a continuation of the WBS to the point where the work gets done.

Figure-2.13 Work Order Task Schedule (Work-Task-Breakdown)

Figure-2.14 Gross Boiler Efficiency I/O Curve for Group-1 Unit-6 Coal Fire Plant (Group-1 Gross Boiler Efficiency Test: gross corrected efficiency versus Nominal Gross MW, 100-500 MW; fitted curve Y = -3E-05x² + 0.0198x + 84.929, R² = 0.9472)

2.12 Post-Overhaul Findings

The Gross Boiler Efficiency Test results show a 12% increase in megawatt (MW) output after the post-overhaul work was completed in March 2009.

The pre-overhaul de-rating of the unit was caused by a combination of problems associated with the boiler, such as high-temperature oxidation and tube creep-cracking at high temperature. This led to boiler deterioration at elevated temperature.

After the preliminary test/instrumentation checks, full isolation, Thermal Heat Rate (THR), sliding pressure mode test and rated temperature, the unit was able to achieve its maximum capacity rating in MW.

From the new findings, the I/O curve will be updated.

Table-2.11 below displays the TRA "As Found" and "As Left" conditions.

Table 2.11 System Reviews and Audits

Test Type | Manufacture Requirements | ASME Code | Test Parameter | As Left Data (Current System) | % Error
Boiler Gross | 88% eff. | 86% | Temperature in Deg. C | 87.2% | 0.8%
Nozzle Calibration Test | 0.9833 | 0.9920 | cfs | 0.9823 | 0%
Coefficient of Discharge (K) | K = 1.049 | K = 1.0550 | - | K = 1.049 | 0%
Gross Nominal Load (MW) | 500 | N/A | W | 500 | 0%
Thermal Heat Rate | 4000 GJ/h | 4200 | Joules | 3900 GJ/h | 0%
Cycle Efficiency | 35 | 33 | - | 35 | 0%
Isentropic Efficiency | 80 | 78 | - | 79 | 1%

Figure-2.15 Flow Nozzle Loop (Boiler Flow Nozzle Instrument Calibration Loop)

Figure 2.15 shows that the data from the boiler feed-water nozzle will be sent to a computer network. The computer network will provide monitoring, trending and alarm conditions.

Section 3

3.0 Thermal Heat Rate (THR)

THR is commonly used throughout the power industry to set the standards for the thermal performance of turbine systems. Turbine manufacturers generally sell power plant systems with guaranteed heat rate values and acceptance conditions. Using the guaranteed values, turbines should be able to produce maximum power output and operate at optimum efficiency. Any deterioration in the steam process, such as a leak, a broken seal, re-heater pressure drop or unaccounted condensate losses, will change the guaranteed THR value. Therefore, in order to solve turbine heat rate problems, this project will break the system down into stages and blocks to simplify the analysis. In addition, before any major or minor system overhaul, this project will strategically look at a series of pre-overhaul system performances in order to accumulate enough data to systematically analyze the system.

3.1 Test Objective:

To determine the following:

(a) Unit capability under practical operating conditions and the associated THR

(b) Heat-rate for the unit at valve point loads

(c) Performance of feed heating system, extraction line pressure drops and other related turbine cycle data

(d) Benchmark data for station performance monitoring and procedures

(e) Identify test results that can be used for predictive maintenance practices which will extend the lifecycle of the system

Prior to the main turbine acceptance tests, the system was isolated in accordance with the manufacturer's isolation list and make-up system.

3.2 Test Program

Thermal heat rate is the heat input required by a turbine-generator system to produce power at the generator terminals. Heat rate is measured in kilojoules per kilowatt-hour (kJ/kWh). The lower the heat rate, the more efficient the turbine cycle, the more cost-effective it is to operate, and the longer the life cycle expectancy. The turbine cycle efficiency is the reciprocal of the THR: cycle Eff. = 100 x (3600/THR), where 3600 kJ = 1 kWh.

There are several different approaches to measuring and analyzing THR. However, the method used for the Group-1, Group-2 and Group-3 plants in this project is shown in the equation below.

THR was calculated using the following equation:

THR (kJ/kWh) = [W1 (H1 - h1) + Wr (Hr - hr)] / (P + PB)

Where:
W1 = steam flow to the HP Turbine Stop Valve (kg/h)
Wr = steam flow to the re-heater (kg/h)
H1 = enthalpy of steam supplied to the High Pressure (HP) Turbine Stop Valve (kJ/kg)
Hr = enthalpy of steam supplied to the Intermediate Pressure (IP) Turbine before the Interceptor Valve (kJ/kg)
h1 = enthalpy of the feed-water at the HP outlet (kJ/kg)
hr = enthalpy of steam at the HP Turbine exhaust (kJ/kg)
P = net generated output (kW)
PB = equivalent electrical output from the Boiler Feed Pump Turbine (kW)

In addition, other methods are used to measure THR, such as the Input and Output (I/O) method and the condensate flow measurement test. See Appendix-A for more information on how to apply the I/O method.

The I/O method is currently linked to computerized software that is programmed to do live-trend mathematical analysis of the THR through a non-linear equation such as:

Input (GJ/h) = A + B * MWnet + C * MWnet^2

The result of the I/O method is generally analyzed in terms of the R-square value.

3.3 The Process Concepts

Figure 3.1 Process Concepts

The thermal heat rate test can be performed using direct, indirect and alternate test methods. Several other tests, such as the Deaerator Vent Leakage, Back Pressure, Feed-water Terminal Difference, HP Throttle Flow, HP & IP Extraction Pressure and LP Extraction Pressure tests, were carried out to validate the THR test. The reason all of the above tests are carried out is to make sure that the sub-systems are operating at maximum efficiency, since they have a direct impact on THR performance. Furthermore, the series of tests carried out in conjunction with the THR helps to isolate the deficient sub-system(s) responsible for less-than-adequate THR results. In addition, when these tests are performed periodically they provide an invaluable source of data for trending and systems improvement.

The process concepts are summarized in Figures 3.1, 3.2, 3.3, 3.4 and 3.5.

Figure-3.2 Analyzing the Process (THR Test)

Figure-3.3 Analyzing the Method

In Figure-3.9 the I/O calculation will be measured with R² regression statistics. At a given THR, the R² value should be within a given percentage. This comparison is made with reference to the design calculation. Undesirable results will be further analyzed, with the possibility of a planned maintenance solution.

Figure-3.4 THR Loading (Tools)

The method shown in Figure-3.9 will allow the system to stay within the safe THR region. This is done by the DES SCADA computer. The Thermal Heat Rate produced by each test will be measured against the manufacturer's THR, and the results will be captured by the Data Extraction Computer (SCADA) as a live trend. As shown in Table-3.1, each load reading will be taken with the governor valves in a given position.

Work Environment

Figure 3.5 Work Order Task Outline & Instructions

Maintainer: (1) Log into PASSPORT and enter the work order number.

Instructions:
(i) Using the Task Outline, select the required task and then print a work package. Identify the required tools, maintenance procedure and instructions to complete the task.
(2) The Maintainer shall read the procedure and gather all supporting documents to complete the task(s).
(3) The Maintainer shall do a field walk-down to make sure the task can be done in compliance with the work task instructions.
(4) The Maintainer shall request a written pre-job briefing from the supervisor before attempting to do work.
(5) The Maintainer shall be aware of all hazards and constraints associated with the task(s) before proceeding with work activities.

3.4 Work Breakdown Structure (WBS)

Figure 3.6 WBS

Figure-3.6 (WBS) shows the work order tasks for two workgroups: Operator and Mechanical. Each operator task is aligned to a mechanical task. Therefore, both workgroups must coordinate the tasks to satisfy the work order completion. Moreover, every task must be given a written work authorization from the operations workgroup. This workgroup dictates when and how the task(s) should be carried out. Upon completion of any given task, the workgroup must sign off the written work authorization in the presence of the issuing authority. A work report must follow all work order tasks, and this is done using PASSPORT.

3.5 Work Order Task Schedule (to sustain test records and traceability)

Figure 3.7 Work Task Schedule

Figure 3.7 shows the outline of the work task schedule with reference to a given start date. Work order tasks must be done in the sequence in which they are written. For example, task 3.3 will be done before task 3.4. If two tasks can be done simultaneously, then this should be written in the procedure. All procedures must be treated as continuous; that is, the procedure must be in the maintainer's or operator's possession at all times and the appropriate steps marked. If an incoming workgroup is taking over the task(s), they have to sign the old authorization in order to assume ownership. The incoming workgroup must also log in to PASSPORT to see what steps were performed last and the results obtained by the previous workgroup. All irregularities shall be reported to the Systems Engineer immediately before continuing the work. In addition, irregularities shall be filed in PASSPORT. All work tasks are subject to audit by the Technical Review and Audit group without prior notice to the workgroup carrying out the task.

Figure-3.8 shows the power plant (system) concept of description diagram, and Table-3.1 shows the turbine heat rate loading test that was performed on the system.

3.6 Group-3 Power Plants (Nuclear)

Figure-3.8 Power Plant Concept of Description Diagram

THR (kJ/kWh) = [W1 (H1 - h1) + Wr (Hr - hr)] / (P + PB)

= [1250.446 (2790.6 - 750.8) + 59.93 (2790.6 - 1135.0)] * 3600 / 935300

Thermal Heat-Rate = 10199 kJ/kWh

(The steam flows in this substitution are expressed in kg/s, hence the factor of 3600.)

This method of calculating heat rate is common to Group-1 (Coal Fire), Group-2 (Gas Fire) and Group-3 (Nuclear) plants.
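As a cross-check, a minimal C sketch of this calculation. The flows are taken in kg/s as in the substitution above, and PB is taken as zero since the substitution divides by the 935,300 kW output alone; both are assumptions about the worked example:

#include <stdio.h>

/* Thermal heat rate per Section 3.2:
   THR = (W1*(H1 - h1) + Wr*(Hr - hr)) / (P + PB)
   Flows in kg/s (hence the 3600 s/h factor), enthalpies in kJ/kg,
   electrical output in kW, giving kJ/kWh. */
static double thr_kj_per_kwh(double w1, double h1_in, double h1_fw,
                             double wr, double hr_hot, double hr_cold,
                             double p_kw, double pb_kw)
{
    double heat_kj_per_s = w1 * (h1_in - h1_fw) + wr * (hr_hot - hr_cold);
    return heat_kj_per_s * 3600.0 / (p_kw + pb_kw);
}

int main(void)
{
    /* Group-3 example values from Section 3.6 */
    double thr = thr_kj_per_kwh(1250.446, 2790.6, 750.8,
                                59.93, 2790.6, 1135.0,
                                935300.0, 0.0);
    printf("THR = %.0f kJ/kWh\n", thr);                      /* ~10199 */
    printf("Cycle eff. = %.1f %%\n", 100.0 * 3600.0 / thr);  /* ~35 %  */
    return 0;
}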


3.7 Group-1 & Group-2 (Coal Fire & Gas Fire) Power Plants

Turbine Heat Rates and Net Generator Output Test

To effectively measure thermal heat rate, this project uses a common approach that is maintained throughout the power industry and ASME. For example, most tests done on power systems to obtain a manufacturer-guaranteed THR are done with a minimum of 4 tests at different power outputs (MW). Furthermore, at various load capacities the governor settings are adjusted to match the guaranteed THR condition. Table-3.1 gives a breakdown of how the THR test should be carried out for Group-1 and Group-2 plants with load capacities up to 610 MW. Nuclear plants (Group-3) will not use this method, due to the risk of transients and instability in the reactivity process. However, when the system is undergoing the process of coming offline or shutting down for a major overhaul (a time-consuming process), similar heat rate loading test results can be obtained. For example, at 90%, 70%, 50% and 30% nominal load the THR will be taken, and when the system is running up, the same set of readings should be taken to establish a correlation between the pre-overhaul and post-overhaul condition.

Table-3.1 Turbine Heat Rate Loading Test

Test # / Nominal Load (MW) | Control Valve Position | Test Cycle Heat Rate | Test Cycle Net Generator Output (MW)
1 / 555 | 4-VWO | 7704.3 | 553.300
2 / 501 | 4 Valves @ 30% Open | 7710.9 | 480.590
3 / 482 | 3-Valve Open | 7676.2 | 480.590
4 / 361 | 2-Valve Open | 7665 | 360.730
5 / 502 | 4-VWO | 7703.9 | 501.170
6 / 558 | 4-VWO | 7679.3 | 557.040

(4-VWO = 4 Valves Wide Open)

This ASME PTC-6S test is used to monitor performance using the feed-water heaters and cold reheat points. The data gathered will be used to determine the kilowatt capacity, the turbine cycle heat rate, and the high pressure (HP) and low pressure (LP) turbine efficiency. The instruments used will be calibrated with high precision, resolution and repeatability. Steam and water leakage should be zero in order to optimize the output capacity in kilowatts.

Steam and water leakage can cause significant heat rate loss that drastically affects the results. The output capacity test requires the flow measuring instruments to be able to repeat the performance over several steps. If repeatable measurement results are inadequate, then measurements should be carried out periodically with the turbine control valves wide open (VWO). However, one can customize the test to get the data of interest, as in test-2 at 501 MW, where all four valves were set at 30% open to yield a heat rate output of 7711 Btu/kWh.

3.8 Variable Back Pressure Tests

Two Variable Back Pressure tests were carried out using test instrumentation for all measurements. The duration of each test was ½ hour. The absolute condenser pressures, averaged over the test periods, were 2.2-3.0 in. Hg, with corresponding loads (corrected for variations in main steam and reheat temperatures) of 492.5 and 482 MW respectively. Both tests were carried out with the main steam pressure at 2061 psia before the Turbine Stop Valves and with all control valves wide open. This test is used as a source of verification for the entire system. The system under test will be analyzed against the design calculated values. It is the single most inclusive test, in that it effectively looks at the system extraction, feed-water and THR in a single test.

Table-3.2 shows the raw data for the back pressure test, which includes the main steam flow for test-1A (3,251,777 lb/hr) and test-2B (3,254,123 lb/hr). This test was used to support the thermal heat rate loading at the various points shown in Table-3.1. In addition, the calculated heat rate was compared against the tested heat rate to establish a correlation between the two findings.

3.9 Summary of Readings for Turbine Variable Back-Pressure Verification Test

Table-3.2 Turbine Back Pressure Test

Item Parameters Back

Net Load

Pressure

Test-1A

491.710 MW

Power Factor 0.988

Back

Pressure

Test-2B

480.670

0.987

Test-3

VWO

501.117

0.987

Deg.F 993.833 991 990.8 Main Steam

Temperature

Re-heat

Steam

Temperature

Main Steam

Pressure

Deg.F

PSI

992.77

2062

99.85

2062

997.9

2085.4

Back

Pressure

Main Steam

Flow

Hg lb/hr

Group-1 Coal Fire Plant

2.21

3,251,777

3.09

3,254,123

1.34

3,291,220

Steam Flow Verification by Calculation

Since no flow measurements were taken during these tests, the throttle steam flow was calculated using the following relationship:

W = K1 x P x √(1/T)

Where K1 = a constant (for valve-wide-open tests only)
W = steam flow to the Turbine Throttle Valves (lb/hr)
P = pressure before the Turbine Stop Valves (psia)
T = absolute temperature of the steam before the Turbine Stop Valves (deg. F + 460)
R = Universal Gas Constant

K1 = W / (P x √(1/T)); if K0 includes the gas constant R, then K1 = K0/√R.

At valve wide open (Test-3):

K1 = 3,291,220 / (2085.4 x √(1/(990.8 + 460))) = 60,124.96

Test #1A:
W_A = K1 x P_A x √(1/T_A) = 60,124.96 x 2061.9 x √(1/(993.8 + 460)) = 3,251,777 lb/hr

Test #2B:
W_B = K1 x P_B x √(1/T_B) = 60,124.96 x 2061.9 x √(1/(990.8 + 460)) = 3,254,132 lb/hr
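A small C sketch of this verification, calibrating K1 at valve wide open and predicting the two back-pressure test flows; the results agree with the figures above to within the rounding of K1:

#include <stdio.h>
#include <math.h>

/* Throttle steam flow relation from Section 3.9:
   W = K1 * P * sqrt(1/T), with T in deg. Rankine (deg. F + 460). */
static double steam_flow(double k1, double p_psia, double t_degF)
{
    return k1 * p_psia * sqrt(1.0 / (t_degF + 460.0));
}

int main(void)
{
    /* Calibrate K1 at valve wide open (Test-3): W = 3,291,220 lb/hr */
    double k1 = 3291220.0 / (2085.4 * sqrt(1.0 / (990.8 + 460.0)));
    printf("K1 = %.2f\n", k1);   /* ~60,113; the report quotes 60,124.96 */

    /* Back-pressure tests at 2061.9 psia */
    printf("Test 1A: %.0f lb/hr\n", steam_flow(k1, 2061.9, 993.8)); /* ~3,251,000 */
    printf("Test 2B: %.0f lb/hr\n", steam_flow(k1, 2061.9, 990.8)); /* ~3,254,000 */
    return 0;
}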

Figure-3.9 shows the Input/Output (I/O) curve for the heat rate test. The system has six extraction lines, including the high pressure (HP) exhaust. Figure 3.10 shows the HP exhaust, extraction 6 and extraction 5. Figure-3.11 shows extraction 4, extraction 3 and extraction 2. All the extractions are shown with reference to the turbine first stage pressure. The graph results show the proportional decrease in pressure from the high to the low extractions.

Figure-3.9 The System THR I/O Curve (Group-1 Plant Thermal Heat Rate Test using the I/O Method: heat input versus Test Nominal Load MW; fitted curve Y = 0.0039x² + 9.5028x + 399.31, R² = 0.991)

Figure 3.10 Extractions HP Exhaust, 5 & 6 (Turbine Acceptance Tests: extraction pressure versus 1st stage pressure)

Figure-3.11 Extractions 2, 3 & 4 (Turbine Acceptance Tests: extraction pressure versus 1st stage pressure)

3.10 The THR Data Extraction System (DES) Flow Diagram

Figure- 3.12 Thermal Heat Rate Data Extraction System

Figure-3.12 shows a block diagram with the calculating points for the thermal heat rate data. The extraction data was converted from engineering units to computer source code for the purpose of system validation and system integration.

The Thermal Heat Rate I/O curve did not match the expected results. However, other tests, such as the THR calculation and the variable back pressure tests, proved that the measured values are within the design specification range. The coefficients of the I/O equation will be revisited, and the required adjustments will be made through a proprietary software program.

4.0 Validating the System

The system as it relates to this project, and the tests used to validate the system requirements, are shown in Figure 4.1 below. This computerized validation will ensure that the requirements are consistent and complete enough to meet the system's high-level functional requirements. Although validation was completed at the time of commissioning, in order to maximize system performance this project will revalidate the system with reference to the manufacturer's design validation. Furthermore, allowances will be given to match the service and operational life cycle of the system, since most of the plants are fast approaching the end of their life cycle. Basing the validation on the design requirements allows for ongoing traceability of technical requirements. The GBET is done at the boiler; the THR test will be done using the parameters from positions 1, 2 & 3. The Boiler Feed-Water Flow Nozzle test will be at points 1 & 6. Data extracted from the system is fed into a network of computers to carry out the required calculations and data processing to validate the system. The interface between the computers and the extracted data will be done through software programs such as Microsoft C/C++ and Microsoft Visual Basic (VB).

Figure-4.1 Data Extraction for System Under Test


4.1 Validation Test - Using Computer Network

Figure-4.2 Hardware Connections

4.2 TEST PLAN

The testing consists of test cases that are identified in this document. These tests are:

 performed manually, with observations being recorded by the tester, or
 performed under the control of a test script, with results being recorded into log files.

The observations and the log files from each test case are compared against the expected, predefined output. If the actual output from the system under test matches the expected output within stated tolerances, then the system under test is deemed to have passed the test case. If the output does not match the expected output, a failure must be recorded.

The tester may note unusual (not anticipated) behaviour during the test or during the analysis of the test results. Further analysis or testing must be done to determine if the behaviour is acceptable. If the tester is unable to confirm acceptability, or if additional testing is required, the behaviour shall be treated as a failure, as described in the following section.

Causes for Test Case Failures

When a test case fails, the reason may be one of the following:

 system under test did not respond as required,
 expected results were not specified correctly,
 test case procedure was incomplete or incorrectly specified,
 test script or test tool had an error.

The reason for the failure shall be determined and the appropriate course of action taken as outlined in the following sections. Each failure shall be reported.

Failure of System under Test

All failures of the system under test must be reported promptly.

If a modification to the hardware and/or software is made to correct a problem, the test shall be repeated using the modified system, and a new determination of pass/fail for the test case shall be made. Other test cases that may have been affected by the correction will be repeated to reassess the correctness of function and performance.

If no change was made to correct the failure, a reason for not making the change must be documented in the test report. The reason:

 could result in a change to the requirements (to be reissued after testing), or
 could be permitted by some documented, mitigating situations.

Error in Expected Results

If the expected results were incorrect, an analysis justifying any change of the expected result shall be made and documented. A new determination of pass/fail shall then be made and recorded. The test is normally not repeated. The test procedure and test scripts shall be corrected as required and reissued after all corrections have been made.


Failure due to Test Script or Procedure

If the test failure was due to an error in the test procedure, a test script or test tool, the appropriate correction shall be made and the test case shall be repeated as corrected and a new pass/fail determination shall be made. The test procedure, test scripts and test tools shall be reissued after all corrections have been made. Repeating the test also confirms the correction was properly made.

Recording of Failures in the Test Report

The test report shall record all failures, the reason for the failure, and the action taken in response to the failure report as described in preceding sections.

The new revisions of all changed software, test procedures, and test scripts shall be recorded in the test report. As well, the results of all retesting shall be recorded.

Subsequent failures during retesting shall be treated in the manner described above with the results of each retest being fully documented.


Notes to Testers

After each test is completed, save the files that are relevant to the analysis of the test.

This must be done to preserve the output of each test before a subsequent test is run that could modify or overwrite those files.

Test deferred to Subsystem Testing

Some of the abnormal conditions that may occur are tested during system testing rather than validation testing. These include:

Data packet content errors

Configuration file content errors

Long-term issues are tested during the subsystem testing and include the following:

Directory and file creation

Year end roll over and leap year.

4.3 Start-up Application Software in Normal Environment

Test Setup

Before doing the test, be sure that the computer has been properly shut off and that the mouse has been disconnected (for this test only).

Test Procedures:

1. Turn on the computer. Verify that there is a choice of entering the operating system or entering the Gateway application software.

2. Do not choose either option. Verify that the program runs after a short delay and that the version of the software is shown, as well as the unit number and DCC-X or DCC-Y.

3. Send data from the DCC to the Gateway Computer. Confirm that no errors exist that relate to missing hardware or computers. That is, the DCC, the RTU, DES SCADA, and the archive computer are all available and data transfer errors are not being reported. Confirm that the current date and time are being updated on the display at least once every 5 seconds.

4. Terminate the Gateway application software using the keyboard. Confirm that the program exits to the operating system.

5. Examine the log file messages. Confirm that all messages are clear and justified. Confirm that the reason for shutdown is correct.

6. Examine the setup in the Control Panel application. Confirm the monitor is capable of 800 x 600 pixels and that the monitor is currently using that setting.

4.4 Start-up Test in Abnormal Configurations

This test concentrates on start-up issues that do not relate to configuration files.

The DCC-X Gateway Computer shall receive data packets from DCC-X only, at 5-second intervals. The DCC-Y Gateway Computer shall receive data packets from DCC-Y only, at 5-second intervals.

Failure of the Gateway Computer software shall not affect the functionality of the network.

The Gateway Computer shall start automatically, without operator intervention, after a cold or warm reboot. There should be no need to enter a user ID or password to start the operating system.

4.5 Validation Test Procedures

Prior to this test, ensure that the Gateway Computer starts in a normal configuration. This test performs a start-up with various components missing or disabled to confirm that error detection and recovery are possible. Configuration abnormalities to be tested are:

• No LAN connection (with recovery of data after the remote disk is accessible),
• No DES SCADA connection,
• No RTU connection,
• No DCC connection.

Test Procedure

1. Stop the Gateway Computer.
2. Disconnect the LAN link to the remote archive computer or turn it off.
3. Disconnect the DES SCADA serial link.
4. Disconnect the RTU or turn it off.
5. Stop the DCC program.
6. Start the Gateway Computer and choose to start the application software. Confirm that the error messages and logs indicate:
• No LAN connection,
• No DCC packets.
• All software start-ups and controlled shutdowns shall be recorded in the event log.
7. Start the DCC program and send data packets to the Gateway Computer. Confirm that packets are received.
8. Reconnect the RTU. Confirm (later) that data is sent to the RTU after the reconnection.
9. Reconnect the DES SCADA serial link. Confirm that data is being sent to DES SCADA.
10. Reconnect the LAN link to the remote archive computer. Confirm that the link is recognized and that all DCC data packets are copied from the local disk to the remote archive disk.
11. Turn off the power to the Gateway Computer and then turn it back on (normal configuration).

4.6 Abnormal Conditions - Disk Full

The disk full conditions are tested here while the application software is running. This includes both the local disk and the remote archive disk.

Test Procedure

1. Record the available space on the local drive of the Gateway Computer.

Confirm that there is capacity on the local drive to store data for ten days. Each ten- minute file is approximately 1 MB in size. This is 6 MB per hour for 240 hours, which is 1.44 GB for ten days.

2. Start the application software and continuously send data packets from the DCC

Simulation Computer to the Gateway Computer.

Confirm data transfers are occurring.

3. On the remote archive computer, create a temporary directory and copy large files into it so as to greatly reduce the space available on the disk of that computer.

Confirm that appropriate error messages are generated, indicating low disk space on the remote archive disk.


4. Continue to copy files into the temporary directory so as to deplete all space on the disk drive.

Confirm that the application software produces appropriate error messages.

Record the names of the most recently archived files for later comparison. Newly received data must be recorded on the local drive of the Gateway Computer as long as space remains.

5. Wait for one hour and then delete the temporary directory with all the copied files on the remote archive disk.

Confirm that the archive data, not previously written to the remote archive disk, gets copied to the correct archive locations. Examine these files later to confirm that neither data packets nor log file data were omitted from the archives.

6. Create a temporary directory on the Gateway Computer local drive and copy large files into it so as to greatly reduce the available space. Confirm that the application software generates appropriate error messages, indicating low disk space on the local disk.

7. Continue to copy files into the temporary directory so as to deplete all space on the local disk drive.

Confirm that the application software produces appropriate error messages.

Record the names of the most recently archived files for later comparison. It is likely that the application will fail (or intentionally exit) due to the low disk space.

8. Delete the temporary directory with all the copied files. Restart the application software.

Confirm that the application software runs, sending outputs to all the required locations (archives, SCADA, RTU).

9. Stop the application software via the keyboard. Save all log files.

Examine the log files. Ensure that all logged messages are consistent with the events that occurred.


4.7 Abnormal Conditions - Watchdog Failure

Test Setup

In the normal test configuration, the watchdog card is enabled and controlled by the

Gateway software. This test uses the normal test configuration, but will disable the watchdog output by means of the software and confirm that the software responds correctly.

Prepare a modified version of the software that sets up the watchdog, but does not do the periodic update, because this will cause the watchdog to time out.

The purpose of the first step is to confirm that the modified software runs when the computer does not reboot. It does not have to be repeated in subsequent reruns of the test, provided that the software has not been recompiled.

Test Procedure

1. Load the modified version of the software on the Gateway Computer. Power off the computer and remove the watchdog output signal that causes the computer to reset. Restart the computer and the Gateway software.

Confirm that the program continues to run. (The watchdog output has been disconnected and is unable to restart the computer.)

2. Power off the computer and connect the watchdog output signal that causes the computer to reset. Restart the computer and the Gateway Software, and

Confirm that the output from the watchdog card causes the computer to restart.

3. Wait for the automatic restarting of the program.

Confirm that the Gateway software runs after the reset and that it shuts down again within 30 seconds after the restart, because of no watchdog update.

Confirm that the number of automatic restarts is limited by the software and that a message is logged to indicate that too many restarts have occurred within a short period of time. Confirm that the program exits with the watchdog disabled, allowing the computer to continue running under control of the operating system.


4. Save all log files and archive files - both local and remote.

Confirm that all log files and all archive files that were written or updated during this test are all readable (uncorrupted).

4.8 Performance Timing Tests

This procedure tests that the Gateway Computer can accept data from the DCC every five seconds and retransmit the required portions of it to the RTU and to DES SCADA within a required time. The test sends packets to the Gateway Computer under normal conditions and then introduces upsets (such as loss of the network link) and expects recovery from the upsets to occur during the test.

The log file will be examined to determine the elapsed time from the receipt of a DCC data packet until the expected output occurs. The Gateway software has been programmed to record events to the nearest second; however, this test requires more precise measurements.

Test Setup

Ensure the following hardware is properly connected: the Gateway Computer, the DES SCADA Computer, the remote archive computer on the LAN, the DCC Simulation Computer and the Console Terminal Computer.

Validation Test Procedures

Prior to the test, create a performance test directory (C:\PERFORM). Copy all the files from the normal test directory to C:\PERFORM.

In order to put load on the LAN and the Remote Archive Computer, simulator versions of the Gateway software can be run on other computers on the LAN. This is the build of the software that is used in the subsystem testing. Each instance should be set up to allow archive files to be written to the Remote Archive Computer at the same time as this test is running. Confirm there is no conflict in Gateway assignment numbers.


Test Procedure

1. Run all the simulator versions of the software on their respective computers.

Confirm all are storing data onto the Remote Archive Computer.

2. Delete all Gateway Computer log files and archived files from the local drive and from the remote drive.

3. Start all computers and run their required application programs. The Gateway software must be the modified version in C:\PERFORM.

4. Send data from the DCC Simulation Computer to the Gateway Computer (one packet every five seconds).

Confirm that data files are being written to the remote archive drive.

5. After twenty minutes of operation, power down the remote archive computer.

6. After ten minutes more, disconnect the link from the Gateway Computer to the SMP Gateway.

7. Disconnect the DES SCADA serial link.

8. After two minutes, reconnect the DES SCADA serial link.

9. Commence sending data from the DES SCADA Computer to the Gateway Computer. This data must be ignored, while all required data from the Gateway Computer must be received by the DES SCADA Computer.

10. After it has been off for at least three hours, turn on the remote archive computer.

Confirm that the data packet files are updated correctly on the remote archive drive. (This may take several hours to begin.)

11. When the update of all recovered files is complete, the test can be terminated. Stop all application programs and test tool programs.

12. Save all log files and archive files. Confirm that all data (both new data and recovered data) have been transferred correctly to the remote archive drive.

Confirm that all local drive files are correct.


13. Confirm that while data is available, the DES SCADA computer receives all packets correctly, with packet numbers correctly identified. In three hours, 2160 packets will have been sent from the DCC to the Gateway Computer, resulting in two wrap-arounds of the 0-1023 packet sequence number. There will be a skip in the number sequence for the period while the DES SCADA computer was not available. Confirm that all induced failures and abnormal events are correctly recorded in the log files.

14. Using the Gateway log file, record the times taken to write data to DES SCADA and to the RTU after it receives data from the DCC.

Confirm that the time from receiving a DCC packet until the log of transmission to DES SCADA is no longer than 3 seconds.

Confirm that the time from receiving a DCC packet until the log of transmission to the RTU is no longer than 2.75 seconds.

Confirm that the performance requirements from the SRS have been met.

Record the maximum time for each event type and the confidence level that the limit was not exceeded.

15. All data points to be transferred must be previously converted to engineering units.

16. All data points transmitted to DES SCADA must come from the same DCC data packet.


4.9 Export Data Requirements (DES SCADA and RTU)

After receiving each complete DCC data packet, the Gateway Computer shall send the packet to the DES SCADA computer at a baud rate of up to 38400 bit/s.

The Gateway Computer data packets shall have a sequence number in the interval 0-1023 that is incremented by one for each packet.

All data points to be transferred by the Gateway Computer to DES SCADA shall be previously converted to engineering units in a format that is readily usable by the DES computer (16-bit scaled value & 16-bit scale factor).

All data points transmitted to the DES SCADA Computer shall come from the same DCC data packet.

The Gateway Computer shall transmit to the DES SCADA Computer using the same DCC data packet.

The Gateway Computer shall transmit only to the DES SCADA Computer and shall not attempt to receive data from the DES SCADA Computer.

Failure of the performance of the DES SCADA shall not affect the functions of either the DCC or the Gateway Computer.

The watchdog timer shall be enabled on Gateway software start-up and disabled on Gateway shutdown. A mechanism shall be developed in the software to prevent the possibility of continuous reboots if the software terminates abnormally.

The transfer to the RTU shall be complete within 2.75 seconds after the DCC packet has been received and verified.
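Two of the rules above, the 0-1023 sequence number and the engineering-unit encoding as a 16-bit scaled value with a 16-bit scale factor, can be illustrated with a short C sketch. The struct layout, the function names, and the scale convention are assumptions made for illustration; only the numeric rules are taken from the requirements.

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed wire layout: value scaled into 16 bits plus a 16-bit scale factor. */
    struct eng_point {
        uint16_t value;   /* engineering value x scale */
        uint16_t scale;   /* e.g. 10 means the value is in tenths of a unit */
    };

    /* Sequence number in the interval 0-1023, incremented once per packet. */
    static uint16_t next_sequence(uint16_t seq) { return (uint16_t)((seq + 1) & 0x03FF); }

    /* Encode an engineering value (e.g. kg/s) with an assumed fixed scale. */
    static struct eng_point encode(double eng_value, uint16_t scale)
    {
        struct eng_point p;
        p.scale = scale;
        p.value = (uint16_t)(eng_value * scale + 0.5);
        return p;
    }

    int main(void)
    {
        uint16_t seq = 1022;
        for (int i = 0; i < 4; i++) {        /* shows the wrap from 1023 back to 0 */
            printf("seq = %u\n", seq);
            seq = next_sequence(seq);
        }
        struct eng_point p = encode(625.233, 10);  /* half-range flow, in tenths */
        printf("value = %u, scale = %u\n", p.value, p.scale);
        return 0;
    }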


Table-4.1 Validation Requirements
(Columns: Requirement # | Section in this document | Requirement/Notes)

System Functional Requirements and Bases: Gateway Computer Functional Requirements

Data Collection Functional Requirements

R2-1000 (4.4) The DCC-X Gateway Computer shall receive data packets from DCC-X only, at 5-second intervals. The DCC-Y Gateway Computer shall receive data packets from DCC-Y only, at 5-second intervals.

R2-1001 (STP) The Gateway Computer shall only receive data from the DCC and shall not attempt to send data to the DCC.

R2-1002 (4.4) Failure of the Gateway Computer software shall not affect the functionality of the network.

R2-1003 (STP) The Gateway Computer shall validate the integrity of each data packet received from the DCC using the checksum value incorporated in the DCC data packet.

R2-1004 (STP) The Gateway Computer software shall validate the integrity of each data packet by examining the order of the analog input data blocks and their associated addresses.

R2-1005 (STP) The Gateway Computer software shall discard any data packet that has failed the integrity check.

Data Archiving Functional Requirements

R2-1006 (STP) The Gateway Computer software shall save a snapshot of the most recent data packet on the network drive. In addition, the file shall be overwritten with every new data packet.

R2-1007 (STP) After receiving each data packet, if the LAN server is available, the Gateway software shall insert the DCC data packet information into the appropriate location of the appropriate "ten-minute" data file on the network drive.

R2-1008 (STP) The Gateway Computer software shall determine the appropriate ten-minute file, and the appropriate location in this file, to store data from a particular DCC data packet, based upon the DCC time stamp included in that packet.

R2-1009 (STP) The algorithm for determining the ten-minute file and slot shall be capable of handling abnormal transmissions from the DCC without corrupting data files or recording misleading information in them.

R2-1010 (STP) The Gateway Computer software shall be capable of handling a directory structure that (a) differentiates between directories allocated to Gateway-X and Gateway-Y.

Export Data to Data Extraction System (DES) SCADA Computer

R2-1011 (4.9) After receiving each complete DCC data packet, the Gateway Computer shall send the packet to the DES SCADA computer at a baud rate of up to 38400 bit/s.

R2-1012 (4.9) The Gateway Computer data packets shall have a sequence number in the interval 0-1023 that is incremented by one for each packet.

R2-1013 (4.9) All data points to be transferred by the Gateway Computer to DES SCADA shall be previously converted to engineering units in a format that is readily usable by the DES computer (16-bit scaled value & 16-bit scale factor).

R2-1014 (4.9) All data points transmitted to the DES SCADA Computer shall come from the same DCC data packet.

R2-1015 (4.9) The Gateway Computer shall transmit to the DES SCADA Computer using the same DCC data packet.

R2-1016 (4.9) The Gateway Computer shall transmit only to the DES SCADA Computer and shall not attempt to receive data from the DES SCADA Computer.

R2-1017 (4.9) Failure of the performance of the DES SCADA shall not affect the functionality of either the DCC or the Gateway Computer.

Start-up, Shutdown, and Watchdog Functions

R2-1018 (4.4) The Gateway Computer shall start automatically, without operator intervention, after a cold or warm reboot. There shall be no need to enter a user ID or password to start the operating system.

R2-1019 (4.5) All software start-ups and controlled shutdowns shall be recorded in the event log.

R2-1020 (STP) The software shall periodically access the watchdog with a period of one half the timeout period.

R2-1021 (4.9) The watchdog timer shall be enabled on Gateway software start-up and disabled on Gateway software shutdown. A mechanism shall be developed in the software to prevent the possibility of continuous reboots if the software terminates abnormally.

R2-1022 (4.9) The transfer to the RTU shall be complete within 2.75 seconds after the DCC packet has been received and verified.


The Spanning Tree Protocol (STP) noted in Table-4.1 uses the spanning tree algorithm to provide a hierarchical tree that spans the entire network, switches included. This protocol allows only one path to be active at any given time; STP is part of the IEEE 802.1 family of standards. The Remote Terminal Unit (RTU) in the hardware connection shown in Figure-4.2 is a device that collects data acquisition information and sends it to the main computer system. The Symmetric Multiprocessing (SMP) computer mentioned here provides fast performance by making multiple CPUs available to complete individual processes simultaneously.

The Gateway computer uses protocol conversion to connect dissimilar communications systems. The gateway provides the translation from one set of protocols to another, such as from a local area network to a mainframe. An interface provides the boundary between two systems or devices; it also provides the logical connection between different transmission systems or equipment. See Figure-5.1.


5.0 Integrating the System

Figure-5.1 Computerized Interface

Figure-5.1 shows a summary of a computerized integrated system with test points.

In integrating the system, a Software Engineering Model was developed using a hybrid of a prototype and an Iterative-Waterfall model, as illustrated in Figure-5.2. The prototype phase of the model allows full software prototypes to be created and refined to meet the needs of the various stakeholders, such as system maintainers and operators.

The prototype also allows unplanned technical issues to be addressed and the requirements of the system to be redefined. Once the prototype phase is considered complete, the Iterative-Waterfall phase of the Software Engineering process follows a top-down design approach with feedback loops to correct any defects that may be introduced at any step of the process. When defects are detected in the Iterative-Waterfall phase, the earliest process affected is addressed first, followed by each subsequent process, until the defect has been eliminated. No process is considered complete until its predecessor process is completed and it meets the software system configuration control.

In addition, any document considered to be an input to the software engineering process shall be placed under configuration control as per the System Engineering Management Plan.


5.1 Software Engineering Project Plan

Hybrid Prototype & Iterative-Waterfall Model

Figure-5.2 Hybrid Prototype

The objective of using a prototype system is to avoid the risk associated with using the software interface directly in the application. If the risk associated with integration is not identified during the development of the functional baseline, the probability of serious problems in the product baseline becomes very high. By using this prototype, the project was able to weigh the standard risk responses: avoiding, transferring, accepting, or controlling the risk.

The software design and coding were developed to examine the thermodynamics of the power plant system, such as temperature, steam flow, enthalpy, pressure, heat input in British Thermal Units (BTU), and mechanical and electrical parameters.


By using the software engineering code to extract data from the power plant system, the project was able to interface the major components in the system locally and remotely using SCADA and thus improve the system.

5.2 Software Engineering Process

Table-5.2 shows the list of processes and all of the relevant software engineering tasks. In this table:

(a) A draft document is considered suitable as an input for starting a process, but is not likely to be adequate to complete the process.

(b) The SPG is an input to all development, verification, and validation processes.

(c) Items 9 through 18 are related to custom software. The two types are software written in C and in Visual Basic. For documents pertaining to the custom C software, the suffix "-C" is appended to the end of the document acronym; likewise, for the custom VB software, the suffix "-VB" is used.

(d) Item 19 is related to configurable software and is described in the CSD.

(e) Item 14 has the following exceptions to the design standard:

(i) Defining the interface using the four-variable model shall be optional. If the four-variable model is not used, the interface specification shall still meet the following requirements: the logical inputs to and logical outputs from the functional system shall be described, and the characteristics of the logical and physical inputs and outputs, addressing such issues as types, formats, units and valid ranges, shall be described.

(ii) Cross-reference to the CSD shall be optional. If the cross-reference to the CSD is not included in the SRS, it shall be included in the RRR.

(f) Items 11 and 16 have the following exceptions to the Design Standard.


Table-5.1 Acronyms (Software Development)

Acronym   Definition
CMS       Configuration Management System
CS        Configuration Software
CSD       Computer System Design
CSDRR     Computer System Design Requirements
CSR       Computer System Requirements
DCN       Design Change Notice
DES       Data Extraction System
DMACS     Distributed Manufacturing Automation and Control Software
DRR       Data Review Report
DSSM      DES SCADA Monitor
FCN       Field Change Notice
FIX       Fully Integrated Control System
MDR       Modification Design Notice
RRR       Requirement Review Report
SCADA     Supervisory Control and Data Acquisition
SCR       Software Categorization Report
SDD       Software Design Description
SDP       Software Development Plan
SIT       System Integration Testing
SPG       Standards and Procedures Guidebook
SRN       Software Release Notes
SRS       Software Requirements Specification
STP       Sub-system Test Plan
STR       Sub-system Test Report
TP        Test Plan
TR        Test Report
VB        Visual Basic (Microsoft)
VT        Validation Test


Table-5.2 Software Engineering Process Plan
(Columns: # | Process | Inputs | Outputs | Procedure/Guidelines | Deliverable)

01 | Software Categorization | CSR | Software Categorization Report | SPG | Yes
02 | Review Existing Code | Software from Prototype | None | N/A | No
03 | Computer System Design | CSR | CSD | SPG | Yes
04 | Computer System Design Review | CSR, CSD | CSRR | SPG | Yes
05 | Software Requirements Specification (VB Program) | CSD | SRS-VB | SPG | Yes
06 | Software Requirements Specification (VB Program) Review | SRS-VB, CSD | RRR-VB | SPG | Yes
07 | Software Design (of Custom VB Programs) | SRS-VB, CSD | SDD-VB | SPG | Yes
08 | Software Design (of Custom VB Programs) Review | SRS-VB, SDD-VB | DRR-VB | SPG | Yes
09 | Coding (of Custom VB Program) | SRS-VB, SDD-VB | VB Source Code & Executable | SPG | Yes
10 | Software Requirements Specification (of Custom C Program) | CSD | SRS-C | SPG | Yes
11 | Software Requirements Specification Review (of Custom C Program) | SRS-C, CSD | RRR-C | SPG | Yes
12 | Software Design (of Custom C Programs) | SRS-C, CSD | SDD-C | SPG | Yes
13 | Software Design Review (of Custom C Programs) | SRS-C, SDD-C | DRR-C | SPG | Yes
14 | Coding (of Custom C Program) | SRS-C, SDD-C | C Source Code & Executable | SPG | Yes
15 | Software Configuration of the FIX DMACS Configurable Software | CSD | Configured Software | FIX DMACS Documentation | Yes
16 | Revise Code (Integration of Custom Software & Configurable Software) | SRS-VB, SDD-C, CSD, Source Code | Completed Software System for Testing (Media Issued) | SDP | Yes
17 | Subsystem and System Integration Planning | - | Sub-system & System Integration Test Plan and Procedures | SDP | Yes
18 | Validation Test Planning | CSR | Validation Test Plan & Procedures | SDP | Yes
19 | Execute Validation Tests | Validation Test Plan & Procedures | Validation Test Report | SDP | Yes
20 | Software Release Note | Test Report | SRN | SDP | Yes
21 | Software Qualification of Deliveries | SRS-VB, SRS-C, SDD-VB, SDD-C, RRR-C | - | SDP | Yes
22 | Software Qualification of Software Engineering Tools | - | Software Tools & Qualification Report | SDP | Yes
23 | Computer Software Installation and Configuration Procedure | CSR, CSD, Source Code, Manufacturer Doc. | Completed Computer Software Installation & Configuration Procedure | SDP | Yes
24 | Configuration of Test Facility | System Development & Simulator Reports | Completed Test Facility Software System Report | SDP | Yes
25 | Configuration Management & Change Control | All | Bugs, software and documents in CMS | SDP | Yes

6.0 System Integration and Verification (SI&V) Sub-process

Figure-6.1 SI&V Sub-Process

The System Integration and Verification (SI&V) sub-process shown in Figures 6.1 to 6.4 is used to describe all activities that must be integrated and verified. The SI&V follows a process in which the system is synthesized into its final deliverable form and verified to meet its requirements. The System Engineering Management Plan (SEMP) defines how the product and service will be verified in a top-down format, using a system classification index (SCI) to verify each component. Table-6.1 provides the system requirements checklist that acts as a guide to make sure the requirements are correct and complete.


6.1 System Requirements Review

Table-6.1 System Requirements Review (SRR) Checklist
(Key: Y = Yes, N = No, NA = Not Applicable, U-R = Under Review, O = Observation)

1  Review Preparation                                                               Answer  Comments
2  Have standards been identified to explicitly define the review process?          Y
3  Were guidelines used to prepare for the review?                                  Y       The Standards and Procedures Guidebook (SPG)
4  Has the project submitted any request for deviation or waiver to the defined process?  Y
5  Have entrance and exit criteria been established for the review?                 Y
6  Was an agenda prepared and distributed in advance of the review?                 Y
7  Was the review package provided with ample time for the review?                  Y
8  Was the correctness and completeness of the requirements to be implemented confirmed?  Y
9  Were the appropriate stakeholders in attendance?                                 Y
10 Have all system requirements been identified?                                    Y
11 Are the SW requirements completed to the subsystem level?                        Y       SW Test Plan Review
12 Was the SRR project scope defined clearly, to avoid any confusion during the discussions?  Y
13 Were interface requirements with external systems defined?                       Y
14 Were interface requirements between independent system elements defined?         Y
15 Were the functional requirements for subsystems and components of each independent system element defined so as to fully achieve system requirement traceability?  Y
16 Were all functional and performance requirements to be implemented completed?    Y

Other comments recorded in the review: Software Engineering Technology Insertion; See Schedule; Statement of Work (SW); Via software program.


6.2 Requirements Traceability Matrix (RTM)

Table-6.2 The System Baseline Requirements

Requirement ID                                        Requirement Statement
R1-2.0  Boiler Feed Water Flow Input                  Turbine Cycle Check: input voltage 1-5 V; value range 0-1250.466 kg/s
R2-2.0  Boiler Discharge Flow                         Turbine Cycle Check: input voltage 1-5 V; value range 0-1310.380 kg/s
R3-3.0  Turbine Steam Flow @ point 1.1 Boiler Feed Pump   Turbine Cycle Check: input voltage 1-5 V; value range 0-1250.466 kg/s
R4-3.0  Turbine Steam Flow, Boiler Discharge          Turbine Cycle Check: input voltage 1-5 V; value range 0-1250.466 kg/s
R5-3.0  Turbine Steam Flow @ Moisture Separator       Turbine Cycle Check: input voltage 1-5 V; value range 0-59.934 kg/s
R6-1.4.2  Condenser Flow                              Turbine Cycle Check: 0-650 kg/s
R7-1.4.3  Condenser Extraction LP1                    Turbine Cycle Check: 0-35 kPa(a)
R8-1.4.3  Condenser Extraction LP2                    Turbine Cycle Check: 0-35 kPa(a)
R9-1.4.3  Condenser Extraction LP3                    Turbine Cycle Check: 0-35 kPa(a)
R10-1.4.4  Vacuum                                     Turbine Cycle Check: 4.2 kPa

The RTM shown in Table-6.2 is used with the Group-3 plant. Group-1 and Group-2 plant data are not available; however, the data extraction process and the engineering parameters are similar across the groups. The 1-5 V input in each requirement statement is used by the Digital Control Computer (DCC) to represent a given engineering range, such as 0-1250.466 kg/s.
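The 1-5 V representation described above is a linear mapping with a live zero at 1 V. The following C helper sketches that conversion for requirement R1-2.0 (boiler feed water flow); the function name and the clamping behaviour are assumptions for illustration.

    #include <stdio.h>

    /* Map a 1-5 V DCC input to an engineering range (live zero at 1 V). */
    static double dcc_volts_to_eng(double volts, double lo, double hi)
    {
        if (volts < 1.0) volts = 1.0;        /* clamp below the live zero (assumed) */
        if (volts > 5.0) volts = 5.0;
        return lo + (volts - 1.0) / 4.0 * (hi - lo);
    }

    int main(void)
    {
        /* R1-2.0: 1-5 V represents 0-1250.466 kg/s of boiler feed water flow. */
        printf("%.3f kg/s\n", dcc_volts_to_eng(1.0, 0.0, 1250.466));  /* 0.000    */
        printf("%.3f kg/s\n", dcc_volts_to_eng(3.0, 0.0, 1250.466));  /* 625.233  */
        printf("%.3f kg/s\n", dcc_volts_to_eng(5.0, 0.0, 1250.466));  /* 1250.466 */
        return 0;
    }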


6.3 System Integration and Verification Planning

Figure-6.2 SI&V Planning Activity

101. Integration and verification test plan completed
102. Developed requirements verification matrix (RVM) available
103. Integration requirements are defined
104. Developed Integration and Verification (I&V) defined
105. Verification requirements defined
106. Facility needs defined
107. Staffing and discipline needs defined
108. & 109. Data and equipment needs defined
110. SI&V Plan developed


Table-6.3 SI&V Planning Input Checklist

Supplier              Input                                                Entry Criteria     Status
SE Mgmt               System Engineering Management Plan (SEMP)            Approved by SEM
SE Mgmt               User Requirements Document (URD)                     Approved by SEM
SE Mgmt               Technology Insertion Plan                            Approved by SEM
SE Mgmt               Configuration Management Plan                        Approved by SEM
SE Mgmt               Deployment Plan                                      Approved by SEM
SE Reqts Definition   Interface Control Documents                          Approved by SEM
SE Reqts Definition   Test Requirements Specification (TRS)                Approved by SEM
SE Reqts Definition   Test Performance Measurement (TPM)                   Approved by SEM
SE Reqts Definition   Operational Concept Document                         Approved by SEM
SE Reqts Definition   Mission Profile                                      Approved by SEM
SE Reqts Definition   Measures of Effectiveness (MOE)                      Approved by SEM
SE Reqts Definition   Requirements Traceability Matrix (RTM)               Approved by SEM
SE Reqts Definition   Technical Parameter (TP)                             Approved by SEM
SE Reqts Definition   Deployment Procedures                                Approved by SEM

Table-6.4 SI&V Planning Output Checklist

Customer   Output                                       Exit Criteria      Task #   Status
All        Requirements Verification Matrix (RVM)       Inspected
PM         SI&V Facility Request                        Approved by SEM
PM         SI&V Staffing Request                        Approved by SEM
PM         SI&V Data Request                            Approved by SEM
PM         SI&V Equipment Request                       Approved by SEM
All        System Integration & Verification Plan       Approved by SEM

Table-6.3 and Table-6.4 show the entry and exit criteria, respectively, as they relate to various areas of SE, such as the System Engineering Management Plan, the User Requirements Document, technology insertion, and the System Integration & Verification Plan. By using the checklists, one can verify that the required data and documentation are completed and approved by the system engineering management process.


6.4 System Integration and Verification Development

Figure-6.3 SI&V Development Activity

201. Operators/maintainers training developed
202. Integration test plan development
203. Verification test plan development
204. Designed integration procedure developed
205. Designed verification procedure developed
206. Integration equipment & data certified
207. Verification equipment & data certified

6.5 System Integration and Verification Execution


Figure-6.4 SI&V Execution Activity

SI&V by physical inspection (301), analysis (302), process operation (303), functional configuration (309), and failure mode analysis (311) were carried out. The result of each test must be recorded with a pass or fail check mark and the tester's signature.

If a test fails, the process goes from review test failure (310) to failure mode analysis (311), and then finally to troubleshoot, fix, and re-verify (312).


6.6 Applying OPC Interface

Applying a Human-Machine Interface (HMI), also called a Man-Machine Interface (MMI), to a power plant can significantly increase the reliability, robustness and operational efficiency of the system.

HMI/MMI can be supported by most industrial connectivity protocols. However, OPC provides an open-protocol approach that allows data from power plant instrumentation devices, such as pressure switches, temperature switches, flow switches and level switches, to be read directly from the field into Microsoft Excel data sheets. OPC data can be captured over serial port, Ethernet, radio, 4-20 mA, and proprietary links. The field buses generally used to carry the data are PROFIBUS, HART, and Modbus. OPC servers allow access from both local and remote PCs. See Figure-6.5 for the OPC Data Access Architecture layout.


6.7 OPC Data Access Architecture

Figure-6.5 OPC Architecture

The Data Access Server can be a simple program, for example one that provides access to the registers of PLCs. Furthermore, Data Access Clients can use the following software to access the various OPC Data Access Servers:

(1) Excel Spread Sheet

(2) Visual Basic/C/C++

(3) HMI/MMI or SCADA

One of the key advantages of OPC data access is that it allows a client to connect to several servers at the same time. Data Access Clients can create several OPC objects in the Data Access Servers to define their view of the process.


6.8 System Integration Test Using HMI

Purpose

This report specifies the test cases and test procedures to be executed for system integration testing and system upgrade using HMI software. The main function of the HMI (SCADA) software is to provide an immediate-action operator display with access to field devices and their current function(s). Operator displays will be touch-screen, object-oriented tags and icons with multiple layers of menus. The current system uses two independent computer network systems, known as odd and even, which correspond to X and Y respectively. The odd and even system is a mainframe design from the early 1970s; for the system to evolve with an open-ended approach that will prolong its life cycle, HMI software is necessary.

Test Methods

(1) Preliminary testing

This test will be done using two test-boxes instead of an odd and even computer (DCC-X and DCC-Y) system. Both boxes will use switches to emulate the field devices. Box-1 and box-2 were connected to HMI PC-1 through a RS-485 serial communication port to

COM-5 port and COM-6 port respectively. The field inputs and outputs and flags will be transmitted to HMI PC and dull application window should be displayed. Check date and time format change in the status bar at the bottom of the screen. If time and date format is incorrect then edit accordingly. Check Legend display window is available to describe

I/O state symbols and colour code.

(2) Usage Model testing

Using the Usage model testing, both software and hardware will be integrated for testing using realistic scenarios in functional environment. This method proves if the integrated environment meets customers’ expectation. The main disadvantage for this approach is that debugging the system can be very time consuming and this could lead to cost and scheduling issues.


Top-down and Bottom-up testing

Top-down testing is an integration approach in which the branches of a module are tested in a step-by-step format until the complete module is tested.

Bottom-up testing

This approach allows integration testing of the lowest-level components first. The process continues until the upper hierarchy (top level) is tested. The main advantage of this approach is that bugs in the system are easily detected, while in the top-down approach it is easier to locate a missing branch or link.

Test Approach

The objective of the System Integration Testing is to verify that the functions specified in the SCADA/HMI Operator Display Software Requirements Specification are satisfied for the HMI PC under test. The strategy is essentially based on black-box functional testing. Defined test cases and functional performance will be carried out in the following manner:

(a) Each test case will have a procedure and other supporting information, detailed enough to allow another person who did not carry out the original test to successfully complete the test.

(b) Carry out the test procedures as described in the test scripts. Describe expected results of each test case so that a pass/fail outcome is achieved.

(c) Provide traceability matrices cross-referencing the section of the software requirements specification to the test cases

(d) Produce the System Integration Test Report summarizing the test results.


Test Cases and Procedure Specifications

A total of 14 test cases are constructed based on the requirements specified in the HMI

Operator Display Software Requirements Specification document, they are as follows:

 Test Case “HMI.1”

 Test Case “HMI.2”

 Test Case “HMI.3”

 Test Case “HMI.4”

Test Case “HMI.5”

 Test Case “HMI.6”

HMI PC Start-up/Shut-down Test

HMI PC Communication Interface Test

HMI PC Information Header Test

HMI PC Status Bar Test

HMI PC Stop Push Button Test

HMI PC Pushbutton Symbolic Icon &

“Tooltip” Test

 Test Case “HMI.7”

 Test Case “HMI.8”

 Test Case “HMI.9”

Test Case “HMI.10”

 Test Case “HMI.11”

HMI PC I/O Status Flag Test

HMI PC Legend Display Test

HMI PC Zoom Window for I/O Byte Test

HMI PC Reset Button & State Change Test

HMI PC Communication Timeout &

Recovery Test

 Test Case “HMI.12”

 Test Case “HMI.13”

 Test Case “HMI.14”

HMI PC Flag test Box Test

HMI Performance Test

HMI Stress Test

Pass/Fail Criteria

If any step fails in the test case, the test is considered failed. The test case is only considered “Pass” if all test steps are found to be successful. This project features only the test case HMI.1 (Start-up/Shut-down) and test case HMI.13 (Performance Test).


6.9 System Integration & Verification Test Procedures

Table-6.5 Test Case "HMI.1" HMI PC Start-up & Shut-down Test

Test Case ID: HMI.1    Pass Criteria: Expected results achieved    Pass/Fail

Description of features to be tested. To verify:
(1) That the HMI application shall automatically start up and run as a full-screen application
(2) That there shall be no "exit" functions in the application
(3) That the application shall be restarted upon a Windows XP Operating System reboot

Test case environment: the system test configuration as specified in the software requirements document and the test & simulation facility setup lab
Test case inputs: None
Prerequisites: (1) None

Test Procedure HMI.1.1: Verify HMI PC Start-up and Shutdown
Step-1: Power on the HMI PC.
Step-2: Verify that the HMI application is automatically started with a full-screen graphic display.
Step-3: Verify that there is no "exit" button/key on the screen to terminate this application.
Step-4: Power down the PC. Verify that the application is terminated, with no display on the screen.
Step-5: Power up the PC. Verify that the HMI application is restarted with a full-screen display.

Complete: Yes/No


6.10 System Integration & Verification Test Results

Table-6.6 Test Case Results (Test Case ID: HMI.1)

Expected Results                                                                           Actual Results
Result #1: When the HMI PC is powered up, the HMI application is automatically started     Pass (Step-1 and Step-2)
and runs as a full-screen application
Result #2: When the HMI application is started, there is no exit function provided to      Pass (Step-3)
terminate the application
Result #3: When the HMI PC is powered down, the HMI application is shut down               Pass (Step-4)
Result #4: When the HMI PC is powered up again, the HMI application is restarted and       Pass (Step-5)
runs as a full-screen application

Comments:
Tester Name……………  Signature……………………  Date……………………..


6.11 System Integration & Verification Test Case

HMI Performance Test Plan - Case HMI.13

The following types of DAH I/O interfaces will be required for DES: (a) voltage analog inputs with a range of 0 to 5 VDC; (b) DC voltage digital inputs with a range of 10 to 32 VDC; and (c) watchdog timer outputs with a time-out period of approximately 5 seconds.

Rationale 1: The voltage analog inputs are required to convert the voltage signals associated with various field process variables into numerical count values that are then provided to the DES SCADA computer. The digital inputs are required to convert the presence or absence of a voltage signal into logical “1” or “0” values that are then provided to the DES SCADA computer. The watchdog timer outputs are dry contacts that remain closed as long as the DES SCADA computer toggles the watchdog interface at least once every 5 seconds, otherwise the contacts open automatically. For the DAH, the analog inputs will be of the differential type.

Rationale: The field signals which are connected to the analog inputs do not have a common ground.

The DAH I/O devices will provide optical isolation between the analog field signals and the logic circuits. Each optically-isolated I/O device shall survive common mode or differential voltages up to 300 VDC and 200 VAC.

Rationale 2: The optical isolation will prevent a fault in the common logic circuits of a given DAH device from affecting the field signals polled by the DAH device.

Furthermore, the optical isolation will also alleviate the possibility of a fault in one of the field signals affecting the other field points connected to the same DAH device.

For the DAH, the analog inputs shall have a minimum input resistance of 1 Meg-Ohm, whether the DAH is powered on or off.


The DAH shall be implemented as a network of intelligent devices distributed throughout the plant. As a result, the communications between the DAH and the DES SCADA computer shall use the RS-422/485 serial protocol, or equivalent.

Rationale 3: The RS-422 protocol supports a total cable length of up to 1200 meters (4000 feet) between the computer and the DAH devices. The RS-485 addressing mode allows multiple devices (up to 256) to be connected to the RS-422/485 network.

All intelligent hardware will be equipped with status indicator lights.

The DAH and the termination for all signal wiring will be housed in dedicated cabinets.

I/O modules must be replaceable without disconnecting the signal wires, and without having to power down the DAH components located in cabinets other than the cabinet where the affected I/O module is located.

If possible, commercial off-the-shelf software will be employed in the DES SCADA computer for acquiring data from the DAH.

Rationale 4: Using a COTS software package should increase reliability and simplify the integration effort. The DAH used in DES will support the data acquisition mode in which it is polled periodically by the DES SCADA software.

The serial link between the DES SCADA computer and the DAH will support a minimum communication rate of 9600 bit/s.

Rationale 5: A minimum communication rate of 9600 bit/s is needed to achieve the desired polling rate, especially if retries are required due to communication errors.

The measurement performed by any voltage analog input module will be correct for common mode voltages up to 24 V. For common mode voltages higher than 24 V, the measurement may be inaccurate.


6.12 Normal Operating State

In normal operating mode, DES will transfer data from DCC-X to the Gateway-X computer and then to the DES SCADA computer, and from the Data Acquisition Hardware to the DES SCADA computer. This will be done at rates that ensure steady, periodic refreshing of the DES process database with input data no fewer than 10 times per minute, which corresponds to a scan time of 6 seconds.

6.13 Data Acquisition from Plant Instrumentation

The DES SCADA computer will poll the Data Acquisition Hardware every 3 seconds and obtain a new measurement for each defined field point.

The DES SCADA computer will obtain data from the Data Acquisition Hardware with a minimum precision of 12 bits.

Rationale: 12 bits of precision is acceptable for the calculations to be performed using the field inputs. 12 bits of input precision is equal to a decimal representation in the range of 0 to 4095 over a 0 to 5 volt range, which corresponds to approximately 819 counts per volt.
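The rationale can be checked with a few lines of C: 4095 counts spread over 0-5 V gives 4095/5 ≈ 819 counts per volt. The helper below converts raw DAH counts back to volts; the names are illustrative, not taken from the DES software.

    #include <stdio.h>

    #define ADC_FULL_SCALE 4095   /* 12-bit: decimal range 0 to 4095 */
    #define ADC_SPAN_VOLTS 5.0    /* 0 to 5 V input range            */

    static double counts_to_volts(int counts) {
        return counts * ADC_SPAN_VOLTS / ADC_FULL_SCALE;
    }

    int main(void)
    {
        printf("counts per volt = %.1f\n", ADC_FULL_SCALE / ADC_SPAN_VOLTS); /* 819.0  */
        printf("2048 counts = %.4f V\n", counts_to_volts(2048));             /* ~2.5 V */
        return 0;
    }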


Table-6.7 DAH/SCADA Requirements Verification Matrix (RVM)

Reqts ID   Plan #   Requirement Statement
R-1        6.11     DES interface analog input shall be 0-5 VDC, and the DC digital input range shall be 10-32 VDC
R-2        6.11     Analog input voltage signals shall convert the process variable into numerical values
R-3        6.11     The DES computer must toggle the watchdog timer every 5 seconds
R-4        6.11     The DAH I/O device shall provide optical isolation between the analog field devices and the logic circuits
R-5        6.11     DAH inputs shall have a minimum resistance of 1 Meg-Ohm
R-6        6.11     Communication between the DAH and the DES SCADA computer shall use the RS-422/485 protocol
R-7        6.11     The RS-422 protocol shall not exceed a 1200 m cable run
R-8        6.11     The RS-485 addressing mode shall not exceed 256 devices
R-9        6.11     The serial link between the DES SCADA computer and the DAH shall maintain a minimum baud rate of 9600 bit/s
R-10       6.12     In normal operating mode, DES shall transfer data from DCC-X to the Gateway-X computer and from the DAH to the SCADA computer
R-11       6.12     The DES SCADA database shall refresh with input data no less than 10 times per minute and shall maintain a scan time of 6 seconds
R-12       6.13     The DES SCADA computer will poll the DAH every 3 seconds and obtain a new measurement for each defined field point
R-13       6.13     The DES SCADA computer shall obtain data from the DAH with a minimum precision of 12 bits
R-14       6.13     12 bits shall correspond to 0 to 4095 over a 0 to 5 V range


System Integration & Verification Test Procedures

Table-6.8 Performance Verification Matrix

Test Case ID: HMI.13    Pass Criteria: Expected results achieved    Pass/Fail

R-1 Watchdog Timer
To verify that the watchdog timer output contacts remain closed as long as the DES SCADA computer toggles the interface at least once every 5 seconds; otherwise the contacts should open automatically.

R-2 Isolation
To verify that the DAH I/O device provides optical isolation between the analog field signals and the logic circuits.

R-3 Analog Input Resistance
To verify that the analog input has a minimum resistance of 1 Meg-Ohm, whether the DAH is powered on or off.

R-4 Communication
To verify that the serial link between the DES SCADA computer and the DAH will support a baud rate of 9600 bit/s.

R-5 Plant Instrumentation
To verify: (1) that the DES SCADA computer will poll the Data Acquisition Hardware every 3 seconds and obtain a new measurement for each defined field point; (2) that the DES SCADA computer will obtain data from the DAH with a minimum precision of 12 bits; and (3) that a 12-bit input precision is equal to a decimal representation in the range of 0 to 4095 over a 0-5 VDC range.

Comment:
Complete: Yes/No


System Integration & Verification Test Procedures

Table-6.9 Test Case Results (Test Case ID: HMI.13)

Expected Results                                                                Actual Results
Result #1: Watchdog function                                                    Pass
Result #2: Optical isolation function                                           Pass
Result #3: Analog input resistance > 1 MΩ                                       Pass
Result #4: Communication baud rate of at least 9600 bit/s                       Pass
Result #5: Plant instrumentation: (1) DES SCADA polls the DAH every 3 seconds;  Pass
(2) DES SCADA obtains data from the DAH with a minimum precision of 12 bits;
(3) 0 to 4095 represents a range of 0 to 5 VDC

Tester Name……….  Signature…………………….  Date……………………


Conclusion

In extending the life cycle of the power plants through predictive and preventive maintenance, this project is on target to meet its goals. ConOps was used to create a vision of what is required to achieve the desired goals and to add value for the sponsor.

The product improvement template shown in Figure-1.3b provides a framework for the project. It also provides an open ended approach to allow the system to evolve with time.

For example, the business LAN with the use of PASSPORT provides the hub for managing the work activities, reports, procedures and supporting documentation. The process LAN is used to provide monitoring, trending, functional requirements and performance requirements using a network of computers that provides a source of monitoring redundancy. The LAN network is also used to mitigate the risk of one system of computers failing and leaving the power plant system without any source of monitoring system performance.

Starting with the upper-level components hierarchy, this project was able to display the outcome from the boiler feed flow nozzle in graphs. Furthermore, by using the quadratic I/O curve and the raw data provided from the GBET, the efficiency of the boiler was tabulated in conjunction with an acceptable R-square value of 88%.

The THR test was carried out immediately after the GBET, and the findings were within the tolerance range of the manufacturer's design specification. In addition, this report allows future periodic and scheduled THR tests to be carried out using the step-by-step procedure shown. There were other tests that directly affected the thermal heat rate, such as the throttle/back pressure test and the feed-water and condensate extraction tests. These tests were carried out to make sure that the guaranteed heat rate set by the manufacturer is at its optimum value.

By applying SCADA/HMI technology using the OPC platform, this report will allow the targeted power plants to use open-ended, computer-based interfaces and connectivity to become more reliable, more robust and more efficient. Furthermore, by using SCADA, this project was able to bring the operating plant to the office through large computer monitors strategically placed around the office. This enables technical and non-technical staff to see the live trending process and become more involved in the system improvement plan. These same technologies will be used to enhance the preventive and predictive maintenance scheme implemented here, and to provide the system with added value to meet the sponsor's goals. In the first 9 months of this project, SE practices and procedures reduced forced outages from 25% to 12%, which resulted in a net return on investment of $27M and put the project on track to meet its goals and extend the system life cycle.

This project was able to eliminate the risk of applying software programs to a complex system in a seamless manner. This was done through a mock-up scenario of a hybrid prototype that emulates the power plant operational parameters to provide the system functional requirements needed to improve the system. By using software programs, this project was able to validate and integrate a network of computers that provides an indispensable source of monitoring and trending for the system. By trending the system, one can predict and prevent failure through selective maintenance practices, thus improving system reliability and robustness.

By applying new technology to the existing system, the need for training has emerged. This project uses Systems Engineering practices to implement On-the-Job Training programs to teach maintainers and operators how to use the new tools to improve the reliability of the system. Using trained personnel, the predictive and preventive maintenance program has already proven effective in addressing the problems mentioned in the ConOps. For example, operators have used SCADA to monitor specific devices and identify gradual failure over time. This enables the operator to use PASSPORT to initiate a work request for maintenance to be done. It also allows the process to go from predictive to selective maintenance, which has reduced forced outages by 12%.

Project Continuation

I recommend the continuation of this project to the end of the required schedule so that the desired goals will be achieved, improving the system performance, reliability, and maintainability as it addresses the needs of the stakeholders.


Glossary

Term           Definition
API            Application Programming Interface
Boot           The process of initializing the computer and loading the operating system
Client         A software application that contacts and obtains data from a server software application
COM            Component Object Model
COTS           Commercial Off-The-Shelf
CRC            Cyclic Redundancy Check
DAQ            Data Acquisition (the collection and measurement of signals from a sensor or transducer)
Database       A collection of records or pieces of information
DCOM           Distributed Component Object Model; a Microsoft proprietary technology that enables Windows executable applications to communicate with each other between two user accounts. OPC communication depends on DCOM.
Ethernet       A standard communication protocol for local area networks (IEEE 802.3) that enables computers to exchange data
HMI            Human Machine Interface; a software application that uses a Graphical User Interface (GUI)
LAN            Local Area Network
OPC            Openness, Productivity and Collaboration
OPC A&E        OPC Alarms & Events
OPC Client     A software module that enables applications to acquire data from an OPC Server using OPC
OPC Server     A software module that enables applications to provide their data to the outside world using OPC
PLC            Programmable Logic Controller; a device that employs the hardware architecture of a computer and the relay ladder diagram language
Polling        A communication control method whereby a "master" station asks the devices attached to a common transmission medium whether they have information to transmit
Protocol       A set of rules governing the format of messages exchanged between computers, enabling them to communicate
RTU            Remote Terminal Unit; an industrial data collection device/computer in a remote location that communicates through telemetry such as radio, modem or landline


References

□ Chan, A. Y., and K. C. Chan, Post-Retrofit HP Turbine Acceptance Test. Ontario Power Generation Performance Testing Department, 2007.

□ Chattopadhyay, P., Boiler Operation Engineering: Questions and Answers. McGraw-Hill, 2000.

□ Department of Defence Systems Management College, Systems Engineering.

□ International Council on Systems Engineering (INCOSE), Concept of Operations (ConOps). Document No. INCOSE-TP-2003-015-01.

□ Jasansky, A., P. Bodach, and K. Jones, Pre and Post Overhaul Turbine Heat Rate Test. Ontario Power Generation, 1991.

□ Lord, Ross, Calibration of Flow Nozzle. Ontario Power Generation, 1961.

□ Martin, James N., Systems Engineering Guidebook: A Process for Developing Systems and Products. CRC Press LLC, 1997.

□ Moran, Michael J., and Howard N. Shapiro, Fundamentals of Engineering Thermodynamics. John Wiley & Sons Inc., 2004.

□ www.mathworks.com, A MATLAB-Based Tutorial Approach.

□ www.opcti.com/OPC-Terms-and-Definitions.aspx, OPC Terms and Definitions.

□ www.weibull.com/SystemRelWeb/preventive_maintenance.htm, Preventive Maintenance.


Appendix A

Thermal Performance Curve Derivation and Overview

Input and Output (I/O) Curve Derivation:

Input and output tests and calculations were used to provide system performance data in the form of an input/output equation. I/O curves were primarily used in the calculation of the total incremental cost. The following two methods were used to establish the I/O curve for a given unit.

(1) Direct I/O method (measurement): This is the most commonly used method, partly because it requires only measurements of fuel flow, the calorific value of fuel samples taken during the tests, net unit output, and reference operating conditions. The test is ideally suited for gas- and oil-fired units; however, it is not ideal for coal-fired units.

(2) Indirect I/O method (calculation): This method of calculating I/O curves is more complex than the direct method; however, it is much more accurate for coal-fired units.

I/O results are derived from two sets of separate performance tests, namely the Thermal Heat Rate (THR) test and the steam generator efficiency test. Both tests require at least four test points, ranging from 25% to 100% of the maximum capacity rating (MCR), to calculate the heat inputs for given net outputs. Results are then curve-fitted to yield the desired form of I/O equation, where the required input is expressed as a function of the net load desired:

I = A + B × MWn + C × MWn²

where:
I        = unit heat input (GJ/h)
A, B, C  = coefficients of the I/O equation
MWn      = net unit output (MW)
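Once the coefficients A, B and C have been curve-fitted, the I/O equation can be evaluated directly, and the incremental heat rate follows from its derivative, dI/dMWn = B + 2 × C × MWn. The C sketch below evaluates both; the coefficient values are placeholders for illustration, not results from the tests.

    #include <stdio.h>

    /* I/O curve: I = A + B*MWn + C*MWn^2, with I in GJ/h and MWn in MW. */
    static double heat_input(double a, double b, double c, double mwn)
    {
        return a + b * mwn + c * mwn * mwn;
    }

    /* Incremental heat rate: dI/dMWn = B + 2*C*MWn (GJ/MWh). */
    static double incremental_heat_rate(double b, double c, double mwn)
    {
        return b + 2.0 * c * mwn;
    }

    int main(void)
    {
        /* Placeholder coefficients -- the real values come from curve-fitting
           the THR and steam generator efficiency test points (25-100% MCR). */
        double a = 150.0, b = 7.5, c = 0.002;

        for (double mwn = 125.0; mwn <= 500.0; mwn += 125.0)
            printf("MWn = %6.1f  I = %8.1f GJ/h  dI/dMW = %5.2f GJ/MWh\n",
                   mwn, heat_input(a, b, c, mwn),
                   incremental_heat_rate(b, c, mwn));
        return 0;
    }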


Appendix-B System Instrumentation

System Instrumentation (common to all tests performance) and The Summer Amplifier Used to Analyse Boiler Parameters

The boiler system control shown in Figure B1.0 can be verified to meet the system requirements with the use of the summer amplifier. This device takes multiple input signals and performs calculation/scaling functions to produce the required output. Boiler level depends on two other variables, namely feed water flow and steam flow. These three variables are input to a signal summer amplifier, and each is individually scaled for optimum boiler level control. To achieve this, each variable is connected to an amplifier (K) with its own calibration setting of zero and span (scaling).

The signal summer is calibrated to perform the following functions:

M = L × K1 + FW × K2 − FS × K3 = summer output

where:
L  = boiler level
FW = feed water flow
FS = steam flow
K1 = 1
K2 = 0.25
K3 = 0.95

Assuming we have a high-level design requirement to maintain the boiler level at 75%, the feed water flow and steam flow will also be set to 75%, so any change in one parameter will cause a proportional change in the other parameters.

This requirement can be verified with the signal summer formula:

M = L × K1 + FW × K2 − FS × K3
M = (0.75 × 1) + (0.75 × 0.25) − (0.75 × 0.95)
M = 0.75 + 0.1875 − 0.7125 = 0.225, i.e. 22.5% or 7.6 mA on a 4-20 mA loop
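The worked example is easy to reproduce in C. The sketch below repeats the 75% case and converts the summer output to a 4-20 mA loop signal; the function names are illustrative, while the gains K1, K2 and K3 are those given above.

    #include <stdio.h>

    /* Summer output: M = L*K1 + FW*K2 - FS*K3 (all signals as fractions 0..1). */
    static double summer(double level, double feedwater, double steam)
    {
        const double K1 = 1.0, K2 = 0.25, K3 = 0.95;   /* calibration gains */
        return level * K1 + feedwater * K2 - steam * K3;
    }

    /* Convert a 0..1 signal fraction to a 4-20 mA loop value. */
    static double to_milliamps(double fraction) { return 4.0 + fraction * 16.0; }

    int main(void)
    {
        double m = summer(0.75, 0.75, 0.75);            /* all variables at 75% */
        printf("M = %.1f %%  (%.1f mA)\n", m * 100.0, to_milliamps(m));
        /* Prints: M = 22.5 %  (7.6 mA), matching the worked example. */
        return 0;
    }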


Figure-B1 Summer Amplifier

System Instrumentation & Control

The boiler pressure flow loop shown in Figure B1 consists of a pressure transmitter, a pressure alarm (PA), a pressure indicator alarm (PIA), a pressure recorder (PR), and a digital control computer analog input (DCC A/I). This is one of three channels that monitor the boiler pressure.

The system operates on a two-out-of-three logic principle, which means that a single faulty channel will generate the necessary alarm but will not trip the system; however, if there is a fault on two channels simultaneously, the system will trip. This multi-channel design makes the system more reliable and more robust, and enhances redundancy. Figures B2, B3 and B4 show the layout of the loops commonly used in the power plants, such as pressure, temperature and level.
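The two-out-of-three trip principle can be expressed as a simple voting function: a single channel over the setpoint raises an alarm, while two or more generate a trip. The C sketch below is illustrative only; the setpoint and channel values are assumed numbers.

    #include <stdio.h>

    /* Two-out-of-three vote: trip only if at least two channels exceed the setpoint. */
    static int vote_2oo3(double ch[3], double setpoint)
    {
        int over = 0;
        for (int i = 0; i < 3; i++)
            if (ch[i] > setpoint) over++;
        if (over == 1)
            printf("alarm: one channel over setpoint (possible channel fault)\n");
        return over >= 2;     /* 1 = trip */
    }

    int main(void)
    {
        double setpoint = 4500.0;                          /* kPa, assumed setpoint */
        double healthy[3]  = { 4300.0, 4310.0, 4305.0 };
        double one_bad[3]  = { 4300.0, 4800.0, 4305.0 };   /* one faulty channel */
        double two_over[3] = { 4600.0, 4800.0, 4305.0 };

        printf("healthy:  trip = %d\n", vote_2oo3(healthy, setpoint));
        printf("one bad:  trip = %d\n", vote_2oo3(one_bad, setpoint));
        printf("two over: trip = %d\n", vote_2oo3(two_over, setpoint));
        return 0;
    }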


Figure-B2 PT Loop

Figure B3 TT Loop

Figure-B4 LT Loop


Instrument Signal Loop Calculation:

Industry standard instrumentation ranges:
(1) Pressure range: 20-100 kPa (span = 80 kPa; lower range value (LRV) = 20 kPa)
(2) Current range: 4-20 mA (span = 16 mA; live zero = 4 mA)

% Signal = (measured value − LRV) / span × 100%

Assuming a measured signal of 70 kPa:
% Signal = (70 kPa − 20 kPa) / 80 kPa × 100% = 62.5%

Signal value = (% Signal × span) / 100% + live zero

Assuming a signal level of 75%:
Signal value = (0.75 × 16 mA) + 4 mA = 16 mA


A Typical Square Root Extractor Application in a Flow Loop

Figure-B.5

Figure-B6 Graph of SQRT Extractor (input %DP from the FT versus % output)

Figure B6 shows the measurement concept, which utilizes a differential pressure transmitter whose output is proportional to the square of the flow: ΔP ∝ (Flow)^2, or equivalently Flow ∝ √(ΔP). Looking at the graph of the differential pressure created by the flow in the pipe, we can easily see that the increase is very small at low flow rates; for example, a differential pressure of 1% corresponds to a 10% process flow. However, for control purposes we need a linear signal to represent flow.


Square Root Extractor Input & Output Calculations Requirement

There are four common signal ranges that are industry approved. They are two electronic ranges: 4-20 mA and 10-50 mA, and two pneumatic ranges: 3-15 psi. and 6-30 psi. These ranges all have a positive “Live” zero value to represent a 0% signal.

To calibrate a square root extractor we must first know what output values to expect for a set of input values representing 0 to 100% transmitter output signal. Tables B1 and B 2 show the input and output values for electronic square root extractors.

Table B1: 10-50 mA Loops

∆P Input %   ∆P Input mA   Flow Output %   Flow Output mA
0            10.00         0               10.00
1            10.40         10              14.00
25           20.00         50              30.00
50           30.00         70.71           38.28
75           40.00         86.60           44.64
100          50.00         100             50.00

Table B2: 4-20 mA Loops

∆P Input %   ∆P Input mA   Flow Output %   Flow Output mA
0            4.00          0               4.00
1            4.16          10              5.60
25           8.00          50              12.00
50           12.00         70.71           15.31
75           16.00         86.60           17.86
100          20.00         100             20.00


The calibration values in Tables B1 & B2 are calculated from the following relationships between input and output (a worked sketch follows the list):

(a) Input % = (Output %)²
(b) Output % = √(Input %)
(c) % Signal = (signal value − live zero) / signal range
(d) Signal value = (% Signal × signal range) + live zero

Tables B3 and B4 show a typical calibration data sheet for Resistance Temperature Detection (RTD).

Table-B3 A Typical Instrumentation Calibration Sheet for Group-3 Plants

Cal Input Range:  From 376.406 to 442.748 Ohms
Cal Output Range: From 230 to 320 °C
Tolerance Calculation: % Span: 1.111; # of Points: 9; TUR: 4
Tolerance: As Found: ±3; As Left: ±1
Error %: 0.2778, 0.3704, 0.5556, 1.1111

Table-B4 Sample Calibration Points for RTD Temperature Transmitter

Point   Percent   Input (Ohms)   AF ±   Output (°C)   AL ±
1       0         376.406        3      230.0         1
2       25        392.992        3      252.5         1
3       50        409.577        3      275.0         1
4       75        424.320        3      295.0         1
5       100       442.748        3      320.0         1
6       75        424.320        3      295.0         1
7       50        409.577        3      275.0         1
8       25        392.992        3      252.5         1
9       0         376.406        3      230.0         1

AF = As Found; AL = As Left


Appendix-C System Engineering Master Schedule (SEMS)

Figures C1, C2 and C3 show the event-based schedule that forms the framework for the SEMS.

Schedules

System Requirements Review (SRR)
• Mission analysis completed
• Support strategy defined
• System options decisions completed
• Design usage defined
• Operational performance requirements defined
• Manpower sensitivities completed
• Operational architecture available and reviewed

System Functional Review/Software Spec Review (SFR/SSR)
• Installed environments defined
• Maintenance concept defined
• Preliminary design criteria established
• Preliminary design margins established
• Interfaces defined/preliminary interface specs completed
• Software and software support requirements completed
• Baseline support/resources requirements defined
• Support equipment capability defined
• Technical architecture prepared
• System defined and requirements shown to be achievable

Preliminary Design Review (PDR)
• Design analyses/definition completed
• Material/parts characterization completed
• Design maintainability analysis completed/support requirements defined
• Preliminary production plan completed
• Prototype model completed
• Coupon testing completed
• Design margins completed
• Preliminary FMECA completed
• Software functions, architecture and support defined
• Maintenance task trade studies completed
• Support equipment development specs completed

Figure-C1.0 Event-Based Schedule Exit Criteria
Adapted from Department of Defence (DoD)


Schedules

Critical Design Review/Test Readiness Review (CDR/TRR)
• Parts, materials, processes selected
• Development tests completed
• Inspection points/criteria completed
• Repair level analysis completed
• Facility requirements defined
• Software test descriptions completed
• Hardware and software hazard analyses completed
• Firmware support completed
• Software programmers manual completed
• Durability test completed
• Maintainability analyses completed
• Qualification test procedures approved
• Productivity analyses completed

System Verification Review/Functional Configuration Audit (SVR/FCA)
• All verification tasks completed
• Durability tests completed
• Long lead time items identified
• PME and operational training completed
• Tech manuals completed
• Support and training equipment developed
• Fielding analysis completed
• Provisioning data verified

Physical Configuration Audit (PCA)
• Qualification testing completed
• All QA provisions finalized
• All manufacturing process requirements and documentation finalized
• Product fabrication specifications finalized
• Support and training equipment qualification completed
• All acceptance test requirements completed
• Life management plan completed
• System support capability demonstrated
• Post-production support analysis completed
• Final software description document and all user manuals complete

Figure-C2 Event-Based Schedule Exit Criteria (continued)


System Engineering Master Schedule (SEMS)

Figure C3: Event-Based Detailed Schedule Interrelationship

Figure C3 shows the detailed breakdown of the SEMS and the related tasks, which are tied to a Gantt chart.


Appendix D

Maintenance and Training

Preventive & Predictive Maintenance

Objectives:

(a) To decrease system downtime
(b) To decrease replacement cost
(c) To improve system reliability and robustness
(d) To extend the life cycle of the system

The mathematical model shown below is used to determine the optimum time for a Preventive Maintenance (PM) action. The model considers two outcomes for each operating cycle: either the unit or system fails before the planned replacement time t, incurring an unplanned replacement, or it survives to time t and is replaced preventively, which removes the possibility of an in-service failure and extends the life cycle of the unit.

By minimizing the cost per unit time (CPUT), the optimum replacement time can be found as shown below:

\[
\mathrm{CPUT}(t) = \frac{\text{Total expected replacement cost per cycle}}{\text{Expected cycle length}}
             = \frac{C_p\,R(t) + C_u\,[1 - R(t)]}{\int_0^t R(s)\,ds}
\]

Where:
R(t) = reliability at time t
C_p  = cost of planned replacement
C_u  = cost of unplanned replacement
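As a worked illustration of minimizing CPUT, the Python sketch below assumes a hypothetical Weibull reliability function; the distribution parameters and cost figures are illustrative assumptions, not plant data:

import math

def cput(t, cp, cu, beta, eta, n=2000):
    """Cost per unit time for planned replacement at age t, assuming
    Weibull reliability R(x) = exp(-(x/eta)**beta)."""
    R = lambda x: math.exp(-((x / eta) ** beta))
    h = t / n  # trapezoidal rule for the expected cycle length
    cycle = h * (0.5 * (R(0.0) + R(t)) + sum(R(i * h) for i in range(1, n)))
    return (cp * R(t) + cu * (1.0 - R(t))) / cycle

# Planned replacement assumed to cost 1/5 of a forced-outage replacement.
cp, cu, beta, eta = 1.0, 5.0, 2.5, 1000.0   # illustrative values, hours
best = min((cput(t, cp, cu, beta, eta), t) for t in range(50, 3001, 50))
print(f"Optimum PM interval ~ {best[1]} h (CPUT = {best[0]:.6f})")

Because C_u exceeds C_p and the Weibull shape parameter is greater than 1 (a wear-out failure mode), the search settles on a finite replacement age rather than run-to-failure.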

In maintaining the PM program, one must first calculate the cost of the PM action; it should be less than the cost of the corrective action. Corrective action cost will include the following:

(a) Intangible costs
(b) Downtime cost and loss-of-production cost


Predictive Maintenance Program

The Predictive Maintenance Program (PDM) is integrated into the overall preventive maintenance program so that planned maintenance can be performed before equipment deteriorates or fails. Predictive maintenance will be applied selectively, based on the risk significance of the equipment involved and the ability to monitor specific conditions and failure modes. The "As Is" system has already taken the initiative to monitor the turbine system, boiler feed pump, condenser cooling water pump, and primary heat transport pump with vibration sensors; this monitoring is presently part of the shutdown system. However, this project looks at extending predictive maintenance to incorporate electrical power phase buses, Motor Control Centres (MCC), inverter units, and battery banks. The reason for this extension is to reduce the alarming rate of forced outages that has occurred in these areas over the last five years.

The Method

Deterioration of important equipment was identified using techniques such as infrared thermography, portable vibration monitoring, and lubricant sampling analysis. This program will provide new equipment, hardware, and software for all three technologies. In addition, a new lube oil screening facility will be built to allow assessment of oil samples.

Extensive training will be provided to a dedicated team of Engineering and Maintenance personnel. Routine tests and inspections will be incorporated into the Work Management Program. Furthermore, this practice will replace breakdown maintenance of equipment with discretionary condition-based maintenance, thus extending the life cycle of the system. Attached below is a form that was developed and placed into PASSPORT to address a Technical Review Audit (TRA) finding on poor maintenance practices.

Test Data

Field test results from the Boiler Flow Nozzle Calibration Test/GBET and THR will be used to document the system and to guide predictive and preventive maintenance practices. In addition, by using Supervisory Control and Data Acquisition (SCADA), field operators will be able to check the field devices and request repairs where an irrational or abnormal reading is found, before equipment failure; a minimal sketch of such a check follows.
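The sketch below illustrates the kind of rationality check an operator might apply to SCADA readings; the tag names and limits are hypothetical, not actual plant points:

# Hypothetical rational limits keyed by SCADA tag name.
LIMITS = {"BFP_DISCH_PRESS": (4.0, 6.5), "COND_CW_FLOW": (70.0, 110.0)}

def irrational_readings(readings):
    """Return (tag, value) pairs that fall outside their rational limits."""
    flagged = []
    for tag, value in readings.items():
        lo, hi = LIMITS[tag]
        if not lo <= value <= hi:
            flagged.append((tag, value))
    return flagged

# A reading outside its band would prompt a work request before failure.
print(irrational_readings({"BFP_DISCH_PRESS": 7.2, "COND_CW_FLOW": 95.0}))
# -> [('BFP_DISCH_PRESS', 7.2)]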


Preventive & Predictive Maintenance

Plant: Group-1, Location A

System: _____   Unit: _____   SCI: _____   Device: _____   Doc #: _____

Usage Classification: _____

Reason for Test:
[ ] Scheduled   [ ] Post Maintenance   [ ] Elective Maintenance
[ ] Preventive Maintenance   [ ] WO #: _____   [ ] Other: _____

Test Results:   [ ] Continuous   [ ] Pass   [ ] Fail

Comments: _____

Test performed by: _____________________ (Print and Initial)

Test completion and results verified by: _____________________ (Print and Initial)

Reference: _____   Date: _____   Time: _____   Unit: _____   WR Filed: _____   Rev: _____


On-The-Job Training Guide

Objectives

Terminal Objective(s)

Given a laptop computer with access to an Ethernet network that includes the SCADA computer program and procedures, the trainee should be able to demonstrate his or her competency in performing the required operational checks.

Enabling Objectives

Upon completion of this training session, the trainee will:

• Describe how to manipulate the menu to track history events and current events
• Describe the required steps to perform verification and calibration checks on field devices
• State what action should be taken if the data reveals a failed device
• Perform or simulate a verification check on a given device
• Understand how to check the integrity of surveillance data
• Understand how to check the function of the HMI in the main control room for displaying historical process data archived by the SCADA portion of the DES
• Be able to assist Performance Engineering staff in carrying out normal plant surveillance and post-event analysis
• Pass the OPC certification test
• Pass the OPC server black-box test

Prerequisite Training or Qualification(s)

Complete all Computer-Based Training (CBT) on the SCADA and OPC overview

Estimated Training Time

3 hours


On-The-Job Training Guide

Document Number: Training-0001-OJT
Work Group/Training Program: Engineering Staffing and Training
Site: Fossil & Nuclear
OJT #/Title: 0001-OJT - SCADA Precision & Accuracy Checks

Steps: Description and Observable Performance Standard

• Trainer/Trainee obtains the tools and equipment required for the task(s). (Complete)
• Trainer/Trainee performs or simulates physical checks of SCADA system operation, verbalizing actions before executing them using "Touch & Talk" techniques. The Trainer shall actively solicit and encourage dialogue and questions so the Trainee gains understanding. (Complete)
• Trainer/Trainee uses OPC communication drivers to bridge the gap between applications and devices on the SCADA network (see the sketch following this checklist). (Complete)
• Trainer/Trainee explores multiple levels of capability, such as reporting to multiple clients. (Complete)
• Trainer/Trainee creates and manages OPC alarm events. (Complete)
• Trainer/Trainee explores abnormal alarm conditions. (Complete)
• Trainer/Trainee explores abnormal event conditions. (Complete)
• Trainer/Trainee performs an independent lab test that mocks the field process using combined server/client products. (Complete)
• Trainer/Trainee reviews the OPC certification requirements. (Complete)

Instructor Comments:
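To give a rough idea of the client-side checks practiced above, the sketch below uses the open-source OpenOPC package against a simulation server; the server name and tag are assumptions, and the exact API may differ by version and platform:

import OpenOPC  # third-party package; assumes an OPC DA server is reachable

opc = OpenOPC.client()
opc.connect('Matrikon.OPC.Simulation')   # hypothetical server name
value, quality, timestamp = opc.read('Random.Int4')  # hypothetical tag
print(value, quality, timestamp)         # quality should report 'Good'
opc.close()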

