
Chapter 1

Simulation Integration, Acceptance and Testing

It is important to ensure that the designed simulator is error-free in terms of both hardware and software. This chapter addresses the issues other than the modelling of aircraft dynamics and systems.

The chapter explores software and hardware integration and testing: how all the elements are put together into one package and made to function as intended. It thus describes how the simulator building blocks are integrated, step by step, into a cohesive training machine, as well as discussing the nature of the tests performed along the way and the procedure used for acceptance by the customer when the final product is evaluated. The elements of a full flight simulator are given in Appendix A.1. The analysis is broken down into the following sections:

Hardware Integration

Software Integration

Hardware-Software Integration and Testing

Simulator Acceptance

Hardware Integration

The sequence of hardware testing on a simulator varies depending on its construction, but basically begins with initial testing of elements on a stand-alone basis. The following analysis describes the procedures for hardware test and integration of each simulator element individually:

It is apparent from the simulator figure given in the Appendix that the brain of the entire operation is the simulator (host) computer, which coordinates and schedules the operation of all other simulator hardware.

The integration therefore starts by establishing a functional computer and then adding peripheral hardware in a logical sequence.

The basic operating system is generated to match the hardware configuration, and the task scheduler is created to service on-line, background and foreground tasks.


Once this is done, the computer is ready to receive the data files describing the interface configuration, which are created from the lists of memory locations and their corresponding interface addresses.

Those files define the transfer of data between all elements of the flight simulator and the computer.

Cockpit Connections – The cockpit interface, or linkage, consists of a number of circuit boards which perform analogue-to-digital and digital-to-analogue conversion to interface the cockpit instruments and controls to the computational system.

The interface can be tested stand-alone through the use of a test set which will run through all input and output channel addresses. Diagnostic boards are embedded to perform wrap-around testing by using input channels to read outputs. Once the stand-alone test is performed, the interface is linked to the computer, which provides the addressing and read/write requests to drive the interface under real-time computer control.

Once the interface is tested and completely functional, the cockpit instrumentation, switches, lights and circuit breakers are integrated with the interface. The procedure is first of all to check the wiring, then to drive each instrument or light to ensure electrical integrity, and then to perform instrument calibration so that the maximum signal range covers the instrument's full scale. The calibration is important to achieve maximum correlation between computed and displayed instrument values, which would otherwise be limited by interface resolution.
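As a hedged illustration of the resolution point, the sketch below maps a computed instrument value onto a 12-bit digital-to-analogue converter; the bit width, gain and offset are assumptions for the example, not values from any particular interface:

```cpp
// Converting a computed instrument value to DAC counts. With the gain set
// so that the instrument's full scale spans the whole converter range,
// the available resolution is used to best effect.
#include <algorithm>
#include <cstdint>
#include <cstdio>

struct InstrumentCal {
    double gain;    // DAC counts per engineering unit (set at calibration)
    double offset;  // DAC counts at the instrument's zero reading
};

// Map an engineering value (e.g. airspeed in knots) onto a 12-bit DAC.
uint16_t toDacCounts(double value, const InstrumentCal& cal) {
    double counts = cal.gain * value + cal.offset;
    return static_cast<uint16_t>(std::clamp(counts, 0.0, 4095.0));
}

int main() {
    // Calibrated so that 0-400 kt spans the full DAC range.
    InstrumentCal asi{4095.0 / 400.0, 0.0};
    std::printf("250 kt -> %u counts\n",
                static_cast<unsigned>(toDacCounts(250.0, asi)));
    // The smallest displayable change is 400/4096, about 0.1 kt: the
    // interface resolution limit the calibration makes best use of.
}
```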

Control Loading System – The control loading system uses small-stroke (typically 4 in.) hydrostatic hydraulic actuators controlled through a servo valve and linked to the flight control levers.

The servo valve is positioned as a function of a mathematical model of the aircraft feel system which may be implemented through analogue circuitry or microprocessor software.

The model includes the spring gradient, break-out forces and artificial q-feel system, in addition to the cable characteristics, friction and the inertia of the aircraft parts not physically present, both at the control and at the surface.

Models can be implemented either in analogue circuitry or in a microprocessor computer. Most designs of both types allow stand-alone testing to set up the basic feel curves and inertia terms using some form of force-versus-position measuring equipment in the cockpit. In the case of an entirely digital controller, the testing involves calibrating the load unit force and position transducers and establishing a null position of the servo valve. Thereafter, the whole process is done via a video terminal connected to the digital controllers, which run tuning and calibration utilities. Integration with the host computer takes the form of a digital data link transmitting parameters such as friction and dynamic pressure to the controller and receiving the simulated aircraft surface actuator position in return.
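The static part of such a feel model can be pictured as below; the break-out force, spring gradient and q-feel gain are invented numbers for illustration, not aircraft data:

```cpp
// Static feel-force model of the kind described: break-out force plus a
// spring gradient, stiffened by dynamic pressure (q-feel).
#include <cmath>
#include <cstdio>

// Force felt at the control for a given displacement (inches from neutral).
double feelForce(double displacement, double dynamicPressure) {
    const double breakout = 2.0;    // lb, force needed to leave neutral
    const double gradient = 5.0;    // lb per inch, basic spring gradient
    const double qGain    = 0.001;  // q-feel: stiffer control at high speed
    if (displacement == 0.0) return 0.0;
    double spring = gradient * (1.0 + qGain * dynamicPressure);
    return std::copysign(breakout + spring * std::fabs(displacement),
                         displacement);
}

int main() {
    // A force-versus-position sweep of the kind plotted during calibration.
    for (double x = -4.0; x <= 4.0; x += 1.0)
        std::printf("%5.1f in -> %6.2f lb\n", x, feelForce(x, 1200.0));
}
```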

Instructor Operating Station (IOS) – The IOS panel buttons and lights are tested as part of the interface checkout. The graphics display system can also be checked out using test patterns to electrically align the CRT and establish the correct operation of the symbol generator.

The instructor station basic software such as CRT page utilities and the pages themselves can now be installed and checked for format and colour. After interface integration the button logic, including page select and direct instructor controls, can be checked in readiness for the installation of the simulation software.


Motion Platform – Motion platform systems and seat shaker systems are all servo driven mechanisms utilizing analogue or digital servo controllers. Stand-alone tests are used to establish the nominal performance level of the equipment in terms of displacement and dynamic responses. The actuators are tested individually prior to hardware integration to reduce the amount of simulator time used on routine tuning of standard equipment. The construction of this equipment allows complete stand-alone testing with communication to the host CPU through a simple serial data path or by analogue signal.

This philosophy allows such systems to be retrofitted to older-generation simulators through a convenient interface, and for this reason most motion systems contain embedded facilities for stand-alone tuning.

Sound and Audio System – The sound and audio system is used to generate aircraft sounds at frequencies up to 20 kHz and to supply station identification signals, ATIS messages etc. to the crew members via the communication system and speakers. The sophistication of simulator sound systems has now reached the level where a frequency analysis of recorded cockpit sounds is performed, and the identified dominant frequencies and bands of low-frequency noise are linked with events in the simulation such as engine spool speed and the level of thrust.

In a modern digital system, the sound is modelled as a combination of sine waves, square waves and special-function waveforms to create special effects such as the slap of the windshield wipers.
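A minimal sketch of this idea is shown below: one audio sample built as a sum of sine components whose frequencies track engine spool speed. The component count, gains and sample rate are assumptions, not values recovered from recordings:

```cpp
// One audio sample as a sum of sine waves tied to engine spool speed.
#include <cmath>
#include <cstdio>

double soundSample(double t, double spoolSpeedHz) {
    const double pi = 3.14159265358979;
    // Fundamental plus two harmonics; a real system would use the bands
    // identified from frequency analysis of cockpit recordings.
    return 0.6 * std::sin(2 * pi * spoolSpeedHz * t)
         + 0.3 * std::sin(2 * pi * 2 * spoolSpeedHz * t)
         + 0.1 * std::sin(2 * pi * 3 * spoolSpeedHz * t);
}

int main() {
    // First few samples at a 44.1 kHz audio rate for a 120 Hz spool tone.
    for (int n = 0; n < 5; ++n)
        std::printf("%f\n", soundSample(n / 44100.0, 120.0));
}
```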

On propeller aircraft the phasing between the engines is also important. Testing prior to integration is therefore used to calibrate the frequency and amplitude of the sound generator/speaker combination, which must be optimised for the speaker position.

Image Generation and Visuals – The visual image generation equipment (IGE) and display system undergo stand-alone tests with the display CRT or projection device separated from the display optics. The IGE is tested for capacity, and the display optics are aligned with the pilot's nominal eye-point.

The optics are aligned to give the required field of view, after which the CRT is installed and aligned. The total package is integrated with the host CPU through a high-speed link across which aircraft position, attitude and environmental data are passed.

Concluding Remarks

The technical material discussed in the section above should be read alongside the rest of this chapter. To perform efficient hardware tests and to understand the principles behind hardware integration, the following technical topics must be well understood:

Task scheduler modelling and interface configuration

Cockpit circuit boards, with analogue-to-digital and digital-to-analogue conversion algorithms

Basic actuator operation and servo valve positioning

Analogue circuitry and microprocessor software (digital controllers)

Position transducers (digital data link transmission)

Servo-driven mechanisms and actuator testing

Artificial sound modelling, dominant frequency identification and bands of low-frequency noise


Software Integration

Unlike hardware, finding out whether the software is error-free requires a detailed engineering test of the entire aircraft the simulator represents. The simulation represents aircraft systems which interact via mechanical, hydraulic, pneumatic and electrical interfaces. This software operates under the control of the simulator executive, which is responsible for transferring the cockpit information to the simulation software, running the programs and then feeding the information back to the flight deck, where the pilot awaits the reaction. Alteration and testing of simulator software is carried out through utilities used for program and data file modification, which also monitor and manage the simulation software configuration. The flight simulator software components are summarised in Table 1.1.

Table 1.1: Simulator software components

Component | Type | Description
Operating System | Background | Peripheral handlers, file management, compilers, linkers, loaders, editors and debuggers
Simulator Utilities | Background | Configuration management, radio aids stations, CRT page editors
Simulator Utilities | Foreground | Computerised test system (CTS) and real-time debugger (RTD)
Simulator Executive | Foreground | Real-time scheduler
Simulation Software | Foreground | Aero, systems, motion, visual, sound, tactics etc.
Instructor Operating System (IOS) | Foreground | CRT pages, panel functions, map, hardcopy etc.

Communication between the simulator components is done via a common database, where information is stored by one program and read by another. This common data is also accessed by the simulator's interface, which reads instrument values and converts them into the required signal types to drive the displays (such as digital-to-analogue conversion). Values of switch and control positions are deposited in the database to be used by the simulation programs in mathematical and logical equations.
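As a minimal illustration of this pattern (with invented variable names), the common database can be pictured as a shared structure with a fixed layout, written by one program and read by another:

```cpp
// Sketch of the common-database pattern: a shared block of variables with a
// layout known to every simulation program and to the interface tables.
#include <cstdio>

struct CommonDatabase {
    double elevatorPosition;  // written by the cockpit interface, read by aero
    double pitchAttitude;     // written by aero, read by instruments/visual
    bool   gearLeverDown;     // written by the interface, read by systems
};

int main() {
    CommonDatabase db{};
    db.elevatorPosition = -2.5;   // deposited by the interface each frame
    db.gearLeverDown = true;
    // ... the aero model reads the control position and updates attitude ...
    db.pitchAttitude = 3.1;
    std::printf("attitude for display: %.1f deg\n", db.pitchAttitude);
}
```

In a real simulator this block is a shared memory area rather than a local structure, and the interface hardware tables map its entries to channel addresses.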

The development of a simulator can be viewed as two parallel paths – hardware and software – which are joined together and then undergo integrated testing. Prior to this event, if some of the software can be joined together and run in a simulated simulator environment, then much of the testing can be transferred to a test facility, reducing simulator integration time.


Software Development and Pre-Integration Testing

The US military's MIL-STD-1644A (1982) procedure involves a detailed analysis of the programming requirements of each simulator element through the creation of program performance specifications.

The specifications detail the simulation of an aircraft element in mathematical terms, which can then be divided into individual software modules, each of which is individually tested. These modules are then integrated and tested as a whole system. The basic idea of placing the emphasis on simulation analysis and modelling at the beginning of the design cycle is a healthy one, as redesigns during integration or acceptance can prove very costly. Sometimes it is also not possible to test an individual module because it depends on other modules. In short, the simulator development procedure is as follows:

Analysis

Design

Production and testing

Integration

Hardware Integration

The simulator Acceptance and Test Manuals (ATMs), used during software debug, testing and acceptance, are developed during this phase. The ATM is described later in this section.

Software Debug and Integration

A computerised test system (CTS) is used to establish a link between the software developer and the running program. It sets up all the conditions required for tests and can either create graphical plots of critical variables or perform online comparisons against required results, flagging values that are out of tolerance. Once the systems have been tested on a stand-alone basis against the test guide, partial software integration can be undertaken on the software development facility under the control of the CTS.
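As a minimal sketch of the out-of-tolerance comparison a CTS performs, consider the following; the variable names, expected values and tolerances are illustrative assumptions, not data from any test guide:

```cpp
// Sketch of CTS-style out-of-tolerance flagging: each measured value is
// compared against the required result and flagged if the error is too big.
#include <cmath>
#include <cstdio>

struct Check {
    const char* name;
    double expected;   // required result from the test guide
    double measured;   // value read back from the running simulation
    double tolerance;  // allowed deviation
};

int main() {
    Check checks[] = {
        {"trim_speed_kt", 250.0, 249.2, 2.0},
        {"pitch_attitude_deg", 3.5, 4.9, 1.0},   // will be flagged
    };
    for (const Check& c : checks) {
        double err = std::fabs(c.measured - c.expected);
        std::printf("%-18s %s (error %.2f)\n", c.name,
                    err > c.tolerance ? "OUT OF TOLERANCE" : "pass", err);
    }
}
```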

This integration is designed to debug combinations of systems which have a high level of interaction. For example, the aerodynamic model, power plant and autopilot form a highly interactive performance loop which cannot be adequately assessed until they run together.

One major potential pitfall with this approach occurs if the sequence of execution on the test facility does not match that of the simulator as incorrect performance can result due to the introduction or omission of phase delays. This is particularly true of a distributed process approach to simulation where critical control elements reside in separate computers.

Simulator Maintenance

Specific tests are conducted for all the sub-system modules. If the module contains a processor or memory, these will be exercised and tested for failures. The visual system and projection system will be checked for alignment, where patterns of rectangles and dots are used to detect drift in the optical systems. Light levels and grey scales are also checked for illumination intensity, together with variation in colour bands for the red, blue and green components, often caused by ageing of the projector bulbs.


For mechanical systems, tests are made for excessive wear or increased friction. Such tests are typically computer-based, where tests are initiated and the results compared with results from previous bench-mark tests. In addition to problems with wear in actuators, sensors may also fail, with ingress of dirt or hydraulic fluid contamination. When sensors or actuators are replaced, extensive diagnostic software is run to check the sensor: that it operates over its full range, that it has been reconnected correctly and that there is no discontinuity or noise on the sensor input.

In airline flight simulators, the simulator usage is constantly monitored and recorded. Usage is an important issue. If, for example, a simulator has been subjected to heavy landings, large jolts can affect the visual system mirror or reduce the life of projector bulbs or motion platform bearings. All faults are logged and at the end of a training session, the instructor can report any malfunctions or anomalies that occurred during the session. Moreover, to guarantee over 95% availability, the airline will have technical support teams, with specialized knowledge of the simulator modules, to provide fast response to any failures or problems and a large inventory of spares to minimize any down time.

A Concept of Real-Time Simulation

Computation is a very different world. We write software programs and the computer executes the instructions of the program. These instructions may involve adding machine registers, comparing memory locations and jumping from one address in the computer memory to another.

However, each of these instructions takes a finite number of machine cycles and each of these cycles is clocked at the speed of the processor. In other words, for a given computer, a small fragment of code may take several microseconds to execute. This time may also vary depending upon the program and its data. For example, sorting 100 numbers may take much longer than sorting 20 numbers.

In an aircraft, the pilot moves the control column. Assuming direct cable linkage to the control surfaces (and ignoring the inertia of the control surface), the elevator moves immediately, causing a perturbation to the aircraft, which is seen as a change in pitch by the pilot who responds with another movement of the control column to correct the pitch attitude. In a flight simulator, the stick position is sampled, the elevator deflection is computed, a new pitch attitude is computed and an image is displayed by the visual system with the new pitch attitude, enabling the pilot to correct the aircraft attitude. The important point is that the overall time for this computation must be sufficiently short so that it appears instantaneous to the pilot. In a modern simulator, these computations must be completed within 1/50th of a second or 20ms.
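As an illustration of the real-time constraint, a fixed 20 ms frame can be enforced with a loop of the kind sketched below. This is a minimal sketch rather than the architecture of any particular simulator, and the three model functions are empty placeholders:

```cpp
// Minimal fixed-frame real-time loop, assuming the 20 ms (50 Hz) frame
// quoted above. The model functions are placeholders for illustration.
#include <chrono>
#include <cstdio>
#include <thread>

void readControls() {}   // sample the stick position
void runModels()    {}   // elevator deflection, aero, new pitch attitude
void updateVisual() {}   // hand position and attitude to the image generator

int main() {
    using clock = std::chrono::steady_clock;
    const auto frame = std::chrono::milliseconds(20);
    auto next = clock::now();
    for (int i = 0; i < 250; ++i) {   // five simulated seconds
        readControls();
        runModels();
        updateVisual();
        next += frame;
        if (clock::now() > next)      // frame overrun: record the violation,
            std::puts("frame period violated");  // as a real monitor would
        std::this_thread::sleep_until(next);
    }
}
```

The overrun check mirrors the frame-period monitoring discussed below.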

In a safety-critical real-time environment, it is necessary to demonstrate beyond any reasonable level of doubt that the real-time frame rate can never be violated (which may mean no more than once in 10^7 hours, or roughly once every 1141 years). Although flight simulation software is not safety-critical, the real-time constraint must still be fulfilled; usually this is the responsibility of the simulator designer. It is possible to monitor the real-time performance of a flight simulator and record any frame period violations. However, if the frame rate does drop, it is usually apparent to the flight crew, as there are observable discontinuities in the visual system, discernible lags in the aircraft response or even changes of frequency in the sound system outputs.


The simulator designer has, in effect, a time budget to complete all computations within the frame and consequently tries to exploit as much of the frame time as possible, leaving a small margin for error in these estimates (or for future expansion), particularly as some computation times are data dependent. Given all the constraints on the scene content of the visual system, the processing of the flight model, the engine model, the weather model and so on, it is not uncommon for the frame period to be exceeded occasionally, even for a full flight simulator, particularly as the simulator manufacturer may not have full control over the behaviour of a graphics card under all flight conditions. Nevertheless, ensuring real-time performance, particularly for worst-case conditions, is an essential part of system validation and acceptance tests.

Understanding Pilot Cues – Visual and Motion

Motion Cues

While it is clear that a motion platform securely anchored to the ground inside a hangar cannot produce the same forces on the human body as an aircraft that can fly at 600 kt and climb to 60,000 ft, an understanding of the human motion sensing system is essential for two reasons:

It explains the way the human body detects and responds to motion;

It may identify limitations with the human sensing systems that enable forces to be mimicked in a way that is indiscernible to the pilot in a flight simulator.

The motion sensors of the human body comprise the vestibular system and the haptic system. The vestibular system detects the static and dynamic orientation of the head. It also stabilizes the eyes so that clear vision is achieved during movement of the head. In many ways, the vestibular system is equivalent to a gyro-stabilized inertial platform. The haptic system consists of the pressure and touch sensors over the human body, particularly those interacting with the seat, pedals and hand controls.

The vestibular system consists of two sets of sensors, one in each ear, which measure the angular and linear accelerations. The angular accelerations are sensed by the semi-circular canals, organized in three mutually perpendicular planes. At low angular frequency (below 0.1 Hz) they measure acceleration. At high frequency (above 5 Hz) they measure displacement. In the mid range, they measure velocity. Engineers will recognize this behaviour as similar to a second-order filter. These are heavily damped systems with time constants of 0.1 s and approximately 11 s (yaw axis), 5 s (pitch axis) and 6 s (roll axis). There is also a minimum value of acceleration below which no motion is detectable. For flight, this threshold is between 0.5 deg/s² and 2 deg/s². In other words, someone placed inside a large box with no visual stimulus will be unable to detect that they are being moved about one of the three axes if the acceleration is below 0.5 deg/s². If this box is a flight simulator cabin, this observation implies that:

The pilot can be rotated by applying a signal to the motion platform without being aware of the rotation;

Initial motion can be leaked away to allow additional acceleration to be applied, also without the pilot being aware of the additional motion.


This threshold is exploited in motion systems to regain some of the limitation on displacement of the motion platforms used in flight simulation. Whereas the semi-circular canals measure angular acceleration, the otoliths sense linear accelerations. The transfer function is similar to that of the semi-circular canals but with time constants of 0.66 s and 10 s and a threshold of 0.0004 m/s². For combined motion of angular and linear accelerations, the semi-circular canal signals dominate, with the otoliths providing measurements of predominantly linear motion.
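The behaviour described above is often idealised as a heavily damped second-order (torsion-pendulum) model. The form below, relating cupula deflection δ to angular acceleration α, is a textbook-style sketch consistent with the quoted time constants rather than a measured model; the gain k is left unspecified:

```latex
\[
  \frac{\delta(s)}{\alpha(s)} = \frac{k}{(1 + \tau_1 s)(1 + \tau_2 s)},
  \qquad \tau_1 \approx 0.1\,\mathrm{s},\quad
  \tau_2 \approx 5\text{--}11\,\mathrm{s}\ \text{(axis dependent)}
\]
```

In the limits, δ follows angular acceleration at low frequency, angular velocity in the mid band and angular displacement at high frequency, matching the behaviour described above.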

There is one further advantage in knowing the response of the body's acceleration sensors. If the transfer functions of the motion actuators are known, the desired motion of the platform can be matched as closely as possible to the actual motion. Nevertheless, it is important to note that the traditional platform has, for the most part, been abandoned in military simulation in favour of high-quality image generation systems, wide-angle projection systems and the use of G-seats to provide tactile motion cues.

Design of Task Scheduler

This section outlines the basics of task scheduler development from an object-oriented perspective.

The task scheduler is designed to fulfil the following requirements (in summary, the scheduler works as capture, schedule and display):

Recognise and register the common task types

Capture task sub-types (each of which is likely to be a different template of required properties)

Break down the task item into a series of sub-tasks

Establish the dependency relationships between tasks and sub-tasks

Capture time estimates for tasks, dates and other required data

It is also required that, once a predecessor task has been completed, the scheduler allows it to be closed by an end-user. This section addresses this issue, as well as describing the process of notifying a user when all sub-tasks of a certain task have been completed.

Problem Overview

When the workflow begins, a number of sub-tasks are launched. Although it is possible to begin working on these sub-tasks as soon as they appear on a queue, most of them cannot be closed until their predecessor tasks (as defined by the workflow) have been closed. The approach adopted in this chapter is as follows:

For a TASK, all sub-tasks are first distributed

Each of them is then set to a particular status, i.e. "AWAITING SCHEDULING" – this means that work can still be done on them, but they cannot be closed

Once a task is completed and closed, the user can then proceed to the sub-task that follows it

Each of those sub-tasks may have one or more predecessors that have not been completed yet, so the user must check their status; if all of them are complete, the status can be changed to "READY TO WORK" – informing the user that the sub-task can now be worked on

Status can also be set to "IN PROGRESS" – if work is in progress

Finally, once all tasks are accomplished, the status can be set to "WORKFLOW COMPLETE"

Object Oriented Approach

The object-oriented approach allows us to keep the data-processing code as close as possible to the data being processed, and thus helps us see whether we are missing any pieces, or whether a certain piece of code is doing too much work. An example code template is given below, in which each sub-task communicates with the other sub-tasks as well as with its parent task. Each successor, in turn, checks whether it is safe to move to the "READY TO WORK" state; if there are no more successors, the sub-task notifies its parent task of its completion. The parent task then checks that all sub-tasks are done and, if so, informs the end-user and invokes a status change message.
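The sketch assumes invented names (Task, SubTask, refresh, close) and keeps everything in memory; a production scheduler would persist state and support many task types:

```cpp
// Each sub-task notifies its parent on completion; successors then check
// whether every predecessor is complete before moving to READY TO WORK.
#include <cstdio>
#include <string>
#include <vector>

enum class Status { AwaitingScheduling, ReadyToWork, InProgress, Complete };

struct SubTask;

struct Task {
    std::vector<SubTask*> subTasks;
    void subTaskCompleted();          // defined after SubTask, below
};

struct SubTask {
    std::string name;
    Status status = Status::AwaitingScheduling;
    Task* parent = nullptr;
    std::vector<SubTask*> predecessors;

    // A successor may start only when every predecessor is complete.
    void refresh() {
        if (status != Status::AwaitingScheduling) return;
        for (SubTask* p : predecessors)
            if (p->status != Status::Complete) return;
        status = Status::ReadyToWork;
        std::printf("%s: READY TO WORK\n", name.c_str());
    }

    void close() {                    // closed by the end-user
        status = Status::Complete;
        parent->subTaskCompleted();   // notify the parent task
    }
};

void Task::subTaskCompleted() {
    bool allDone = true;
    for (SubTask* s : subTasks) {
        s->refresh();                 // successors may now become ready
        if (s->status != Status::Complete) allDone = false;
    }
    if (allDone) std::puts("WORKFLOW COMPLETE");  // inform the end-user
}

int main() {
    Task t;
    SubTask a{"a"}, b{"b"};
    a.parent = b.parent = &t;
    b.predecessors = {&a};
    t.subTasks = {&a, &b};
    a.close();                        // b becomes READY TO WORK
    b.close();                        // WORKFLOW COMPLETE
}
```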

The class structure in C++ allows members to be categorised as private, public or protected. In addition, a class supports inheritance, meaning that further classes can be built upon an existing class. Thus a very big program can be thought of as consisting of several classes, with objects that each serve their purpose.

Each class may be developed independently, debugged and tested before being integrated into a bigger program. It must be remembered that a class by itself does nothing, and class functions cannot be called on their own. They are called by linking them with the objects of the class on whose data they operate.

So, a class program works only when we create an object or objects of the class. A class contains data variables and functions, which are called data members and function members respectively.

Various class functions operate when they are linked with an object of that class. The function members of the class generate information relevant to the particular object with whose identifier the functions are called. Hence, within the class, the operation of a function member is tied to the objects of the class. The class body comprises data members and function members. The data members comprise abstract data that applies to all objects of the class and is relevant to the aim of the program.

The common characteristics of objects of a class are described by the function members of the class.

The objects of a class carry individual copies of their data unless the data is declared "static", in which case all objects share the same copy of the data. When declaring an object of a class, the class name becomes the type of the object, followed by the object name (e.g. Cars mycar;). With the declaration of a class no memory is allocated; memory is allocated when the class objects are defined.

Multiple objects belonging to the same class are defined in a similar way, separated by commas.

It was seen that within the class, members are labelled as private, public or protected. It must be noted that private members are not accessible to functions outside the class (other than friend functions). Although private members can be accessed via public function members, a much neater way to access them is with the help of pointers, which can indirectly access private data members provided the class contains at least one public data member. In general, to prevent accidental changes, data members are made private while function members are made public. Public members are accessible from inside or outside the class, besides providing an interface to the private members as seen above. The key idea of OOP is to hide the data; in general, the intention is to hide as much as is feasible without impairing the normal execution of the program.

Accessing the Private Data Members in a Class – Role of Constructors and Destructors

Since in a simulation environment many data members and functions are kept hidden (to prevent accidental loss or change), it is necessary to understand how to access these hidden members within a class – this is where constructors and destructors come in. As discussed earlier, one way to access private members is by using a public member function, sometimes defined outside the class; such functions are called after the objects have been created. The same may also be achieved with a constructor function, which is a special public function of the class. The constructor function has the same name as the class. It is used to initialise the object's data variables or to assign dynamic memory during the creation of an object of the class, so that the object becomes operational. In very simple terms, a constructor function initialises the object and does not return any value (it is not even of type void), so nothing is written for its type. The general characteristics of the constructor function are given below:

It is a public function with no return value (but not of type void)

The constructor function has the same name as the class

It may be defined inside or outside the class; however, if it is defined outside the class, its prototype must be declared within the class body

The constructor is automatically called when the object is created

On the other hand, the destructor removes the object from computer memory once its relevance is over. For a local object the destructor is called at the end of the block, enclosed by a pair of braces {}, in which the object is created; for a static object it is called at the end of the main function. Unlike private data members, private functions cannot be called from outside. In order to access these functions, a public function needs to be created which returns the value of the private function; objects can then access these public functions. When defining a function outside the class, remember to use the scope resolution operator (::). The use of pointers to perform this function of calling private members (data or functions) is discussed later in the section.

When pointers are used to call functions or members of a class, a pointer to the defined object must first be declared, and the object's address assigned to it. Once done, the pointer can be used in the form (*ptr).Name() to call a function of the class.
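The short sketch below pulls these points together; the class Car and its members are invented for the example:

```cpp
// A constructor initialising an object, a member defined outside the class
// with the scope resolution operator ::, access through a pointer, and a
// destructor called automatically when the object goes out of scope.
#include <cstdio>

class Car {
private:
    int speed;                     // private: hidden from outside code
public:
    Car(int s);                    // constructor: same name as the class
    ~Car() { std::puts("destroyed"); }  // destructor
    int getSpeed();                // public accessor to the private member
};

Car::Car(int s) : speed(s) {}           // defined outside the class with ::
int Car::getSpeed() { return speed; }

int main() {
    Car mycar(120);                // constructor runs here
    Car* ptr = &mycar;             // pointer to the object
    std::printf("%d\n", (*ptr).getSpeed());  // same as ptr->getSpeed()
    return 0;                      // destructor runs as mycar leaves scope
}
```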

The private data of an object may be accessed through pointers if there is at least one public data member. This is possible because the public and private data members of an object are stored in sequential blocks of memory (although this relies on the compiler's memory layout and is not guaranteed by the language). Thus, if the address of one member is known, the values of the other data members can be determined by incrementing or decrementing the pointer.


Hardware and Software Integration

The integration follows a step-by-step build-up of software, which is then tested in conjunction with the simulator hardware. The process begins with the integrated hardware being driven by the common database resident in the computer complex. The common database is initialised to predetermined values corresponding to some aircraft condition, such as engines off, all busses powered, aircraft on ground, ISA conditions.

Environmental Integration

The first software to be added is that associated with the IOS which enables the simulator environment to be controlled. The actions of all instructor keyboards, buttons and portable control units are tested by examining the results in the common database, the CRT display, the remote display readouts and button lights. The radio aids station database and the position update program can now be added. They define the simulated physical environment in terms of absolute position on the earth’s surface and the ground level altitude above sea level.

One of the major problems in environment integration is matching up the different sources of the same positional information, as the equipment generating the data may not even use the same geographical co-ordinate system and may assume a flat earth or, at best, a purely spherical earth. Aircraft navigation systems must not be fooled by this wealth of conflicting data, because they know what shape the earth really is and can make corrections accordingly.

In commercial aircraft flight simulators the database mismatch is most obvious when correlating radio navigation indications and visual positions relative to the runway touchdown point. The other most common problem is elevation, where the visual system assumes a level database while the flight simulator has an accurate profile of the runway. Unless this is taken into account, the simulated aircraft can end up landing on the glideslope but visually below the airfield.

Cohesion among databases is achieved by selecting one source of positional information as the master, to which all other databases are slaved. The usual selection for a commercial simulator is the radio navigation facility database, which is used to give range, bearing and depression between a facility and the aircraft, and the elevation of the ground below the aircraft. The aircraft position is transformed into the coordinate system used by the visual system, and a table of offsets is created to compensate for inaccuracies at key points such as the runway threshold position or the slope of the runway.
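A sketch of the offset-table idea follows; the key-point names, offset values and the omitted axis transform are assumptions for illustration:

```cpp
// Applying per-key-point corrections after slaving to the master database.
// The full coordinate transform into visual-system axes is omitted here.
#include <cstdio>
#include <cstring>

struct Offset { const char* point; double dx, dy, dz; };

// Corrections measured at key points such as the runway threshold.
const Offset offsets[] = {
    {"RW27L_threshold", 1.2, -0.4, 0.15},
};

// Apply the local correction for a key point, if one exists.
void applyOffset(const char* point, double pos[3]) {
    for (const Offset& o : offsets)
        if (std::strcmp(o.point, point) == 0) {
            pos[0] += o.dx; pos[1] += o.dy; pos[2] += o.dz;
            return;
        }
}

int main() {
    double pos[3] = {1000.0, 50.0, 12.0};  // position from the master database
    applyOffset("RW27L_threshold", pos);
    std::printf("corrected: %.2f %.2f %.2f\n", pos[0], pos[1], pos[2]);
}
```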

Simulation Module Integration

This stage follows the IOS integration stage discussed above, and the build-up follows a logical process starting with the basic ancillary systems listed in Table 2.0 (Flight Simulation, J. M. Rolfe).

The first program is the Electrics program, which supplies simulated power to all circuit breakers (CBs) in response to generator power availability and suitable crew activation of the cockpit electrical distribution controls.


Correct correlation between circuit breaker labels and the common database descriptions can therefore be checked via a cockpit-resident terminal, as can bus dependency, by listing the CBs affected when a bus is taken offline. Once the electrical system is integrated, the simulator can be electrically powered via a ground cart.

The next systems to be integrated are the Auxiliary Power Unit (APU), fuel and pneumatic systems, which when integrated allow the engines to be installed together with all other ancillary systems and the secondary flight controls.

The control feel system is tested and calibrated against the aircraft design and recorded force-versus-position data at the start of the hardware-software integration. This calibration covers both the static and dynamic responses of all the primary flight controls and servo controllers, to ensure correlation between controller performance, measured simulator cockpit forces and measured aircraft forces.

The Primary Flight Controls program is introduced next; the surface position versus control position is measured against the simulator test guide for all on-ground, static conditions.

Extensive use is made of Computerised Test Systems (CTS) during this phase to produce surface and force versus control position plots. The air data computer, flight instruments and radio navigation receivers are also introduced in readiness for the harmonisation between simulator visual database and radio facilities.

In parallel with this stage, the designer performs the on-ground portion of the Acceptance Test Manuals (ATMs) to check the performance of the system. Finally, the simulator is ready for the introduction of the aero package. Once the simulator can fly and be positioned in the air, the in-flight testing of the integrated systems can be completed against the ATM. As all the simulated components of the aircraft performance loop are now present, it is possible to begin flight testing.

Complete subjective evaluation testing is possible after integration of the motion system software, which uses the flight accelerations to provide the pilot with attitude and acceleration stimuli. The phasing of this stimulus relative to the visual and instrument response, and the absence of spurious motion during washout of previous motion displacements to neutral, are important to avoid generating false motion cues.

Possible Errors/Delays for Man-in-the-Loop Testing

Sampling frequency of the control position by the primary flight controls program

Execution of the control loop (aero, position update, distributed systems etc.)

Delays due to the visual image generation system

Advanced Avionics Systems Integration

This is the final phase of simulator integration (including the tactical system integration for military simulators). The advanced avionics and tactical systems to be integrated are:


TCC – Thrust Control Computer

AFDS – Autopilot/Flight Director System

FMC – Flight Management Computer

Radar

PRW – Passive Radar Warning

Weapons and Weapons Aiming

Stores Management

Countermeasures

The first two of these systems provide the link between positional guidance data and the control of the aircraft, and so form part of a critical control loop. Within this system the stability margin of the loop is very narrow; this is due to the build-up of delays on the critical control path, which consists of the following program sequence:

Inertial reference signal computation (software) – autopilot (hardware) – control force simulation (hardware) – flight control surface position computation (software) – aerodynamic computation (software) – inertial platform simulation (software)

A difference of 20 or 30 ms in this control loop can make the difference between a stable and an unstable loop.
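To see why a few tens of milliseconds matter, note that a pure transport delay T contributes a phase lag of ωT at frequency ω, eroding the loop's phase margin. The crossover frequency used below is an illustrative assumption, not a figure from any particular simulator:

```latex
\[
  \Delta\phi = \omega T, \qquad
  \omega = 2\ \mathrm{rad/s},\ T = 0.03\ \mathrm{s}
  \;\Rightarrow\;
  \Delta\phi = 0.06\ \mathrm{rad} \approx 3.4^{\circ}
\]
```

Several such delays in series along the path above can consume a narrow stability margin entirely.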

For tactical system simulation, automatic interaction between the trainee and the threat environment is introduced so that a target can react to a trainee's action. ATMs are written for all tactical systems to ensure correct system operation and interaction with the simulated world. The ultimate test comes when the integrated simulator is flown on a training flight involving navigation, malfunctions, emergency procedures and foul weather.

