Chapter 1
Basic Concepts of Measurement and Instrumentation
1.1 Introduction
Measurement techniques have been of immense importance ever since the start of human
civilization, when measurements were first needed to regulate the transfer of goods in barter
trade to ensure that exchanges were fair. The industrial revolution during the nineteenth century
brought about a rapid development of new instruments and measurement techniques to satisfy
the needs of industrialized production techniques. Since that time, there has been a large and
rapid growth in new industrial technology. This has been particularly evident during the last part
of the twentieth century, encouraged by developments in electronics in general and computers in
particular. This, in turn, has required a parallel growth in new instruments and measurement
techniques.
Measurement systems have vital applications in our everyday lives, whether at home,
in our vehicles, offices or factories. We use measuring devices in buying our fruits and
vegetables. We assume that the measuring devices are accurate, and we assume that we are all
referring to the same units (e.g., kilogram, meter, liter…). The consequence of inaccurate
measuring devices in this case leads to financial losses on our part. We check the temperature of
our homes and assume that the thermostats reading the temperature are accurate. If not, then the
temperature will be either too high or too low, leading to inconvenience and discomfort.
We pay for our electricity in units of kWh and we assume that the electricity meter is accurate
and faithfully records the correct number of electricity units that we have used. We pay for the
water we consume in liters, and we also assume that the water meter is correctly measuring the
flow of water in liters. In this case as well, the error will lead to financial loss.
The accuracy of the measurement systems mentioned above is very important, but is more
critical in some applications than others. For example, a pharmacist preparing a medication is
reliant on the accuracy of his/her scales to make sure he/she includes the correct amounts of
ingredients in the medication. Another example is the manufacturing of present-day integrated
circuits and photo-masks that requires a high degree of accuracy. Certain chemical reactions
require high accuracy in the measurement and control of temperature.
The massive growth in the application of computers to industrial process control and monitoring
tasks has spawned a parallel growth in the requirement for instruments to measure, record and
control process variables. As modern production techniques dictate working to tighter and tighter
accuracy limits, and as economic forces limiting production costs become more severe, so the
requirement for instruments to be both accurate and cheap becomes ever harder to satisfy. This
latter problem is at the focal point of the research and development efforts of all instrument
manufacturers. In the past few years, the most cost-effective means of improving instrument
accuracy has been found in many cases to be the inclusion of digital computing power within
instruments themselves. These intelligent instruments therefore feature prominently in current
instrument manufacturers’ catalogues.
1.2 The Evolution of Measurement
We can look at the evolution of measurement by focusing on invented instruments or by using
the instruments themselves. We will list the steps of progress in measurement, which we define
somewhat arbitrarily, according to human needs as these emerged throughout history:
• The need to master the environment (dimensional and geographical aspects);
• The need to master means of production (mechanical and thermal aspects);
• The need to create an economy (money and trade);
• The need to master and control energy (electrical, thermal, mechanical, and hydraulic aspects);
• The need to master information (electronic and optoelectronic aspects).
In addition to these is the mastery of knowledge, which has existed throughout history and is intimately connected with:
• measurement of time;
• measurement of physical phenomena;
• measurement of chemical and biological phenomena.
1.3 Functions of Measurement Systems
Measurements are made or measurement systems are set up for one or more of the following
functions:
• To monitor processes and operations
• To control processes and operations
• To carry out some analysis
1.3.1. Monitoring
Thermometers, barometers, anemometers, water, gas and electricity meters only indicate
certain quantities. Their readings do not perform any control function in the normal sense. These
measurements are made for monitoring purposes only.
1.3.2. Control
The thermostat in a refrigerator or geyser determines the temperature of the relevant
environment and accordingly switches off or on the cooling or heating mechanism to keep the
temperature constant, i.e. to control the temperature. A single system sometimes may require
many controls. For example, an aircraft needs controls from altimeters, gyroscopes, angle-of-attack sensors, thermocouples, accelerometers, etc. Controlling a variable is rather an involved
process and is therefore a subject of study by itself.
1.3.3. Analysis
Measurement is also made to:
• test the validity of predictions from theories,
• build empirical models, i.e. relationships between parameters and quantities associated with a problem, and
• characterize materials, devices and components.
In general, these requirements may be called analysis.
1.4 Elements of a Measurement System
Most of the measurement systems contain three main functional elements. They are:
I. Primary sensing element
II. Variable conversion element
III. Data presentation element
I. Primary sensing element
• The quantity under measurement makes its first contact with the primary sensing element of a measurement system, e.g. a thermometer, a thermocouple or a strain gauge.
• That is, the measurand (the unknown quantity which is to be measured) is first detected by the primary sensor, which gives an output in a different analogous form.
• This output is then converted into an electrical signal by a transducer (which converts energy from one form to another).
• The first stage of a measurement system is therefore also known as the detector.
II. Variable conversion element
It is needed where the output variable of a primary transducer is in an inconvenient form and has to be converted to a more convenient form, e.g. the displacement-measuring strain gauge has an output in the form of a varying resistance.
III. Data presentation element
The information about the quantity under measurement has to be conveyed to the personnel handling the instrument or the system for monitoring, control or analysis purposes.
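The following is a minimal Python sketch of how these three elements act in sequence for a thermocouple-style temperature measurement. The sensor coefficient and amplifier gain are illustrative assumptions, not data for any real device.

```python
SEEBECK_MV_PER_DEGC = 0.041  # assumed sensor coefficient, mV per degC
AMPLIFIER_GAIN = 100.0       # assumed gain of the conversion stage

def primary_sensing(temperature_degc):
    """Primary sensing element: detects the measurand and produces an
    analogous output (here, a small EMF in millivolts)."""
    return SEEBECK_MV_PER_DEGC * temperature_degc

def variable_conversion(emf_mv):
    """Variable conversion element: turns the inconvenient millivolt
    signal into a more convenient voltage level (volts)."""
    return AMPLIFIER_GAIN * emf_mv / 1000.0

def data_presentation(signal_v):
    """Data presentation element: scales the signal back to engineering
    units and conveys it to the user."""
    displayed = signal_v * 1000.0 / (AMPLIFIER_GAIN * SEEBECK_MV_PER_DEGC)
    print(f"Displayed temperature: {displayed:.1f} degC")

# Pass a measurand of 85 degC through the whole chain.
data_presentation(variable_conversion(primary_sensing(85.0)))
```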
1.5 Instrumentation
Instrumentation is the use of pieces of equipment to supply information concerning some physical quantity (usually referred to as a variable). This variable may be fixed, and thus have the same value for a long time in a given system, or it may be a quantity that changes with time.
Instruments, therefore, are used to provide information about such systems. In providing this information, an instrument carries out one or more of the following three functions.
1. Indicating function: This function may be achieved by a moving pointer on a meter, an
aural or visual alarm, or by flashing numbers or words on a screen to describe the variable
being measured.
2. Recording function: Many instruments not only indicate the value of a variable at a particular instant in time, but can also make a permanent record of this quantity as time progresses, thus carrying out a recording function as well as an indicating function.
Instruments that present the measured variable on a graphic chart, a computer screen, a
magnetic or compact disk, or a printed page carry out the recording function. Today
computers perform these functions by storing data in digital form on media such as
semiconductor memory and magnetic or optical discs.
3. Controlling function: A third function that some instruments perform is that of control.
Controlling instruments can, after indicating a particular variable, exert an influence upon
the source of the variable to cause it to change. A simple example of a controlling instrument
is an ordinary room thermostat. If the room is too cold, the thermostat measures the
temperature and senses that it is too cold; then it sends a signal to the room heating system,
encouraging it to supply more heat to the room to increase the temperature. If, on the other
hand, the thermostat determines that the room is too hot, it turns off the source of heat, and
in some cases supplies cooling to the room to bring the temperature back to the desired
point. In our discussion of temperature control later on, we will look more closely at this
controlling function of instruments; however, for the most part, we will be concerned with
instruments that only indicate and record.
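As a concrete illustration of the controlling function just described, here is a minimal Python sketch of an on/off thermostat decision rule; the setpoint and deadband values are assumed purely for illustration.

```python
def thermostat(measured_temp_degc, setpoint_degc=21.0, deadband_degc=0.5):
    """Controlling function: compare the indicated temperature with the
    desired setpoint and act on the heat source accordingly."""
    if measured_temp_degc < setpoint_degc - deadband_degc:
        return "HEAT ON"   # room too cold: ask the heating system for more heat
    if measured_temp_degc > setpoint_degc + deadband_degc:
        return "HEAT OFF"  # room too hot: turn off the source of heat
    return "HOLD"          # within the deadband: leave the heater as it is

for temp in (19.0, 21.0, 23.0):
    print(f"{temp:.1f} degC -> {thermostat(temp)}")
```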
1.6 Measuring Units and Standards
1.6.1 Units of Physical Quantities
All physical quantities in science and engineering are measured in terms of well-defined units.
The value of any quantity is thus simply expressed as the product of a number (e.g. 100) and a unit of measurement (e.g. meters).
In any system of units, the smallest set of quantities whose units are accepted by definition is called the set of fundamental quantities. In a purely mechanical system, length, mass and time are the fundamental quantities. All other units, which are expressed in terms of the fundamental units, are called derived units.
System of Mechanical Units
There are four general systems of mechanical units used in engineering. These are:
1. Foot-Pound-Second (FPS) system of units
2. Centimeter-Gram-Second (CGS) system of units
3. Meter-Kilogram-Second (MKS) system of units
4. The International System of Units (SI)
The most recent and most widely accepted system is the SI system of units, with seven base quantities as defined below.
Quantity               Unit       Symbol
Length                 meter      m
Mass                   kilogram   kg
Time                   second     s
Current                ampere     A
Temperature            kelvin     K
Luminous intensity     candela    cd
Amount of substance    mole       mol
All other quantities, different from the above seven SI base quantities, are defined as derived quantities.
1.6.2 Standard of Measurement System
A standard is the physical embodiment (typical representative) of a unit defined in the system of units; a precisely known value of a quantity is referred to as a standard. There are different types of standards of measurement, classified on the basis of their function, forming a hierarchy of standards with different levels of accuracy. Accuracy can be defined as a measure of the closeness of the measured magnitude of a given quantity to its true or exact magnitude.
Types of Standards
1. International Standard
International standards are defined by international agreement and represent certain units of measurement to the closest possible accuracy that production and measurement technology allow. These standards have the highest accuracy.
2. Primary Standard
• These are the highest standards of the basic units.
• They are based on the international standards.
• They have international acceptance.
• They have high accuracy.
Primary standards are maintained by standards laboratories in different parts of the world. The main function of a primary standard is the verification and calibration of secondary standards.
3. Secondary Standard
• Secondary standards are copied from the primary standards.
• They have nationwide acceptance.
• They have lower accuracy compared with primary standards.
Secondary standards are the basic reference standards in industrial measurement laboratories.
4. Working Standard
• Working standards are copied from the secondary standards.
• They have local acceptance.
• They have lower accuracy compared with secondary standards.
Working standards are the principal tools of a measurement laboratory. These standards are used to check and calibrate general laboratory instruments for accuracy.
Note: Every standard is checked and calibrated against the next higher standard in the hierarchy.
1.7 Methods and Modes of Measurement
Measurement is a process of comparison between the predetermined standard (known value) and
the unknown value of a quantity (measurand).
The physical quantity which is to be measured by the instrument is called the measurand.
1.7.1 Methods of measurement
There are two methods of measurement.
1. Direct comparison method
In this method the quantity to be measured is directly compared with a standard.
Examples: measurement of mass, measurement of length.
2. Indirect comparison method
In this method the quantity to be measured is indirectly compared with a standard.
Examples: measurement of temperature by a thermometer, measurement of pressure by a manometer.
1.7.2 Modes of measurement
There are three modes of measurement.
1. Primary measurement
In a primary measurement the quantity to be measured is directly compared with a standard.
A primary measurement requires no conversion (translation), so it is a kind of direct comparison method.
2. Secondary measurement
In a secondary measurement the quantity to be measured is indirectly compared with a standard.
This mode requires one conversion (translation), so it is a kind of indirect comparison method.
3. Tertiary measurement
This is also a kind of indirect measurement, but it requires two conversions to determine the unknown value of the quantity.
Example: measurement of temperature by a thermocouple.
1.8 Error in a Measurement System
Any measurement system has an input variable which is the true value of the quantity to be
measured and an output variable which is the measured value. One of the main aims in designing
a measurement system is to minimize the error between the true value and the measured value.
The reason for this error could be one of the following:
a. Systematic errors: These are errors that have a clearly understood explanation within the measurement system. Systematic errors can be sub-divided into:
• Static errors, caused by the static characteristics of the measurement system (effectively the steady-state characteristics).
• Dynamic errors, caused by the dynamic response of the measurement system (the transient response of the device).
b. Random errors, caused by unknown or unpredictable effects.
c. Internal and external noise disturbances.
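The distinction between systematic and random errors can be illustrated with a small Python sketch; the true value, bias and noise level below are assumed for illustration only.

```python
import random

TRUE_VALUE = 100.0       # true value of the input variable (illustrative)
SYSTEMATIC_ERROR = 1.5   # assumed constant bias, e.g. a calibration offset
NOISE_STD_DEV = 0.5      # assumed spread of the random error

def measured_value():
    """One reading = true value + systematic error + random error."""
    return TRUE_VALUE + SYSTEMATIC_ERROR + random.gauss(0.0, NOISE_STD_DEV)

readings = [measured_value() for _ in range(1000)]
mean_reading = sum(readings) / len(readings)

# Averaging many readings suppresses the random error, but the systematic
# error remains as a residual offset from the true value.
print(f"mean reading   = {mean_reading:.2f}")
print(f"residual error = {mean_reading - TRUE_VALUE:.2f}")
```

Averaging many readings reduces the random contribution, but the systematic bias survives and has to be removed by calibration.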
1.9 Definition of Terms
The following terms are often employed to describe the quality of an instrument's reading.
i. Range
The region between the limits within which a quantity is measured, received or transmitted, expressed by stating the lower and upper range values. Examples: 0 to 150 ℉, 20 to 200 psi.
ii. Span
The algebraic difference between the upper and lower range values.
For example:
a) Range 0 to 150 ℉, span 150 ℉.
b) Range -20 to 200 ℉, span 220 ℉.
c) Range 20 to 150 psi, span 130 psi.
iii. Accuracy
The accuracy of an instrument indicates the deviation of the reading from a known value.
iv. Precision
The spread between the instrument's reported values during repeated measurements of the same quantity. Typically, this value is determined by statistical analysis of repeated measurements.
v. Repeatability
The ability of an instrument to reproduce the same measurement each time the same set of conditions is repeated. This does not imply that the measurement is correct, but rather that the measurement is the same each time.
Poor repeatability means poor accuracy.
Good accuracy means good repeatability.
Good repeatability does not necessarily mean good accuracy.
vi. Sensitivity
The change of an instrument or transducer output per unit change in the measured quantity. A more sensitive instrument's reading changes significantly in response to smaller changes in the measured quantity. Typically, an instrument with higher sensitivity will also have better repeatability and higher accuracy.
Sensitivity = Δ(output) / Δ(input)
Example:
If the measured output increases by 100 mV for a temperature change of 4 °C, the sensitivity is
S = ΔV/ΔT = 100 mV / 4 °C = 25 mV/°C
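The same calculation can be expressed as a short Python sketch; the numbers are taken from the worked example above.

```python
def sensitivity(delta_output_mv, delta_input_degc):
    """Sensitivity = change in output per unit change in input."""
    return delta_output_mv / delta_input_degc

# The worked example: 100 mV of output change for a 4 degC input change.
print(sensitivity(100.0, 4.0), "mV/degC")  # -> 25.0 mV/degC
```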
vii. Resolution
The smallest increment of change in the measured value that can be determined from the instrument's readout scale.
viii. Dead space (dead zone)
In process instrumentation, the range through which an input signal may be varied, upon reversal of direction, without initiating an observable change in the output signal. The dead zone is usually expressed in percent of span.
ix. Hysteresis
An instrument is said to exhibit hysteresis when there is a difference in readings depending on whether the value of the measured quantity is approached from above or below. Hysteresis results from the inelastic behaviour of an element or device; in other words, it may be the result of mechanical friction, magnetic effects, elastic deformation, or thermal effects. Hysteresis is expressed in percent of span. The dead band is included in the hysteresis.
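A minimal Python sketch of how hysteresis might be quantified as a percent of span is shown below; the upscale and downscale readings and the span are assumed values for illustration only.

```python
def hysteresis_percent_of_span(upscale, downscale, span):
    """Largest difference between readings taken with the input approached
    from below (upscale) and from above (downscale), as a percent of span."""
    max_diff = max(abs(up - down) for up, down in zip(upscale, downscale))
    return 100.0 * max_diff / span

# Illustrative readings at the same five input points for a 0-150 unit span.
up_readings   = [0.0, 37.0, 74.5, 112.0, 150.0]
down_readings = [1.5, 39.0, 76.0, 113.0, 150.0]
print(f"{hysteresis_percent_of_span(up_readings, down_readings, 150.0):.2f} % of span")
```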
x. Drift
Drift is the change in output that occurs over time; it is expressed as a percentage of the full-range output. Drift is defined as the gradual shift in the indication over a period of time during which the input variable does not change. It is caused by environmental factors such as stray electric fields, stray magnetic fields, thermal e.m.f.s, changes in temperature, mechanical vibrations, etc.
Drift is classified into three categories:
• Zero drift
• Sensitivity drift
• Zonal drift
(Figure: output-versus-input characteristics illustrating zero drift, sensitivity drift and zonal drift.)
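A minimal sketch of how zero drift and sensitivity drift affect a linear instrument characteristic is given below (zonal drift, which occurs over only a portion of the range, is not modelled); the nominal sensitivity and drift values are assumed for illustration.

```python
def instrument_output(reading, zero_drift=0.0, sensitivity_drift=0.0,
                      nominal_sensitivity=1.0):
    """Output of a drifting instrument: zero drift shifts the whole
    characteristic, sensitivity drift changes its slope."""
    return (nominal_sensitivity + sensitivity_drift) * reading + zero_drift

x = 10.0
print("ideal output:          ", instrument_output(x))
print("with zero drift:       ", instrument_output(x, zero_drift=0.4))
print("with sensitivity drift:", instrument_output(x, sensitivity_drift=0.05))
print("with both:             ", instrument_output(x, zero_drift=0.4,
                                                    sensitivity_drift=0.05))
```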
1.10 Analysis of Errors
I. Arithmetic Mean
When a set of readings of an instrument is taken, the individual readings will vary somewhat from each other, and the experimenter is usually concerned with the mean of all the readings. If each reading is denoted by xi and there are n readings, the arithmetic mean is given by
x̄ = (x1 + x2 + … + xn)/n = (1/n) Σ xi
II. Deviation
The deviation, di, of each reading is given by
di = xi − x̄
We may note that the average of the deviations of all the readings is zero, since
Σ di = Σ (xi − x̄) = Σ xi − n·x̄ = 0
The average of the absolute values of the deviations is given by
|d̄| = (1/n) Σ |di|
III. Standard Deviation
The standard deviation, also called the root-mean-square deviation, is defined (in its biased, divide-by-n form) as
σ = √[(1/n) Σ (xi − x̄)²]
IV. Variance
The square of the standard deviation is called the variance: σ² = (1/n) Σ (xi − x̄)².
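A minimal Python sketch of these four statistics, computed on the biased (divide-by-n) basis used in the exercise below, is given here; the sample readings are placeholders, since the data for the exercise are not reproduced in this text.

```python
import math

def biased_statistics(readings):
    """Mean, biased (divide-by-n) standard deviation, variance and the
    average of the absolute deviations for a set of readings."""
    n = len(readings)
    mean = sum(readings) / n
    deviations = [x - mean for x in readings]
    avg_abs_deviation = sum(abs(d) for d in deviations) / n
    variance = sum(d * d for d in deviations) / n   # biased: divide by n
    std_dev = math.sqrt(variance)
    return mean, std_dev, variance, avg_abs_deviation

# Placeholder readings only -- purely illustrative values.
sample = [5.3, 5.7, 6.1, 5.4]
print(biased_statistics(sample))
```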
Exercise
The following readings are taken of a certain physical length. Compute the mean reading, standard deviation, variance and average of the absolute value of the deviation using the biased basis. Ans: (5.613, 0.594, 0.3533, 0.4224).