Data Acquisition Introduction--Prof Borden

Updated 1/11/2010 JWD
Introduction to Data Acquisition & Analysis
In all fields of engineering, data acquisition is a critical step in testing any design or theory. It is important to understand the methods and limitations of measurements in order to design experiments and controls that help analyze physical phenomena. This lab will cover the basics of using the ITLL's LabStation setup, calibrating a force sensor (load cell), and using the force sensor to measure weight.
Transducers are sensors that transform one type of energy (e.g., mechanical or thermal) into a different form of energy (e.g., electrical). Many sensors in this lab convert force, pressure, displacement, and temperature into electrical signals. This electrical signal is often analog (continuous) and must be converted into a digital signal that can be processed by your computer. The digital representation of an analog signal offers several advantages: 1) a digital signal can be stored in volatile (RAM) or permanent magnetic memory, 2) digital signals can be reproduced error-free, and 3) digital signals can be imported into a computer for computation and analysis. However, when digitizing a signal, the analog data between successive digital samples (data points) are lost. The benefits and limitations of digital data acquisition are important for understanding your measurement and will be further explored in this lab.
Data Acquisition Card (DAQ Card)
The device in the computer that reads the analog voltage and converts it to a digital signal is
known as a data acquisition (DAQ) card, and the LabStation computers have a National
Instruments PCI-MIO-16E-4 (PCI 6040E) DAQ card, Figure 1.
Figure 1: MIO-16E-4 DAQ card
This is a “multifunction input-output” card. You can connect directly to the MIO channels via
the “Multifunction I/O Module” section of the LabStation breakout panel, shown in Figure 2.
Also, the military connectors on the side of the LabStation can connect directly to the DAQ card.
Figure 2: LabStation breakout panel (MIO connections).
Locate the channels as you read through their description. The multiple functions of the DAQ
card are described in the following sections.
Analog input
There are eight differential analog voltage inputs, but only seven are actually available for measurement. Analog input ACH 0 is used for communication with the SCXI chassis (described later), while connections ACH 1-ACH 7 on the LabStation breakout panel are available for your measurements (see Figure 2). An analog
input is simply an input port to the DAQ card specifically for continuous, analog signals. The
analog signal is then converted to a digital signal (digitized) by an analog-to-digital converter
(ADC or A/D Converter). The LabStation's ADC has a 12-bit quantizer that maps each analog sample to one of 4096 possible digital values (Figure 3). The ADC can acquire data at rates up to 250,000 samples/s, and the LabStation software can then take the acquired digital data, manipulate it, display it, and save it to a file.
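As a concrete (non-LabVIEW) illustration of this quantization step, the short Python sketch below maps analog voltages onto 12-bit codes for an assumed ±5 V input range; the range, the mid-level reconstruction, and the sample voltages are illustrative choices, not the NI driver's actual behavior.

    # Sketch (not the NI driver): how a 12-bit ADC with a +/-5 V input range maps an
    # analog voltage onto one of 2**12 = 4096 discrete codes.
    V_MIN, V_MAX = -5.0, 5.0          # assumed input range selected in software
    N_BITS = 12                       # PCI-MIO-16E-4 resolution
    N_LEVELS = 2 ** N_BITS            # 4096 possible digital values
    LSB = (V_MAX - V_MIN) / N_LEVELS  # volts per code, ~2.44 mV

    def quantize(v):
        """Return the digital code and reconstructed voltage for one analog sample."""
        code = int((v - V_MIN) // LSB)
        code = max(0, min(N_LEVELS - 1, code))   # clip to the converter's range
        return code, V_MIN + (code + 0.5) * LSB  # mid-level reconstruction

    for v in (0.0, 0.001, 1.2345, 4.999):
        code, v_hat = quantize(v)
        print(f"{v:+.4f} V -> code {code:4d} -> {v_hat:+.4f} V")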
Figure 3. a) The typical components of an ADC; b) the comparison of an analog and a quantized signal.
Analog output
There are two analog outputs, DAC 0 and DAC 1, on the LabStation panel (Figure 2). These
channels have a 12-bit digital-to-analog converter with update (output) rates up to 250 kHz. Note
that these outputs can only deliver 20 mA of current! If these outputs are used to control a motor
or some other power-hungry device, a power amplifier must be used! Otherwise, the DAC
outputs can be damaged.
Digital input/output
There are eight digital channels (DIO 0-DIO 7) that can be configured as input or output. However, only channels 3, 5, 6, and 7 are available for student use because the others are used to communicate with our SCXI data-acquisition modules described later in the lab.
Counter/timers
Two counter/timers, inputs CTR0 and CTR1, can be used for event counting and timing. We use
event counting to find the frequency of a square wave signal. For example, an optical encoder
puts out a square wave whose frequency is proportional to its rotational velocity. Therefore, its
velocity can be measured with our counter. A timer is the opposite of a counter, in that we can
define an output frequency in software and the timer puts out the appropriate square wave.
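As an illustration of the event-counting idea, the Python sketch below counts the rising edges of a simulated square wave over a known gate time and infers its frequency; the sample rate, gate time, and 240 Hz signal are made-up values, and no DAQ hardware or driver is involved.

    # Sketch of event counting: count rising edges of a square wave during a known
    # gate time, then infer the frequency (as a counter/timer channel would).
    import numpy as np

    def count_rising_edges(signal, threshold=0.5):
        """Count low-to-high transitions in a sampled square-wave record."""
        high = signal > threshold
        return int(np.sum(~high[:-1] & high[1:]))

    fs = 100_000                      # assumed sample rate of the simulated record, Hz
    gate_time = 0.5                   # seconds of data examined
    t = np.arange(0, gate_time, 1 / fs)
    f_true = 240.0                    # hypothetical encoder output frequency, Hz
    square = (np.sin(2 * np.pi * f_true * t) > 0).astype(float)

    edges = count_rising_edges(square)
    f_measured = edges / gate_time    # events per second
    print(f"counted {edges} edges -> {f_measured:.1f} Hz (true {f_true} Hz)")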
Reading a Signal using LabVIEW
LabVIEW is a powerful graphical programming tool. The program includes a well-designed driver (called NI-DAQmx) for interfacing with the DAQ card. The power and simplicity of this interface are a direct result of the fact that the hardware and the LabVIEW software are both created by National Instruments. Keep in mind that almost all instruments, including the HP, Agilent, and Tektronix equipment on your LabStation, have LabVIEW drivers that are programmed in a similar manner. A simple program to read a voltage signal will be created below.
Equipment
Function Generator
Oscilloscope
1 short BNC-BNC cable
1 long BNC-BNC cable
1 T-connector
Programming in LabVIEW Procedure
1. Open a blank VI in LabVIEW. LabVIEW programs are called VIs which stands for
Virtual Instrument. Notice two screens will appear. The gray screen, called the front
panel, is the interface that the end user will see. The white screen, called the block
diagram, is where the code is written.
2. Right click on your block diagram. Select the DAQ Assistant, found under Express>>Input, and place it on the block diagram.
3. On the screen that pops up, select Acquire Signals>>Analog Input and then Voltage. Notice the variety of measurement types that can be acquired with the hardware and software available on the LabStation.
4. The next screen is used to select the channels used in the measurement. We want to
acquire our signal directly from the DAQ card (not through one of the filter modules that
will be discussed later in the lab) so open PCI-MIO-16E-4 and select analog channel 1
(ai1). This corresponds to ACH 1 on the LabStation panel. Click finish.
5. Choose the following settings:
a. Under Settings and Signal Input Range, enter the voltage range expected. For
the purpose of this lab enter ±5 V.
b. Under Timing Settings and Acquisition Mode select N Samples. This makes the VI take a series of data points at a specified rate:
i. Set Samples to Read to 250. Samples to Read controls how many samples the DAQ card sends to the computer with each function execution.
ii. Enter the Rate as 10,000 Hz. Rate controls how many samples are taken
per second. The overall execution time of the function, therefore, is
simply the number of samples to read divided by the sampling rate.
c. Press OK.
d. Any of these settings can be changed at any time by double clicking on the DAQ
Assistant.
6. Create a graph to display the data by right clicking on the arrow to the right of data on
the DAQ assistant and selecting Create>>Graph Indicator. On the front panel, a graph
should be present that will display the data.
e. Right click on the graph and select Visible Items>>Graph Palette. This tool,
Figure 4, is used to zoom in and out on the data in the graph.
Figure 4: The graph palette of a LabVIEW graph.
7. Create two controls, which are user-input icons on the front panel, that change the sample rate and the number of samples. Right click on the "rate" arrow on the DAQ Assistant (hold the mouse over the arrow and the wiring tool and name will pop up). Select Create>>Control, and repeat this process for the "number of samples" arrow on the DAQ Assistant. The program should now look like that shown in Figure 5.
Figure 5: The block diagram for the first LabVIEW program.
8. Now use the function generator on top of the LabStation to produce a signal to read.
a. Connect the T-connector to the function generator’s (FG’s) output. Then using
the BNC-BNC cables, connect the FG to channel 1 of the oscilloscope (O-scope)
so the signal can be monitored in real time. Turn both units ON.
b. Adjust the function generator to produce a 1 Vpp (Volt peak to peak), 100 Hz,
sine wave. Double check on the oscilloscope that the output signal is the right
voltage. If not, contact a TA for help.
c. Using a BNC cable, connect the other side of the T-connector to the DAQ card
by using the LabStation panel channel ACH 1.
9. Run the program by pushing the run button (the arrow in the top left-hand corner of the VI). A time waveform should appear in the graph on the front panel. Save the program for later use; it is a simple LabVIEW tool that will help you understand measurement error.
Uncertainty Analysis
Uncertainty analysis is important to make valid conclusions about data. We use uncertainty
analysis to estimate how well we know the absolute value of the item we are measuring. We are
usually concerned with both the single sample uncertainty, and with the statistics of multiple
measurements. Single-sample uncertainty is an estimate of the error in a single measurement.
We use statistics to characterize the variability of multiple measurements and to estimate the
statistical properties of the population that our measurements are sampling.
Below are a few key terms used in uncertainty analysis:
Single-Sample: A single measurement of some quantity.
Sample: Multiple measurements made of the same quantity, but fewer than the entire population.
Population: The entire set of possible values of the quantity being measured.
Single-Sample Uncertainty
Single-sample uncertainty estimates the effect of three types of error on the measurement:
resolution, systematic, and random. Keep in mind that single-sample uncertainty is an estimate of
the error in a single measurement. If we make multiple measurements then we can estimate the
statistical properties of the population, such as mean and standard deviation.
Resolution Error
Resolution is the smallest increment that a measurement system can measure. An easy example is a ruler, as seen in Figure 6.
Figure 6: A typical ruler with a 1mm resolution.
The smallest marked metric increment is 1mm; therefore, 1mm is the resolution of the ruler.
Granted, it is likely possible to tell if a measurement is approximately halfway between lines or
on the line, but resolution is defined as the distance between two successive quantization levels
which, in this case, are millimeter marks.
Question 1)
What is the resolution of the inch side of the ruler?
This same concept applies to any measurement, including the sine wave just acquired with the
above LabVIEW program. As previously mentioned, resolution is equivalent to the quantization
step size which is the distance between two successive quantization levels. There are two types
of resolution with which to be concerned in the sine wave just acquired with the DAQ equipment:
time resolution and amplitude (voltage) resolution.
Figure 7: An analog signal and digital representation of a 0.1 Hz sine wave (n=sample number).
Time Resolution
Figure 7 shows a schematic drawing of the discrete representation of an analog signal, in this case
a 0.10 Hz sine wave. The time-varying, continuous, analog voltage is ‘sampled’ and converted
into discrete values as a function of the sample number and the time between samples, y(nΔt). The sample rate is defined as fs = 1/Δt. The total length of time stored in memory is (N−1)Δt, where N
is the total number of samples collected. The total number of samples taken may be limited by
the memory of the instrument (up to 5 million samples for the Tektronix DPO 3012 oscilloscope),
or may be limited only by the available RAM of the computer used for data acquisition.
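The bookkeeping above can be summarized in a tiny Python sketch; the sample rate and record length used here are example values, not a prescription for the exercise below.

    # Sketch: sample interval and total record length for an assumed sample rate fs
    # and a record of N samples.
    fs = 10_000                   # samples per second (example value)
    N = 250                       # samples stored in memory (example value)

    dt = 1 / fs                   # time between samples, s
    record_length = (N - 1) * dt  # total time spanned by the record, s
    print(f"dt = {dt*1e3:.3f} ms, record length = {record_length*1e3:.1f} ms")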
Time Resolution Procedure
1. Calculate the expected time period between samples and the total sample period for a 100
Hz, 1 Vpp sine wave, acquiring 250 samples at 10,000 samples/sec.
2. To show discrete data points on the VI graph, right click on the graph and go to
Properties>>Plots and select a marker. Click OK.
3. Run the VI with the settings from step 1. Record the total sampling period. Zoom in,
using the graph palette shown in Figure 4, until discrete data points are seen and find the
time interval between each point.
Question 2)
What is the total sampling period and time period between samples from the VI graph? Do they agree with the calculated period and Δt? Show all calculations.
Nyquist Frequency
How well the digital signal represents the original analog signal depends on the sample rate and
number of samples taken. The faster the sample rate, the more closely the analog waveform may
be described with digital (discrete) data. If the sample rate is too low, errors may be experienced
and the nature of the waveform will be lost. Figure 8 shows what happens to the digital
representation of a 10 Hz sine wave when the sample rate is: (b) fs=100 samples/sec, (c) fs=27
samples/sec, (d) fs=12 samples/sec. As the sample rate decreases, the amount of information per
unit time describing the signal decreases. In Figure 8 (b) and (c), the 10 Hz frequency content of
the original signal can be discerned; however, at the slower rate, in (c), the representation of
amplitude is distorted. At an even lower sample frequency, as in (d), the apparent frequency of
the signal is highly distorted. This is called aliasing of the signal. In order to avoid aliasing, the
sample rate, fs, must be at least twice the maximum frequency component of the analog signal:
fs ≥ 2fmax.    (1)
The highest frequency that can be correctly measured at a given sampling rate is known as the Nyquist frequency and is equal to half of the sampling rate.
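The aliasing behavior described above can be previewed with a short Python sketch that samples an assumed 100 Hz sine at several rates and estimates the apparent frequency from zero crossings; this is only a numerical illustration, not the LabVIEW exercise itself, and the phase offset and sample counts are arbitrary choices.

    # Sketch of aliasing: sample a 100 Hz sine at several rates and estimate the
    # apparent frequency from the number of sign changes in the record.
    import numpy as np

    f_signal = 100.0                      # Hz, the analog sine being digitized
    n = 250                               # samples per record
    for fs in (10_000, 500, 200, 150):    # sample rates to compare, Hz
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * f_signal * t + 0.3)   # small arbitrary phase offset
        crossings = np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))
        record = (n - 1) / fs                         # record length, s
        f_apparent = crossings / (2 * record)         # ~2 sign changes per cycle
        print(f"fs = {fs:6d} Hz -> apparent frequency ~ {f_apparent:.1f} Hz")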
Figure 8: Effects of sample rate on digital signal amplitude and frequency.
Nyquist Frequency Procedure
In order to demonstrate the effects of aliasing, the same sine wave will be read at multiple sample
rates.
1. Make sure the function generator is still producing a 100 Hz, 1 Vpp sine wave.
2. Run your program acquiring 250 samples at 10,000 samples/sec.
3. Record the approximate apparent frequency and voltage on the VI. The easiest way to do this is to zoom in on exactly one period and then calculate the frequency from the period.
4. Save a graph of the time waveform while zoomed in enough to see the cycles of the
signal. This can be done by right clicking on the graph of the front panel and selecting
Export Simplified Image… or by dragging a selection box around the graph and
selecting Edit>>Copy and pasting the image in a different file.
5. Repeat steps 3 and 4 for the following sample rates: 500 Hz, 200 Hz, and 150 Hz.
Question 3)
What happens to the apparent frequency and voltage as the sample rate decreases? What is the smallest sample rate possible that still reads an accurate
frequency? Include the pictures of the time waveforms from the above exercise when
answering these questions.
Question 4)
What type of problems can aliasing cause?
Question 5)
Say five cycles of a 4 Vpp, 15-kHz sine wave need to be digitized. What is
the slowest sample rate that will capture the frequency of the signal? How many points
will be captured if all five cycles are digitized at this slowest sampling frequency?
Amplitude (Voltage) Resolution of the A/D Converter (DAQ Card)
As you probably already know, all information stored in a computer is stored in binary numbers.
For example, the number 168 is stored as 10101000. Each 1 or 0 is referred to as a bit, and the entire 8-bit value is referred to as a byte. Due to the binary nature of computers, only certain discrete values can be stored on a digital system.
The digital representation of the analog signal, shown in Figure 9, is discrete in amplitude as well
as time. The increment in time is the inverse of the sample rate; the increment in amplitude is the
amplitude resolution of the A/D converter. The voltage resolution of an A/D converter is given
by
Q = Vfs / 2^n    (2)
where Vfs is the full-scale voltage range and n is the number of bits in the A/D converter. Typical A/D converters have 8, 12, 16, or 24 bits, corresponding to a division of Vfs into 2^8 = 256, 2^12 = 4096, 2^16 = 65,536, or 2^24 = 16,777,216 increments. For example, an 8-bit converter with a -10 to +10 V range has a voltage resolution of 78 mV, and a 16-bit converter with the same voltage range has a resolution of 0.3 mV. In other words, no voltage value can be specified more precisely than the resolution of the A/D converter. Music CDs use the resolution of a 16-bit A/D converter in order to achieve high sound quality. The DAQ card used in the LabStations has a 12-bit converter.
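Equation 2 is easy to tabulate; the Python sketch below evaluates Q for a few common bit depths, assuming a 20 V (−10 V to +10 V) full-scale range.

    # Sketch of Equation 2: voltage resolution Q = Vfs / 2**n for common converter
    # depths, using an assumed 20 V full-scale range (-10 V to +10 V).
    V_FS = 20.0                          # full-scale range in volts (example)
    for n_bits in (8, 12, 16, 24):
        q = V_FS / 2 ** n_bits
        print(f"{n_bits:2d}-bit ADC: Q = {q*1e3:.4f} mV")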
Figure 9: An analog signal (—) read with a 12-bit A/D with a ±5 V range in order to create a digital signal (-*). (The plot shows voltage in volts versus time in seconds.)
Voltage Resolution Procedure
1. Calculate the voltage resolution of the DAQ with a full-scale range of 10 volts. (Remember that the voltage range you entered when setting up the DAQ was ±5 V.) The DAQ system has 12-bit resolution, so every sample is represented by one of 2^12 = 4096 discrete levels.
2. To view the resolution in LabVIEW, you will need a voltage that varies slowly compared
to the sample rate, so decrease the function generator frequency and increase the
sample rate.
3. Run your program and then zoom in on a peak or a trough until you see the discrete
points. Vary the sample rate and/or signal frequency until you can see that several
samples appear to be at the same voltage level, with a jump to several samples at the next
level as in Figure 9. The voltage resolution of the DAQ is just the voltage difference
between points.
Question 6)
What was your measured voltage resolution of the DAQ when the full-scale
range is 10V? What is the expected voltage resolution of a 12-bit A/D converter with a
10V range?
Question 7)
How much memory (MB) is necessary to store 8 minutes of acoustic data (mono) that is digitized at 10 ksamples/s with an 8-bit A/D converter? In stereo (2 data vectors) at 44 ksamples/s with a 16-bit A/D converter? Note that a megabyte is equal to 2^20 bytes (BE CAREFUL!).
Sensors
As we mentioned before, transducers are sensors that transform one type of energy into another type, and, often, they are used to translate a physical phenomenon (e.g., force, pressure, displacement, or temperature) into an electrical signal. There are many types of sensors, some of
which are listed below.
Voltage Transducers
Many sensors, including pressure sensors, load cells and accelerometers, are simple voltage
transducers. They take a mechanical reading, such as force, and transform it into a voltage signal
that can be read by a data acquisition system.
Encoders
Encoders are used to measure rotational velocity and position. They consist of a disc, as seen in Figure 10, mounted to the shaft of a motor. A sensor detects the markings as the disc rotates and outputs a corresponding square wave that is read by the counter on the DAQ card and correlated to velocity and position.
Figure 10: Encoders are used to measure angular velocity and position.
Filtering and Amplification
When measuring with a sensor, the signal is sometimes so small that you will need to amplify it.
Moreover, electrical noise may need to be filtered out to achieve a clean signal. For very small
signals, such as that of a load cell, this filtering is essential. While electrical noise can be at any
frequency, you will find that it will be most often at 60 Hz. This is due to the fact that the AC
power grid in the United States operates at 60 Hz. A few of the filter modules in the signal
conditioning extensions for instrumentation (SCXI) chassis shown in Figure 12 have hardwarebased, low-pass, analog filters for reducing this type of noise.
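As a software illustration of what such a low-pass filter accomplishes (the lab's filters are analog hardware, so this is only an analogy), the Python sketch below uses scipy to attenuate simulated 60 Hz pickup riding on a small, essentially DC signal; the sample rate, signal levels, and filter order are arbitrary.

    # Software sketch of low-pass filtering: remove 60 Hz mains pickup riding on a
    # slowly varying (essentially DC) load-cell signal.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 1_000                                   # assumed sample rate, Hz
    t = np.arange(0, 2, 1 / fs)
    signal = 0.010 + 0.002 * np.sin(2 * np.pi * 60 * t)   # 10 mV DC + 2 mV of 60 Hz noise

    b, a = butter(4, 10, btype="low", fs=fs)     # 4th-order low-pass, 10 Hz cutoff
    clean = filtfilt(b, a, signal)

    print(f"noisy peak-to-peak:    {signal.max() - signal.min():.4f} V")
    print(f"filtered peak-to-peak: {clean.max() - clean.min():.6f} V")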
Figure 11: The LabStation SCXI chassis.
Note that when using an SCXI channel, the SCXI chassis must be powered on using the black
switch on the left side of the chassis!
The SCXI chassis inputs are available on the LabStation panel shown in Figure 12. They can also be accessed using the military connectors on the side of the LabStation. The filter modules used for this experiment simply amplify the entire signal, filter high-frequency noise, and then send the signal to the DAQ card to be converted into digital data.
Figure 12: The LabStation SCXI chassis inputs.
The SCXI module that is used in this lab is the SCXI 1141, and the features of this module are
outlined in Table 1.
The SCXI-1141 is an eight-channel, low-pass filter module with much more sophisticated filters than those in the SCXI-1122. They are eighth-order elliptical filters, and their cutoff frequency can be set to any value from 10 Hz to 25 kHz in software, although the filters themselves are hardware, analog filters. In
addition, this module has software programmable gain of up to 100, which combined with a gain
of 100 for the data acquisition card, allows a gain of up to 10,000. Gains and filter frequencies
can be set separately for each channel. The last four channels (4-7) are AC coupled, meaning that
only time varying signals such as sinusoids can be measured, so DO NOT USE THESE
CHANNELS IF MEASURING A CONSTANT (DC) SIGNAL.
Table 1: SCXI 1141 module specifications.

                                  SCXI 1141
  Full-Scale Voltage Range        ±50 mV to ±5 V
  Filter                          variable
  Channels                        8
Acquiring Data from a Load Cell
You will now write a LabVIEW program to read from your own load cell using a SCXI module
to eliminate noise. The load cell used for this lab is a Measurement Specialties FX1901-001
transducer that you can get from your TA. The procedure of creating the LabVIEW program for
the load cell is outlined below.
1. Find the specifications for your transducer. Many can be found online at the manufacturer's website or by searching for the transducer part number. For this load cell, the specification sheet is in the H:\ITLL Documentation\ITLL Modules\Data Acquisition Intro\Support Docs folder.
2. Most voltage transducers require an excitation voltage that can be found on the
calibration sheet or specifications. If it requires 5Vdc or 15Vdc, use the fixed power
supply located on the LabStation next to the monitor, Figure 13. Any other voltage
required can be supplied using the variable power supply, Figure 14, found on each
LabStation.
Figure 13: Fixed power supply
Figure 14: Variable power supply
3. Next, estimate the response function (or sensitivity) of the transducer (load cell) at the
excitation voltage. The response function is the relationship between the desired quantity
(weight) and the signal (voltage). The response function (sensitivity) can be estimated by
using the full-scale output voltage and the maximum capacity/range (e.g., 10 lbf) of the
transducer. The output voltage is often given in mV/V; multiplying this by the excitation
voltage gives you the full-scale voltage range in mV. The full-scale voltage range is the
voltage output at your largest possible load. You can estimate the response function by
dividing the range of your sensor by the full-scale voltage range.
Example:
Given: Excitation Voltage = 5 Vdc, Output Voltage = 2 mV/V for a 20 lb load cell
Therefore: Full-Scale Voltage Range = (Excitation Voltage)(Output Voltage) = (5 V)(2 mV/V) = 10 mV
Response Function (or Sensitivity) = Max. Load / Full-Scale Voltage Range = 20 lb / 10 mV = 2 lb/mV
A short code sketch of this arithmetic follows.
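A minimal Python sketch of the same arithmetic, using the example's numbers (these are illustrative values, not necessarily your load cell's specifications):

    # The worked example above as a short calculation sketch.
    excitation_V = 5.0        # excitation voltage, V
    output_mV_per_V = 2.0     # full-scale output from the spec sheet, mV/V
    capacity_lb = 20.0        # maximum rated load, lb

    full_scale_mV = excitation_V * output_mV_per_V        # 10 mV at full load
    sensitivity_lb_per_mV = capacity_lb / full_scale_mV   # 2 lb/mV
    print(f"full-scale output: {full_scale_mV:.1f} mV")
    print(f"estimated response function: {sensitivity_lb_per_mV:.2f} lb/mV "
          f"({sensitivity_lb_per_mV * 1000:.0f} lb/V)")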
4. The LabStation equipment will be used to read your voltage signal. Use the SCXI 1141
amplification module, and remember to choose one of the first four channels (0-3).
5. Connect your transducer to the LabStation breakout panel and the power supply with the
wiring diagram found in the specs or calibration sheet. Make sure to connect the output of
the transducer to the appropriate input on the LabStation panel (the 1141 inputs are
clearly marked). Also, use a banana cable to connect the (-) terminal of the power supply
to the AIGND terminal of the appropriate chassis (SCXI 1141) in order to ensure that the
power supply and the data acquisition system have the same ground potential.
6. Open a new VI in LabVIEW.
7. Right click on your block diagram. Select the DAQ Assistant, found under Express>>Input, and place it on the block diagram.
8. On the screen that pops up, select Acquire Signals>>Analog Input and then Voltage,
because that is the output of the load cell.
9. Select the SCXI module and the channel to which you connected your load cell. Note that
the LabStation sides (A and B) have different configurations. This means that you must
change your channel selection in your VI accordingly if you switch between side A and
side B of the LabStations.
10. Choose the following settings for your DAQ assistant:
a. In the Configuration tab, under Voltage Input Setup>>Settings and Signal
Input Range, enter the voltage range that you expect. This should be just a little
larger than your full-scale voltage range that you found in step 3. Remember the
smaller the range, the higher the resolution of your measurement.
b. In the Configuration tab, under Timing Settings and Acquisition Mode select “1
Sample (On Demand).” This has the VI take one voltage sample every time the
loop you will make is completed.
c. In the Device tab under Voltage Input Setup, enable the lowpass filter and select
the cutoff frequency. On the SCXI 1141, the cutoff frequency may be set to any
frequency between 10-10,000 Hz. For this lab, the voltage signal from the load
cell is a DC signal (i.e., it has a frequency of 0 Hz); therefore, set the cutoff
frequency to 10 Hz.
A lowpass filter allows frequency content below the cutoff frequency to "pass through" the filter relatively unaffected, but it greatly attenuates the amplitude of frequencies above the cutoff frequency. Therefore, for low-frequency signals, a lowpass filter is a great way to reduce the high-frequency noise that corrupts the signal. Data-acquisition systems will generally utilize a lowpass filter with a cutoff frequency at or below the Nyquist frequency in order to prevent aliasing.
d. Press OK.
e. Any of these settings can be changed at any time by double clicking on the DAQ
Assistant icon.
11. Place the function within a while loop. Right click on the block diagram, select the loop icon, and choose the While Loop icon. Drag a selection box around the DAQ Assistant icon, which will create the loop structure in your block diagram.
12. If not already present, wire a stop button to the stop function in the bottom right-hand corner of the while loop. Create the stop button by right clicking on the stop function and selecting Create Control. The function will now run continuously until you hit the stop button.
13. The DAQ Assistant VI outputs data in a format called Dynamic Data Type (DDT). To convert from the DDT to simple voltage values, right click on the block diagram, click on Express»Sig Manip»From DDT, and place the icon in your loop. On the screen that pops up, in the Resulting data type list, select Single Scalar for a single-number output. You should not have to change anything else within this window.
14. Wire data from the DAQ assistant to the data-conversion icon by clicking on the arrow
next to data and connecting a wire to the data-conversion icon.
15. To display the voltage recorded by the VI from the load cell, right click somewhere in the front panel and select Express>>Graph Indicators>>Chart. Place the Waveform Chart somewhere on the front panel, and, on the block diagram, wire the From DDT icon to this chart. Be sure the chart is within the while loop. Run the VI to make sure it works.
16. The VI is now collecting data as fast as it can. If you would like to slow it down and set the sample rate, right click in the block diagram, select Programming>>Timing>>Wait (ms), and place the icon in the loop. You can now right click on the left edge of the icon and select Create Control to control the milliseconds between each sample. Set this control to a reasonable value for your measurement.
17. Test the VI with the loop delay, and label the voltage chart appropriately.
Now a chart with the calibrated load values will be created. Please reference Figure 15 for an
example of the completed VI.
18. Place an addition node and control that will be used to account for the zero-load offset of
the load cell. Access the Add function node by right clicking on the block diagram and
selecting Programming>>Numeric>>Add.
19. Create a multiplication node and control that multiplies the voltage signal (less the no-load offset) by a calibration factor. The Multiply function node is found in the same menu as the Add function.
20. On the front panel, create another Waveform Chart.
21. On the block diagram, wire the new function nodes and chart as shown in Figure 15. Be
sure you understand why the wiring is as shown.
22. Add a digital display to the two waveform charts. Do this by right clicking on the chart,
and selecting Visible Items>>Digital Display. Change the number of digits displayed in
the digital display by right clicking on the digital display and changing the number in the
DisplayFormat…>>Digits field.
23. Label all the new load charts and controls so as not to confuse them with the voltage-only
chart and controls.
24. Test the VI.
Figure 15. Front panel (a) and block diagram (b) of VI for weight measurement.
Question 8)
What is the (estimated) response function in lbf/V for your load cell? Show
all of your work.
Calibration
Calibration is the process of determining the instrument response function by measuring known
values of the quantity of interest. The response function you determined above is based on the
sensor and DAQ-card specifications provided by the manufacturers. It is possible that the specific
DAQ card and transducer you are using are slightly different in their response than reported by
the manufacturers. The only way to know for sure is to place masses of known weight on the load
cell and record the output voltage.
Estimating Measurement Errors
Before we perform the calibration, it is important to understand how to deal with errors. The
process of quantifying your understanding of error is called Uncertainty Analysis and is a critical
element of properly conducted experiments.
The minimum uncertainty is always the instrument resolution. In the absence of any other
information about a measurement system, you should quote the resolution as the minimum
uncertainty. However, unless you are measuring something directly, as in a ruler or manometer,
the uncertainty will generally be larger than the resolution because of systematic and/or random
errors. As seen previously for an A/D converter, having a high enough resolution (both in
amplitude and time) is important in order to get an accurate signal that correctly depicts the
voltage.
Systematic Error
Systematic error, also called bias, is error that occurs consistently. Systematic error can only be
determined by comparing measured readings with known values. You can reduce systematic error
by keeping your instruments properly calibrated and operated. It is often the case that you do not
have the equipment to carry out a calibration. For commercially available transducers, the
company that manufactured the transducer will usually provide calibration information. This can
be either a direct calibration of your specific sensor (and connected to a serial number), or a
general calibration for your type of sensor. The calibration information can often be found on the
company website.
Systematic errors include:
Calibration errors
Consistently occurring human errors
Errors of the measurement system
Consistent calculation errors
Uncorrected errors caused by the presence of the instrument
Random Error
Random errors are those that are caused by uncontrolled variability introduced by the measurement system. If you suspect that there might be random errors, an easy way to tell is to make multiple
measurements of a fixed value of the quantity of interest. If there is variability in the
measurements, when you know the value is fixed, then the statistics of that variability
characterize the random error.
Random errors include:
Errors stemming from uncontrolled environmental variations
Inconsistently occurring human errors
Errors derived from lack of sensitivity
Instrumental noise
We use statistics to characterize random error. By measuring a constant quantity multiple times,
we can obtain the statistical properties of the error. There are steps that can be taken to reduce
random errors. For example, keeping the experiment in an environmentally controlled area will
reduce random errors due to laboratory temperature and/or pressure variations.
Question 9)
What is the maximum measurement uncertainty in your load cell, based on
the factory specifications, if you were (hypothetically) measuring a 5 lbf weight? Be
sure to include all sources of uncertainty, and remember that you are eliminating most
of the “Output at No Load” and “Thermal Zero Shift within Operating Range”
uncertainty because you account for the signal offset in your LabVIEW program.
NOTE: The overall measurement uncertainty is that of both the sensor and the rest of the
measurement system. As with the sensor, uncertainty information can be found by looking up the
specifications for our data acquisition system, which comprises both an SCXI filter and the
DAQ card, Figure 16. For the LabStation measurement system, the output of the filter (SCXI)
module always utilizes the ±10 V range of the DAQ card. Therefore, after the signal is digitized
by the DAQ card, the digital signal is divided by the gain used in the SCXI filter; this attenuation
will result in a digital signal of the same magnitude as the original analog signal.
[Figure 16 block diagram: Analog Input → SCXI Module (filter and amplifier) → Analog Signal (range: ±10 V) → DAQ Card (ADC) → Digital Attenuation → Digital Output (amplitude same as analog input).]
Figure 16. The SCXI module amplifies and filters the analog signal so that the signal passed to the DAQ card utilizes the full ±10 V range.
The SCXI and DAQ-card data sheets have detailed instructions on determining your absolute
system accuracy and the data sheets can be found in the H:\ITLL Documentation\ITLL
Modules\Data Acquisition Intro\Support Docs folder. Remember that the absolute accuracy of the DAQ card is based on a voltage range of ±10 V and must also be divided by the "overall gain" of the SCXI module (see the SCXI module's data sheet for the overall gain at a given input-voltage range). Dividing the DAQ card's absolute accuracy by the overall gain mirrors the way the amplified voltage signal is attenuated so that the digital signal properly approximates the amplitude of the original analog input.
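A small sketch of this gain/attenuation bookkeeping may help; the 12-bit, ±10 V card values come from the text above, while the overall gain of 100 is only an assumed example for a small input range.

    # Sketch: the SCXI module amplifies the signal to fill the DAQ card's +/-10 V
    # range, the card digitizes it, and the result is divided by the same gain, so
    # the resolution referred to the sensor improves by the gain.
    daq_range_V = 20.0                 # DAQ card full-scale range (-10 V to +10 V)
    daq_bits = 12
    daq_lsb = daq_range_V / 2 ** daq_bits        # ~4.88 mV per code at the card

    scxi_gain = 100.0                  # assumed overall SCXI gain for a small input range
    effective_lsb = daq_lsb / scxi_gain          # resolution referred to the sensor signal
    print(f"resolution at DAQ card: {daq_lsb*1e3:.3f} mV")
    print(f"resolution referred to the load-cell output: {effective_lsb*1e6:.1f} uV")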
Question 10) What is the maximum uncertainty of the filter and DAQ card for measuring a 5 lbf weight with your load cell if utilizing 1) a ±250 mV range and 2) a ±500 mV range? Comment on the difference. Keep in mind that you are using the SCXI 1141 and the DAQ card PCI 6040E (Total Uncertainty = √((SCXI)² + (DAQ)²)).
WARNING: DO NOT START THIS NEXT SECTION OF THE
LAB UNLESS YOU CAN COMPLETE THE ENTIRE SECTION
IN ONE CLASS PERIOD.
A calibration must be carried out under “constant” conditions. That
means that you must perform the calibration for a specific load cell,
wires and DAQ/filter units. For this reason, we recommend that you
wait until your next lab session unless you are sure you will have time.
Please read the following before you start so that you can reasonably
estimate how long it will take.
Question 11) Using the previously-determined (based on data from manufacturers)
response function in your VI and zeroing the load, what is the weight of one of the
unknown cylinders? What is the single-sample measurement uncertainty for this
weight, based on the specs for your load cell, DAQ card, and filter? Be sure to note
which cylinder you used and your input-voltage measurement range.
Linear Regression and Uncertainty Analysis in Excel or Minitab
You will now calibrate your load cell using the precision weights provided. By calibrating your
specific device you can eliminate the systematic error. For the load cell up to 10 lbf, the
relationship between voltage and weight should be linear, thus linear regression is the best way to
find the sensitivity.
1. Record the voltage output of your transducer at different weight levels in Excel. Do this
for at least 5 different weight levels, and make multiple measurements for each
calibration weight. Multiple measurements of the calibration weights will help determine
if the instrument (sensor, DAQ card and filter) is introducing any noise and/or hysteresis
error. Specifically, to test for hysteresis effects, measure the weights in increasing order from zero to 10 lbf and then in decreasing order from 10 lbf back to zero. Perform at least 2-3 cycles of this
ascending and descending measurement, and then investigate the data for hysteresis
effects. Keep in mind that you have to share the weight set with the rest of your class so
plan accordingly. The available weights and their tolerances can be found in Table 2.
2. In Excel, go to the Data tab >>Analysis>>Data Analysis>>Regression
3. Select the x and y input range.
4. Select the output range to where you want the data to be located.
Ignoring the single sample uncertainty in the voltages, the uncertainty of any single measurement
of weight using the regression is the confidence interval for any “future” measurement. The
weight with its regression uncertainty is given by the equation below
W = Wregression ± tα/2,n−2 √( σ² [ 1 + 1/n + (Vmeasured − V̄)² / SVV ] )    (3)
where σ is the standard deviation of the regression (estimated from the residuals), Wregression is the weight predicted using the regression analysis, n is the number of calibration measurements, Vmeasured is the voltage in question, V̄ is the average voltage signal for the regression line, and SVV is the sum of squared deviations of the voltages from their mean [1].
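If you prefer to check the Excel results, the Python sketch below performs the same linear calibration fit and evaluates the Equation 3 prediction interval; the voltage and weight arrays are hypothetical placeholders for your measured calibration data.

    # Sketch of the calibration regression and the Eq. 3 prediction interval.
    import numpy as np
    from scipy import stats

    V = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])        # measured voltages (V), hypothetical
    W = np.array([1.02, 2.01, 2.97, 4.05, 4.98, 6.01])  # known weights (lbf), hypothetical

    n = len(V)
    slope, intercept = np.polyfit(V, W, 1)              # linear calibration W = a*V + b
    residuals = W - (slope * V + intercept)
    sigma2 = np.sum(residuals**2) / (n - 2)             # residual variance estimate
    S_VV = np.sum((V - V.mean())**2)                    # sum of squared voltage deviations

    def predict_with_uncertainty(v_measured, alpha=0.05):
        """Weight predicted by the regression and its Eq. 3 prediction interval."""
        w = slope * v_measured + intercept
        t = stats.t.ppf(1 - alpha / 2, n - 2)
        half_width = t * np.sqrt(sigma2 * (1 + 1/n + (v_measured - V.mean())**2 / S_VV))
        return w, half_width

    w, dw = predict_with_uncertainty(1.75)
    print(f"W = {w:.2f} +/- {dw:.2f} lbf (95% prediction interval)")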
Table 2: Specifications for NIST-calibrated weights.

  Weight (lbf)    Tolerance (lbf)
  1               ±0.00015
  2               ±0.0002
  5               ±0.0003
  10              ±0.0005
Question 12) What response function (sensitivity) did you find? What is the uncertainty
in your response function? Include the new calibration curve and the uncertainty-analysis spreadsheet.
Question 13) What is the weight of the unknown cylinder that you measured in Question
11 with the new calibrated response function? What is the uncertainty in this weight
due to your calibrated response function? How does this compare to the single-sample
uncertainty you estimated in Question 11?
Propagation of Errors
If you use measured values to calculate another value, you will need to use propagation of errors
to find the measurement uncertainty in your calculation as in Equation 4.
δF(x1, …, xn) = √( (∂F/∂x1 · δx1)² + ⋯ + (∂F/∂xn · δxn)² )    (4)
1. Use the calipers to find the volume of the unknown block.
2. Calculate the density of the unknown block and propagate the errors.
Question 14) What is the equation for propagation of errors, for the density
measurement, in terms of the variables you have?
Statistical Analysis
Statistical uncertainty is used to predict the behavior of an entire population based on a measured
sample. It is important in experimental work because we often make multiple measurements to
obtain estimates of the mean and/or standard deviation of a population. You will recall that the sample mean is often distributed with the normal distribution, as seen in Figure 17.
Figure 17: The sample distribution of a 95% confidence interval.
The confidence interval is the probabilistic range that the mean of the population will fall into at a given level of significance. The confidence interval of the mean is given in Equation 5. It is standard engineering practice to use a significance level of 5%, resulting in a 95% confidence interval as shown in the above figure.
C.I. = ȳ ± tα/2,n−1 (s / √n)    (5)
ȳ is the average of the samples, s is the sample standard deviation, and n is the number of samples. The fewer samples taken, the less likely the data will accurately fit the curve. To compensate for a small sample size, you need to incorporate a t factor, tα/2,ν, where α is the significance level (for a 95% confidence interval, α = 1 − 0.95 = 0.05) and ν is the degrees of freedom (one less than the number of samples, ν = n − 1). With fewer samples, the t correction factor will be larger, and, therefore, the confidence interval will be larger as well. The t factor can be found in a t-distribution table or by using Excel.
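The same confidence-interval calculation in a short Python sketch (the repeated readings are made-up values for illustration):

    # Sketch of Eq. 5: 95% confidence interval of the mean for repeated measurements.
    import numpy as np
    from scipy import stats

    y = np.array([5.02, 4.98, 5.05, 5.01, 4.97, 5.03])   # repeated weight readings, lbf

    n = len(y)
    mean = y.mean()
    s = y.std(ddof=1)                       # sample standard deviation
    t = stats.t.ppf(1 - 0.05 / 2, n - 1)    # t factor for alpha = 0.05, nu = n - 1
    half_width = t * s / np.sqrt(n)
    print(f"mean = {mean:.3f} lbf, 95% C.I. = +/- {half_width:.3f} lbf")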
We can use statistical analysis in connection with calibration to determine the variability
introduced by the measurement system, i.e., random error, and the systematic error. If we take
multiple data points on the same, unvarying, and known specimen, then any variability in the
measured values results from the measurement system and the sample mean can be corrected.
1. Weigh all three unknown cylinders 10 times each.
2. Calculate the mean density for each unknown cylinder.
3. Calculate the 95% confidence interval of the mean density for each unknown cylinder for
the first 3, 5, 7 and 10 measurements.
Question 15) What is the confidence interval of density as a function of the number of
measurements? Show the result in a plot.
4. Now estimate the single-sample uncertainty for each cylinder’s mean density, based on
the resolution of your load cell, DAQ and filter.
Question 16) How does the single-sample measurement uncertainty compare to the
confidence interval for each weight?
When to use Single-Sample Uncertainty vs. Statistical Analysis
The uncertainty you report will depend on the nature of the measurements you make and it is
always important to include how the reported uncertainty was calculated. At the very least, the
single-sample uncertainty should always be estimated. If the single-sample uncertainty is larger than the variability in the measured item, then you cannot reach any valid conclusions from a
statistical analysis. Indeed, this is why it is important to understand the single sample uncertainty,
even when you make multiple measurements.
Question 17) Which uncertainty would you report for your mean density of the unknown
cylinders: the single-sample uncertainty, the uncertainty due to repeat measurements as
quantified by the confidence interval you calculated for your 10 measurements, or both?
Why?
References
1. Navidi, W., Statistics for Engineers and Scientists, McGraw-Hill, 2006.
2. Holman, J.P., Experimental Methods for Engineers, 7th Edition, McGraw-Hill, New York, 2001 (A/D conversion and aliasing, Section 14.5, p. 588).
3. Figliola, R.S. and Beasley, D.E., Theory and Design for Mechanical Measurements, Wiley, New York, 1991, p. 225.
4. Wheeler, A.J. and Ganji, A.R., Introduction to Engineering Experimentation, Prentice-Hall, 1996, Chapters 4 and 5. (This is a short intro to A/D concepts.)
5. Proakis, J.G. and Manolakis, D.G., Digital Signal Processing: Principles, Algorithms, and Applications, Pearson Prentice Hall, New Jersey, 2007.