EUROLAB International Workshop: Investigation and Verification of Materials Testing Machines
Materials Testing Machines
Investigation of error sources and
determination of measurement uncertainty
G. Dahlberg; MTS Systems Corporation, Eden Prairie, USA
In an effort to meet National, International, and Commercial Accreditation
requirements, users of Materials Testing
Machines must establish adequate methods
of determining calibration and test measurement uncertainties. Determining measurement uncertainty for testing machines and
test data can be a very complex process.
Many factors affect the uncertainty of test
data produced by testing machines.
This paper examines measurement
uncertainty contributors associated with the
calibration and use of modern materials testing machines. A list of uncertainty contributors is provided including examples of error
source values and their assumed statistical
distribution types. Methods of minimizing
certain error sources are described. Also included is a short section discussing measurement uncertainty related to the ISO calibration practices ISO 376, Metallic materials – Calibration of force-proving instruments used for the verification of uniaxial testing machines, and ISO 7500-1, Metallic materials – Verification of static uniaxial testing machines – Part 1: Tension/compression testing machines – Verification and calibration of the force-measuring system.
This paper discusses the effect of calibration uncertainties on test data produced by materials testing machines. Examples are given of how operators, materials, testing methods, and testing machine limitations contribute to the measurement uncertainty of test data.
One of two conditions will exist when
testing materials or components. The material or component will either be under tested
or over tested. There is simply no way to perform a materials test that contains zero error
or has a zero value for the measurement
uncertainty. The task then is to identify as
many sources of error as possible and take measures to reduce, or at least quantify, the resultant measurement uncertainty of data and/or results produced by the material testing machine. Under testing a material or
component may lead to safety, warranty, and
liability problems due to premature failure or
damage. This is of particular concern for the
transportation and medical industries. Under
testing conditions exist when the testing
machine end level forces are not achieved
and/or the test speed, frequency, or cycle
count is less than required to meet testing
criteria. Over testing a material or component may lead to waste of time and material
for design, fabrication, and test. Over testing
is expensive and can potentially reduce
competitive advantage. An over testing condition typically occurs when forces exceed
the testing criteria. Under testing and over
testing conditions can occur when acceleration induced forces are present, during transient cyclic overshoot of a start up waveform, when a testing machine is poorly
controlled, or when the system is misadjusted or out of calibration status. So aside from the regulatory accreditation pressures currently being imposed on all areas of test and measurement, there are other excellent reasons
for examining measurement uncertainty related to the operation of material testing
machines.
One common misconception, and an important point of interest, is that Testing Machine
Manufacturers can provide total or combined
measurement uncertainty values for material
testing machines. This is simply not true and
it would be unwise for manufacturers to provide these values. Testing Machine manufacturers design systems and software to
perform specific tests under specified conditions. Many specifications are published as
optimal operating specifications. Few if any
systems are operated under optimal conditions. Sources of measurement uncertainty
should be assessed experimentally for each
type of material, system configuration, and
testing protocol to be performed.
This paper concentrates on the indication
and application of force when performing
tests using material testing machines. The
author acknowledges that material testing
machines are used to apply and measure
additional physical metrology parameters
that may include but are not limited to displacement, torsional force, angular displacement, pressure, and strain. An uncertainty
analysis pertaining to any metrologically significant parameter applied or measured, and reported by the testing machine, should be performed.
Major sources of measurement uncertainty can be grouped into the following categories.
1) Uncertainty due to the calibration equipment and calibration processes
2) Uncertainty of the Testing Machine as
calibrated
3) Uncertainty of the Testing Machine
during use
4) Uncertainty of the Test Results
It must be understood that the uncertainty values presented in these examples are
representative of a particular uncertainty
analysis and do not represent all material
testing machines of any specific type and/or configuration. It should also be recognized that interpretations of error sources and their statistical contributions may vary depending on the method of analysis applied.
Static Calibration Uncertainty
Calibration – The set of operations which establish, under specified conditions, the relationship between values indicated by a measuring instrument or measuring system, or values represented by a material measure or a reference material, and the corresponding values of a quantity realized by a reference standard.(1) (2)

The term calibration has often been associated with the act of making adjustments, when in fact the calibration process provides information so that adequate adjustments can be made if required. Performing a calibration does not always require an adjustment.

Source                                     Uncertainty (3)      Distribution (4)
Primary Force Calibration                  0.05% Class 1 (5)    normal
Force-Proving Device                       0.25% Class 1        normal
Long Term Drift of Calibration Device      0.04%                rectangular
Environment                                0.02%                rectangular
Process Repeatability                      0.02%                normal

Uncertainty Static Calibration (Root Sum Squared Method – RSS):
Usc = √(0.05² + 0.25² + 0.04² + 0.02² + 0.02²) = 0.26%

(1) ISO 10012-1: 1992(E), ISO 7500-1: 1999(E).
(2) ANSI/NCSL Z540-1-1997. Calibration – The set of operations which establish, under specified conditions, the relationship between values indicated by a measuring instrument or measuring system, and the corresponding standard or known values derived from the standard.
(3) Standard uncertainty, expressed as one standard deviation (one sigma).
(4) Statistical distribution. Reference ANSI/NCSL Z540-2-1997; NIS 3003 Edition 8, May 1995.
(5) ISO 376, Metallic materials – Calibration of force-proving instruments used for the verification of uniaxial testing machines, Table 2.
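The combination above can be reproduced in a few lines of code. The following is a minimal sketch (Python, with variable names of my own choosing) of the root-sum-squared combination of the calibration uncertainty contributors listed in the table; the values are treated as standard uncertainties, as in the budget above.

    from math import sqrt

    # Static calibration uncertainty budget (values in %, already expressed
    # as standard uncertainties per footnote 3).
    calibration_sources = {
        "Primary Force Calibration": 0.05,
        "Force-Proving Device": 0.25,
        "Long Term Drift of Calibration Device": 0.04,
        "Environment": 0.02,
        "Process Repeatability": 0.02,
    }

    def rss(values):
        """Root-sum-squared combination of standard uncertainties."""
        return sqrt(sum(v ** 2 for v in values))

    Usc = rss(calibration_sources.values())
    print(f"Usc = {Usc:.2f}%")   # approximately 0.26%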
Primary Force Proving Devices are calibrated in compliance with well documented practices. Although many countries have developed their own calibration/verification procedures, it is likely that most countries will soon adopt ISO 376 as a common guide when calibrating force-proving devices. Table 1 shows the parameters evaluated during an ISO 376 calibration and the associated classification criteria.(6)
I believe that most countries will also be adopting ISO 7500-1 as a guide when calibrating the static force indicating performance of the testing machine. Table 2 shows the measurement parameters and classification criteria for static force calibration of material testing machines.(7)
Environment is an important parameter when calibrating testing machines. The temperature is very closely controlled in laboratories calibrating force-proving devices. ISO 376 requires that the laboratory temperature be maintained within ±1 ºC.(8) The calibration is to be performed within a temperature range of 18 ºC to 28 ºC. Adequate time must also be allowed for the force-proving device to attain a stable temperature. This may take as long as an hour and can be assessed by monitoring the zero force indication.

(6) ISO 376, Table 2.
(7) ISO 7500-1, Table 2.
(8) ISO 376, Section 6.4.3.
Characteristics of force-proving instruments

Class   Relative error, %                                                                      Uncertaintyª of applied
        of reproducibility   of repeatability   of interpolation   of zero    of reversibility    calibration force, %
00      0.05                 0.025              ±0.025             ±0.012     0.07                ±0.01
0.5     0.10                 0.05               ±0.05              ±0.025     0.15                ±0.02
1       0.20                 0.10               ±0.10              ±0.05      0.30                ±0.05
2       0.40                 0.20               ±0.20              ±0.10      0.50                ±0.10

ª The uncertainty of the calibration force is obtained by combining the random and systematic errors of the calibration force.

Table 1
Characteristic values of the force-measuring system

Class of    Maximum permissible value, %
machine     Relative error of                                                      Relative
range       accuracy q    repeatability b    reversibility v    zero fo           resolution a
0.5         ±0.5          ±0.5               ±0.75              ±0.05             0.25
1           ±1.0          ±1.0               ±1.5               ±0.1              0.5
2           ±2.0          ±2.0               ±3.0               ±0.2              1.0
3           ±3.0          ±3.0               ±4.5               ±0.3              1.5

Table 2
(9) ISO 7500-1, Section 6.4.2.
When these devices are used to calibrate
material testing machines, the environment
that the testing machine operates in can be
quite different. ISO 7500-1 requires that the
temperature during machine calibration not
vary more than ±2 ºC.(9) The calibration of
the testing machine shall be carried out at
an ambient temperature between 10 ºC and
35 ºC. Most force proving devices have
temperature compensation gauges or methods for calculating the reference calibration
adjustment due to temperature.
When using dead weights, the force must
be compensated for the local value of gravity
and air buoyancy. I have not found any
evidence that humidity has an effect when
performing calibrations of material testing
systems. However, if force proving devices
are not adequately sealed against humidity,
extended time exposure to high humidity
environments can cause bonding of force
sensing gauges to degrade.
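As a rough illustration of the dead-weight correction mentioned above, the sketch below applies the conventional correction for local gravity and air buoyancy to a nominal mass. The numeric values (local gravity, air and weight densities, mass) are placeholder assumptions, not values from this paper.

    # Sketch: force generated by a dead weight, corrected for local gravity
    # and air buoyancy. All numeric values below are illustrative assumptions.
    local_gravity = 9.80665    # m/s^2, substitute the surveyed local value
    air_density = 1.2          # kg/m^3, ambient air
    weight_density = 7850.0    # kg/m^3, steel dead weight
    mass = 10.0                # kg, nominal mass of the dead weight

    # Buoyancy reduces the effective downward force by the ratio of densities.
    force = mass * local_gravity * (1.0 - air_density / weight_density)
    print(f"Applied force = {force:.3f} N")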
Process Repeatability is determined experimentally by having a number of technicians perform calibrations with the same
equipment. This can be done two ways. The
technicians can use the calibration equipment to calibrate an artifact that simulates a
system, or the technicians can calibrate an
actual testing machine. If the technicians calibrate a testing machine, some of the variance in the data will be due to the testing
machine itself. This is difficult to assess.
Automated calibration systems help reduce
the variance due to process repeatability.
I have not included Alignment as one of
the uncertainty contributors related to calibration. Calibration technicians are normally
trained to ensure proper axial alignment
when doing calibrations. Most systems
would be difficult to fixture if alignment were
in such a condition that the calibration data
would be significantly affected. The force-proving device is rotated and data is acquired during calibration. The force-proving
device’s sensitivity to a normally experienced
out of alignment condition is included in the
combined uncertainty for the device. This is
why fixtures and studs used with force-proving devices should be used when force-proving devices are being calibrated. I did
however experience a rare situation while
calibrating a universal testing machine. This
was a testing machine in which the system
force transducer is physically moved when
switching from Tension to Compression
mode or the reverse. An amount of dirt
and debris present on the mating surface
between the system crosshead and force
transducer caused an out of alignment condition resulting in a 2% shift in the sensitivity
of the force indication device. Therefore it is
important to inspect these mating surfaces
prior to performing a calibration.
Many quality control programs require
that «As-Found» calibration data be obtained
prior to making any changes to a testing
machine. This is good metrology practice
and can provide confidence in data and tests
performed between calibration intervals. On
the other hand this practice can also uncover
evidence that materials or components have
been incorrectly tested. If As-Found calibration data is required, calibration data must
be recorded prior to cleaning or changing the
condition of the machine.
Static Testing Machine Uncertainty

Source              Uncertainty    Distribution
Resolution          0.20%          rectangular
Uncorrected Error   0.20%          rectangular

Uncertainty Static Testing Machine:
Ustm = √(0.2² + 0.2²) = 0.28%
Resolution is often defined as one half
the noise, with the lowest calibration force
applied, or one digit of a digital display,
whichever is greater.(10)
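Following that definition, here is a small sketch (the numeric values are assumptions for illustration) of how the relative resolution contribution might be estimated from the observed noise band and the display increment:

    # Sketch: resolution per the definition above - one half the noise band at the
    # lowest calibration force, or one digit of the display, whichever is greater.
    noise_band = 0.8                  # N, peak-to-peak noise at the lowest calibration force (assumed)
    display_increment = 0.5           # N, value of one digit of the digital display (assumed)
    lowest_calibration_force = 400.0  # N (assumed)

    resolution = max(noise_band / 2.0, display_increment)
    relative_resolution = 100.0 * resolution / lowest_calibration_force
    print(f"Relative resolution = {relative_resolution:.2f}% of the lowest calibration force")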
Uncorrected Error is the amount of error
from the static calibration of the testing
machine. This amount of error could be
equal to the maximum value of the error
allowable specific to the required Testing
Machine Classification. I have chosen 0.2%
because this is normally the largest amount
of error we would leave when recalibrating a
Class 1 type, testing machine. This value
may be as large as ±1.0% for a Class 1
machine.
Repeatability is included in the estimate
for Uncorrected Error. Repeatability may be
treated separately depending on the calibration or evaluation procedures used.
So far we have examined uncertainty
contributors related only to the static calibration and static performance of the testing
machine. This is an important point. Often
this is where the uncertainty analysis ends.
Currently accepted Material Testing Machine
calibration/verification procedures(11) allow
for the calibration of the system at low forces
with certified deadweights. This essentially
means that the force applying apparatus of
the testing system need not be turned on or
running during the calibration. This will provide for very stable and repeatable calibration data, but will not reflect any real world
application. The problem is that very few real
world tests are purely static in nature. It is my
opinion that most tests performed with modern Material Testing Machines are dynamic
to a certain degree. Tests that utilize deadweights for simple proof testing are the
exception.
Most modern material testing machines
provide for some type of closed loop control
of the machine. Simply described, closed
loop means that a signal from one of the system’s calibrated transducers is fed back into
the system’s control circuit to automatically
re-adjust the system to maintain desired levels of force and or displacement. System
performance is greatly influenced by many
factors related to closed loop operation. This
process is occurring continually during a
test. Therefore, the changing physical parameters being applied to the specimen as the
specimen’s physical characteristics change
cause the system to react producing a
dynamic response.
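To make the closed-loop idea concrete, here is a deliberately simplified sketch of a proportional feedback update. Real testing machine controllers are more sophisticated (typically PID or adaptive control), and the gain, set point, and plant response used here are invented purely for illustration.

    # Deliberately simplified proportional force-control loop (illustrative only;
    # real controllers are typically PID or adaptive, and all values are invented).
    set_point = 1000.0      # N, desired force
    gain = 0.2              # proportional gain
    measured_force = 0.0    # feedback from the calibrated force transducer
    command = 0.0           # actuator command, arbitrary units

    for _ in range(200):
        error = set_point - measured_force    # compare feedback with the target
        command += gain * error               # adjust the actuator command
        # Crude stand-in for the actuator/specimen response to the command:
        measured_force = 0.9 * measured_force + 0.1 * command

    print(f"Force after 200 control updates: {measured_force:.1f} N")  # approaches the set point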
Does this mean that having a defined level of confidence in the data produced for any test requires a dynamic calibration or verification? Possibly. But before anyone undertakes such an involved task, there are a number of other things that should be investigated.
(10) ASTM E4-99, Standard Practices for Force Verification of Testing Machines, Sections 3.1.12, 3.1.13.
(11) ISO 7500-1, Section 6.1; ASTM E4-99, Section 1.1.

Testing Machine Uncertainty During Use

The measurement uncertainty contributors in this section have the potential of making all calibration sources of measurement uncertainty insignificant. Keep in mind that sources of uncertainty are combined by the RSS method, which gives added weight to the major contributors. A material testing machine ill suited for a particular test can contribute errors in force measurement and application well in excess of all other combined uncertainties related to the testing machine's use.
I do not have data to provide representative uncertainty values for all listed error
sources. I have designated those error
sources for which I have no objective evidence as Not Available (N/A).
Force Measuring and Application System Effects

Source                                  Uncertainty      Distribution
Drift                                   0.04%            rectangular
Noise                                   0.1%             rectangular
Resolution                              0.5%             rectangular
Stability (servo-hydraulic supply)      N/A              –
Backlash (electro-mechanical)           0.1%             rectangular

Environment

Source                                  Uncertainty      Distribution
Temperature                             0.01%            rectangular
Power Fluctuations                      N/A              –

Specimen Alignment

Source                                  Uncertainty      Distribution
Testing Machine and Grips               N/A              –
Damage to the machine                   N/A              –

Application and Procedural Errors

Source                                  Uncertainty      Distribution
Errors due to system zeroing            0.1%             rectangular
Specimen preparation                    N/A              –
Errors in reading displays              0.5%             normal
Test Speed                              0.2% to > 10%    rectangular

Uncertainty Testing Machine During Use:
Utmdu = √(0.04² + 0.1² + 0.5² + 0.1² + 0.01² + 0.1²) = 0.53%
I have not included uncertainty contribution values for Errors in reading displays or for Test Speed because I have included a fairly large value for resolution. Depending on the type of evaluation and testing being performed, one of the other sources of resolution error may become the dominant contributor, resulting in a value less than or greater than 0.5%.

The combined Testing Machine uncertainty can be expressed:
CUtm = √(Usc² + Ustm² + Utmdu²) or
CUtm = √(0.26² + 0.28² + 0.53²) = 0.65%

The Expanded Testing Machine Uncertainty (k = 2)(12) can be expressed:
EUtm = 2 × CUtm or 2 × 0.65 = 1.3% for a confidence level of 95%

(12) NIS 3003 Edition 8, May 1995; ANSI/NCSL Z540-2-1997.
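A small sketch (Python; the variable names mirror the symbols used above) showing how the combined and expanded machine uncertainty follow from the three intermediate results:

    from math import sqrt

    Usc = 0.26    # % - static calibration uncertainty
    Ustm = 0.28   # % - static testing machine uncertainty
    Utmdu = 0.53  # % - testing machine uncertainty during use

    CUtm = sqrt(Usc**2 + Ustm**2 + Utmdu**2)   # combined (RSS) machine uncertainty
    EUtm = 2 * CUtm                            # expanded uncertainty, coverage factor k = 2 (~95%)

    print(f"CUtm = {CUtm:.2f}%")   # approximately 0.65%
    print(f"EUtm = {EUtm:.1f}%")   # approximately 1.3%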
Force Measuring and Application
System Effects
Drift during use can be related to the system’s inability to control well. The system
may need to wander off the target force by a
relatively large amount before the control
signal makes an adjustment to correct the
system. Drift can also occur due to system
devices changing relative to temperature.
Creep in the system’s force transducer can
add to this uncertainty value as well.
Noise can occur in the form of mechanical noise, electrical noise, or both. Mechanical noise to some degree is almost always
present when the system is running. In modern well designed and controlled systems
operating within normal operating ranges of
measurement and control, the total noise
due to electrical and mechanical influences
may be ±0.1% or less.
Resolution error can be due to noise, data
acquisition capabilities, and/or test speed. If
resolution is evaluated as a factor of noise
during test, the uncertainty contributor for
resolution determined from the static calibration analysis need not be summed in the
total combined uncertainty value. See Test
Speed below for additional explanation.
Stability can be affected by the number
of servo-hydraulic systems on a single
hydraulic supply. It is fairly easy to know
when the pump does not have enough flow
or pressure to produce the end level forces
required. But it is not so easy to assess what
happens to the specimen or system when
running long tests and numerous machines
on the same supply are starting and stopping
tests. The only way to know the effect is to
experimentally test this condition. I have not
included an uncertainty contribution value
because it is highly subject to the type of
testing and type of specimen /component
being tested. Ideally, there would be one
testing machine per supply or only one
machine would be running at one time. This
may not be economically feasible so it is an
area of potential error that should be investigated.
Backlash in electro-mechanically driven
testing machines can influence testing
results. The amount of backlash present can
be assessed during calibration. Backlash
can also be minimized by causing the
crosshead to advance past the point in
which the test is to be started and then
adjusted to the start point of the test in the
intended direction of the test. This is a procedural issue and is only effective for unidirectional tests. If automated bi-directional or
cyclic testing is to be performed with these
types of machines, the uncertainty contribution should be determined and included in the
combined measurement uncertainty.
Environment
Temperature during a test may be significantly different than the temperature during
calibration. An evaluation of the effect of
this difference should be performed. Where
applicable, test results may be corrected
due to temperature differences. Temperature
changes during the test may also affect the
test results. These gradients should be
known and included in the uncertainty analysis.
A typical load cell temperature coefficient specification(13):
Effect on Output – %/ºC Maximum: ±0.0015
Effect on Zero – %RO/ºC Maximum: ±0.0015

(13) Interface, Inc. 1200 series load cell specifications, 2000 product catalog.

We have found in our testing that as long as the load cell temperature remains at or very near to the temperature at which the test is started, the error due to the temperature being significantly different from the temperature at the time of calibration is minimal. Because the system load cell is normally zeroed at the beginning of a test, it is when the temperature then changes during the test that the system force device will contain errors when indicating force. I have
included an amount of uncertainty for this
contributor that would reflect a 5 ºC change
in temperature from the start of a test.
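As a quick check of that 5 ºC figure against the catalog coefficients quoted above, here is a short sketch; the arithmetic is my own illustration, not a calculation from the paper.

    # Worst-case force indication error from a temperature change during the test,
    # using the load cell coefficients quoted above (values in %/degC).
    output_coeff = 0.0015   # effect on output, %/degC
    zero_coeff = 0.0015     # effect on zero, %RO/degC
    delta_T = 5.0           # degC change from the start of the test

    output_error = output_coeff * delta_T   # 0.0075 %
    zero_error = zero_coeff * delta_T       # 0.0075 %RO
    print(f"Output effect: {output_error:.4f}%  Zero effect: {zero_error:.4f}%RO")
    # Each effect is on the order of 0.01%, roughly consistent with the
    # temperature entry used in the "during use" uncertainty budget above.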
Power fluctuations may affect testing
machine operation and data integrity. If the
power source meets recommended manufacturer’s specifications, no value for measurement uncertainty need be included for
this condition.
Specimen Alignment
Good specimen alignment can be critical
to the life of the specimen being tested
and thus important to the data characterizing a particular material property. Testing
Machines and Grips are manufactured to
apply and maintain good alignment. Again, it
is difficult to put an uncertainty value to this
contributor because the value would only be
significant for a specific type of specimen
and test. Some manufacturers produce
alignment devices that are easy to use and
can be adjusted with force applied. I have
found these to work well when good alignment is critical.
Damage to the machine can occur at
any time. Inspection of the system is critical
to maintaining operational performance. A
damaged machine can induce an out of
alignment condition. Poorly maintained systems can lead to seal friction problems,
which can result in loss of hydraulic pressure
and oil leakage during cyclic testing. Servo-hydraulic systems require a clean oil supply
in order to operate optimally. Most manufacturers will offer oil sample testing. Poorly
maintained or damaged electro-mechanical
testing machines may have excessive gear
play and/or drive belts that are stretched.
This can cause excessive backlash and start
up lag times. Assigning a value for measurement uncertainty due to machine wear and
or damage is difficult.
With all potential sources of measurement uncertainty in this section thus far,
round robin testing between different testing
machines and if possible between different
laboratories using adequate reference materials can provide evidence that the combined
measurement uncertainty due to these contributors is minimized and within adequate
control limits.
Application and Procedural Errors
These sources of measurement uncertainty have the potential to be the most significant sources of errors when performing
tests on materials and or components. These
sources of measurement uncertainty are also
the most commonly overlooked sources
when performing an uncertainty analysis.
Common data acquisition zeroing errors
can occur when the test operator arbitrarily
zeros the data acquisition system at the start
of a test. Careful attention and an understanding of the test conditions and testing
system configurations are important; otherwise, significant offsets in the resultant data can be
induced. This source of measurement uncertainty is specific to the test and can have a
wide range of values. Fixturing, preloading,
and backlash can have an effect on this
component. I have included a value for measurement uncertainty equal to the amount
commonly experienced when the system is
not zeroed after performing initial hysteresis
cycling or if hysteresis cycling is not performed.
Specimen preparation is critical to repeatable and predictable results. It is my experience that most testing laboratories do a good job of following established procedures when fabricating and preparing specimens. But even relatively small dimensional machining errors can cause large errors in specimen performance. Again, this is a difficult measurement uncertainty source to quantify. Lot testing can often uncover problems of this type.

Errors in reading displays are quickly becoming a small source of measurement uncertainty due to modern computer controlled testing machines. I still on occasion have situations where customers are trying to determine a force or data point on a stress-strain curve below the first one inch increment on a 10 inch chart recorder. Often the potential error related to this practice is neither assessed nor included in an uncertainty analysis. Errors can easily equal ±0.5%.
Test Speed. The largest errors I have
experienced related to the testing of materials and or components have been due to
inappropriate application of the testing
machine. Testing machines are expensive
and it is not unrealistic to expect users of
testing machines to use their systems for as
many different types of tests as possible.
Many times the system’s limitations are not
well understood. Here is an example of a real
world situation I was involved in.
A testing laboratory designed a test protocol to test a polycarbonate component
used in the medical industry. The test engineers wanted to simulate a patient’s instantaneous arm movement, such as a convulsion or involuntary muscle reaction. This
movement would result in applying force to
the component attached to the patient via
intravenous tubes. A Tensile Pull Test was
designed to determine the force at which
the polycarbonate component would break
under such conditions. A minimum force was
required to validate the component for continued production. The test protocol was
performed for validation of design as well
as for continued production. Manufactured
samples were brought in periodically for lot
qualification testing.
The test was designed to run on an electro-mechanical material testing machine. The
system was selected to move the crosshead
at 20 inches/minute. The specimen to be
tested was fixtured so that there was zero
pre-travel before force was applied. The brittle nature of the polycarbonate specimen
resulted in a test duration that lasted only 0.2
second. There were two basic problems with
this test. The test engineer assumed that the
crosshead would be moving at the expected
speed when the failure occurred. Specimen
failure was defined as the peak force at
which the component physically came apart.
Graph 1 – Testing Machine Crosshead Speed (Crosshead Speed Ramp: Speed, in./min, versus time, milliseconds).
The graph shows that the crosshead was
moving at approximately 16 inches/minute
just prior to specimen failure. The customer’s
test protocol specified that the test speed
have a tolerance of ±5% of desired speed.
The graph shows that the speed at specimen
failure was 20% below the test criteria.
The second mistake was to assume that
the testing machine’s data acquisition system was adequate to capture the force indication at the time of specimen failure. The
customer expected that the error of peak
force indication would be no greater than
±5%. This was also not the case. Table 3 shows the data acquired just prior to specimen failure.

Force (N)    ∆ Force (N)    %
1919         –              –
2106         187            8.9
2301         195            8.5
2507         206            8.5

Table 3 – Data acquired from Tensile Pull Test of a polycarbonate specimen at 20 inches/minute
By examining the data shown in Table 3,
a couple of important things are apparent.
The data in the ∆ Force (N) column shows
the difference in force between sequentially
acquired data points. The difference between
the first two data points is 187 N and the
force between the last two data points prior to specimen failure is 206 N. Because the force is increasing between each acquired data point, we know that the crosshead is still accelerating to the desired test speed. Had the specimen not failed, the next data point would likely have been approximately 216 N, or 8.6%, greater than the previous data point. The system reported the specimen failure at 2507 N. The specimen could have failed anywhere within an 8.6% window above
2507 N. Data from the reference device used
to prove these findings recorded an average
error in the reported peak force at failure of
-7.22% with a maximum error recorded at
-19.47%. This was based on a population of
60 tested samples.
This system was not capable of performing the test within the expected design specifications. There are a couple of things that could have been done in order to run this test closer to the design criteria. In order for the crosshead to be moving at the desired speed when the event occurs, the system needs sufficient time to overcome ramp up conditions. This could be accomplished by designing fixtures that let the crosshead move a distance prior to the application of force on the specimen. In order to acquire test data within the desired ±5% specification, the system's data acquisition system would need to sample much faster. The system had sufficient static resolution, but dynamically it could not acquire data fast enough. Few systems are capable of sample rates sufficient to meet the requirements of this testing protocol. The best way to run this test and acquire data within the accuracy desired is simply to slow the test down. When the test was slowed to 2 inches/minute, the data acquired averaged -0.43% with a maximum error of -1.22%, based on 12 tested samples. This of course would not provide the instantaneous movement the design engineers had originally wanted to simulate.
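A rough sketch of the reasoning behind those numbers: while the force is still rising between samples, the recorded peak can undershoot the true breaking force by up to roughly one inter-sample force increment. The figures below are taken from Table 3; the calculation itself is my own illustration.

    # Sketch: worst-case relative underestimate of a peak force caused by a
    # finite sample rate while the force is still rising.
    def worst_case_peak_error(force_rise_per_sample, recorded_peak):
        """Fraction by which the true peak could exceed the recorded peak."""
        return force_rise_per_sample / recorded_peak

    rise_per_sample = 206.0   # N between the last two recorded points (Table 3)
    recorded_peak = 2507.0    # N, peak force reported by the system

    error = worst_case_peak_error(rise_per_sample, recorded_peak)
    print(f"Peak force could be underestimated by up to {100 * error:.1f}%")
    # Roughly 8%, in line with the ~8.6% window discussed above. Sampling faster
    # (or loading slower) shrinks the force rise per sample and hence the window.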
It is important to note that this test resulted in errors that were conservative in
nature. The actual peak forces required to
cause the specimen to fail were much
greater than the test results would indicate.
Therefore related to safety and liability this
was an overly safe test.
An interesting side note to this scenario
is that the customer would probably never
have become aware of this problem if they
had not purchased a new machine for testing
in the laboratory. They planned to run the
same tests on both machines to increase
testing efficiency and found that the new machine produced data quite different from that of their older machine. While I was there, I tested the older machine as well and found that its results were considerably worse than the new machine's. We determined that the older machine was sampling at 50 samples per second whereas the new machine was sampling at 200 samples per second. Errors in peak force indication for the older machine averaged -35% with a maximum error of -45%. Neither machine was sampling fast enough to produce data within the desired specification.
The equipment used to validate the testing protocol consisted of a certified traceable
high-speed data acquisition system capable
of 100,000 samples per second with 16 bits
of resolution. The data acquisition system
was connected to a certified force-proving
device fixtured in the load train during the
test. I wrote custom software to use with the
system in order to acquire and present the
results.
Uncertainty of Test Results
One method of determining the uncertainty of test results is to obtain the standard
deviation of a series of tests performed with
one particular set of control samples (Ucsr)
on the same machine. The standard uncertainty will include data scatter attributed to
the samples. If reference material samples
are available, tests can be run and the standard deviation derived (Urmsr). These values
are very dependent on the type of materials
and tests performed. The values can easily range from 0.1% to 1.0% and greater. For the purpose of this exercise I will
assign 1.0% to Ucsr and 0.5% to Urmsr.
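One way to obtain such a value is a Type A evaluation: run the control samples repeatedly and take the relative standard deviation of the results. A minimal sketch follows; the peak-force results are invented purely for illustration.

    from statistics import mean, stdev

    # Sketch: standard uncertainty from a series of control-sample tests (Type A).
    # The peak-force results below are invented for illustration only.
    control_results_n = [2490.0, 2512.0, 2475.0, 2530.0, 2498.0, 2508.0]

    Ucsr = 100.0 * stdev(control_results_n) / mean(control_results_n)
    print(f"Ucsr = {Ucsr:.2f}% (relative standard deviation of the control-sample results)")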
The combined measurement uncertainty for the testing machine and testing results can be derived.

Combined Uncertainty of Testing Results:
CUtr = √(Usc² + Ustm² + Utmdu² + Ucsr² – Urmsr²)
CUtr = √(0.26² + 0.28² + 0.53² + 1.0² – 0.5²) = 1.08%

Expanded Uncertainty for the Test Results (k = 2)(14):
EUtr = 2 × CUtr = 2.16% for a confidence level of 95%

(14) NIS 3003; ANSI/NCSL Z540-2-1997.
Note: I have not included any value for
the uncertainty due to errors induced by
acceleration, system resonance, or other
dynamically induced error sources. It is not
recommended that tests be performed
when dynamically induced errors are present.
An evaluation of dynamically induced error
sources is beyond the scope of this paper.
Recommendations

These recommendations are intended to minimize measurement uncertainty and provide increased confidence in data produced by material testing machines.
1.) Once an uncertainty analysis is complete,
concentrate on reducing major sources
of measurement uncertainty.
2.) Know your testing machine’s capabilities.
Design your tests to operate within the
testing machine’s capabilities. Verify
experimentally that all testing protocols
are operating within expected specifications.
3.) Examine the raw test data from your
tests. Verify that the system is reporting
the correct results from the raw data.
4.) Test reference materials in a number of
machines and compare results between
machines. Participate in round robin testing of reference materials between different laboratories.
5.) Keep the testing machine in optimal
operating condition. Have your testing
machine on a scheduled maintenance
program and have it serviced by trained
individuals.
6.) Keep your testing machine in a calibrated
condition. Shorten calibration intervals
based on historical calibration results
where applicable.
Bibliography

[1] ISO 10012-1: 1992 (E), Quality assurance requirements for measuring equipment – Part 1: Metrological confirmation system for measuring equipment.
[2] ISO 376: 1999 (E), Metallic materials – Calibration of force-proving instruments used for the verification of uniaxial testing machines.
[3] ISO 7500-1: 1999 (E), Metallic materials – Verification of static uniaxial testing machines – Part 1: Tension/compression testing machines – Verification and calibration of the force-measuring system.
[4] NIS 3003 Edition 8: May 1995, The Expression of Uncertainty and Confidence in Measurement for Calibrations.
[5] ANSI/NCSL Z540-2-1997, U.S. Guide to the Expression of Uncertainty in Measurement.
[6] ASTM E74-00a, Standard Practice of Calibration of Force-Measuring Instruments for Verifying the Force Indication of Testing Machines. ASTM Volume 03.01.
[7] ASTM E4-99, Standard Practices for Force Verification of Testing Machines. ASTM Volume 03.01.
[8] EAL-G22, Uncertainty of Calibration Results in Force Measurements.
[9] Interface, Inc. Product Catalog 2000.