On the subject of optical testing and rating:

Much controversy now exists between claims from various manufacturers about the wavefront ratings of
optics, especially apo lenses. Markus has asked me to clarify my position on this subject, so that the
confusion over wavefront ratings can be viewed in the light of reason, and further attacks on
manufacturers' credibility can be avoided.
Manufacturers use interferometers to compare the wavefront errors of a finished optic against some
reference standard. In the case of the interferometer, it is a reference sphere of known high quality which is
used to form interference fringes with the optic under test. When testing a mirror, it would not matter what
wavelength was used, since mirrors are totally achromatic. In the case of lenses, it matters greatly what
wavelength is used, since there is typically only one point in the wavelength range where the lens was
nulled or figured by the optician. Testing at another wavelength almost always results in a poorer
wavefront rating.
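As a rough illustration of why the test wavelength matters (a toy Python sketch of mine, with invented numbers, not part of any actual test procedure): a fixed physical error on a mirror simply re-scales when it is expressed in waves at another wavelength, whereas the figure of a lens genuinely changes with wavelength, so no such simple conversion applies to it.

    # Hypothetical numbers: the same physical wavefront error expressed in waves
    # at two different test wavelengths.
    error_nm = 55.0          # physical wavefront error, in nanometers
    lambda_green = 546.1     # green (e-line) test wavelength, nm
    lambda_red = 632.8       # HeNe laser wavelength, nm

    print(error_nm / lambda_green)   # about 0.10 wave when rated at 546 nm
    print(error_nm / lambda_red)     # about 0.09 wave when rated at 632.8 nm
    # For a mirror this re-scaling is the whole story; for a lens the figure
    # itself changes with wavelength, so the rating usually comes out worse.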
There are numerous methods for measuring and interpreting the results, so that testing the same optic on
several different interferometer systems can result in different wavefront numbers. Typically, an optician
will adjust the interferometer to display 6 to 8 lines across the aperture. Computer software then measures
the deviation of these lines from parallelism, and assigns wavefront errors to the interferogram. These are
sometimes further broken down into components such as spherical aberration, astigmatism, coma, etc. This is
repeated a number of times with the fringes tilted to various angles, and the reference optics, mirrors and
beamsplitters may be rotated to eliminate any possibility of local errors being added to the test optic. The
results are averaged in order to get a more accurate and realistic picture of the aberrations. These averaged
results usually have the same RMS rating, but may result in better P-V ratings due to the cancellation of
systematic errors.
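As a loose sketch of what the software is doing with those fringes (a toy Python example of mine, not the actual fringe-analysis code): the P-V is the spread between the highest and lowest points of the measured wavefront, the RMS is the standard deviation over all of them, and averaging several runs knocks down the random part of the error.

    import numpy as np

    def pv_and_rms(wavefront):
        """Peak-to-valley and RMS wavefront error, in waves."""
        w = np.asarray(wavefront, dtype=float)
        return w.max() - w.min(), w.std()

    rng = np.random.default_rng(0)

    # A made-up "true" wavefront: a small defocus-like residual, in waves.
    y, x = np.mgrid[-1:1:64j, -1:1:64j]
    true_map = 0.05 * (x**2 + y**2 - 1)

    # Several runs of the same optic, each with its own measurement noise.
    runs = [true_map + rng.normal(0.0, 0.01, true_map.shape) for _ in range(8)]

    print("one run :", pv_and_rms(runs[0]))
    print("average :", pv_and_rms(np.mean(runs, axis=0)))
    print("true    :", pv_and_rms(true_map))
    # The averaged map has about the same RMS as a single run, but a smaller
    # and more realistic P-V, because the random errors partially cancel.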
There are some systems that use hundreds and even thousands of very tightly spaced fringes (the Zeiss
interferometer system in particular), so that no part of the aperture goes unmeasured. The RMS ratings
will again be similar, but the P-V will be overly pessimistic. I now quote from
the "Bible" of optical shop testing. Malacara states: "The P-V error must be regarded with some skepticism,
particularly when it is derived from a large number of measured data points, as is the case with phase
shifting interferometry. Even relatively large wavefront errors often have little effect on the optical
performance if the error involves only a very small part of the aperture. Because the P-V error is calculated
from just two data points out of possibly thousands, it might make the system under test appear worse than
it actually is. The RMS error is a statistic that is calculated from all of the measured data, and it gives a
better indication of the overall system performance".
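Malacara's point is easy to demonstrate with made-up numbers (a toy Python sketch of mine, not drawn from his book): one bad sample among thousands, say a dust speck at the edge of the pupil, dominates the P-V figure while hardly moving the RMS.

    import numpy as np

    # Thousands of made-up wavefront samples with about 1/20 wave RMS error.
    samples = np.random.default_rng(1).normal(0.0, 0.05, 5000)
    print(samples.max() - samples.min(), samples.std())   # clean P-V and RMS

    # Corrupt a single sample, as a dust speck or a tiny edge defect might.
    samples[0] = 0.5
    print(samples.max() - samples.min(), samples.std())
    # The P-V jumps badly from that one point; the RMS is nearly unchanged.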
I cannot speak for the method used by other manufacturers to rate their optics, but the following is the
method I use. The optic is normally nulled or figured to the best possible inside/outside diffraction pattern
on a double-pass autocollimator using a green light pinhole laser source and a separate white light source.
The pattern is checked for roundness (absence of astigmatism and coma) and smoothness (absence of
roughness and zones). The reference element is then inserted into the optical path, and the corresponding
fringes are captured on the computer using a small CCD camera. I use QuickFringe software developed in
Canada and recommended by Peter Ceravolo, who also made my reference elements. These elements were
certified by equipment at the Canadian Bureau of Standards. Many passes are taken to nullify the effects of
dust particles and slight air disturbances in the optical path and the results are averaged. If the optic passes
the 1/10 wave P-V criterion, it is ready for coating and assembly. If not, it continues to be figured and tested
until it does pass. The test data, in the form of Zernike polynomials, is stored on the computer, along
with the serial number and other pertinent data, so that if a question arises in the future about some lens, it
can be looked up in the database.
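Purely as an illustration of that record-keeping step (a sketch of mine; the field names, values and file format are invented, not what we actually use), the stored data for one lens might look like this:

    import json

    # Hypothetical test record for one lens; every value here is invented.
    record = {
        "serial_number": "SN-0000",
        "test_wavelength_nm": 546.1,
        "zernike_coefficients_waves": {
            "defocus": 0.002,
            "astigmatism_0": 0.004,
            "astigmatism_45": 0.003,
            "coma_x": 0.001,
            "coma_y": 0.002,
            "spherical": 0.006,
        },
        "pv_waves": 0.08,
        "rms_waves": 0.015,
    }

    # Append the record to a simple file keyed by serial number, one per line.
    with open("lens_test_records.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")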
In day to day testing, the optician can pretty quickly tell whether a set of fringes will meet the performance
goals or not. The fringe patterns that are recorded on the computer screen will not be absolutely clean, even
if the optic under test is perfect. There are always dust particles on the reference elements and
autocollimating mirrors, as well as on the beam splitter cubes and laser collimating optics. Since
interferometers are analog devices, this is akin to the clicks and pops that appear on vinyl phonograph
records. This "noise" can cause the software to add spurious data points to the fringes where none should
be, and this will normally worsen the P-V rating, but again, the RMS is unaffected. In order to get a fair
rating for the optics, I average multiple passes, something Peter Ceravolo has recommended. Just
as we would not downgrade the performance of the Chicago Symphony for every little recording noise, so I
do not downgrade the performance of our optics because of interferometer noise. In fact, the QuickFringe
software allows for the averaging of the various interference fringe runs, and then synthesizes a clean
pattern that can be displayed in the final report. Some may call this a "fake" interferogram, but it is based
on real data and is more representative of the actual performance than any one fringe pattern. Anyone who
has watched in frustration the dancing Airy disc interference pattern in his telescope on an unsteady night
can appreciate that the actual performance of the optic is not represented by a single snapshot of that
pattern. Rather, when many patterns are averaged together, the true picture emerges.
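The "clicks and pops" analogy can be mimicked in a few lines (a toy Python sketch of mine, not QuickFringe's actual processing): spurious spikes that land on different points in each run largely vanish when the runs are combined point by point.

    import numpy as np

    rng = np.random.default_rng(2)
    true_map = np.zeros((64, 64))      # a perfect optic, for simplicity

    runs = []
    for _ in range(10):
        run = true_map + rng.normal(0.0, 0.01, true_map.shape)
        # A few "dust" spikes at random points in each individual run.
        idx = rng.integers(0, 64, size=(3, 2))
        run[idx[:, 0], idx[:, 1]] += 0.3
        runs.append(run)

    combined = np.median(runs, axis=0)  # combine the runs point by point

    for m in (runs[0], combined):
        print(m.max() - m.min(), m.std())
    # The spikes inflate the single-run P-V; the combined map is nearly flat.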
Let me summarize, then: the P-V wavefront ratings given by various manufacturers can differ considerably
for the same optic. The rating depends on the test method used, on whether a single set of fringes is
measured or an average of many is used, and on whether the software creates 100 test points or tens of
thousands. The RMS ratings can also vary if the optic has refractive elements in it and is tested at
different wavelengths; this is the case with short-focus refractors and SCTs, and less so with Maksutov
systems. Further variation can crop up if the optic is not supported properly, if it was not allowed to
stabilize after being placed on the interferometer (the warmth of your hands will significantly distort the
figure of a lens for some time), or if the air in the test setup is not stable.
It seems, then, that these numbers are of little use to the final user. They are, however, powerful tools
for the experienced optician, even if they are less than enlightening to the customer. Rather than worrying
about whether the optics meet or exceed a certain number, the user should be concerned with whether the
manufacturer has made them well. The test certificate is just a piece of paper, after all, and cannot gather
light or resolve fine planetary detail. Secondly, customers can and will misinterpret test data. Unless you
have been on the front lines and actually used an interferometer to test and figure an optic, it is difficult
to know what all the numbers and squiggles on that piece of paper really mean. For these reasons, Astro-
Physics does not supply a test report, even though each lens's test data is recorded and stored at our facility.
I hope this will end the test data and spec wars which seem to crop up regularly on SAA.
Roland Christen
ASTRO-PHYSICS