Introduction to Engineering
Camera Lab 3
Procedure
Introduction
The purpose of this lab is to introduce you to some of the issues and procedures involved in
optics, especially as they apply to camera design, and to introduce you to the idea of the
camera as an information appliance, i.e. a device for handling information in a seamless and
simple manner.
As you will (hopefully) have learned from reading the background notes, the critical parameter
for the camera optics is the focal length, or principal distance, of the lens system. The focal
length tells us a lot about the camera’s characteristics and the type of images it will produce,
so it is the first parameter to be selected in the design process. To reverse engineer the
camera, we therefore need to start by determining the focal length.
Goals
Today we will determine two critical parameters in the design of the camera: the focal length
and the depth of field. In addition, we will use measurements from images collected by the
camera to determine the locations of objects visible in those images.
Part I.
Focal Length
You will recall from the notes that we can think about the relationship between objects in the
object space and their images in the image space in terms of similar triangles. The diagram
below demonstrates the general principle.
So if we want to find the focal length, f, we can photograph objects of a known size at a known
distance and, by measuring their images on the film (or a positive contact print, not an
enlargement), compute the focal length. You will recall the photographs you took during the
first lab: we photographed an array of objects at set distances, both from the center-line of the
array and from each side, to give us a stereo-pair.
The objects were three special surveying instruments (now obsolete) called subtense bars.
Their key characteristic is a very precise distance of 2·000 meters between the points of the
markers at the ends of the arms. The bars were set up at distances of 5 meters, 10 meters and
15 meters from the camera, with each bar set at right angles to the line to the camera.
So, we know the value for x, it being 2·000 meters. Values for h are 5, 10 and 15 meters. If we
can measure p on the negatives, we can compute f, because of the similar triangle relationship:
\[ \frac{f}{p} = \frac{h}{x} \qquad\text{therefore}\qquad f = \frac{p\,h}{x} \]
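As a quick sanity check of this relationship (not part of the lab procedure), the calculation can be sketched in a couple of lines; the image length p below is an invented value, not a measurement from the lab:

```python
# Focal length from similar triangles: f = p * h / x.
x = 2.000    # subtense bar length in object space (meters)
h = 10.0     # distance from the camera to the bar (meters)
p = 0.0060   # length of the bar's image on the negative (meters) - hypothetical
f = p * h / x
print(f"focal length f = {f * 1000:.1f} mm")   # 30.0 mm for these made-up numbers
```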
Measuring the Negatives
Procedure
1. Using the Flash Max camera, we shot a number of images of the array of subtense bars.
From the images your group took from the center-line of the array, find the best one. Each
group should view the chosen negative on the light tables and try to identify the subtense
bars (they are all on tripods). It might help if you secure the negative with a little tape at the
edges or corners.
2. Using the loupe, with its scale (to 0·1 mm), measure the length of the image of each of the
subtense bars on the negative. Your measurement should be between the tips of the
(roughly) triangular marks at each end of the bar. Measure each bar several times,
estimating the distance to 0·01 mm, i.e. estimating to one tenth of the finest marks in the
loupe. Each group member should make several measurements.
Note which bar you were measuring. This should be easy to ascertain, because the farther
a bar is from the camera, the smaller its image.
3. Taking the data you have collected as a group, enter it into the spreadsheet on the
computers. The particular worksheet you want in the spreadsheet is named “Focal Length”.
Key the measurements into the columns inside the area surrounded by the blue border.
The spreadsheet will calculate the mean, ranges and standard deviation for each of the sets
of measurements you enter. It will then use the mean to compute the focal length for each
subtense bar, together with the standard deviation for that computed focal length.
The computed values of f are then consolidated into an overall mean, and a standard
deviation for that overall mean is computed (note that this standard deviation is just an
estimate, not particularly rigorous). A code sketch of these calculations appears after this
procedure.
4. Record the results, either by copying the spreadsheet data (to paper or disk) or by printing
the spreadsheet.
5. In your lab report, summarize your results and any significant departures from the above
procedure. Then answer the question set below.
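The sketch below mirrors the calculations the “Focal Length” worksheet performs in step 3; the measurement values are invented placeholders and the spreadsheet’s exact formulas may differ in detail.

```python
import statistics

# Hypothetical repeated measurements of p (mm) for each subtense bar, with the
# known bar length x and the bar distances h (both converted to mm).
x = 2000.0
bars = {
    5000.0:  [12.04, 12.10, 12.07, 12.05],   # p measurements at h = 5 m
    10000.0: [6.02, 6.05, 6.03, 6.04],       # p measurements at h = 10 m
    15000.0: [4.01, 4.03, 4.02, 4.04],       # p measurements at h = 15 m
}

focal_lengths = []
for h, p_values in bars.items():
    p_mean = statistics.mean(p_values)
    p_sd = statistics.stdev(p_values)
    p_range = max(p_values) - min(p_values)
    f = p_mean * h / x                 # focal length from this bar (mm)
    f_sd = f * p_sd / p_mean           # rough propagated standard deviation
    focal_lengths.append(f)
    print(f"h = {h/1000:.0f} m: mean p = {p_mean:.3f} mm, "
          f"range = {p_range:.2f} mm, f = {f:.2f} ± {f_sd:.2f} mm")

# Consolidate the per-bar focal lengths into an overall mean and a rough
# standard deviation for that mean.
print(f"overall f = {statistics.mean(focal_lengths):.2f} mm, "
      f"sd = {statistics.stdev(focal_lengths):.2f} mm")
```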
Question Set 1
(a) How good are your measurements? How can you ascertain this?
(b) How good are your results, i.e., the computed focal lengths? Justify this answer.
(c) How could you improve your results?
(d) How does your precision compare to the rest of the class? Why might it be different?
Part II.
1. Designing the Depth of Field
Determining s'
We would like the camera to provide a good range of different types of images, so we must
design it to keep a wide range of distances in reasonable focus. We will take ‘in reasonable
focus’ to mean a circle of confusion on the focal plane of 0·05 mm (the parameter ‘u’), which is
a distance that you could easily measure with the loupes. When enlarged to a normal print size,
this will still be smaller than about 1 mm, which will not disturb most people’s appreciation of
their pictures.
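As a rough check (assuming a 35 mm-format negative about 36 mm wide, enlarged to a standard 6 × 4 inch print about 152 mm wide, i.e. a little over 4× enlargement), a 0·05 mm circle of confusion on the negative becomes only about 0·2 mm on the print, comfortably under that 1 mm limit.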
We would like to be able to take photos down to about 2 to 3 meters from the camera; this
distance is the s_n parameter. For the far end of the depth of field, we want to take pictures out
to an infinite distance. In practice, infinity (in focal terms) is about 20 meters or so, as almost no
focusing correction is needed beyond that distance. We can also look at the equation:
\[ s_f = \frac{s\,f^2}{f^2 - k\,u\,(s - f)} \]
If we re-arrange this equation for the case where s_f is at infinity (that is, where the
denominator becomes zero) and combine it with the equivalent equation for s_n, we find that
setting s_f = 2·s_n has the same effect: objects are in focus out to infinity.
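To make the ‘denominator zero’ step explicit (treating k as the F/Stop ratio, as the aperture section below suggests), the far limit s_f goes to infinity when

\[ f^2 - k\,u\,(s - f) = 0 \qquad\Longrightarrow\qquad s = f + \frac{f^2}{k\,u} . \]

This value of s is commonly called the hyperfocal distance; in this model the near limit of acceptable focus then sits at exactly half of s, which is the reasoning behind treating s_f = 2·s_n as equivalent to focusing for infinity.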
Following the above discussion, let us adopt s_n = 3 meters and s_f = 6 meters.
What is the ideal focusing distance for the camera lens to be set to? And how does this affect
the placement of the lens in the camera? Let’s explore this through the mathematical model we
have developed with the above equations. For convenience this model has been constructed in
the spreadsheet.
Procedure
1. Returning to the spreadsheet, go to the sheet named “Lens Placement”. Enter the s_n and
s_f parameters in the appropriate areas of the sheet. The s value, the perfect focusing
distance, is calculated using:
\[ s = \frac{2\,s_n\,s_f}{s_n + s_f} \]
2. Given that we know s and the focal length, we can compute the exact distance, s', that the
focal plane must sit from the lens to ensure perfect focus at that distance. This uses the lens
equation:
\[ \frac{1}{s} + \frac{1}{s'} = \frac{1}{f} \]
3. Enter the value you got for f on the previous sheet and the spreadsheet will compute the
value of s' (a sketch of this calculation appears after this procedure).
4. Measure the distance between the focal plane of the camera (where the film would sit) and
the back of the lens and compare it to the value you computed for s'.
5. Experiment with the various input parameters to see if you can match the measured s'. See
what happens to s' as you change the input parameters. What would a graph of the
changes look like? (A sketch of one such graph appears after this procedure.)
6. In your lab report, summarize your results and any significant departures from the above
procedures. Then answer the question set below.
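The sketch below covers the s and s' calculations from steps 1–3 and the kind of graph suggested in step 5. The focal length used is a placeholder (substitute the value you determined in Part I), and numpy and matplotlib are assumed to be available.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder inputs; f should be the focal length determined in Part I.
f = 0.030    # focal length (meters) - hypothetical placeholder
s_n = 3.0    # near limit of acceptable focus (meters)
s_f = 6.0    # far limit of acceptable focus (meters)

# Steps 1-3: perfect focusing distance s and lens-to-film distance s'.
s = 2 * s_n * s_f / (s_n + s_f)      # s = 2 s_n s_f / (s_n + s_f)
s_prime = 1 / (1 / f - 1 / s)        # lens equation: 1/s + 1/s' = 1/f
print(f"s  = {s:.2f} m")
print(f"s' = {s_prime * 1000:.2f} mm")

# Step 5: how s' changes as the focal length varies, with s held fixed.
f_range = np.linspace(0.020, 0.045, 100)          # arbitrary range of f (meters)
s_prime_range = 1.0 / (1.0 / f_range - 1.0 / s)   # lens equation solved for s'
plt.plot(f_range * 1000, s_prime_range * 1000)
plt.xlabel("focal length f (mm)")
plt.ylabel("lens-to-film distance s' (mm)")
plt.title("s' versus f at a fixed focusing distance s")
plt.show()
```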
Question Set 2
(a) How does your measured value of s' compare with the computed value?
(b) How could you improve on your measured or computed value?
(c) What happens to the value of s' when you use other reasonable values for f (e.g. other
values that you actually determined earlier in the lab)? A graph for this may be helpful.
(d) What happens to the value of s as you try other s_n and s_f values? A graph for this may
be helpful.
(e) What happens to the value of s' as you try other s_n and s_f values? A graph for this may
be helpful.
(f) What do the results of your experimentation in (d) and (e) tell you about the stability or
robustness of the solution to s' that you have determined?
2. Designing the Aperture
The other big factor in the design of this camera is the size of the aperture, since in this
camera it is fixed rather than adjustable. The aperture also has a significant influence on the
depth of field and on the speed of the film to be used in the camera.
Given that we want the depth of field to extend from about 3 meters to infinity (with infinity
replaced by the more definite 6 meters from the previous section), what is the largest aperture
that will allow this?
We know the values we want for s_n, s_f and s, as well as f and the value for u. We can
therefore re-arrange one of the previous equations and determine the aperture F/Stop (the ratio
f/d) and, knowing the focal length, the actual aperture size.
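One possible rearrangement of the far-limit equation above, solved for the F/Stop k, is sketched below; whether the spreadsheet uses exactly this form is an assumption, and the focal length is a placeholder, so the numbers printed depend entirely on the parameters you enter.

```python
# Rearranging s_f = s*f**2 / (f**2 - k*u*(s - f)) for the F/Stop k, then
# getting the aperture diameter from d = f / k. All lengths are in millimeters.
f = 30.0      # focal length (mm) - hypothetical placeholder; use your Part I value
u = 0.05      # acceptable circle of confusion (mm)
s_n = 3000.0  # near limit of acceptable focus (mm)
s_f = 6000.0  # far limit of acceptable focus (mm)

s = 2 * s_n * s_f / (s_n + s_f)             # perfect focusing distance (mm)
k = f**2 * (s_f - s) / (u * s_f * (s - f))  # F/Stop (the ratio f/d)
d = f / k                                   # aperture diameter (mm)

print(f"F/Stop = F/{k:.1f}")
print(f"aperture diameter d = {d:.2f} mm")
```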
Procedure
1. Continuing down the same sheet of the spreadsheet, enter the value for u, 0·05 mm,
and the spreadsheet will compute the diameter of the aperture for you, together with the
F/Stop or F/No.
2. Measure the size of the aperture in your camera.
3. Summarize your findings in your lab report and answer the question set below.
Question Set 3
(a) What is the difference between your computed aperture size and the measured aperture
size?
(b) If these two are different, how might you account for this?
(c) Can you manipulate the values for s_n, s_f and u to get the computed aperture to be the
same as the measured aperture? (Hint: s_n is the best candidate.) What values did you
use?
(d) If you can manage to get an aperture close to the actual one, are the values you used
for the parameters reasonable? What is ‘reasonable’ in these circumstances?
(e) What are the effects of using a small aperture in the camera, as far as the type of film
to be supplied with the camera is concerned?
You have just been doing engineering design using mathematical modeling. While this is a very
simple example, you can see the general process, and can imagine using more complex
models.
Part III.
Determining 3-D Co-ordinates from a Stereo-pair
We will now take the negatives that give us a stereo-pair and use them to determine the co-ordinates of points in the image. The procedure we will use is to measure each negative in turn,
in effect setting up a form of mono-comparator. We will use the spreadsheet, which is set up to
compute the co-ordinates according to the procedures in the pre-lab handout.
Procedure
1. You also took some shots of the test array from points off to either side of the center-line.
Have a look through the negatives taken by the group and select the best pair, so that
you have a left and a right view. The ideal photographs will have been taken with the camera
level and pointing parallel to the array center-line. The sighted object behind the array
should appear in about the middle of the negative.
2. With your best pair of negatives, find the equivalent pair of prints. Set these up under the
stereoscopes and see if you can view the scene in 3-D. It may take a bit of fiddling to get
this to work for you, but when it does it can be pretty stunning! Because viewing the stereo
image is like seeing things as they were on the ground, but as though your eyes were 5
meters apart, the depth is greatly exaggerated. If you can’t get the stereo view, don’t worry;
it can take some practice to get it.
3. Taking your pair of images, identify which are the left and which are the right negatives and
(in a moment) place the negatives on the light table so that they look like the scene you
can see in the prints, with the left negative on the left. This avoids confusion later on.
4. You will find a piece of clear plastic with some lines on it. Line up the negatives so that the
point of aim of the camera is over the central crossed lines. Your point of aim was the
distant object at which the camera was to be pointed. Make this object sit over the vertical
line. Then make a point on the object, about 1·5 meters (about 4·5 feet) above the ground, sit on
the horizontal line. This then aligns the camera shots so that they are approximately
parallel in direction, at the same height, and a fixed distance apart. This simplifies the
computations.
(Strictly speaking, this is not quite the proper way to do things, but it will be OK for this lab.)
The other lines (horizontal and vertical) are placed about 10 millimeters apart (although this
may have been distorted in the photocopying), and are there to help you measure co-ordinates on the image. Treat them as an X-Y co-ordinate system, with the usual sense of
the axes. Imagine that the origin, with co-ordinates (0, 0), is at the center cross, and that
the co-ordinates, positive upward and to the right, negative down and to the left, are
measured in millimeters with the loupe.
5. Pick a couple of obvious points that appear in both negatives. They could be part of the
subtense bars, or anything else that is a well-defined point. Select the points so that they
are at a range of depths. Measure the co-ordinates of the images of the points on both
negatives. Try to measure the co-ordinates to 0·01 mm. Estimate how precisely you were
able to measure the co-ordinates using the loupe and the lines on the clear plastic.
6. Enter the co-ordinates into the appropriate places on the sheet of the spreadsheet labeled
‘Co-ordinates’. The spreadsheet will then compute the object space co-ordinates for you.
The co-ordinate origin in this case is the left-hand camera, with X along the line to the
right-hand camera, Y going away from the camera, and Z going up. The spreadsheet will
also estimate the precision of the measured position based on the precision of the
parameters you used for the measurement. (A sketch of one way to perform this
computation appears after this procedure.)
7. Summarize your results in your lab report, as well as any significant differences in your
procedure from that given above. Then answer the question set below.
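The sketch below shows one way the object space co-ordinates could be computed, assuming the ideal ‘normal case’ of a stereo-pair (camera axes parallel, same height, separated by a base B) and the standard parallax equations; the pre-lab handout’s formulas may differ in detail, and the base, focal length and image co-ordinates used here are all placeholders.

```python
# Object space co-ordinates from a normal-case stereo-pair.
B = 5.0      # distance between the two camera positions (meters) - placeholder
f = 30.0     # focal length (mm) - hypothetical placeholder; use your Part I value

# Image co-ordinates (mm) of the same point on each negative, measured
# relative to the central cross on the clear plastic overlay.
xL, yL = 8.10, 1.20    # left negative
xR, yR = -6.90, 1.22   # right negative (in the ideal case yR should equal yL)

p = xL - xR      # x-parallax (mm); closer points have larger parallax

Y = B * f / p    # distance out from the camera base line (meters)
X = xL * Y / f   # along the base, measured from the left-hand camera (meters)
Z = yL * Y / f   # height relative to the camera axis (meters)

print(f"X = {X:.2f} m, Y = {Y:.2f} m, Z = {Z:.2f} m")
```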
Question Set 4
(a) How good are the co-ordinates of the objects you measured?
(b) If you could measure the negatives to microns (0·000 001 meters), how good would the
computed co-ordinates be then?
(c) Does the size of the ‘errors’ associated with the location of a point change with depth
(distance from the camera)? Why do you think this might be the case?
(d) Can you think of all the assumptions that we made about the cameras and their
placement and orientation, at least as far as the basic formulae for computing co-ordinates
were concerned, that are less than perfect in reality? For example, we assumed that the
cameras were pointing exactly parallel, but we didn’t really check this. What effect might
these differences have?
Part IV.
The Camera as Part of a Measurement System
The basic purpose of photogrammetry is to allow the computation of co-ordinates in object
space from measurements made in image space, very much as we did in the previous part of
the lab. To do this to a very high level of precision, we need to know a lot about the camera and
use special ‘metric’ cameras, but we can still undertake a lot of measurements even with
everyday cameras, such as the one used in these labs.
As was mentioned in the notes, measurement of co-ordinates on a photograph, together with
knowledge of the location and orientation of the camera and its focal length, allows us to
determine vectors representing the rays of light that came from objects and formed the image.
One image will only allow us to determine the direction of these vectors, not their magnitude (or
length). If we have a second image, taken from a different position but showing the same
objects, we can intersect the two vectors to every point (one vector from each camera) and
determine the location of every point in the stereo image.
The solution of this ‘two vector’ problem has evolved over the years. The early methods solved
the problem by physically tracing the light rays, using either projectors or mechanical devices.
The Wild B8S plotter is one of these types of instruments. The stainless steel ‘space rods’ in
the middle of the machine duplicate the light rays, and the operator traces out the landscape
that appears to be in the space in front of him or her. This form of solution is termed ‘analog
photogrammetry’, as the machine is really a form of analog computer or analog model of what
is happening in the real world.
The next generation of machines took co-ordinate measurements from the two photographs
(the operator located the points, as before), and computed the vectors and the resultant ground
(object space) co-ordinates using a conventional digital computer. This solution is termed
‘analytical photogrammetry.’ You can see an analytical plotter in the Photogrammetry Lab, as
well as a more basic instrument for higher precision work, called a stereo-comparator. (This
machine is used for the basic solution to the locations and orientations of the camera for each
of a large number of photographs covering a large area, a ‘block’ of photos, and it can measure
to about one micron (a millionth of a meter)).
The most recent developments have been to scan the photographs, or get them directly from a
digital camera, and work with the image wholly in a computer. The current level of capability is
such that once the photographs are aligned correctly, so that ground co-ordinates are able to
be computed, the computer software can locate points in each image that are of the same
place on the ground, and automatically compute the co-ordinates of that point. It can do several
thousand points like this in a matter of minutes. This is termed ‘soft-copy photogrammetry’ or
‘digital photogrammetry’.
While the digital photogrammetric workstation (DPW) can produce a digital elevation model
(DEM) of an area, it cannot yet identify objects in the scene, such as roads, houses, fences and
churches, which form much of the detail on maps. This is currently a big area of research at this
University and around the world, and is closely related to the field of computer vision.
You will have seen in our lab work that we have started to use statistics in our measurements,
and used these statistical data to help us determine the reliability of our measurements and the
results we derive from those measurements. You might also see that we have developed a
system of measurement that will allow us to design almost any camera we might need, for
almost any purpose.
You will also have seen that we can develop a model for what happens in a camera completely
from equations, and can mathematically model the camera before we ever build it. This ability
to develop and manipulate mathematical models, especially so as to design optimal solutions to
design problems, is a critical part of an engineer’s skills.
An important part of a measurement system is the means to determine if it works properly. This
is just as important in the design of a camera as in the design of a measurement system, such
as is used for photogrammetry. Statistics forms a major basis for the understanding of what
happens in these measurement systems.
Part V.
The Camera as Part of an Information System
Why do we collect data like this? Why do we make measurements and maps? What use are
maps? This kind of data collection isn’t cheap, so what is the purpose for it all?
Ultimately, we use this information to help us make better decisions. We use it to help decide
where to build infrastructure, how to plan and manage our environments, and how to find
problems and solve them. We can run simulations in an information system to see what might
happen if we try something, without actually having to do it.
Spatially-referenced data and information are rather different to other forms of information, such
as a database of all OSU students’ academic progress. Spatial data has a much more complex
structure, has a strong tendency to auto-correlation (things near each other are more alike than
things that are far apart), and includes ideas beyond location by co-ordinates, ideas like
connectedness, contiguity and nearness. These issues are a major part of the basic theory of
Land and Geographic Information Systems (LIS and GIS).
LIS and GIS are used as decision support systems. This discipline is another major research,
development and teaching area for the University. Again, these technologies are by no means
‘complete’; there is still a lot of work to be done.
At the base of many of these systems sits the camera, used as a basic data collection tool. But
the camera can be used in many other ways. Imagery can be stored in a variety of forms and is
very rich in implicit data. One never uses all the data in a photograph, and it becomes an
historical record of the situation at the instant of exposure. We may not be able to conceive of a
use for some image or data, but it may be needed in the future. Photographs and other images
are very rich information sources, perhaps one of the richest and most compact forms of spatial
data.