# 5. When measuring the reflected or emitted energy, either imaging or nonimaging sensors can be used.
Imaging sensors
Data from imaging sensors can be processed to produce an image of an area, within which smaller parts of the sensor's whole view are resolved visually.
Imaging sensors produce an image of an area of interest, i.e. they provide spatial information about the incoming radiation. Spatial relationships between objects can be identified and used for visual interpretation.
Image data are desirable when spatial information (such as mapped output) is needed.
Uses of image data: they provide an opportunity to look at spatial relationships and object shapes, and to estimate physical sizes based on the data's spatial resolution and sampling.
Images produced from remote sensing data can be either analog (such as a photograph) or digital (a
multidimensional array or grid of numbers). Digital data can be analyzed by studying the values using
calculations performed on a computer, or processed to produce an image for visual interpretation.
Image interpretation is used to decipher information in a scene. In the past, image interpretation was
done largely using subjective visual techniques, but with the development and ongoing advancement of
computer technology, numeric or digital processing has become a powerful and common interpretation
tool.
In many cases, image interpretation involves the combination of both visual and digital techniques.
These techniques utilize a number of image features including tone and color, texture, shape, size,
patterns, and associations of objects. The human eye and brain are generally thought to more easily
process the spatial characteristics of an image, such as shape, patterns and how objects are associated
with one another. Computers usually are better suited for rapid analysis of the spectral elements of an
image such as tone and color.
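The idea that a digital image is simply a grid of numbers that a computer can analyze can be sketched as follows (the brightness values and the classification threshold below are invented purely for illustration):

```python
import numpy as np

# A tiny 4x4 single-band "digital image": each cell holds a brightness
# value (hypothetical 8-bit numbers in the range 0-255).
image = np.array([
    [ 12,  15, 200, 210],
    [ 14,  16, 205, 198],
    [ 11, 180, 190, 195],
    [ 10,  13,  12,  14],
])

# Numeric (digital) interpretation: simple statistics over the grid.
mean_brightness = image.mean()

# A crude tonal classification: threshold the values to separate
# bright targets from dark ones (the cutoff of 100 is arbitrary).
bright_mask = image > 100
n_bright = int(bright_mask.sum())

print(mean_brightness)  # -> 93.4375
print(n_bright)         # -> 7 bright cells
```

The same array could instead be rendered as a picture for visual interpretation; the point is that both routes start from the same grid of numbers.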
Nonimaging sensors
Nonimaging sensors are usually handheld devices that register only a single response value, with no finer resolution than the whole area viewed by the sensor, so no image can be made from the data. These single values can be referred to as a type of "point" data, although some small area is typically involved, depending on the sensor's spatial resolution.
Uses of nonimage data: to give information for one specific (usually small) area or surface cover type. Such data can be used to characterize the reflectance of various materials occurring in a larger scene and to learn more about the interactions of electromagnetic energy and objects.
A non-imaging sensor measures a signal based on the intensity of the whole field of view, mainly as a profile recorder. In contrast with imaging sensors, this type of sensor does not record how the input varies across the field of view. In the remote sensing field, the commonly used non-imaging sensors include radiometers, altimeters, spectrometers, spectroradiometers, and LIDAR. Non-imaging sensors typically work in the visible, IR, and microwave spectral bands, and their applications mainly focus on height, temperature, wind speed, and other atmospheric parameter measurements.
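The imaging/non-imaging distinction can be put in a small numeric sketch (the radiance values below are invented, and the non-imaging reading is modelled simply as the mean over the field of view):

```python
import numpy as np

# Hypothetical radiance values across a small scene (arbitrary units).
scene = np.array([
    [5.0, 5.2, 9.8],
    [5.1, 9.9, 9.7],
    [5.0, 5.1, 5.2],
])

# Imaging sensor: the spatial structure is preserved cell by cell.
imaging_output = scene            # full 3x3 grid, an image can be formed

# Non-imaging sensor: one "point" value for the whole field of view,
# here modelled as the mean intensity over the viewed area.
nonimaging_output = scene.mean()

print(imaging_output.shape)       # -> (3, 3)
print(round(nonimaging_output, 2))  # -> 6.67, a single value
```

Note how the single value cannot tell you *where* in the scene the bright patch was, only that the area as a whole was moderately bright.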
# 6. There are two main modes or methods of scanning employed to acquire multispectral image data: across-track scanning and along-track scanning.
Across-track scanners
Across-track scanners scan the Earth in a series of lines. The lines are oriented
perpendicular to the direction of motion of the sensor platform (i.e. across the
swath). Each line is scanned from one side of the sensor to the other, using a
rotating mirror (A). As the platform moves forward over the Earth, successive
scans build up a two-dimensional image of the Earth's surface. The incoming
reflected or emitted radiation is separated into several spectral components that
are detected independently. The UV, visible, near-infrared, and thermal radiation
are dispersed into their constituent wavelengths. A bank of internal detectors (B),
each sensitive to a specific range of wavelengths, detects and measures the
energy for each spectral band; the electrical signals are then converted to digital data and recorded for subsequent computer processing.
The IFOV (C) of the sensor and the altitude of the platform determine the ground resolution cell viewed (D), and thus the spatial resolution. The angular field of view (E) is the
sweep of the mirror, measured in degrees, used to record a scan line, and
determines the width of the imaged swath (F). Airborne scanners typically sweep
large angles (between 90º and 120º), while satellites, because of their higher
altitude need only to sweep fairly small angles (10-20º) to cover a broad region.
Because the distance from the sensor to the target increases towards the edges of
the swath, the ground resolution cells also become larger and introduce geometric
distortions to the images. Also, the length of time the IFOV "sees" a ground
resolution cell as the rotating mirror scans (called the dwell time), is generally
quite short and influences the design of the spatial, spectral, and radiometric
resolution of the sensor.
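The geometric relationships in this paragraph can be sketched with small-angle formulas; the altitude, IFOV, and field-of-view values below are hypothetical, chosen only to be representative of a satellite scanner:

```python
import math

def ground_cell_size(altitude_m, ifov_rad):
    """Ground resolution cell (D) from the IFOV (C) and platform
    altitude, using the small-angle approximation D = H * IFOV."""
    return altitude_m * ifov_rad

def swath_width(altitude_m, fov_deg):
    """Width of the imaged swath (F) from the angular field of view (E),
    assuming a flat Earth under the platform."""
    half_angle = math.radians(fov_deg / 2.0)
    return 2.0 * altitude_m * math.tan(half_angle)

# Hypothetical scanner: 700 km altitude, 0.1 mrad IFOV, 15-degree FOV.
H = 700_000.0
print(ground_cell_size(H, 0.1e-3))       # -> 70.0 m cell at nadir
print(swath_width(H, 15.0) / 1000.0)     # swath width in km (~184 km)
```

This also shows why satellites need only a 10-20 degree sweep: from 700 km up, even 15 degrees covers a swath well over 100 km wide. The cell size computed here is the nadir value; as the text notes, cells grow toward the swath edges because the sensor-to-target distance increases.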
Along-track scanners
Along-track scanners also use the forward motion of the platform to record
successive scan lines and build up a two-dimensional image, perpendicular to the
flight direction. However, instead of a scanning mirror, they use a linear array of
detectors (A) located at the focal plane of the image (B) formed by lens systems
(C), which are "pushed" along in the flight track direction (i.e. along track). These
systems are also referred to as pushbroom scanners, as the motion of the detector
array is analogous to the bristles of a broom being pushed along a floor. Each
individual detector measures the energy for a single ground resolution cell (D) and
thus the size and IFOV of the detectors determine the spatial resolution of the
system. A separate linear array is required to measure each spectral band or
channel. For each scan line, the energy detected by each detector of each linear
array is sampled electronically and digitally
recorded.
Along-track scanners with linear arrays have several advantages over across-track
mirror scanners. The array of detectors combined with the pushbroom motion
allows each detector to "see" and measure the energy from each ground
resolution cell for a longer period of time (dwell time). This allows more energy to
be detected and improves the radiometric resolution. The increased dwell time
also facilitates smaller IFOVs and narrower bandwidths for each detector. Thus,
finer spatial and spectral resolution can be achieved without impacting
radiometric resolution. Because detectors are usually solid-state microelectronic
devices, they are generally smaller, lighter, require less power, and are more
reliable and last longer because they have no moving parts. On the other hand,
cross-calibrating thousands of detectors to achieve uniform sensitivity across the
array is necessary and complicated.
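The dwell-time advantage of pushbroom systems can be put in rough numbers (the line time and cell count below are hypothetical, chosen only to illustrate the ratio):

```python
# Hypothetical scanner recording one scan line every 10 milliseconds,
# with 6000 ground resolution cells across the swath.
line_time_s = 0.010
cells_per_line = 6000

# Across-track (rotating mirror): one line time is shared among all
# cells in the line, so each cell is "seen" only very briefly.
dwell_across = line_time_s / cells_per_line   # ~1.7 microseconds

# Along-track (pushbroom): every detector in the linear array stares
# at its own cell for the full line time.
dwell_along = line_time_s                     # 10 milliseconds

print(dwell_along / dwell_across)             # -> 6000.0
```

Under these assumptions each pushbroom detector collects energy for thousands of times longer per cell, which is exactly why the text says more energy can be detected and radiometric resolution improves.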
Regardless of whether the scanning system used is either of these two types, it has
several advantages over photographic systems. The spectral range of
photographic systems is restricted to the visible and near-infrared regions while
MSS systems can extend this range into the thermal infrared. They are also
capable of much higher spectral resolution than photographic systems. Multiband or multispectral photographic systems use separate lens systems to acquire
each spectral band. This may cause problems in ensuring that the different bands
are comparable both spatially and radiometrically and with registration of the
multiple images. MSS systems acquire all spectral bands simultaneously through
the same optical system to alleviate these problems. Photographic systems record
the energy detected by means of a photochemical process which is difficult to
measure and to make consistent. Because MSS data are recorded electronically, it
is easier to determine the specific amount of energy measured, and they can
record over a greater range of values in a digital format. Photographic systems
require a continuous supply of film and processing on the ground after the photos
have been taken. The digital recording in MSS systems facilitates transmission of
data to receiving stations on the ground and immediate processing of data in a
computer environment.
# 7. Why don’t satellites fall out of the sky?
Satellites don’t fall from the sky because they are orbiting Earth. Even when satellites are thousands of miles away, Earth’s gravity still tugs on them. Gravity, combined with the satellite’s momentum from its launch into space, causes the satellite to go into orbit above Earth instead of falling back down to the ground.
A satellite is a type of machine that orbits Earth, taking pictures and collecting
information. There are thousands of satellites orbiting Earth right now.
How do they all stay up there—and why don’t they just fall out of the sky?
If you throw a ball into the air, the ball comes right back down. That’s because of
gravity—the same force that holds us on Earth and keeps us all from floating
away.
To get into orbit, satellites first have to launch on a rocket. A rocket can go 25,000
miles per hour! That’s fast enough to overcome the strong pull of gravity and
leave Earth’s atmosphere. Once the rocket reaches the right location above Earth,
it lets go of the satellite.
The satellite uses the energy it picked up from the rocket to stay in motion. That
motion is called momentum.
But how does the satellite stay in orbit? Wouldn’t it just fly off in a straight line out
into space?
Not quite. You see, even when a satellite is thousands of miles away, Earth’s gravity is still tugging on it. That tug toward Earth, combined with the momentum from the rocket, causes the satellite to follow a circular path around Earth: an orbit.
When a satellite is in orbit, it has a perfect balance between its momentum and
Earth’s gravity. But finding this balance is sort of tricky.
Gravity is stronger the closer you are to Earth. And satellites that orbit close to
Earth must travel at very high speeds to stay in orbit.
For example, the satellite NOAA-20 orbits just a few hundred miles above Earth. It
has to travel at 17,000 miles per hour to stay in orbit.
On the other hand, NOAA’s GOES-East satellite orbits 22,000 miles above Earth. It
only has to travel about 6,700 miles per hour to overcome gravity and stay in orbit.
Satellites can stay in an orbit for hundreds of years like this, so we don’t have to
worry about them falling down to Earth.
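The speed figures quoted above follow from the circular-orbit formula v = sqrt(GM / r); a quick check (the altitudes are approximate, so the results only need to match the article's figures to within rounding):

```python
import math

GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3 / s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m
MPS_TO_MPH = 2.23694

def orbital_speed_mph(altitude_m):
    """Circular orbital speed v = sqrt(GM / r) at a given altitude."""
    r = R_EARTH + altitude_m
    return math.sqrt(GM_EARTH / r) * MPS_TO_MPH

# NOAA-20: low Earth orbit, roughly 825 km (a few hundred miles) up.
print(round(orbital_speed_mph(825_000)))      # ~16,600 mph
# GOES-East: geostationary orbit, roughly 35,786 km (22,000 miles) up.
print(round(orbital_speed_mph(35_786_000)))   # ~6,900 mph
```

The farther the orbit, the larger r is and the slower the required speed, which matches the NOAA-20 vs. GOES-East comparison in the text.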
# 8. Why don't satellites crash into each other?
Considering the vastness of space and the quite large region around the Earth, the chance of satellites colliding is very slim.
Most of the satellites that revolve around the Earth have a specific trajectory, and they are closely tracked by ground control stations on Earth. These satellites have a control and navigation unit called Telemetry, Tracking and Command. When engineers design a satellite, they take into account different perturbations in the orbital equation; some of them are the gravitational fields of the Moon and Sun, the oblate shape of the Earth, etc. Any change from the predetermined path is tracked and compensated for by firing thrusters on the satellite. Thus, as long as all satellites remain on their own trajectories, the chance of one roaming loose is very small.
Also, satellites don't collide with each other because of:
- the vastness of space,
- the positioning of satellites, and
- continuous tracking.
Space is so vast:
Most satellites are placed at altitudes from 400 km to 600 km above the surface of the Earth (leaving aside geostationary satellites, as they are far away). Using the volume of a spherical shell, this region works out to roughly 1.2 × 10^11 cubic kilometers (that is huge). Currently we have 2271 satellites orbiting Earth, including the GeoSats. Even if each satellite is generously modelled as a 10 m × 10 m × 10 m cube, the total volume they occupy is about 2,271,000 cubic meters, or roughly 0.0023 cubic kilometers. So the satellites occupy only about 2 × 10^-12 percent of this space, which is vanishingly small.
No traffic problem.
To give an analogy: if the Earth were the size of a football, each satellite would be about the size of a mustard seed.
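The back-of-the-envelope argument above can be redone explicitly; the satellite count and the generous 10 m cube per satellite are the rough figures from the text:

```python
import math

R_EARTH_KM = 6371.0

# Spherical shell between 400 km and 600 km altitude.
r_inner = R_EARTH_KM + 400.0
r_outer = R_EARTH_KM + 600.0
shell_volume_km3 = 4.0 / 3.0 * math.pi * (r_outer**3 - r_inner**3)

# ~2271 satellites, each generously modelled as a 10 m cube
# (0.010 km on a side).
sat_volume_km3 = 2271 * (0.010 ** 3)

fraction = sat_volume_km3 / shell_volume_km3
print(f"{shell_volume_km3:.2e} km^3")   # ~1.19e11 km^3 of shell
print(f"{fraction:.1e}")                # a vanishingly small fraction
```

Whatever rounding one uses for the satellite sizes, the fraction of occupied space stays many orders of magnitude below one, which is the point of the argument.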
Positioning the Satellites:
All satellites are positioned in space in a particular orbit. This orbit is precisely calculated before launch, and each satellite has its own inclination, apogee, perigee, and velocity. So no satellite will come near another, and there will always be a large gap between them.
No confusion of the route:
In order for one satellite to collide with another, it would have to arrive at a particular place at a particular time relative to the other satellite, which is very unlikely to happen.
Continuous tracking:
All satellites are continuously monitored, and all systems are maintained in healthy condition 24/7 by the ground team. If any change in orbit is detected (due to orbital drag or any stability issue within the satellite), the ground team will immediately use the satellite's propulsion unit to correct it. In most cases this means orbit raising and firing the vernier thrusters to maintain stability and orientation.
If a satellite is going to fail (on-board system failure, end of service period), it will either be moved to a graveyard orbit or be made to enter the atmosphere and burn up.
So, unless made to do so intentionally, satellites will not collide with each other.