Report - Computer Science

Brandye Loftin, Brandon Watts, Joshua Boutte
Dr. Osborne
COSC
April 21, 2008
Human Visual Perception and Computer Graphics:
The Human Eye
(Brandye Loftin)
I went over how the eye works and which parts are essential for it to see. Everything in the eye works together to build an image and send it to the brain almost as quickly as the eye sees something. Stated that way it sounds like a paradox, doesn't it? So the best thing to do is to take an in-depth look at how our eyes work and "see" things. The first thing to understand is that our eyes don't see images directly; what they see is reflected light. How that light is reflected by the world around us determines how we see things. Just understanding that opens the way for new discoveries such as holograms or invisible cars. Okay, so maybe that's a little extreme.
So, first things first: how does light enter the eye? Light rays enter through the cornea, pass through the pupil and the lens, and land on the retina. What does all of that mean? When light first enters the eye it must pass through the cornea, which acts like a focusing element that bends the light toward the pupil. The pupil then opens or closes depending on the amount of light entering through it. After this the light passes through the lens, which uses tiny muscles to make delicate adjustments in the path of the light rays and bring them into focus on the retina. The retina converts the light rays into electrical impulses and sends them through the optic nerve to the brain, where an image is perceived. In general only about 10% of the light actually makes it to the retina, but that doesn't matter because we don't need very much light to be able to see.
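The lens's focusing job described above can be sketched with the standard thin-lens equation. This is only an illustrative physics model, not an anatomical one, and the 17 mm lens-to-retina distance used below is a rough textbook figure, not a value from this report:

```python
# Thin-lens sketch of accommodation: the eye changes its focal length f so
# that 1/f = 1/d_o + 1/d_i, where d_o is the distance to the object and
# d_i is the (roughly fixed) lens-to-retina distance (~0.017 m assumed here).

def required_focal_length(object_dist_m, lens_to_retina_m=0.017):
    """Focal length needed to focus an object onto the retina (thin-lens model)."""
    return 1.0 / (1.0 / object_dist_m + 1.0 / lens_to_retina_m)

# A distant object requires a longer focal length than a near one,
# which is why the lens muscles must constantly readjust.
far = required_focal_length(10.0)   # distant object
near = required_focal_length(0.25)  # reading distance
print(far > near)  # True
```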
So the next question some may wonder about is how fast all of this takes place. That is a truly difficult question because of all the different steps involved, but it is said that everything happens in about one sixth of the duration of a blink. Of course, the speed of sight has no single number, because every eye is different, and a firm estimate is hard to come by.
The retina is where all of the action is for our eyes; all of the really interesting things happen there. It is in the retina that light is changed into electrical signals the brain can understand. The eye, though, doesn't tell our brain exactly what we see; all it really transmits is "hints, edges in space & time," says Frank S. Werblin. The last cells that process the information before it is sent along the optic nerve are the ganglion cells (nerve cells). These cells are interesting, to say the least: they act in thousands of different ways, and not in the sense that most people would expect. A ganglion cell sends a signal only after certain conditions are met, such as when it detects a moving edge or a large uniform area. Upon receiving all of this information, the brain translates the signals into sight as we know it. All of this takes place in a measly 50 milliseconds.
So what does all of this mean with respect to computer graphics? The eye is still being researched and studied so that developers can properly program graphics with the human eye in mind. Even now there are certain things the eye does that are slightly mystifying, as a few simple examples can show.
One example is how the human eye perceives certain properties of objects in space and time, such as depth. Humans have two eyes located on the front of the head, each of which sends signals to the brain. When the brain receives these signals it combines them into the single image we call sight, and because of where our eyes are located this gives us a sense of depth. Computer graphics have to take that into account when they are being developed, as does the hardware they are shown on, such as monitors. The human eye's field of view is roughly comparable to a 4:3 aspect ratio, which is why most monitors and screens have been made with a 4:3 aspect ratio. This is simply the mathematical ratio of width to height. The width is slightly larger than the height because of where our eyes are located and how we perceive things.
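The aspect-ratio arithmetic above is easy to make concrete. Given a screen's diagonal size, its width and height follow from the ratio and the Pythagorean theorem (for 4:3 the diagonal spans sqrt(4² + 3²) = 5 ratio units); the 20-inch example is hypothetical:

```python
import math

# The 4:3 aspect ratio is width:height. Given a diagonal measurement,
# the side lengths follow from the Pythagorean theorem.

def screen_dimensions(diagonal, ratio_w=4, ratio_h=3):
    """Width and height of a screen from its diagonal and aspect ratio."""
    unit = diagonal / math.hypot(ratio_w, ratio_h)  # length of one ratio unit
    return ratio_w * unit, ratio_h * unit

w, h = screen_dimensions(20)  # a hypothetical 20-inch 4:3 screen
print(round(w, 1), round(h, 1))  # 16.0 12.0
```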
Another aspect of how human eyes perceive computer graphics is what they actually see. For example, there are pictures that, when looked at, contain two different images; depending on the person, one will be seen before the other. This means developers have to be careful about what they do, because if the graphics look wrong the viewer could see one thing when the developer wanted them to see another. There are also instances of halo effects when looking at something for too long, creating anomalies in our vision. These are all things programmers have to be on the lookout for when writing code. Of course, none of these things can be noticed in the code itself; they can only be seen in the finished product. That is why testing for errors is so important when writing code.
So what can be said about the human visual system? The most important point is probably that it is a complicated and intricate system that requires a great deal of research and time to understand. Another is that everything it does happens within about one sixth of a blink of an eye, so graphics must be rendered fast enough to be seen the way they would be in real life. One last point is that there are certain aspects of how things are seen, such as depth perception and anomalies in the human eye, that have to be considered when developing graphics. Other than those few things and the thousands not covered, it's simple.
The Digital World
(Brandon Watts)
In the ever more complex world of computer games, developers are constantly looking for new ways to make the playing experience more lifelike. One problem that had remained unsolved was how to quickly simulate the gradation of shadows caused by indirect light bouncing off objects. One way developers are taking steps toward realism is by correcting the problem that occurs during sudden movement: when simulating rapid motion, creating a lifelike shadow of any object or being is difficult. The effect is similar to a badly edited movie in which the words being spoken are not synchronized with the mouth. Dr. Jan Kautz used a ball and a square when he was demonstrating the error that occurs when a shadow must track real-time movement.
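The difference that indirect light makes can be shown with a deliberately crude shading sketch. This is not the modeling technique from the research above; it is only a minimal Lambertian example with an assumed constant "bounce" term standing in for indirect light:

```python
# Pure direct (Lambertian) shading leaves faces turned away from the light
# completely black; even a constant indirect-bounce term softens those
# shadows into a gradation. Real engines use far more sophisticated methods.

def shade(normal, light_dir, ambient=0.1):
    """Diffuse intensity = max(0, N.L) plus a constant indirect-bounce term.
    Both vectors are assumed to be unit length."""
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, ndotl) + ambient

# A face pointing away from the light is not pitch black thanks to the bounce.
print(shade((0, 0, 1), (0, 0, 1)))   # fully lit: 1.1
print(shade((0, 0, -1), (0, 0, 1)))  # indirect only: 0.1
```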
The difference between two-dimensional and three-dimensional computer graphics and games lies in the basis and concepts on which they are created. For instance, creating a game with different facial expressions requires completely different steps in each case. In two dimensions, to create a number of distinct facial expressions an artist would have to draw each individual expression the program or game should simulate, which takes up a large amount of time and can lead to error. On the other hand, creating a facial expression in a three-dimensional program or game may be a little more complex to set up, but it is easier than using two dimensions. Three-dimensional images can be created using a mesh and bones. With mesh and bones, only certain points on the face are moved to simulate the different facial expressions an image can form. Basically, instead of drawing each expression by hand, a person can create more expressions, more accurately, with less room for error.
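The "mesh and bones" idea above can be sketched in a few lines. This is a simplified, hypothetical illustration of linear blend skinning, reduced to 2-D translations; real skeletal animation uses full transforms per bone:

```python
# Skeletal animation sketch: each vertex of the mesh follows one or more
# bones, so moving a bone moves the mesh without redrawing anything.
# Here a vertex is blended between two bones' translations by a weight.

def skin_vertex(vertex, bone_a_offset, bone_b_offset, weight_a):
    """Blend two bones' translations; bone B gets weight (1 - weight_a)."""
    wb = 1.0 - weight_a
    return tuple(v + weight_a * a + wb * b
                 for v, a, b in zip(vertex, bone_a_offset, bone_b_offset))

# Raising only bone A by 2 units moves a vertex weighted 75% to A by 1.5.
print(skin_vertex((1.0, 1.0), (0.0, 2.0), (0.0, 0.0), 0.75))  # (1.0, 2.5)
```

Changing the expression then means changing only the bone offsets, not the mesh itself, which is exactly the time saving the text describes.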
Another difference between two-dimensional and three-dimensional objects is that in two dimensions a picture can be viewed from only one angle, while three dimensions come with camera angles. In 2D you are stuck with the images you see, but in 3D you can rotate and pan the camera freely; that's a pretty big difference. You can also use different angles to create rear views or even 3D maps (simply move the camera very high and you can see a nice map of the level). A two-dimensional image is fixed at one viewpoint: one cannot see the sides, back, top, or bottom. With a three-dimensional view, one is able to see a picture from several different angles; each side can be reached by the eyes, allowing more than just a basic frontal view, and any area of the object can be exposed. Creating art for 2D games might sound less complex, and it is probably true that two-dimensional art requires fewer artists than three-dimensional art, but creating three-dimensional art assets might actually require less time in the long run. If you have a very detailed two-dimensional character and you need to change the animation, there is no option other than to redraw the animation. If, on the other hand, you need to make changes to a three-dimensional character's animation, you don't have to touch the mesh, the texture, or the bones, only the animation itself. I would prefer to use three-dimensional images rather than two-dimensional ones, but I do like how the two can be distinguished.
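The free camera described above boils down to rotating the scene's points around an axis. A minimal sketch, assuming a simple rotation about the vertical (y) axis; real engines rotate the camera matrix rather than each point:

```python
import math

# Orbiting the camera around a model is equivalent to rotating every vertex
# about the vertical axis: 180 degrees turns a front view into a rear view.

def rotate_y(point, angle_deg):
    """Rotate a 3-D point about the y axis by angle_deg degrees."""
    x, y, z = point
    a = math.radians(angle_deg)
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

front = (0.0, 0.0, 1.0)          # a point on the front of the model
back_view = rotate_y(front, 180) # the rear view a 2-D image cannot offer
print([round(c, 6) for c in back_view])  # [0.0, 0.0, -1.0]
```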
In order to use three-dimensional imagery when creating a program, one will need a mesh computer model. A mesh computer model is used to generate or read facial expressions, and it is often used when creating football games to capture a player's motion: dots are placed on the bones and joints of a particular player to capture his running style, the way he jumps and falls, and many more actions. Computer vision software can now map a person's face onto a mesh computer model and calculate facial expressions based on facial points such as lip curvature, eyebrow position, and cheek contraction. The software detects happiness, disgust, fear, anger, surprise, and sadness with 85 percent accuracy, and it was used to unlock another Da Vinci mystery: what the famous Mona Lisa was feeling. By its measure, the Mona Lisa's expression is eighty-three percent happy, nine percent disgusted, six percent fearful, and two percent angry, but researchers don't yet have the technology to detect more subtle emotions.
The Interaction of the Eye and Graphics
(Joshua Boutte)
The human eye is a miracle of the human body. It has mystified numerous scientists for generations and continues to do so even now. In today's society scientists try to understand how the human eye works for a number of different reasons, from helping people correct their vision to actually allowing blind people to see for the first time, but it is also studied for fields like computer graphics. Studying the human visual system can help to create and improve various computer algorithms.
These algorithms generally take the form of models of human perception. This is especially important because it helps make graphics better and faster to build: why waste time creating the perfect object when the human eye can't see the difference? This has to do with the limitations of the human eye and our incomplete understanding of all of its components. As of right now, most studies that examine the human visual system cover only one aspect at a time, while there is evidence that some characteristics of the human visual system actually work in conjunction with each other. Models of human vision help when considering image quality assessment and image comparison.
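The usual starting point for the image comparison mentioned above is mean squared error between two images; it is not a perceptual metric (perceptual models weight errors by how visible they are), but it is the baseline everything else is compared against. A minimal sketch over flat pixel lists:

```python
# Mean squared error: the simplest image-comparison baseline. Perceptual
# quality metrics refine this by modeling what differences the eye can see.

def mse(img_a, img_b):
    """Mean squared error between two equal-length pixel sequences."""
    assert len(img_a) == len(img_b), "images must be the same size"
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

original = [10, 20, 30, 40]   # hypothetical 4-pixel image
degraded = [12, 18, 30, 44]
print(mse(original, degraded))  # 6.0
```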
Even so, understanding the human eye and having models that replicate it perfectly will still do no good with current hardware technology. Luminosity is one of the things currently limited by technology: displays that are currently available achieve intensities of only about 100 cd/m², a contrast range of roughly 100:1, far short of the span between maximum and minimum luminance in the real world.
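Because real scenes span far more luminance than a roughly 100:1 display can reproduce, renderers compress high-dynamic-range values before showing them. A minimal sketch of one well-known global operator, the Reinhard curve L/(1+L); the sample luminance values are hypothetical:

```python
# Global tone-mapping sketch: map any non-negative scene luminance into the
# displayable range [0, 1) so that vastly different real-world intensities
# all fit on a ~100:1 screen.

def tonemap(luminance):
    """Reinhard-style compression of scene luminance into [0, 1)."""
    return luminance / (1.0 + luminance)

# Four orders of magnitude of scene luminance squeeze into [0, 1).
for scene_l in (0.01, 1.0, 100.0, 10000.0):
    print(round(tonemap(scene_l), 4))
```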
Of course, another aspect of human perception that is always on the tips of gamers' tongues is frames per second (fps). This in particular is constantly argued among gamers, many of whom want to put a number on exactly what the human eye is capable of handling. A common misconception is that the human eye can only handle 30 fps. This is a complete falsehood that has been disproven time after time. In fact, one study always brought up on this subject was conducted by the Air Force. Pilots were put in a dark room and shown a picture of a jet flashed for 1/220th of a second, then asked to identify the plane. The pilots were able to identify the jet correctly each time. Another piece of evidence that the human eye can handle more than 30 fps is film. Some might say that movies run at only 24 fps, but that is another misconception: in projection each frame is shown three times, so the screen is effectively refreshed 72 times a second. If frames were really shown only 24 times a second, there would be a noticeable black screen between each frame. So what is the reason the human eye can obviously see more than 30 fps?
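The frame-rate numbers above are easy to put side by side: a frame at a given fps is on screen for 1/fps seconds, and the Air Force flash (1/220 s) is far shorter than a 30 fps frame:

```python
# Frame-rate arithmetic behind the debate: each frame at a given fps is
# visible for 1/fps seconds, i.e. 1000/fps milliseconds.

def frame_time_ms(fps):
    """Duration of one frame in milliseconds."""
    return 1000.0 / fps

print(round(frame_time_ms(30), 2))   # 33.33 - one 30 fps frame
print(round(frame_time_ms(72), 2))   # 13.89 - triple-flashed film projection
print(round(frame_time_ms(220), 2))  # 4.55  - the Air Force flash duration
```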
The reason is that the human eye doesn't operate like a camera or film. It doesn't take a single picture and then send it to the brain; it constantly streams data to the brain, which creates our vision as we know it. This is why hardware is actually less technologically advanced than our own eyes, and it is one of the reasons programming computer graphics is so hard: programmers have to live inside a digital world while all around them an analog world is taking place. The human eye does all the things we know it can do in about one sixth of a blink. That is why some algorithms that try to simulate the human eye are only now being conceived.
One example of a recently developed algorithm revolves around the aspect of human vision called motion blur. People take motion blur for granted all the time, never thinking about what it would be like if our eyes didn't operate that way. For instance, if the human eye didn't incorporate motion blur, then when someone quickly moved a hand past their face all they would see is the hand jumping in and out of existence. The game that uses this new algorithm is Crysis. It is probably relevant to mention that this isn't the first attempt to use motion blur in a game, just a more advanced one. The first approach is called screen-oriented blur: every time the character on the screen moves around quickly, the whole screen blurs. This is realistic and yet it isn't, because in real life, most of the time when a person turns, he or she will have focused their vision on a certain spot. So when Crysis was developed, the idea of object-oriented blur came into being: individual objects, as well as the screen, can blur. So, for example, if the character on screen quickly cocks his shotgun, instead of just seeing his hand move, the hand will also blur as it would in real life.
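One common way renderers approximate motion blur (not necessarily the algorithm discussed above, which the source does not detail) is to average several sub-frame samples of a moving object, so fast motion smears instead of jumping. A 1-D sketch with a hypothetical six-pixel frame:

```python
# Motion blur by sub-frame averaging: sample the object's position several
# times within one frame and average the samples, so motion leaves a smear.

def blurred_frame(positions, width):
    """Average len(positions) sub-frames of a 1-pixel object into one frame."""
    frame = [0.0] * width
    for p in positions:
        frame[p] += 1.0 / len(positions)  # each sample contributes equally
    return frame

# An object sweeping pixels 1..4 within one frame leaves a smear, not a dot.
print(blurred_frame([1, 2, 3, 4], 6))  # [0.0, 0.25, 0.25, 0.25, 0.25, 0.0]
```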
This type of programming must take a great deal of time and many complex mathematical equations, but what are some other aspects of human perception and computer graphics? One example comes from a study of spatial resolution in video game players. In the study, participants were asked to focus on a certain object while other things crowded the screen, and it was discovered that gamers were able to distinguish the object more easily than participants who had never played a game. So not only does the human eye change how computer graphics are created; it can also change while watching the graphics in progress.
So what is the conclusion of this research? That the eye is still a mysterious piece of human anatomy and still requires a considerable amount of study. This also means that computer graphics are still a long way from actually resembling real life. That, of course, raises another question: what would games look like if they completely resembled real life? Would people still want to play them, and would they further desensitize individuals' morality, making them more susceptible to violence and crime? That is a question for another day, though.
Works Cited
Cadik, Martin. "Human Perception and Computer Graphics." Czech Technical University, Jan. 2004.
Chalmers, Alan, and Colin Dalton. "Visual Perception in Computer Graphics Education." University of Bristol, April 2008. <http://virtual.inesc.pt/cge02/pdfs/chalmers.pdf>
"Realism Of Computer Games Dramatically Improved With New Modeling Of Light." Science Daily, April 2008. <http://www.sciencedaily.com/releases/2008/02/080229130355.htm>
"Mona Lisa Smiling." Science Daily, April 2008. <http://www.sciencedaily.com/releases/2008/02/080229130355.htm>
Hietalahti, Juuso. "Difference Between 2D and 3D Game Art Production." GameProducer.net, July 27, 2007. <http://www.sciencedaily.com/releases/2008/02/080229130355.htm>
Holladay, April. "Wonder Quest." Wonder Quest, April 2008. <http://www.wonderquest.com/sight-whale-tern.htm>
"The Human Eye." Kimball's Biology Pages, April 2008. <http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/V/Vision.html>