Virtual Reality Design
A Short Introduction by K.-P. Beier
Compiled by 魏兆煌
Terminology
The term 'Virtual Reality' (VR) was initially coined by Jaron Lanier,
founder of VPL Research ("Visual Programming Language")
(1989). Other related terms include 'Artificial Reality' (Myron
Krueger, 1970s), 'Cyberspace' (William Gibson, 1984), and, more
recently, 'Virtual Worlds' and 'Virtual Environments' (1990s).
Today, 'Virtual Reality' is used in a variety of ways and often in a
confusing and misleading manner. Originally, the term referred to
'Immersive Virtual Reality.' In immersive VR, the user becomes fully
immersed in an artificial, three-dimensional world that is completely
generated by a computer.
Head-Mounted Display (HMD)
The head-mounted display (HMD) was the first device providing its
wearer with an immersive experience. Evans and Sutherland
demonstrated a head-mounted stereo display as early as 1965. It took
more than 20 years before VPL Research introduced a commercially
available HMD, the famous "EyePhone" system (1989).
A typical HMD houses two miniature display screens and an optical
system that channels the images from the screens to the eyes, thereby
presenting a stereo view of a virtual world. A motion tracker
continuously measures the position and orientation of the user's head
and allows the image generating computer to adjust the scene
representation to the current view. As a result, the viewer can look
around and walk through the surrounding virtual environment.
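To make the tracking loop concrete, the sketch below (Python with NumPy; the pose values are made-up examples, not from the original text) shows how a tracked head position and orientation can be turned into the view matrix used to redraw the scene:

```python
import numpy as np

def view_matrix(head_position, head_rotation):
    """Build a 4x4 view matrix from a tracked head pose.

    head_position: (x, y, z) of the head in world coordinates.
    head_rotation: 3x3 rotation matrix giving the head orientation.
    The view matrix is the inverse of the head's pose: it maps world
    coordinates into the viewer's eye coordinates.
    """
    view = np.eye(4)
    view[:3, :3] = head_rotation.T                          # inverse rotation
    view[:3, 3] = -head_rotation.T @ np.asarray(head_position)
    return view

# Hypothetical tracker reading: head 1.7 m above the floor, turned 30 degrees.
angle = np.radians(30.0)
rot = np.array([[ np.cos(angle), 0.0, np.sin(angle)],
                [ 0.0,           1.0, 0.0          ],
                [-np.sin(angle), 0.0, np.cos(angle)]])
print(view_matrix((0.0, 1.7, 0.0), rot))
```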
To overcome the often uncomfortable intrusiveness of a head-mounted
display, alternative concepts (e.g., BOOM and CAVE) for immersive
viewing of virtual environments were developed.
BOOM
The BOOM (Binocular Omni-Orientation Monitor) from Fakespace is
a head-coupled stereoscopic display device. The screens and optical
system are housed in a box that is attached to a multi-link arm. The
user looks into the box through two holes, sees the virtual world, and
can guide the box to any position within the operational volume of the
device. Head tracking is accomplished via sensors in the links of the
arm that holds the box.
CAVE
The CAVE (Cave Automatic Virtual Environment) was developed at
the University of Illinois at Chicago and provides the illusion of
immersion by projecting stereo images on the walls and floor of a
room-sized cube. Several persons wearing lightweight stereo glasses
can enter and walk freely inside the CAVE. A head tracking system
continuously adjusts the stereo projection to the current position of
the leading viewer.
Input Devices and other Sensual Technologies
A variety of input devices like data gloves, joysticks, and 3D mice
allow the user to navigate through a virtual environment and to
interact with virtual objects. 3D sound, tactile and force feedback
devices, voice recognition and other technologies are being employed
to enrich the immersive experience and to create more "sensualized"
interfaces.
Characteristics of Immersive VR
The unique characteristics of immersive virtual reality can be
summarized as follows:
• Head-referenced viewing provides a natural interface for
navigation in three-dimensional space and allows for look-around,
walk-around, and fly-through capabilities in virtual environments.
• Stereoscopic viewing enhances the perception of depth and the
sense of space.
• The virtual world is presented in full scale and relates properly
to the human size.
Characteristics of Immersive VR
continue
• Realistic interactions with virtual objects via data glove and
similar devices allow for manipulation, operation, and control of
virtual worlds.
• The convincing illusion of being fully immersed in an artificial
world can be enhanced by auditory, haptic, and other non-visual
technologies.
• Networked applications allow for shared virtual environments.
Shared Virtual Environments
In the example illustrated, three networked users at different locations
(anywhere in the world) meet in the same virtual world by using a
BOOM device, a CAVE system, and a Head-Mounted Display,
respectively. All users see the same virtual environment from their
respective points of view. Each user is presented as a virtual human
(avatar) to the other participants. The users can see each other,
communicate with each other, and interact with the virtual world as a
team.
Non-immersive VR
Today, the term 'Virtual Reality' is also used for applications that are
not fully immersive. The boundaries are becoming blurred, but all
variations of VR will be important in the future. This includes
mouse-controlled navigation through a three-dimensional
environment on a graphics monitor, stereo viewing from the monitor
via stereo glasses, stereo projection systems, and others. Apple's
QuickTime VR, for example, uses photographs for the modeling of
three-dimensional worlds and provides pseudo look-around and
walk-through capabilities on a graphics monitor.
VRML
Most exciting is the ongoing development of VRML (Virtual Reality
Modeling Language) on the World Wide Web. In addition to HTML
(HyperText Markup Language), that has become a standard authoring
tool for the creation of home pages, VRML provides threedimensional worlds with integrated hyperlinks on the Web. Home
pages become home spaces. The viewing of VRML models via a
VRML plug-in for Web browsers is usually done on a graphics
monitor under mouse-control and, therefore, not fully immersive.
However, the syntax and data structure of VRML provide an
excellent tool for the modeling of three-dimensional worlds that are
functional and interactive and that can, ultimately, be transferred into
fully immersive viewing systems. The current version VRML 2.0 has
become an international ISO/IEC standard under the name VRML97.
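As an illustrative aside (not part of the original text), the short script below writes a minimal VRML97 world: a single box wrapped in an Anchor node so that clicking it follows a hyperlink, much as an HTML link does. The file name and target URL are invented for the example.

```python
# Write a minimal VRML97 (VRML 2.0) world: a red box that acts as a hyperlink.
vrml_world = """#VRML V2.0 utf8
Anchor {
  url "http://example.org/next-world.wrl"   # hypothetical link target
  children [
    Shape {
      appearance Appearance {
        material Material { diffuseColor 0.8 0.2 0.2 }
      }
      geometry Box { size 2 2 2 }
    }
  ]
}
"""

with open("home_space.wrl", "w") as f:
    f.write(vrml_world)
```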
VRML
continue
Rendering of Escher's Penrose Staircase (modeled by Diganta Saha).
VR-related Technologies
Other VR-related technologies combine virtual and real environments.
Motion trackers are employed to monitor the movements of dancers
or athletes for subsequent studies in immersive VR. The technologies
of 'Augmented Reality' allow for the viewing of real environments
with superimposed virtual objects. Telepresence systems (e.g.,
telemedicine, telerobotics) immerse a viewer in a real world that is
captured by video cameras at a distant location and allow for the
remote manipulation of real objects via robot arms and manipulators.
Applications
As the technologies of virtual reality evolve, the applications of VR
become literally unlimited. It is assumed that VR will reshape the
interface between people and information technology by offering new
ways for the communication of information, the visualization of
processes, and the creative expression of ideas.
Note that a virtual environment can represent any three-dimensional
world that is either real or abstract. This includes real systems like
buildings, landscapes, underwater shipwrecks, spacecraft,
archaeological excavation sites, human anatomy, sculptures, crime
scene reconstructions, solar systems, and so on. Of special interest is
the visual and sensual representation of abstract systems like magnetic
fields, turbulent flow structures, molecular models, mathematical
systems, auditorium acoustics, stock market behavior, population
densities, information flows, and any other conceivable system
including artistic and creative work of abstract nature. These virtual
worlds can be animated, interactive, shared, and can expose behavior
and functionality.
Applications
continue
Useful applications of VR include training in a variety of areas
(military, medical, equipment operation, etc.), education, design
evaluation (virtual prototyping), architectural walk-through, human
factors and ergonomic studies, simulation of assembly sequences and
maintenance tasks, assistance for the handicapped, study and
treatment of phobias (e.g., fear of heights), entertainment, and many more.
3D Computer Graphics -- The Virtual World
Levels of image-processing technology:
• Photography
• 2D animation
• VR
• 3D animation
• 3D games
3D Computer Graphics -- The Virtual World
I. Preface
Most people have seen the ferocious dinosaurs of "Jurassic Park", the
talking toys of "Toy Story", or the adorable insects of "A Bug's Life".
These lifelike creations, almost impossible to tell from the real
thing, are produced by three-dimensional computer graphics (3D Computer
Graphics). As semiconductor technology has advanced, 3D graphics that
could once run only on expensive high-end workstations has spread to
ordinary personal computers. Today more than ninety percent of
computers come with basic 3D graphics capability, and game consoles are
equipped with even more powerful 3D graphics chips; 3D computer
graphics has become thoroughly commonplace.
3D computer graphics now reaches into every field, above all
audio-visual entertainment and multimedia; it is even applied to
medical tomography and scientific research, and virtual reality is the
dream for the future.
3D Computer Graphics -- The Virtual World
continue
II. An Overview of 3D Computer Graphics
The goal of 3D computer graphics is to simulate objects of the real
world; the main effort goes into making the generated images more
realistic and producing them faster. How can a three-dimensional object
appear on the two-dimensional plane of the screen? Many elaborate
algorithms and models are used to simulate how a real object behaves in
three-dimensional space, and the result is finally projected onto the
two-dimensional plane of the screen.
As shown in the figure below, the object sits in the three-dimensional
space of the yellow region, and what the eye sees through the screen is
that 3D space projected onto a 2D plane. Many factors affect how
realistic the final image is, including the projection technique, the
level of detail of the object model, and the accuracy of the lighting
model. The basic 3D rendering pipeline is outlined below (this pipeline
is only the most basic concept; in practice the field of 3D computer
graphics offers many different ways of producing the final image).
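To make the projection step concrete, here is a minimal sketch (Python; the focal-length value is an assumption for illustration) of a simple perspective projection that maps a 3D point onto the 2D screen plane:

```python
def project(point, focal_length=1.0):
    """Perspective projection of a 3D point onto the z = focal_length plane.

    The eye sits at the origin looking down the +z axis; a point farther
    away (larger z) is projected closer to the center of the screen,
    which is what creates the impression of depth.
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the viewer (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

# The same offset appears smaller when the point is farther away.
print(project((1.0, 1.0, 2.0)))   # (0.5, 0.5)
print(project((1.0, 1.0, 4.0)))   # (0.25, 0.25)
```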
3D Computer Graphics -- The Virtual World
continue
1. Modeling
Describing the outward shape of an object in three-dimensional space is
called modeling. The shape of an object is usually approximated with
points, lines, and faces.
3D Computer Graphics -- The Virtual World
continue
The model data of an object are usually expressed as (X,Y,Z)
coordinates in three-dimensional space; these data feed the rendering
computations that follow. Once the (X,Y,Z) data describing the shape
are available, very simple linear-algebra operations suffice to scale,
move, or deform the object.
Today models are mostly built with design software such as 3DS Max,
SoftImage, or Maya. Because this approach is manual and time-consuming,
laser scanners are also used to capture the shape of real objects
directly. The level of detail of the model has a large effect on how
realistic the final image looks, and it also determines the amount of
computation needed to process the data.
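As an illustration of such model data, a minimal sketch (Python with NumPy; the values are arbitrary) of a cube approximated by eight (X,Y,Z) vertices and twelve triangular faces that index into the vertex list:

```python
import numpy as np

# Eight corner vertices of a unit cube, one (X, Y, Z) row per vertex.
vertices = np.array([
    [0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],   # back face
    [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1],   # front face
], dtype=float)

# Twelve triangles, each given as three indices into the vertex array.
faces = np.array([
    [0, 1, 2], [0, 2, 3],   # back
    [4, 6, 5], [4, 7, 6],   # front
    [0, 4, 5], [0, 5, 1],   # bottom
    [3, 2, 6], [3, 6, 7],   # top
    [1, 5, 6], [1, 6, 2],   # right
    [0, 3, 7], [0, 7, 4],   # left
])

# With the shape stored as plain coordinates, simple linear algebra
# scales the whole object (here: doubled in every direction).
scaled = vertices * 2.0
print(scaled[faces[0]])   # the three corners of the first triangle
```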
3D Computer Graphics -- The Virtual World
continue
2. Coordinate Transforms
With the 3D model data of an object in hand, linear algebra can be used
to move, scale, or deform it. All of these operations are coordinate
transforms: by controlling how the coordinates of every point on the
object change, the object can be moved, rotated, and animated. In the
figure below, three objects have been placed in a scene and their
positions and orientations adjusted.
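A minimal sketch (Python with NumPy; the values are arbitrary) of coordinate transforms written as 4x4 matrices in homogeneous coordinates, so that translation, rotation, and scaling can all be applied and combined by matrix multiplication:

```python
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def rotation_y(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]])

# Place an object: scale it, rotate it 45 degrees about Y, then move it.
model_matrix = translation(3, 0, -5) @ rotation_y(np.radians(45)) @ scaling(2, 2, 2)

point = np.array([1.0, 0.0, 0.0, 1.0])   # a vertex in homogeneous coordinates
print(model_matrix @ point)              # the same vertex after the transform
```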
3D Computer Graphics -- The Virtual World
continue
3. Lighting - 1
Once the coordinates of every object in the scene have been processed,
the next step is to compute the color of each object. How is this done?
First the light sources in the scene must be modeled appropriately;
light comes in many kinds, such as sunlight, light bulbs, and
spotlights. After the light sources are modeled, the next step is to
compute how they interact with the objects. The light that is finally
reflected toward the eye is the color we perceive for the object; this
part is simulated with the optics of reflection, scattering, and
transmission.
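As a small illustration of such a light-object interaction (a sketch, not from the original text), Lambertian diffuse reflection makes the brightness of a surface point depend on the angle between its normal and the direction toward the light:

```python
import numpy as np

def diffuse_color(normal, light_dir, light_color, material_color):
    """Lambertian diffuse term: brightness follows cos(angle) = N . L."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    intensity = max(0.0, float(n @ l))          # facing away -> no light
    return intensity * light_color * material_color

n = np.array([0.0, 1.0, 0.0])                   # surface facing straight up
l = np.array([1.0, 1.0, 0.0])                   # light coming in at 45 degrees
print(diffuse_color(n, l, np.array([1.0, 1.0, 1.0]), np.array([0.8, 0.2, 0.2])))
```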
3D Computer Graphics -- The Virtual World
continue
3. Lighting - 2
Realistic object colors require more elaborate computation. Current
methods fall roughly into three categories: local illumination, ray
tracing, and radiosity. Local illumination is the best suited to
hardware implementation; its quality is the lowest of the three, but so
is its computational cost. Ray tracing gives the most realistic results
for metallic materials, and radiosity handles indoor lighting better,
but both take a very long time and are poorly suited to hardware
acceleration. The realistic animation seen in films usually combines
ray tracing and radiosity for the best effect; a single frame often
takes a workstation several hours to compute, which makes these methods
unsuitable for real-time 3D applications.
3D Computer Graphics -- The Virtual World
continue
3. Lighting - Gouraud Shading
Local illumination considers only the contribution of each light source
to the point being shaded and ignores the influence of other objects.
It comes in two variants, Gouraud shading and Phong shading. Gouraud
shading first computes the lighting at each vertex of the model,
yielding a color value for each vertex (usually a triangle vertex, as
in the figure on the left), and then interpolates the colors of the
points inside each face (usually a triangle), giving the result shown
on the right. Because of the interpolation, this method cannot
reproduce the bright specular highlights of metallic objects, but its
advantage is a lower hardware computation cost.
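A minimal sketch (Python with NumPy; the values are arbitrary) of the Gouraud idea: lighting is evaluated only at the three triangle vertices, and interior pixels simply interpolate those vertex colors with barycentric weights:

```python
import numpy as np

def gouraud_color(p, triangle_2d, vertex_colors):
    """Interpolate pre-computed vertex colors at point p inside a triangle."""
    a, b, c = [np.asarray(v, float) for v in triangle_2d]
    v0, v1, v2 = b - a, c - a, np.asarray(p, float) - a
    d = v0[0] * v1[1] - v1[0] * v0[1]            # 2x signed triangle area
    w1 = (v2[0] * v1[1] - v1[0] * v2[1]) / d     # barycentric weights
    w2 = (v0[0] * v2[1] - v2[0] * v0[1]) / d
    w0 = 1.0 - w1 - w2
    c0, c1, c2 = [np.asarray(col, float) for col in vertex_colors]
    return w0 * c0 + w1 * c1 + w2 * c2

tri = [(0, 0), (10, 0), (0, 10)]
cols = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]         # colors computed at the vertices
print(gouraud_color((3, 3), tri, cols))          # interpolated interior color
```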
3D Computer Graphics -- The Virtual World
continue
3. Lighting - Phong Shading
Phong shading computes the lighting value at every point on the object,
so it can reproduce highly specular highlights. Because this requires a
large amount of floating-point computation, however, almost all current
hardware architectures use Gouraud shading instead.
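For comparison, a minimal sketch (Python with NumPy; the values are arbitrary) of the per-pixel work Phong shading performs: the full lighting, including a specular highlight, is evaluated with the normal interpolated to each pixel:

```python
import numpy as np

def phong_intensity(normal, light_dir, view_dir, shininess=32.0):
    """Per-pixel diffuse + specular lighting evaluated with a full normal."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = max(0.0, float(n @ l))
    r = 2.0 * (n @ l) * n - l                     # reflection of the light ray
    specular = max(0.0, float(r @ v)) ** shininess
    return diffuse + specular                     # grey-scale intensity for brevity

# Evaluated per pixel with a normal interpolated across the triangle,
# so a sharp highlight can appear in the middle of a face.
print(phong_intensity(np.array([0.0, 1.0, 0.2]),
                      np.array([1.0, 1.0, 0.0]),
                      np.array([0.0, 1.0, 1.0])))
```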
3D Computer Graphics -- The Virtual World
continue
3. Lighting - Ray Tracing
Ray tracing gives the most realistic results for metallic materials and
is also well suited to rendering shadows.
3D Computer Graphics -- The Virtual World
continue
3. Lighting - Radiosity
Radiosity gives better results for indoor lighting.
3D Computer Graphics -- The Virtual World
continue
4. Texture Mapping
Once the object colors have been computed, the image already shows a 3D
effect, but only in flat, uniform colors. In the real world, object
surfaces carry patterns and even fine, uneven surface detail (wood,
leather, and so on). Simple texture mapping can reproduce these surface
effects (figure below). The cost depends on the resolution of the
textures and on how many materials are used; this is currently the
stage of the hardware that consumes the most memory capacity and
bandwidth, which is why texture compression was introduced.
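A minimal sketch (Python with NumPy; the tiny checkerboard stands in for a real texture image) of the core texture-mapping step: each pixel carries (u, v) texture coordinates, and its color is looked up in the texture at that position:

```python
import numpy as np

def sample_texture(texture, u, v):
    """Nearest-neighbor lookup of a texel for texture coordinates in [0, 1]."""
    height, width = texture.shape[:2]
    x = min(int(u * width),  width  - 1)
    y = min(int(v * height), height - 1)
    return texture[y, x]

# A tiny 2x2 checkerboard texture standing in for a wood or leather image.
checker = np.array([[[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]],
                    [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]])

print(sample_texture(checker, 0.25, 0.25))   # upper-left texel
print(sample_texture(checker, 0.75, 0.25))   # upper-right texel
```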
3D Computer Graphics -- The Virtual World
continue
5. Hidden Surface Removal - 1
When the eye looks into the scene from a given position and angle, some
objects are hidden behind others, so they must be removed when the
final image is composed. The most common method is the Z-buffer.
Because these invisible points still consume unnecessary computation
(transforms, lighting, texturing, and so on) and reduce overall
performance, many methods have been developed to cull invisible objects
as early as possible in the front end of the pipeline.
3D Computer Graphics -- The Virtual World
continue
5. Hidden Surface Removal - 2
One might ask why these invisible parts are sent to the hardware
accelerator at all. The reason is that there is usually no simple
computation that tells which parts of an object cannot be seen.
Moreover, the whole point of 3D computer graphics is that the scene can
be viewed from any angle and rendered at any resolution, so in
interactive 3D applications it is very hard to know the viewer's eye
position in advance and cull the invisible objects beforehand.
3D Computer Graphics -- The Virtual World
continue
5. Hidden Surface Removal - 3
The Z-buffer is the simplest and most effective, if also the most
brute-force, way to remove hidden surfaces. The Z-buffer is a block of
memory that stores the depth (Z) value of every pixel on the screen.
When a new point arrives, its depth is compared with the value already
stored in the Z-buffer for that pixel. If the new point is closer to
the eye, it covers the point behind it and the Z-buffer is updated with
the new value; otherwise the new point is farther away, cannot be seen,
and is discarded. The Hierarchical Z-Buffer that ATI introduced with
the Radeon is likewise a method for accelerating hidden-surface
removal.
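A minimal sketch (Python with NumPy; sizes and colors are arbitrary) of the Z-buffer test described above: an incoming fragment's depth is compared with the stored value for its pixel, and the color and depth are updated only when the fragment is closer to the eye:

```python
import numpy as np

WIDTH, HEIGHT = 4, 4
depth_buffer = np.full((HEIGHT, WIDTH), np.inf)   # everything starts "infinitely far"
color_buffer = np.zeros((HEIGHT, WIDTH, 3))

def write_fragment(x, y, z, color):
    """Keep the fragment only if it is closer than what is already stored."""
    if z < depth_buffer[y, x]:          # smaller z = closer to the eye here
        depth_buffer[y, x] = z
        color_buffer[y, x] = color
    # otherwise the fragment is hidden behind what was drawn before: discard it

write_fragment(1, 1, 5.0, (1.0, 0.0, 0.0))   # red surface at depth 5
write_fragment(1, 1, 2.0, (0.0, 1.0, 0.0))   # green surface in front of it
write_fragment(1, 1, 8.0, (0.0, 0.0, 1.0))   # blue surface behind: rejected
print(color_buffer[1, 1])                    # -> green
```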
3D Computer Graphics -- The Virtual World
continue
Summary of concepts:
This has been a brief presentation of the concepts behind 3D computer
graphics. In reality the full 3D rendering process is very involved,
with many more algorithms developed to simulate the real world.
Overall, a 3D graphics system can be divided into several parts: the
application and API layer, geometry processing, and rasterization
processing. The API layer consists of standard libraries such as
Direct3D and OpenGL; software developers write 3D applications against
these libraries, and the work is handed to 3D chips that support them
for hardware acceleration. Hardware acceleration currently covers
geometry processing and rasterization. Geometry processing includes the
front-end coordinate transform and lighting stages, collectively called
T&L, which are dominated by heavy floating-point computation. The
back-end rasterization stage includes color interpolation, shading,
texture mapping, hidden-surface removal, transparency, fog, and shadow
simulation, all of which depend on per-pixel computation and memory
access.
3D Computer Graphics -- The Virtual World
finish
Further reading:
1. Alan Watt, "The Computer Image", Addison Wesley
   => introductory and in-depth 2D and 3D graphics
2. Alan Watt, "3D Computer Graphics", Addison Wesley
   => complete 3D theory
3. Tomas Möller and Eric Haines, "Real-Time Rendering", A K Peters
   (http://www.realtimerendering.com/)
   => 3D software and hardware acceleration techniques
Related websites:
1. www.nvidia.com/developer.nsf
2. http://www.opengl.org/
3. www.microsoft.com/directx/
4. Computer Graphics on the NET: http://ls7-www.cs.uni-dortmund.de/cgotn/