International Journal of Application or Innovation in Engineering & Management (IJAIEM)
Web Site: www.ijaiem.org Email: editor@ijaiem.org, editorijaiem@gmail.com
Volume 2, Issue 3, March 2013
ISSN 2319 - 4847
Innovation of a new world: integration of virtual
objects in the real world
Mrs. Snehlata Barde
SSGI, Bhilai (C.G.)
ABSTRACT
Augmented reality (AR) is a technology that allows 2D and 3D computer graphics to be accurately aligned, or registered, with scenes of the real world in real time. The potential uses of this technology are numerous, from architecture and medicine to manufacturing and entertainment. The ambitious goal of AR is to create the sensation that virtual objects are present in the real world; to achieve this effect, software combines virtual reality (VR) elements with the real world. This paper gives a brief overview of the field of augmented reality, in which 3-D virtual objects are integrated into a 3-D real environment in real time. AR is most effective when virtual elements are added in real time, and because of this, AR commonly involves augmenting 2D or 3D objects onto a real-time digital video image.
Keywords: HMDs, MARS, VE.
1. INTRODUCTION
Augmented reality is a technology which allows 2D and 3D computer graphics to be accurately aligned or registered
with scenes of the real-world in real-time. The potential uses of this technology are numerous, from architecture and
medicine, to manufacturing and entertainment. Vision-based techniques which augment objects onto predetermined
planar patterns are considered the most promising approach for achieving accurate registrations, but the majority of the
proposed methods fail to provide any robustness to significant changes in pattern scale, orientation, or partial pattern
occlusion. Augmented Reality (AR) is a variation of Virtual Environments (VE), or Virtual Reality as it is more
commonly called. VE technologies completely immerse a user inside a synthetic environment. While immersed, the
user cannot see the real world around him. In contrast, AR allows the user to see the real world, with virtual objects
superimposed upon or composited with the real world. Therefore, AR supplements reality, rather than completely
replacing it. Ideally, it would appear to the user that the virtual and real objects coexist in the same space. Some researchers define AR in a way that requires the use of Head-Mounted Displays (HMDs); the broader definition used here allows other technologies besides HMDs while retaining the essential components of AR.
With tracking of the human user, appropriate visual interactions can be accomplished. Such a system uses calibrated cameras and careful measurements of the locations of objects in the real studio, so that real and virtual objects are combined in 3D. In this sense, AR can be thought of as the "middle ground" between VE (completely synthetic) and telepresence (completely real).
2. CHARACTERISTICS OF AR

- Besides adding objects to a real environment, AR also has the potential to remove them.
- Graphic overlays might be used to remove or hide parts of the real environment from a user. For example, to remove a desk in the real environment, draw a representation of the real walls and floors behind the desk and "paint" that over the real desk, effectively removing it from the user's sight.
- This has been done in movies. Doing it interactively in an AR system will be much harder, but the removal may not need to be photorealistic to be effective.
- Blending the real and virtual poses problems with focus and contrast, and some applications require portable AR systems to be truly effective.
- AR might apply to all senses, not just sight. For example, AR could be extended to include sound: the user would wear headphones equipped with microphones on the outside. The headphones would add synthetic, directional 3D sound, while the external microphones would detect incoming sounds from the environment. Thus one can cancel selected real incoming sounds and add others to the system. This is not easy, but it is possible.
- Another example is haptics. Gloves with devices that provide tactile feedback might augment real forces in the environment. For example, a user might run his hand over the surface of a real desk, and the system could augment the feel of the desk, perhaps making it feel rough in certain spots.
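The sound-augmentation idea above can be sketched in a few lines. This is a toy illustration, not a real active-noise-cancellation pipeline: it assumes the system already has an estimate of the unwanted real sound (obtaining that estimate is the hard part in practice), and all signal names are hypothetical.

```python
import numpy as np

def augment_audio(mic_signal, unwanted_estimate, synthetic_sound, cancel_gain=1.0):
    """Mix audio for an AR headphone feed: pass the real microphone signal
    through, subtract an estimate of the unwanted real sound, and add a
    synthetic (virtual) sound. All inputs are equal-length sample arrays."""
    out = mic_signal - cancel_gain * unwanted_estimate + synthetic_sound
    # Clip to the valid sample range to avoid overflow on playback.
    return np.clip(out, -1.0, 1.0)

# Toy example: a real tone to keep, an unwanted hum, and a virtual beep.
t = np.linspace(0, 1, 8000)
keep = 0.3 * np.sin(2 * np.pi * 440 * t)
hum = 0.3 * np.sin(2 * np.pi * 50 * t)
beep = 0.2 * np.sin(2 * np.pi * 880 * t)

mix = augment_audio(keep + hum, hum, beep)
```

With a perfect estimate of the hum, the output is exactly the kept tone plus the virtual beep; real systems must cope with imperfect estimates and latency.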
Figure: Optical see-through HMD conceptual diagram.
Figure: Video see-through HMD conceptual diagram.
3. ENVIRONMENT FOR AUGMENTATION
An augmented reality environment includes certain devices: a camera device, a display device, and in some cases a user-interface device to interact with the virtual objects.
3.1 Configuration of Camera and Display Technology
In order to combine the real world with virtual objects in real time, we must configure camera and display hardware. The three most popular display configurations currently in use for augmented reality are monitor-based, video see-through, and optical see-through.
Monitor-based Display
The simplest approach is a monitor-based display, as depicted in Figure 1.4. The video camera continuously captures
individual frames of the real world and feeds each one into the augmentation system. Virtual objects are then merged
into the frame, and this final merged image is what users ultimately see on a standard desktop monitor. The advantage
of this display technology is its simplicity and affordability, since a consumer-level PC and USB or FireWire video
camera is all that is required. Additionally, by processing each frame individually, the augmentation system can use
vision-based approaches to extract pose (position and orientation) information about the user for registration purposes
(by tracking features or patterns, for example). However, this simplicity comes at the expense of immersion. Clearly,
viewing the real world through a small desktop monitor limits the realism and mobility of the augmented world.
Additionally, since each frame from the camera must be processed by the augmentation system, there is a potential
delay from the time the image is captured to when the user actually sees the final augmented image. Finally, the quality
of the imagery is limited by the resolution of the camera.
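For the monitor-based pipeline, the per-frame merge step can be sketched as a simple alpha composite. This is a minimal illustration using NumPy arrays in place of real captured frames; the function name and the soft alpha mask are assumptions for illustration, not the paper's method.

```python
import numpy as np

def composite(frame, overlay, alpha_mask):
    """Merge a rendered virtual object into one captured camera frame.

    frame, overlay: H x W x 3 float arrays in [0, 1].
    alpha_mask:     H x W float array; 1 where the virtual object is opaque,
                    0 where the real frame should show through.
    """
    a = alpha_mask[..., None]          # broadcast over the colour channels
    return a * overlay + (1.0 - a) * frame

# Toy 2x2 image: the virtual object covers only the top-left pixel.
frame = np.zeros((2, 2, 3))            # stand-in for a captured camera frame
overlay = np.ones((2, 2, 3))           # stand-in for the rendered virtual object
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
merged = composite(frame, overlay, mask)
```

In a real system, each captured frame would pass through this step before being shown on the desktop monitor, which is where the per-frame processing delay mentioned above arises.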
Video See-through Display
In order to increase the sense of immersion in virtual reality systems, head-mounted displays (HMD) that fully
encompass the user’s view are commonly employed. There are two popular methods to bring HMDs into the augmented
reality environment. The first is the video see-through system: in this configuration, the user
does not see the real world directly, but instead only sees what the computer system displays on the tiny monitors inside
the HMD. The difference between this and a virtual reality HMD is the addition of video cameras to capture images of
the real world. While this configuration is almost identical to the monitor-based technology in terms of functionality,
the use of a stereo camera pair (two cameras) allows the HMD to provide a different image to each eye, thereby
increasing the realism and immersion that the augmented world can provide. Like the monitor-based setup, the video
see-through display is prone to visual lags due to the capture, processing, augmentation, and rendering of each video
frame. Additionally, a large offset between the cameras and the user’s eyes can further reduce the sense of immersion,
since everything in the captured scenes will be shifted higher or lower than where they should actually be (with respect
to the user’s actual eye level).
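The per-eye difference that the stereo camera pair provides follows the standard pinhole stereo relation: a virtual point's horizontal disparity between the two eye images grows as the point gets closer. The baseline and focal-length values below are illustrative assumptions.

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """Horizontal pixel disparity between the left- and right-eye images for
    a point at the given depth (pinhole relation: d = baseline * focal / depth)."""
    return baseline_m * focal_px / depth_m

# A virtual object 2 m away, 6.5 cm interocular baseline, 800 px focal length:
d_near = disparity_px(0.065, 800, 2.0)    # 26 px between the eye images
d_far = disparity_px(0.065, 800, 20.0)    # 2.6 px: distant objects differ little
```

This is why rendering a separate image per eye adds depth realism: nearby virtual objects shift noticeably between the eyes, exactly as nearby real objects do.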
Optical See-through Display
The other popular HMD configuration for augmented reality is the optical see-through display system, as depicted in
Figure 1.6. In this setup, the user is able to view the real world through a semi-transparent display, while virtual objects
are merged into the scene optically in front of the user’s eyes based on the user’s current position. Thus when users
move their heads, the virtual objects maintain their positions in the world as if they were actually part of the real
environment. Unlike the video see-through displays, these HMDs do not exhibit the limited resolutions and delays
when depicting the real world. However, the quality of the virtual objects will still be limited by the processing speed
and graphical capabilities of the augmentation system. Therefore, creating convincing augmentations becomes
somewhat difficult, since the real world will appear naturally while the virtual objects will appear pixelated. The other major disadvantage of optical see-through displays is their lack of single-frame captures of the real world, since no camera is present in the default hardware setup. Thus, position sensors within the HMD are the only facility through which pose information can be extracted for registration purposes. Some researchers have proposed hybrid solutions that combine position sensors with video cameras in order to improve the pose estimation.
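The hybrid sensor-plus-vision idea can be sketched as a complementary-style blend: trust the fast inertial/position sensor for most of each orientation estimate, and use the slower but drift-free vision measurement to pull the estimate back toward the truth. The weight value and function name are illustrative assumptions, not a specific published filter.

```python
def fuse_yaw(sensor_yaw_deg, vision_yaw_deg, weight=0.98):
    """Blend a fast but drifting sensor yaw with a slow, drift-free
    vision-derived yaw. weight close to 1 favours the sensor."""
    return weight * sensor_yaw_deg + (1.0 - weight) * vision_yaw_deg

# A sensor that has drifted to 10 deg while vision reports the true yaw as
# 0 deg is nudged back a little each update; repeated corrections remove
# the accumulated drift over time.
est = 10.0
for _ in range(100):
    est = fuse_yaw(est, 0.0)
```

After repeated updates the estimate decays toward the vision measurement, which is the essential behaviour hybrid trackers rely on.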
3.2 Haptic User-Interface Device
Similar to virtual reality systems, augmented reality requires some facility through which a user can physically interact
with the virtual objects. Haptic input devices provide this facility, but with the added ability to provide touch sensations
to the user. Consider a virtual coffee mug that has been augmented onto a user’s real desk. With a specially designed
glove, a user could physically grab hold of the coffee mug and hold it in his or her hand. With sensors in the palm and
fingertip area of the glove, the user could feel the weight of the virtual mug as well as sense the texture across the
mug’s surface. Further, the augmented environment could simulate gravity as well, and thus releasing the virtual mug
would send it crashing to the ground. Therefore, if simulated correctly, the user would not be able to distinguish this
virtual mug from a real one.
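The simulated gravity described for the released mug can be sketched with a simple Euler integration; the step size, constants, and function name are illustrative assumptions.

```python
def time_to_fall(height_m, dt=0.01, g=9.81):
    """Euler-integrate the released virtual mug falling under simulated
    gravity; returns the simulated time until it reaches the ground."""
    y, v, t = height_m, 0.0, 0.0
    while y > 0.0:
        v += g * dt      # gravity accelerates the mug each step
        y -= v * dt      # the mug moves down by its current velocity
        t += dt
    return t

# Dropping the mug from 1 m; the analytic answer is sqrt(2h/g) ~ 0.45 s.
t_hit = time_to_fall(1.0)
```

Even this crude integration lands close to the analytic fall time, which is enough for a convincing visual effect; a full haptic simulation would add collision response and contact forces.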
While conceptually very simple, these haptic devices are under intensive research since recreating realistic physical
sensations is a difficult problem. Many different devices have been proposed, from the above mentioned force-feedback
gloves to full-size bodysuits that apply forces to a user’s arms and legs. A detailed survey of the various haptic user
interfaces applicable to augmented reality environments can be found in the literature.
4. AUGMENTED REALITY AND ITS CHALLENGES
The major challenge for augmented reality systems is how to combine the real world and virtual world into a single
augmented environment. To maintain the user’s illusion that the virtual objects are indeed part of the real world
requires a consistent registration of the virtual world with the real world. While there are many aspects to consider
when creating an augmented reality application, one of the most difficult is precisely calculating the user’s viewpoint in
real-time so that the virtual objects are exactly aligned with the real-world objects. For example, to automatically
modify the scene in Figure 2.1a, the computer system must have some knowledge of the location and orientation of the
surrounding environment, from the user’s perspective, before it can properly augment the new virtual bridge over top of
the real scene. Additionally, if the user wishes to see the new bridge from different angles and viewpoints so that the
bridge appears to be a natural part of the environment, the system must be able to continually perform the augmentation
in real-time.
Figure 2.1 – Example of augmented reality for bridge building. (a) The untouched scene showing a view of a proposed bridge location. (b) The scene enhanced with an augmented reality model of the proposed bridge, allowing an engineer or city planner to preview the bridge before physically erecting it.
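The registration step described above reduces to the standard pinhole projection: given the user's estimated orientation R and position t, a virtual anchor point in the world is mapped to the pixel where it must be drawn, and any error in R or t misplaces the virtual object. The intrinsic values below are illustrative assumptions.

```python
import numpy as np

def project(point_world, R, t, f_px, cx, cy):
    """Project a 3-D world point into pixel coordinates with a pinhole
    camera model. R, t: the user's (camera) orientation and position;
    f_px: focal length in pixels; (cx, cy): image centre."""
    p_cam = R @ point_world + t          # world -> camera coordinates
    x = f_px * p_cam[0] / p_cam[2] + cx  # perspective divide by depth
    y = f_px * p_cam[1] / p_cam[2] + cy
    return np.array([x, y])

# Identity pose, a virtual point 4 m straight ahead and 1 m to the right:
uv = project(np.array([1.0, 0.0, 4.0]), np.eye(3), np.zeros(3), 800, 320, 240)
```

Re-running this projection every frame as R and t change is what keeps the virtual bridge visually anchored to its real-world location as the user moves.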
As computers become increasingly smaller and more powerful, and as demand for interesting consumer electronics devices continues to increase, augmented reality has the potential to become the “killer application” that computer vision researchers have been waiting for. In spite of the great potential of mobile AR in many application areas, progress in the field has so far almost exclusively been demonstrated through research prototypes. The time is obviously not quite ripe for commercialization; when asking for the reasons why, one has to take a good look at the dimension of the task.

While increasingly better solutions to the technical challenges of wearable computing are being introduced, a few problem areas remain, such as miniaturization of input/output technology, power sources, and thermal dissipation, especially in small high-performance systems. Ruggedness is also required: with some early wearable systems, the phrase ‘wear and tear’ seemed to rather fittingly indicate the dire consequences of usage. In addition to these standard wearable computing requirements, mobile AR adds many more: reliable and ubiquitous wide-area position tracking; accurate and self-calibrating (head-)orientation tracking; ultra-light, ultra-bright, ultra-transparent display eyewear with a wide field of view; and fast 3D graphics capabilities, to name just the most important. If one were to select the best technologies available today for each of the necessary components, one could build a powerful (though large) mobile augmented reality system (MARS). AR in the outdoors is a particular challenge, since there is a wide range of operating conditions to which the system could be exposed; moreover, in contrast to controlled environments indoors, one has little influence over outdoor conditions.

The list of challenges does not end with the technology on the user’s side. Depending on the tracking technology, AR systems either need to have access to a model of the environment they are supposed to annotate, or require that such environments be prepared (e.g., equipped with visual markers or electronic tags). Vision-based tracking in unprepared environments is currently not a viable general solution, but research in this field is trying to create solutions for future systems. The data to be presented in AR overlays needs to be paired with locations in the environment, and a standard access method needs to be in place for retrieving such data from databases responsible for the area the MARS user is currently passing through. This requires mechanisms such as automatic service detection and the definition of standard exchange formats that both the database servers and the MARS software support. It is clear from the history of protocol standards that, without big demand and money-making opportunities on the horizon, progress on these fronts can be expected to be slow. On the other hand, the World Wide Web, HTML, and HTTP evolved from similar starting conditions, and some researchers see location-aware computing on a global scale as a legitimate successor of the World Wide Web as we know it today.
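The location-keyed retrieval described above can be sketched as a proximity query against a hypothetical annotation store. The store, its labels, and the query radius are all assumptions for illustration; a real MARS would discover and query area-specific database servers rather than a local dictionary.

```python
import math

# Hypothetical annotation store keyed by (latitude, longitude).
ANNOTATIONS = {
    (40.7484, -73.9857): "Empire State Building",
    (40.6892, -74.0445): "Statue of Liberty",
}

def nearby(lat, lon, radius_km=1.0):
    """Return labels of annotations within radius_km of the user,
    using the haversine great-circle distance."""
    hits = []
    for (alat, alon), label in ANNOTATIONS.items():
        dlat = math.radians(alat - lat)
        dlon = math.radians(alon - lon)
        a = (math.sin(dlat / 2) ** 2
             + math.cos(math.radians(lat)) * math.cos(math.radians(alat))
             * math.sin(dlon / 2) ** 2)
        d_km = 2 * 6371 * math.asin(math.sqrt(a))   # Earth radius ~6371 km
        if d_km <= radius_km:
            hits.append(label)
    return hits
```

A standard exchange format would define how such annotation records are served and filtered; the proximity test itself is the simple part.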