Extended abstract presented at the NSF Lake Tahoe Workshop for Collaborative Virtual Reality and Visualization, Oct. 26-28, 2003

Emerging tangible interfaces for facilitating collaborative immersive visualizations

Brygg Ullmer, Andrei Hutanu, Werner Benger, and Hans-Christian Hege

Konrad-Zuse-Zentrum für Informationstechnik Berlin

{ullmer, hutanu, benger, hege}@zib.de

Takustrasse 7, 14195 Berlin, Germany
http://www.zib.de/ullmer/

ABSTRACT

We describe work in progress toward a tangible interface for facilitating collaborative visualization within immersive environments. This is based upon a system of pads, cards, and wheels that physically embody key digital operations, data, and parameters. These “visualization artifacts” provide a simple means for collaboratively engaging with time, space, parameters, and information aggregates, which we believe will generalize over a variety of applications.


INTRODUCTION

Several recent systems have made compelling use of instrumented, physically representational objects that serve as interface elements within immersive workbenches and CAVEs [Schkolne et al. 2001, Keefe et al. 2001]. These “tangibles” [Ullmer and Ishii 2001] have taken immersive interaction to new levels by allowing users to spatially multiplex key operations across specialized tools which remain “near to hand” within a kinesthetic reference frame¹.

Applications involving spatial interaction with intrinsically three-dimensional data lend themselves naturally to immersive virtual reality (VR), allowing users to make direct deictic (pointing) reference toward 3D geometrical constructs. However, even applications that center upon inherently geometrical content usually build implicitly or explicitly upon interactions with abstract digital information that has no intrinsic spatial representation. Example tasks include loading and saving data, adjusting parameters, and establishing links with remote collaboration partners.

In this extended abstract, we introduce work in progress on a tangible interface [Ullmer and Ishii 2001] for collaborative interaction with virtual environments. Our interface is based on a system of physical pads, cards, and wheels that serve as both representations and controls for digital operations, parameters, data sets, computing resources, access credentials, and other computationally-mediated content (Figures 1, 2). We believe our approach may offer benefits including parallel two-handed interaction with spatial and abstract content; improved collaborative use; enhanced manipulation of digital parameters; improved migration between desktop and immersive environments; and easier user authentication to grid computing resources.

While considerable effort has been invested into 3D graphical interaction with abstract digital content, the area has proven challenging, and few widely accepted methods have emerged. In part, this likely relates to the visual design challenges involved in realizing legible, compelling 3D graphical representations of abstract content. However, we believe that the nature of prior interaction devices used for immersive VR is also a major factor. To consider a related example, in a chapter titled “The Invention of the Mouse,” speaking with respect to the first application of the tablet-based stylus (during the mid-1960s, in the GRAIL system), Bardini [2000] writes:

“it was a design dead end at that point… to consider that the same input device could be the device of choice to enter both graphics (line drawing) and text (character recognition).”

Projected into the space of immersive virtual reality, this raises the question of whether 3D pointing devices such as wands, gloves, and more specialized descendants are indeed well-suited to serve as the primary interaction device for both inherently geometrical and abstract content; or whether this pervasive combination could be a “design dead end” that is holding back progress in immersive environments.

¹ Kinesthetic reference frames are “the workspace and origin within which the hands operate” [Balakrishnan and Hinckley 1999].

Figure 1: Prospective collaborative use of interaction pads with a stereo display wall. Users hold a 3D pointer in one hand, while manipulating visualization artifacts with their second hand.

We begin by discussing our basic interaction approach. We then introduce the particular interaction devices we are developing and examples of their use, as well as considering related work. We also discuss several underlying technical factors, including the user interface implications of grid computing and of access to shared network resources, and conclude with a brief preview of future work.

INTERACTION APPROACH

Our work aims to create a system of “visualization artifacts” that facilitate the physical manipulation of abstract digital information. These are initially targeted for use within immersive virtual environments, but we believe their functionality generalizes more broadly. These visualization artifacts support a series of simple, task-specific manipulations of online information (e.g., remote datasets), in the absence of general-purpose pointing devices and alphabetic keyboards.

Our interface is based upon three kinds of physical/digital objects: pads, cards, and wheels (Figures 2-9). Interaction pads are small, modular objects with specialized work surfaces that support specific digital operations. These pads are used together with data cards and parameter wheels: physical tokens that serve as representations and controls for online data and parameters.

Figure 2: Close-up of visualization artifacts. A floor-standing “interaction stand” supports and organizes a set of pads, cards, and wheels. Some of these objects are passively stored (on the left and right of the stand), while others are actively used (in the stand’s center). Here, one of each of the four core interaction pads is placed upon the stand’s central workspace. A data card is placed on the binding pad, while two parameter wheels are placed on the parameter pad (their color is from LED backlighting).

These visualization artifacts provide a small subset of the functionality available on traditional desktop graphical interfaces. We assume that many interactions with the information underlying the immersive display will continue to be conducted in traditional desktop-based 2D GUIs. Our system will provide a GUI-based means for easily binding selected digital information onto physical tokens, building on the “monitor slot” approach of [Ullmer et al. 1998].

Users may then simply access and manipulate this content on interaction pads, while their visual focus remains directed toward the main object of interest. In immersive virtual environments, this will often be some form of 3D graphical visualization. In collaborative usage contexts, the object of interest may also be collocated or remote people.

We expect that within immersive environments, users may continue to use both generic and specialized 3D pointing devices as the primary tools for spatially manipulating 3D graphical content. Toward this, our interface allows users to hold a 3D tracker in one hand, while using the second hand to engage with the abstract digital information and operations represented by our visualization artifacts.

We imagine that truly simultaneous manipulation of both the tracker and visualization artifacts may be infrequent. Nonetheless, we believe this two-handed approach will help minimize the “set-up” and “tear-down” time and distraction that previous approaches impose when moving between spatial and abstract operations: acquiring the tracking device, switching software modes, retargeting to a new spatial area of interest, and resuming interaction.

We believe that perhaps the foremost value of our interface will be enabling users to easily perform tasks like loading and saving data, establishing video links, manipulating simulation parameters, and controlling presentations while their attention is focused upon the object of interest.

Implicit is the belief that 3D graphical visualizations and collaborators are generally the main objects of engagement, and not secondary (albeit important) interface tasks. We believe our interface’s simple controls and physical legibility may be less cognitively demanding than graphical interfaces (especially in the context of immersive environments), and could support VR use by a broader range of users. We hope that our visualization artifacts, used together in two-handed interaction with 3D pointing devices, will provide powerful tools for facilitating this balance between functionality and attentional cost.

INTERACTION DEVICES

Our tangible interface is based upon three kinds of physical objects: pads, cards, and wheels. Interaction pads are modular elements used for operations like loading and saving data, establishing video links, manipulating simulation parameters, and controlling presentations. In their initial incarnation, they are embodied as a series of rectangular modules, each roughly the size of a VHS cassette. They include embedded RFID readers for sensing tagged physical tokens, and communicate via wired and wireless Ethernet.

We are currently developing four core interaction pads:

- the binding pad: for establishing and accessing data card bindings;

- the placement pad: for spatially arranging data card contents on graphical displays;

- the parameter pad: for binding and manipulating digital parameters using parameter wheels; and

- the control pad: for navigating through media collections, streams, and temporal data (e.g., images, video, simulation time steps, etc.)

Taken together, these pads will provide simple means for physically engaging with time, space, parameters, and information aggregates, which we believe will generalize over a variety of applications.

These interaction pads are used together with data cards and parameter wheels. Data cards may be used to represent content such as data sets, simulation parameters, slide presentations, and live portals (e.g., video conference sessions).

Parameter wheels will be bindable to different parameters, and used as kinds of reconfigurable “dial box” knobs to adjust and control simulation and visualization parameters.

Figures 1 and 2 illustrate one prospective usage example.

In addition to the pads, cards, and users, these images depict several other details. First, they illustrate an “ImmersaDesk”-style large-format stereo display where immersive visualizations are displayed. Second, the users wear stereo glasses and use a 3D tracker in one hand, while interacting with visualization artifacts with their second hand.

As interactive use of wall displays is frequently conducted while sitting or standing immediately adjacent to the display, we have designed “furniture” in the form of an “interaction stand” for physically supporting and organizing the visualization artifacts. The stand also helps provide power and Ethernet connectivity to the interaction pads.

We next consider the structure and function of our cards, wheels, and pads in more detail.

Data cards

“Data cards” – RFID-tagged cards with the size and feel of a credit card (Figure 3) – are the primary medium for representing digital information within our system. These cards are each marked with six numbered, (optionally) labeled rows. Each of these rows can be associated with one or more elements of online (URL/URN-referenced) information. One or more of these rows can be selected with the binding pad as the active binding of the card “container”.

This approach works to balance the benefits of physical embodiment and the legibility of visually labeled “contents,” while combating the flood of objects associated with traditional “one object, one binding” TUI approaches.

Figure 3: Example data cards

Data cards are also color-coded and labeled with text (and perhaps visual icons) on their upper corner surfaces. This is intended to allow the cards to be rapidly sorted, and identified while held in a “hand” in a fashion similar to traditional playing cards [Parlett 1999].

The RFID tags within these cards each hold a unique serial number, as well as several hundred bytes of non-volatile RAM. The non-volatile RAM holds the network address of a SQL database, which in turn stores the actual URLs and authentication information associated with data cards, as well as additional cryptographic information.

Parameter wheels

Parameter wheels are an approach for expressing discrete and continuous parameters using small cylindrical “wheels” [Ullmer et al. 2003] (Figure 4). These wheels are used in fashions resembling the dials of dial boxes. In addition to the strengths of dial boxes, parameter wheels can be dynamically bound to different digital parameters. The physical/digital constraints within which parameter wheels are used may also be bound to different digital interpretations, significantly increasing the expressive power of this approach. E.g., in Figure 4, the two left wheel constraints are bound to the “y” and “x” axes of a scatterplot visualization. By placing the wheel onto the right “x” constraint, the user both queries the database for the wheel’s parameter, and plots the results of this query along the scatterplot’s “x” axis. Wheel rotation then allows manipulation of the wheels’ associated parameter values.

Figure 4: Example parameter wheels (from [Ullmer et al. 2003])

Parameter wheels can be bound to desired parameters using special data cards. Manipulation of these wheels on the “parameter pad” can then be used to manipulate simulation parameters (e.g., time) as well as visualization parameters (e.g., transparency).

Interaction pads

Interaction pads represent specific digital operations. Their individual work surfaces contain RFID sensing regions, buttons, and displays for sensing and mediating interactions with data cards and parameter wheels.

Each interaction pad has several shared features. Each has a work surface of roughly 16.5 x 10.5 cm, and a depth of 4 cm (this will be reduced in future iterations). Each also has four indicator LEDs, two of which have associated buttons (Figure 5). These include:

- The target LED indicates that the interaction pad is “bound” to a specific visual display surface (e.g., an immersive wall or computer screen). By pressing the adjoining button, the target binding may be cleared.

- The authorization LED indicates that the interaction pad has been securely authenticated with one or more users’ authentication credentials. Especially within collaboration contexts, this security mechanism plays an important role in (e.g.) allowing users to access or save potentially sensitive remote datasets. Pad authorization may be revoked with the adjoining button.

- The sensing LED indicates that one or more of the embedded RFID sensors is successfully detecting an RFID-tagged data card or parameter wheel. This helps users confirm that the device is functioning properly.

- The network LED indicates that the pad is successfully maintaining a network link with its remote proxying computer, which carries out the actual visualization operations mediated by the pad.


Figure 5: Interaction pad status indicators and controls

We next briefly describe the functions of our current core interaction pads.

1. Binding pad

Binding, the process by which digital information is associated with physical objects, is a central activity within tangible interfaces [Cohen et al. 1999]. The binding pad is the principal visualization artifact used to express this assignment. It can be used in several different ways:

1) selecting elements from the list of bindings contained within data cards;

2) copying information between physical tokens; and

3) making bindings to other interaction pads.

The binding pad is composed of special constraints (or “cells”) for a “source” and “target” data card, and a series of selection buttons that can be used to select particular data card rows (Figure 6). Pad interactions begin by placing a data card onto the source or target cells. If a particular row of the card has previously been selected, it will be illuminated by side-facing LEDs. A selection button is located next to each row of the data card. For the source cell, pushing one (or more) button(s) selects the corresponding row, which is again indicated by edge-illumination. This selection is maintained until explicitly changed, even if the card is moved to a different pad.

Figure 6: Binding pad (CAD layout, faceplate view)

The target cell’s behavior depends upon which data cards are present on the binding pad. In the simplest case, a “normal” data card is present in both source and target cells. Here, pressing a selection button on the target pad will copy any selected links from the source card into the specified target card row.
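The following is a minimal sketch of this source-to-target copy semantics, under the assumption that a card's rows can be modeled as a mapping from row index to a list of URLs; the representation is illustrative, not our actual implementation.

```python
def copy_selected_links(source_rows, selected, target_rows, target_row):
    """Copy all links from the selected source rows into the specified
    target row (the effect of the target-side selection button)."""
    for idx in selected:
        target_rows.setdefault(target_row, []).extend(source_rows.get(idx, []))

# Example: rows 1 and 3 of the source card are selected.
source = {1: ["urn:dataset:bh-merger"], 3: ["https://example.org/slides"]}
target = {}
copy_selected_links(source, selected=[1, 3], target_rows=target, target_row=2)
print(target)   # {2: ['urn:dataset:bh-merger', 'https://example.org/slides']}
```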

In addition, we are designing special data cards that have different behaviors. For example, we are developing data cards that refer to different output devices (e.g., projectors and screens). Here, transferring data “into” such a row has the effect of displaying this data on the selected device.

2. Placement pad

The placement pad is used to place one or more information elements onto different regions of a graphical display (e.g., a monitor or projection screen). The information to be displayed is usually expressed by data cards; examples include 3D datasets, slides, and live video streams. Placement is expressed by the card’s location within the pad’s five cells (four corner positions and one central position). Multiple data cards may be present simultaneously on the pad.

Figure 7: Placement pad (CAD layout, faceplate view)

The placement pad can be associated with a number of different display devices. These destinations can be specified with the binding pad, or directly in conjunction with tagged output devices. Multiple placement pads can also be bound to the same display device, which we believe will hold special value in collaboration contexts.
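As an illustration of the cell-to-region mapping this implies, the sketch below assumes normalized display coordinates and a stand-in display function; both are hypothetical.

```python
# The placement pad's five cells, mapped to normalized display regions
# (x, y, width, height); the exact geometry is an assumption.
PAD_CELLS = {
    "top_left":     (0.0, 0.0, 0.5, 0.5),
    "top_right":    (0.5, 0.0, 0.5, 0.5),
    "bottom_left":  (0.0, 0.5, 0.5, 0.5),
    "bottom_right": (0.5, 0.5, 0.5, 0.5),
    "center":       (0.25, 0.25, 0.5, 0.5),
}

def on_card_placed(content_url: str, cell: str, show) -> None:
    """When a data card lands on a pad cell, show its bound content in
    the corresponding region of the bound display device."""
    show(content_url, PAD_CELLS[cell])

# Example with a stand-in display function:
on_card_placed("urn:video:remote-collaborator", "top_right",
               show=lambda url, region: print(f"showing {url} at {region}"))
```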

3. Parameter pad

The parameter pad builds on the “parameter wheel” concept discussed earlier in the paper. To modify a parameter, the associated wheel is placed onto a cell within the parameter pad. When the wheel is turned, the parameter is modified accordingly, with its new value and consequence displayed on the shared screen. An LCD display may also be used within the pad. We are also considering force-feedback, which might have special value for high-latency operations (e.g., steering computationally-intensive simulations).


Figure 8: Parameter pad (CAD layout, faceplate view)
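A hedged sketch of how wheel rotation events might map onto a bound parameter follows; the linear scaling and clamping are our assumptions rather than the pad’s specified behavior.

```python
class ParameterWheel:
    """A wheel dynamically bound to one digital parameter."""
    def __init__(self, name: str, lo: float, hi: float, units_per_degree: float):
        self.name = name                        # e.g. "time" or "transparency"
        self.lo, self.hi = lo, hi
        self.units_per_degree = units_per_degree
        self.value = lo

    def on_rotate(self, degrees: float) -> float:
        """Handle one rotation event sensed by the parameter pad and
        return the clamped new value to apply and display."""
        self.value = min(self.hi,
                         max(self.lo, self.value + degrees * self.units_per_degree))
        return self.value

# Example: transparency bound so one full turn spans the 0..1 range.
alpha = ParameterWheel("transparency", 0.0, 1.0, 1.0 / 360.0)
print(alpha.on_rotate(90.0))   # 0.25
```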


4. Control pad

Time-based navigation is an important component of many interactions. For example, in a slide presentation, it is very valuable to have a simple, rapid means to jump to the next or previous slide, or to jump with random access to different parts of a presentation. Video playback benefits similarly. These kinds of controls are also useful for manipulating simulation parameters such as “time.”

The control pad is intended to simplify these kinds of interactions. It incorporates media player-style forward, back, fast forward/back, and play/pause buttons; and a slider for jumping to absolute positions. We expect this pad will be most frequently used for presentations, media browsing, and similar operations. We are also considering allowing the pad’s controls to operate upon arbitrary parameters.

Figure 9: Control pad (CAD layout, faceplate view)
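The sketch below illustrates how such control events might be dispatched to a navigable medium; the event names and the minimal slide-deck stand-in are hypothetical.

```python
class SlideDeck:
    """Minimal stand-in for a navigable medium: slides with a cursor."""
    def __init__(self, n: int):
        self.n, self.i, self.playing = n, 0, False
    def step(self, delta: int):
        self.i = max(0, min(self.n - 1, self.i + delta))
    def toggle_play(self):
        self.playing = not self.playing
    def seek(self, pos: float):
        self.i = round(pos * (self.n - 1))      # absolute 0..1 position

def handle_control_event(media, event: str, slider_pos: float = 0.0) -> None:
    """Dispatch one control-pad event to the currently targeted medium."""
    if event == "back":          media.step(-1)
    elif event == "forward":     media.step(+1)
    elif event == "play_pause":  media.toggle_play()
    elif event == "slider":      media.seek(slider_pos)

deck = SlideDeck(15)
handle_control_event(deck, "slider", 0.5)   # jump to the middle slide
print(deck.i)                               # 7
```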

EXAMPLE INTERACTIONS


The visualization artifacts are still under development, and a detailed discussion of possible interactions goes beyond the scope of this paper. However, it is useful to mention several examples of intended usage contexts. First, we have completed an early working prototype of the binding pad, and successfully used this to interactively present a number of stereo visualizations during an open house. The Amira visualization software was used to load ~15 visualizations of colliding black holes, which were mapped across four data cards. Some visualizations were stereo movies, while others allowed interaction with a 3D tracker or mouse. In this context, the binding pad was valuable in allowing rapid, on-demand access to and activation of a number of visualizations without consuming display real estate, and with minimal attentional requirements for the presenter.

We expect the use of visualization artifacts to deliver demonstrations and presentations will be a frequent application, especially once the control and parameter pads become available and allow increased interactivity with contents. A short summary of these and other promising applications includes:

- Presentations and demonstrations (with binding, control, parameter, and placement pads)

- Displaying, browsing, and manipulating media (with binding, control, and placement pads)

- Parameter studies (with parameter and binding pads)

- Loading data, saving visualization snapshots (with the binding pad)

- Video conferencing and other remote collaboration (with binding and placement pads)

RELATED WORK

Our visualization artifacts integrate and build upon a number of interaction techniques introduced in previous papers, both by ourselves and others. We feel this is allowing us to leverage, distill, and combine some of the strongest results of prior work into a single integrated system. Given limited space, we restrict related work discussion to these systems.

First, our use of symbolic physical objects as dynamically rebindable containers for aggregates of abstract digital information builds upon the mediaBlocks system [Ullmer et al. 1998]. We have used cards rather than blocks, as well as multiple bindings per token, in an effort to improve scalability and usage pragmatics. The parameter pad and wheels draw from [Ullmer et al. 2003]. Also, the pad concept builds upon the DataTiles approach of Rekimoto, Ullmer, and Oba [2001].

The Paper Palette [Nelson et al. 1999] also made use of card objects as data representations within a tangible interface. Our work differs both in the way data cards are used to represent digital information aggregates, and the way our cards are composed with pads (which represent operations).

Finally, the ToonTown interface [Singer et al. 1999] demonstrated a compelling example of physical objects serving as representations of remote people in an audio conferencing application. Similarly, we are using data cards to represent remote participants for (e.g.) video conferencing.

OTHER COLLABORATION ISSUES

Throughout the paper, we have mentioned some of the ways that we are working to support collaborative use. In the following paragraphs, we will consider a few additional collaboration issues that impact our design.

First, from a technical standpoint, collaborative use of our interface increases the importance of security mechanisms for linking our interaction devices with their associated data and operations. While these technical issues have received little attention in previous work with tangible interfaces (and perhaps virtual reality systems as well), we believe they are broadly important for any interaction environment that supports multiple users and/or networked content. This is especially true for collaborative interfaces that cause information to be saved or modified.

For example, capabilities for saving data and steering remote supercomputing simulations are important for our users. Especially when multiple users collaboratively manipulate data that is hosted at different locales with different access rights, infrastructure and interfaces for managing user authentication and secure operations are essential.

We are linking our interface with grid computing infrastructure under development by the EC “GridLab” project [Allen et al. 2003]. For example, we are developing special data cards that represent user credentials. These cards represent up to six different credentials. Internally, they store both a unique ID and (behind the tag’s hardware cryptographic protection) a decryption key that can be used to reconstruct a valid credential. When a credential card is placed onto an interaction pad and the networked authorization dialogue is successfully completed, the pad’s “auth” LED will light.

Future read and write accesses initiated by (or more precisely, on behalf of) the pad will use these access rights. In this way, different interaction pads within the same immersive environment potentially can have different credentials, allowing users (perhaps from competing organizations) to interact collaboratively with secure remote content.
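The following loose sketch illustrates the credential-reconstruction step; Fernet symmetric encryption stands in for the tag’s actual hardware cryptography, and the round-trip example is purely illustrative.

```python
# Loose sketch of credential-card authorization; Fernet is a stand-in
# for the card's actual cryptography, and field roles are assumptions.
from cryptography.fernet import Fernet

def reconstruct_credential(card_key: bytes, encrypted_credential: bytes) -> bytes:
    """Combine the decryption key stored on the credential card with an
    encrypted credential fetched over the network, yielding a valid
    credential for the pad's subsequent accesses."""
    return Fernet(card_key).decrypt(encrypted_credential)

# Illustrative round trip:
key = Fernet.generate_key()                       # would live on the card
stored = Fernet(key).encrypt(b"grid-proxy-cert")  # would live in the database
assert reconstruct_credential(key, stored) == b"grid-proxy-cert"
```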

At present, the processors embedded within interaction pads have limited power, and do not support the “Grid Application Toolkit” (GAT) necessary to carry out grid operations [Allen et al. 2003]. For this reason, grid transactions will not be executed on the interaction pads themselves, but rather on remote proxying computers. The interaction pads’ embedded processors – currently, Rabbit RCM3010 core modules – are able to contact a remote server, and exchange their authentication information using AES encryption over the Rabbit’s Ethernet port. The remote server will then select a grid-accessible computer as a proxy for the interaction pad. Henceforward, the interaction pad will send “interaction events” (e.g., RFID entrance/exit, button presses, and rotation events) to its remote proxy; and the remote proxy will translate these events into grid-mediated visualizations.
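As a minimal sketch of this event stream, the following assumes a line-delimited JSON framing and illustrative event names; the actual Rabbit firmware exchanges AES-encrypted messages in its own format.

```python
import json
import socket

def send_event(sock: socket.socket, pad_id: str, event: str, **fields) -> None:
    """Forward one sensed interaction event (RFID entrance/exit, button
    press, or wheel rotation) to the pad's remote proxy, which translates
    it into grid-mediated visualization operations."""
    message = {"pad": pad_id, "event": event, **fields}
    sock.sendall((json.dumps(message) + "\n").encode())

# e.g.: send_event(proxy_sock, "parameter-pad-1", "rotation", wheel=2, degrees=15)
```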

These paragraphs give some flavor of the technical aspects of our interface that relate to collaboration. In addition, collaboration raises a number of other interaction issues. As one example, ensuring the visibility of users’ physical actions both to the controlling user and any observing users is one such concern. Kinesthetic manipulation of physical controls should allow manipulation of the interface while the eyes are focused on other (often graphical) objects of interest. However, it is important to provide feedback to indicate the actions that are being sensed and interpreted.

Toward this, we are implementing a system of animated visual indicators. These are 3D graphical representations of parameters, menu selections, etc. which animate or fade onto the target display when the corresponding physical controls are modified, and animate away when input ceases.
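A small sketch of this appear-then-fade policy follows; the timing constant and method names are assumptions.

```python
import time

class AnimatedIndicator:
    """On-screen representation of one physical control's state."""
    FADE_AFTER = 1.5   # seconds of inactivity before fading away (assumed)

    def __init__(self):
        self.last_input = None

    def on_input(self, value) -> None:
        """Called when the corresponding physical control is modified;
        the indicator (re)appears showing the new value."""
        self.last_input = time.monotonic()

    def visible(self) -> bool:
        """True while the indicator should remain on the target display."""
        return (self.last_input is not None
                and time.monotonic() - self.last_input < self.FADE_AFTER)
```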

CONCLUSION AND FUTURE WORK

We have described a tangible interface for facilitating collaborative visualization within immersive environments. This is based upon a system of pads, cards, and wheels that embody key digital operations, data, and parameters. We are currently midway through the implementation of this interface. As such, our first priority is to complete the functionality we have described, and to deploy the pads for test usage. We have received support from the Max Planck Society for deploying a series of visualization artifacts for daily use by astrophysicists; the interface we have described has been developed in collaboration with these scientists.

We are also considering a number of extensions to the functionality we have described, including special kinds of data cards; specialized (domain-specific) interaction pads; extensions to the parameter pad; and the use of rewritable data card surfaces, allowing automatic updating of labels when cards are bound to new contents.

ACKNOWLEDGMENTS

We thank the European Community (grant IST-2001-32133) and the GridLab project for funding of this work. The hardware for our interface’s first trial deployment at the Max-Planck Institute for Gravitational Physics (the Albert Einstein Institute) is funded by a BAR grant from the Max Planck Society. We thank Ed Seidel, Christa Hausmann-Jamin, Gabrielle Allen, Michael Koppitz, Frank Herrmann, Thomas Radke, and others at AEI for their enthusiasm, collaboration and support. Our work builds in part on earlier research at the MIT Media Laboratory under the leadership of Prof. Hiroshi Ishii, with support from the Things That Think consortium, IBM, Steelcase, Intel, and other sponsors. Fabrizio Iacopetti provided key PIC processor firmware used within the parameter pad. Indeed/TGS provided the Amira licenses used by our collaboration partners.

REFERENCES

1. Allen, G., Davis, K., et al. (2003). Enabling Applications on the Grid: A GridLab Overview. In International Journal of High Performance Computing Applications: Special Issue on Grid Computing, August 2003.

2. Balakrishnan, R., and Hinckley, K. (1999). The Role of Kinesthetic Reference Frames in Two-Handed Input Performance. In Proceedings of UIST’99, pp. 171-178.

3. Bardini, T. (2000). Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal Computing. Stanford: Stanford University Press.

4. Keefe, D., Acevedo, D., et al. (2001). CavePainting: A Fully Immersive 3D Artistic Medium and Interactive Experience. In Proceedings of I3D’01.

5. Nelson, L., Ichimura, S., Pedersen, E., and Adams, L. (1999). Palette: a paper interface for giving presentations. In Proceedings of CHI’99, pp. 354-361.

6. Parlett, D. (1999). Oxford History of Board Games. Oxford: Oxford University Press.

7. Rekimoto, J., Ullmer, B., and Oba, H. (2001). DataTiles: A Modular Platform for Mixed Physical and Graphical Interactions. In Proceedings of CHI’01, pp. 269-276.

8. Schkolne, S., Pruett, M., and Schroeder, P. (2001). Surface Drawing: Creating Organic 3D Shapes with the Hand and Tangible Tools. In Proceedings of CHI’01.

9. Singer, A., Hindus, D., Stifelman, L., and White, S. (1999). Tangible Progress: Less is More in Somewire Audio Spaces. In Proceedings of CHI’99.

10. Ullmer, B., Ishii, H., and Jacob, R. (2003). Tangible Query Interfaces: Physically Constrained Tokens for Manipulating Database Queries. In Proceedings of INTERACT’03.

11. Ullmer, B., and Ishii, H. (2001). Emerging Frameworks for Tangible User Interfaces. In HCI in the New Millennium, John M. Carroll, ed., pp. 579-601.

12. Ullmer, B., Ishii, H., and Glas, D. (1998). mediaBlocks: Physical Containers, Transports, and Controls for Online Media. In Computer Graphics Proceedings (SIGGRAPH’98), pp. 379-386.
